Developing around PlantUML

February 1st, 2015

In Integration Testing from the Trenches, I made heavy use of UML diagrams. Over the years, I've tried many UML editors:

  • The one that came with NetBeans – past tense, they removed it
  • Microsoft Visio
  • Modelio – this might be one of the best
  • others that I don’t remember

I came to the following conclusion: most of the time (read: all the time), I just need to document an API or a design I have in mind, and there's no need to comply with the latest UML specs. Loose is good enough.

True UML editors don't accept loose – it's UML, after all. UML drawing tools are just that: drawing tools with a thin layer on top. All come with the same problems:

  • They consume quite a lot of resources
  • They have their own dedicated file format from which you can generate the image

I’m interested in the image, but want to keep the file there just in case I need an update.

PlantUML

Then I came upon PlantUML and rejoiced. PlantUML defines a simple text syntax from which it generates UML diagram images (see the short example after the list below). So far, I know of three ways to generate PlantUML diagrams:

  • The online form
  • The Confluence plugin
  • Installing the stuff on one’s machine
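
For reference, a PlantUML description is as simple as the following (a minimal, made-up example):

@startuml
class Presenter
class View
Presenter --> View : updates
@enduml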

I’ve used the former 2, with a tendency toward the first, as Confluence requires you to save to generate the image. This is good for the initial definition. However, it’s more than a little boring when one has to go through all ~60 diagrams to update them with a higher resolution.

Thus, I decided to develop a batch around PlantUML that will perform the following steps:

  • Read PlantUML description files from a folder
  • Generate images from them through the online form
  • Save the generated images in a folder

The batch run can be configured to apply settings common to all description files. This batch should not only be useful, it should also teach me stuff and be fun to code (note that sometimes, those goals are at odds).

Overall architecture

I’m a Spring fanboy but I wanted the foundations to be completely free on any dependencies.

  1. jplantuml-api: defines the very basic API. It contains only an exception and the root interface, which extends Java 8's Function (see the sketch after this list). This is not just hype: it lets user classes compose functions together, which offers the highest composition potential I've seen so far in Java.
  2. jplantuml-online: uses the JSoup library to connect to the online server. Also implements reading from a text file and writing the result to an image file.
  3. jplantuml-batch: adapts everything into the Spring Batch model.
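
For the curious, the root interface might look something like this (an assumed shape, not the project's actual source):

import java.util.function.Function;

public interface JPlantUml extends Function<String, byte[]> {
}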

The power of Function

In Java 8, a Function is an interface that transforms an input of type I into an output of type O via its apply() method. Moreover, thanks to Java 8's default methods, it also provides the compose() and andThen() method implementations.

This enables chaining Function calls into a processing pipeline. One can define a base JPlantUml implementation that maps a String to a byte array for a simple call, and a more complex one that maps a File (the description) to… a File (the resulting image). The latter just composes three functions, as sketched after the list below:

  1. A file reader to read the description file content, from File to String
  2. The PlantUML call processor from String to byte[]
  3. A file writer to write the image file, from byte[] to File
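
Here's a minimal sketch of that composition. The helper lambdas and the callPlantUml() stub are made up for illustration; they are not the project's actual classes:

import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.util.function.Function;

public class CompositionSketch {

    // Stub standing in for the real PlantUML call (String -> byte[])
    static byte[] callPlantUml(String description) {
        return description.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // 1. File -> String: read the description file
        Function<File, String> read = file -> {
            try {
                return new String(Files.readAllBytes(file.toPath()), StandardCharsets.UTF_8);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        };
        // 2. String -> byte[]: the PlantUML call
        Function<String, byte[]> generate = CompositionSketch::callPlantUml;
        // 3. byte[] -> File: write the image file
        Function<byte[], File> write = bytes -> {
            try {
                File out = new File("/tmp/out/diagram.png");
                Files.write(out.toPath(), bytes);
                return out;
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        };
        // Compose the three steps into a single File -> File function
        Function<File, File> descriptionToImage = read.andThen(generate).andThen(write);
        descriptionToImage.apply(new File("/tmp/in/diagram.puml"));
    }
}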

And presto, we've got our function composition. This fine-grained nature lets us unit test the reader and writer functions without requiring any complex mocking.

Outcome

The result of this code is this GitHub project. It lets you use the following command line:

java -jar ch.frankel.jplantuml.batch.Application globalParams="skinparam dpi 150; hide empty members"

This will read all PlantUML files from /tmp/in, apply the above parameters and generate the resulting image files in /tmp/out. The in/out folders can be changed by overriding the standard Spring Batch properties.


Entropy of code

January 25th, 2015

So you were tasked to prepare your next software project. The overall architecture is designed, and you've already put some code-related parts in place, such as the build file, the packages, perhaps a sample use-case. Then, after a few months, the project is finished, but the final result has largely diverged from what you imagined: inconsistent package naming and organization, completely different low-level implementations (e.g. XML dependency injection, auto-wiring and Java configuration side by side), etc.

This is not only unsatisfying from a personal point of view, it also has a very negative impact on the maintenance of the application. How could that happen? Here are some reasons I've witnessed first-hand.

New requirements

Your upfront architecture has been designed with a set of requirements in mind. Then, during the course of the project, new requirements creep in. Reasons for this could be bad business analysis, forgotten requirements or plain brand-new requirements from the business. Those new requirements turn your architecture upside down, and of course there's no time for a global refactoring.

Unfinished refactoring

There comes a good idea: a refactoring that makes sense because it would improve the design, allow removing code, whatever… Everything is nice and good until the planning changes, something takes priority and the refactoring has to be stopped. Now some parts of the application are refactored while others still use the old design.

New team member(s)

Everything is going fine but development is lagging a little behind, so the higher-ups decide to bring in one or more team members (based on the widespread but wrong assumption that if a woman gives birth to a child in 9 months, 9 women will take only one month to do the same). Anyhow, there is no time for the current team to brief the newcomer about design details and existing practices, and he has to develop a user story right now. So he does it the way he knows how, since he has no time to learn how things are done on the project.

Cowboy mentality

Whatever the team, chances are there's always a team member who's not happy with the current design. Possible reasons include:

  • the current design being really less than optimal
  • the cowboy wants to prove himself to the others
  • he just wants to keep control over everything

In all cases, parts of the application will definitely bear his mark. In the best of situations, it can lead to a global refactoring… which has a very high chance of never being finished – as described above.

Mismatching technology

Let’s face it, there are plenty of technologies, languages and frameworks around here to help us create applications. When the right combination is made, they can definitely decrease development and maintenance costs; when not… This point especially highlights the 80/20 rule when the technology makes the development of 80% of the application a breeze, while requiring to focus on the remaining 20%.

Changing situations such as New requirements above are particularly bad, since a technology well adapted to a specific set of requirements may become a poor fit when new ones creep in.

Also, along with the Cowboy mentality above, you can end up with the said cowboy's Golden Hammer in your application, despite it being completely unsuited to the requirements. Another twist is for the cowboy to try the latest hype framework he never could use before, e.g. forcing Node.js on your Java development team.

The myth of self-emerging design

I guess the most important reason for incoherent design in applications is also the most pervasive one. Some time ago, a dedicated architect (or a team of them, for big applications) designed the application upfront. He probably was senior and, as such, didn't do any coding anymore, so he was very far from real life, and the development team a) complained about the design and b) did as they wanted anyway. This is the well-known Ivory Tower architect effect. That was the previous situation at one end of the spectrum, but the current situation at the opposite end is no better.

With books like The Cathedral and the Bazaar, people think upfront design is wrong and that the right design will somehow reveal itself magically. My previous experiences taught me it unfortunately doesn't work like that: teams just don't naturally come up with one design. Such teams probably exist, but it takes a rare combination of technological skills, soft skills – among them communication – and luck to assemble them. Most real-life projects will end up like Frankenstein's monster without some kind of central direction.

Conclusion

In order for maintenance costs to be as close to linear as possible, design must be consistent throughout the application. There are many reasons for an application to become heterogeneous: technological, organizational, human, you name it.

Among them, assuming that design is self-emerging is a sure way to end up with ugly hybrid beasts. To cope with that, balancing the scales back toward the center – by making one (or more) development team member responsible for the design and empowering them to enforce decisions – is a way to improve consistency. Worst-case scenario, the design won't be optimal, but at least it will be consistent.

Save yourself sweat, toil and tears: the latest meme is not always good for you, your projects and your organization. In this case, having some sort of hierarchy in development teams is not a bad thing per se.


Improving the Vaadin 4 Spring project with a simpler MVP

January 18th, 2015

I’ve been using the Vaadin 4 Spring library on my current project, and this has been a very pleasant experience. However, in the middle of the project, a colleague of mine decided to “improve the testability”. The intention was laudable, though the project already tried to implement the MVP pattern (please check this article for more detailed information). Instead of correcting the mistakes here and there, he refactored the whole codebase using the provided MVP module… IMHO, this has been a huge mistake. In this article, I’ll try to highlights the stuff that bugs me in the existing implementation, and an alternative solution to it.

The existing MVP implementation consists of a single class. Here it is, abridged for readability purposes:

public abstract class Presenter<V extends View> {

    @Autowired
    private SpringViewProvider viewProvider;

    @Autowired
    private EventBus eventBus;

    @PostConstruct
    protected void init() {
        eventBus.subscribe(this);
    }

    public V getView() {
        V result = null;
        Class<?> clazz = getClass();
        if (clazz.isAnnotationPresent(VaadinPresenter.class)) {
            VaadinPresenter vp = clazz.getAnnotation(VaadinPresenter.class);
            result = (V) viewProvider.getView(vp.viewName());
        }
        return result;
    }
    // Other plumbing code
}

This class is quite opinionated and suffers from the following drawbacks:

  1. It relies on field auto-wiring, which makes it extremely hard to unit test Presenter classes. As proof, the provided test class is not a unit test but an integration test.
  2. It relies solely on component scanning, which prevents explicit dependency injection.
  3. It enforces the implementation of the View interface, whether required or not. When not using the Navigator, it makes the implementation of an empty enterView() method mandatory.
  4. It takes responsibility for creating the View from the view provider.
  5. It couples the Presenter and the View through its @VaadinPresenter annotation, preventing a single Presenter from handling different View implementations.
  6. It requires explicitly calling the init() method of the Presenter, as a @PostConstruct annotation on a superclass is not called when the subclass has its own.

I’ve developed an alternative class that tries to address the previous points – and is also simpler:

public abstract class Presenter<T> {

    private final T view;
    private final EventBus eventBus;

    public Presenter(T view, EventBus eventBus) {
        Assert.notNull(view);
        Assert.notNull(eventBus);
        this.view = view;
        this.eventBus = eventBus;
        eventBus.subscribe(this);
    }
    // Other plumbing code
}

This class makes every subclass easily unit testable, as the following snippets prove:

public class FooView extends Label {}

public class FooPresenter extends Presenter<FooView> {

    public FooPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

public class PresenterTest {

    private FooPresenter presenter;
    private FooView fooView;
    private EventBus eventBus;

    @Before
    public void setUp() {
        fooView = new FooView();
        eventBus = mock(EventBus.class);
        presenter = new FooPresenter(fooView, eventBus);
    }

    @Test
    public void should_manage_underlying_view() {
        String message = "anymessagecangohere";
        presenter.onNewCaption(message);
        assertEquals(message, fooView.getCaption());
    }
}

The same integration test as for the initial class can also be written, using explicit dependency injection:

public class ExplicitPresenter extends Presenter<FooView> {

    public ExplicitPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@Configuration
@EnableVaadin
public class ExplicitConfig {

    @Autowired
    private EventBus eventBus;

    @Bean
    @UIScope
    public FooView fooView() {
        return new FooView();
    }

    @Bean
    @UIScope
    public ExplicitPresenter fooPresenter() {
        return new ExplicitPresenter(fooView(), eventBus);
    }
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ExplicitConfig.class)
@VaadinAppConfiguration
public class ExplicitPresenterIT {

    @Autowired
    private ExplicitPresenter explicitPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_explicit";
        eventBus.publish(this, message);
        assertEquals(message, explicitPresenter.getView().getCaption());
    }
}

Last but not least, this alternative also lets you use auto-wiring and component scanning if you feel like it! The only difference is that it enforces constructor auto-wiring instead of field auto-wiring (in my eyes, this counts as a plus, albeit a little more verbose):

@UIScope
@VaadinComponent
public class FooView extends Label {}

@UIScope
@VaadinComponent
public class AutowiredPresenter extends Presenter<FooView> {

    @Autowired
    public AutowiredPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@ComponentScan
@EnableVaadin
public class ScanConfig {}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ScanConfig.class)
@VaadinAppConfiguration
public class AutowiredPresenterIT {

    @Autowired
    private AutowiredPresenter autowiredPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_autowired";
        eventBus.publish(this, message);
        assertEquals(message, autowiredPresenter.getView().getCaption());
    }
}

The good news is that this module is now part of the vaadin4spring project on GitHub. If you need MVP for your Vaadin Spring application, you're just a click away!


Je suis Charlie

January 8th, 2015

This week, there won’t be any technical article. As you might have heard about, cowards armed with automatic weapons brutally murdered cartoonists, journalists, policemen… All all in all, 12 people died because of stupidity and obscurantism, just because their truth was contradictory to their murderers’ – or just because they happened to be on site.

Right now, I'm afraid I'm starting to understand what American people felt after September 11th.

I’m not a cartoonist, not a soldier, not a policeman, not a writer, so this is my way of expressing my empathy with the families and those close to the victims. I also urge everyone to clearly separate between the general muslim population who only aspire to live their lives in peace and fanatics. Don’t play the game of the terrorists!

Please let me finish with two drawings: one for hope, by Banksy, the other for fun, in raw Charlie Hebdo style, by @LouisonA.



Spring profiles or Maven profiles?

January 4th, 2015

Deploying to different environments requires configuration: e.g. the database URL(s) must be set for each dedicated environment. In most – if not all – Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there's no reason not to use the same configuration system, e.g. using an embedded h2 database instead of the production one.

Unfortunately, Java EE applications generally fall outside this usage, as the good practice on deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty – which implement only a fraction of the Java EE Web Profile – provide this nifty and useful feature.

As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

  • For deployed environment, one that specifies the JNDI location lookup
  • For local development (and test), one that configures a connection pool around a direct database connection

A simple properties file cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration testing purposes can be found in my book, Integration Testing from the Trenches.

With the Maven build system, switching between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven's standard resource directory with a dedicated one:


<profiles>
  <profile>
    <id>dev</id>
    <build>
      <resources>
        <resource>
          <directory>profile/dev</directory>
          <includes>
            <include>**/*</include>
          </includes>
        </resource>
      </resources>
    </build>
  </profile>
</profiles>

Activating one or more profiles is as easy as using the -P switch with their ids on the command line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

mvn package -Pdev

Now, let’s add a simple requirement: as I’m quite lazy, I want to exert the minimum effort possible to package the application along with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: create a production profile and configure it to be the default.


<profile>
  <id>dev</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  ...
</profile>

Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is deactivated. Previous experiences have taught me that because profiles are so easy to implement, they are used (and overused), so that the default one easily gets lost in the process. For example, in one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one had to a) know about the sneaky profile, b) know it would break the default profile and c) explicitly set the not-default-anymore profile.

Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

Another drawback of this approach is the tendency toward over-fragmentation of the configuration files. I prefer coarse-grained configuration files, each dedicated to a layer or a use-case. For example, I'd like to declare at least the datasource, the transaction manager and the entity manager in the same file, along with possibly the different repositories.

Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I'm not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated; when it's not, the JNDI lookup bean is. This kind of configuration is implemented in the following snippet:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/conditional");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

In this context, profiles are just a way to activate specific beans; the real magic is achieved through the different @Conditional annotations.

Note: it is advisable to create a dedicated annotation to avoid String typos, be more refactoring-friendly and improve search capabilities on the code.

@Retention(RUNTIME)
@Target({TYPE, METHOD})
@Profile("dev")
public @interface Development {}
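
For illustration, here's how the dev datasource bean shown earlier could use the custom annotation instead of the string-based profile (a sketch reusing the same bean definition):

@Configuration
public class MyConfiguration {

    @Bean
    @Development // strictly equivalent to @Profile("dev"), but typo-safe and searchable
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/conditional");
        dataSource.setUsername("sa");
        return dataSource;
    }
}

The profile itself is still activated at runtime, e.g. with -Dspring.profiles.active=dev on the command line.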

Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries, those used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 MB JAR file. There are two main counterarguments to this:

  • First, if you’re concerned about a couple of additional Mb, then your main issue is probably not on the software side, but on the disk management side. Perhaps a virtual layer such as VMWare or Xen could help?
  • Then, if the need be, you can still configure the build system to streamline the produced artifact.

The second drawback of Spring profiles is that, along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, this was a no-go. Then, as usual, I thought more and more about it, and came to the following conclusion: there's nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about it: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. Doing anything to the exploded archive is considered very bad practice in all cases. So what would be the reason not to package the development configuration along? The only reason I can think of is: to be clean, from a theoretical point of view. Me being a pragmatist, I think the advantages of using Spring profiles far outweigh this drawback.

In my current project, I created a single configuration fragment with all the beans that depend on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

So on one hand, we've got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we've got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I'd suggest giving them a try: I did, and am so far happy with them.


Optional dependencies in Spring

December 21st, 2014

I’m a regular Spring framework user and I think I know the framework pretty well, but it seems I’m always stumbling upon something useful I didn’t know about. At Devoxx, I learned that you could express conditional dependencies using Java 8’s new Optional type. Note that before Java 8, optional dependencies could be auto-wired using @Autowired(required = false), but then you had to check for null.

How good is that? Well, I can think of a million use-cases, but here are some off the top of my head:

  • Prevent usage of infrastructure dependencies, depending on the context. For example, in a development environment, one wouldn’t need to send metrics to a MetricRegistry
  • Provide defaults when required infrastructure dependencies are not provided, e.g. an h2 datasource
  • The same could be done in a testing environment.
  • etc.

The implementation is very straightforward:

@ContextConfiguration(classes = OptionalConfiguration.class)
public class DependencyPresentTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private Optional<HelloService> myServiceOptional;

    @Test
    public void should_return_hello() {
        String sayHello = null;
        if (myServiceOptional.isPresent()) {
            sayHello = myServiceOptional.get().sayHello();
        }
        assertNotNull(sayHello);
        assertEquals(sayHello, "Hello!");
    }
}

At this point, not only does the code compile fine, but the dependency is resolved when the context is created. Either the OptionalConfiguration contains the HelloService bean – and the above test succeeds – or it doesn't – and the test fails.
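
For completeness, a possible shape for the configuration and the service – assumed here, as they aren't shown above – could be:

public class HelloService {

    public String sayHello() {
        return "Hello!";
    }
}

@Configuration
public class OptionalConfiguration {

    @Bean
    public HelloService helloService() {
        return new HelloService();
    }
}

With the @Bean method present, the injected Optional is filled; remove it, and the Optional is empty.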

This pattern is very elegant and I suggest you add it to your bag of available tools.


Avoid conditional logic in @Configuration

December 7th, 2014

Integration testing Spring applications mandates creating small dedicated configuration fragments and assembling them, either during a normal run of the application or during tests. Even in the latter case, different fragments can be assembled in different tests.

However, this practice doesn't handle the use-case where I want to use the application in two different environments. As an example, I might want to use a JNDI datasource in deployed environments and a direct connection when developing on my local machine. Assembling different fragment combinations is not possible, as I want to run the application in both cases, not test it.

My only requirement is that the default should use the JNDI datasource, while activating a flag – a profile – should switch to the direct connection. The Pavlovian reflex in this case would be to add a simple condition in the @Configuration class:

@Configuration
public class MyConfiguration {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() throws Exception {
        if (env.acceptsProfiles("dev")) {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/conditional");
            dataSource.setUsername("sa");
            return dataSource;
        }
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional"); 
    }
}

Starting to use this kind of flow-control statement is the beginning of the end, as it will lead to adding more control-flow statements in the future, which will in turn lead to a tangled mess of spaghetti configuration, and ultimately to an unmaintainable application.

Spring Boot offers a nice alternative to handle this use-case with different flavors of @ConditionalXXX annotations. They do the job while being easy to use, readable and limited. While the latter point might seem a drawback, it's the biggest asset IMHO (not unlike Maven plugins). Code is powerful, and with great power must come great responsibility, something that is hardly sustainable during the course of a project with deadlines and pressure from the higher-ups. That's the main reason one of my colleagues advocates XML over JavaConfig: with XML, you're sure there won't be any abuse while the project runs its course.

But let’s stop the philosophy and back to @ConditionalXXX annotations. Basically, putting such an annotation on a @Bean method will invoke this method and put the bean in the factory based on a dedicated condition. There are many of them, here are some important ones:

  • Dependent on Java version, newer or older – @ConditionalOnJava
  • Dependent on a bean present in the factory – @ConditionalOnBean, and its opposite, dependent on a bean not being present – @ConditionalOnMissingBean
  • Dependent on a class present on the classpath – @ConditionalOnClass, and its opposite @ConditionalOnMissingClass
  • Whether it’s a web application or not – @ConditionalOnWebApplication and @ConditionalOnNotWebApplication
  • etc.

Note that the whole list of existing conditions can be browsed in Spring Boot’s org.springframework.boot.autoconfigure.condition package.

With this information, we can migrate the above snippet to a more robust implementation:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/localisatordb");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

The configuration is now neatly separated into two different methods: the first will be called only when the dev profile is active, while the second will be called only when the first one isn't – hence, when the dev profile is not active.

Finally, the best thing about this feature is that it is easily extensible, as it depends only on the @Conditional annotation and the Condition interface (which are part of Spring proper, not Spring Boot).
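
To illustrate that extensibility, here's a minimal custom condition of my own (a sketch, not part of the snippets above) that matches only when the dev profile is not active:

import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

public class OnNonDevCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // delegate to the environment, much like @Profile does under the hood
        return !context.getEnvironment().acceptsProfiles("dev");
    }
}

A @Bean method annotated with @Conditional(OnNonDevCondition.class) would then be registered only outside the dev profile.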

Here’s a simple example in Maven/IntelliJ format for you to play with. Have fun!


From Vaadin to Docker, a novice’s journey

November 23rd, 2014

I’m a huge Vaadin fan and I’ve created a Github workshop I can demo at conferences. A common issue with such kind of workshops is that attendees have to prepare their workstations in advance… and there’s always a significant part of them that comes with not everything ready. At this point, two options are available to the speaker: either wait for each of the attendee to finish the preparation – too bad for the people who took the time at home to do that, or start anyway – and lose the not-ready part.

Given the current buzz around Docker, I thought it could be a very good way to make the workshop preparation quicker – only one step – and hassle-free – no problems regarding the quirks of your operating system. The steps I ask the attendees to perform are the following:

  1. Install Git
  2. Install Java, Maven and Tomcat
  3. Clone the git repo
  4. Build the project (to prepare the Maven repository)
  5. Deploy the built webapp
  6. Start Tomcat

All these steps should be directly automated with Docker. As I wasted much time getting this to work, here's the tale of my journey in achieving it (be warned, it's quite long). If you've got similar use-cases, I hope it will help you get things done faster.

Starting with Docker

The first step was to get to know the basics of Docker. Fortunately, I had the chance to attend a Docker workshop by David Gageot at Duchess Swiss. This covered both Docker installation and the basics of Dockerfiles. I assume readers likewise have a basic understanding of Docker.

For those who don’t, I guess browsing the Docker’s official documentation is a nice idea:

Building my first Dockerfile

The Docker image can be built with the following command, run in the directory of the Dockerfile:

$ docker build -t vaadinworkshop .

The first issue one can encounter when playing with Docker for the first time is the following error message:

Get http:///var/run/docker.sock/v1.14/containers/json: dial unix /var/run/docker.sock: no such file or directory

The reason is that one didn't export the required environment variables displayed by the boot2docker information message. If you lost the exact data, no worries, just use the shellinit boot2docker parameter:

$ boot2docker shellinit
Writing /Users/i303869/.docker/boot2docker-vm/ca.pem:
Writing /Users/i303869/.docker/boot2docker-vm/cert.pem:
Writing /Users/i303869/.docker/boot2docker-vm/key.pem:
    export DOCKER_HOST=tcp://192.168.59.103:2376
    export DOCKER_CERT_PATH=/Users/i303869/.docker/boot2docker-vm

Copy-pasting the export lines above solves the issue. These exports can also be set in one's .bashrc script, as the values seem to seldom change.
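
A handy shortcut – assuming a Bash-compatible shell – is to evaluate the command's output directly:

$ eval "$(boot2docker shellinit)"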

Next in line is the following error:

Get http://192.168.59.103:2376/v1.14/containers/json: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

This error message seems to be caused by a version mismatch between the client and the server, apparently due to a bug on Mac OSX when upgrading. For a long-term solution, reinstall Docker from scratch; for a quick fix, use the --tls flag with the docker command. As it is quite cumbersome to type every time, one can alias it:

$ alias docker="docker --tls"

My last mistake when building the image came from building the Dockerfile in a non-empty directory. Docker sends every file it finds in the directory of the Dockerfile to the Docker daemon as the build context:

$ docker --tls build -t vaadinworkshop .
Sending build context to Docker daemon Too many kB

Fix: do not try this at home and start from a directory containing the Dockerfile only.

Starting from scratch

Dockerfiles describe images, which are built as a layered list of instructions. Docker images are designed around single inheritance: each image has a single parent. An image requiring no parent starts from scratch, but Docker provides four official base distributions: busybox, debian, ubuntu and centos (operating systems are generally a good start).

Whatever you want to achieve, it is necessary to choose the right parent. Given the requirements I set for myself (Java, Maven, Tomcat and Git), I tried to find the right starting image. Many Dockerfiles are already available online on the Docker Hub. The browsing app is quite good but, to be really honest, the search could really be improved.

My intention was to use the image that matched most of my requirements, then fill the gap. I could find no image providing Git, but I thought the dgageot/maven Dockerfile would be a nice starting point. The problem is that the base image is a busybox, which provides no installer out of the box (apt-get, yum, whatever). For this reason, David uses a lot of curl to get Java 8 and Maven in his Dockerfiles.

I foolishly thought I could use a different flavor of busybox that provides the opkg installer. After a while, I had accumulated many problems, resolving one leading to another. In the end, I finally decided to use the OS I was most comfortable with and to install everything myself:

FROM ubuntu:utopic

Scripting Java installation

Installing the git, maven and tomcat8 packages is very straightforward (if you don't forget to use the non-interactive options) with RUN and apt-get:

RUN apt-get update && \
    apt-get install -y --force-yes git maven tomcat8

Java doesn’t fall into this nice pattern, as Oracle wants you to accept the license. Nice people did however publish it to a third-party repo. Steps are the following:

  1. Add the needed package repository
  2. Configure the system to automatically accept the license
  3. Configure the system to add un-certified packages
  4. Update the list of repositories
  5. At last, install the package
  6. Also add a package for Java 8 system configuration

RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main" | tee -a /etc/apt/sources.list && \
    echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886

RUN apt-get update && \
    apt-get install -y --force-yes oracle-java8-installer oracle-java8-set-default

Building the sources

Getting the workshop’s sources and building them is quite straightforward with the following instructions:

RUN git clone https://github.com/nfrankel/vaadin7-workshop.git
WORKDIR /vaadin7-workshop
RUN mvn package

The drawback of this approach is that Maven will start from a fresh repository, and thus download the Internet the first time it is launched. At first, I wanted to mount a volume from the host to the container to share the ~/.m2/repository folder and avoid this, but I noticed this can only be done at runtime through the -v option, as the VOLUME instruction cannot point to a host directory.

Starting the image

The simplest command to start the created Docker image is the following:

$ docker run -p 8080:8080 vaadinworkshop

Do not forget the port forwarding from the container to the host; 8080 is the standard HTTP port. Also, note that it's not necessary to run the container as a daemon (with the -d option). The added value of not doing so is that the standard output of the CMD (see below) is redirected to the host. When running as a daemon and wanting to check the logs, one has to execute bash in the container, which requires a sequence of cumbersome manipulations.
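
For the record, with Docker 1.3+, opening a shell in a running daemonized container boils down to the following (the container id comes from docker ps):

$ docker ps
$ docker exec -it <container-id> bash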

Configuring and launching Tomcat

Tomcat can be launched when starting the container by just adding the following instruction to the Dockerfile:

CMD ["catalina.sh", "run"]

However, trying to start the container at this point will result in the following error:

Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/common/classes], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/common], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/server/classes], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/server], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/shared/classes], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/shared], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina initDirs
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina initDirs
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina start
SEVERE: Cannot start server. Server instance is not configured.

I have no idea why, but it seems Tomcat 8 on Ubuntu is not configured in any meaningful way. Everything is available, but we need some symbolic links here and there, as well as to create the temp directory. This translates into the following instruction in the Dockerfile:

RUN ln -s /var/lib/tomcat8/common $CATALINA_HOME/common && \
    ln -s /var/lib/tomcat8/server $CATALINA_HOME/server && \
    ln -s /var/lib/tomcat8/shared $CATALINA_HOME/shared && \
    ln -s /etc/tomcat8 $CATALINA_HOME/conf && \
    mkdir $CATALINA_HOME/temp

The final trick is to connect the exploded webapp folder created by Maven to Tomcat's webapps folder, where it looks for deployments:

RUN mkdir $CATALINA_HOME/webapps && \
    ln -s /vaadin7-workshop/target/workshop-7.2-1.0-SNAPSHOT/ $CATALINA_HOME/webapps/vaadinworkshop

At this point, the Holy Grail is not far away; you just have to browse the URL… if only we knew what the IP was. Since we're running on a Mac, there's an additional VM involved besides the host and the container. To get this IP, type:

$ boot2docker ip

The VM's Host only interface IP address is: 192.168.59.103

Now, browsing http://192.168.59.103:8080/vaadinworkshop/ brings us to the familiar workshop screen.

Developing from there

Everything works fine, but didn't we just forget one important thing, namely how workshop attendees are supposed to work on the sources? Easy enough: just mount the volume when starting the container:

docker run -v /Users//vaadin7-workshop:/vaadin7-workshop -p 8080:8080 vaadinworkshop

Note that the host volume must be part of /Users and, on OSX, it requires boot2docker v1.3+.

Unfortunately, it seems this is the showstopper: mounting an empty directory from the host to the container does not make the container's directory available from the host. On the contrary, it empties the container's directory, given that the host's directory doesn't exist… It seems there's an issue in Docker on Mac. The installation of JHipster runs into the same problem and proposes to use the Samba Docker folder sharing project.

I’m afraid I was too lazy to go further at this point. However, this taught me much about Docker, its usages and use-cases (as well as OSX integration limitations). For those who are interested, you’ll find below the Docker file. Happy Docker!

FROM ubuntu:utopic

MAINTAINER Nicolas Frankel 

# Config to get to install Java 8 w/o interaction
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main" | tee -a /etc/apt/sources.list && \
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886

RUN apt-get update && \
apt-get install -y --force-yes git oracle-java8-installer oracle-java8-set-default maven tomcat8

RUN git clone https://github.com/nfrankel/vaadin7-workshop.git
WORKDIR /vaadin7-workshop
RUN git checkout v7.2-1
RUN mvn package

ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
ENV CATALINA_HOME /usr/share/tomcat8
ENV PATH $PATH:$CATALINA_HOME/bin

# Configure Tomcat 8 directories
RUN ln -s /var/lib/tomcat8/common $CATALINA_HOME/common && \
ln -s /var/lib/tomcat8/server $CATALINA_HOME/server && \
ln -s /var/lib/tomcat8/shared $CATALINA_HOME/shared && \
ln -s /etc/tomcat8 $CATALINA_HOME/conf && \
mkdir $CATALINA_HOME/temp && \
mkdir $CATALINA_HOME/webapps && \
ln -s /vaadin7-workshop/target/workshop-7.2-1.0-SNAPSHOT/ $CATALINA_HOME/webapps/vaadinworkshop

VOLUME ["/vaadin7-workshop"]

CMD ["catalina.sh", "run"]

# docker build -t vaadinworkshop .
# docker run -v ~/vaadin7-workshop:/vaadin7-workshop -p 8080:8080 vaadinworkshop

Metrics, metrics everywhere

November 16th, 2014

With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

  1. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and at which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind), and metrics might also be required for different architectures. And having all metrics in the same backend means they can be correlated easily.
  2. Metrics, as any other NFR (e.g. logging and exception handling), should be designed and managed upfront, not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize NFRs were important. Trust me when I say it was gory – and it cost much more than if it had been designed in the early phases of the project.
  3. Metrics have an overhead. However, without metrics, it’s not possible to increase performance. Just accept that and live with it.

The inputs are the following: the application is Spring MVC-based, and metrics have to be aggregated in Graphite. We will start by using the excellent Metrics project: not only does it get the job done, its documentation is of very high quality and it's available under the friendly open-source Apache v2.0 license.

That said, let’s imagine a “standard” base architecture to manage those components.

First, though Metrics offers a Graphite endpoint, this requires configuration in each environment, which makes things harder, especially on developers' workstations. To manage this, we'll send metrics to JMX and introduce jmxtrans as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration when none is needed – and has no impact on performance.

Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are – or sometimes because we didn't browse the documentation. My point of view as a software engineer is that I'd rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already integrates with Metrics through the Actuator component. However, it only provides GaugeService – to send single values – and CounterService – to increment/decrement values. This might be good enough for FRs, but not for NFRs, so we might want to tweak things a little.
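
For reference, here's what those two out-of-the-box services look like in use (a sketch; the surrounding service class and metric names are made up):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.metrics.CounterService;
import org.springframework.boot.actuate.metrics.GaugeService;
import org.springframework.stereotype.Service;

@Service
public class CheckoutMetrics {

    private final CounterService counterService;
    private final GaugeService gaugeService;

    @Autowired
    public CheckoutMetrics(CounterService counterService, GaugeService gaugeService) {
        this.counterService = counterService;
        this.gaugeService = gaugeService;
    }

    public void record(double amountInCents) {
        counterService.increment("checkout.attempts"); // running count
        gaugeService.submit("checkout.amount", amountInCents); // last submitted value only
    }
}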

The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

The starting point is to create an aspect, as performance metrics are a cross-cutting concern:

@Aspect
public class MetricAspect {

    private final MetricSender metricSender;

    @Autowired
    public MetricAspect(MetricSender metricSender) {
        this.metricSender = metricSender;
    }

    @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) || execution(* ch.frankel.blog.metrics.dice..*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
        StopWatch stopWatch = metricSender.getStartedStopWatch();
        try {
            return pjp.proceed();
        } finally {
            Class clazz = pjp.getTarget().getClass();
            String methodName = pjp.getSignature().getName();
            metricSender.stopAndSend(stopWatch, clazz, methodName);
        }
    }
}

The only thing out of the ordinary is the usage of auto-wiring, as aspects don't seem to be able to be the target of explicit wiring (yet?). Also notice that the aspect itself doesn't interact with the Metrics API; it only delegates to a dedicated component:

public class MetricSender {

    private final MetricRegistry registry;

    public MetricSender(MetricRegistry registry) {
        this.registry = registry;
    }

    private Histogram getOrAdd(String metricsName) {
        Map<String, Histogram> registeredHistograms = registry.getHistograms();
        Histogram registeredHistogram = registeredHistograms.get(metricsName);
        if (registeredHistogram == null) {
            Reservoir reservoir = new ExponentiallyDecayingReservoir();
            registeredHistogram = new Histogram(reservoir);
            registry.register(metricsName, registeredHistogram);
        }
        return registeredHistogram;
    }

    public StopWatch getStartedStopWatch() {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        return stopWatch;
    }

    private String computeMetricName(Class clazz, String methodName) {
        return clazz.getName() + '.' + methodName;
    }

    public void stopAndSend(StopWatch stopWatch, Class clazz, String methodName) {
        stopWatch.stop();
        String metricName = computeMetricName(clazz, methodName);
        getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
    }
}

The sender does several interesting things (but with no state):

  • It returns a new StopWatch for the aspect to pass back after method execution
  • It computes the metric name depending on the class and the method
  • It stops the StopWatch and sends the time to the MetricRegistry
  • Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our needs.

The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method:

@Configuration
public class MetricsConfiguration {

    @Autowired
    private MetricRegistry metricRegistry;

    @Bean
    public MetricSender metricSender() {
        return new MetricSender(metricRegistry);
    }

    @PostConstruct
    public void connectRegistryToJmx() {
        JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
        reporter.start();
    }
}

The JConsole should look like the following. Icing on the cake, all default Spring Boot metrics are also available.

Sources for this article are available in Maven “format”.

Dear recruiters

November 9th, 2014

I was pleasantly surprised recently when, after connecting on LinkedIn, a recruiter sent me a personalized mail. It was the first time it happened, and I found this gesture showed that the recruiter cared about the relationship. Besides, the job I currently have was found on LinkedIn – or more correctly, a recruiter found me for this job. This is to say that I have nothing against either recruiters or LinkedIn.

Unfortunately, more often than not, this is the kind of message I receive:

Hello Nikola, (first-name basis, because we've known each other for such a long time. And do not forget to misspell the name)

I’m currently searching for a junior PHP developer and I believe you would be a perfect fit. (thanks for reading my profile, I really appreciate custom tailored emails)

The job is located in Bulgaria (oh yes, relocating there has always been the goal of my life, thanks for asking!)

and the daily rate is $200 (wow, what a great incentive to relocate!)

Cheers, your friend the recruiter (located in the UK, India, or anywhere else unrelated to my location or the job location)

Those kinds of mails end up directly in my spam folder. However, last week saw another usual but still annoying story:

  1. A recruiter cold-contacts me for a dream job – at least from his point of view. Fortunately, it was by mail this time, not by phone, interrupting me during my work
  2. I ask about trivial details, such as the job description and the expected salary
  3. The recruiter tells me he cannot disclose them at this point

So, that’s it, dear recruiters, this is the last straw. I would really appreciate if you put yourself in my place (for once). When you take time for me, you’re doing your job – and earning your pay, while when I take time for you, it’s unrelated to what I’m paid to do. The only thing I get is only a vague possibility I can get a better job (if it’s possible). If I ask about details, it’s because I do want to optimize the time consumed, both mine and yours. By the way, I have no interest in running to your competitor and telling him about your extraordinary job offering, I’ve many more interesting things in life so telling me about the job details is mandatory, not something that I have to bargain for.

I’ve devised a little something that can help you, please read the following classes:

public class JobMatchEvaluator {

    private final Company company;

    public JobMatchEvaluator(Company company) {
        this.company = company;
    }

    public InterestLevel evaluate() {
        if (!company.focusOnSoftware()
         || !company.investInPeople()
         || !company.staysUpToDate()
         || !company.allowsAnotherLife()
         || !company.respectCandidates()) {
            return InterestLevel.NONE;
        }
        InterestLevel interest = InterestLevel.NONE;
        if (company.getTraining() == PROACTIVE || company.getTraining() == NOT_ONLY_SOFTWARE) {
            interest = interest.increase();
        }
        if (company.getConference() == ACTIVELY_SENDING) {
            interest = interest.increase();
        }
        if (company.getTime() == TWENTY_PERCENT_PET_PROJECT) {
            interest = interest.increase();
        }
        return interest;
    }
}

public class ContactDecisionHelper {

    private final JobMatchEvaluator evaluator;
    private final Recruiter recruiter;

    public ContactDecisionHelper(JobMatchEvaluator evaluator, Recruiter recruiter) {
        this.evaluator = evaluator;
        this.recruiter = recruiter;
    }

    public boolean shouldContact() {
        if (!recruiter.canDiscloseDetails()) {
            return false;
        }
        InterestLevel interest = evaluator.evaluate();
        switch(interest) {
            case NOT_ENOUGH:
            case NONE:
                return false;
            case TOTAL:
            case SPARKED:
                return true;
            default:
                throw new IllegalStateException();
        }
    }
}

Here are the guidelines to use the previous code:

  • If you don’t want to bother reading it, please don’t contact me
  • If you don’t understand the general idea, please don’t contact me
  • If you think it makes me sound better than you, please don’t contact me
  • If it doesn’t make you smile, or you find it not even remotely amusing, please don’t contact me

The complete project can be found on GitHub. Feel free to clone it and adapt it to your needs, or even send me pull requests for generic improvements.
