Archive for the ‘Java’ Category

Better developer-to-developer collaboration with Bintray

March 29th, 2015

I recently got interested in Spring Social, and as part of my learning path, I tried to integrate its GitHub module, which is still in incubator mode. Unfortunately, this module seems to have been left behind: its dependency on the core module uses an old version of it. Since I use the latest version of the core, Maven resolves a single version to put in the WEB-INF/lib folder of the WAR package – and it doesn’t work so well at runtime.

The following diagram shows this situation:

Dependencies original situation

I could have excluded the old version from the transitive dependencies, but I’m lazy and Maven doesn’t make it easy (yet). Instead, I decided to just upgrade the GitHub module to the latest version of the core and install it in my local repository. That proved quite easy, as there was no incompatibility with the newest version of the core – I even created a pull request. This is the updated situation:

Dependencies final situation
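
For the record, the exclusion route I dismissed would have looked something like the following – a sketch only, with the incubator module’s version assumed:

<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-github</artifactId>
    <version>1.0.0.M1</version>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.social</groupId>
            <artifactId>spring-social-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>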

Unfortunately, if I now decide to distribute this version of my application, nobody will be able to either build or run it, since only I have the “patched” (latest) version of the GitHub module in my local repo. I could distribute the updated sources along with it, but that would mean you would have to build the module and install it into your local repo before using my app.

Bintray to the rescue! Bintray is a binary repository, able to host any kind of binary: JARs, WARs, DEBs, anything. It is hosted online and free for open-source projects, which nicely suits my use-case. This is how I uploaded my artifact to Bintray.

Create an account
Bintray makes it quite easy to create such an account, using one of the available authentication providers – GitHub, Twitter or Google+. Alternatively, one can create an old-style account, with a password.
Create an artifact
Once authenticated, an artifact needs to be created. Select your default Maven repository; it can be found at https://bintray.com/<username>/maven. Then, click on the big Add New Package button located on the right border. On the opening page, fill in the required information. The package can be named whatever you want; I chose to use the Maven artifact identifier: spring-social-github.
Create a version
Files can only be added to a version, so a version needs to be created first. On the package detail page, click on the New Version link (second column, first line). On the opening page, fill in the version name. Note that snapshots are not accepted, and this is only checked through the -SNAPSHOT suffix. I chose to use 1.0.0.BUILD.
Upload files
Once the version is created, files can finally be uploaded. In the top bar, click the Upload Files button. Drag and drop all desired files – the main JAR and the POM of course, but source and javadoc JARs can be included as well. Notice the Target Repository Path field: it should be set to the logical path of the Maven artifact, with groupId, artifactId and version separated by slashes. For example, my use-case resolves to org/springframework/social/spring-social-github/1.0.0.BUILD. Instead of filling in this field, you can wait for the files to be uploaded: Bintray will detect the upload, analyze the POM and propose to set it automatically. If the proposal fits – and it probably does, just accept it.
Publish
Uploading files is not enough, as those files remain temporary until publication. A big notice warns about it: just click on the Publish link located on the right border.

At this point, you only need to add the Bintray repository to the POM.


    
<repositories>
    <repository>
        <id>bintray</id>
        <url>http://dl.bintray.com/nfrankel/maven</url>
        <releases>
            <enabled>true</enabled>
        </releases>
    </repository>
</repositories>
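
With the repository declared, the patched artifact can then be referenced like any other dependency, using the coordinates from the upload above:

<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-github</artifactId>
    <version>1.0.0.BUILD</version>
</dependency>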

Become a DevOps with Spring Boot

March 8th, 2015

Have you ever found yourself in the situation where you finish a project and are about to deliver it to the Ops team? You’re so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file – and documented, and you even made sure logging is implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don’t know how to monitor the new application. And you missed that… Sure, you could hack something together to fulfill this requirement, but the project is already over budget. In some (most?) companies, this means someone will have to be blamed, and chances are the developer will bear all the burden. Time for some sleepless nights.

Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are among the features known to most. However, I think there’s a hidden gem that should be advertised much more: the actuator module provides metrics and health checks out-of-the-box, as well as an easy way to add your own. In this article, we’ll see how to access those metrics from HTTP and send them to JMX and Graphite.

As an example application, let’s use an update of the Spring Pet Clinic made with Boot – thanks to Arnaldo Piccinelli for his work. The starting point is commit 790e5d0. Now, let’s add some metrics in no time.

The first step is to add the actuator module starter to the Maven POM and let Boot do its magic:


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics (note that the path is protected by Spring Security; credentials are user/password) to see something like the following:

{
  "mem" : 562688,
  "mem.free" : 328492,
  "processors" : 8,
  "uptime" : 26897,
  "instance.uptime" : 18974,
  "heap.committed" : 562688,
  "heap.init" : 131072,
  "heap.used" : 234195,
  "heap" : 1864192,
  "threads.peak" : 20,
  "threads.daemon" : 17,
  "threads" : 19,
  "classes" : 9440,
  "classes.loaded" : 9443,
  "classes.unloaded" : 3,
  "gc.ps_scavenge.count" : 16,
  "gc.ps_scavenge.time" : 104,
  "gc.ps_marksweep.count" : 2,
  "gc.ps_marksweep.time" : 152
}

As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root page, new metrics appear:

{
  "counter.status.200.metrics" : 1,
  "counter.status.200.root" : 2,
  "counter.status.304.star-star" : 4,
  "counter.status.304.webjars.star-star" : 1,
  "gauge.response.metrics" : 72.0,
  "gauge.response.root" : 16.0,
  "gauge.response.star-star" : 8.0,
  "gauge.response.webjars.star-star" : 11.0,
  ...
}

Those metrics are more functional in nature, and they fall into two groups:

  • Gauges are the simplest metrics and return a numeric value, e.g. gauge.response.root is the time (in milliseconds) of the last response from the root path
  • Counters are metrics that can be incremented/decremented, e.g. counter.status.200.metrics is the number of times the /metrics path returned an HTTP 200 code
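
Adding your own metrics is just as easy, e.g. by injecting the CounterService Boot provides – a minimal sketch, with hypothetical business names (Boot prefixes submitted names with counter.):

@Service
public class CheckoutService {

    private final CounterService counterService;

    @Autowired
    public CheckoutService(CounterService counterService) {
        this.counterService = counterService;
    }

    public void abandonCheckout(String screen) {
        // shows up as counter.checkout.abandoned.<screen> in the /metrics output
        counterService.increment("checkout.abandoned." + screen);
    }
}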

At this point, your Ops team could probably scrape the returned JSON and make something out of it. It would be their responsibility to regularly poll the URL and use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by publishing these metrics to JMX.
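
For instance, a poll could be as simple as the following command, using the credentials mentioned above:

curl -u user:password http://localhost:8090/metrics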

Spring Boot integrates easily with the Dropwizard Metrics library. By just adding the following dependency to the POM, Boot is able to provide a MetricRegistry, a Dropwizard registry for all metrics:


<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>4.0.0-SNAPSHOT</version>
</dependency>

Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public JmxReporter jmxReporter() {
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
        return reporter;
    }
}

Launching jconsole lets us check that everything works as expected. The Ops team now just needs to fetch the metrics from JMX and push them into their preferred graphical display tool, such as Graphite. One way to achieve this is through jmxtrans. However, it’s also possible to send metrics directly to the Graphite server with just a few different API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot").build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}

The result is quite interesting given the few lines of code. Note that going through JMX on the way to Graphite makes things easier, as there’s no need for a dedicated Graphite server in development environments.


Final release of Integration Testing from the Trenches

March 1st, 2015
Writing a book is a journey. At the beginning of the journey, you mostly know where you want to go, but have only a vague notion of the way to get there and the time it will take. I’ve finally released the paperback version of Integration Testing from the Trenches on Amazon, which means this specific journey is at an end.

The book starts with a very generic discussion about testing and continues by defining Integration Testing in comparison to Unit Testing. The next chapter compares the respective merits of JUnit and TestNG. It is followed by a complete description of how to make a design testable: what works for Unit Testing also works for Integration Testing. Testing in software relies on automation, so specific usage of the Maven build tool – as well as Gradle – is described in regard to Integration Testing. Dependencies on external resources make integration tests more fragile, so faking those resources makes the tests more robust. Those resources include databases, the file system, SOAP and REST web services, etc. The most important dependency in any application is the container. The last chapters are dedicated to the Spring framework, including Spring MVC, and Java EE.

In this journey, I also dared to ask Josh Long of Spring fame and Aslak Knutsen, team lead of the Arquillian project, to write a foreword to the book – and I’ve been delighted to have them both answer positively. Thank you guys!

I’ve also talked on the subject at some JUGs and European conferences: JavaDay Kiev, Joker, Agile Tour London and JUG Lyon, and will again at JavaLand, DevIt, TopConf Romania and GeeCon. I hope that by doing so, Integration Testing will be used more effectively on projects and with a bigger ROI.

Should you want to go further, the book is available in multiple formats:

  1. A paperback version on Amazon for $49.99
  2. Electronic versions for Mac, Kindle and plain old PDF. The pricing here is more open, starting from $21.10 with a suggested price of $31.65. Note you can get it in all formats to read on all your devices.

If you’re already a reader and you like it, please feel free to recommend it. If you don’t, I welcome your feedback in the comments section. Of course, if you’re neither, I encourage you to get the book and see for yourself!


Avoid sequences of if…else statements

February 15th, 2015

Adding a feature to legacy code while trying to improve it can be quite challenging, but also quite straightforward. Nothing angers me more (ok, I might be exaggerating a little) than stumbling upon such a pattern:

public Foo getFoo(Bar bar) {
    if (bar instanceof BarA) {
        return new FooA();
    } else if (bar instanceof BarB) {
        return new FooB();
    } else if (bar instanceof BarC) {
        return new FooC();
    } else if (bar instanceof BarD) {
        return new FooD();
    }
    throw new BarNotFoundException();
}

Apply Object-Oriented Programming

The first reflex when writing such a thing – yes, please don’t wait for the poor guy coming after you to clean up your mess – should be to ask yourself whether applying basic Object-Oriented Programming couldn’t help you. In this case, you would have multiple child classes of:

public interface FooBarFunction<T extends Bar, R extends Foo> extends Function<T, R>

For example:

public class FooBarAFunction implements FooBarFunction<BarA, FooA> {
    public FooA apply(BarA bar) {
        return new FooA();
    }
}

Note: not enjoying the benefits of Java 8 is no reason not to use this approach: just create your own Function interface or use Guava’s.

Use a Map

I must admit this approach not only scatters tightly related code across multiple files (this is Java…), it’s also unfortunately not always possible to apply OOP easily. In that case, it’s quite easy to initialise a map that returns the correct type.

public class FooBarFunction {
    private static final Map<Class<? extends Bar>, Foo> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put(BarA.class, new FooA());
        MAPPINGS.put(BarB.class, new FooB());
        MAPPINGS.put(BarC.class, new FooC());
        MAPPINGS.put(BarD.class, new FooD());
    }
    public Foo getFoo(Bar bar) {
        Foo foo = MAPPINGS.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}

Note this is only a basic example; users of Dependency Injection can easily pass the map in through the object constructor instead.
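
Also note that, unlike the original snippet, the map above returns the same shared Foo instances on every call. Should a fresh instance per call be required, a Java 8 variant could store factories instead – a sketch, with the class name being mine:

public class FooBarFactory {
    private static final Map<Class<? extends Bar>, Supplier<Foo>> MAPPINGS = new HashMap<>();
    static {
        // method references act as factories: each get() call creates a new instance
        MAPPINGS.put(BarA.class, FooA::new);
        MAPPINGS.put(BarB.class, FooB::new);
        MAPPINGS.put(BarC.class, FooC::new);
        MAPPINGS.put(BarD.class, FooD::new);
    }
    public Foo getFoo(Bar bar) {
        Supplier<Foo> factory = MAPPINGS.get(bar.getClass());
        if (factory == null) {
            throw new BarNotFoundException();
        }
        return factory.get();
    }
}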

More than a return

The previous map trick works quite well with return statements, but not with arbitrary code snippets. In that case, you can use the map to return an enum and associate it with a switch-case:

public class FooBarFunction {
    private enum BarEnum {
        A, B, C, D
    }
    private static final Map<Class<? extends Bar>, BarEnum> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put(BarA.class, BarEnum.A);
        MAPPINGS.put(BarB.class, BarEnum.B);
        MAPPINGS.put(BarC.class, BarEnum.C);
        MAPPINGS.put(BarD.class, BarEnum.D);
    }
    public void doWithFoo(Bar bar) {
        // get() returns null for unmapped classes; check before switching, as switching on null throws an NPE
        BarEnum barEnum = MAPPINGS.get(bar.getClass());
        if (barEnum == null) {
            throw new BarNotFoundException();
        }
        switch (barEnum) {
            case A:
                // Do something
                break;
            case B:
                // Do something
                break;
            case C:
                // Do something
                break;
            case D:
                // Do something
                break;
        }
    }
}

Not only do I believe this code is more readable, it also performs better: the map lookup is evaluated once, as opposed to each if…else condition being evaluated in sequence until one matches.

Note that you’re not expected to put the code directly inside the case statements, but to use dedicated method calls instead.

Conclusion

I hope that at this point, you’re convinced there are alternatives to sequences of if…else. The rest is in your hands.


Improving the Vaadin 4 Spring project with a simpler MVP

January 18th, 2015

I’ve been using the Vaadin 4 Spring library on my current project, and it has been a very pleasant experience. However, in the middle of the project, a colleague of mine decided to “improve the testability”. The intention was laudable, though the project already tried to implement the MVP pattern (please check this article for more detailed information). Instead of correcting the mistakes here and there, he refactored the whole codebase using the provided MVP module… IMHO, this was a huge mistake. In this article, I’ll try to highlight what bugs me in the existing implementation and propose an alternative solution.

The existing MVP implementation consists of a single class. Here it is, abridged for readability purposes:

public abstract class Presenter<V extends View> {

    @Autowired
    private SpringViewProvider viewProvider;

    @Autowired
    private EventBus eventBus;

    @PostConstruct
    protected void init() {
        eventBus.subscribe(this);
    }

    public V getView() {
        V result = null;
        Class<?> clazz = getClass();
        if (clazz.isAnnotationPresent(VaadinPresenter.class)) {
            VaadinPresenter vp = clazz.getAnnotation(VaadinPresenter.class);
            result = (V) viewProvider.getView(vp.viewName());
        }
        return result;
    }
    // Other plumbing code
}

This class is quite opinionated and suffers from the following drawbacks:

  1. It relies on field auto-wiring, which makes it extremely hard to unit test Presenter classes. As proof, the provided test class is not a unit test but an integration test.
  2. It relies solely on component scanning, which prevents explicit dependency injection.
  3. It enforces the implementation of the View interface, whether required or not. When not using the Navigator, it makes the implementation of an empty enterView() method mandatory.
  4. It takes responsibility for creating the View from the view provider.
  5. It couples the Presenter and the View, with its @VaadinPresenter annotation, preventing a single Presenter to handle different View implementations.
  6. It requires explicitly calling the init() method of the Presenter, as the @PostConstruct annotation on a super class is not called when the subclass has one of its own.

I’ve developed an alternative class that tries to address the previous points – and is also simpler:

public abstract class Presenter<T> {

    private final T view;
    private final EventBus eventBus;

    public Presenter(T view, EventBus eventBus) {
        Assert.notNull(view);
        Assert.notNull(eventBus);
        this.view = view;
        this.eventBus = eventBus;
        eventBus.subscribe(this);
    }
    // Other plumbing code
}
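
The plumbing code elided above essentially boils down to a typed accessor, which subclasses use to reach their view:

public T getView() {
    return view;
}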

This class makes every subclass easily unit-testable, as the following snippets prove:

public class FooView extends Label {}

public class FooPresenter extends Presenter<FooView> {

    public FooPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

public class PresenterTest {

    private FooPresenter presenter;
    private FooView fooView;
    private EventBus eventBus;

    @Before
    public void setUp() {
        fooView = new FooView();
        eventBus = mock(EventBus.class);
        presenter = new FooPresenter(fooView, eventBus);
    }

    @Test
    public void should_manage_underlying_view() {
        String message = "anymessagecangohere";
        presenter.onNewCaption(message);
        assertEquals(message, fooView.getCaption());
    }
}

The same integration test as for the initial class can also be written, using explicit dependency injection:

public class ExplicitPresenter extends Presenter<FooView> {

    public ExplicitPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@Configuration
@EnableVaadin
public class ExplicitConfig {

    @Autowired
    private EventBus eventBus;

    @Bean
    @UIScope
    public FooView fooView() {
        return new FooView();
    }

    @Bean
    @UIScope
    public ExplicitPresenter fooPresenter() {
        return new ExplicitPresenter(fooView(), eventBus);
    }
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ExplicitConfig.class)
@VaadinAppConfiguration
public class ExplicitPresenterIT {

    @Autowired
    private ExplicitPresenter explicitPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_explicit";
        eventBus.publish(this, message);
        assertEquals(message, explicitPresenter.getView().getCaption());
    }
}

Last but not least, this alternative also lets you use auto-wiring and component scanning if you feel like it! The only difference is that it enforces constructor auto-wiring instead of field auto-wiring (in my eyes, this counts as a plus, albeit a little more verbose):

@UIScope
@VaadinComponent
public class FooView extends Label {}

@UIScope
@VaadinComponent
public class AutowiredPresenter extends Presenter<FooView> {

    @Autowired
    public AutowiredPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@ComponentScan
@EnableVaadin
public class ScanConfig {}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ScanConfig.class)
@VaadinAppConfiguration
public class AutowiredPresenterIT {

    @Autowired
    private AutowiredPresenter autowiredPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_autowired";
        eventBus.publish(this, message);
        assertEquals(message, autowiredPresenter.getView().getCaption());
    }
}

The good news is that this module is now part of the vaadin4spring project on GitHub. If you need MVP for your Vaadin Spring application, you’re just a click away!


Spring profiles or Maven profiles?

January 4th, 2015

Deploying on different environments requires configuration; e.g. database URL(s) must be set for each dedicated environment. In most – if not all – Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there’s no reason not to use the same configuration system, e.g. using an embedded h2 database instead of the production one.

Unfortunately, Java EE applications generally fall outside this usage, as the good practice on deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty – which implement only a fraction of the Java EE Web Profile – provide this nifty and useful feature.

As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

  • For deployed environment, one that specifies the JNDI location lookup
  • For local development (and test), one that configures a connection pool around a direct database connection

A simple properties file cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration testing purposes can be found in my book, Integration Testing from the Trenches.

With the Maven build system, changing between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven’s standard resource directory with a dedicated one:


    
<profiles>
    <profile>
        <id>dev</id>
        <build>
            <resources>
                <resource>
                    <directory>profile/dev</directory>
                    <includes>
                        <include>**/*</include>
                    </includes>
                </resource>
            </resources>
        </build>
    </profile>
</profiles>

Activating one or more profiles is as easy as using the -P switch with their id on the command line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

mvn package -Pdev

Now, let’s add a simple requirement: as I’m quite lazy, I want to exert the minimum effort possible to package the application with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: configure the desired profile to be active by default.


<profile>
    <id>dev</id>
    <activation>
        <activeByDefault>true</activeByDefault>
    </activation>
    ...
</profile>
Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is de-activated. Previous experiences have taught me that because profiles are so easy to implement, they are used (and overused), so that the default one easily gets lost in the process. For example, in one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one had to a) know about the sneaky profile, b) know it would break the default profile, and c) explicitly set the not-default-anymore profile.

Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

Another drawback of this global approach is the tendency toward over-fragmentation of the configuration files. I prefer coarse-grained configuration files, each dedicated to a layer or a use-case. For example, I’d like to declare at least the datasource, the transaction manager and the entity manager in the same file, possibly along with the different repositories.

Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I’m not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated, and the JNDI lookup bean when not. This kind of configuration is implemented in the following snippet:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/conditional");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

In this context, profiles are just a way to activate specific beans, the real magic is achieved through the different @Conditional annotations.

Note: it is advised to create a dedicated annotation to avoid String typos, be more refactoring-friendly and improve search capabilities on the code:

@Retention(RUNTIME)
@Target({TYPE, METHOD})
@Profile("dev")
public @interface Development {}
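
The dev profile itself can be activated at runtime, e.g. with the -Dspring.profiles.active=dev system property. Bean methods can then use the dedicated annotation instead of the String-based one – a minimal sketch mirroring the configuration above:

@Bean
@Development
public DataSource dataSource() {
    org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
    dataSource.setDriverClassName("org.h2.Driver");
    dataSource.setUrl("jdbc:h2:file:~/conditional");
    dataSource.setUsername("sa");
    return dataSource;
}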

Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries, those used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 MB JAR file. There are two main counterarguments to this:

  • First, if you’re concerned about a couple of additional MB, then your main issue is probably not on the software side, but on the disk management side. Perhaps a virtualization layer such as VMware or Xen could help?
  • Then, if need be, you can still configure the build system to streamline the produced artifact.

The second drawback of Spring profiles is that, along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, this was a no-go. Then, as usual, I thought more and more about it, and came to the following conclusion: there’s nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about it: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. It is considered very bad practice to change anything in the exploded archive in all cases. So what would be the reason not to package the development configuration along? The only reason I can think of is to be clean, from a theoretical point of view. Me being a pragmatist, I think the advantages of using Spring profiles far outweigh this drawback.

In my current project, I created a single configuration fragment with all the beans that depend on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

So on one hand, we’ve got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we’ve got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I suggest you give them a try: I did, and am so far happy with them.


Optional dependencies in Spring

December 21st, 2014

I’m a regular Spring framework user and I think I know the framework pretty well, but it seems I’m always stumbling upon something useful I didn’t know about. At Devoxx, I learned that you can express optional dependencies using Java 8’s new Optional type. Note that before Java 8, optional dependencies could be auto-wired with @Autowired(required = false), but then you had to check for null.

How good is that? Well, I can think of a million use-cases, but here are some that come to mind:

  • Prevent usage of infrastructure dependencies, depending on the context. For example, in a development environment, one wouldn’t need to send metrics to a MetricRegistry
  • Provide defaults when required infrastructure dependencies are not provided, e.g. an h2 datasource
  • The same could be done in a testing environment.
  • etc.

The implementation is very straightforward:

@ContextConfiguration(classes = OptionalConfiguration.class)
public class DependencyPresentTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private Optional<HelloService> myServiceOptional;

    @Test
    public void should_return_hello() {
        String sayHello = null;
        if (myServiceOptional.isPresent()) {
            sayHello = myServiceOptional.get().sayHello();
        }
        assertNotNull(sayHello);
        assertEquals(sayHello, "Hello!");
    }
}

At this point, not only does the code compile fine, but the dependency is resolved when the Spring context is created. Either the OptionalConfiguration contains the HelloService bean – and the above test succeeds, or it doesn’t – and the test fails.
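
For reference, a minimal OptionalConfiguration making the test pass could look like the following – a sketch, assuming HelloService is a single-method interface:

public interface HelloService {
    String sayHello();
}

@Configuration
public class OptionalConfiguration {

    @Bean
    public HelloService helloService() {
        return () -> "Hello!";
    }
}

Remove the @Bean method and the Optional is empty: the context still loads, but the assertNotNull above fails.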

This pattern is very elegant, and I suggest you add it to your bag of available tools.


Avoid conditional logic in @Configuration

December 7th, 2014

Integration Testing Spring applications mandates creating small, dedicated configuration fragments and assembling them either during a normal run of the application or during tests. Even in the latter case, different fragments can be assembled in different tests.

However, this practice doesn’t handle the use-case where I want to use the application in two different environments. As an example, I might want to use a JNDI datasource in deployed environments and a direct connection when developing on my local machine. Assembling different fragment combinations is not possible, as I want to run the application in both cases, not test it.

My only requirement is that the default should use the JNDI datasource, while activating a flag – a profile – should switch to the direct connection. The Pavlovian reflex in this case would be to add a simple condition to the @Configuration class.

@Configuration
public class MyConfiguration {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() throws Exception {
        if (env.acceptsProfiles("dev")) {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/conditional");
            dataSource.setUsername("sa");
            return dataSource;
        }
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional"); 
    }
}

Starting to use this kind of flow-control statement is the beginning of the end, as it will lead to more control-flow statements in the future, which will in turn lead to a tangled mess of spaghetti configuration, and ultimately to an unmaintainable application.

Spring Boot offers a nice alternative to handle this use-case with different flavors of @ConditionalXXX annotations. Using them has the following advantages: they are easy to use, readable and limited. While the latter point might seem a drawback, it’s the biggest asset IMHO (not unlike Maven plugins). Code is powerful, and with great power must come great responsibility, something that is hardly possible during the course of a project with deadlines and pressure from the higher-ups. That’s the main reason one of my colleagues advocates XML over JavaConfig: with XML, you’re sure there won’t be any abuse while the project runs its course.

But let’s stop the philosophy and get back to @ConditionalXXX annotations. Basically, putting such an annotation on a @Bean method will invoke the method and register the bean in the factory only if the dedicated condition is met. There are many of them; here are some important ones:

  • Dependent on Java version, newer or older – @ConditionalOnJava
  • Dependent on a bean present in factory – @ConditionalOnBean, and its opposite, dependent on a bean name not present – @ConditionalOnMissingBean
  • Dependent on a class present on the classpath – @ConditionalOnClass, and its opposite @ConditionalOnMissingClass
  • Whether it’s a web application or not – @ConditionalOnWebApplication and @ConditionalOnNotWebApplication
  • etc.

Note that the whole list of existing conditions can be browsed in Spring Boot’s org.springframework.boot.autoconfigure.condition package.

With this information, we can migrate the above snippet to a more robust implementation:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/localisatordb");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

The configuration is now neatly separated into two different methods: the first is called only when the dev profile is active, while the second is called only when the first one isn’t, hence when the dev profile is not active.

Finally, the best thing about this feature is that it is easily extensible, as it depends only on the @Conditional annotation and the Condition interface (which are part of Spring proper, not Spring Boot).
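
As an illustration, a custom condition reproducing the profile check from the first snippet might look like this – a minimal sketch, with the class name being mine:

public class OnDevProfileCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // true when the dev profile is active
        return context.getEnvironment().acceptsProfiles("dev");
    }
}

It can then be referenced with @Conditional(OnDevProfileCondition.class) on any @Bean method.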

Here’s a simple example in Maven/IntelliJ format for you to play with. Have fun!


Metrics, metrics everywhere

November 16th, 2014

With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

  1. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and on which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind) and metrics might also be required for different architectures. And having all metrics in the same backend means they can be correlated easily.
  2. Metrics, as any other NFR (e.g. logging and exception handling), should be designed and managed upfront, not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize NFRs were important. Trust me when I say it was gory – and it cost much more than if it had been designed in the early phases of the project.
  3. Metrics have an overhead. However, without metrics, it’s not possible to increase performance. Just accept that and live with it.

The inputs are the following: the application is Spring MVC-based, and metrics have to be aggregated in Graphite. We will start by using the excellent Metrics project: not only does it get the job done, its documentation is of very high quality and it’s available under the friendly open-source Apache v2.0 license.

That said, let’s imagine a “standard” base architecture to manage those components.

First, though Metrics offers a Graphite endpoint, this requires configuration in each environment, which makes things harder, especially on developers’ workstations. To manage this, we’ll send metrics to JMX and introduce jmxtrans as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration when none is needed – and has no impact on performance.

Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are – or sometimes because we didn’t browse the documentation. My point of view as a software engineer is that I’d rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already integrates with Metrics through the Actuator component. However, it only provides GaugeService – to send unique values – and CounterService – to increment/decrement values. This might be good enough for FRs but not for NFRs, so we might want to tweak things a little.

The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

The starting point is to create an aspect, as performance metrics are a cross-cutting concern:

@Aspect
public class MetricAspect {

    private final MetricSender metricSender;

    @Autowired
    public MetricAspect(MetricSender metricSender) {
        this.metricSender = metricSender;
    }

    @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) ||execution(* ch.frankel.blog.metrics.dice..*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
        StopWatch stopWatch = metricSender.getStartedStopWatch();
        try {
            return pjp.proceed();
        } finally {
            Class clazz = pjp.getTarget().getClass();
            String methodName = pjp.getSignature().getName();
            metricSender.stopAndSend(stopWatch, clazz, methodName);
        }
    }
}

The only thing out of the ordinary is the usage of auto-wiring, as aspects don’t seem to be able to be the target of explicit wiring (yet?). Also notice the aspect itself doesn’t interact with the Metrics API; it only delegates to a dedicated component:

public class MetricSender {

    private final MetricRegistry registry;

    public MetricSender(MetricRegistry registry) {
        this.registry = registry;
    }

    private Histogram getOrAdd(String metricsName) {
        Map<String, Histogram> registeredHistograms = registry.getHistograms();
        Histogram registeredHistogram = registeredHistograms.get(metricsName);
        if (registeredHistogram == null) {
            Reservoir reservoir = new ExponentiallyDecayingReservoir();
            registeredHistogram = new Histogram(reservoir);
            registry.register(metricsName, registeredHistogram);
        }
        return registeredHistogram;
    }

    public StopWatch getStartedStopWatch() {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        return stopWatch;
    }

    private String computeMetricName(Class clazz, String methodName) {
        return clazz.getName() + '.' + methodName;
    }

    public void stopAndSend(StopWatch stopWatch, Class clazz, String methodName) {
        stopWatch.stop();
        String metricName = computeMetricName(clazz, methodName);
        getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
    }
}

The sender does several interesting things (while holding no state):

  • It returns a new StopWatch for the aspect to pass back after method execution
  • It computes the metric name depending on the class and the method
  • It stops the StopWatch and sends the time to the MetricRegistry
  • Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our needs.

The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method:

@Configuration
public class MetricsConfiguration {

    @Autowired
    private MetricRegistry metricRegistry;

    @Bean
    public MetricSender metricSender() {
        return new MetricSender(metricRegistry);
    }

    @PostConstruct
    public void connectRegistryToJmx() {
        JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
        reporter.start();
    }
}

The new MBeans then show up in JConsole. Icing on the cake, all default Spring Boot metrics are also available.

Sources for this article are available in Maven “format”.



On resource scarcity, application servers and micro-services

October 26th, 2014

While attending JavaZone recently, I went to a talk by Neal Ford. To be honest, the talk was not mind-blowing – many of the tools he showed were either outdated or not best-of-breed – but he stated a very important fact: application servers are meant to address resource scarcity by sharing resources, while those resources are no longer scarce in this day and age.

In fact, this completely matches my experience. Remember 10 years ago, when we had to order hardware 6 months in advance? At that time, all webapps were deployed on the same application server – which wasn’t always clustered. After some years, I noticed that application servers grew in number. Some were even required to be clustered because we couldn’t afford for the offered service to be down. It was the time when people started to think about which application to deploy on which application server, because of their specific load profiles. In later years, a new behavior appeared: deploying a single webapp to a single application server, because it was deemed too critical to be potentially affected by other apps on the same server. And that sometimes led to the practice of doing so for every application, whether it was seen as critical or not.

Nowadays, hardware is available instantly, in any required quantity, and for next to nothing: they call it the Cloud. Then why are we still using application servers? I think the people behind the Spring framework asked themselves the same question and came up with a radical answer: we don’t need them. Spring has always been about pragmatic development when Java EE (called J2EE then) was still a big bloated standard, full of barbaric acronyms and a real pain in the ass to develop with (remember EJB 2.0?) – no wonder so many developers bitch about Java. Spring valued simple JSP/Servlet containers over fully-compliant Java EE servers, and now they have finally crossed the Rubicon, as no external application server is necessary anymore.

When I first heard about this, I was flabbergasted, but in the age of micro-services, I guess it makes pretty much sense. Imagine you just finished developing your application. Instead of creating a WAR, an EAR or whatever package you normally create, you just push to a Git repo. Then, a hook pushes the code to the server, stops the existing application and starts it again. Wouldn’t that be not only fun but really Agile/DevOps/whatever-cool-concept-you-want? I think it would, and that’s exactly the kind of deployment Spring Boot allows. This is not the only feature of Spring Boot – it also provides real convention over configuration, a useful Maven POM, out-of-the-box metrics and health checks, and much much more – but embedding Tomcat is the most important one (IMHO).

On the opposite side, big shops such as IBM, Oracle and even Red Hat still invest huge sums of money into developing their full-profile Java EE compliant application servers. The funny thing is that in order to be Java EE compliant, you have to implement “interesting” bits such as the Java Connector Architecture, something I’ve seen only once, early in my career, to connect to a CICS. Interestingly, the Web Profile defines a lightweight standard, leaving out JCA… but also JavaMail. Still, it at least goes the lightweight way.

Now, only the future will tell what happens next and how, but I can see a trend forming there.
