Archive for the ‘Technical’ Category

Better developer-to-developer collaboration with Bintray

March 29th, 2015

I recently got interested in Spring Social and, as part of my learning path, I tried to integrate its Github module, which is still in Incubator mode. Unfortunately, this module seems to have been left behind, and it depends on an old version of the core module. Since I use the latest version of the core, Maven resolves a single version to put in the WEB-INF/lib folder of the WAR package – and the application doesn’t work so well at runtime.

The following diagram shows this situation:

Dependencies original situation

I could have excluded the old version from the transitive dependencies, but I’m lazy and Maven doesn’t make it easy (yet). Instead, I decided to just upgrade the Github module to the latest version and install it in my local repository. That proved to be quite easy as there was no incompatibility with the newest version of the core – I even created a pull request. This is the updated situation:

Dependencies final situation
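For the record, the exclusion route I skipped would have looked something like this in the POM – the version number here is illustrative:

```xml
<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-github</artifactId>
    <version>1.0.0.M4</version> <!-- illustrative version -->
    <exclusions>
        <!-- keep the module, drop its transitive (old) core -->
        <exclusion>
            <groupId>org.springframework.social</groupId>
            <artifactId>spring-social-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```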

Unfortunately, if I now decide to distribute this version of my application, nobody will be able to either build or run it, since only I have the “patched” (latest) version of the Github module available in my local repo. I could distribute the updated sources along with it, but then you would have to build and install the module into your local repo before using my app.

Bintray to the rescue! Bintray is a binary repository, able to host any kind of binary: JARs, WARs, debs, anything. It is hosted online and free for Open Source projects, which nicely suits my use-case. This is how I uploaded my artifact to Bintray.

Create an account
Bintray makes it quite easy to create an account, using one of the available authentication providers – Github, Twitter or Google+. Alternatively, one can create an old-style account, with a password.
Create an artifact
Once authenticated, an artifact needs to be created. Select your default Maven repository; it can be found at https://bintray.com//maven. Then, click on the big Add New Package button located on the right border. On the opening page, fill in the required information. The package can be named whatever you want; I chose to use the Maven artifact identifier: spring-social-github.
Create a version
Files can only be added to a version, so a version needs to be created first. On the package detail page, click on the New Version link (second column, first line). On the opening page, fill in the version name. Note that snapshots are not accepted, and this is only checked through the -SNAPSHOT suffix. I chose to use 1.0.0.BUILD.
Upload files
Once the version is created, files can finally be uploaded. In the top bar, click the Upload Files button. Drag and drop all desired files – the main JAR and the POM of course, but this can also include the sources and Javadoc JARs. Notice the Target Repository Path field: it should be set to the logical path of the Maven artifact, with groupId, artifactId and version separated by slashes. For example, my use-case resolves to org/springframework/social/spring-social-github/1.0.0.BUILD. Instead of filling in this field, you can wait for the files to be uploaded: Bintray will detect the upload, analyze the POM and propose to set it automatically. If the proposal fits – and it probably does – just accept it.
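As an aside, the layout behind this Target Repository Path is just the standard Maven repository layout – groupId with dots replaced by slashes, then artifactId, then version. A small sketch (the method name is mine, not a Bintray API):

```java
// Sketch of the standard Maven repository layout used by the
// Target Repository Path field: groupId with dots replaced by
// slashes, then artifactId, then version.
public class MavenRepoPath {

    static String pathOf(String groupId, String artifactId, String version) {
        return groupId.replace('.', '/') + "/" + artifactId + "/" + version;
    }

    public static void main(String[] args) {
        // my use-case from above
        System.out.println(pathOf("org.springframework.social", "spring-social-github", "1.0.0.BUILD"));
    }
}
```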
Publish
Uploading files is not enough, as those files remain temporary until publication. A big notice warns about it: just click on the Publish link located on the right border.

At this point, you only need to add the Bintray repository to the POM:


    
<repositories>
    <repository>
        <id>bintray</id>
        <url>http://dl.bintray.com/nfrankel/maven</url>
        <releases>
            <enabled>true</enabled>
        </releases>
    </repository>
</repositories>
Categories: Java

Become a DevOps with Spring Boot

March 8th, 2015

Have you ever found yourself in the situation of finishing a project, about to deliver it to the Ops team? You’re so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file – and documented – and you even made sure logging has been implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don’t know how to monitor the new application. And you missed that… Sure, you could hack something together to fulfill this requirement, but the project is already over budget. In some (most?) companies, this means someone will have to be blamed, and chances are the developer will bear all the burden. Time for some sleepless nights.

Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are among the features known to most. However, I think there’s a hidden gem that deserves much more advertising: the actuator module provides metrics and health checks out-of-the-box, as well as an easy way to add your own. In this article, we’ll see how to access those metrics over HTTP and send them to JMX and Graphite.

As an example application, let’s use an update of the Spring Pet Clinic made with Boot – thanks to Arnaldo Piccinelli for his work. The starting point is commit 790e5d0. Now, let’s add some metrics in no time.

The first step is to add the actuator module starter to the Maven POM and let Boot do its magic:


<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics (note that the path is protected by Spring Security, credentials are user/password) to see something like the following:

{
  "mem" : 562688,
  "mem.free" : 328492,
  "processors" : 8,
  "uptime" : 26897,
  "instance.uptime" : 18974,
  "heap.committed" : 562688,
  "heap.init" : 131072,
  "heap.used" : 234195,
  "heap" : 1864192,
  "threads.peak" : 20,
  "threads.daemon" : 17,
  "threads" : 19,
  "classes" : 9440,
  "classes.loaded" : 9443,
  "classes.unloaded" : 3,
  "gc.ps_scavenge.count" : 16,
  "gc.ps_scavenge.time" : 104,
  "gc.ps_marksweep.count" : 2,
  "gc.ps_marksweep.time" : 152
}

As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root, new metrics appear:

{
  "counter.status.200.metrics" : 1,
  "counter.status.200.root" : 2,
  "counter.status.304.star-star" : 4,
  "counter.status.304.webjars.star-star" : 1,
  "gauge.response.metrics" : 72.0,
  "gauge.response.root" : 16.0,
  "gauge.response.star-star" : 8.0,
  "gauge.response.webjars.star-star" : 11.0,
  ...
}

Those metrics are more functional in nature, and they fall into two separate groups:

  • Gauges are the simplest metrics and return a numeric value, e.g. gauge.response.root is the time (in milliseconds) of the last response from the root path
  • Counters are metrics which can be incremented/decremented, e.g. counter.status.200.metrics is the number of times the /metrics path returned an HTTP 200 code
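To illustrate the distinction in plain Java – this is a toy sketch of the semantics, not the actuator API – a counter accumulates increments, while a gauge only keeps the last recorded value:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of the counter/gauge distinction - not the actuator API.
public class MetricsSketch {

    private final Map<String, Long> counters = new HashMap<>();
    private final Map<String, Double> gauges = new HashMap<>();

    public void increment(String name) {
        counters.merge(name, 1L, Long::sum); // counters accumulate
    }

    public void setGauge(String name, double value) {
        gauges.put(name, value); // gauges keep only the latest measurement
    }

    public long counter(String name) {
        return counters.getOrDefault(name, 0L);
    }

    public double gauge(String name) {
        return gauges.getOrDefault(name, 0.0);
    }

    public static void main(String[] args) {
        MetricsSketch metrics = new MetricsSketch();
        metrics.increment("counter.status.200.root");
        metrics.increment("counter.status.200.root");
        metrics.setGauge("gauge.response.root", 16.0);
        System.out.println(metrics.counter("counter.status.200.root")); // 2
        System.out.println(metrics.gauge("gauge.response.root"));       // 16.0
    }
}
```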

At this point, your Ops team could probably scrape the returned JSON and make something out of it. It will be their responsibility to regularly poll the URL and to use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by putting these metrics in JMX.

Spring Boot integrates easily with Dropwizard metrics. By just adding the following dependency to the POM, Boot is able to provide a MetricRegistry, a Dropwizard registry for all metrics:


<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>4.0.0-SNAPSHOT</version>
</dependency>

Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public JmxReporter jmxReporter() {
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
        return reporter;
    }
}

Launching jconsole lets us check that everything works fine. The Ops team now just needs to fetch the metrics from JMX and push them into their preferred graphical display tool, such as Graphite. One way to achieve this is through jmxtrans. However, it’s also possible to send metrics directly to the Graphite server with just a few different API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot").build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}

The result is quite interesting given the few lines of code. Note that going through the JMX route makes things easier, as there’s no need for a dedicated Graphite server in development environments.

Categories: Java

Final release of Integration Testing from the Trenches

March 1st, 2015
Writing a book is a journey. At the beginning of the journey, you mostly know where you want to go, but you have only a vague notion of the way to get there and of the time it will take. I’ve finally released the paperback version of Integration Testing from the Trenches on Amazon, which means this specific journey is at an end.

The book starts with a very generic discussion about testing and continues by defining Integration Testing in comparison to Unit Testing. The next chapter compares the respective merits of JUnit and TestNG. It is followed by a complete description of how to make a design testable: what works for Unit Testing also works for Integration Testing. Testing in software relies on automation, so the specific usage of the Maven build tool – as well as Gradle – with regard to Integration Testing is described. Dependencies on external resources make integration tests more fragile, so faking those resources makes the tests more robust. Those resources include databases, the file system, SOAP and REST web services, etc. The most important dependency in any application is the container. The last chapters are dedicated to the Spring framework, including Spring MVC, and to Java EE.

In this journey, I also dared to ask Josh Long, of Spring fame, and Aslak Knutsen, team lead of the Arquillian project, to write a foreword to the book – and I’ve been delighted to have them both answer positively. Thank you, guys!

I’ve also talked on the subject at some JUGs and European conferences: JavaDay Kiev, Joker, Agile Tour London and JUG Lyon – and will again at JavaLand, DevIt, TopConf Romania and GeeCon. I hope that by doing so, Integration Testing will be used more effectively on projects, with a bigger ROI.

Should you want to go further, the book is available in multiple formats:

  1. A paperback version on Amazon for $49.99
  2. Electronic versions for Mac, Kindle and plain old PDF. The pricing here is more open, starting from $21.10 with a suggested price of $31.65. Note you can get it in all formats, to read on all your devices.

If you’re already a reader and you like it, please feel free to recommend it. If you don’t, I welcome your feedback in the comments section. And if you’re neither, I encourage you to get the book and see for yourself!


Avoid sequences of if…else statements

February 15th, 2015

Adding a feature to legacy code while trying to improve it can be quite challenging, but also quite straightforward. Nothing angers me more (ok, I might be exaggerating a little) than stumbling upon such a pattern:

public Foo getFoo(Bar bar) {
    if (bar instanceof BarA) {
        return new FooA();
    } else if (bar instanceof BarB) {
        return new FooB();
    } else if (bar instanceof BarC) {
        return new FooC();
    } else if (bar instanceof BarD) {
        return new FooD();
    }
    throw new BarNotFoundException();
}

Apply Object-Oriented Programming

The first reflex when writing such a thing – yes, please don’t wait for the poor guy coming after you to clean up your mess – should be to ask yourself whether applying basic Object-Oriented Programming couldn’t help. In this case, you would have multiple child classes of:

public interface FooBarFunction<T extends Bar, R extends Foo> extends Function<T, R>

For example:

public class FooBarAFunction implements FooBarFunction<BarA, FooA> {
    public FooA apply(BarA bar) {
        return new FooA();
    }
}

Note: not enjoying the benefits of Java 8 is no reason not to use this approach: just create your own Function interface or use Guava’s.
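To show how such per-type functions might be selected at runtime – the registry below is my own illustration, not part of the article’s code – one possibility is a map from Bar classes to functions:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Illustrative sketch: each Bar subtype gets its own creation function,
// and a map lookup replaces the if...else chain.
public class FooBarDispatch {

    interface Bar {}
    interface Foo {}
    static class BarA implements Bar {}
    static class FooA implements Foo {}

    private static final Map<Class<? extends Bar>, Function<Bar, Foo>> FUNCTIONS = new HashMap<>();
    static {
        FUNCTIONS.put(BarA.class, bar -> new FooA());
    }

    static Foo getFoo(Bar bar) {
        Function<Bar, Foo> function = FUNCTIONS.get(bar.getClass());
        if (function == null) {
            throw new IllegalArgumentException("No mapping for " + bar.getClass());
        }
        return function.apply(bar);
    }

    public static void main(String[] args) {
        System.out.println(getFoo(new BarA()).getClass().getSimpleName()); // FooA
    }
}
```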

Use a Map

I must admit that this approach not only scatters tightly related code across multiple files (this is Java…), but it’s also unfortunately not always possible to easily apply OOP. In that case, it’s quite easy to initialise a map that returns the correct type.

public class FooBarFunction {
    private static final Map<Class<? extends Bar>, Foo> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put(BarA.class, new FooA());
        MAPPINGS.put(BarB.class, new FooB());
        MAPPINGS.put(BarC.class, new FooC());
        MAPPINGS.put(BarD.class, new FooD());
    }
    public Foo getFoo(Bar bar) {
        Foo foo = MAPPINGS.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}

Note this is only a basic example, and users of Dependency Injection can easily pass the map in the object constructor instead.

More than a return

The previous Map trick works quite well with return statements, but not with code snippets. In that case, you need the map to return an enum and associate it with a switch-case:

public class FooBarFunction {
    private enum BarEnum {
        A, B, C, D
    }
    private static final Map<Class<? extends Bar>, BarEnum> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put(BarA.class, BarEnum.A);
        MAPPINGS.put(BarB.class, BarEnum.B);
        MAPPINGS.put(BarC.class, BarEnum.C);
        MAPPINGS.put(BarD.class, BarEnum.D);
    }
    public void doWithFoo(Bar bar) {
        BarEnum barEnum = MAPPINGS.get(bar.getClass());
        if (barEnum == null) {
            throw new BarNotFoundException();
        }
        switch (barEnum) {
            case A:
                // Do something
                break;
            case B:
                // Do something
                break;
            case C:
                // Do something
                break;
            case D:
                // Do something
                break;
        }
    }
}

Note that not only do I believe this code is more readable, but it also performs better: the switch is evaluated once, as opposed to each if…else condition being evaluated in sequence until one matches.

Note that the code is not expected to go directly in the case statements: use dedicated method calls instead.

Conclusion

I hope that at this point, you’re convinced there are other ways than sequences of if…else. The rest is in your hands.

Categories: Java

Show your blog some love

February 8th, 2015

Blogging is very interesting, but just as with cars, it’s (unfortunately) not only about driving; it’s also about maintenance. As I believe there’s some worthy content inside, and as I’ve reached 20k+ visits per month, I thought it was time to add or complete some features.

Cookie authorization

Though “normal” blogging sites such as mine are probably below the radar of government agencies, European laws mandate that sites ask for users’ consent before storing cookies – even though no modern site can do without them. As I use Google Analytics to keep track of visits, I prefer to try to be compliant… just in case. Google provides an out-of-the-box script for webmasters, with the relevant documentation. Two flavors are available, a pop-up and a bar; you choose one by calling the relevant method.

Automated SEO

I thought I had achieved some degree of SEO via the SEO Smart Links WordPress plugin ages ago. In essence, I just installed the plugin; the configuration itself was very crude: you just had to edit the different files.

I recently stumbled upon another plugin that is much easier to configure: WordPress SEO by Yoast. It provides a friendly user interface with which you can design the pattern of the SEO metadata, e.g. blog name plus post title, as well as the type of metadata – Facebook OpenGraph, Twitter Card and Google+. This can of course be overridden on a case-by-case basis. Moreover, it offers a preview of the snippet as it would be displayed on Google’s search results page. There are also plenty of configuration settings I’ve yet to use (and understand).

Google Webmasters Tools

The preview provided by Yoast’s SEO plugin is nice, but doesn’t replace the real thing. Fortunately, Google offers a neat near-real preview page to test the final result of your changes in real time (no caching involved). The greatest advantage of this preview feature is that you may not only set a URL but also paste raw HTML code instead. This lets you check how Google parses the metadata of your unpublished pages.

For the sake of completeness, there’s a similar tool provided by Yandex (the Google of Russia) that gives slightly different results. From my microscopic experience, Yandex is slower, but seems to be less lenient and, more importantly, provides solutions. For example, it told me that the date value was not in the correct format (ISO 8601) and that I had forgotten the itemscope attribute at some point, which corrected the error I had on both Google and Yandex.

Both were very effective in pointing out one problem: it seems the post’s author link pointed to a URL that displayed the home page. Since it didn’t give out a 404, I had no hint about that. (I fixed it with a simple 301 redirect.)

Manual SEO

All this automated stuff is nice, but some things just cannot be automated. For example, I’d like my book Integration Testing from the Trenches to be referenced on its page with the right kind of information, and this I have to do by hand. I’ve searched for metadata frameworks, and it seems there are two W3C standards, RDFa and Microdata, plus Facebook’s OpenGraph. The latter seems to be quite crude, so I chose to use RDFa and Microdata. For example, for the page about my book, I use the Book schema, and for the page introducing me, I use the Person schema.

Interestingly enough, the book page is quite straightforward: either add the relevant itemscope, itemtype and typeof (or itemprop and property) attributes to an existing tag or to a new span, and you’re done. However, it gets harder on the Me page; challenges include:

  • Referencing entities in other entities. For example, how to set the author of my listed presentations. This requires usage of itemref.
  • Setting data that shouldn’t be displayed. This requires usage of the meta tag in the header.
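To make this concrete, here’s what a Book entity marked up with Microdata might look like – the attribute names come from schema.org, but the values are placeholders of my own:

```html
<!-- Illustrative Microdata markup for the Book schema; values are placeholders -->
<div itemscope itemtype="http://schema.org/Book">
  <span itemprop="name">Integration Testing from the Trenches</span>
  by
  <span itemprop="author" itemscope itemtype="http://schema.org/Person">
    <span itemprop="name">The Author</span>
  </span>
  <!-- data that shouldn't be displayed goes in a meta tag -->
  <meta itemprop="bookFormat" content="Paperback">
</div>
```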

Conclusion

In the end, I noticed the whole SEO thing is still very much in its infancy. There are a lot of contradictory norms, standards and proprietary formats around. The good thing is that I think I achieved some results:

  • Google Search results display regular posts in a more detailed way
  • Google Search results also display manually referenced pages in a nicer way. You can have a look at the pre-parsing in their tool.
  • Google+ automatically uses the correct data to preview the pages I intend to share
  • (As I have no Facebook account, I couldn’t check there)

On the other hand, I’ve checked other sites, and it seems they only do the automated kind of SEO, not the manual one. I can only infer why: it costs a lot of time. All that I did is probably not necessary; I could have used the Yoast plugin to only override the title, the description and the image. But as always, it made me learn stuff.

Categories: Technical

Developing around PlantUML

February 1st, 2015

In Integration Testing from the Trenches, I made heavy use of UML diagrams. So far, I’ve tried and used many UML editors:

  • The one that came with NetBeans – past tense, they removed it
  • Microsoft Visio
  • Modelio – this might be one of the best
  • others that I don’t remember

I came to the following conclusion: most of the time (read: all the time), I just need to document an API or a design I want, and there’s no need to be compliant with the latest UML specs. Loose is good enough.

True UML editors don’t accept loose – it’s UML after all. UML drawing tools are just that: drawing tools, with a little UML layer on top. Both kinds come with these problems:

  • They consume quite a lot of resources
  • They have their own dedicated file format from which you can generate the image

I’m interested in the image, but want to keep the source file around just in case I need an update.

PlantUML

Then I came upon PlantUML and I rejoiced. PlantUML defines a simple text syntax from which it generates UML diagram images. So far, I know of 3 ways to generate PlantUML diagrams:

  • The online form
  • The Confluence plugin
  • Installing the stuff on one’s machine

I’ve used the first two, with a tendency toward the former, as Confluence requires you to save the page to generate the image. This is good for the initial definition. However, it becomes more than a little boring when one has to go through all ~60 diagrams to update them to a higher resolution.
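For reference, a PlantUML description file is plain text; a minimal class diagram might look like this (the content is illustrative, and the skinparam lines mirror the global parameters used later):

```
@startuml
skinparam dpi 150
hide empty members

class Presenter {
  +getView()
}
Presenter --> View
@enduml
```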

Thus, I decided to develop a batch around PlantUML that will perform the following steps:

  • Read PlantUML description files from a folder
  • Generate images from them through the online form
  • Save the generated images in a folder

The batch run can be configured to apply settings common to all description files. This batch should not only be useful, it should also teach me stuff and be fun to code (note that sometimes, those goals are at odds).

Overall architecture

I’m a Spring fanboy, but I wanted the foundations to be completely free of any dependencies.


  1. jplantuml-api: defines the very basic API. It contains only an Exception and the root interface, which extends Java 8’s Function. This is not just hype: it lets user classes compose functions together, which is the highest composition potential I’ve seen so far in Java.
  2. jplantuml-online: uses the JSoup library to connect to the online server. Also implements reading from a text file and writing the result to an image file.
  3. jplantuml-batch: adapts everything into the Spring Batch model.

The power of Function

In Java 8, a Function is an interface that transforms an input of type I into an output of type O via its apply() method. Moreover, with the power of Java 8’s default methods, it also provides the compose() and andThen() method implementations.

This enables chaining Function calls in a processing pipeline. One can define a base JPlantUml implementation that transforms a String into a byte array for a simple call, and a more complex one that does the same for a File (the description) and… a File (the resulting image). The latter just composes 3 functions:

  1. A file reader to read the description file content, from File to String
  2. The PlantUML call processor from String to byte[]
  3. A file writer to write the image file, from byte[] to File

And presto, we’ve got our function composition. This fine-grained nature lets us unit-test the reader and writer functions without requiring any complex mocking.
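A minimal sketch of that composition, with the file I/O replaced by in-memory stand-ins so the chaining itself stays visible (names and content are illustrative, not the project’s actual classes):

```java
import java.util.function.Function;

// Sketch of the 3-step pipeline composed with java.util.function.Function;
// the real implementation reads/writes Files, stubbed here with stand-ins.
public class Pipeline {

    static Function<String, Integer> pipeline() {
        // stands in for reading the description File into a String
        Function<String, String> readDescription = path -> "@startuml\nBob -> Alice : hello\n@enduml";
        // stands in for the online PlantUML call, String to byte[]
        Function<String, byte[]> callPlantUml = String::getBytes;
        // stands in for writing the image File, byte[] to a result
        Function<byte[], Integer> writeImage = bytes -> bytes.length;
        return readDescription.andThen(callPlantUml).andThen(writeImage);
    }

    public static void main(String[] args) {
        System.out.println(pipeline().apply("/tmp/in/diagram.puml"));
    }
}
```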

Outcome

The result of this code is this Github project. It lets you use the following command-line:

java -jar ch.frankel.jplantuml.batch.Application globalParams="skinparam dpi 150; hide empty members"

This will read all PlantUML files from /tmp/in, apply the above parameters and generate the resulting image files in /tmp/out. The in/out folders can be set by overriding standard Spring Batch properties.

Categories: Development

Entropy of code

January 25th, 2015

So you were tasked to prepare your next software project. The overall architecture is already designed, and you’ve already put some code-related parts in place, such as the build file, the packages, perhaps a sample use-case. Then, after a few months, the project is finished, and the final result has largely diverged from what you imagined: inconsistent package naming and organization, completely different low-level implementations (e.g. XML dependency injection, auto-wiring and Java configuration), etc.

This is not only unsatisfying from a personal point-of-view, it also has a very negative impact on the maintenance of the application. How then could that happen? Here are some reasons I’ve witnessed first-hand.

New requirements

Your upfront architecture was designed with a set of requirements in mind. Then, during the course of the project, new requirements creep up. Reasons for this could be bad business analysis, forgotten requirements or plain brand-new requirements from the business. Those new requirements turn your architecture upside-down, and of course there’s no time for a global refactoring.

Unfinished refactoring

Then comes a good idea: a refactoring that makes sense, because it would improve the design, allow removing code, whatever… Everything is nice and good until the planning changes: something takes priority and the refactoring has to be stopped. Now, some parts of the application are refactored and others still use the old design.

New team member(s)

Everything is going fine, but development is lagging a little behind, so the higher-ups decide to bring in one or more team members (based on the widespread but wrong assumption that if a woman gives birth to a child in 9 months, 9 women will take only a month to do the same). Anyhow, there is no time for the current team to brief the newcomer on design details and existing practices, and he has to develop a user story right now. So he does it the way he knows how, since he has no time to learn how things are done on the project.

Cowboy mentality

Whatever the team, chances are there’s always a team member who’s not happy with the current design. Possible reasons include:

  • the current design being really less than optimal
  • the cowboy wants to prove himself to the others
  • he just wants to keep control on everything

In all cases, parts of the application will definitely bear his mark. In the best of situations, it can lead to a global refactoring… which has very high chances of never being finished – as described above.

Mismatching technology

Let’s face it, there are plenty of technologies, languages and frameworks out there to help us create applications. When the right combination is made, they can definitely decrease development and maintenance costs; when not… This point especially highlights the 80/20 rule, when the technology makes the development of 80% of the application a breeze while requiring most of the effort for the remaining 20%.

Changing situations, such as New requirements above, are particularly bad, since a technology well adapted to a specific set of requirements may become a poor fit when new ones creep in.

Also, combined with the Cowboy mentality above, you can end up with the said cowboy’s Golden Hammer in your application, despite it being completely unsuited to the requirements. Another twist is for the cowboy to try the latest hype framework he never could before, e.g. forcing Node.js on your Java development team.

The myth of self-emerging design

I guess the most important reason for incoherent design in applications is also the most pervasive one. Some time ago, a dedicated architect (or a team of them, for big applications) designed the application upfront. He probably was senior and as such didn’t do any coding anymore, so he was very far from real life, and the development team a) complained about the design and b) did as they wanted anyway. This is the well-known Ivory Tower architect effect. That was the previous situation at one end of the spectrum; the current situation, at the opposite end, is no better.

With books like The Cathedral and the Bazaar, people think upfront design is wrong and that the right design will somehow reveal itself magically. My previous experiences taught me it unfortunately doesn’t work like that: teams just don’t naturally come up with one design. Such teams probably exist, but it takes a rare combination of technological skills, soft skills – among them communication – and chance to assemble them. Most real-life projects will end up like Frankenstein’s monster without some kind of central direction.

Conclusion

In order for maintenance costs to be as close as possible to linear, design must be consistent throughout the application. There are many reasons for an application to be heterogeneous: technological, organizational, human, you name it.

Among them, assuming that design is self-emerging is a sure way to end up with ugly hybrid beasts. To cope with that, balancing the scales back toward the center – by making one (or more) development team member responsible for the design and empowering him to enforce decisions – is a way to improve consistency. Worst-case scenario, the design won’t be optimal, but at least it will be consistent.

Save yourself sweat, toil and tears: the latest meme is not always good for you, your projects or your organization. In this case, having some sort of hierarchy in development teams is not a bad thing per se.

Categories: Development

Improving the Vaadin 4 Spring project with a simpler MVP

January 18th, 2015

I’ve been using the Vaadin 4 Spring library on my current project, and this has been a very pleasant experience. However, in the middle of the project, a colleague of mine decided to “improve the testability”. The intention was laudable, though the project already tried to implement the MVP pattern (please check this article for more detailed information). Instead of correcting the mistakes here and there, he refactored the whole codebase using the provided MVP module… IMHO, this was a huge mistake. In this article, I’ll try to highlight the things that bug me in the existing implementation, and propose an alternative solution.

The existing MVP implementation consists of a single class. Here it is, abridged for readability:

public abstract class Presenter<V extends View> {

    @Autowired
    private SpringViewProvider viewProvider;

    @Autowired
    private EventBus eventBus;

    @PostConstruct
    protected void init() {
        eventBus.subscribe(this);
    }

    public V getView() {
        V result = null;
        Class<?> clazz = getClass();
        if (clazz.isAnnotationPresent(VaadinPresenter.class)) {
            VaadinPresenter vp = clazz.getAnnotation(VaadinPresenter.class);
            result = (V) viewProvider.getView(vp.viewName());
        }
        return result;
    }
    // Other plumbing code
}

This class is quite opinionated and suffers from the following drawbacks:

  1. It relies on field auto-wiring, which makes it extremely hard to unit test Presenter classes. As proof, the provided test class is not a unit test but an integration test.
  2. It relies solely on component scanning, which prevents explicit dependency injection.
  3. It enforces the implementation of the View interface, whether required or not. When not using the Navigator, it makes implementing an empty enter() method mandatory.
  4. It takes responsibility for creating the View from the view provider.
  5. It couples the Presenter and the View through its @VaadinPresenter annotation, preventing a single Presenter from handling different View implementations.
  6. It requires explicitly calling the init() method of the Presenter, as a @PostConstruct-annotated method on a super class is not invoked when the subclass declares one of its own.

I’ve developed an alternative class that tries to address the previous points – and is also simpler:

public abstract class Presenter<T> {

    private final T view;
    private final EventBus eventBus;

    public Presenter(T view, EventBus eventBus) {
        Assert.notNull(view, "view cannot be null");
        Assert.notNull(eventBus, "eventBus cannot be null");
        this.view = view;
        this.eventBus = eventBus;
        eventBus.subscribe(this);
    }
    // Other plumbing code
}
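
The elided “plumbing code” mostly boils down to two accessors. Here’s a self-contained sketch of what the full base class could look like – note that the EventBus interface below is a minimal stand-in for the vaadin4spring one, and Spring’s Assert calls are replaced by plain null checks, only to keep the snippet compilable without any dependency:

```java
// Minimal stand-in for the vaadin4spring EventBus (assumption, for illustration only)
interface EventBus {
    void subscribe(Object listener);
}

abstract class Presenter<T> {

    private final T view;
    private final EventBus eventBus;

    protected Presenter(T view, EventBus eventBus) {
        // Stand-in for Spring's Assert.notNull() calls
        if (view == null || eventBus == null) {
            throw new IllegalArgumentException("view and eventBus are required");
        }
        this.view = view;
        this.eventBus = eventBus;
        // Self-registration on the event bus, as in the original class
        eventBus.subscribe(this);
    }

    // The "plumbing": simple accessors for subclasses
    public T getView() {
        return view;
    }

    protected EventBus getEventBus() {
        return eventBus;
    }
}

public class PresenterSketch {
    public static void main(String[] args) {
        EventBus bus = listener -> System.out.println("subscribed");
        Presenter<String> presenter = new Presenter<String>("a view", bus) {};
        System.out.println(presenter.getView());
    }
}
```

With getView() in place, subclasses get a strongly-typed handle on their view, which is what the snippets below rely on.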

This class makes every subclass easily unit-testable, as the following snippet proves:

public class FooView extends Label {}

public class FooPresenter extends Presenter<FooView> {

    public FooPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

public class PresenterTest {

    private FooPresenter presenter;
    private FooView fooView;
    private EventBus eventBus;

    @Before
    public void setUp() {
        fooView = new FooView();
        eventBus = mock(EventBus.class);
        presenter = new FooPresenter(fooView, eventBus);
    }

    @Test
    public void should_manage_underlying_view() {
        String message = "anymessagecangohere";
        presenter.onNewCaption(message);
        assertEquals(message, fooView.getCaption());
    }
}

The same integration test as for the initial class can also be written, using explicit dependency injection:

public class ExplicitPresenter extends Presenter<FooView> {

    public ExplicitPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@Configuration
@EnableVaadin
public class ExplicitConfig {

    @Autowired
    private EventBus eventBus;

    @Bean
    @UIScope
    public FooView fooView() {
        return new FooView();
    }

    @Bean
    @UIScope
    public ExplicitPresenter fooPresenter() {
        return new ExplicitPresenter(fooView(), eventBus);
    }
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ExplicitConfig.class)
@VaadinAppConfiguration
public class ExplicitPresenterIT {

    @Autowired
    private ExplicitPresenter explicitPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_explicit";
        eventBus.publish(this, message);
        assertEquals(message, explicitPresenter.getView().getCaption());
    }
}

Last but not least, this alternative also lets you use auto-wiring and component scanning if you feel like it! The only difference is that it enforces constructor auto-wiring instead of field auto-wiring (in my eyes, this counts as a plus, albeit a slightly more verbose one):

@UIScope
@VaadinComponent
public class FooView extends Label {}

@UIScope
@VaadinComponent
public class AutowiredPresenter extends Presenter<FooView> {

    @Autowired
    public AutowiredPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@ComponentScan
@EnableVaadin
public class ScanConfig {}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ScanConfig.class)
@VaadinAppConfiguration
public class AutowiredPresenterIT {

    @Autowired
    private AutowiredPresenter autowiredPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_autowired";
        eventBus.publish(this, message);
        assertEquals(message, autowiredPresenter.getView().getCaption());
    }
}

The good news is that this module is now part of the vaadin4spring project on Github. If you need MVP for your Vaadin Spring application, you’re just a click away!


Spring profiles or Maven profiles?

January 4th, 2015 2 comments

Deploying on different environments requires configuration, e.g. database URL(s) must be set for each dedicated environment. In most – if not all – Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there’s no reason not to use the same configuration system, e.g. to use an embedded h2 database instead of the production one.
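
As a refresher, here’s that classic mechanism in action – the property names and values are made up for the example, and a real application would load from a classpath resource instead of a string:

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // In a real application:
        // props.load(ConfigDemo.class.getResourceAsStream("/application.properties"));
        props.load(new StringReader("jdbc.url=jdbc:h2:mem:dev\njdbc.username=sa"));
        System.out.println(props.getProperty("jdbc.url"));
        // A default value kicks in when the key is missing
        System.out.println(props.getProperty("jdbc.password", "<none>"));
    }
}
```

Swapping environments then only requires shipping a different .properties file – which is exactly what breaks down in the JNDI case described next.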

Unfortunately, Java EE applications generally fall outside this usage, as the good practice on deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty – which implement only a fraction of the Java EE Web Profile – provide this nifty and useful feature.

As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

  • For deployed environment, one that specifies the JNDI location lookup
  • For local development (and test), one that configures a connection pool around a direct database connection

A simple properties file cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration-testing purposes can be found in my book, Integration Testing from the Trenches.

With the Maven build system, switching between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven’s standard resource directory with a dedicated one.


    
<profiles>
  <profile>
    <id>dev</id>
    <build>
      <resources>
        <resource>
          <directory>profile/dev</directory>
          <includes>
            <include>**/*</include>
          </includes>
        </resource>
      </resources>
    </build>
  </profile>
</profiles>

Activating one or several profiles is as easy as using the -P switch with their ids on the command line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

mvn package -Pdev

Now, let’s add a simple requirement: as I’m quite lazy, I want to exert the minimum effort possible to package the application along with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: create a production profile and configure it to be the default.


<profile>
  <id>dev</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  ...
</profile>

Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is de-activated. Previous experiences have taught me that because profiles are so easy to implement, they get used (and overused), so that the default one easily gets lost in the process. For example, in one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one has to a) know about the sneaky profile, b) know it will break the default profile, and c) explicitly set the not-default-anymore profile.

Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

Another drawback of this global approach is the tendency towards over-fragmentation of the configuration files. I prefer coarse-grained configuration files, each dedicated to a layer or a use-case. For example, I’d like to declare at least the datasource, the transaction manager and the entity manager in the same file, possibly along with the different repositories.

Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I’m not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated, and the JNDI lookup bean when it is not. This kind of configuration is implemented in the following snippet:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/conditional");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

In this context, profiles are just a way to activate specific beans, the real magic is achieved through the different @Conditional annotations.

Note: it is advised to create a dedicated annotation to avoid String typos, to be more refactoring-friendly and to improve search capabilities on the code.

@Retention(RUNTIME)
@Target({TYPE, METHOD})
@Profile("dev")
public @interface Development {}

Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries – those that are used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 Mb jar file. There are two main counterarguments to this:

  • First, if you’re concerned about a couple of additional Mb, then your main issue is probably not on the software side, but on the disk-management side. Perhaps a virtualization layer such as VMware or Xen could help?
  • Then, if need be, you can still configure the build system to streamline the produced artifact.

The second drawback of Spring profiles is that, along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, this was a no-go. Then, as usual, I thought more and more about it, and came to the following conclusion: there’s nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about it: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. It is considered very bad practice to do anything to the exploded archive in any case. So what would be the reason not to package the development configuration along? The only reason I can think of is: to be clean, from a theoretical point of view. Me being a pragmatist, I think the advantages of using Spring profiles are far greater than this drawback.

In my current project, I created a single configuration fragment with all beans that depend on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

So on one hand, we’ve got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we’ve got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I’d suggest giving them a try: I did, and am so far happy with them.


Optional dependencies in Spring

December 21st, 2014 6 comments

I’m a regular Spring framework user and I think I know the framework pretty well, but it seems I’m always stumbling upon something useful I didn’t know about. At Devoxx, I learned that you could express optional dependencies using Java 8’s new Optional type. Note that before Java 8, optional dependencies could be auto-wired with @Autowired(required = false), but then you had to check for null.

How good is that? Well, I can think of a million use-cases, but here are some off the top of my head:

  • Prevent usage of infrastructure dependencies, depending on the context. For example, in a development environment, one wouldn’t need to send metrics to a MetricRegistry
  • Provide defaults when required infrastructure dependencies are not provided, e.g. an h2 datasource
  • The same could be done in a testing environment.
  • etc.

The implementation is very straightforward:

@ContextConfiguration(classes = OptionalConfiguration.class)
public class DependencyPresentTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private Optional<HelloService> myServiceOptional;

    @Test
    public void should_return_hello() {
        String sayHello = null;
        if (myServiceOptional.isPresent()) {
            sayHello = myServiceOptional.get().sayHello();
        }
        assertNotNull(sayHello);
        assertEquals(sayHello, "Hello!");
    }
}

At this point, not only does the code compile fine, but the dependency is resolved when the Spring context is created. Either the OptionalConfiguration contains the HelloService bean – and the above test succeeds – or it doesn’t, and the assertion fails.
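
As a side note, the isPresent()/get() dance in the test can be replaced by Optional’s functional style. Here’s a pure-JDK illustration of both injection outcomes – HelloService is redefined locally so the snippet runs without Spring:

```java
import java.util.Optional;

public class OptionalDemo {

    // Local stand-in for the injected service
    interface HelloService {
        String sayHello();
    }

    public static void main(String[] args) {
        // What Spring would inject when the bean exists...
        Optional<HelloService> present = Optional.of(() -> "Hello!");
        // ...and when it doesn't
        Optional<HelloService> absent = Optional.empty();

        // map/orElse replaces the explicit isPresent()/get() checks
        System.out.println(present.map(HelloService::sayHello).orElse("no service"));
        System.out.println(absent.map(HelloService::sayHello).orElse("no service"));
    }
}
```

The orElse branch is where a default implementation could be plugged in, covering the “provide defaults” use-case from the list above.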

This pattern is very elegant, and I suggest you add it to your bag of available tools.
