Final release of Integration Testing from the Trenches

March 1st, 2015 No comments
Writing a book is a journey. At the beginning of the journey, you mostly know where you want to go, but have only a vague notion of the way to get there and the time it will take. I’ve finally released the paperback version of the book on Amazon, which means this specific journey is at its end.

The book starts with a very generic discussion about testing and goes on to define Integration Testing in contrast to Unit Testing. The next chapter compares the respective merits of JUnit and TestNG. It is followed by a complete description of how to make a design testable: what works for Unit Testing also works for Integration Testing. Testing in software relies on automation, so the specific usage of the Maven build tool – as well as Gradle – with regard to Integration Testing is described. Dependencies on external resources make integration tests more fragile, so faking those resources makes the tests more robust. Those resources include databases, the file system, SOAP and REST web services, etc. The most important dependency in any application is the container. The last chapters are dedicated to the Spring framework, including Spring MVC, and to Java EE.

In this journey, I also dared ask Josh Long of Spring fame and Aslak Knutsen, team lead of the Arquillian project, to write a foreword to the book – and I’ve been delighted to have them both answer positively. Thank you guys!

I’ve also talked on the subject at a few JUGs and European conferences: JavaDay Kiev, Joker, Agile Tour London and JUG Lyon, and will do so again at JavaLand, DevIt, TopConf Romania and GeeCon. I hope that by doing so, Integration Testing will be used more effectively on projects and with a bigger ROI.

Should you want to go further, the book is available in multiple formats:

  1. A paperback version on Amazon for $49.99
  2. Electronic versions for Mac, Kindle and plain old PDF. The pricing here is more open, starting from $21.10 with a suggested price of $31.65. Note that you can get it in all formats to read on all your devices.

If you’re already a reader and you like it, please feel free to recommend it. If you don’t, I welcome your feedback in the comments section. Of course, if you’re neither – I encourage you to get the book and see for yourself!

Work for a company not led by finance

February 22nd, 2015 2 comments

Disclaimer: this post touches on bits and pieces of finance, management and sociology, for which I’m far from qualified. However, I have plenty of experience working in companies where they had big effects, and I couldn’t resist drawing my own conclusions. I’ll happily listen to realistic solutions.

I’ve been working for more than a decade in the software industry, always as a consultant. Most of my employers were pure consulting companies, with only my latest employer being a software provider. During all these years, my customers were traditional companies in various sectors: banking, insurance, telco, public administration, retail, …

Though my work always revolved around software, in various positions, I couldn’t help noticing that it could have been much easier if the context had been different. I’m not talking about personal disputes between two people, or other such trivial things, but about pervasive ideas that originate from high above – most of the time outside the company – and permeate all hierarchy layers in an organised fashion: the closer to upper management, the stronger their presence.

On short-term objectives

Like it or not, most companies are listed on the stock exchange. Such companies are legally required to publish their results on a quarterly basis. To simplify, the better the results, the higher the share value. This tends to promote quarterly objectives aimed at increasing the latter, which leads to short-term decisions to increase share value that have very definite consequences on the long term.

Interestingly enough, the GAFA seem to have a whole other strategy, as they aim for the long term. Google invests massively in biotechnology and self-driving cars, Amazon has made no profit since its inception, and Apple has a tradition of not being really interested in giving its money to shareholders. I guess that despite those respective strategies, none of them could be viewed as unsuccessful.

On productivity

Part of our core values is about always doing “better”. This is of course felt in the private sector, but it is also ingrained in us as individuals. I ran the half-marathon twice, and the second time, I just had to run it faster than the first time. I didn’t think about it, it was just… expected. For vacations too, most of us want to spend better vacations than the year before, even though those might have been the best so far.

If it is so in the personal sphere, what is it like in the professional one? Well, I guess anyone who has ever worked knows about it. It’s all about defining metrics, measuring them, then finding ways to improve them in order to improve productivity. This is not bad per se, but there are definite issues. IMHO, the greatest is the term “productivity” itself. I can understand it when applied to simple, basic tasks, but I’ve never heard about the productivity of a judge or a policeman. People might comment that the productivity of public service cannot be easily evaluated. Perhaps, but what about the productivity of private jobs such as lawyers? Or engineers?

Trying to measure the productivity of developers is based on the thinking that they have more in common with manual labourers than with engineers.

On the nature of development

Though impossible to measure, different developers definitely have different productivity. For example, when one developer commits a compile error, it impacts the work of all the other developers working on the same project. Thus, one developer can have a negative productivity, as is the case with engineers and lawyers.

In order to understand that, one must probably have worked as an engineer or with engineers. In the best case, one then understands the nature of development work; in the worst, one at least realises the difference in productivity between different people.

For people who never had a chance to get familiar with engineering work, all engineers are the same. The only difference then is the cost of each one.

On quality

Continuous improvement goes through measurement. Measuring some metrics and not others has a definite effect on the behaviour of professionals, as qualities that are not measured appear neither in retrospectives nor in annual reviews. Engineers pride themselves on delivering quality software. But what is quality? Regarding software, there are two facets of quality:

  • On one hand, external quality can be defined as the gap between implementation and specifications. The highest quality possible is when they are both fully aligned, i.e. the software just reflects the specifications.
  • On the other hand, internal quality is defined by the relative cost of adding new features. With the highest internal quality, the cost of adding new features is linear. With the lowest internal quality, the cost of adding new features is so high that it’s less expensive to rewrite the software from scratch.

Internal quality is pretty hard to measure (read: I don’t know of any way to do it) since it’s defined as a gap – between the current cost and the “best” cost. To compute such quality, one would have to build the same software twice… and that’s not something that is likely to happen. The logical conclusion to this inability is that internal quality is at best vaguely talked about – and at worst completely ignored – while measurable metrics such as costs and planning are focused upon. When any of those start to get threatened, quality becomes the fifth wheel.

On individual objectives

The agreed-upon way to improve – in general, and profits specifically – is by defining objectives. Those objectives are agreed by C-levels, who in turn dispatch more specific objectives to their respective departments, and this goes down to the lowest levels of the organisational hierarchy. Management likes that because it rests on an assumption: that optimising every single place in the organisation will improve the organisation as a whole. Interesting, but completely wrong, as only the simplest of systems work like this. Systems like human organisations are much more complex, so they have plenty of local optima: optimising one area probably has a negative effect on another.

In one of my previous jobs, managers and sales persons had no individual incentives. Their bonuses were solely based on the performance of the company as a whole. Some of you might wonder how that could be possible. They might even be suspicious that those people didn’t go the extra mile to get more money flowing in, given there was no incentive. I cannot be the judge of that, but what I know is that collaboration was very high and they achieved much in that regard. Besides, there was a very good work atmosphere and much less of the political games seen in traditionally-managed companies.

On Return On Investment

As an engineer, I always expected a company and all its employees to try to maximize the benefits of their investments. My experience has been completely different.

A colleague told me the following story: imagine a new bank coming onto the market. For a limited time, it proposes to exchange every $20 bill you bring for a $50 bill. As a company, what budget should you allocate to such a project? $100? $1,000? $100k? The answer is quite easy: as much as you can afford, as the return on investment is very high, and probably higher than that of any other project running in the company.

Most companies’ official policy is to at least describe the expected benefits of a planned project, and even better, to approximate a ROI. Most of the time, the business wants to push a project based on more or less imaginary assumptions, and the benefits/ROI paragraph is just a nice wrapping for those.

Conclusion

In Software Development, many seemingly strange (i.e. stupid) decisions are the direct result of the conjunction of two or more of the previous points.

  • Outsourcing: those who take the decision to outsource have no clue about development; they think all developers have the same productivity, so why hire a local developer when you can have 3 remote ones for the same price? And of course, development is such a simple process that direct communication has no place in it.
  • No attention to quality: as with productivity, when a property is hard to measure, it’s better forgotten. If you add to this the fact that project managers have individual objectives on budget and planning, it’s no mystery that only engineers who pride themselves on their craft care about quality. Besides, lack of quality only shows in the long term…
  • Death-march projects: no understanding of software development plus individual objectives

And this might go on ad nauseam.

What I noticed, however, is that successful recent technology companies have completely different strategies. For sure, they fully understand software development, but it’s not only that. They definitely have long-term plans – proofs of that include Amazon still making no profit and Google investing in biotech. I’ve never heard of any of them trying to measure productivity, but surely they know about differences between developers. They also care about quality… a lot, as they realize software is an asset whose maintenance costs are directly related to quality. Finally, they are not ruled by finance.

I don’t think it’s all about company politics as in the introductory cartoons, as I see every day people who are genuinely interested in teamwork. However, I believe management focusing on short-term financial KPIs is a dead end, and companies blindly following those old ways are doomed to fail. If you happen to find different ones, I suggest you stick to them.

Avoid sequences of if…else statements

February 15th, 2015 8 comments

Adding a feature to legacy code while trying to improve it can be quite challenging, but also quite straightforward. Nothing angers me more (ok, I might be exaggerating a little) than stumbling upon such a pattern:

public Foo getFoo(Bar bar) {
    if (bar instanceof BarA) {
        return new FooA();
    } else if (bar instanceof BarB) {
        return new FooB();
    } else if (bar instanceof BarC) {
        return new FooC();
    } else if (bar instanceof BarD) {
        return new FooD();
    }
    throw new BarNotFoundException();
}

Apply Object-Oriented Programming

The first reflex when writing such a thing – yes, please don’t wait for the poor guy coming after you to clean up your mess – should be to ask yourself whether applying basic Object-Oriented Programming couldn’t help you. In this case, you would have multiple child classes of:

public interface FooBarFunction<T extends Bar, R extends Foo> extends Function<T, R>

For example:

public class FooBarAFunction implements FooBarFunction<BarA, FooA> {
    public FooA apply(BarA bar) {
        return new FooA();
    }
}

Note: not enjoying the benefits of Java 8 is no reason not to use this approach: just create your own Function interface or use Guava’s.
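For reference, a minimal sketch of what such a home-made interface could look like (Guava’s Function would work just as well):

// A home-made functional interface for pre-Java 8 codebases,
// mirroring what java.util.function.Function provides out of the box
public interface Function<T, R> {
    R apply(T input);
}

// The FooBar-specific specialisation then stays identical to the example above
public interface FooBarFunction<T extends Bar, R extends Foo> extends Function<T, R> {}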

Use a Map

I must admit that not only does this approach scatter tightly related code across multiple files (this is Java…), it’s also unfortunately not always possible to apply OOP easily. In that case, it’s quite easy to initialise a map that returns the correct type.

public class FooBarFunction {
    private static final Map<Class<? extends Bar>, Foo> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put(BarA.class, new FooA());
        MAPPINGS.put(BarB.class, new FooB());
        MAPPINGS.put(BarC.class, new FooC());
        MAPPINGS.put(BarD.class, new FooD());
    }
    public Foo getFoo(Bar bar) {
        Foo foo = MAPPINGS.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}

Note this is only a basic example; users of Dependency Injection can easily pass the map in the object constructor instead, as sketched below.
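A minimal sketch of that variant of the class above, assuming the mappings are assembled and injected by your DI container of choice:

public class FooBarFunction {

    private final Map<Class<? extends Bar>, Foo> mappings;

    // The map is built and provided by the DI container (Spring, Guice, etc.)
    public FooBarFunction(Map<Class<? extends Bar>, Foo> mappings) {
        this.mappings = mappings;
    }

    public Foo getFoo(Bar bar) {
        Foo foo = mappings.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}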

More than a return

The previous Map trick works quite well with return statements but not with code snippets. In this case, you need the map to return an enum and associate it with a switch-case.

public class FooBarFunction {

    private enum BarEnum {
        A, B, C, D
    }

    private static final Map<Class<? extends Bar>, BarEnum> MAPPINGS = new HashMap<>();

    static {
        MAPPINGS.put(BarA.class, BarEnum.A);
        MAPPINGS.put(BarB.class, BarEnum.B);
        MAPPINGS.put(BarC.class, BarEnum.C);
        MAPPINGS.put(BarD.class, BarEnum.D);
    }

    public void doSomething(Bar bar) {
        BarEnum barEnum = MAPPINGS.get(bar.getClass());
        // A switch on null would throw a NullPointerException, hence the explicit check
        if (barEnum == null) {
            throw new BarNotFoundException();
        }
        switch (barEnum) {
            case A:
                // Do something
                break;
            case B:
                // Do something
                break;
            case C:
                // Do something
                break;
            case D:
                // Do something
                break;
        }
    }
}

Note that not only do I believe this code is more readable, it also performs better: the switch is evaluated once, as opposed to each if…else condition being evaluated in sequence until one matches.

Note that you’re not expected to put the code directly in the case statements, but to delegate to dedicated method calls, as sketched below.
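For illustration, a minimal sketch of that delegation as it would appear inside the class above (the process* method names are purely illustrative):

    public void doSomething(Bar bar) {
        BarEnum barEnum = MAPPINGS.get(bar.getClass());
        if (barEnum == null) {
            throw new BarNotFoundException();
        }
        switch (barEnum) {
            // Each case only dispatches to a dedicated private method
            case A: processA(bar); break;
            case B: processB(bar); break;
            case C: processC(bar); break;
            case D: processD(bar); break;
        }
    }

    private void processA(Bar bar) { /* formerly inlined in case A */ }
    private void processB(Bar bar) { /* formerly inlined in case B */ }
    // processC() and processD() follow the same pattern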

Conclusion

I hope that at this point, you’re convinced there are other ways than sequences of if…else. The rest is in your hands.

Categories: Java Tags:

Show your blog some love

February 8th, 2015 No comments

Blogging is very interesting, but just as with cars, it’s (unfortunately) not only about driving, it’s also about maintenance. As I believe there’s some worthy content inside, and I’ve reached 20k+ visits per month, I thought it was time to add or complete some features.

Cookie authorization

Though “normal” blogging sites such as mine are probably below the radar of government agencies, European laws mandate that sites ask for users’ consent before storing cookies – even though no modern site can do without them. As I use Google Analytics to keep track of visits, I prefer to try to be compliant… just in case. Google provides an out-of-the-box script for webmasters, with the relevant documentation. Two flavors are available, a pop-up and a bar; you choose one by calling the relevant method.

Automated SEO

I thought I had achieved some degree of SEO via the SEO Smart Links WordPress plugin ages ago. In essence, I just installed the plugin, the configuration itself being very crude: you just had to edit the different files.

I recently stumbled upon another plugin that is much easier to configure: WordPress SEO by Yoast. It provides a friendly user interface with which you can design the pattern of the SEO metadata (e.g. blog name plus post title) and the type of metadata – Facebook OpenGraph, Twitter Card and Google+. This can of course be overridden on a case-by-case basis. Moreover, it offers a preview of the snippet as it would be displayed on the Google search page. There are also plenty of configuration settings I’ve yet to use (and understand).

Google Webmasters Tools

The preview provided by Yoast’s SEO is nice but doesn’t replace the real thing. Fortunately, Google offers a neat near-real preview page to test the final result of your changes in real time (no caching involved). The greatest advantage of this preview feature is that you may not only set a URL but also paste raw HTML code instead. This lets you check how Google parses the metadata of your unpublished pages.

For the sake of completeness, there’s a similar tool provided by Yandex (the Google from Russia) that gives slightly different results. From my microscopic experience, Yandex is slower, but seems to be less lenient and, more importantly, provides solutions. For example, it told me that the date value was not in the correct format (ISO 8601) and that I had forgotten the itemscope attribute at some point, which, once corrected, fixed the error I had on both Google and Yandex.

Both were very effective in pointing out one problem: it seems the post’s author was linked to a URL that displayed the home page. Since it didn’t give out a 404, I had no hint about that (I fixed it with a simple 301 redirect).

Manual SEO

All this automated stuff is nice, but some things just cannot be automated. For example, I’d like my book Integration Testing from the Trenches to be referenced on its page with the right kind of information, and this I have to do by hand. I’ve searched for metadata frameworks, and it seems there are two W3C standards, RDFa and Microdata, as well as OpenGraph by Facebook. The latter seems to be quite crude, so I chose to use RDFa and Microdata. For example, for the page about my book, I use the Book schema, and for the page introducing me, the Person schema.

Interestingly enough, the book page is quite straightforward: either add the relevant itemscope, itemtype and typeof (or itemprop and property) attributes to an existing tag or to a new span, and you’re done. However, this gets harder on the Me page; challenges include:

  • Referencing entities in other entities, for example how to set the author of my listed presentations. This requires the usage of itemref.
  • Setting data that shouldn’t be displayed. This requires the usage of the meta tag in the header.

Conclusion

In the end, I noticed the whole SEO thing is still very much in its infancy. There are a lot of contradictory norms, standards and proprietary stuff around. The good thing is that I think I achieved some things:

  • Google Search results display regular posts in a more detailed way
  • Google Search results also display manually referenced pages in a nicer way. You can have a look at the pre-parsing in their tool.
  • Google+ automatically uses the correct data to preview the pages I intend to share
  • (As I have no Facebook account, I couldn’t check there)

On the other hand, I’ve checked other sites, and it seems they only do the automated kind of SEO, not the manual one. I can only infer why: it’s going to cost you a lot of time. All that I did is probably not necessary; I could have used the Yoast plugin to only override the title, the description and the image. But as always, it made me learn stuff.

Categories: Technical Tags:

Developing around PlantUML

February 1st, 2015 1 comment

In Integration Testing from the Trenches, I made heavy use of UML diagrams. So far, I’ve tried and used many UML editors, including:

  • The one that came with NetBeans – past tense, they removed it
  • Microsoft Visio
  • Modelio – this might be one of the best
  • others that I don’t remember

I came to the following conclusion: most of the time (read: all the time), I just need to document an API or what I want designed, and there’s no need to be compliant with the latest UML specs. Loose is good enough.

True UML editors don’t accept loose – it’s UML after all. UML drawing tools are just that – drawing tools, with a little layer above. All come with these problems:

  • They consume quite a lot of resources
  • They have their own dedicated file format from which you can generate the image

I’m interested in the image, but want to keep the file there just in case I need an update.

PlantUML

Then I came upon PlantUML and rejoiced. PlantUML defines a simple text syntax from which it generates UML diagram images. So far, I know of 3 ways to generate PlantUML diagrams:

  • The online form
  • The Confluence plugin
  • Installing the stuff on one’s machine

I’ve used the first two, with a tendency toward the first, as Confluence requires you to save to generate the image. This is good for the initial definition. However, it gets more than a little boring when one has to go through all ~60 diagrams to update them with a higher resolution.

Thus, I decided to develop a batch around PlantUML that will perform the following steps:

  • Read PlantUML description files from a folder
  • Generate images from them through the online form
  • Save the generated images in a folder

The batch run can be configured to apply settings common to all description files. This batch should not only be useful, it should also teach me stuff and be fun to code (note that sometimes, those goals are at odds).

Overall architecture

I’m a Spring fanboy, but I wanted the foundations to be completely free of any dependencies.

  1. jplantuml-api: defines the very basic API. It contains only an exception and the root interface, which extends Java 8’s Function. This is not just hype: it lets user classes compose functions together, which is the highest composition potential I’ve seen so far in Java.
  2. jplantuml-online: uses the JSoup library to connect to the online server. It also implements reading from a text file and writing the result to an image file.
  3. jplantuml-batch: adapts everything into the Spring Batch model.

The power of Function

In Java 8, a Function is an interface that transforms an input of type I to an output of type O via its apply() method. But, with the power of Java 8’s default methods, it also provides the compose() and andThen() method implementations.

This enables chaining Function calls in a processing pipeline. One can define a base JPlantUml implementation that maps a String to a byte array for a simple call, and a more complex one that does the same from a File (the description) to… a File (the resulting image). The latter just composes 3 functions:

  1. A file reader to read the description file content, from File to String
  2. The PlantUML call processor from String to byte[]
  3. A file writer to write the image file, from byte[] to File

And presto, we’ve got our function composition. This fine-grained nature lets us unit test the reader and writer functions without requiring any complex mocking.
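A minimal sketch of that kind of composition, using only standard Java 8 types – the class and lambdas below are illustrative, not the project’s actual API:

import java.io.File;
import java.nio.file.Files;
import java.util.function.Function;

public class FileToFileDiagram {

    public static void main(String[] args) {
        // 1. From File to String: read the description file content
        Function<File, String> reader = file -> {
            try {
                return new String(Files.readAllBytes(file.toPath()));
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };

        // 2. From String to byte[]: the PlantUML call, stubbed here
        Function<String, byte[]> plantUml = description -> new byte[0];

        // 3. From byte[] to File: write the image file
        Function<byte[], File> writer = bytes -> {
            try {
                File out = new File("/tmp/out/diagram.png");
                Files.write(out.toPath(), bytes);
                return out;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        };

        // Composition: File -> String -> byte[] -> File
        Function<File, File> pipeline = reader.andThen(plantUml).andThen(writer);
        File image = pipeline.apply(new File("/tmp/in/diagram.plantuml"));
        System.out.println("Generated " + image);
    }
}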

Outcome

The result of this code is this GitHub project. It lets you use the following command line:

java -jar ch.frankel.jplantuml.batch.Application globalParams="skinparam dpi 150; hide empty members"

This will read all PlantUML files from /tmp/in, apply the above parameters and generate the resulting image files in /tmp/out. The in/out folders can be set by overriding the standard Spring Batch properties.

Categories: Development Tags:

Entropy of code

January 25th, 2015 1 comment

So you were tasked to prepare your next software project. Its overall architecture is already designed, and you have already put some code-related parts in place, such as the build file, the packages, perhaps a sample use-case. Then, after a few months, the project is finished, and the final result has largely diverged from what you imagined: inconsistent package naming and organization, completely different low-level implementations (e.g. XML dependency injection, auto-wiring and Java configuration), etc.

This is not only unsatisfying from a personal point of view, it also has a very negative impact on the maintenance of the application. How could that happen? Here are some reasons I’ve witnessed first-hand.

New requirements

Your upfront architecture has been designed with a set of requirements in mind. Then, during the course of the project, some new requirements creep up. Reasons for this could be bad business analysis, forgotten requirements or plain brand-new requirements from the business. Those new requirements turn your architecture upside-down, and of course there’s no time for a global refactoring.

Unfinished refactoring

There comes a good idea: a refactoring that makes sense, because it would improve the design, allow removing code, whatever… Everything is nice and good until the planning changes, something takes priority and the refactoring has to be stopped. Now, some parts of the application are refactored and others still use the old design.

New team member(s)

Everything is going fine but development is lagging a little behind, so the higher-ups decide to bring in one or more team members (based on the widespread but wrong assumption that if a woman gives birth to a child in 9 months, 9 women will take only a month to do the same). Anyhow, there is no time for the current team to brief the newcomer on design details and existing practices, and he has to develop a user story right now. So he does it the way he knows, since he has no time to learn how things are done on the project.

Cowboy mentality

Whatever the team, chances are there’s always a team member who’s not happy with the current design. Possible reasons include:

  • the current design being really less than optimal
  • the cowboy wants to prove himself to the others
  • he just wants to keep control on everything

In all cases, parts of the application will definitely bear his mark. In the best of situations, it can lead to a global refactoring… which has very high chances of never being finished – as described above.

Mismatching technology

Let’s face it, there are plenty of technologies, languages and frameworks around to help us create applications. When the right combination is made, they can definitely decrease development and maintenance costs; when not… This point especially highlights the 80/20 rule, when the technology makes the development of 80% of the application a breeze while requiring all the focus for the remaining 20%.

Changing situations, such as the new requirements above, are particularly bad, since a technology well adapted to a specific set of requirements may turn out wrong when a new one creeps in.

Also, along with the cowboy mentality above, you can end up with the said cowboy’s Golden Hammer in your application, despite it being completely unadapted to the requirements. Another twist is for the cowboy to try the latest hype framework he never could before, e.g. forcing Node.js onto your Java development team.

The myth of self-emerging design

I guess the most important reason for incoherent design in applications is the most pervasive one. Some time ago, a dedicated architect (or a team of them for big applications) designed the application upfront. He probably was senior, and as such didn’t do any coding anymore, so he was very far from real life, and the development team a) bitched about the design b) did as they wanted anyway. This is the well-known Ivory Tower architect effect. That was the previous situation, at one end of the spectrum, but the current situation at the opposite end is no better.

With books like The Cathedral and the Bazaar, people think upfront design is wrong and that the right design will somehow reveal itself magically. My previous experiences taught me it unfortunately doesn’t work like that: teams just don’t naturally come up with one design. Such teams probably exist, but it takes a rare combination of technological skills, soft skills – among them communication – and chance to assemble them. Most real-life projects will end up like Frankenstein’s monster without some kind of central direction.

Conclusion

In order for maintenance costs to be as close as possible to linear, design must be consistent throughout the application. There are many reasons for an application to be heterogeneous: technological, organizational, human, you name it.

Among them, assuming that design is self-emerging is a sure way to end up with ugly hybrid beasts. To cope with that, balancing the scales back toward the center by making one (or more) development team member responsible for the design, and empowering him to enforce decisions, is a way to improve consistency. Worst-case scenario, the design won’t be optimal, but at least it will be consistent.

Save yourself sweat, toil and tears: the latest meme is not always good for you, your projects and your organization. In this case, having some sort of hierarchy in development teams is not a bad thing per se.

Categories: Development Tags:

Improving the Vaadin 4 Spring project with a simpler MVP

January 18th, 2015 No comments

I’ve been using the Vaadin 4 Spring library on my current project, and it has been a very pleasant experience. However, in the middle of the project, a colleague of mine decided to “improve the testability”. The intention was laudable, though the project already tried to implement the MVP pattern (please check this article for more detailed information). Instead of correcting the mistakes here and there, he refactored the whole codebase using the provided MVP module… IMHO, this was a huge mistake. In this article, I’ll try to highlight the things that bug me in the existing implementation, and an alternative solution.

The existing MVP implementation consists of a single class. Here it is, abridged for readability purposes:

public abstract class Presenter<V extends View> {

    @Autowired
    private SpringViewProvider viewProvider;

    @Autowired
    private EventBus eventBus;

    @PostConstruct
    protected void init() {
        eventBus.subscribe(this);
    }

    public V getView() {
        V result = null;
        Class<?> clazz = getClass();
        if (clazz.isAnnotationPresent(VaadinPresenter.class)) {
            VaadinPresenter vp = clazz.getAnnotation(VaadinPresenter.class);
            result = (V) viewProvider.getView(vp.viewName());
        }
        return result;
    }
    // Other plumbing code
}

This class is quite opinionated and suffers from the following drawbacks:

  1. It relies on field auto-wiring, which makes it extremely hard to unit test Presenter classes. As proof, the provided test class is not a unit test, but an integration test.
  2. It relies solely on component scanning, which prevents explicit dependency injection.
  3. It enforces the implementation of the View interface, whether required or not. When not using the Navigator, it makes implementing an empty enterView() method mandatory.
  4. It takes responsibility for creating the View from the view provider.
  5. It couples the Presenter and the View through its @VaadinPresenter annotation, preventing a single Presenter from handling different View implementations.
  6. It requires explicitly calling the init() method of the Presenter, as the @PostConstruct annotation on a super class is not called when the subclass has one of its own.

I’ve developed an alternative class that tries to address the previous points – and is also simpler:

public abstract class Presenter<T> {

    private final T view;
    private final EventBus eventBus;

    public Presenter(T view, EventBus eventBus) {
        Assert.notNull(view);
        Assert.notNull(eventBus);
        this.view = view;
        this.eventBus = eventBus;
        eventBus.subscribe(this);
    }
    // Other plumbing code
}

This class makes every subclass easily unit-testable, as the following snippets prove:

public class FooView extends Label {}

public class FooPresenter extends Presenter<FooView> {

    public FooPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

public class PresenterTest {

    private FooPresenter presenter;
    private FooView fooView;
    private EventBus eventBus;

    @Before
    public void setUp() {
        fooView = new FooView();
        eventBus = mock(EventBus.class);
        presenter = new FooPresenter(fooView, eventBus);
    }

    @Test
    public void should_manage_underlying_view() {
        String message = "anymessagecangohere";
        presenter.onNewCaption(message);
        assertEquals(message, fooView.getCaption());
    }
}

The same integration test as for the initial class can also be written, using explicit dependency injection:

public class ExplicitPresenter extends Presenter<FooView> {

    public ExplicitPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@Configuration
@EnableVaadin
public class ExplicitConfig {

    @Autowired
    private EventBus eventBus;

    @Bean
    @UIScope
    public FooView fooView() {
        return new FooView();
    }

    @Bean
    @UIScope
    public ExplicitPresenter fooPresenter() {
        return new ExplicitPresenter(fooView(), eventBus);
    }
}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ExplicitConfig.class)
@VaadinAppConfiguration
public class ExplicitPresenterIT {

    @Autowired
    private ExplicitPresenter explicitPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_explicit";
        eventBus.publish(this, message);
        assertEquals(message, explicitPresenter.getView().getCaption());
    }
}

Last but not least, this alternative also lets you use auto-wiring and component scanning if you feel like it! The only difference is that it enforces constructor auto-wiring instead of field auto-wiring (in my eyes, this counts as a plus, albeit a little more verbose):

@UIScope
@VaadinComponent
public class FooView extends Label {}

@UIScope
@VaadinComponent
public class AutowiredPresenter extends Presenter<FooView> {

    @Autowired
    public AutowiredPresenter(FooView view, EventBus eventBus) {
        super(view, eventBus);
    }

    @EventBusListenerMethod
    public void onNewCaption(String caption) {
        getView().setCaption(caption);
    }
}

@ComponentScan
@EnableVaadin
public class ScanConfig {}

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(classes = ScanConfig.class)
@VaadinAppConfiguration
public class AutowiredPresenterIT {

    @Autowired
    private AutowiredPresenter autowiredPresenter;

    @Autowired
    private EventBus eventBus;

    @Test
    public void should_listen_to_message() {
        String message = "message_from_autowired";
        eventBus.publish(this, message);
        assertEquals(message, autowiredPresenter.getView().getCaption());
    }
}

The good news is that this module is now part of the vaadin4spring project on GitHub. If you need MVP for your Vaadin Spring application, you’re just a click away!

Categories: JavaEE Tags:

Je suis Charlie

January 8th, 2015 No comments

This week, there won’t be any technical article. As you might have heard, cowards armed with automatic weapons brutally murdered cartoonists, journalists, policemen… All in all, 12 people died because of stupidity and obscurantism, just because their truth was contradictory to their murderers’ – or just because they happened to be on site.

Right now, I’m afraid I start to understand what American people felt after September 11th.

I’m not a cartoonist, not a soldier, not a policeman, not a writer, so this is my way of expressing my empathy with the families and those close to the victims. I also urge everyone to clearly separate the general Muslim population, who only aspire to live their lives in peace, from the fanatics. Don’t play the game of the terrorists!

Please let me finish with drawings, one for hope by Banksy, the other for fun in raw Charlie Hebdo style by @LouisonA.


Categories: Miscellaneous Tags:

Spring profiles or Maven profiles?

January 4th, 2015 2 comments

Deploying on different environments requires configuration, e.g. database URL(s) must be set for each dedicated environment. In most – if not all – Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there’s no reason not to use the same configuration system, e.g. using an embedded h2 database instead of the production one.

Unfortunately, Java EE applications generally fall outside this usage, as the good practice on deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty – which implement only a fraction of the Java EE Web Profile – provide this nifty and useful feature.

As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

  • For deployed environment, one that specifies the JNDI location lookup
  • For local development (and test), one that configures a connection pool around a direct database connection

A simple properties file cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration testing purposes can be found in my book, Integration Testing from the Trenches.

With the Maven build system, changing between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven’s standard resource directory with a dedicated one.


    
<profile>
  <id>dev</id>
  <build>
    <resources>
      <resource>
        <directory>profile/dev</directory>
        <includes>
          <include>**/*</include>
        </includes>
      </resource>
    </resources>
  </build>
</profile>

Activating one or more profiles is as easy as using the -P switch with their ids on the command line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

mvn package -Pdev

Now, let’s add a simple requirement: as I’m quite lazy, I want to exert the minimum effort possible to package the application along with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: create a production profile and configure it to be the default.


<profile>
  <id>dev</id>
  <activation>
    <activeByDefault>true</activeByDefault>
  </activation>
  ...
</profile>
Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is de-activated. Previous experiences have taught me that because profiles are so easy to implement, they are used (and overused), so that the default one easily gets lost in the process. For example, in one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one had to a) know about the sneaky profile, b) know it would break the default profile, and c) explicitly set the not-default-anymore profile.

Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

Another drawback of this global approach is the tendency toward over-fragmentation of the configuration files. I prefer to have coarse-grained configuration files, each dedicated to a layer or a use-case. For example, I’d like to declare at least the datasource, the transaction manager and the entity manager in the same file, possibly with the different repositories.

Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I’m not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated, and the JNDI lookup bean when it’s not. This kind of configuration is implemented in the following snippet:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/conditional");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

In this context, profiles are just a way to activate specific beans; the real magic is achieved through the different @Conditional annotations.
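For reference, the standard way to activate such a profile at runtime is Spring’s spring.profiles.active property, e.g. as a JVM system property (the jar name below is just a placeholder):

java -Dspring.profiles.active=dev -jar my-app.jar

The same property can also be set as a servlet context parameter in web.xml.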

Note: it is advised to create a dedicated annotation to avoid String typos, to be more refactoring-friendly and to improve search capabilities in the code.

@Retention(RUNTIME)
@Target({TYPE, METHOD})
@Profile("dev")
public @interface Development {}
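With such an annotation in place, the dev-only bean declaration above could simply be written as follows – a sketch reusing the dataSource() bean from the earlier snippet:

@Bean
@Development
public DataSource dataSource() throws Exception {
    // @Profile("dev") is picked up as a meta-annotation of @Development
    org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
    dataSource.setDriverClassName("org.h2.Driver");
    dataSource.setUrl("jdbc:h2:file:~/conditional");
    dataSource.setUsername("sa");
    return dataSource;
}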

Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries, those that are used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 Mb jar file. There are two main counterarguments to this:

  • First, if you’re concerned about a couple of additional Mb, then your main issue is probably not on the software side, but on the disk management side. Perhaps a virtualization layer such as VMware or Xen could help?
  • Then, if need be, you can still configure the build system to streamline the produced artifact.

The second drawback of Spring profiles is that, along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, this was a no-go. Then, as usual, I thought more and more about it, and came to the following conclusion: there’s nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about it: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. It is considered very bad practice to modify the exploded archive in any case. So what would be the reason not to package the development configuration along? The only reason I can think of is: to be clean, from a theoretical point of view. Me being a pragmatist, I think the advantages of using Spring profiles are far greater than this drawback.

In my current project, I created a single configuration fragment with all the beans that depend on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

So on one hand, we’ve got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we’ve got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I’d suggest giving them a try: I did, and am so far happy with them.

Categories: Java Tags:

Optional dependencies in Spring

December 21st, 2014 6 comments

I’m a regular Spring framework user and I think I know the framework pretty well, but it seems I’m always stumbling upon something useful I didn’t know about. At Devoxx, I learned that you could express optional dependencies using Java 8’s new Optional type. Note that before Java 8, optional dependencies could be auto-wired using @Autowired(required = false), but then you had to check for null.
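For comparison, a minimal sketch of that pre-Java 8 approach – MyClient is a hypothetical consumer of the same HelloService used in the test below:

public class MyClient {

    // The dependency is injected if a matching bean exists, left null otherwise
    @Autowired(required = false)
    private HelloService helloService;

    public String sayHelloIfPossible() {
        // The null check is mandatory with this approach
        return helloService != null ? helloService.sayHello() : "No one to say hello";
    }
}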

How good is that? Well, I can think of a million use-cases, but here are some that come to mind:

  • Prevent usage of infrastructure dependencies, depending on the context. For example, in a development environment, one wouldn’t need to send metrics to a MetricRegistry
  • Provide defaults when required infrastructure dependencies are not provided e.g. a h2 datasource
  • The same could be done in a testing environment.
  • etc.

The implementation is very straightforward:

@ContextConfiguration(classes = OptionalConfiguration.class)
public class DependencyPresentTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private Optional<HelloService> myServiceOptional;

    @Test
    public void should_return_hello() {
        String sayHello = null;
        if (myServiceOptional.isPresent()) {
            sayHello = myServiceOptional.get().sayHello();
        }
        assertNotNull(sayHello);
        assertEquals(sayHello, "Hello!");
    }
}
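For completeness, here’s a minimal sketch of what the HelloService and OptionalConfiguration referenced above could look like – the actual classes aren’t shown in this post, so the following is only an assumption:

public class HelloService {

    public String sayHello() {
        return "Hello!";
    }
}

@Configuration
public class OptionalConfiguration {

    // Comment out this bean declaration and the Optional above becomes empty,
    // making the test fail
    @Bean
    public HelloService helloService() {
        return new HelloService();
    }
}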

At this point, not only does the code compile fine, but the dependency is resolved when the Spring context is created: either the OptionalConfiguration contains the HelloService bean – and the above test succeeds – or it doesn’t – and the test fails.

This pattern is very elegant, and I suggest you add it to your bag of available tools.

Categories: Java Tags: