Author Archive

Avoid conditional logic in @Configuration

December 7th, 2014 2 comments

Integration Testing Spring applications calls for creating small, dedicated configuration fragments and assembling them either during a normal run of the application or during tests. Even in the latter case, different fragments can be assembled in different tests.

However, this practice doesn’t handle the use-case where I want to use the application in two different environments. As an example, I might want to use a JNDI datasource in deployed environments and a direct connection when developing on my local machine. Assembling different fragment combinations is not an option, as I want to run the application in both cases, not test it.

My only requirement is that the default should use the JNDI datasource, while activating a flag – a profile – should switch to the direct connection. The Pavlovian reflex in this case would be to add a simple condition in the @Configuration class.

@Configuration
public class MyConfiguration {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() throws Exception {
        if (env.acceptsProfiles("dev")) {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/conditional");
            dataSource.setUsername("sa");
            return dataSource;
        }
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional"); 
    }
}

Starting to use this kind of flow-control statement is the beginning of the end, as it will lead to adding more control-flow statements in the future, which will in turn lead to a tangled mess of spaghetti configuration, and ultimately to an unmaintainable application.

Spring Boot offers a nice alternative to handle this use-case with its different flavors of @ConditionalXXX annotations. Using them has the following advantages while doing the job: they are easy to use, readable and limited. While the latter point might seem to be a drawback, it’s their biggest asset IMHO (not unlike Maven plugins). Code is powerful, and with great power must come great responsibility, something that is hardly possible during the course of a project with deadlines and pressure from the higher-ups. That’s the main reason one of my colleagues advocates XML over JavaConfig: with XML, you’re sure there won’t be any abuse while the project runs its course.

But let’s put the philosophy aside and get back to @ConditionalXXX annotations. Basically, putting such an annotation on a @Bean method will invoke the method and register the bean in the factory only when a dedicated condition is met. There are many such annotations; here are some important ones:

  • Dependent on Java version, newer or older – @ConditionalOnJava
  • Dependent on a bean present in factory – @ConditionalOnBean, and its opposite, dependent on a bean name not present – @ConditionalOnMissingBean
  • Dependent on a class present on the classpath – @ConditionalOnClass, and its opposite @ConditionalOnMissingClass
  • Whether it’s a web application or not – @ConditionalOnWebApplication and @ConditionalOnNotWebApplication
  • etc.

Note that the whole list of existing conditions can be browsed in Spring Boot’s org.springframework.boot.autoconfigure.condition package.

With this information, we can migrate the above snippet to a more robust implementation:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/conditional");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource fakeDataSource() {
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

The configuration is now neatly separated into two different methods: the first method will be called only when the dev profile is active, while the second will be called only when the first one is not – hence when the dev profile is not active.

Finally, the best thing about this feature is that it is easily extensible, as it depends only on the @Conditional annotation and the Condition interface (which are part of Spring proper, not Spring Boot).
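
As an illustration, here’s a minimal sketch of a custom condition – the class name and the property it checks are made up for the example:

import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

// Hypothetical condition: matches only when the made-up property 'datasource.jndi' is set
public class OnJndiPropertyCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        // ConditionContext gives access to the Environment, the bean factory, the classloader, etc.
        return context.getEnvironment().containsProperty("datasource.jndi");
    }
}

A @Bean method annotated with @Conditional(OnJndiPropertyCondition.class) would then be registered only when that property is present.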

Here’s a simple example in Maven/IntelliJ format for you to play with. Have fun!

Categories: Java

From Vaadin to Docker, a novice’s journey

November 23rd, 2014 2 comments

I’m a huge Vaadin fan and I’ve created a Github workshop I can demo at conferences. A common issue with this kind of workshop is that attendees have to prepare their workstations in advance… and there’s always a significant part of them that comes with not everything ready. At this point, two options are available to the speaker: either wait for each of the attendees to finish the preparation – too bad for the people who took the time at home to do it, or start anyway – and lose the not-ready part.

Given the current buzz around Docker, I thought it could be a very good way to make the workshop preparation quicker – only one step, and hassle-free – no problems caused by the quirks of your operating system. The steps I ask the attendees to go through are the following:

  1. Install Git
  2. Install Java, Maven and Tomcat
  3. Clone the git repo
  4. Build the project (to prepare the Maven repository)
  5. Deploy the built webapp
  6. Start Tomcat

These should be directly automated with Docker. As I wasted much time getting this to work, here’s the tale of my journey in achieving it (be warned, it’s quite long). If you’ve got similar use-cases, I hope it will help you get things done faster.

Starting with Docker

The first step was to get to know the basics of Docker. Fortunately, I had the chance to attend a Docker workshop by David Gageot at Duchess Swiss. This covered both the Docker installation and the basics of writing a Dockerfile. I assume readers likewise have a basic understanding of Docker.

For those who don’t, browsing Docker’s official documentation is a nice idea.

Building my first Dockerfile

The Docker image can be built with the following command, run in the directory of the Dockerfile:

$ docker build -t vaadinworkshop .

The first issue one can encounter when playing with Docker for the first time is the following error message:

Get http:///var/run/docker.sock/v1.14/containers/json: dial unix /var/run/docker.sock: no such file or directory

The reason is that the required environment variables displayed in boot2docker’s information message weren’t exported. If you lost the exact values, no worry, just use boot2docker’s shellinit command:

$ boot2docker shellinit
Writing /Users/i303869/.docker/boot2docker-vm/ca.pem:
Writing /Users/i303869/.docker/boot2docker-vm/cert.pem:
Writing /Users/i303869/.docker/boot2docker-vm/key.pem:
    export DOCKER_HOST=tcp://192.168.59.103:2376
    export DOCKER_CERT_PATH=/Users/i303869/.docker/boot2docker-vm

Copy-pasting the export lines above solves the issue. These can also be set in one’s .bashrc script, as these values seldom change.

Next in line is the following error:

Get http://192.168.59.103:2376/v1.14/containers/json: malformed HTTP response "\x15\x03\x01\x00\x02\x02"

This error message hints at a mismatch between the versions of the client and the server; it seems to be caused by a bug on Mac OSX when upgrading. For a long-term solution, reinstall Docker from scratch; for a quick fix, use the --tls flag with the docker command. As it is quite cumbersome to type it every time, one can alias it:

$ alias docker="docker --tls"

My last mistake when building the image came from building the Dockerfile in a non-empty directory. Docker sends every file it finds in the directory of the Dockerfile to the Docker daemon as the build context:

$ docker --tls build -t vaadinworkshop .
Sending build context to Docker daemon Too many kB

Fix: do not try this at home, and start from a directory containing the Dockerfile only.

Starting from scratch

Dockerfiles describe images – images are built as a layered list of instructions. Docker images are designed around single inheritance: each image has exactly one parent. An image requiring no parent starts from scratch, but Docker provides 4 official base distributions: busybox, debian, ubuntu and centos (operating systems are generally a good start).

Whatever you want to achieve, it is necessary to choose the right parent. Given the requirements I set for myself (Java, Maven, Tomcat and Git), I tried to find the right starting image. Many Dockerfiles are already available online on the Docker Hub. The browsing app is quite good but, to be really honest, the search could really be improved.

My intention was to use the image that matched most of my requirements, then fill in the gaps. I could find no image providing Git, but I thought the dgageot/maven Dockerfile would be a nice starting point. The problem is that its base image is busybox, which provides no installer out-of-the-box (apt-get, yum, whatever). For this reason, David uses a lot of curl to get Java 8 and Maven in his Dockerfiles.

I foolishly thought I could use a different flavor of busybox that provides the opkg installer. After a while, I had accumulated many problems, resolving one only leading to another. In the end, I decided to use the OS I was most comfortable with and to install everything myself:

FROM ubuntu:utopic

Scripting Java installation

Installing the git, maven and tomcat8 packages is very straightforward (if you don’t forget to use the non-interactive options) with RUN and apt-get:

RUN apt-get update && \
    apt-get install -y --force-yes git maven tomcat8

Java doesn’t fall into this nice pattern, as Oracle wants you to accept the license. Nice people did however publish it to a third-party repo. Steps are the following:

  1. Add the needed package repository
  2. Configure the system to automatically accept the license
  3. Configure the system to add un-certified packages
  4. Update the list of repositories
  5. At last, install the package
  6. Also add a package for Java 8 system configuration

RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main" | tee -a /etc/apt/sources.list && \
    echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
    apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886

RUN apt-get update && \
    apt-get install -y --force-yes oracle-java8-installer oracle-java8-set-default

Building the sources

Getting the workshop’s sources and building them is quite straightforward with the following instructions:

RUN git clone  https://github.com/nfrankel/vaadin7-workshop.git
WORKDIR /vaadin7-workshop
RUN mvn package

The drawback of this approach is that Maven starts from a fresh repository, and thus downloads the Internet the first time it is launched. At first, I wanted to mount a volume from the host into the container to share the ~/.m2/repository folder and avoid this, but I noticed this can only be done at runtime through the -v option, as the VOLUME instruction cannot point to a host directory.

Starting the image

The simplest command to start the created Docker image is the following:

$ docker run -p 8080:8080 vaadinworkshop

Do not forget the port forwarding from the container to the host, 8080 being the standard HTTP port. Also, note that it’s not necessary to run the container as a daemon (with the -d option). The added value of not doing so is that the standard output of the CMD (see below) is redirected to the host. When running as a daemon and wanting to check the logs, one has to execute bash inside the container, which requires a sequence of cumbersome manipulations.

Configuring and launching Tomcat

Tomcat can be launched when starting the container by just adding the following instruction to the Dockerfile:

CMD ["catalina.sh", "run"]

However, trying to start the container at this point will result in the following error:

Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/common/classes], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/common], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/server/classes], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/server], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/shared/classes], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.ClassLoaderFactory validateFile
WARNING: Problem with directory [/usr/share/tomcat8/shared], exists: [false], isDirectory: [false], canRead: [false]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina initDirs
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina initDirs
SEVERE: Cannot find specified temporary folder at /usr/share/tomcat8/temp
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina load
WARNING: Unable to load server configuration from [/usr/share/tomcat8/conf/server.xml]
Nov 15, 2014 9:24:18 PM org.apache.catalina.startup.Catalina start
SEVERE: Cannot start server. Server instance is not configured.

I have no idea why, but it seems the Tomcat 8 package on Ubuntu is not configured in any usable way. Everything is available, but we need some symbolic links here and there, as well as to create the temp directory. This translates into the following instruction in the Dockerfile:

RUN ln -s /var/lib/tomcat8/common $CATALINA_HOME/common && \
    ln -s /var/lib/tomcat8/server $CATALINA_HOME/server && \
    ln -s /var/lib/tomcat8/shared $CATALINA_HOME/shared && \
    ln -s /etc/tomcat8 $CATALINA_HOME/conf && \
    mkdir $CATALINA_HOME/temp

The final trick is to link the exploded webapp folder created by Maven into Tomcat’s webapps folder, where it looks for deployments:

RUN mkdir $CATALINA_HOME/webapps && \
    ln -s /vaadin7-workshop/target/workshop-7.2-1.0-SNAPSHOT/ $CATALINA_HOME/webapps/vaadinworkshop

At this point, the Holy Grail is not far away, you just have to browse the URL… if only we knew what the IP was. Since we’re running on a Mac, there’s an additional VM involved besides the host and the container. To get its IP, type:

$ boot2docker ip

The VM's Host only interface IP address is: 192.168.59.103

Now, browsing http://192.168.59.103:8080/vaadinworkshop/ will bring us to the familiar workshop screen:

Developing from there

Everything works fine but didn’t we just forget about one important thing, like how workshop attendees are supposed to work on the sources? Easy enough, just mount the volume when starting the container:

docker run -v /Users//vaadin7-workshop:/vaadin7-workshop -p 8080:8080 vaadinworkshop

Note that the host volume must be located under /Users and that, on OSX, this requires boot2docker v. 1.3+.

Unfortunately, it seems this is the showstopper: mounting an empty directory from the host into the container will not make the container’s directory available from the host. On the contrary, it will empty the container’s directory, given that the host’s directory doesn’t exist… It seems there’s an issue with Docker on Mac. The JHipster installation runs into the same problem and proposes to use the Samba Docker folder sharing project.

I’m afraid I was too lazy to go further at this point. However, this taught me much about Docker, its usages and use-cases (as well as its OSX integration limitations). For those who are interested, you’ll find the complete Dockerfile below. Happy Docker!

FROM ubuntu:utopic

MAINTAINER Nicolas Frankel 

# Config to get to install Java 8 w/o interaction
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu precise main" | tee -a /etc/apt/sources.list && \
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections && \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys EEA14886

RUN apt-get update && \
apt-get install -y --force-yes git oracle-java8-installer oracle-java8-set-default maven tomcat8

RUN git clone https://github.com/nfrankel/vaadin7-workshop.git
WORKDIR /vaadin7-workshop
RUN git checkout v7.2-1
RUN mvn package

ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
ENV CATALINA_HOME /usr/share/tomcat8
ENV PATH $PATH:$CATALINA_HOME/bin

# Configure Tomcat 8 directories
RUN ln -s /var/lib/tomcat8/common $CATALINA_HOME/common && \
ln -s /var/lib/tomcat8/server $CATALINA_HOME/server && \
ln -s /var/lib/tomcat8/shared $CATALINA_HOME/shared && \
ln -s /etc/tomcat8 $CATALINA_HOME/conf && \
mkdir $CATALINA_HOME/temp && \
mkdir $CATALINA_HOME/webapps && \
ln -s /vaadin7-workshop/target/workshop-7.2-1.0-SNAPSHOT/ $CATALINA_HOME/webapps/vaadinworkshop

VOLUME ["/vaadin7-workshop"]

CMD ["catalina.sh", "run"]

# docker build -t vaadinworkshop .
# docker run -v ~/vaadin7-workshop:/vaadin7-workshop -p 8080:8080 vaadinworkshop
Categories: Development

Metrics, metrics everywhere

November 16th, 2014 No comments

With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

  1. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and on which screen. True, there are several solutions to achieve this, but they are all web-based (Google Analytics comes to mind) and metrics might also be required for different architectures. Besides, having all metrics in the same backend means they can be correlated easily.
  2. Metrics, like any other NFR (e.g. logging and exception handling), should be designed and managed upfront, not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize NFRs were important. Trust me when I say it was gory – and it cost much more than if it had been designed in the early phases of the project.
  3. Metrics have an overhead. However, without metrics, it’s not possible to increase performance. Just accept that and live with it.

The inputs are the following: the application is Spring MVC-based and metrics have to be aggregated in Graphite. We will start by using the excellent Metrics project: not only does it get the job done, its documentation is of very high quality and it’s available under the friendly OpenSource Apache v2.0 license.

That said, let’s imagine a “standard” base architecture to manage those components.

First, though Metrics offers a Graphite endpoint, it requires configuration in each environment, which makes things harder, especially on developers’ workstations. To manage this, we’ll send metrics to JMX and introduce jmxtrans as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration where none is needed – and has no impact on performance.

Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are – or sometimes because we didn’t browse the documentation. My point of view as a software engineer is that I’d rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already integrates with Metrics through the Actuator component. However, it only provides a GaugeService – to send single values – and a CounterService – to increment/decrement values. This might be good enough for FRs but not for NFRs, so we might want to tweak things a little.
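
For reference, here’s a minimal sketch of what those two services offer, assuming a Spring Boot 1.x application with the Actuator on the classpath (the class and metric names are made up):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.metrics.CounterService;
import org.springframework.boot.actuate.metrics.GaugeService;
import org.springframework.stereotype.Service;

@Service
public class CheckoutMetrics {

    private final CounterService counterService;
    private final GaugeService gaugeService;

    @Autowired
    public CheckoutMetrics(CounterService counterService, GaugeService gaugeService) {
        this.counterService = counterService;
        this.gaugeService = gaugeService;
    }

    public void checkoutAbandoned(String screen, double basketValue) {
        // Increment a simple counter metric
        counterService.increment("checkout.abandoned." + screen);
        // Submit a single value for a gauge metric
        gaugeService.submit("checkout.basket.value", basketValue);
    }
}

Counters and gauges are fine for business KPIs, but they provide neither histograms nor percentiles for response times, hence the custom sender described below.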

The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

The starting point is to create an aspect, as performance metric is a cross-cutting concern:

@Aspect
public class MetricAspect {

    private final MetricSender metricSender;

    @Autowired
    public MetricAspect(MetricSender metricSender) {
        this.metricSender = metricSender;
    }

    @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) ||execution(* ch.frankel.blog.metrics.dice..*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
        StopWatch stopWatch = metricSender.getStartedStopWatch();
        try {
            return pjp.proceed();
        } finally {
            Class clazz = pjp.getTarget().getClass();
            String methodName = pjp.getSignature().getName();
            metricSender.stopAndSend(stopWatch, clazz, methodName);
        }
    }
}

The only thing out of the ordinary is the use of autowiring, as aspects don’t seem to be able to be the target of explicit wiring (yet?). Also notice the aspect itself doesn’t interact with the Metrics API; it only delegates to a dedicated component:

public class MetricSender {

    private final MetricRegistry registry;

    public MetricSender(MetricRegistry registry) {
        this.registry = registry;
    }

    private Histogram getOrAdd(String metricsName) {
        Map<String, Histogram> registeredHistograms = registry.getHistograms();
        Histogram registeredHistogram = registeredHistograms.get(metricsName);
        if (registeredHistogram == null) {
            Reservoir reservoir = new ExponentiallyDecayingReservoir();
            registeredHistogram = new Histogram(reservoir);
            registry.register(metricsName, registeredHistogram);
        }
        return registeredHistogram;
    }

    public StopWatch getStartedStopWatch() {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        return stopWatch;
    }

    private String computeMetricName(Class clazz, String methodName) {
        return clazz.getName() + '.' + methodName;
    }

    public void stopAndSend(StopWatch stopWatch, Class clazz, String methodName) {
        stopWatch.stop();
        String metricName = computeMetricName(clazz, methodName);
        getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
    }
}

The sender does several interesting things (but with no state):

  • It returns a new StopWatch for the aspect to pass back after method execution
  • It computes the metric name depending on the class and the method
  • It stops the StopWatch and sends the time to the MetricRegistry
  • Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our need.

The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method.

@Configuration
public class MetricsConfiguration {

    @Autowired
    private MetricRegistry metricRegistry;

    @Bean
    public MetricSender metricSender() {
        return new MetricSender(metricRegistry);
    }

    @PostConstruct
    public void connectRegistryToJmx() {
        JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
        reporter.start();
    }
}

The result can be checked in JConsole. Icing on the cake, all default Spring Boot metrics are also available.

Sources for this article are available in Maven “format”.



Dear recruiters

November 9th, 2014 5 comments

I had been pleasantly surprised last time when, after connecting on LinkedIn, a recruiter sent me a personalized mail. It was the first time that happened, and I found the gesture showed that the recruiter cared about the relationship. Besides, the job I currently have was found through LinkedIn – or more correctly, a recruiter found me for this job. All this to say that I have nothing against either recruiters or LinkedIn.

Unfortunately, more often than not, I usually receive such messages:

Hello Nikola,  (First-name basis, because we’ve known each other for a long time. And do not forget to misspell)

I’m currently searching for a junior PHP developer and I believe you would be a perfect fit. (thanks for reading my profile, I really appreciate custom tailored emails)

The job is located in Bulgaria (oh yes, relocating there has always been the goal of my life, thanks for asking!)

and the daily rate is $200 (wow, what a great incentive to relocate!)

Cheers, your friend the recruiter (located in UK, India, anywhere unrelated to my location or the job location)

Those kind of mails directly end up in my spam folder. However, last week saw another usual but still annoying story:

  1. A recruiter cold-contacts me for a dream job – at least from his point of view. Fortunately, this time it was by mail, not by phone, interrupting me during my work
  2. I ask about trivial details, such as the job description and the expected salary
  3. The recruiter tells me he cannot disclose them at this point

So, that’s it, dear recruiters, this is the last straw. I would really appreciate it if you put yourself in my place (for once). When you take time for me, you’re doing your job – and earning your pay, while when I take time for you, it’s unrelated to what I’m paid to do. The only thing I get is a vague possibility of getting a better job (if at all). If I ask about details, it’s because I do want to optimize the time consumed, both mine and yours. By the way, I have no interest in running to your competitor and telling him about your extraordinary job offering, I’ve many more interesting things in life, so telling me about the job details is mandatory, not something that I have to bargain for.

I’ve devised a little something that can help you, please read the following classes:

public class JobMatchEvaluator {

    private final Company company;

    public JobMatchEvaluator(Company company) {
        this.company = company;
    }

    public InterestLevel evaluate() {
        if (!company.focusOnSoftware()
         || !company.investInPeople()
         || !company.staysUpToDate()
         || !company.allowsAnotherLife()
         || !company.respectCandidates()) {
            return InterestLevel.NONE;
        }
        InterestLevel interest = InterestLevel.NONE;
        if (company.getTraining() == PROACTIVE || company.getTraining() == NOT_ONLY_SOFTWARE) {
            interest = interest.increase();
        }
        if (company.getConference() == ACTIVELY_SENDING) {
            interest = interest.increase();
        }
        if (company.getTime() == TWENTY_PERCENT_PET_PROJECT) {
            interest = interest.increase();
        }
        return interest;
    }
}

public class ContactDecisionHelper {

    private final JobMatchEvaluator evaluator;
    private final Recruiter recruiter;

    public ContactDecisionHelper(JobMatchEvaluator evaluator, Recruiter recruiter) {
        this.evaluator = evaluator;
        this.recruiter = recruiter;
    }

    public boolean shouldContact() {
        if (!recruiter.canDiscloseDetails()) {
            return false;
        }
        InterestLevel interest = evaluator.evaluate();
        switch(interest) {
            case NOT_ENOUGH:
            case NONE:
                return false;
            case TOTAL:
            case SPARKED:
                return true;
            default:
                throw new IllegalStateException();
        }
    }
}

Here are the guidelines to use the previous code:

  • If you don’t want to bother reading it, please don’t contact me
  • If you don’t understand the general idea, please don’t contact me
  • If you think it makes me sound better than you, please don’t contact me
  • If it doesn’t make you smile, or you find it not even remotely amusing, please don’t contact me

The complete project can be found on Github. Feel free to clone and adapt it to your needs, or even send me pull requests for generic improvements.

Categories: Miscellaneous

Integration Testing around Europe

November 2nd, 2014 No comments

Recently, I was invited to talk at some great conferences around Europe: JavaDay Kyiv, Joker and Agile Tour London.

It was not only a great trip, it was also the occasion to talk about Integration Testing: how it’s different from Unit Testing, its pros and cons, ways to overcome the cons, how to fake infrastructure dependencies and how to test in-container with Spring, Spring MVC and Java EE – in short, a 45-minute/one-hour summary of my Integration Testing from the Trenches book. You can see the slides; no videos have been released yet.

Though the JavaDay Kyiv and Joker sessions were mainly in Russian, the few English ones there convinced me the three conferences were focused on very different subjects. As its name implies, JavaDay was mainly about Java (though there was some Groovy involved somewhere). I attended a talk entirely dedicated to Integration Testing with the Spring MVC framework. As for my talk, it took place in quite a big room, and the room was packed full. People seemed interested and there were many relevant questions, even some comments based on experience.

The Joker conference advertises itself as a conference for experts. After attending, I understand that expert means low-level: there were many talks about bytecode, optimization of software with regard to hardware architecture, JVM internals and such. To be honest, I barely understood most of the stuff, but at least I learned that SAP provides its own JVM, thanks to a talk by Volker Simonis. Oh yeah, and since it was my first visit to St Petersburg, I also learned that October there is freaking cold! I gave my talk in a small room, and the audience didn’t give much feedback – too bad.

I concluded my tour with Agile Tour London. The first (and most interesting, IMHO) session was a workshop on how to build a city with Legos. The work was divided among teams, one for each part – house, water pump, power plant and road. This highlighted many reasons why software development projects regularly fail: siloed teams, each focused on its own part of the work, no interfaces agreed up-front, etc. As for my own talk, to be honest, though I was invited, I didn’t think it would be a good fit: I never pictured myself as a methodologist, and though my experience has taught me that waterfall projects do not deliver, I’m not really sure Agile fares better. Besides, Integration Testing has less to do with Agile methodology than with software quality. However, the room was quite full and it seemed that despite my warnings about “bits of code being part of the session”, only one person left in the end. I even exchanged ideas with an attendee during the closing party.

Going to conferences is always refreshing: learning about new ideas, trying new stuff and meeting new people – not to mention old friends. Next week I’ll be at Devoxx, with a talk about Mutation Testing. Meet you there!

Categories: Event

On resources scarcity, application servers and micro-services

October 26th, 2014 2 comments

While attending JavaZone recently, I went to a talk by Neal Ford. To be honest, the talk was not something mind-blowing, many tools he showed were either outdated or not best of breed, but he stated a very important fact: application servers are meant to address resource scarcity by sharing resources, while those resources are no longer scarce in this day and age.

In fact, this completely matches my experience. Remember 10 years ago, when we had to order hardware 6 months in advance? At that time, all webapps were deployed on the same application server – which wasn’t always clustered. After some years, I noticed that application servers grew in number. Some were even required to be clustered, because we couldn’t afford for the offered service to be down. It was the time when people started to think about which application to deploy on which application server, because of their specific load profiles. In the latest years, a new behavior appeared: deploying a single webapp to a single application server, because it was deemed too critical to be potentially affected by other apps on the same application server. And that sometimes led to the practice of doing this for every application, whether it was seen as critical or not.

Nowadays, hardware is available instantly, in any required quantity and for next to nothing: they call it the Cloud. Then why are we still using application servers? I think the people behind the Spring framework asked themselves the same question and came up with a radical answer: we don’t need them. Spring has always been about pragmatic development, back when JavaEE (called J2EE then) was still a big bloated standard, full of barbaric acronyms and a real pain in the ass to develop with (remember EJB 2.0?) – no wonder so many developers bitch about Java. Spring valued simple JSP/Servlet containers over fully-compliant JavaEE servers, and now they have finally crossed the Rubicon, as no external application server is necessary anymore.

When I first heard about this, I was flabbergasted, but in the age of micro-services, I guess it makes pretty much sense. Imagine you’ve just finished developing your application. Instead of creating a WAR, an EAR or whatever package you normally create, you just push to a Git repo. Then a hook pushes the code to the server, stops the existing application and starts it again. Wouldn’t that be not only fun but really Agile/Devops/whatever-cool-concept-you-want? I think it would, and that’s exactly the kind of deployment Spring Boot allows. This is not the only feature of Spring Boot, it also provides real convention over configuration, a useful Maven POM, out-of-the-box metrics and health checks and much much more, but embedding Tomcat is the most important one (IMHO).
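
To give an idea, here’s a minimal sketch of such a self-contained application, assuming Spring Boot 1.x with the spring-boot-starter-web dependency:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@EnableAutoConfiguration
public class Application {

    @RequestMapping("/")
    public String home() {
        // Served by the embedded Tomcat started below
        return "Hello from an embedded container";
    }

    public static void main(String[] args) {
        // Starts the application, embedded servlet container included
        SpringApplication.run(Application.class, args);
    }
}

A simple java -jar on the packaged artifact is all the deployment there is.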

On the opposite side, big shops such as IBM, Oracle and even Red Hat still invest huge sums of money into developing their full-profile JavaEE-compliant application servers. The funny thing is that in order to be JavaEE compliant, you have to implement “interesting” bits such as the Java Connector Architecture, something I’ve seen used only once, early in my career, to connect to a CICS. Interestingly, the Web Profile defines a lightweight standard, leaving out JCA… but also JavaMail. Still, it goes in the direction of lightweight servers.

Now, only the future will tell what happens next and how, but I can see a trend forming there.


You shouldn’t follow rules… blindly

October 12th, 2014 2 comments

Some resources on the Internet are written in a very imperative style – you must do things this way. And beware those who don’t follow the rule! They remind me of a French military joke (or more precisely a joke about the military) – though I guess other countries probably have their own version – regarding military rules. They are quite simple and can be summarized in two articles:

Art. 1: It’s mandatory to obey the orders of a superior.

Art. 2: When the superior is obviously wrong, refer to article 1.

What applies to the military domain, however, doesn’t apply to the software domain. I’ve been arguing for a long time that good practices have to be put in context, so that a specific practice may be right in one context but plain wrong in another. The reason is that in the latter case, disadvantages outweigh advantages. Of course, some practices have a wider scope than others. I mistakenly thought that some even had an all-encompassing scope, meaning they bring so many benefits that they apply in all contexts. I have been proven wrong this week, regarding 2 such practices I use:

  • Use JavaConfig over XML for Spring configuration
  • Use constructor injection over attribute injection for Dependency Injection

The use-case is the development of Spring CGLIB-based aspects (the codebase is legacy and interfaces may or may not exist) to collect memory metrics. I must admit this context is very specific, but that doesn’t change that it’s still a context.

First things first: Spring aspects are not yet completely compatible with JavaConfig – and in any case, the Spring version is also legacy (3.x), so JavaConfig is out of the question. But what about at least using annotations? In this case, two annotations come into play: @Aspect for the class and @Around for the method that has to be used. The first is used in a very straightforward way, while the second needs to be passed the pointcut… as a String argument.

@Aspect
public class MetricsCollectorAspect {

    @Around("execution(...)") // This spans many many lines
    public Object collectMetrics(ProceedingJoinPoint pjp) throws Throwable {
        ...
    }
}

The corresponding XML is along the following lines:

<aop:config>
    <aop:aspect ref="metricsCollectorAspect">
        <!-- The pointcut spans many many lines -->
        <aop:around method="collectMetrics" pointcut="execution(...)"/>
    </aop:aspect>
</aop:config>
Benefits of using annotations over XML? None. Besides, the platform product we use doesn’t embed Spring configuration fragments, so it’s quite easy to update them and check the results in the deployed environment – pointcut included. XML: 1, annotations: 0.

Another fun one: I’ve been an ardent defender of constructor injection. It has some advantages, including highlighting dependencies, less boilerplate code and immutability. However, the 3.x version of Spring uses a version of CGLIB that cannot create proxies when there’s no no-args constructor on the proxied class. The paradox is that “good” design prevents proxying, while “bad” design – attribute injection with a no-args constructor – allows it. Sure, there are a couple of solutions: add interfaces to allow pure Spring proxies, add a no-args constructor to the proxied classes, or filter out those unproxyable classes, but none of them is without impact.
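
To make the paradox concrete, here’s a bare-bones sketch (class names invented for the example): CGLIB cannot subclass the first class for lack of a no-args constructor, while the second one poses no problem.

import javax.sql.DataSource;
import org.springframework.beans.factory.annotation.Autowired;

// Constructor injection: explicit dependency and an immutable field,
// but no no-args constructor, so the legacy CGLIB version cannot proxy the class
class ConstructorInjectedCollector {

    private final DataSource dataSource;

    @Autowired
    ConstructorInjectedCollector(DataSource dataSource) {
        this.dataSource = dataSource;
    }
}

// Attribute injection: the implicit default constructor lets CGLIB subclass the class,
// at the price of a hidden dependency and a mutable field
class AttributeInjectedCollector {

    @Autowired
    private DataSource dataSource;
}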

The moral: rules are meant to help you, not hinder you. If you cannot follow them for a good reason (like a prohibitive cost), just ignore them. Just write down in a comment the reason why you didn’t, for your code’s future maintainers.

Send to Kindle
Categories: Java Tags:

Your code coverage metric is not meaningful

October 5th, 2014 3 comments

Last week, I had a heated but interesting Twitter debate about Code Coverage with my long-time friend (and sometimes squash partner) Freddy Mallet.

The essence of my point is the following: the Code Coverage metric that most quality-conscious software engineers cherish doesn’t guarantee anything. Thus, achieving 80% (or 100%) Code Coverage and bragging about it is just as useful as blowing in the wind. For sure, it’s quite hard to have a fact-based debate over Twitter, as 140 chars put a hard limit on any argument. This article is an attempt at writing down my arguments in a limitless space.

The uselessness of raw Code Coverage can be proved quite easily. Let’s have a simple example with the following to-be-tested class:

public class PassFilter {

    private int limit;

    public PassFilter(int limit) {
        this.limit = limit;
    }

    public boolean filter(int i) {
        return i < limit;
    }
}

This class is quite straightforward, there’s no need to comment. A possible test would be the following:

public class PassFilterTest {

    private PassFilter passFilterFive;

    @BeforeMethod
    protected void setUp() {
        passFilterFive = new PassFilter(5);
    }

    @Test
    public void should_pass_when_filtering_one() {
        boolean result = passFilterFive.filter(1);
    }

    @Test
    public void should_not_pass_when_filtering_ten() {
        boolean result = passFilterFive.filter(10);
    }
}

This test class will happily return 100% line coverage as well as 100% branch coverage: executing the tests goes through all the code’s lines and down both sides of its single branch. Isn’t life sweet? Too bad there are no assertions; they could have been “forgotten” on purpose by a contractor who couldn’t achieve the previously agreed-on code coverage metric. Let’s give the contractor the benefit of the doubt, assume programmers are of good faith – and add assertions:

public class PassFilterTest {

    private PassFilter passFilterFive;

    @BeforeMethod
    protected void setUp() {
        passFilterFive = new PassFilter(5);
    }

    @Test
    public void should_pass_when_filtering_one() {
        boolean result = passFilterFive.filter(1);
        Assert.assertTrue(result);
    }

    @Test
    public void should_not_pass_when_filtering_ten() {
        boolean result = passFilterFive.filter(10);
        Assert.assertFalse(result);
    }
}

Still 100% line coverage and 100% branch coverage – and this time, “real” assertions! But it still is of no use… It is a well-known fact that developers tend to test for passing cases. In this case, the two tests use parameters of 1 and 10, while the potential bug sits at the exact threshold of 5 (should the filter let that value pass or not?).

In conclusion, raw Code Coverage only guarantees the maximum possible Code Coverage. If it’s 0%, of course, the real coverage will be 0%; however, with 80%, it gives you nothing… just the assurance that, at most, your code is covered at 80%. But it can also be anything in between: 60%, 40%… or even 0%. What good is a metric that only hints at the maximum? In this light, Code Coverage works like a Bloom Filter: it can tell you for sure when coverage is insufficient, but never that the tests are actually meaningful. IMHO, the only way to guarantee that test cases and the associated test coverage are really meaningful is to use Mutation Testing. For a basic introduction to Mutation Testing, please check my Introduction to Mutation Testing talk at JavaZone (10 minutes).
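
To make this concrete, here’s roughly the kind of mutant a mutation testing tool generates for the filter method – for instance, a conditionals-boundary mutation replaces < with <=:

public boolean filter(int i) {
    // Mutated version: the boundary changed from < to <=
    return i <= limit;
}

Both tests above (with parameters 1 and 10) still pass against this mutant, so it survives – proof that the behavior at the threshold value of 5 is never verified.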

The good thing is that Mutation Testing is not some kind of academic paper known only by nerdy scientists: there are tools in Java to start using it right now. Configuring PIT to run against the previous test yields a report on the surviving mutants.

This report pinpoints the remaining mutant to be killed and its associated line (line 12), and it’s easy to add the missing test case:

@Test
public void should_not_pass_when_filtering_five() {
    boolean result = passFilterFive.filter(5);
    Assert.assertFalse(result);
}

Now that we’ve demonstrated the value of Mutation Testing beyond doubt, what do we do? Some might try a couple of arguments against Mutation Testing. Let’s review each of them:

  • Mutation Testing takes a long time to execute. For sure, trying all possible mutations takes much longer than standard Unit Testing – which has to stay under 10 minutes. However, this is not a problem, as Code Coverage is not a metric that has to be checked at each build: a nightly build is more than enough.
  • Mutation Testing results take a long time to analyze. Also right… but what about the time spent analyzing Code Coverage results that, as seen before, only hint at the maximum possible Code Coverage?

On one hand, the raw Code Coverage metric is only relevant when too low – and requires further analysis when high. On the other hand, Mutation Testing lets you have confidence in the Code Coverage metric at no additional cost. The rest is up to you…

Categories: Development

Throwing a NullPointerException… or not

September 28th, 2014 8 comments

This week, I relived an experience from a few years ago, but from the opposite seat.

As a software architect/team leader/technical lead (pick the term you’re most comfortable with), I was doing code reviews on a project we were working on, and I stumbled upon code that looked like this:

public void someMethod(Type parameter) {
    if (parameter == null) {
        throw new NullPointerException("Parameter Type cannot be null");
    }
    // Rest of the method
}

I was horribly shocked! Applicative code throwing a NullPointerException: that was a big coding mistake. So I gently pointed out to the developer that it was a bad idea and that I’d like him to throw an IllegalArgumentException instead, which is exactly the exception type matching the use-case. This was very clear in my head, until the dev pointed me to the NullPointerException’s Javadoc. For simplicity’s sake, here’s the juicy part:

Thrown when an application attempts to use null in a case where an object is required. These include:

  • Calling the instance method of a null object.
  • Accessing or modifying the field of a null object.
  • Taking the length of null as if it were an array.
  • Accessing or modifying the slots of null as if it were an array.
  • Throwing null as if it were a Throwable value.

Applications should throw instances of this class to indicate other illegal uses of the null object.

Read the last sentence again: “Applications should throw instances of this class to indicate other illegal uses of the null object”. It seems to be legal for an application to throw an NPE; it’s even recommended by the Javadoc.
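
For what it’s worth, the JDK itself follows this reading: since Java 7, java.util.Objects provides a null-check helper that throws… a NullPointerException. The check above could thus be shortened to something like:

public void someMethod(Type parameter) {
    // Throws a NullPointerException with the given message when parameter is null
    java.util.Objects.requireNonNull(parameter, "Parameter Type cannot be null");
    // Rest of the method
}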

In my previous case, I read it again and again… and ruled that yes, it was OK for an application to throw an NPE. Back to today: a dev reviewed a pull request from another one, found out it threw an NPE and ruled it was bad practice. I took the side of the former dev and said it was OK.

What would be your ruling and more importantly, why?

Note: whatever the decision, this is not a big deal and probably not worth the time you spend bickering about it ;-)


Thank you Mr Beck

September 21st, 2014 No comments

Two weeks ago, I was at the JavaZone conference (in Norway) for the first time. There, I facilitated a Vaadin 7 workshop and presented an Introduction to Mutation Testing. Moreover, I attended many great talks by brilliant speakers, but one was really head and shoulders above the lot: “Software Design, Why, When and How” by Kent Beck.

For those of you who don’t know about Mr Beck: he’s the co-author of the JUnit testing framework, he created Extreme Programming, he is at the origin of Test-Driven Development methodologies and much much more… This is just to say that I was expecting much from the talk, especially since Software Design is at the root of most of my work, though I’ve been disappointed by so-called “star” speakers in the past.

The first thing to note is that the talk took place in an unusual room, as the stage was set between two opposite seating areas, each holding half the audience. In essence, you could speak in front of only half the people at any one time. Not an easy feat! Also, I was very surprised by Mr Beck not using any slides in his talk. Of course, he probably had some piece of paper somewhere, but the magic is that his talk was very fluid. Given the amount of time it takes me to prepare my talks with slides, I can only wonder at the time it took him to prepare without. In the end, the result was impressive!

The content of the talk, however, was all about a single and very simple message: Software Design is all about context. Sometimes, you need to design upfront, sometimes, it will cost more. Sometimes, it’s better to refactor, sometimes, not. Sometimes, one should refactor now, sometimes later. And so on. In essence, the meme about code duplication being bad is just that, a meme. This is despite the fact that Software Designers need to care about efficiency first and about how to best use scarce resources, i.e. time.

The last part of the talk was devoted to three main qualities of a Software Designer:

  • Tolerance to ambiguity
  • Ability to postpone decisions
  • Social skills [sic]

This talk really struck a chord in me, as my studies were not in Computer Science but in Architecture (as in buildings). At that time, I was very pissed off that Architecture was not a real science, that there was no right or wrong answer, and that you were never completely sure to have an adequate solution until the thing was finally built. This uncertainty was really a burden back then, and I went into software development because either my code compiled – or not, and my application produced the expected output – or not. But with Software Design, it’s exactly as in Architecture (as in buildings, remember?): it’s all about context and gut feeling and experience.

All in all, this was really a great show. Even better, I went to him after the talk to tell him about the Architecture background, and his conclusion was the following:

Good decisions come from experience
Experience comes from bad decisions
- Kent Beck at JavaZone

This exactly matches my experience, as I’ve made plenty of bad decisions in my career, so I guess I’ve got plenty of material to make good decisions from in the future ;-)

And for those who want to enjoy the talk in its entirety, here it is in all its glory. Thank you Mr Beck.

Categories: Technical