Vagrant your Drupal

June 17th, 2012 No comments

In one of my recent posts, I described how I used VMware to create a Drupal instance I could play with before deploying updates. Then, at Devoxx France, I attended a session where the speaker detailed how he set up a whole infrastructure for after-work training sessions with Vagrant.

Meanwhile, a little turn of fate put me in charge of some Drupal projects and I had to get better at it… fast. I got my hands on The Definitive Guide to Drupal 7, which talks about using Drupal with Vagrant. This was one sign too many: I decided to take this opportunity to automatically manage my own Drupal infrastructure. These are the steps I followed and the lessons I learned. Note my host OS is Windows 7 :-)

Download VirtualBox

Oracle’s VirtualBox is the virtualization product Vagrant builds upon. Go to their download page and take your pick.

Download Vagrant

Vagrant’s download page is here. Once it’s installed on your system, you should put the bin directory on your PATH.

Now, get the Ubuntu Lucid Lynx box ready with:

vagrant box add lucid32 http://files.vagrantup.com/lucid32.box

This downloads the box into your %USER_HOME%/.vagrant.d/boxes directory, under the lucid32 folder (at least on Windows).

Get Drupal Vagrant project

Download the current version of Drupal Vagrant and extract it into the directory of your choice. Edit the Vagrantfile according to the following guidelines:

config.vm.box = "lucid32"       # references the right box
config.vm.network :hostonly, "" # creates a machine with this IP (fill in an address)

Then, boot the virtual box with vagrant up and let Vagrant take care of all things (boot the VM, get the necessary applications, configure all, etc.).

Update your etc/hosts file so that the two following domains point to the VM’s IP address: drupal.vbox.local and dev-site.vbox.local.

At the end of the process (it could be a lengthy one), browsing on your host system to http://drupal.vbox.local/install.php should get you the familiar Drupal installation screen. You’re on!

SSH into the virtual box

Now it’s time to get into the guest system, with vagrant ssh.

If on Windows, here comes the hard part. Since there’s no SSH utility out-of-the-box, you have to get one. Personally, I used PuTTY. Note the VM uses an SSH key for authentication and, unfortunately, the format of Vagrant’s provided key is not compatible with PuTTY, so we have to use PuTTYgen to convert %USER_HOME%/.vagrant.d/insecure_private_key into a format PuTTY is able to use. When done, connect with PuTTY to the system (finally).


All in all, this approach works alright, although Drush is present in /usr/share/drush but doesn’t seem to work (Git is installed and works fine).

Note: I recently stumbled upon this other Drupal cookbook but it cannot be used as is. Better DevOps than me can probably fix it.

Categories: Development

Ways of comparing Date objects in Java

June 3rd, 2012 3 comments

Let’s face it, comparing Date objects in Java is complex as well as error-prone. This is the case in production code, but also in test code, where we regularly need to create Date objects that point at a specific instant in time, to serve as a reference in comparisons.

The good ol’ deprecated way

In test code, I’ve no qualms about using deprecated methods. So, I used the old Date constructor to initialize dates:

Date date = new Date(112, 5, 3);

Pro: it’s concise. Con: it really isn’t intuitive, and you need a good knowledge of the Java API to know that the first parameter is the year minus 1900 and the second is the month index (starting at 0 for January). It’s almost a surprise to learn that the last parameter is simply… the day of the month.
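These offset conventions can be verified by reading the fields back (a minimal sketch; the class name is mine):

```java
import java.util.Date;

public class DeprecatedDate {

    // 112 = 2012 - 1900, 5 = June (months are 0-based), 3 = day of month (1-based)
    static Date june3rd2012() {
        return new Date(112, 5, 3);
    }

    public static void main(String[] args) {
        Date date = june3rd2012();
        // The deprecated getters expose the same conventions as the constructor
        System.out.println(date.getYear());  // prints 112, i.e. 2012 - 1900
        System.out.println(date.getMonth()); // prints 5, i.e. June
        System.out.println(date.getDate());  // prints 3
    }
}
```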

The canonical way

Since Java 1.1, Calendar has been part of the Java API, to dissociate an instant in time (the date) from its representation in a specific referential (the calendar). The following snippet is a naive way to obtain the same result as above.

Calendar calendar = Calendar.getInstance();

calendar.set(YEAR, 2012);
calendar.set(MONTH, JUNE);
calendar.set(DAY_OF_MONTH, 3);

It’s not only more verbose, it’s also a mistake: hours, minutes and the rest are not 0 (they depend on the exact creation time of the calendar), so using equals() here will return false. Here’s the correct code:

Calendar calendar = Calendar.getInstance();

calendar.set(YEAR, 2012);
calendar.set(MONTH, JUNE);
calendar.set(DAY_OF_MONTH, 3);
calendar.set(HOUR_OF_DAY, 0);
calendar.set(MINUTE, 0);
calendar.set(SECOND, 0);
calendar.set(MILLISECOND, 0);

It defeats the purpose of brevity, to say the least ;-)
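The pitfall and its fix can be condensed into a single runnable check (the class and method names are mine, for illustration only):

```java
import java.util.Calendar;
import java.util.Date;

public class CalendarPitfall {

    // June 3rd, 2012 the deprecated way: year offset from 1900, 0-based month
    static Date deprecatedWay() {
        return new Date(112, 5, 3);
    }

    // The same instant built with Calendar, with every time field zeroed
    static Date canonicalWay() {
        Calendar calendar = Calendar.getInstance();
        calendar.set(2012, Calendar.JUNE, 3, 0, 0, 0);
        calendar.set(Calendar.MILLISECOND, 0);
        return calendar.getTime();
    }

    public static void main(String[] args) {
        // Without zeroing hour/minute/second/millisecond, this would be false
        System.out.println(deprecatedWay().equals(canonicalWay())); // prints "true"
    }
}
```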

Apache Commons Lang

Apache Commons has provided, for ages, different utility libraries that help develop in Java. One such library is Apache Commons Lang, which aims at providing code that would merit being part of the Java API.

In our case, the DateUtils class lets us shorten the previous code, while keeping its readability:

Calendar calendar = Calendar.getInstance();

calendar.set(YEAR, 2012);
calendar.set(MONTH, JUNE);
calendar.set(DAY_OF_MONTH, 3);
calendar = DateUtils.truncate(calendar, DAY_OF_MONTH);

Even better, DateUtils lets us work directly on Date objects, making the following alternative also possible:

Date date = new Date();

date = DateUtils.setYears(date, 2012);
date = DateUtils.setMonths(date, JUNE);
date = DateUtils.setDays(date, 3);
date = DateUtils.truncate(date, DAY_OF_MONTH);

Note that it leaves its parameters untouched, enforcing the immutability dear to Functional Programming proponents. Pro: we keep working with the standard Date API. Con: none. And yet, wouldn’t a full-fledged DSL feel somewhat more right?

Joda Time

The final option is to use Joda Time, which aims at being a replacement for Date and Calendar. It has also spawned JSR-310, “a new and improved date and time API for Java”, which should be part of Java 8 (it was originally scheduled for Java 7). A whole article (or even a mini-guide) could be dedicated to Joda Time. For our present concern, the following snippet can advantageously replace our original one:

DateMidnight dm = new DateMidnight(2012, 6, 3);

Back to square one, it seems: clear and concise. And yet, the parameters are self-explanatory; there’s no real need to regularly check the JavaDocs to see how to initialize the year. Besides, the semantics of the class name are clear. Finally, the toDate() method lets us bridge back to the standard Java API.


The conclusion is your own. As for myself, I regularly use Apache Commons Lang, but I’m leaning toward Joda Time these days.

The code is available here as an Eclipse/Maven project archive.

Note: if you need to work with business days, I’ve recently been made aware of ObjectLab Kit. I haven’t used it yet, and feedback is welcome.

Categories: Java Tags:

Database unit testing with DBUnit, Spring and TestNG

June 3rd, 2012 4 comments

I really like Spring, so I tend to use its features to the fullest. However, in some dark corners of its philosophy, I tend to disagree with some of its assumptions. One such assumption is the way database testing should work. In this article, I will explain how to configure your projects to make Spring Test and DBUnit play nice together in a multi-developer environment.


My basic need is to be able to test some complex queries: before integration tests, I have to validate that those queries get me the right results. These are not unit tests per se, but let’s assimilate them as such. To achieve this, I have been using for a while a framework named DBUnit. Although not maintained since late 2010, I haven’t yet found a replacement (be my guest for proposals).

I also have some constraints:

  • I want to use TestNG for all my test classes, so that new developers don’t have to wonder which test framework to use
  • I want to be able to use Spring Test, so that I can inject my test dependencies directly into the test class
  • I want to be able to see for myself the database state at the end of any of my tests, so that if something goes wrong, I can execute my own queries to discover why
  • I want every developer to have their own isolated database instance/schema

Considering the last point, our organization lets us benefit from a single Oracle schema per developer for those “unit tests”.

Basic set up

Spring provides the AbstractTestNGSpringContextTests class out-of-the-box. In turn, this means we can apply TestNG annotations as well as @Autowired on children classes. It also means we have access to the underlying applicationContext, but I prefer not to (and don’t need to in any case).

The structure of such a test would look like this:

@ContextConfiguration(locations = "classpath:persistence-beans.xml")
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Test
    public void whenXYZThenTUV() { ... }
}

Readers familiar with Spring and TestNG shouldn’t be surprised here.

Bringing in DBUnit

DbUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs. [...] DbUnit has the ability to export and import your database data to and from XML datasets. Since version 2.0, DbUnit can also work with very large datasets when used in streaming mode. DbUnit can also help you to verify that your database data match an expected set of values.

DBUnit being a JUnit extension, you’re expected to extend the provided parent class, org.dbunit.DBTestCase. In my context, I have to redefine some setup and teardown operations to fit into Spring’s inheritance hierarchy. Luckily, the DBUnit developers thought about that and offer the relevant documentation.

Among the different strategies available, my tastes tend toward the CLEAN_INSERT and NONE operations respectively on setup and teardown. This way, I can check the database state directly if my test fails. This updates my test class like so:

@ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Autowired
    private IDatabaseTester databaseTester;

    @BeforeMethod
    protected void setUp() throws Exception {

        // Get the XML and set it on the databaseTester
        // Optional: get the DTD and set it on the databaseTester

        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.setTearDownOperation(DatabaseOperation.NONE);
        databaseTester.onSetup();
    }

    @Test
    public void whenXYZThenTUV() { ... }
}

Per-user configuration with Spring

Of course, we need a specific Spring configuration file to inject the databaseTester. As an example, here is one:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="location" value="classpath:${user.name}.properties" />
    </bean>

    <bean name="dataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource">
        <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
        <property name="username" value="${db.username}" />
        <property name="password" value="${db.password}" />
        <property name="url" value="jdbc:oracle:thin:@<server>:<port>/${db.schema}" />
    </bean>

    <bean name="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
        <constructor-arg ref="dataSource" />
    </bean>
</beans>
However, there’s more than meets the eye. Notice that the databaseTester has to be fed a datasource. Since a requirement is to have one database per developer, there are basically two options: either use an in-memory database, or use the same database product as in production and provide one such database schema per developer. I lean toward the latter solution (when possible), since it decreases the differences between the testing environment and the production environment.

Thus, in order for each developer to use their own schema, I use Spring’s ability to resolve Java system properties at runtime: each developer is characterized by a different user.name system property. Then, I configure a PropertyPlaceholderConfigurer that looks for a ${user.name}.properties file, which will look like so:


This lets me achieve my goal of each developer using their own Oracle schema. If you want to use this strategy, do not forget to provide a specific properties file for the Continuous Integration server.
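For reference, such a per-developer file simply holds the keys referenced in the datasource configuration above (the values below are examples only):

```properties
db.username=jdoe
db.password=secret
db.schema=JDOE
```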

Huh oh?

Finally, the whole testing chain is configured down to the database tier. Yet, when the previous test is run, everything is fine (or not), but when checking the database, it looks untouched. Strangely enough, if you load some XML dataset and assert against it during the test, it does behave accordingly: this bears all the symptoms of a transaction issue. In fact, when you look closely at Spring’s documentation, everything becomes clear: Spring’s vision is that the database should be left untouched by running tests, in complete contradiction to DBUnit’s. This is achieved by simply rolling back all changes at the end of each test by default.

In order to change this behavior, the only thing to do is to annotate the test class with @TransactionConfiguration(defaultRollback = false). Note this doesn’t prevent us from specifying methods that shouldn’t affect the database state on a case-by-case basis with the @Rollback annotation.

The test class becomes:

@ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
@TransactionConfiguration(defaultRollback = false)
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Autowired
    private IDatabaseTester databaseTester;

    @BeforeMethod
    protected void setUp() throws Exception {

        // Get the XML and set it on the databaseTester
        // Optional: get the DTD and set it on the databaseTester

        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.setTearDownOperation(DatabaseOperation.NONE);
        databaseTester.onSetup();
    }

    @Test
    public void whenXYZThenTUV() { ... }
}

Though Spring’s and DBUnit’s views on database testing are opposed, Spring’s configuration versatility lets us make it fit our needs (and benefit from DI). Of course, further improvements are possible: pushing common code up into a parent test class, etc.

To go further:

Categories: Java

Arquillian on legacy servers

May 27th, 2012 No comments

In most contexts, when something doesn’t work, you just Google the error and you’re basically done. One good thing about working for organizations that lag behind technology-wise is that it’s generally more challenging and you’re bound to be creative. Me, I’m stuck on JBoss 5.1 EAP, but that doesn’t stop me from trying to use modern approaches to software engineering. In the quality domain, one such attempt is to provide my developers with a way to test their code in-container. Since we are newcomers to the EJB3 realm, that means they need true integration testing.

Given the stacks available at the time of this writing, I’ve settled on Arquillian, which seems the best (and only?) tool to achieve this. With this framework (as well as my faithful TestNG), I was set to test JavaEE components on JBoss 5.1 EAP. This article describes how to do just that (and mentions the pitfalls I had to overcome).

Arquillian basics

Arquillian basically provides a way for developers to manage the lifecycle of an application server and deploy a JavaEE artifact into it, in an automated way, integrated with your favorite testing engine (read TestNG – or JUnit if you must). Arquillian’s architecture is based on a generic engine plus adapters for specific application servers. If no adapter is available for your application server, that’s tough luck. If you’re feeling playful, try searching for a WebSphere adapter… then develop one. The first difficulty was that at the time of my work, there was no JBoss EAP 5.1 adapter, but I read on the web that EAP 5.1 sits between a 5.0 GA and a 6.0 GA: a lengthy trial-and-error process took place (now there’s at least some documentation, thanks to the Arquillian guys hearing my plea on Twitter – thanks!).

Server type and version are not enough to get the right adapter, you’ll also need to choose how you will interact with the application server:

  • Embedded: you download all dependencies, provide a configuration and presto, Arquillian magically creates a running embedded application server. Pro: you don’t have to install the app server on each of your devs’ computers; cons: configuring it may well be a nightmare, and there may be huge differences with the final platform, defeating the purpose of integration testing.
  • Remote: use an existing and running application server. Choose wisely between a dedicated server for all devs (who will share the same configuration, but also the same resources) and one app server per dev (who said maintenance nightmare?).
  • Managed: same as remote, but tests will start and stop the server. Better suited to one app server per dev.
  • Local: I’ve found traces of this one, even though it seems to be buried deep. This seems to be the same as managed, but it uses the filesystem instead of the wire to do its job (starting/stopping by calling scripts, deploying by copying archives to the expected location, etc.), thus the term local.

Each adapter can be configured according to your needs through the arquillian.xml file. For Maven users, it goes at the root of src/test/resources. Don’t worry, each adapter is aptly documented for configuration parameters. Mine looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian">
    <container qualifier="jboss" default="true">
        <configuration>
            <property name="javaHome">${jdk.home}</property>
            <property name="jbossHome">${jboss.home}</property>
            <property name="httpPort">8080</property>
            <property name="rmiPort">1099</property>
            <property name="javaVmArguments">-Xms512m -Xmx768m -XX:MaxPermSize=256m
                -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000</property>
        </configuration>
    </container>
</arquillian>
Finally, the test class itself has to extend org.jboss.arquillian.testng.Arquillian (or if you stick with JUnit, use @RunWith). The following snippet shows an example of a TestNG test class using Arquillian:

public class WebIT extends Arquillian {

    @Test
    public void testMethod() { ... }

    @Deployment
    public static WebArchive createDeployment() { ... }
}

Creating the archive

Arquillian’s tactic regarding in-container testing is to deploy only what you need. As such, it comes with a tool named ShrinkWrap that lets you package only the relevant parts of what is to be tested.

For example, the following snippet creates an archive named web-test.war, bundling the ListPersonServlet class and the web deployment descriptor. More advanced uses would include libraries.

WebArchive archive = ShrinkWrap.create(WebArchive.class, "web-test.war")
    .addClass(ListPersonServlet.class)
    .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"));

The Arquillian framework will look for a static method, annotated with @Deployment that returns such an archive, in order to deploy it to the application server (through the adapter).

Tip: use the toString() method on the archive to see what it contains if you have errors.

Icing on the Maven cake

Even if you’re no Maven fan, I think this point deserves some attention. IMHO, integration tests shouldn’t be part of the build process, since they are by essence more fragile: a simple configuration error on the application server and your build fails without a real cause.

Thus, I would recommend putting your tests in a separate module that is not called by its parent.

Besides, I also use the Maven Failsafe plugin instead of the Surefire one, so that I get a clean separation of unit tests and integration tests, even across separated modules, and different metrics for both (in Sonar, for example). To get that separation out-of-the-box, just make sure your test class names end with IT (as in Integration Test). Note that integration tests are bound to the integration-test lifecycle phase, which runs just before verify and install.
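Binding Failsafe in the integration-test module’s POM would look roughly like the following sketch (the version number is an example; by default the plugin picks up classes matching *IT):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.12</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```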

Here comes JBoss EAP

Now comes the hard part: actual deployment on JBoss 5.1 EAP. This stage will probably last a long, long time and involve a lot of going back and forth. In order to be as productive as possible, the first thing to do is locate the logs. If you kept the above configuration, they are in the JBoss <profile>/logs directory. If you find the logs too verbose for your tastes (I did), just change the root priority from ${jboss.server.log.threshold} to INFO in the <profile>/conf/jboss-log4j.xml file.

On my machine, I kept getting JVM bind port errors. Thus, I changed the guilty connector port in the server.xml file (FYI, it was the AJP connector and JBoss stopped complaining when I changed 8009 to 8010).

One last step for EAP is to disable security. Since it’s enterprise-oriented, security is enabled by default in EAP: in a testing context, I couldn’t care less about authenticating to deploy my artifacts. Open the <profile>/deploy/profileservice-jboss-beans.xml configuration file and search for “comment this list to disable auth checks for the profileservice”. Do as you’re told :-) Alternatively, you could walk the hard path and configure authentication: detailed instructions are available on the adapter page.

Getting things done, on your own

Until this point, we more or less followed instructions found here and there. Now we have to dirty our nails and use some of our gray matter.

  • The first thing to address is a strange java.lang.IllegalStateException when launching a test. Strangely enough, this is caused by Arquillian missing some libraries that have to be ShrinkWrapped along with your real code. In my case, I had to add the following snippet to my web archive:
    MavenDependencyResolver resolver = DependencyResolvers.use(MavenDependencyResolver.class);
  • The next error is much more vicious and comes from Arquillian’s inner workings.
        at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(
        at java.util.LinkedHashMap$
        at org.jboss.arquillian.container.test.spi.util.TestRunners.getTestRunner(

    When you look at the code, you see that Arquillian uses the Service Provider feature (for more info, see here). But much to my chagrin, it doesn’t configure the implementation that the org.jboss.arquillian.container.test.spi.TestRunner service should use, and thus fails miserably. We have to create such a configuration manually: a file named after the service interface, under META-INF/services, containing only the line org.jboss.arquillian.testng.container.TestNGTestRunner (for such is the power of the Service Provider). Don’t forget to package it along with the archive to have any chance at success:

    archive.addAsResource(new File("src/test/resources/META-INF/services"), "META-INF/services");

Update [28th May]: the two points above can be abandoned if you use the correct Maven dependency (the Arquillian BOM). Check the POMs in the attached project.

At the end, everything should work fine, except for a final log message in the test:

Shutting down server: default
Writing server thread dump to /path/to/jboss-as/server/default/log/threadDump.log

This means Arquillian cannot shut down the server because it can’t authenticate. This would have no consequence whatsoever, but it marks the test as failed and thus needs to be corrected. Edit the relevant properties file under <profile>/conf/props/ and uncomment the admin = admin line.


The full previous steps took me about half a week’s work spread over a week (it seems I’m more productive when not working full-time, as my brain launches background threads to solve problems). This was not easy, but I’m proud to have beaten the damn thing. Moreover, a couple of proprietary configuration settings were omitted from this article. In conclusion, Arquillian seems to be a nice in-container testing framework, but it has yet to be polished around some corners: I think using TestNG may be the culprit here.

You can find the sources for this article here, in Eclipse/Maven format.

To go further:

EJB3 façade over Spring services

May 20th, 2012 No comments

As a consultant, you seldom get to voice your opinions regarding the technologies used by your customers, and it’s even more extraordinary when you’re heard. My current context belongs to the usual case: I’m stuck with Java 6 running on JBoss 5.1 EAP with no chance of moving forward in the near future (and I consider myself happy, since a year and a half ago it was Java 5 with JOnAS 4). Sometimes others wonder if I work in a museum, but I see myself more as an archaeologist than a curator.


I recently got my hands on an application that had to be migrated from a proprietary framework to more perennial technologies. The application consists of a web front-office and a Swing back-office. The key difficulty was to make the Swing part communicate with the server part, since both live in two different network zones, separated by a firewall (with some open ports for RMI and HTTP). Moreover, our security team mandates that such communications be secured.

The hard choice

The following factors played a part in my architecture choice:

  • My skills: I’ve plenty more experience in Spring than in EJB3
  • My team’s skills, more oriented toward Swing
  • Reusing as much as possible of the existing code, or at least the interfaces
  • Existing requirements toward web services:
  • Web services security is implemented through the reverse proxy, and its reliability is not the best I’ve ever seen (to put it mildly)
  • Web services applications have to be located on dedicated infrastructure
  • Mature EJB culture
  • Available JAAS LoginModule for secure EJB calls and web services from Swing

Now, it basically boils down to exposing Spring services over HTTP, or via EJB3. In the first case, cons include no experience with Spring remoting, performance (or even reliability) issues, and deployment on different servers, thus a more complex packaging for the dev team. In the second case, they include slightly higher complexity (yes, EJB3 is easier than EJB 2.1, but still), a higher ramp-up time, and me not being able to properly support my team when a difficulty is met.

In the end, I decided to use Spring services to the fullest, but to put them behind an EJB3 façade. That may seem strange, but I think I get the best of both worlds: EJB3 skills are kept to a bare minimum (transactions will be managed by Spring), while the technology gets me directly through the reverse proxy. I’m open to suggestions and arguments toward this or that solution given the above factors, but the quicker the better :-)


To ease the design to the maximum, each Spring service will have exactly one EJB3 façade, which will delegate calls to the underlying service. Most IDEs can take care of the boilerplate delegating code (hell, you can even use Project Lombok with @Delegate – I’m considering it).

On the class level, the following class design will be used:

This is only standard EJB design, with the added Spring implementation.
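In code, the design boils down to a shared business interface plus a plain Spring implementation (a sketch only: names echo the RandomGenerator example shown later, and the method body is illustrative):

```java
import java.util.Random;

// Business interface, shared by the Spring implementation and the EJB3 façade
public interface RandomGeneratorService {

    int generateNumber(int lowerLimit, int upperLimit);
}

// Plain Spring implementation (declared as a Spring bean in the services module);
// the EJB3 session bean will only delegate to it
class RandomGeneratorServiceImpl implements RandomGeneratorService {

    private final Random random = new Random();

    @Override
    public int generateNumber(int lowerLimit, int upperLimit) {
        // Uniformly picks a number in [lowerLimit, upperLimit]
        return lowerLimit + random.nextInt(upperLimit - lowerLimit + 1);
    }
}
```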

On the module level, this means we will need a somewhat convoluted packaging for the service layer:

  • A Business Interfaces module
  • A Spring Implementations module
  • An EJB3 module, including remote interfaces and session beans (thanks to the Maven EJB plugin, it will produce two different artifacts)


Finally, developing the EJB3 façade and injecting it with Spring beans is ridiculously simple. The magic lies in the JavaEE 5 @Interceptors annotation on top of the session bean class, referencing the SpringBeanAutowiringInterceptor class, which kicks in Spring injection after instantiation (as well as after activation) on every referenced dependency.

The only dependency in our case is the delegate Spring bean, which has to be annotated with @Autowired.

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor;

@Stateless
@Interceptors(SpringBeanAutowiringInterceptor.class)
public class RandomGeneratorBean implements RandomGeneratorService {

    @Autowired
    private RandomGeneratorService delegate;

    public int generateNumber(int lowerLimit, int upperLimit) {

        return delegate.generateNumber(lowerLimit, upperLimit);
    }
}
In order for this to work, we have to use a specific Spring configuration file, which references the Spring application context defined in our services module and activates annotation configuration.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context.xsd">

    <context:annotation-config />

    <!-- bean referencing the application context defined in the services module -->

</beans>
Warning: it’s mandatory to name this file beanRefContext.xml and to make it available at the root of the EJB JAR (and of course to set the Spring service module as a dependency).


Sometimes, you have to make interesting architectural choices that are pertinent only in your context. In these cases, it’s good to know somebody has paved the road for you: such is the case with an EJB3 façade over Spring services.

To go further:

Categories: JavaEE

Quick evaluation of Twitter Bootstrap

May 13th, 2012 5 comments

I must admit I suck at graphical design. It’s one of the reasons that put me on the path of Flex and Vaadin in the first place: out-of-the-box, you get an application that is pleasing to the eye.

Using one of these technologies is not possible (nor relevant) in all contexts, and since I’ve got a strong interest in UI, I regularly have a look at other alternatives for clean looking applications. The technology I studied this week is Twitter Bootstrap.

Bootstrap is a lightweight client-side framework that comes (among other things) with pre-styled “components” and a standard layout. Both are completely configurable, and you can download the customized result here. From a technical point of view, Bootstrap consists of a CSS file – and an optional JavaScript file that relies on jQuery (both are also available in minified flavor). I’m no big fan of fancy client-side JavaScript, so what follows focuses on what you can do with plain CSS.


Bootstrap offers a 940px-wide canvas, separated into 12 columns. What you do with those columns is up to you: most of the time, you’ll probably want to group some of them. This is easily achieved with simple CSS classes. In fact, in Bootstrap, everything is done with selectors. For example, the following snippet offers a main column that has twice the width of its left and right sidebars:

<div class="row">
    <div class="span3">Left sidebar</div>
    <div class="span6">Main column</div>
    <div class="span3">Right sidebar</div>
</div>

A feature worth mentioning is responsive design: in essence, the layout adapts itself to the screen it’s rendered on, an important advantage considering the fragmentation of today’s user agents.


Whatever application you’re developing, chances are high that you’ll need some components, like buttons, menus, fields (in forms), tabs, etc.

Through CSS classes, Bootstrap offers a bunch of components. For existing HTML elements, it applies a visually appealing style to them. For components with no HTML equivalent (such as menus and tabs), it tweaks the rendering of standard HTML.
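A couple of examples, using class names from Bootstrap’s own documentation (the link targets are placeholders):

```html
<!-- A standard link styled as a primary button -->
<a href="#" class="btn btn-primary">Save</a>

<!-- A plain list turned into tabbed navigation -->
<ul class="nav nav-tabs">
    <li class="active"><a href="#home">Home</a></li>
    <li><a href="#about">About</a></li>
</ul>
```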


I tried to remake some pages of More Vaadin with Bootstrap and, all in all, the result is not unpleasing to the eye. In cases where I can’t use Vaadin, I think I would be a strong proponent of Bootstrap when creating an application from scratch.

You can find the sources of my attempt here.

Categories: Development

Specification by Example review

This review is about Specification by Example by Gojko Adzic, from Manning.


  • 18 chapters, 254 pages, $29.21
  • This book covers Specification by Example (you could have guessed it from the title). In effect, SBE is a way to build the right software (for the customers), as opposed to building the software right (which is our trade as engineers).

Specification by Example is a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently.


  • Not all methods are adequate for every context: each hint described is made contextual.
  • Do’s are detailed all right, with their respective benefits, but don’ts are also characterized, along with the reasons why you shouldn’t go down their path.
  • Full of real-life examples, with people and teams talking about what they did and why.
  • 6 case studies


  • Missing a tooling section to explain how to put these ideas into action.


As software engineers, we are in the lower part of the software creation chain and it shows. We focus on building the software right… Yet we fail to address problems that find their origin in the upper part of the chain; agile methodologies are meant to prevent this. This book starts from a simple observation: in most projects, specifications, tests and software are all but synchronized. It lists a collection of agile recipes to remedy that.

Whether you're a business analyst, a project manager or just a technical guy who wants to build better software, this book is for you. In any case, you'll want to keep it within arm's reach.

Categories: Book review Tags: ,

Why Eclipse WTP doesn’t publish libraries when using m2e

April 29th, 2012 No comments

Lately, I noticed my libraries weren't published to Tomcat when I used a Maven project in Eclipse, even though it used standard WAR packaging. Since I mostly use Vaadin, I didn't care much: I manually published the single vaadin-x.y.z.jar to the deployed WEB-INF/lib and was done with it.

Then I realized it happened on two different instances of Eclipse, and since I used 3 different libraries for the writing of Develop Vaadin apps with Scala, I wanted to correct the problem. At first I blamed it on the JRebel plugin I had recently installed; then I began investigating the case further and found out that I needed a special Maven connector that was present in neither of my Eclipse instances: the WTP connector. I had installed this kind of connector back when Maven integration was done by Sonatype's m2eclipse, but had forgotten about it a while ago. Now the integration is called m2e, and I had to go through the whole nine yards again…

The diagnosis can be hard, but the solution is very simple.

Go to the Windows -> Preferences menu and from there go to Maven -> Discovery.

Click on Open Catalog.

Search for “wtp”, select Maven Integration for WTP and click on Finish. Restart and be pleased. Notice that most Maven plugin integrations in Eclipse can be resolved the same way.

For a summary of my thoughts on the current state of m2e, please see here.

Categories: JavaEE Tags: , ,

Food for my thoughts after a week at conferences

April 22nd, 2012 1 comment

This week, I attended both Jenkins User Conference Paris and Devoxx France. Though I’ve already reported notes about the sessions I went to (Jenkins, Devoxx day 1, day 2 and day 3), there were some underlying trends I wanted to write down in order to clear my thoughts.

Power to the programmers
Programmers form the basis of the software industry. Like many of their brethren in other industries, this means they are subject to unlimited exploitation (at least, that's what I've observed in France) and are not well regarded.
A trend I've seen at Devoxx is for developers to reclaim their rightful part in the value-production chain. Interestingly, it's not an individual movement but a collective one.
Evolve or die
In contrast, with the rapid changes occurring in both hardware and software, programmers have a duty to themselves to constantly learn new things or become irrelevant the next day.
As a consequence, developers should be jacks-of-all-trades, not one-trick ponies. In particular, those who have no prior experience with concurrency should get some posthaste, or they will face unexpected bugs coming from today's multicore architectures.
The decline of JavaEE
Although Java is still seen as “the” language, which is a good thing at a Java conference, there weren't many talks on JavaEE. It's as if the platform is becoming a thing of the past, without much ado (none, in fact).
Some say the JVM will be Java's legacy, in no small part due to the recent explosion of languages. Likewise, a growing share of applications have a very short lifespan. Both factors make me predict that in the near future, the emphasis will be not on how to implement features, but on the features themselves.
The cloud is here to stay
In parallel to JavaEE's fall, the cloud is rising… very fast, whether it's IaaS, PaaS or SaaS: Google App Engine, Jelastic, Cloud Foundry, Heroku and CloudBees are only some remarkable examples. All these solutions enable a very short time-to-market, a necessity in today's world.

For me, this translates into some paths worth exploring in the following months:

  • Check Java 1.4, 5, 6 and 7 ways of handling concurrency. Search for third-party libraries that complement them (such as Akka)
  • Go deeper into Scala. And start learning Groovy, in earnest
  • Get me a laptop with a killer configuration and play with DevOps tools. I was amazed in a talk to learn how these tools were used to create a training environment
  • Think globally about code quality: JavaScript is code too, and the code base is becoming larger and larger (much to my chagrin, but I have to deal with it nonetheless). Do the same for PHP
  • Stay in touch with all those nice cloud platforms and their offerings
Categories: Event Tags:

Devoxx France 2012 day 3

April 20th, 2012 No comments

Last day of Devoxx France 2012! Waking up was not so easy; I had to drink a coffee to get me going (and I never do).

Trends in mobile application development by Greg Truty

I didn’t take notes on this one because my computer rebooted :-/

Mobile is becoming strategic to the business: it's beginning to be the equivalent of CRM, portals and other such assets.
On the development side, mobile development is akin to traditional development. There are a couple of differences, however: development cycles are very short (weeks) because time-to-market is very quick, hardware manufacturers release more often, leading to a very fragmented market, and we are back to client-server architectures.

  • Native applications require a whole new set of skills in the enterprise. Fragmentation forces you to target specific platforms and forget the others. Other requirements are creeping in (security, scalability, integration in the enterprise architecture, …).
  • Another approach would be to take the standard web route; the familiarity is there and experience on the subject is already available. Unfortunately, we cannot leverage platform-native capabilities.
  • The best of both worlds comes in the form of the hybrid approach, with tools such as Apache Cordova (formerly PhoneGap). In this case, the only disadvantage is slower performance than native applications (sometimes, at least).
  • Finally, what if we could write to a common API and then compile for each platform to produce native applications? This is the cross-platform programming model. Appcelerator and its kind help here.

IBM recently acquired Worklight, a framework to address all the previous strategies that also deals with security, SSO, enterprise integration and so on. Worklight provides an IDE, a server, runtime components and a web-based console to monitor mobile applications and infrastructure.

The guy is not bad and there's some interesting info, but a sales pitch at 9 in the morning makes me regret not having stayed in bed longer.

Portrait of the developer in “The Artist” by Patrick Chanezon

The story is about George, a JavaEE developer who works for top companies. After 3 years of work, the AZERTY project is put into production: the GUI is bad, the IT director is happy, users hate it. George is happy: he loves complexity, and nobody understands how the system works, save him. George is promoted Project Manager, then Project Director. One day, when George goes to an OSS meeting, he hears new terms: Hibernate, agility, and such. Yet, instead of learning those new techs, he learns golf and codes no more. Since he now has a budget, he brings agile coaches into his company: it fails abysmally. Finally, he becomes IT director… Users reject his new project, AZERTY 3.0, and use Google Apps behind his back. His boss now demands iPhone and Android applications. He cannot code them and is finally fired.

What happened while George learned to play golf? The last 2 years saw a paradigm shift that happens every 15 years:

  • in the 60s, there were mainframes (server-side)
  • in the 80s, it was client-server architecture
  • in the 90s, the web put everything on the server side again
  • in 2010, it's the Cloud, HTML5 and mobile; it feels like we put things back into the client again

Industry analysts segment the Cloud into SaaS, PaaS and IaaS, as a pyramid. For developers, PaaS is the most important: it is something akin to the operating system. The trend is now Development as a Service. Interestingly enough, though platforms become more industrialized, software development moves toward craftsmanship.

Historically speaking, the Cloud began with consumer websites solving their own needs (like Google, Amazon and others). Nowadays, IaaS is becoming mainstream, but you still need to build and maintain your own software infrastructure. For example, when Amazon failed last year, 3 startups survived the outage: those that had built their own platform.
The software development process is about to change. Business-to-consumer applications are becoming like haute couture, with new applications coming in every day. Some applications even have a short life expectancy (like the Devoxx France application). Thus, development cycles have to be shortened in parallel, hence agility. The Cloud lets us become more agile by cutting us off from infrastructure concerns.

The main risk of Cloud infrastructure is vendor lock-in. Cloud Foundry addresses the problem by providing an Open Source product. For example, AppFog uses Cloud Foundry and added PHP. Likewise, you can create your own private cloud and host it on your own infrastructure.

There are a couple of things to learn from that:

  • Software is becoming like fashion, design rules
  • Use the best tool for the task at hand
  • Learn, learn, learn!

In conclusion, George buys an iPhone, codes a little bit every day, reads programming books and is happy again! Besides, he has stopped playing golf. The rest is up to you…

Wow, the man is really good, as well as the session. If you missed the keynote, see it on Parleys ASAP.

Abstraction Distractions for France by Neal Ford

As developers, we think in abstractions: if there weren't any, we would have to deal with 0s and 1s and couldn't get anything done. And yet, by using abstractions, we sometimes forget they are only abstractions and tend to consider them the real thing.

Let's take a simple document. How can you ensure that a chocolate cake recipe can still be read in a hundred years? Put it in a Word document? Probably not… A text format would be better, but raises encoding issues. If you're a government agency, the stakes are higher. The goal is not to save the document, but the information.

For example, emacs tapes are still available, but there’s nothing to read them.

[Other fun examples are omitted because I couldn't type fast enough and the whole thing is based on pictures]

  • Lesson #1, don’t mistake the abstraction for the real thing
  • Lesson #2, understand 1 level down your usual abstraction
  • Lesson #3, once internalized, abstractions are very hard to cut off
  • Lesson #4, abstractions are both walls and prisons
  • Lesson #5, don’t name things that expose underlying details
  • Lesson #6, good APIs are not merely high-level or low-level, they are both at once
  • Lesson #7, generalize the 80% cases, and get away from the rest
  • Lesson #8, don’t be distracted by your abstractions

Dietzler's law: using a tool like Access (or any contextual tool), 80% of what the user wants is very easy, 10% is hard to implement and the last 10% cannot be achieved, because you would have to go beyond the abstraction and that cannot be done.

Any tool (like Maven) has a tipping point: once you've gone beyond it, things will never be wonderful again. At that point you've got to cut and run, but it will be hard because you'll be imprisoned by the abstraction.

Wow! I missed Neal’s session on the first day and I regretted it at the time. Now, I regret it even more. The previous keynote raised the bar very high, Neal passed it with flying colors. Anyone wanting a real show (with content to boot) should attend Neal’s session given the chance. Besides, you could even get his wife’s Chocolate Cake recipe :-)

Kotlin, by Dmitry Jemerov

New languages seem to bloom every so often, but last year saw announcements from enterprises (Red Hat's Ceylon, JetBrains' Kotlin and Google's Dart). Here's my chance to hear a talk from someone very close to Kotlin's inception.
JetBrains' motto is “develop with pleasure”. There are other products besides IntelliJ IDEA (TeamCity, PhpStorm, and others).
Kotlin is statically typed, runs on the JVM, is OSS and is developed by JetBrains with help from the community. Why Kotlin? Because IntelliJ IDEA is written in Java, and though Java is not the best fit, other languages are not better.
Goals include:

  • Full Java interoperability
  • compilation and runtime as fast as Java's
  • more concise than Java, but preventing more kinds of errors (such as NPEs)
  • simple (no PhD level required)

Performance gains are to be found in different areas: no autoboxing, no constant runtime overhead in generated code and a low compilation time. Kotlin is meant to be easy to learn; there's no distinction between library developers and ‘standard' developers (as in Scala). It focuses on best practices taken from other languages. This leads to no ground-breaking features, only common sense.
[Follow a bunch of code snippets]
At first, it seems akin to Scala, with fun replacing def: there are Scala-like constructors, vals and vars, traits, and reversed-order type/name declarations (name first, separated by a colon)

Extension functions seem nice enough: you can add functions to existing types without bothering to inherit the whole nine yards. Also, in Kotlin, functions are first-class citizens (in short, you get closures with type inference). Interestingly enough, it is the default name for the single argument of a function literal (as in Groovy).

An interesting feature of Kotlin is the use of the dollar sign to reference variables inside strings, providing variable substitution at runtime.

There's a limited form of operator overloading: only a subset of operators can be overloaded, and it needs to be done through a specifically-named function. For example, to overload -, you override the minus function.

In order to manage nullability, Kotlin has the ? operator to allow safe calls (as in Groovy again). For example, the following snippet returns the list size, or 0 if the list is null:

f.listFiles()?.size() ?: 0

The question mark marks the return type as nullable; otherwise it's not nullable and the compiler complains. This makes the code more concise and provides a safety net for handling null values.
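For comparison, here is a minimal sketch of the defensive Java code this replaces (a hypothetical fileCount() helper; note that Java's File.listFiles() returns an array, so the equivalent of size() is length):

```java
import java.io.File;

public class NullSafetyDemo {

    // Java equivalent of Kotlin's f.listFiles()?.size() ?: 0
    static int fileCount(File f) {
        File[] files = f.listFiles(); // may return null (not a directory, I/O error)
        return files != null ? files.length : 0;
    }

    public static void main(String[] args) {
        // listFiles() returns null on a non-existent path, so the fallback kicks in
        System.out.println(fileCount(new File("no-such-directory"))); // prints 0
    }
}
```

Two explicit steps (a local variable and a null check) collapse into one expression in Kotlin.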

Kotlin also provides pattern matching (as in Scala, the syntax being very similar). An improvement over Groovy’s builders, Kotlin’s are typesafe: for example, the HTML builder checks for validity at compile-time!

More features include:

  • Inline functions
  • First-class delegation, making delegation easier: there's no code to write, the compiler takes care of it
  • Modularization as part of the language
  • Declaration-site variance [???]
  • And more to come [this frightens me to no end: what will happen to backward compatibility?]

Not an uninteresting talk, but the speaker lacked some passion. The take-what-works-from-other-languages strategy is a very interesting one: at least you get the best of many worlds out of the box. I see a big problem, however: how does JetBrains plan to ensure backward compatibility if Kotlin reaches critical mass? The answer from the speaker:

JetBrains intends to allow for compiler extensions.

Food for thought (directly from the speaker): if JetBrains had known that Ceylon was being developed, it probably wouldn't have started the project.

Discovering JavaFX

OK, I'm a bit of a masochist considering the presales talk from IBM, but I'm really interested in GUIs, and perhaps JavaFX's next version is the right one (considering Flex's fate).
The talk begins with the speaker asking us to drop all misconceptions about JavaFX (good idea). JavaFX is a complement to Swing, SWT and HTML5. On Android and iOS, it works! The latest addition to the JavaFX ecosystem is the JavaFX Scene Builder.

JavaFX goals were to achieve:

  • a client API
  • a modular architecture (both for mobile and embedded)
  • a migration solution for UIs in Java (since Swing is more than 10 years old)
  • advanced development tooling
  • a cross-platform solution (across both OS and hardware type)

JavaFX 1.0 received bad feedback from developers: they didn't want to learn a new scripting language. The current version is JavaFX 2, available since 2011 on Windows (Mac OS and Linux are nearing GA). On the tooling side, NetBeans 7.1, 7.2 and Scene Builder are provided. The community has already taken care of some integration plugins in Eclipse, while JetBrains has made contact to integrate JavaFX development in IntelliJ IDEA.

Currently achieved goals include inclusion in the OpenJDK (through OpenJFX) and a common license with the JDK. For the latter, note that JavaFX will be part of JavaSE from JDK 8 on. Oracle's policy is to submit a JSR to the JCP for standardization.

With JavaFX, you can use the tools you want, benefit from all Java features (generics, annotations, etc.) or even use your preferred JVM language. In JavaFX 2, the FXML markup language is available for UI definition (just like MXML in Flex). Alternatively, you can still code your UI programmatically. FXML can be laced with scripting languages such as JavaScript and Groovy.

On the graphics side, the architecture (Prism) makes it possible to take the best advantage of hardware acceleration. The new windowing toolkit is called Glass; its use makes it possible to access advanced functions (buffering, etc.). Supported media formats include VP6, H.264, MP3 and AAC (very akin to supported HTML5 media formats). Moreover, JavaFX embeds the WebKit rendering engine, thus making it possible to interpret HTML and JavaScript in Java applications; bidirectional communication is possible between the Java and WebKit parts. With WebKit's usage, you get multiplatform testing for free.

Swing and SWT can be gradually migrated to JavaFX, through seamless integration of the latter in the existing UI. A big difference with Swing/SWT is the ability to use CSS to style your JavaFX components, hence you can reuse the same stylesheet on the web and on JavaFX. As a sidenote, there’s an already existing community around OpenJFX. Other contributions include Eclipse and OSGi plugins, DataFX (datasource access), ScalaFX and GroovyFX, frameworks (already!) and JFXtras (additional UI controls and extensions).

Last week, the JavaFX Scene Builder release was announced. It's a visual development tool for GUIs; drag-and-drop is supported. Integration with NetBeans comes out of the box, but it can be used with other IDEs (since it's a standalone tool). It's entirely developed in JavaFX and available for Mac OS X and Windows.

The roadmap is the following:

  • End 2012, JavaFX 2.0.1 bundled with JDK 7
  • First quarter 2012, JavaFX 2.1 GA for Windows (Linux in beta)
  • Mid-2012, GA for all OS
  • Mid-2013, JavaFX 3.0 included in the JDK 8

[Talk ends with a 3-tier demo, with the GUI developed in JavaFX with CSS, and Scene Builder demonstrating drag-and-drop]

Monotone speaker; I had to fight against sleep. On a more positive note, this wasn't a sales pitch and there were some interesting pieces of information. The speaker also presented an iPad running a JavaFX application using the hardware accelerometer.

Java caching with Guava by Charles Fry

The session is about the inner workings of caching. Guava is an OSS set of low-level Java libraries; the package contains a simple caching facility.

LoadingCache automatically loads entries when a cache miss occurs, while Cache doesn’t. The delegate CacheLoader interface knows how to load entries.

Concurrency is handled internally, very much like in ConcurrentHashMap. For exceptions occurring during load, you have to catch runtime exceptions and rethrow checked IOExceptions. For keys that are transient, the fluent weakKeys() method on the builder allows for weak references. In contrast, to allow the garbage collector to collect cached values, the softValues() method is used (there have been performance issues, however, so its use is discouraged).

  • To allow for cache eviction, the cache builder can be given a maximum size; entries will be evicted in approximate LRU order (evictions occur on write operations).
  • Alternatively, if the eviction strategy is based on weight, a Weigher interface lets you specify the weight computation; it is passed in while building the cache. In this case, eviction works the same way.
  • Another strategy is time-to-idle, achieved through the expireAfterAccess() method on the builder
  • Finally, the time-to-live eviction strategy is provided by the expireAfterWrite() method on the builder
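As a sketch of how these builder calls fit together (assuming Guava on the classpath; the loader here is a made-up one that just upper-cases its key):

```java
import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class EvictionDemo {

    static LoadingCache<String, String> buildCache() {
        return CacheBuilder.newBuilder()
                .maximumSize(100)                        // approximate-LRU, size-based eviction
                .expireAfterAccess(10, TimeUnit.MINUTES) // time-to-idle
                .build(new CacheLoader<String, String>() {
                    @Override
                    public String load(String key) {
                        return key.toUpperCase(); // invoked on cache miss
                    }
                });
    }

    public static void main(String[] args) {
        LoadingCache<String, String> cache = buildCache();
        System.out.println(cache.getUnchecked("devoxx")); // miss: loader runs, prints DEVOXX
        System.out.println(cache.getUnchecked("devoxx")); // hit: served from the cache
    }
}
```

getUnchecked() is used here instead of get() only to avoid dealing with the checked ExecutionException.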

The next step is to be able to analyze the cache: just use the recordStats() method. A CacheStats is returned by the stats() method on the cache object and can be queried. Having analyzed the cache, it's then possible to tweak the configuration: you create a string containing key-value pairs and pass it to the builder.
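The key-value string configuration is handled by the CacheBuilderSpec class; a small sketch (the spec string and the fromSpec() helper are illustrative):

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheBuilderSpec;

public class SpecDemo {

    static Cache<String, String> fromSpec(String spec) {
        // The spec string could come from a property file, tweakable without recompiling
        return CacheBuilder.from(CacheBuilderSpec.parse(spec)).build();
    }

    public static void main(String[] args) {
        Cache<String, String> cache = fromSpec("maximumSize=100,expireAfterAccess=10m");
        cache.put("hit", "ratio");
        System.out.println(cache.getIfPresent("hit")); // prints ratio
    }
}
```

The same keys as the fluent builder methods are accepted, which makes it easy to externalize tuning decisions.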

Guava also provides an event-listener model around the cache. Builders allow for listener registration through removalListener(), which takes a RemovalListener instance as a parameter. Of course, you can have multiple listeners by registering more than one object. A listener is called whenever an entry is evicted, whichever strategy is used. By default, listeners are called synchronously: it's advised to make heavyweight listeners asynchronous (by wrapping them with RemovalListeners.asynchronous()).
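A sketch of listener registration (the absurdly small maximum size is only there to force an eviction):

```java
import java.util.concurrent.atomic.AtomicInteger;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;
import com.google.common.cache.RemovalNotification;

public class ListenerDemo {

    static int countEvictions() {
        final AtomicInteger evictions = new AtomicInteger();
        Cache<String, String> cache = CacheBuilder.newBuilder()
                .maximumSize(1) // tiny cache, to trigger size-based eviction
                .removalListener(new RemovalListener<String, String>() {
                    @Override
                    public void onRemoval(RemovalNotification<String, String> notification) {
                        evictions.incrementAndGet(); // called whatever the eviction strategy
                    }
                })
                .build();
        cache.put("a", "1");
        cache.put("b", "2"); // evicts "a": evictions happen on write operations
        cache.cleanUp();     // flush any pending removal notifications
        return evictions.get();
    }

    public static void main(String[] args) {
        System.out.println(countEvictions()); // prints 1
    }
}
```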

Guava also provides two different ways for refreshing the cache:

  • manual refresh, by calling the refresh() method on the cache; the underlying reload() can be implemented asynchronously
  • automatic refresh, by calling the refreshAfterWrite() method on the builder

Asynchronous refreshes should be implemented to avoid blocking user threads. Just use Java's FutureTask.

Sometimes it's more efficient for a cache loader to load a set of entries at once rather than one at a time. This is accomplished by overriding CacheLoader.loadAll() and calling LoadingCache.getAll(). The latter does not block multiple requests for the same key.
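A sketch of batch loading, with a hypothetical loader that "computes" key lengths (imagine a database round-trip instead):

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ExecutionException;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class BulkLoadDemo {

    static final LoadingCache<String, Integer> LENGTHS = CacheBuilder.newBuilder()
            .build(new CacheLoader<String, Integer>() {
                @Override
                public Integer load(String key) {
                    return key.length(); // single-key fallback
                }

                @Override
                public Map<String, Integer> loadAll(Iterable<? extends String> keys) {
                    // One "round-trip" for all the missing keys instead of one per key
                    Map<String, Integer> result = new LinkedHashMap<String, Integer>();
                    for (String key : keys) {
                        result.put(key, key.length());
                    }
                    return result;
                }
            });

    static Map<String, Integer> lengthsOf(String... words) {
        try {
            return LENGTHS.getAll(Arrays.asList(words)); // delegates to loadAll() for missing keys
        } catch (ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(lengthsOf("foo", "quux"));
    }
}
```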

So far, the cache has been fed by the cache loader. But how can we handle use cases where the cache must be populated manually? This is simply done with getIfPresent() to query whether a key is present and put() to insert an entry into the cache.

In order to implement non-loading caches, we don't need a CacheLoader at all: calling build() without one yields such a cache. In order to disable caching, the current canonical way is to set maximumSize to 0. This can be done at runtime through the CacheBuilderSpec class.
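A sketch of a manually populated, non-loading cache:

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class ManualCacheDemo {

    public static void main(String[] args) {
        // No CacheLoader passed to build(): this is a non-loading cache
        Cache<String, String> cache = CacheBuilder.newBuilder().build();

        System.out.println(cache.getIfPresent("key")); // prints null: nothing cached yet
        cache.put("key", "value");                     // manual population
        System.out.println(cache.getIfPresent("key")); // prints value
    }
}
```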

Caches can be iterated over like maps with Cache.asMap(). All ConcurrentMap write operations are implemented, but their use is discouraged (use a CacheLoader or a Future instead). Be warned that both get() methods have similar signatures but different semantics (Map.get() is in fact a “get if present”).

Future works include:

  • AsyncLoadingCache, returning a Future<V> on get(K)
  • A CacheBuilder.withBackingCache() to provide cache layering (such as L1 + L2)
  • Performance optimizations

A very nice talk. ‘Nuff said. A bit disappointed by the Guava team's lack of knowledge of the Java Caching API (a JSR), though.

.Net for the Java developer, an inspiration by Cyrille Martraire and Rui Carvalho

This talk is about what .Net is and what's lacking in Java.

For a Java developer, .Net was something akin to the blue screen of death. From a more rational point of view, let's remember that Sun and Microsoft started out as neighbors.

  1. Sun created a superb tool named Java and thought about putting it into Mosaic (those were applets)
  2. Microsoft responded with its own alternative, ActiveX. To go beyond that, it added Active Server Pages (ASP) to generate HTML pages dynamically
  3. Sun answered with the JavaEE platform
  4. Disappointed by Sun's success, Microsoft delivered .Net in turn

Remarkably, Sun took a very long time to ship Java 5. The .Net framework matched Java's features in version 2.0 and soon went beyond them. .Net principles are exactly the same; only some coding standards are somewhat different. .Net is polyglot from the start (C#, IronRuby, IronPython, F#, …). In order to seduce C++ programmers, Microsoft provided low-level APIs.

Some features are really better in C# than in Java (or, more precisely, already available):

  • The yield keyword lets us avoid creating temporary collections when returning such a result
  • Ranges are provided through the in keyword
  • Functions are first-class citizens and inline functions are possible (lambdas and, guess what, the => operator). Moreover, lambda parameters are statically typed, hence we can create fluent APIs
  • C# provides anonymous objects, while Java makes us create new types even when we only need them in a single place
  • LINQ is a DSL that is a SQL look-alike
  • With C# 4, dynamic gives developers a special extensible type (not a dynamic proxy) [I confess I didn't understand all of this, apart from the benefit of not creating a data structure when there's no need]
  • Razor is a hybrid syntax between HTML and C#
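As an illustration of what yield spares us, here is the boilerplate a lazy sequence takes in Java (a sketch with a hypothetical squares() generator; no temporary collection is built, values are produced on demand):

```java
import java.util.Iterator;

public class YieldDemo {

    // The squares of 0..n-1, computed lazily — what C#'s yield does in two lines
    static Iterable<Integer> squares(final int n) {
        return new Iterable<Integer>() {
            @Override
            public Iterator<Integer> iterator() {
                return new Iterator<Integer>() {
                    private int i = 0;

                    @Override
                    public boolean hasNext() {
                        return i < n;
                    }

                    @Override
                    public Integer next() {
                        int value = i * i; // produced on demand, nothing stored
                        i++;
                        return value;
                    }

                    @Override
                    public void remove() {
                        throw new UnsupportedOperationException();
                    }
                };
            }
        };
    }

    public static void main(String[] args) {
        for (int square : squares(4)) {
            System.out.print(square + " "); // prints 0 1 4 9
        }
    }
}
```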

On the portability side, the Mono project lets developers program on and deploy to Linux and Mac OS X. Still, in order to maximize decoupling, do not forget to avoid proprietary products (SQL Server, for example). Though .Net is seen as autocratic, alternative communities exist, such as ALT.NET.

As a conclusion, here are some messages:

  • Do not forget that Java is not the JVM; look at other languages while keeping your environment
  • The language is only a small part of development; developers share much more (patterns, GUIs, xDD, etc.)

A definitely good presentation on the benefits brought by looking over the gap, thanks to the well-oiled tongue-in-cheek exchanges between the two speakers (real pleasant teamwork, guys).

And that's the end of Devoxx France 2012. See you next year, guys, and thanks to everyone for all the great time there.

Categories: Event Tags: