MongoDB course, thoughts and feedback

June 16th, 2013 4 comments

I’m afraid I traded my ability to understand SQL for the ability to understand Object-Oriented Programming a long time ago. That’s why I have never been at ease with databases in general and SQL in particular. Given the not-so-recent NoSQL trend, I thought it was time to give it a try. When I became aware that 10gen – the company behind MongoDB – was making free online courses available at regular intervals, I jumped at the chance, and last month I registered for M101J: MongoDB for Java Developers.

Yesterday, I finished the week 5 assignments. Even though the course isn’t over yet, it’s time for some feedback, on both the course and MongoDB.

On the bright side:

  • First, I’d like to thank 10gen for making those courses freely available. Of course, it’s not without ulterior motives, but the world would be a better place if other software vendors did the same.
  • Also, the course is very practical and IMHO well adapted to a software engineer’s day-to-day work. I previously followed Coursera’s Scala course and disliked it because I felt it was too theoretical: homework examples were about algorithms and had no roots in the real world. Compared to the Scala course, this one is much more pragmatic and relevant to me.

There are some downsides, though:

  • The course is said to be “for Java Developers”. Unfortunately, some – if not most – of the lessons and their associated homework are about the MongoDB shell, which is plain JavaScript. I would have preferred two different courses, one for the shell and one for the Java API.
  • Speaking of the Java API, to say it’s a thin wrapper is an understatement, for it’s only a literal transcription of JavaScript shell commands into the Java language.
    I know API design is hard, but come on guys: no enumerations, no constants, only Strings! I was expecting more in 2013 than JDBC reinvented, complete with String concatenation (see the sketch after this list)… Now, here’s what someone who has been around a few years can expect if MongoDB becomes widespread:

    1. Productivity will be low, there will be tons of bugs, developers will complain
    2. Some smart people/company will provide a usable wrapper API around the native one (in fact, Spring Data already does)
    3. MongoDB will learn its lessons and provide its own improved API, based on its competitor’s, but not quite like it
    4. And it will be the whole JDBC to Hibernate vs JPA war all over again!
  • SQL is not what I would call readable, but that is to be expected from a technology that started some 40 years ago. I have higher expectations for a technology only a few years old, but sadly, they remain unmet. Standard MongoDB queries are not a model of usability from a developer’s point of view, but the palm goes to the so-called aggregation framework. As an illustration, this Gist is the answer to one of this week’s homework assignments (obfuscated, of course).
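
As an illustration of the Java API point above, here is a minimal sketch of a query written with the 2.x Java driver (database, collection and field names are made up for the example): every key and every operator is a plain String.

import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class GradesQuery {

    public static void main(String[] args) throws Exception {
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("school");
        DBCollection students = db.getCollection("students");
        // "type", "score" and "$gte" are all plain Strings: no constants, no enumerations
        DBObject query = new BasicDBObject("type", "homework")
                .append("score", new BasicDBObject("$gte", 80));
        DBCursor cursor = students.find(query).sort(new BasicDBObject("score", -1));
        try {
            while (cursor.hasNext()) {
                System.out.println(cursor.next());
            }
        } finally {
            cursor.close();
            client.close();
        }
    }
}

The equivalent shell query is a JSON one-liner, which is exactly why the Java version feels like a literal transcription.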

This feedback is based on my own experience and is completely personal. I would recommend anyone even remotely interested in NoSQL to take those courses and make up their own mind. The next session is planned for July 29th.

Categories: Java Tags: , ,

There is no such thing as an absolute best architecture

June 9th, 2013 No comments

I was surprised this week by a colleague’s comment regarding how a piece of software had been architected. He found it too procedural and not object-oriented enough, despite the fact that the code performed exactly as needed. I would have expected this from a junior developer, not from a seasoned one.

Hey guys, may I kindly remind you that we’re not in university anymore? When designing an architecture, there are more things to take into account than just technical parameters and personal preferences (not to mention the fact that the hype is now about FP).

Let’s muse a little and, well… ask ourselves: what is the best transportation mode?

Most people would agree that the plane is the fastest and safest transportation mode. By flying, we can reach remote islands with unmatched speed. And yet, not every place has a nearby airport. What about the waiting time at airports? And the whole bunch of unpleasant but necessary security measures? Not to mention the huge environmental cost and the need for top-notch technology and pilots…

OK, let us forget about remote islands and use a more common means of transportation: the car. Cars are everywhere, and so are roads and gas stations. Car mechanics are easily available, and driving a car is learned in a dozen hours. From this point of view, the car may seem like the best way to travel there is. Some downsides cannot be ignored, though: there is traffic congestion in and between cities, you have to park your vehicle somewhere, and cars have a major impact on urban pollution – and filth – just to mention a few.

In order to get around those problems, let’s use a bike. Bikes are cheap, they are green, daily exercise is good for your health, and you can even take your bike along in cars and trains. Now, if you’re a biker, you know about the delicate pleasure of biking in the rain. Biking is also no good if your work requires a suit, or if you need a shower before entering the office. A bike is of no use if you need to transport more than a few things. It’s also worth mentioning that bikes are easily stolen.

I hope my point is clear by now: there’s no such thing as the best transportation mode. There are many, and each one is adapted to a specific context.

In software projects, context is made up of project planning, skills availability, team maturity, development costs, application lifetime and countless other factors! Just as you wouldn’t call people names for using their cars instead of trains, you shouldn’t judge a piece of software (and the people behind it) “as is” but only in its relevant context. And IMHO, academic “purity” has a very low priority in real-life projects.

Categories: Technical Tags:

Modularity in Spring configuration

June 2nd, 2013 1 comment

The following goes into more detail about what I already ranted about in one of my previous posts.

In the legacy Spring applications I have to develop new features in, I regularly stumble upon a big hindrance that slows down my integration-testing effort. This hindrance – and I’ll go as far as calling it an anti-pattern – is putting every bean in the same configuration file (whether in XML or in Java).

The right approach is to define at least the following beans in a dedicated configuration component:

  • Datasource(s)
  • Mail server
  • External web services
  • Every application dependency that doesn’t fall into one of the previous categories

Depending on the particular context, we should provide alternate beans in test configuration fragments. Those alternatives can be either mocked beans or test doubles. In the first case, I would suggest using the Springockito framework, based upon Mockito; in the second case, the tool depends on the specific resource: an in-memory database (such as HSQLDB, Derby or H2*) for the datasource and GreenMail for the mail server.
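
As an illustration, here is a minimal sketch of such a test-only fragment using Java configuration (the class name and the schema.sql script are hypothetical), declaring an in-memory H2 datasource in place of the production one:

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

@Configuration
public class TestDataSourceConfiguration {

    @Bean
    public DataSource dataSource() {
        // In-memory H2 database, created and populated for tests only
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("classpath:schema.sql")
                .build();
    }
}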

It is the responsibility of the application and of each test to assemble the different configuration fragments required to initialize the full-fledged Spring application context. Note that coupling fragments together is also not a good idea.

* My personal preference goes to H2 these days

Categories: Java Tags:

The next Gutenberg moment is now

May 12th, 2013 No comments

Johannes Gutenberg and his printing press

This article should be considered as my weekly ramblings, nothing more.

Some time ago, I stumbled upon the term “Gutenberg moment” (and I’m very sorry because I don’t remember where). I googled it quickly and here’s a definition I found:

A Gutenberg moment is one which changes the way we produce and consume text as dramatically as Gutenberg’s machine did.
Dennis Baron

Reading this definition triggered this article: I have already written about the value and cost of knowledge, and this is somewhat related, but it brings a historical light to it (real historians will pardon my many mistakes: corrections welcome).

Anyway, what is so important about the invention of the printing press? In order to understand that, we have to forget how easy it is to buy a book today and remember how things were before.

In those times known as the Dark Ages, books were written by hand by so-called copyist monks dedicated to this task. In an era where reading and writing were uncommon, only a small subset of the educated produced books… which, of course, kept the number of copies produced extremely low.

The good part of manual copying is that each book was unique, somewhat akin to a piece of craftsmanship; the bad part is that the manual copying process introduced many divergences, or even errors, into the copied books.

I won’t even mention the fact that since monks were responsible for producing books, they probably put a higher priority on sacred books and a lower one on profane works.

In the end, there were a couple of obstacles to getting your hands on a specific book: basically, it boiled down to you going to the book or the book (or a copy) coming to you. In either case, you would have needed to be very rich to afford it.

Gutenberg’s printing press “just” enabled the mass production of books; this led to a huge decrease in book prices. In turn, this had important consequences:

  • Whereas formerly one would first have to know the location of a particular book, the printing press shrank the problem down to knowing about the book itself. From then on, one could just go to the library or order the book.
  • A mechanical process didn’t completely erase errors – in fact, an error on the master led to as many erroneous copies – but it made incremental and iterative improvements possible. Errors could be reported and then corrected on the master, improving future printings.
  • Taking book creation out of the hands of a select few and putting it into a larger circle produced books that were not only outside the religious scope, but sometimes even contrary to the then-predominant Catholic doxa. Knowledge previously restricted to oral transmission could finally be written down.
  • Last but certainly not least, the mass production of books enabled the networked communication of ideas. Formerly, you could perhaps write a book to spread your ideas, but centuries would pass before it became well known. With the printing press, you could communicate with all of Europe within your lifetime, and even be contradicted by other books and have a chance to read them!

Those effects took decades or even centuries to manifest, but they were powerful enough. And they were just the result of moving from manual production to mass production – not only of books, but of content, of knowledge.

Of course, the way books are printed has improved drastically since Gutenberg’s time. It has become an industrial process instead of a craft: easier, faster and thus cheaper… but the fundamental alchemy has stayed the same for centuries.

Now, it is my opinion that we are blessed, because we are experiencing another Gutenberg moment, and its effects are already apparent. I’m referring to the digitalization of books, from material to intangible. And the consequences of the former Gutenberg revolution are multiplied a thousandfold:

  1. Places where books are not available, or cannot be kept (because of the environment or of policies), can access content nonetheless. In Africa, mobile has done more for women’s education than years of NGO efforts ever could.
  2. You don’t need to wait for errata: content is changed online, so each connection may bring a new (and hopefully better) version.
  3. Anyone (even me) can write his/her own book(s), spread his/her ideas, and so on. Granted, for some ideas this is definitely not cool, but let the majority be the judge of that. Rare or out-of-print books can find a new, eternal life online, cared for by specialists and experts.
  4. Communication is even faster than before: you can publish as soon as you’ve typed (as I do on this blog). There’s no need for a lengthy physical process. Writing means publishing!

Consider that this is only infrastructure. Things will be what we mold them into, for better or worse.

Categories: Miscellaneous Tags:

Deployit, deployment automation made easy

May 5th, 2013 2 comments

Two weeks ago, I attended the first Swiss JDuchess workshop in Geneva. It was about Deployit, a piece of software that enables continuous deployment. I had already been introduced to it at Devoxx France 2012, and it had been a surprise… a very good one.

Unfortunately, the workshop was a failure, at least for me: I couldn’t import the provided Virtual Machine. Given the very positive feedback of the other attendees, I decided to run it some time later at home. This time, it worked like a charm.

By the way, congrats to Benoit Moussaud, Technical Director at XebiaLabs France, because everything in the workshop worked flawlessly. I’ve attended paid trainings with material of much worse quality.

Deployit is based on the following core concepts:

  • Deliverables are versioned artifacts to be deployed, e.g. a WAR, an EAR, a ZIP archive of web resources, a SQL script, …
  • Environments are the containers artifacts run in, e.g. a servlet container, an application server, a web server, a database, …
  • Bindings map a deliverable to an environment
  • Dictionaries resolve placeholders, so that the same deliverable can be deployed in different environments

There’s no magic in how Deployit works: it uses existing parts of modern development environments, most notably CI servers. The only requirement is a dedicated Maven project that creates so-called DAR archives, which are nothing more than ZIP files wrapping all the resources needed for a deployment. A dedicated Jenkins plugin takes care of providing the produced artifact to the Deployit console.

At that point, only one action is necessary: binding each resource to a container (or to multiple containers). For example, I would map the WAR to Tomcat and the SQL scripts to MySQL. Even better, if matching tags are set on both the deliverable and the environment, this step becomes optional. Providing the right dictionary for the deployment plan may also be needed (see above).

Deployit admin console

Now the Jenkins plugin is able to see the plan, so we can launch it as part of the build process! In effect, this means we have achieved a fully automated build pipeline, from compilation to deployment, including all the needed tests.

Of course, there are some more features provided by Deployit (for example, launching a plan from the CLI), but those are only nice-to-have, not core. If you want to see more, I suggest heading toward the product page.


  1. Deployit is (unfortunately) not OpenSource
  2. I don’t know about any Deployit competitor. Hints welcome as I think competition is a sure way to up the ante…

To go further

Categories: Technical Tags: ,

Integration tests from the trenches

April 28th, 2013 No comments

This post is the written form of one of my submissions for Devoxx France 2013. As it was only chosen as a backup, I lacked the necessary motivation to prepare it. The subject is important though, so I finally decided to write it down.

In 2013, if you’re a standard developer, it is practically a given that you test your code. Whether you’re a practitioner of TDD or write your tests afterwards, most developers realize that a robust automated test harness is not optional but mandatory.

Unit Test vs Integration Test

There’s something implicit behind the notion of ‘test’, though. Depending on the person you ask, there may be differences in what a test means. Let us provide two definitions to prevent potential misunderstandings.

Unit test
A unit test tests a method in isolation. In this case, the unit is the method. Outside dependencies should be mocked to provide known inputs. These inputs can be set to different values, including border cases, to check all code branches.
Integration test
An integration test tests the collaboration of two (or more) units. In this case, the unit is the class. This means that as soon as you want to check the execution result of more than one class, it is an integration test.

On one hand, achieving 100% UT success with 100% code coverage is not enough to guarantee a bug-free application. On the other hand, a complete IT harness is not feasible in the context of a real-world project, since it would cost too much. The consequence is that both are complementary.

Knowing the right IT/UT ratio is part of being a successful developer. It depends on many factors, including some outside the purely technical, such as team skills and experience, application lifetime and so on.

Fail-fast tests

IT are inherently slower than UT, as they require some kind of initialization. For example, in modern-day applications, this directly translates into some DI framework bootstrap.

It is highly unlikely for IT to succeed while UT fail. Therefore, in order to have the shortest feedback loop in case of failure, it is a good idea to execute the fastest tests first, check whether they fail, and only then execute the slowest ones.

Maven provides a way to achieve this with the following plugin combination:

  1. maven-surefire-plugin to execute unit tests; convention is that their names end with Test
  2. maven-failsafe-plugin to execute integration tests; names end with IT

By default, the former is executed during the test phase of the Maven build lifecycle. Unfortunately, configuring the latter is mandatory:
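
A minimal configuration looks like the following (the version number is only an example of one current at the time of writing):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.14.1</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>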


Note that the Maven mojo is already bound to the integration-test phase, so there is no need to configure that explicitly.

This way, UT – whose name pattern is *Test – are executed during the test phase, while IT – whose name pattern is *IT – are executed during the integration-test phase.

System under test boundaries

The first step before writing IT is to define the SUT boundaries. Inside lies everything that we can manage; outside, the rest. For example, the database clearly belongs to the SUT because we can easily provide test data, whereas a web service does not.

Inside the SUT, mocking or initializing subsystems depends on the needed/wanted integration level:

  1. To UT a use-case, one could mock DAOs in the service class
  2. Alternatively, one could initialize a test database and use real world data. DBUnit is the framework to use in this case, providing ways to initialize data during set up and check results afterwards.

Doing both lets us test corner cases with the mocking approach (faster) and the standard case with the initialization one (slower).
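
As a minimal sketch of the mocking approach (PersonService, PersonDao and Person are hypothetical names), a corner case can be checked without any database, using Mockito and TestNG:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.testng.Assert.assertTrue;

import java.util.Collections;
import org.testng.annotations.Test;

public class PersonServiceTest {

    @Test
    public void shouldReturnEmptyListWhenDaoFindsNothing() {
        // The DAO lies inside the SUT but is mocked to simulate the corner case
        PersonDao dao = mock(PersonDao.class);
        when(dao.findAll()).thenReturn(Collections.<Person>emptyList());
        PersonService service = new PersonService(dao);
        assertTrue(service.listAll().isEmpty());
    }
}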

Outside the SUT, there is an important decision to make regarding IT. If external dependencies are not up and stable most of the time, the build will break too often and throw too many red herrings. Those IT will probably be deactivated very quickly.

In order to still benefit from those IT without them being ignored (or even worse, removed), they should be put in a dedicated module outside the standard build. When using a continuous integration server, you should set up two jobs: one for the standard build and one for those dependent IT.

In Maven, this is easily achieved by creating a module at the parent root and not referencing it in the parent POM. This way, running mvn integration-test will only launch tests located in the standard modules, those that are reliable enough.

Fragile tests

As seen above, fragility in IT comes from dependency on external systems. Those fragile tests are best handled as a separate module, outside the standard build process.

However, another cause of fragility is instability, and in any application there’s one layer that is fundamentally unstable: the presentation layer. Whatever the chosen technology, this layer is exposed to end users and you can be sure there will be many changes there. Whereas your service APIs are your own, the GUI is subject to users’ whims, period.

IT that use the GUI, whether dedicated GUI tests or end-to-end tests, should thus also be considered fragile and isolated in a dedicated module, as above.

My personal experience, coupled with some writings by Gojko Adzic, taught me to bypass the GUI layer and start my tests at the servlet layer. Whatever your stack, Spring Test provides a bunch of fakes for the Servlet API.
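
Here is a minimal sketch of such a servlet-layer test using Spring Test’s MVC test framework (PersonController and the /persons mapping are hypothetical):

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.testng.annotations.Test;

public class PersonControllerIT {

    @Test
    public void listPageShouldAnswerOk() throws Exception {
        // No servlet container and no browser involved: request and response are fakes
        MockMvc mockMvc = MockMvcBuilders.standaloneSetup(new PersonController()).build();
        mockMvc.perform(get("/persons")).andExpect(status().isOk());
    }
}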

Tests ordering

Developers unfamiliar with IT will probably be more than astounded by this section title. Indeed, UT should be context-independent and should never be ordered.

For IT, things are a little different: I like to define some IT as user stories. For example, the user logs in, then performs some action, and then another. Those steps can of course all be defined in the same test method.

The downside is that if the test fails, we have no easy way of knowing what went wrong and during which step. We can remedy that by isolating each step in its own method and ordering those methods.

TestNG – a JUnit offshoot fully integrated with Maven and Spring – lets us do that easily:

import org.testng.annotations.Test;

public class MyScenarioIT {

    @Test
    public void userLogsIn() { /* log the user in */ }

    @Test(dependsOnMethods = "userLogsIn")
    public void someAction() { /* first step of the scenario */ }

    @Test(dependsOnMethods = "someAction")
    public void anotherAction() { /* second step of the scenario */ }
}

This way, when an error occurs or an assert fails, the log displays the faulty method: we just need to give each method an adequate name.

Framework-specific UT

Java EE in-container UT

Up until recently, automated UT were executed independently of any container. For Spring applications, this was no big deal, but for Java EE-intensive applications, you had to fake and mock dependencies. In the end, there was no guarantee that really running the application inside the container would produce the expected results.

Arquillian brought a paradigm shift, with the ability to write tests that exercise Java EE applications inside the container. If you do not use it already, know that Arquillian lets you automate the creation of the desired archive and its deployment on one or more configured application servers.

Those servers can either be existing ones, or be downloaded, extracted and configured during test execution. The former category needs to be segregated in a separate module so as to prevent breaking the build, while the latter can safely be assimilated to regular UT.
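
As a minimal sketch of what such a test looks like with the JUnit runner (Greeter is a hypothetical CDI bean, and a target container has to be configured in arquillian.xml):

import javax.inject.Inject;
import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class GreeterIT {

    @Deployment
    public static JavaArchive createDeployment() {
        // Arquillian deploys this archive on the configured server before running the test
        return ShrinkWrap.create(JavaArchive.class)
                         .addClass(Greeter.class)
                         .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private Greeter greeter;

    @Test
    public void shouldGreet() {
        Assert.assertEquals("Hello Devoxx", greeter.greet("Devoxx"));
    }
}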

Spring UT

Spring Test provides a feature to assemble different Spring configuration fragments (either XML files or Java classes). This means that by designing our application with enough modularity, we can combine the desired production-ready and test-only configuration fragments.

As an example, by separating our datasource and DAO into two different fragments, we can reuse the regular DAO fragment in tests and add a test fragment declaring an in-memory database. With Maven, this means having the former in src/main/resources and the latter in src/test/resources.

import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;

@ContextConfiguration(locations = {"classpath:datasource.xml", "classpath:dao.xml"})
public class MyCustomUT extends AbstractTestNGSpringContextTests {
    // test methods autowire beans from the assembled fragments
}

This is the big picture of my personal experience with integration testing. As always, these shouldn’t be seen as hard and fast rules, but should be adapted to your specific project context.

However, the tools listed in this article should be a real asset in all cases.

Design by contract and bean validation

April 21st, 2013 4 comments

I must confess that despite all the benefits of defensive programming, I usually limit myself to not exposing mutable attributes to the outside world. Why is that? I believe it is mostly because of readability. Have a look at the following piece of code:

public List getPersons() {
    return Collections.unmodifiableList(persons);
}

Easy enough to read and understand. Now, consider a design-by-contract approach, where I want to enforce pre-conditions, i.e. conditions that have to be met for the code to run.

public void sendEmail(String email) {
    // Real code after that
}

Not really hard. But now, I want to surface-check the email so as not to query the JNDI tree for an SMTP server for nothing.

public void sendEmail(String email) {
    Validator validator = new EmailValidator(email);
    // check the email with the validator here
    // Real code after that
}

It’s only a bit harder to read… for now. Should I have more than one parameter and multiple checks per parameter, complexity would quickly grow out of hand – and that is only for pre-conditions! Post-condition checks would take place at the end of the method and disturb readability even more. Besides, the outside world doesn’t know about those checks: they have to be written manually in the Javadoc and kept in sync with the code.

What if those limitations could be lifted? Would I be more inclined to use design by contract? I do believe so. And when I attended Emmanuel Bernard’s talk at Devoxx France, I couldn’t believe usage would be so simple. My goal can be reached by using only the Bean Validation API 1.1 (aka JSR-349). This newer version reuses the Bean Validation 1.0 (aka JSR-303) annotations, but where the previous version only allowed attributes to be annotated, now methods can be too. To put it more generally: before, state was validated; now, we can design by contract.

The previous method could be updated like so:

public void sendEmail(@Email String email) {
    // Real code after that
}

You can see for yourself: not only is it more readable, but it also communicates my intent to the method’s callers: this should be an email, not just any string. Icing on the cake, the JSR is open in every sense. You can contribute to the spec, to the implementation, to anything you like!

Killjoys will probably object that this is all nice and good, but that it is still a draft. That is true, except its scope is nothing like HTML5’s. Besides, Hibernate Validator was the Reference Implementation of the Bean Validation API 1.0, so it is only natural that it is also the case for version 1.1, with Hibernate Validator 5. In fact, version 4.2 already provides everything we need. If you’re willing to use Spring, it’s a breeze:

  1. Get the right dependencies in your POM
  2. Annotate your methods as above
  3. Add the right Spring bean post-processor:

    @Configuration
    public class SpringConfiguration {

        @Bean
        public static MethodValidationPostProcessor methodValidationPostProcessor() {
            return new MethodValidationPostProcessor();
        }
    }

  4. Enjoy! Really, it is over: you can use design by contract to your heart’s content.

Sources for this article can be found in Eclipse/Maven format here.

To go further: see relevant links in the article

Note: regular readers of this blog may have noticed I’m using Spring Java configuration whereas I was previously an advocate of XML configuration. Please read this post.

Categories: Java Tags:

The middle path approach

April 14th, 2013 No comments


I’ve been doing software development for most of my career, so I think of myself as a software developer (or software architect), even though I’ve dabbled in solutions engineering more than once. This surely has an effect on how I see the architecture landscape, but I’ll try to be as objective as possible.

Historically, there have been two approaches in providing software solutions to business requirements:

  • Either create custom software to address the requirements
  • Or purchase an out-of-the-box solution to do the same

My software-developer side was formerly more in favor of the first option. I thought that it provided more flexibility than a vendor package. During my professional life, I’ve come to realize that management sits on the opposite side, viewing out-of-the-box packages as less risky and less expensive. Even if I concede the point on risk, there are numerous cons to purchasing such a solution:

  1. All the projects I encountered only planned for license costs. However, users generally ask for numerous adaptations that require a lot of configuration
  2. Some, if not most, of those adaptations require more than configuration – they require hacking the product itself, which costs even more
  3. Changing the product has a great influence on maintenance costs, especially regarding upgrades: you’ll have to patch the code again in the newer version, provided it is compatible
  4. Cutting back on the adaptation side will also have an effect on the business side and will require stronger change-management planning
  5. Most of the products used are provided by (really) big companies: you’ll also need to purchase the associated infrastructure in order to benefit from full vendor support (believe me, I’ve tried going the other way, to my chagrin)

Claiming that out-of-the-box solutions are cost-effective is therefore the biggest lie of all!

So basically, it’s either take the risky development road or the costly/coupled product one. Shouldn’t there be an alternative path that could address these issues?

Since the beginning of the year, I’ve been working for a company that I think provides such an alternative. It comes in the form of a product whose features include:

  • a basic platform developed with Java & Spring and providing a plugin architecture
  • an extensible business model
  • a service layer, designed around interfaces with out-of-the-box implementations that can be replaced through a custom plugin (see the sketch after this list)
  • a templated web layer, based on Spring MVC, that can be modified as needed
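
As an illustration of the service-layer point, here is a generic sketch of the idea (the names are purely hypothetical, not the actual product API): the platform ships a contract plus a default implementation, and a project plugin registers its own implementation instead, without touching platform code.

// Shipped by the platform, each type in its own file:
interface PriceService {
    double priceFor(String productCode);
}

class DefaultPriceService implements PriceService {
    public double priceFor(String productCode) {
        return 10.0; // out-of-the-box behaviour
    }
}

// Contributed by a project plugin and wired in place of the default bean:
class DiscountedPriceService implements PriceService {
    public double priceFor(String productCode) {
        return 8.0; // project-specific behaviour
    }
}

Since the platform only ever depends on the PriceService interface, upgrading the platform does not conflict with the project-specific implementation.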

IMHO, this is the best alternative I’ve ever seen. This way, you can get the whole shebang into production very quickly, adapt the system to your specific needs (either before going live or in incremental steps) and still be able to upgrade easily with each version. Software companies should probably invest in migrating their products to this kind of architecture.

The company is hybris, and I’m very happy to have joined their ranks since the start of this year.

Categories: Technical Tags: , ,

JavaScript namespace and restricted access

April 7th, 2013 1 comment

I began developing with JavaScript more than 10 years ago, in order to handle DOM updates on the client side. At the time, this usage was called DHTML and I was one of the few Java developers who had any interest in it. Still, JavaScript was considered a second-class language, something that was somewhat necessary but not professional enough.

Fast-forward: the year is 2013 and JavaScript is everywhere:

  • developers are using JavaScript both server- and client-side
  • HTML5 uses JavaScript for dynamic behaviour
  • Client-side data binding is featured by major JavaScript frameworks to build modern web applications

For my part, I’m still the same developer with regard to JavaScript. There have always been more important matters to pursue, and I find myself lost when looking at JavaScript’s most advanced features. At Devoxx France, I was extremely impressed by the HTML5 animation features, and I wanted to play with them in my spare time. I had to develop some JavaScript, with proper scoping and good isolation.

As opposed to Java, JavaScript provides global variables, and JavaScript functions are available from everywhere. Isolation is not built in; it has to be designed. The first rule is: do not use global variables. Global variables are a bug waiting to happen; encapsulation should be enforced, period.

Java also provides namespaces, a way to create different classes with the same name and still avoid collisions, even when using third-party libraries. The first way I stumbled upon to achieve this in JavaScript made my heart sink:

var CustomNamespace = CustomNamespace || {};

CustomNamespace.doThis = function() {
    // implementation
};

Not only do I find this extremely unwieldy from a developer’s point of view – members do not have to be defined in the same block – but it doesn’t provide isolation either: all members are publicly accessible.

Next, I found the Module pattern, which solves both problems: code is written inside a single block, and closures provide a way to offer both internal and public members:

var CustomNamespace = (function() {
    var varPrivate = 'only visible inside the closure';
    function doThatPrivate() { /* internal helper, never exposed */ }
    return {
        doThisPublic: function() { doThatPrivate(); },
        doThisPublic2: function() { /* another public member */ }
    };
})();

This syntax lets us encapsulate the variables and functions we want to keep private, while exposing publicly-available functions under a specific namespace. Usage is as follows:
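
CustomNamespace.doThisPublic();      // works: exposed through the returned object
// CustomNamespace.doThatPrivate();  // would fail: the helper never left the closure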


I find this syntax clear enough to write and easy to use, while providing a namespace and restricted access to only the desired members. Of course, being a newbie in this field, I haven’t encountered its disadvantages… yet.

Categories: Development Tags:

Devoxx France 2013 – Day 3

March 30th, 2013 1 comment

Classpath isn’t dead… yet by Alexis Hassler

Classpath is dead!
Mark Reinhold

What is the classpath anyway? In any code, there are basically two kinds of classes: those coming from the JRE, and those that do not (either because they are your own custom classes or because they come from third-party libraries). The classpath can be set either with the -cp argument when launching a Java class directly or, for a JAR, through its embedded MANIFEST.MF.

A classloader is a class itself. It can load resources and classes. Each class knows its classloader, that is, which classloader loaded it inside the JVM (e.g. sun.misc.Launcher$AppClassLoader). The classloader of JRE classes is not a Java type, so for those classes the returned classloader is null. Classloaders are organized into a parent-child hierarchy, with delegation from bottom to top if a class is not found in the current classloader.
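
A minimal sketch illustrating both points (the exact classloader names printed depend on the JVM):

public class ClassLoaderDemo {

    public static void main(String[] args) {
        // JRE class: loaded by the bootstrap classloader, so null is returned
        System.out.println(String.class.getClassLoader());
        // Application class: e.g. sun.misc.Launcher$AppClassLoader
        System.out.println(ClassLoaderDemo.class.getClassLoader());
        // Parent in the hierarchy, e.g. sun.misc.Launcher$ExtClassLoader
        System.out.println(ClassLoaderDemo.class.getClassLoader().getParent());
    }
}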

The bootstrap classpath can respectively be replaced, appended to or prepended to with -Xbootclasspath, -Xbootclasspath/a and -Xbootclasspath/p. This is a great way to override standard classes (such as String or Integer); it is also a major security hole. Use at your own risk! The endorsed directory is a way to override some APIs with a more recent version. This is the case with JAXB, for example.

A ClassCastException usually comes from the same class being loaded by two different classloaders (or more…), because classes are identified inside the JVM not by the class alone, but by the tuple {classloader, class}.

Classloaders can be developed and then set into the provided hierarchy. This is generally done in application servers or in tools such as JRebel. In Tomcat, each webapp has its own classloader, and there’s a parent unified classloader for Tomcat (which has the system classloader as its parent). Nothing prevents you from developing your own: for example, consider a MavenRepositoryClassLoader that loads JARs from your local Maven repository. You just have to extend URLClassLoader, as sketched below.
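
A minimal sketch of that MavenRepositoryClassLoader idea (the standard local repository layout is assumed, and error handling is omitted):

import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MavenRepositoryClassLoader extends URLClassLoader {

    public MavenRepositoryClassLoader(ClassLoader parent) {
        super(new URL[0], parent);
    }

    // e.g. addArtifact("junit", "junit", "4.11") adds ~/.m2/repository/junit/junit/4.11/junit-4.11.jar
    public void addArtifact(String groupId, String artifactId, String version) throws MalformedURLException {
        Path repository = Paths.get(System.getProperty("user.home"), ".m2", "repository");
        Path jar = repository.resolve(groupId.replace('.', '/'))
                             .resolve(artifactId)
                             .resolve(version)
                             .resolve(artifactId + "-" + version + ".jar");
        addURL(jar.toUri().toURL());
    }
}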

JAR hell comes from dependency management, or more precisely the lack thereof. Since dependencies are tree-like at development time but completely flat at runtime – i.e. on the classpath – conflicts may occur if no care is taken to eliminate them beforehand.

One of the problems is JAR visibility: you either have all of its classes available if the JAR is present, or none if it is not. The granularity is at the JAR level, whereas finer-grained visibility would be better. Several solutions are available:

  • OSGi has had an answer to these problems since 1999. With OSGi, JARs become bundles, with additional metadata set in the JAR manifest. These metadata describe visibility per package. From a pure dependency-management point of view, OSGi comes with additional features (services and lifecycle) that may seem overkill [I personally do not agree].
  • Project Jigsaw also provides this modularity (as well as modularity of the JRE classes) in the form of modules. Unfortunately, it has been delayed since Java 7 and will not be included in Java 8. Better to forget about it for the moment.
  • JBoss Modules is a JBoss AS 7 subproject, inspired by Jigsaw and based on JBoss OSGi. It is already available and comes with much lower complexity than OSGi. Configuration is done through a module.xml description file. This system is included in JBoss AS 7. On the negative side, you can use Modules either with JBoss or on its own, which prevents us from using it in Tomcat. An ongoing GitHub proof-of-concept achieves it though: it embeds the JAR module in the deployed webapp and overrides Tomcat’s webapp classloader.
    Several problems still exist:

    • Artefacts are not modules
    • Lack of documentation

Animate your HTML5 pages with CSS3, SVG, Canvas & WebGL by Martin Gorner

Within the HTML5 specification alone, there are 4 ways to add fun animations to your pages.

CSS 3 transitions come through the transition property. They are triggered by user events. Animations are achieved through the animation properties. Note the plural: you define keyframes and the browser computes the intermediate ones. 2D transformations – the transform property – include rotate, scale, skew, translate and matrix. As a piece of advice, timing can be overridden, but the default is quite good. CSS 3 also provides 3D transformations. Those are the same as above, but with X, Y or Z appended to the value name to specify the axis. The biggest flaw of CSS 3 animations is that they lack drawing features.
SVG not only provides vector drawing features but also out-of-the-box animation features. SVG is described in XML: SVG animations are much more powerful than CSS 3 ones, but also more complex. You’d better use a tool to generate them, such as Inkscape. There are different ways to animate SVG, all through sub-tags: animate, animateTransform and animateMotion. Whereas CSS 3 timing is acceptable out of the box, the default in SVG is linear (which is not pleasant to the eye). SVG offers timing configuration through the keySplines attribute of the previous tags. Both CSS 3 and SVG share a big limitation: animations are set in stone and cannot respond to external events, such as user input. When that is a requirement, the following two standards apply.
Canvas + JavaScript
From this point on, programmatic (as opposed to declarative) configuration is available. Beware that JavaScript animations come at a cost: on mobile devices, they will drain the battery. As such, know about the methods that let the browser stop animations when the page is not displayed.
WebGL + THREE.js
WebGL gives access to an OpenGL-like API (read: 3D), but it is very low-level. THREE.js provides a full-blown high-level API on top of it. Better yet, you can import SketchUp mesh models into THREE.js. In all cases, do not forget to apply the same optimization as with the 2D canvas and stop animations when the canvas is not visible.

Tip: in order not to worry about vendor prefixes, prefix.js lets us keep the original CSS and enhances it with prefixes at runtime. Otherwise, use LESS / SASS. Slides are readily available online, with the associated labs.
[I remember using the same 3D techniques 15 years ago when I learnt raytracing. That’s awesome!]

The Spring update by Josh Long

[Talk is shown in code snippets, rendering full-blown notes mostly moot. It is dedicated to new features of the latest Spring platform versions]

The features covered include:
  • JavaConfig equivalents of XML
  • Profiles
  • Cache abstraction, with CacheManager and Cache
  • Newer backend cache adapters (Hazelcast, memcached, GemFire, etc.) in addition to EhCache
  • Servlet 3.0 support
  • Spring framework code available on GitHub
  • Gradle-based builds [Because of incompatible versions support. IMHO, this is one of the few use-case for using Gradle that I can agree with]
  • Async MVC processing through Callable (threads are managed by Spring), DeferredResult and AsyncTask
  • Content negotiation strategies
  • MVC Test framework server
  • Groovy-configuration support. Note that all available configuration ways (XML, JavaConfig, etc.) and their combinations have no impact at runtime
  • Java 8 closures support
  • JSR 310 (Date and Time API) support
  • Removal of the need to set @PathVariable’s value, using a built-in JVM mechanism to get it
  • Various support for Java EE 7
  • Backward compatibility will still include Java 5
  • Annotation-based JMS endpoints
  • WebSocket aka “server push” support
  • Web resources caching

Bean validation 1.1: we’re not in Care Bears land anymore by Emmanuel Bernard

Everything written here is not yet set in stone: it has to be approved first. Bean Validation comes bundled with Java EE 6+, but it can also be used standalone.

Before Bean Validation, validations were executed in each different layer (client, application layers, database). This led to duplication as well as inconsistencies. The Bean Validation motto is something along the lines of:

Constrain once, run anywhere

1.0 has been released with Java EE 6. It is fully integrated with other stacks including JPA, JSF (& GWT, Wicket, Tapestry) and CDI (& Spring).

Declaring a constraint is as simple as adding a specific validation annotation. Validation can be cascaded, not only on the bean itself but also on embedded beans. Validation may also span more than one property, to check that two different properties are consistent with one another. Validation can be run on the whole set of constraints, but also on defined subsets of it, called groups. Groups are created through interfaces.

Many annotations come out of the box, but you can also define your own. This is achieved by putting the @Constraint annotation on a custom annotation. It references the list of validators to use when validating. Those validators must implement the ConstraintValidator interface.
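
A minimal sketch of a custom constraint (the @Iban annotation, its validator and the naive check are all made up for the example):

import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Documented
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
@Constraint(validatedBy = IbanValidator.class)
public @interface Iban {
    String message() default "not a valid IBAN";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class IbanValidator implements ConstraintValidator<Iban, String> {

    public void initialize(Iban constraintAnnotation) {
    }

    public boolean isValid(String value, ConstraintValidatorContext context) {
        // null is left to @NotNull; the length check below is deliberately naive
        return value == null || value.replace(" ", "").length() >= 15;
    }
}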

1.1 will be included in Java EE 7. The most important thing to remember is that it is 100% open. Everything is available on GitHub, go fork it.

Now, containers are in complete control of the creation of Bean Validation components, so they are natively compatible with CDI. Other DI containers, such as Spring, may also plug in their own SPI implementation.

The greatest feature of 1.1 is that not only properties can be validated, but also method parameters and method return values. Constructors being specialized methods, it also applies to them. Internally, this is achieved with interceptors. However, it requires an interception stack – CDI, Spring or any AOP framework – and comes with the associated limitations, such as proxies. This enables declarative Contract-Oriented Programming, with its pre- and post-conditions.


Devoxx France 2013 has been a huge success, thanks to the organization team. Devoxx is not only tech talks, it is also a time to meet new people, exchange ideas and see old friends.

See you next year, or at Devoxx 2013!

Thanks to my employer – hybris, who helped me attend this great event!

Categories: Event Tags: , , ,