Devoxx France 2013 – Day 1

March 28th, 2013 No comments

Rejoice people, it’s March, time for Devoxx France 2013! Here are some notes I took during the event.

Java EE 7 hands-on lab by David Delabassée & Laurent Ruaud

A hands-on lab by Oracle for good old-fashioned developers who want to check some Java EE 7 features by themselves.

This one, you can do it at home. Just go to this page and follow the instructions. Note you will need at least GlassFish 4 beta 80 and the latest NetBeans (7.3).

You’d better reserve a day if you want to go beyond copy-paste and really read and understand what you’re doing. Besides, you need some JSF knowledge if anything goes wrong (or a guru on call).

AngularJS by Thierry Chatel

The speaker comes from a Java developer background. He has used Swing in the past and since then, he has searched for binding features: a way to automate data exchange between model and views. Two years ago, he found AngularJS.

AngularJS is a JavaScript framework that comprises more than 40 kloc and weighs 77 kb minified. The first stable version was released one year ago, codenamed temporal-domination. The flagship application developed by Google with AngularJS is DoubleClick for Publishers. Other examples include OVH’s future management console and the YouTube application on the PS3. Its motto is:

HTML enhanced for web apps

What does “HTML enhanced” mean? Is it HTML6? The problem is that HTML was never designed to create applications: it is only meant to display documents and link between them. Most of the time, a one-way binding between Model and Template is used to create a view. Miško Hevery’s (AngularJS founder) point of view is that instead of trying to work around this limitation, we’d better add this feature to HTML.

So, AngularJS’s philosophy is to compile the View from the Template, and then two-way bind between View and Model. Using AngularJS is easy as pie:

Your name: <input type="text" ng-model="me">
Hello {{me}}

In short, AngularJS frees developers from writing too many JavaScript lines by hand.

The framework uses simple concepts:

  • watches around expressions (properties, functions, etc.)
  • dirty checking on events (keyboard, HTTP request, etc.)

Watches are re-evaluated on each dirty check. This means expressions have to be simple (i.e. computation results instead of the computations themselves). The framework is designed to handle up to 2000 simple watches. Note that standards (as well as user agents) are evolving: the next ECMAScript version will provide Object.observe() to handle about 50 times the current number of watches.

An AngularJS app is as simple as an ng-app directive on an element:

<!-- the original snippet was lost; a minimal ng-app declaration -->
<div ng-app>
    Your name: <input type="text" ng-model="me">
    Hello {{me}}
</div>

This lets us have as many applications as needed on the same page. AngularJS is able to create single-page applications, with browser navigation (bookmarks, next, previous) automatically handled. There’s no such thing as global state.

AngularJS also provides core concepts like modules, services and dependency injection. There’s no need to inherit from specific classes or interfaces: any object is available for any role. As a consequence, code is easily unit-testable; the preferred tool for this is Karma (formerly Testacular). For end-to-end scenario testing, a dedicated tool is also available; it is based on the framework itself and plays tests in the configured browsers. In conclusion, AngularJS is not only a framework but a complete platform with the right level of abstraction, so that the code you write is purely business code.

There are no AngularJS UI components, but many are provided by third-party projects like AngularUI, AngularStrap, etc.

AngularJS is extremely structuring: it is an opinionated framework and you have to code the AngularJS way. A tutorial is readily available to get you started, and short videos, each dedicated to a single focused theme, are available online.

Wow, this is the second talk I attend about AngularJS and it looks extremely good! My only complaints are that it follows the trend of pure client-side frameworks and that it is not designed for mobile.

Gradle, 30 minutes to change all by Sébastien Cogneau

In essence, Gradle is a Groovy DSL to automate builds. It is extensible through Java and Groovy plugins. Gradle builds on existing principles: it lets you reuse Ant tasks, it follows Maven conventions and it is compatible with both Ivy and Maven repositories.

A typical Gradle build file looks like this:

apply plugin: 'jetty'

version = '1.0.0'

repositories {
    mavenCentral()
}

configurations {
    codeCoverage
}

sonarRunner {
    sonarProperties {
        ...
    }
}

dependencies {
    compile 'org.hibernate:hibernate-core:3.3.1.GA'
    codeCoverage 'org.jacoco....'
}

test {
    jvmArgs '...'
}

task wrapper(type: Wrapper) {
    gradleVersion = '1.5-rc-3'
}

task hello(type: Exec) {
    description = 'Devoxx 2013 task'
    group = 'devoxx'
    dependsOn wrapper
    executable 'echo'
    args 'Do you have questions?'
}

Adding a plugin adds tasks to the build. For example, adding the jetty plugin provides the jettyStart task. Moreover, plugins have dependencies, so you also get tasks from dependent plugins.

Gradle can be integrated with Jenkins, as there is an available Gradle plugin. There are two options to run Gradle builds on Jenkins:

  • either you install Gradle and configure its installation on Jenkins. From this point, you can configure your build to use this specific install
  • or you generate a Gradle wrapper and only configure your build to use this wrapper. There’s no need to install Gradle at all in this case

Gradle’s power also lets you add custom tasks, such as the aforementioned hello task.

The speaker tells us he is using Gradle because it is so flexible. But that’s exactly the reason I’m more than reluctant to adopt it: I’ve been building with Ant for ages, then came to Maven. Now, I’m forced to use Ant again and it takes so much time to understand a build file compared to a Maven POM.

Space chatons, bleeding-edge HTML5 by Philippe Antoine & Pierre Gayvallet

This presentation demoed new features brought by HTML5.

It was too quick to assimilate everything, but the demos were really awesome. In particular, one of them used three.js, a 3D rendering library you should really take a look at. When I think it took ray tracing to achieve this 15 years ago…

Vaadin & GWT 2013 Paris Meetup

Key points (because at this time, I was more than a little tired):

  • My presentation on FieldGroup and Converters is available on Slideshare
  • Vaadin versions are supported for 5 years each. Vaadin 6 support will end in 2014
  • May will see the release of a headless version of Vaadin TestBench. Good for automated tests!
  • This Friday, Vaadin 7.1 will be out with server push
  • Remember that Vaadin Ltd also offers Commercial Support (and it comes with a JRebel license!)
Categories: Event Tags:

KISS your architecture

March 17th, 2013 No comments

The project I’m working on these days is not properly “legacy” but has seen some twists that render it less than ideal.

On this project, one of the worst points, and an obstacle to developing a simple feature, was the layered architecture. “What”, shout all experienced developers, “layered architecture is at the root of maintainability!” and I agree wholeheartedly. So, how could layered architecture be an obstacle?

  1. First, adding too many layers kills layering. I’ve always been a proponent of keeping the layer count low – especially regarding DTOs – when extra layers are not strictly necessary. Too many layers and you not only add complexity at development time but also performance overhead at runtime.
  2. Second, mapping is extremely important. If you completely change attribute names from one layer to another, newcomers are bound to endlessly scratch their heads.
  3. In all cases, the point of those layers is to separate code. So, keep it that way.
  4. Finally, if you need to deviate from one (or all) of the former points, document it!

In my case, I stumbled upon an aggregation of all 4 points, as well as a previously unknown component (at least to me), jQuery template. This jQuery plugin is used to handle AJAX (but this is just a coincidence, as I suspect you could replace it with any equivalent and still get unmaintainable code).

Though I understand the intent to help with JavaScript by adding one more plugin, I tend to frown on how the client side is actually managed: it isn’t. Now that we finally have a decent build process including quality metrics (Jenkins + Sonar), it is utter nonsense that developers and architects run like mad toward JavaScript without putting an equivalent infrastructure in place – yes, it is possible. So much for point 1.

Server-side, Spring MVC is the framework. A controller returns DTOs which are serialized through a JSON converter. Guess what: the JSON sent back by the server is mapped to another JSON entity client-side, one that has all the fields of the original, but not all with the same name. Point 2 just makes it harder to understand what happens, and finding usages on the client side means full-text search. Welcome to your new home… That was point 2.

I won’t pretend I understand all intricacies of jQuery Template, so I’ll just be factual about code I saw: it’s completely against all I know to create HTML on the fly by aggregating HTML snippets (including div with CSS class attribute) and parameter values. This is even worse if it takes more than 20 lines of strings, even though they are neatly aligned. Now, you’re really asking for pain, because not only is your JavaScript code untested, but it mingles HTML, CSS and JavaScript: I pity the poor maintainers of this application (I’ve done my part). That was point 3.

Point 4: are you kidding?

Now that I’m slightly psychotic because of this feature I had to implement, and since I will know where you live in a not-so-distant future, I suggest you start designing your architecture by respecting KISS! This will render all aforementioned points moot and keep me happy.

Categories: Development Tags:

Consider replacing Spring XML configuration with JavaConfig

March 10th, 2013 5 comments

Spring articles are becoming a trend on this blog, I should probably apply for a SpringSource position :-)

Colleagues of mine sometimes curse me for my stubbornness in using XML configuration for Spring. Yes, it seems so 2000s, but XML has definite advantages:

  1. Configuration is centralized: it’s not scattered among all the different components, so you can have a nice overview of beans and their wirings in a single place
  2. If you need to split your files, no problem, Spring lets you do that. It then reassembles them at runtime through internal tags or external context files aggregation
  3. Only XML configuration allows for explicit wiring – as opposed to autowiring. Sometimes, the latter is a bit too magical for my own taste. Its apparent simplicity hides real complexity: not only do we need to switch between by-type and by-name autowiring, but more importantly, the strategy for choosing the relevant bean among all eligible ones escapes all but the most seasoned Spring developers. Profiles seem to make this easier, but they are relatively new and known to few
  4. Last but not least, XML is completely orthogonal to the Java file: there’s no coupling between the two, so the class can be used in more than one context with different configurations

The sole problem with XML is that you have to wait until runtime to discover typos in a bean name or some other stupid boo-boo. On the other hand, using the Spring IDE plugin (or the integrated Spring Tool Suite) can definitely help you there.

An interesting alternative to both XML and direct annotations on bean classes is JavaConfig, a former separate project embedded into Spring itself since v3.0. It merges XML’s decoupling advantage with Java compile-time checks. JavaConfig can be seen as the XML file’s equivalent, only written in Java. The whole documentation is of course available online, but this article will just let you kickstart using JavaConfig. As an example, let us migrate from the following XML file to JavaConfig:

<?xml version="1.0" encoding="UTF-8"?>
<!-- reconstructed from the JavaConfig equivalent below -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="button" class="javax.swing.JButton">
        <constructor-arg value="Hello World" />
    </bean>

    <bean id="anotherButton" class="javax.swing.JButton">
        <constructor-arg ref="icon" />
    </bean>

    <bean id="icon" class="javax.swing.ImageIcon">
        <constructor-arg value="http://morevaadin.com/assets/images/learning_vaadin_cover.png" type="java.net.URL" />
    </bean>
</beans>

The equivalent file is the following:

import java.net.MalformedURLException;
import java.net.URL;

import javax.swing.Icon;
import javax.swing.ImageIcon;
import javax.swing.JButton;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MigratedConfiguration {

    @Bean
    public JButton button() {

        return new JButton("Hello World");
    }

    @Bean
    public JButton anotherButton(Icon icon) {

        return new JButton(icon);
    }

    @Bean
    public Icon icon() throws MalformedURLException {

        URL url = new URL("http://morevaadin.com/assets/images/learning_vaadin_cover.png");

        return new ImageIcon(url);
    }
}

Usage is simpler than simple: annotate the configuration class with @Configuration and individual producer methods with @Bean. The only drawback, IMHO, is that it uses autowiring. Apart from that, it just works.
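
To try it out, the configuration can be bootstrapped with AnnotationConfigApplicationContext. Here is a minimal sketch (the Main class is mine, for illustration):

import javax.swing.JButton;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;

public class Main {

    public static void main(String[] args) {
        // Bootstrap the context from the @Configuration class instead of an XML file
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(MigratedConfiguration.class);

        // Producer method names become bean names
        JButton button = context.getBean("button", JButton.class);
        System.out.println(button.getText()); // prints "Hello World"

        context.close();
    }
}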

Note that in a Web environment, the web deployment descriptor should be updated with the following lines:

<context-param>
    <param-name>contextClass</param-name>
    <param-value>
        org.springframework.web.context.support.AnnotationConfigWebApplicationContext
    </param-value>
</context-param>

<context-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>
        com.packtpub.learnvaadin.springintegration.SpringIntegrationConfiguration
    </param-value>
</context-param>

Sources for this article are available in Maven/Eclipse format here.

To go further:

  • Java-based container configuration documentation
  • AnnotationConfigWebApplicationContext JavaDoc
  • @ContextConfiguration JavaDoc (to configure Spring Test to use JavaConfig)
Categories: Java Tags:

Solr overview from a beginner’s point of view

February 24th, 2013 2 comments

I’ve recently begun diving into Search Engines in general and Solr in particular. This is my understanding of it so far.

Why Solr?

It isn’t really feasible to execute blazing fast search queries on very big SQL databases, for two different reasons. The first reason comes from SQL databases favoring lack of redundancy over performance: basically, you’d need JOINs in your SELECTs. The second reason is the nature of the data in documents: it’s essentially unstructured plain text, so the SELECT would need LIKE clauses. Both joins and likes are performance killers, so this way is a no-go for real-life search engines.

Therefore, most of them propose a way to look at data that is very different from SQL: inverted index(es). This kind of data structure is a glorified dictionary where:

  • keys are individual terms
  • values are lists of documents that match the term

Nothing fancy, but this view of the data makes for very fast searches in very high-volume databases. Note that the term ‘document’ is used very loosely here, in that it should be a field-structured view of the initial document (see below).
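
To make this concrete, here is a toy inverted index in plain Java – the concept only, not Solr’s implementation:

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class InvertedIndex {

    // keys are individual terms, values are the ids of documents containing the term
    private final Map<String, Set<Integer>> index = new HashMap<String, Set<Integer>>();

    public void add(int documentId, String content) {
        // naive tokenization: lowercase and split on non-word characters
        for (String term : content.toLowerCase().split("\\W+")) {
            if (term.isEmpty()) {
                continue;
            }
            Set<Integer> documentIds = index.get(term);
            if (documentIds == null) {
                documentIds = new HashSet<Integer>();
                index.put(term, documentIds);
            }
            documentIds.add(documentId);
        }
    }

    public Set<Integer> search(String term) {
        Set<Integer> documentIds = index.get(term.toLowerCase());
        return documentIds == null ? Collections.<Integer>emptySet() : documentIds;
    }
}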

Index structure

Though Solr belongs to the NoSQL database family, it is not schemaless. Schema configuration takes place in a dedicated schema.xml file: individual fields must be defined, each with its type. Different document types may differ in structure and have few (no?) fields in common. In this case, each document type may get its own index with its own schema.

Predefined types like strings, integers and dates are available out-of-the-box. Fields can be declared searchable (called “indexed”) and/or stored (returned in queries). For example, books could (would?) include not only their content, but also author(s), publisher(s), date of publishing, etc.

Indexing

There are two available interfaces to index documents in Solr: a REST API and a full Java interface named SolrJ.
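
For illustration, indexing through SolrJ looks something like the following sketch – the URL and field names are assumptions, and HttpSolrServer is the Solr 4.x client class:

import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.common.SolrInputDocument;

public class BookIndexer {

    public static void main(String[] args) throws Exception {
        // points SolrJ at a running Solr instance (URL is an assumption)
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        SolrInputDocument document = new SolrInputDocument();
        document.addField("id", "1");
        document.addField("title", "Learning Vaadin");
        server.add(document);

        // documents become searchable once committed
        server.commit();
    }
}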

Parsing documents

To build the inverted index, documents have to be parsed into individual terms. In order for search to be user-friendly, you have to be able to query regardless of case, hyphens and irrelevant words – called stop words (which would include ‘a’ and ‘the’ in English). It would also be great to have a way to equate terms that share a common meaningful root, such as ‘fish’, ‘fishing’ and ‘fisherman’ – this is called stemming – as well as to offer a dictionary of synonyms.

Solr applies a tokenizer processing chain to each received document: individual steps in the chain have a single responsibility based on either removing, adding or replacing a term token. They are referred to as filters. For example, one filter is used to remove stop words, one to lowercase terms (replace) and one to add synonym terms.
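
As an illustration only – these are not Solr’s filter classes – a drastically simplified chain could look like this:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class AnalysisChain {

    // a ridiculously small stop-word list
    private static final Set<String> STOP_WORDS = new HashSet<String>(Arrays.asList("a", "an", "the"));

    public static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<String>();
        for (String token : text.toLowerCase().split("\\W+")) { // lowercase filter (replace)
            if (token.isEmpty() || STOP_WORDS.contains(token)) { // stop-word filter (remove)
                continue;
            }
            tokens.add(token); // a synonym filter would add extra tokens here
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("The fisherman and the fish")); // prints [fisherman, and, fish]
    }
}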

Queries

Queries also have to be made of terms. Those terms can be composed with binary operators and individual terms can be boosted.
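
Sticking with the hypothetical SolrJ snippet above, composing terms with a binary operator and boosting one of them might look like:

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrDocument;

public class BookSearcher {

    public static void main(String[] args) throws Exception {
        HttpSolrServer server = new HttpSolrServer("http://localhost:8983/solr");

        // AND composes two terms; ^2 boosts the title term
        SolrQuery query = new SolrQuery("author:frankel AND title:vaadin^2");
        QueryResponse response = server.query(query);

        for (SolrDocument document : response.getResults()) {
            System.out.println(document.getFieldValue("title"));
        }
    }
}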

Queries are parsed into tokens through a process similar to the one applied to documents. Of course, some filters make sense at query time while others do not: in the former category, we find the lowercase filter; in the latter, the synonym one.

Parsed queries are compared to indexed terms using set theory to determine matching results.

Results

Search results are paged and ordered so that the documents most relevant to users are presented first. In order to provide the best user experience, a middle ground has to be found between:

  • correctness – only relevant results are returned
  • thoroughness – all relevant results must be returned

Results can be grouped using one or more fields. Grouping depends on the field type and can be customized: for example, book results can be grouped per author or per author’s first letter, depending on the number of books in the whole index.

To go further:

Categories: Java Tags:

Spring beans overwriting strategy

February 16th, 2013 2 comments

I find myself working more and more with Spring these days, and what I find raises questions. This week, my thoughts turn toward bean overwriting, that is, registering more than one bean with the same name.

In the case of a simple project, there’s no need for this; but when building a plugin architecture around a core, it may be a solution. Here are some facts I uncovered and verified regarding bean overwriting.

Single bean id per file
The id attribute in the Spring bean file is of type ID, meaning you can have only a single bean with a specific ID in a specific Spring beans definition file.
Overwriting bean dependent on context fragments loading order
As opposed to classpath loading, where the first class takes priority over the others further down the classpath, it’s the last bean of the same name that is finally used. That’s why I call it overwriting. Reversing the fragment loading order proves that.
Fragment assembling methods define an order
Fragments can be assembled from statements in the Spring beans definition file or through an external component (e.g. the Spring context listener in a web app or test classes). All define a deterministic order.
As a side note, though I formerly used import statements in my projects (in part to take advantage of IDE support), experience taught me it can bite you in the back when reusing modules: I’m in favor of assembling through external components now.
Names
Spring lets you define names in addition to ids (which is a cheap way of using characters illegal in IDs). Those names also overwrite ids.
Aliases
Spring lets you define aliases of existing beans: those aliases also overwrite ids.
Scope overwriting
This one is really mean: by overwriting a bean, you also overwrite scope. So, if the original bean had a specified scope and you do not specify the same, tough luck: you just probably changed the application behavior.

Not only are these facts perhaps unknown to your development team, but the last one is the killer reason not to overwrite beans: it’s too easy to forget to scope the overwriting bean.

In order to address plugins architecture, and given you do not want to walk the OSGi path, I would suggest what I consider a KISS (yet elegant) solution.

Let us use simple Java properties in conjunction with PropertyPlaceholderConfigurer. The main Spring beans definition file should define placeholders for beans that can be overwritten and read two defined properties files: one wrapped inside the core JAR and the other on a predefined path (possibly set by a JVM property).

Both property files have the same structure: fully-qualified interface names as keys and fully-qualified implementation names as values. This way, you define default implementations in the internal property file and let users override them in the external file (if necessary).
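
A minimal sketch of the idea – the file name and the factory class are mine, not taken from an actual project:

import java.io.InputStream;
import java.util.Properties;

public class PluginFactory {

    private final Properties implementations = new Properties();

    public PluginFactory() throws Exception {
        // defaults wrapped inside the core JAR; external overrides would be
        // loaded afterwards from a predefined path, replacing matching keys
        InputStream stream = getClass().getResourceAsStream("/plugins.properties");
        try {
            implementations.load(stream);
        } finally {
            stream.close();
        }
    }

    // keys are fully-qualified interface names, values fully-qualified implementation names
    @SuppressWarnings("unchecked")
    public <T> T newInstance(Class<T> interfaceClass) throws Exception {
        String implementationName = implementations.getProperty(interfaceClass.getName());
        return (T) Class.forName(implementationName).newInstance();
    }
}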

As an added advantage, it shields users from Spring so they are not tied to the framework.

Sources for this article can be found in Maven/Eclipse format here.

Categories: Java Tags:

The case for Spring inner beans

February 10th, 2013 6 comments

When code reviewing or pair programming, I’m always amazed by the following discrepancy. On one hand, 99% of developers conscientiously apply encapsulation and limit accessibility and variable scope to the minimum possible. On the other hand, nobody cares one bit about Spring beans and such beans are always set at top-level, which makes them accessible from every place where you can get a handle on the Spring context.

For example, this is a typical Spring beans configuration file:

<!-- class names are illustrative; the original snippet was lost -->
<bean id="one" class="One" />
<bean id="two" class="Two" />
<bean id="three" class="Three" />
<bean id="four" class="Four" />

<bean id="five" class="Five">
    <constructor-arg ref="one" />
    <constructor-arg ref="two" />
    <property name="three" ref="three" />
    <property name="four" ref="four" />
</bean>

If beans one, two, three and four are only used by bean five, they shouldn’t be accessible from anywhere else and should be defined as inner beans.

<bean id="five" class="Five">
    <constructor-arg>
        <bean class="One" />
    </constructor-arg>
    <constructor-arg>
        <bean class="Two" />
    </constructor-arg>
    <property name="three">
        <bean class="Three" />
    </property>
    <property name="four">
        <bean class="Four" />
    </property>
</bean>

From this point on, beans one, two, three and four cannot be accessed in any way outside of bean five; in effect, they are not visible.

There are a couple of points I’d like to make:

  1. By using inner beans, those beans are implicitly made anonymous but also scoped prototype, which doesn’t mean squat since they won’t be reused anywhere else.
  2. With annotations configuration, this is something that is done under the covers when you create a new instance in the body of a producer method (see the sketch after this list)
  3. I acknowledge it renders the Spring beans definition file harder to read but with the graphical representation feature brought by Spring IDE, this point is moot
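
A minimal illustration of point 2, reusing the placeholder classes from the snippets above:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FiveConfiguration {

    @Bean
    public Five five() {
        // one to four are plain new instances, not beans: like inner beans,
        // they are invisible to the rest of the context
        Five five = new Five(new One(), new Two());
        five.setThree(new Three());
        five.setFour(new Four());
        return five;
    }
}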

In conclusion, I would like every developer to consider not only technologies, but also concepts. When you understand variable scoping in programming, you should not only apply it to code, but also wherever it is relevant.

Categories: Java Tags:

Changing default Spring bean scope

February 4th, 2013 3 comments

By default, Spring beans are scoped singleton, meaning there’s only one instance for the whole application context. For most applications, this is a sensible default; sometimes, though, not so much. This may be the case when using a custom scope, as on the product I’m currently working on. I’m not at liberty to discuss the details further: suffice to say that it is very painful to configure each and every needed bean with this custom scope.

Since being lazy in a smart way is at the core of a developer’s work, I decided to search for a way to ease my burden and found it in the BeanFactoryPostProcessor interface. It only has a single method – postProcessBeanFactory() – but it gives access to the bean factory itself (which is at the root of the various application context classes).

From this point on, the code is trivial even with no prior experience of the API:

public class PrototypeScopedBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory factory) throws BeansException {

        for (String beanName : factory.getBeanDefinitionNames()) {

            BeanDefinition beanDef = factory.getBeanDefinition(beanName);

            String explicitScope = beanDef.getScope();

            if ("".equals(explicitScope)) {

                beanDef.setScope("prototype");
            }
        }
    }
}

The final touch is to register the post-processor in the context. This is achieved by treating it as a simple anonymous bean:

<!-- package name is illustrative -->
<bean class="ch.frankel.blog.PrototypeScopedBeanFactoryPostProcessor" />

Now, every bean whose scope is not explicitly set will be scoped prototype.
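
A quick way to check the behavior – a sketch where MyBean and the context file name are placeholders:

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ScopeCheck {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext context =
                new ClassPathXmlApplicationContext("applicationContext.xml");

        // with the post-processor registered, two lookups yield two different instances
        MyBean first = context.getBean(MyBean.class);
        MyBean second = context.getBean(MyBean.class);
        System.out.println(first == second); // prints false

        context.close();
    }
}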

Sources for this article can be found attached in Eclipse/Maven format.

Categories: Java Tags:

DRY your Spring Beans configuration file

January 20th, 2013 No comments

It’s always when you discuss with people that some things you (or they) hold as self-evident turn out to be closely-held secrets. That’s what happened this week when I tentatively showed a trick during a training session and it started a debate.

Let’s take an example, but the idea behind this can of course be applied to many more use-cases: imagine you developed many DAO classes inheriting from the same abstract DAO Spring provides you with (JPA, Hibernate, plain JDBC, you name it). All those classes need to be injected a datasource (or a JPA EntityManager, a Hibernate Session, etc.). At your first attempt, you would create the Spring beans definition file as such:

<!-- DAO names are illustrative; the original snippet was lost -->
<bean id="customerDao" class="CustomerDao">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="orderDao" class="OrderDao">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="productDao" class="ProductDao">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="invoiceDao" class="InvoiceDao">
    <property name="dataSource" ref="dataSource" />
</bean>

Notice a pattern here? Not only is it completely opposed to the DRY principle, it is also a source of errors and decreases future maintainability. Most importantly, I’m lazy and I do not like to type characters just for the fun of it.

Spring to the rescue. Spring provides a way to make beans abstract. This is not to be confused with the abstract keyword in Java. Though Spring abstract beans are not instantiated, children of these abstract beans are injected the properties of their parent abstract bean. This implies you do need a common Java parent class (though it doesn’t need to be abstract). In essence, you will shorten your Spring beans definition file like so:

<bean id="abstractDao" abstract="true">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="customerDao" class="CustomerDao" parent="abstractDao" />
<bean id="orderDao" class="OrderDao" parent="abstractDao" />
<bean id="productDao" class="ProductDao" parent="abstractDao" />
<bean id="invoiceDao" class="InvoiceDao" parent="abstractDao" />

The instruction to inject the data source is configured only once, for the abstractDao. Yet, Spring will apply it to every DAO configured with it as its parent. DRY from the trenches…
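
For reference, the common Java parent class could look like the following sketch (names are illustrative):

import javax.sql.DataSource;

public abstract class AbstractDao {

    private DataSource dataSource;

    // injected once through the abstract parent bean definition,
    // inherited by every concrete DAO bean
    public void setDataSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    protected DataSource getDataSource() {
        return dataSource;
    }
}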

Note: if you use two different data sources, you’ll just have to define two abstract DAOs and set the correct one as the parent of your concrete DAOs.

Categories: Java Tags:

The power of naming

January 13th, 2013 No comments

Before diving right into the subject of naming, and at the risk of sounding somewhat pedantic, an introduction is in order.

History shows us that naming is so important to the human race that it is regarded as sacred (or at least magical) in most human cultures. Some non-exhaustive examples include:

  • Judaic tradition goes to great lengths to search for and compile the names of God. In this regard, each name offers a point of view into a specific aspect of God.
  • The Christian Church gives a name when a new member is brought into the fold. The most famous example is when Jesus names his disciple Simon “Peter” (from petros – rock, in Greek). Other religions also tend to follow this tradition.
  • In sorcery (or more precisely traditional demonology), knowing a demon’s true name gives power over it and lets the mage bind it.

From these examples, names seem to bestow some properties on the named subject and/or gives a specific insight into it.

Other aspects of naming include communication between people. Without going into whether there are many Sami words for snow or not, fact is that colors are intrinsically cultural: some cultures allow for a whole array of color names, while others only have two (such as the Dugum Dani of New Guinea). Of course, designers and artists who use color regularly have a much broader vocabulary to describe colors than us mere mortals (at least me). In the digital age, colors can even be described exactly by their RGB value.

It’s pretty self-evident that people exchanging information have a much higher signal-to-noise ratio the richer their common vocabulary and the smaller the difference between the word and the reality it covers.

Now, how does the above translate into software development practice? Given that naming describes reality:

  • Giving a name to a class (package or method) describes to fellow programmers what the class does. If the name is misleading, time will have to be spent realigning the given name (what people think it does) with reality (what it really does). On the contrary, the right names will tremendously boost the productivity of developers new to the project.
  • When updating a class (package or method), always think about whether the change should be reflected in its name. With modern IDEs, this won’t cost any significant time while increasing future maintainability.
  • It’s better to have an explicitly incongruous class (package or method) name such as ToBeRenamed than one slightly misaligned with reality (e.g. Yellow to describe #F5DA81) so as to avoid later confusion. This is particularly true when unsure about a newly-created class responsibilities.

Let someone much more experienced have the final word:

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

Categories: Development Tags:

Pet catalog for JavaEE 6 reengineered

January 6th, 2013 No comments

Some time ago, I published the famed Pet Catalog application on GitHub. It doesn’t seem like much, but there are some hours of work (if not days) behind the scenes. I wanted to write down the objectives of this pet project (notice the joke?) and the walls I hit trying to achieve them, to prevent others from hitting them.

The objectives were very simple at first: get the Pet Catalog to work under TomEE, the Web Profile-compliant application server based on Tomcat. I also wanted to prove it works by running automated integration tests, so Arquillian was my tool to achieve that. Java EE 6, TomEE, Arquillian: this couldn’t be simpler, could it? Think again…

First of all, it may seem strange but there’s no place on the Internet where you can find the Pet Catalog for Java EE 6, despite it having been used as an example during the whole Java EE evolution; a simple Google search can easily convince you of that. I think there’s a definite lack there, but I’m nothing if not persistent. In order to get the sources, I had to get the NetBeans IDE: the Pet Catalog is available as one of its example applications.

The second obstacle I had to overcome was that whatever I tried, I couldn’t find a way to make Arquillian inject my EntityManager in my TestNG test class using the @PersistenceContext annotation. I don’t know what the problem is, but switching to JUnit did the trick. This is a great (albeit dirty) lesson to me: when the worker has tried everything, maybe it’s time to question the tool.
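
For reference, the working JUnit setup looks something like the following sketch – the archive content, entity name and expected count are assumptions:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class PetCatalogTest {

    @PersistenceContext
    private EntityManager entityManager;

    @Deployment
    public static WebArchive createDeployment() {
        // packages the entities and the persistence unit for the in-container test
        return ShrinkWrap.create(WebArchive.class)
                .addPackage(Item.class.getPackage())
                .addAsWebInfResource("persistence.xml", "classes/META-INF/persistence.xml");
    }

    @Test
    public void numberOfPetsShouldBeCorrect() {
        long count = entityManager
                .createQuery("SELECT COUNT(i) FROM Item i", Long.class)
                .getSingleResult();
        Assert.assertEquals(5, count); // the expected count is an assumption
    }
}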

The final blow was an amazingly stupid question but a fundamental one: how do you create a datasource in TomEE during a test? For this one, I googled the entire Internet and it took me some time to finally get it, so here it is in all its glory. Just update your arquillian.xml file to add the following snippet:

<container qualifier="tomee" default="true">
    <configuration>
        <!-- surrounding elements reconstructed from the arquillian.xml schema; the properties are verbatim -->
        <property name="properties">
            PetCatalog = new://Resource?type=DataSource
            PetCatalog.JdbcUrl jdbc:hsqldb:mem:petcat
            PetCatalog.JdbcDriver org.hsqldb.jdbc.JDBCDriver
            PetCatalog.UserName sa
            PetCatalog.Password
            PetCatalog.JtaManaged true
        </property>
    </configuration>
</container>

Finally, I got the thing working and created two tests, one to check the number of pets was right, and the other to verify data for a specific item. If (when?) I have time, I will definitely check the Persistence and Graphene extensions of Arquillian. Or you’re welcome to fork the project.

Categories: JavaEE Tags: