Spring beans overwriting strategy

February 16th, 2013 2 comments

I find myself working more and more with Spring these days, and what I find raises questions. This week, my thoughts turned toward bean overwriting, that is, registering more than one bean with the same name.

In the case of a simple project, there’s no need for this; but when building a plugin architecture around a core, it may be a solution. Here are some facts I uncovered and verified regarding bean overwriting.

Single bean id per file
The id attribute in the Spring bean file is of type ID, meaning you can have only a single bean with a specific ID in a specific Spring beans definition file.
Overwriting depends on context fragment loading order
Unlike classpath loading, where the first class found takes priority over those further down the classpath, it’s the last bean registered under a given name that is finally used. That’s why I call it overwriting. Reversing the fragment loading order proves it.
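As a quick illustration (file and class names are made up for the example), consider two fragments that each register a bean named service:

<!-- core-context.xml, loaded first -->
<bean id="service" class="ch.frankel.blog.DefaultService" />

<!-- plugin-context.xml, loaded second: this definition wins -->
<bean id="service" class="ch.frankel.blog.PluginService" />

Assembled in this order, context.getBean("service") returns a PluginService; reverse the loading order and you get a DefaultService.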
Fragment assembling methods define an order
Fragments can be assembled through import statements in the Spring beans definition file or through an external component (e.g. the Spring context listener in a web app, or test classes). All define a deterministic order.
As a side note, though I formerly used import statements in my projects (in part to take advantage of IDE support), experience taught me they can bite you in the back when reusing modules: I’m now in favor of assembling through external components.
Names
Spring lets you define names in addition to ids (a cheap way to use characters that are illegal in IDs). Those names also overwrite ids.
Aliases
Spring lets you define aliases for existing beans: those aliases also overwrite ids.
Scope overwriting
This one is really mean: by overwriting a bean, you also overwrite its scope. So, if the original bean had an explicit scope and you do not specify the same one, tough luck: you’ve probably just changed the application’s behavior.
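For example (again with made-up class names):

<!-- core fragment -->
<bean id="task" class="ch.frankel.blog.Task" scope="prototype" />

<!-- plugin fragment, loaded second: the scope silently reverts to the default singleton -->
<bean id="task" class="ch.frankel.blog.CustomTask" />

Every client that relied on getting a fresh Task instance now shares a single one.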

Not only may these facts be unknown to your development team, the last one is the killer reason not to overwrite beans: it’s too easy to forget to scope the overwriting bean correctly.

In order to address plugin architectures, and given you do not want to walk the OSGi path, I would suggest what I consider a KISS (yet elegant) solution.

Let us use simple Java properties in conjunction with PropertyPlaceholderConfigurer. The main Spring beans definition file should define placeholders for beans that can be overridden and read two properties files: one wrapped inside the core JAR and the other at a predefined path (possibly set through a JVM system property).

Both property files have the same structure: fully-qualified interface names as keys and fully-qualified implementation names as values. This way, you define default implementations in the internal property file and let users override them in the external file (if necessary).
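Here’s a minimal sketch of the idea (file names, paths and keys are illustrative):

<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <!-- defaults, wrapped inside the core JAR -->
            <value>classpath:META-INF/default-implementations.properties</value>
            <!-- user overrides, at a predefined path possibly set through a JVM property -->
            <value>file:${plugin.config.dir}/implementations.properties</value>
        </list>
    </property>
    <!-- the external file is optional -->
    <property name="ignoreResourceNotFound" value="true" />
</bean>

<!-- the implementation class is resolved from the property files at startup -->
<bean id="service" class="${ch.frankel.blog.Service}" />

with the property files mapping interfaces to implementations:

ch.frankel.blog.Service=ch.frankel.blog.DefaultServiceImpl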

As an added advantage, it shields users from Spring so they are not tied to the framework.

Sources for this article can be found in Maven/Eclipse format here.

Categories: Java Tags:

The case for Spring inner beans

February 10th, 2013 6 comments

When code reviewing or pair programming, I’m always amazed by the following discrepancy. On one hand, 99% of developers conscientiously apply encapsulation and limit accessibility and variable scope to the minimum possible. On the other hand, nobody cares one bit about Spring beans: they are always defined at top level, which makes them accessible from every place where you can get a handle on the Spring context.

For example, here is a typical Spring beans configuration file (a representative sketch; class names are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="one" class="ch.frankel.blog.One" />
    <bean id="two" class="ch.frankel.blog.Two" />
    <bean id="three" class="ch.frankel.blog.Three" />
    <bean id="four" class="ch.frankel.blog.Four" />

    <bean id="five" class="ch.frankel.blog.Five">
        <constructor-arg ref="one" />
        <constructor-arg ref="two" />
        <property name="three" ref="three" />
        <property name="four" ref="four" />
    </bean>
</beans>

If beans one, two, three and four are only used by bean five, they shouldn’t be accessible from anywhere else and should be defined as inner beans:

<bean id="five" class="ch.frankel.blog.Five">
    <constructor-arg>
        <bean class="ch.frankel.blog.One" />
    </constructor-arg>
    <constructor-arg>
        <bean class="ch.frankel.blog.Two" />
    </constructor-arg>
    <property name="three">
        <bean class="ch.frankel.blog.Three" />
    </property>
    <property name="four">
        <bean class="ch.frankel.blog.Four" />
    </property>
</bean>

From this point on, beans one, two, three and four cannot be accessed in any way outside of bean five; in effect, they are not visible.

There are a couple of points I’d like to make:

  1. By using inner beans, those beans are implicitly made anonymous but also scoped prototype, which doesn’t mean squat since they won’t be reused anywhere else.
  2. With annotation configuration, this is what happens under the cover when you return a new instance from the body of a @Bean method (see the sketch after this list).
  3. I acknowledge it renders the Spring beans definition file harder to read, but with the graphical representation feature brought by Spring IDE, this point is moot.
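For reference, here’s what the annotation-based equivalent of the example above could look like (classes are still illustrative):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FiveConfiguration {

    @Bean
    public Five five() {
        // One and Two never become beans in their own right:
        // they are plain instances, visible only to five
        return new Five(new One(), new Two());
    }
}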

In conclusion, I would like every developer to consider not only technologies, but also concepts. When you understand variable scoping in programming, you should not only apply it to code, but also wherever it is relevant.

Categories: Java Tags:

Changing default Spring bean scope

February 4th, 2013 3 comments

By default, Spring beans are scoped singleton, meaning there’s only one instance for the whole application context. For most applications, this is a sensible default; sometimes, though, it’s not. This may be the case when using a custom scope, as on the product I’m currently working on. I’m not at liberty to discuss the details further: suffice it to say that it is very painful to configure each and every needed bean with this custom scope.

Since being lazy in a smart way is at the core of a developer’s work, I decided to search for a way to ease my burden and found it in the BeanFactoryPostProcessor interface. It has only a single method – postProcessBeanFactory() – but it gives access to the bean factory itself (which is at the root of the various application context classes).

From this point on, the code is trivial even with no prior experience of the API:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

public class PrototypeScopedBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory factory) throws BeansException {

        for (String beanName : factory.getBeanDefinitionNames()) {

            BeanDefinition beanDef = factory.getBeanDefinition(beanName);

            // an empty scope means none was explicitly set in the bean definition
            String explicitScope = beanDef.getScope();

            if ("".equals(explicitScope)) {
                beanDef.setScope("prototype");
            }
        }
    }
}

The final touch is to register the post-processor in the context. This is achieved by treating it as a simple anonymous bean (the package name is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean class="ch.frankel.blog.scope.PrototypeScopedBeanFactoryPostProcessor" />
</beans>

Now, every bean whose scope is not explicitly set will be scoped prototype.

Sources for this article can be found attached in Eclipse/Maven format.

Categories: Java Tags:

DRY your Spring Beans configuration file

January 20th, 2013 No comments

It’s always when discussing with people that some things you (or they) hold as self-evident turn out to be a closely-held secret. That’s what happened this week when I casually showed a trick during a training session and it started a debate.

Let’s take an example, but the idea behind this can of course be applied to many more use-cases: imagine you developed many DAO classes inheriting from the same abstract DAO Spring provides you with (JPA, Hibernate, plain JDBC, you name it). All those classes need to be injected with a datasource (or a JPA EntityManager, a Hibernate Session, etc.). At your first attempt, you would create the Spring beans definition file as such (a representative sketch; DAO names are illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
        <!-- driver, URL and credentials omitted -->
    </bean>

    <bean id="customerDao" class="ch.frankel.blog.dao.CustomerDao">
        <property name="dataSource" ref="dataSource" />
    </bean>

    <bean id="orderDao" class="ch.frankel.blog.dao.OrderDao">
        <property name="dataSource" ref="dataSource" />
    </bean>

    <bean id="productDao" class="ch.frankel.blog.dao.ProductDao">
        <property name="dataSource" ref="dataSource" />
    </bean>

    <bean id="invoiceDao" class="ch.frankel.blog.dao.InvoiceDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
</beans>

Notice a pattern here? Not only is it completely opposed to the DRY principle, it is also a source of errors and a drag on future maintainability. Most important, I’m lazy and I do not like to type characters just for the fun of it.

Spring to the rescue. Spring provides a way to make beans abstract. This is not to be confused with the abstract keyword in Java: though Spring abstract beans are not instantiated, children of an abstract bean inherit its property definitions. This implies the concrete classes need to expose those properties, which in practice means a common Java parent class (though it doesn’t need to be abstract itself). In essence, you can shorten your Spring beans definition file like so:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
        <!-- driver, URL and credentials omitted -->
    </bean>

    <bean id="abstractDao" abstract="true">
        <property name="dataSource" ref="dataSource" />
    </bean>

    <bean id="customerDao" class="ch.frankel.blog.dao.CustomerDao" parent="abstractDao" />
    <bean id="orderDao" class="ch.frankel.blog.dao.OrderDao" parent="abstractDao" />
    <bean id="productDao" class="ch.frankel.blog.dao.ProductDao" parent="abstractDao" />
    <bean id="invoiceDao" class="ch.frankel.blog.dao.InvoiceDao" parent="abstractDao" />
</beans>

The instruction to inject the datasource is configured only once, on abstractDao. Yet Spring will apply it to every DAO configured with it as its parent. DRY from the trenches…

Note: if you use two different datasources, you’ll just have to define two abstract DAOs and set the correct one as the parent of each concrete DAO.
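A sketch of that variant (names are again illustrative):

<bean id="abstractOracleDao" abstract="true">
    <property name="dataSource" ref="oracleDataSource" />
</bean>

<bean id="abstractMysqlDao" abstract="true">
    <property name="dataSource" ref="mysqlDataSource" />
</bean>

<bean id="customerDao" class="ch.frankel.blog.dao.CustomerDao" parent="abstractOracleDao" />
<bean id="auditDao" class="ch.frankel.blog.dao.AuditDao" parent="abstractMysqlDao" />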

Categories: Java Tags:

The power of naming

January 13th, 2013 No comments

Before diving right into the subject of naming, and at the risk of sounding somewhat pedantic, an introduction is in order.

History shows us that naming is so important to the human race that it’s regarded as sacred (or at least magical) in most human cultures. Some non-exhaustive examples include:

  • Judaic tradition goes to great lengths to search for and compile the names of God. In this regard, each name offers a point of view into a specific aspect of God.
  • The Christian Church gives a name when a new member is brought into the fold. The most famous example is when Jesus names his disciple Simon “Peter” (from petros, “rock” in Greek). Other religions also tend to follow this tradition.
  • In sorcery (or more precisely, traditional demonology), knowing a demon’s true name gives power over it and lets the mage bind it.

From these examples, names seem to bestow some properties on the named subject and/or give a specific insight into it.

Other aspects of naming include communication between people. Leaving aside whether there really are many Sami words for snow, the fact is that color naming is intrinsically cultural: some cultures allow for a whole array of color names, while others only have two (such as the Dugum Dani of New Guinea). Of course, designers and artists who use color regularly have a much broader vocabulary to describe colors than us mere mortals (or at least me). In the digital age, colors can even be described exactly by their RGB value.

It’s pretty self-evident that people exchanging information get a much higher signal-to-noise ratio the richer their common vocabulary is and the smaller the gap between a word and the reality it covers.

Now, how does all of the above translate into software development practice? Given that naming describes reality:

  • Giving a name to a class (package or method) describes to fellow programmers what the class does. If the name is misleading, time will have to be spent realigning the name (what people think it does) with reality (what it really does). On the contrary, right names tremendously boost the productivity of developers new to the project.
  • When updating a class (package or method), always consider whether the change should be reflected in its name. With modern IDEs, this won’t cost any significant time while increasing future maintainability.
  • It’s better to have an explicitly incongruous class (package or method) name such as ToBeRenamed than one slightly misaligned with reality (e.g. Yellow to describe #F5DA81), so as to avoid later confusion. This is particularly true when unsure about a newly-created class’s responsibilities.

Let someone much more experienced have the final word:

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

Categories: Development Tags:

Pet catalog for JavaEE 6 reengineered

January 6th, 2013 No comments

Some time ago, I published the famed Pet Catalog application on Github. It doesn’t seem like much, but there are some hours of work (if not days) behind the scenes. I wanted to write down the objectives of this pet project (notice the joke here?) and the walls I hit trying to achieve them, to prevent others from hitting them too.

The objectives were very simple at first: get the Pet Catalog to work under TomEE, the Web Profile compliant application server based on Tomcat. I also wanted to prove it worked by running automated integration tests, so Arquillian was my tool to achieve that. JavaEE 6, TomEE, Arquillian: this couldn’t be simpler, could it? Think again…

First of all, it may seem strange, but there’s no place on the Internet where you can find the Pet Catalog for JavaEE 6, despite it having served as an example throughout JavaEE’s evolution; a simple Google search can easily convince you of that. I think there’s a definite lack there, but I’m nothing if not persistent. In order to get the sources, I had to get the NetBeans IDE: the Pet Catalog is available as one of its example applications.

The second obstacle I had to overcome was that, whatever I tried, I couldn’t find a way to make Arquillian inject my EntityManager into my TestNG test class using the @PersistenceContext annotation. I don’t know what the problem is, but switching to JUnit did the trick. This is a great (albeit dirty) lesson to me: when the worker has tried everything, maybe it’s time to question the tool.
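For reference, the JUnit version of such a test looks roughly like the following sketch (the deployment content and assertion are simplified; Item stands for the Pet Catalog entity):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertFalse;

@RunWith(Arquillian.class)
public class ItemTest {

    @PersistenceContext
    private EntityManager em;

    @Deployment
    public static WebArchive createDeployment() {
        // package the entity and the persistence unit into the test archive
        return ShrinkWrap.create(WebArchive.class, "petcatalog.war")
            .addClass(Item.class)
            .addAsResource("persistence.xml", "META-INF/persistence.xml");
    }

    @Test
    public void shouldFindItems() {
        List<Item> items = em.createQuery("SELECT i FROM Item i", Item.class).getResultList();
        assertFalse(items.isEmpty());
    }
}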

The final blow was an amazingly stupid but fundamental question: how do you create a datasource in TomEE during a test? For this one, I googled the entire Internet and it took me some time to finally get it, so here it is in all its glory. Just update your arquillian.xml file to add the following snippet (the container qualifier below is illustrative; use whatever matches your setup):

<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <container qualifier="tomee" default="true">
        <configuration>
            <property name="properties">
                PetCatalog = new://Resource?type=DataSource
                PetCatalog.JdbcUrl jdbc:hsqldb:mem:petcat
                PetCatalog.JdbcDriver org.hsqldb.jdbc.JDBCDriver
                PetCatalog.UserName sa
                PetCatalog.Password
                PetCatalog.JtaManaged true
            </property>
        </configuration>
    </container>
</arquillian>

Finally, I got the thing working and created two tests: one to check the number of pets was right, the other to verify the data for a specific item. If (when?) I have time, I will definitely check the Persistence and Graphene extensions of Arquillian. Or you’re welcome to fork the project.

Categories: JavaEE Tags:

Do we really need the DAO?

December 23rd, 2012 4 comments

This may seem like a stupid question, especially after years of carefully creating them. Yet these thoughts about DAOs arose in my mind when I watched the rerun of Adam Bien’s Real World JavaEE talk on Parleys.

In his talk, Adam says he doesn’t use DAOs anymore – even though he has one ready so as to please architects (them again). My first reaction was utter rejection: layered architecture is at the root of decoupling, and decoupling is a requirement for a design able to evolve. Then I digested the information and thought: why not?

Let’s have a look at the definition of the DAO design pattern from Oracle:

Use a Data Access Object (DAO) to abstract and encapsulate all access to the data source. The DAO manages the connection with the data source to obtain and store data.

Core J2EE Patterns – Data Access Object

Back in the old days, either with plain JDBC or EJB, having DAOs was really about decoupling. Nowadays, when you think about it, isn’t that exactly what JPA’s EntityManager is about? In effect, most JPA DAOs just delegate their base methods to the EntityManager:

public void merge(Person person) {
    em.merge(person);
}

So, for basic CRUD operations, the point seems a valid one, doesn’t it? That would be a fool’s conclusion, since the above snippet doesn’t take most cases into account. What if a Person has an Address (or more than one)? In web applications, we don’t have the whole object graph, only the Person to be merged, so we wouldn’t merge the entity but load it by its primary key and update the fields that were likely changed in the GUI layer. The above snippet should probably look like the following:

public Person merge(Person person) {
    Person original = em.find(Person.class, person.getId());
    original.setFirstName(person.getFirstName());
    original.setLastName(person.getLastName());
    em.flush();
    return original;
}

But what about queries? Picture the following:

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;

public class PersonService {

    @PersistenceContext
    private EntityManager em;

    public List<Person> findByFirstNameAndLastName(String firstName, String lastName) {

        CriteriaBuilder builder = em.getCriteriaBuilder();
        CriteriaQuery<Person> select = builder.createQuery(Person.class);
        Root<Person> fromPerson = select.from(Person.class);

        // Person_ is the JPA static metamodel class generated for Person
        Predicate equalsFirstName = builder.equal(fromPerson.get(Person_.firstName), firstName);
        Predicate equalsLastName = builder.equal(fromPerson.get(Person_.lastName), lastName);
        select.where(builder.and(equalsFirstName, equalsLastName));

        return em.createQuery(select).getResultList();
    }
}

The findByFirstNameAndLastName() method clearly doesn’t use any business DSL, but plain query DSL. In this case, and whatever the query strategy used (JPA-QL or Criteria), I don’t think it would be wise to use such code directly in the business layer, so the DAO still has its uses.

Both those examples lead me to think DAOs are still needed in the current state of things. Anyway, asking this kind of question is of great importance, since patterns tend to stay around even after technology improvements have made them obsolete.

Categories: Java Tags:

Short-term benefits in the Information Service business

December 16th, 2012 No comments

It’s a fact that companies and their management have money in mind. However, there are many ways to achieve this goal. In my short career as an employee, I’ve been faced with two strategies: the short-term one and the long-term one.

Here are some examples I’ve been directly confronted with:

  • A manager sends a developer on an uninteresting mission a long way from home. At first, he tells the developer it won’t last more than 3 months but in reality, it lasts more than a year. Results: the manager meets his objectives and gets his bonus; the developer is demotivated and talks very negatively about his mission (and about the manager and the company as well)
  • A business developer bids for and wins a contract to maintain a proprietary and obsolete technology. Results: the business developer gains a hefty chunk of the contract money; the developers tasked with the maintenance become increasingly irrelevant regarding mainstream technology
  • Coming to a new company, a developer is offered a catalog to choose his laptop from. The catalog contains real-life (read: not outdated) laptops (such as a MacBook Pro) as well as real-life hardware (8 GB RAM). Results: the developer is highly motivated from the start; the company pays a little more for the hardware than it would for lower-grade equipment
  • A developer informs his boss there’s been an error in his previous payslip: overtime he worked the previous month has been paid a second time. The boss says it’s fine and that the bonus is now permanently included in his salary. Results: the developer is happily surprised and tells his friends; the boss pays a little extra salary each month.

What these experiences have in common is a trade-off between short-term and long-term (or middle-term) gain.

In the first two examples, the higher-up meets his objectives, ensuring short-term profit. In the first example, this is balanced by trust issues (and negative word-of-mouth); in the second, by a sharp decrease in the human resources asset. On the contrary, the last two examples balance marginally higher costs with loyalty and good publicity from the developer.

I understand that stakeholders want short-term profit. What I fail to understand is that in the long run, it decreases company value: when all you have in your IS company is a bunch of demotivated developers looking to jump off the train, you can have all the contracts in the world, but you won’t be able to deliver, period. Of course you could invest massively in recruiting, but what’s the added value?

Keeping your developers happy and motivated has a couple of long-term consequences that have definite value:

  • Developers will speak highly of the company, thus making the company more attractive to fellow developers
  • Developers will speak highly of the company, thus building trust between the company’s customers and the company itself
  • Developers will be more motivated, thus more productive by themselves
  • Developers will be more motivated, hence more collaborative (and so even more productive)
  • etc.

The problem lies in the fact that those benefits are hardly quantifiable or translatable into a cash equivalent. Average companies will likely ignore them, while smart ones will build upon them. Are you working for the first kind or the second?

Note 1: this article reflects my experience in France and Switzerland. Maybe it’s different in other European countries and North America?
Note 2: in Paris (France), there’s a recent trend of companies caring more about long-term profits. I can only hope it will get stronger.

Categories: Miscellaneous Tags:

The cost and value of knowledge

December 2nd, 2012 No comments

I was a student in France when I discovered the Internet and the World Wide Web. At the time, I didn’t realize the value of the thing, even though I taught myself HTML from what was then available (I don’t remember the sites – they have probably disappeared into Limbo).

When I began my professional life, a project director of mine joked about the difference between the developer and the expert: “The expert knows about the cabinet where the documentation is.” Some time after that, I found this completely untrue, as one of the senior developers constantly found the answers to my questions: he used Google through and through, and it helped me tremendously. The cabinet had become the entire Internet, and everything could be found given time and toil. I soon became a master of the Google query.

At this time, when I had a problem, I asked Google, found the right answer and reproduced it in my project. This was all good, but it didn’t teach me much – script kiddies can do the same. Around the same time, however, I found an excellent site for French Java developers: “Développons en Java” by Jean-Michel Doudoux. This site was the first I knew of where you could learn something step by step, not only reproduce a recipe. I eagerly devoured the whole thing, going as far as copying it onto my computer to be able to read it offline (a very common occurrence then). This was made possible by Mr. Doudoux’s passion and will to make his knowledge available!

Even better, a recent trend – of which Coursera has been the best marketer – consists of cooperating with well-established universities to make standard in-the-room courses available… for free. For the first time in history, knowledge has become a costless commodity, something that can be compared to Gutenberg’s press during the Renaissance. Where previously books had to be copied by hand, thus preventing the diffusion of the understanding they contained, the printing press made books available everywhere at relatively low cost.

The first natural reaction is the “Wow!” effect. I personally am very interested in foreign languages, Japan and psychology: I could learn so much about any (or all) of these subjects without even leaving my home. Moreover, so many people are kept away from education, not by a lack of intellectual prowess, but just because they weren’t born rich enough or in a part of the Earth where there’s no university. Imagine if all people could learn about humanities, democracy, engineering, whatever. This could well provoke a new Age of Enlightenment, but this time not only in Europe but encompassing the whole globe.

However, there’s a negative side to this coin. When something is made free, people tend to give it no value and waste is sure to follow. There are plenty of examples everywhere, the worst I know of is the free water policy for Qatari households in Qatar. Thus, we have to be very careful about that – knowledge has been acquired with much effort, period. If it’s made freely available, there’s the risk that people treat it as a common commodity.

Worse, both the public and private sectors invest in research and knowledge. If it isn’t sold for a profit (even a marginal one), what are the odds that both (or at least the latter) would continue their investments? The business model for such services is still to be found. A hint would be recruiting: if a course is one of the best, recruiters would perhaps be interested in getting their paws on the contact data of those who passed with flying colors.

My conclusion to these points is that providing people with free knowledge is something very commendable, but it shouldn’t lower its value, only its cost.

Categories: Miscellaneous Tags:

Drupal 7 to Jekyll, an epic journey

November 25th, 2012 1 comment

There’s a recent trend in blogging that consists of dropping PHP platforms such as WordPress and Drupal in favor of static HTML pages generated ahead of time.

Static blogs have many advantages over PHP engines:

  • They’re much faster since there’s no computation overhead at runtime for composing the rendered page
  • There’s no security issues regarding SQL injection
  • Finally, generating the site removes the need for a staging area with possibly mismatched configurations

morevaadin.com is one of my sites, built with the Open Publish distribution on top of Drupal 7. It’s designed only to display articles about the Vaadin framework: there’s no comment management, no interaction, no fancy stuff involved whatsoever. In essence, there’s no need for a Drupal platform: a bunch of HTML files with some JavaScript would be more than enough. Here’s my journey porting the existing site, with the different steps I went through to reach my goal (spoiler: at the time of this writing, it’s not a success).

First step

Having no prior experience, I chose Jekyll as my target platform, for it’s supported by GitHub.

Before doing anything, I went through the motions of creating an Ubuntu VM with the Ruby and RubyGems packages:

sudo apt-get install ruby
sudo apt-get install rubygems

Blog migration explained

My first strategy was to follow the instructions from this blog post, which seemed to exactly match my target. In essence, I was to get the SQL dump from my Drupal site, execute it on a local MySQL database and run some Ruby script wrapping a SQL one.

The problem was that the proposed SQL script was not adapted to the structure of my database: I fixed the prefixes and the node types, but still had no success running the script.

Other scripts

I found other scripts on the Internet, but none was adapted to the structure of my database, and I couldn’t get any of them running despite my best efforts.

Eureka!

Back to basics: given that no hand-crafted script was able to run on my database, I decided to look further into the Jekyll documentation and found that the gem came with a Drupal 6.1 migrator.

So, I installed Jekyll – a Ruby gem – and I could finally run the script.

sudo gem install jekyll
ruby -rubygems -e 'require "jekyll/migrators/drupal"; Jekyll::Drupal.process($MYSQL_INSTANCE, $LOGIN, $PASSWORD)'

At this point, I got no errors running the Ruby script, but no output whatsoever either. However, I tweaked the /var/lib/gems/1.8/gems/jekyll-0.11.2/lib/jekyll/migrators/drupal.rb script a bit: I commented out the lines concerning the node and entity types (30-31), as well as the whole tag handling (lines 53-60 and 69), and I finally obtained some results: a _drafts and a _posts folder, the latter full of .md files!

Running Jekyll with jekyll --server finally generated a _site folder, with a load of pages.

Problems

There are some problems with what I got:

  • Some pages have unparseable sections (REXML could not parse this HTML/XML)
  • Pages have no extension despite being HTML, so the browser doesn’t recognize the MIME type when they’re served by Jekyll. The normal way would be to have an index.html under a folder named after each post (see the configuration sketch after this list)
  • There’s no index
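For the second problem, Jekyll’s permalink setting looks like the intended fix (a sketch I haven’t applied yet):

# _config.yml
# generates each post as <title>/index.html, so URLs need no extension
permalink: /:title/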

Cloning Octopress and Jekyll Bootstrap

Given my very limited knowledge (read: none) of what happens under the cover and my unwillingness to further change the script, I tried the same on a ready-to-use template named Octopress, with the possible intention of migrating my few pages manually (not very satisfying from a software developer’s point of view, but pragmatic enough for my needs).

sudo apt-get install git
git clone https://github.com/imathis/octopress

Unfortunately, there was an error in the script :-( So, in a desperate attempt, I decided to give Jekyll Bootstrap a try.

git clone https://github.com/plusjade/jekyll-bootstrap.git

And lo, I finally got no error, an output, and something to view in the browser at http://0.0.0.0/index.html!

Present situation

I’ve been tweaking and updating the included files (those in the _includes folder), and I’m beginning to get something bearing some similarity to the original layout.

To obtain that, I had to install Pygments, the code prettifier:

sudo apt-get install python-pygments

At this point, there are basically two options: either manually port the other posts and finalize the layout (as well as work toward the original theme), or invest in the configuration process to make it work.

At present, I’m leaning toward the first option. Feedback welcome!

Categories: Technical Tags: