Changing default Spring bean scope

February 4th, 2013

By default, Spring beans are scoped singleton, meaning there’s only one instance per application context. For most applications this is a sensible default; sometimes, though, it isn’t. Such is the case on the product I’m currently working on, which uses a custom scope. I’m not at liberty to discuss the details further: suffice to say that configuring each and every needed bean with this custom scope is very painful.

Since being lazy in a smart way is at the core of a developer’s work, I searched for a way to ease my burden and found it in the BeanFactoryPostProcessor interface. It has a single method – postProcessBeanFactory() – but it gives access to the bean factory itself (which is at the root of the various application context classes).

From this point on, the code is trivial even with no prior experience of the API:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanDefinition;
import org.springframework.beans.factory.config.BeanFactoryPostProcessor;
import org.springframework.beans.factory.config.ConfigurableListableBeanFactory;

public class PrototypeScopedBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

    @Override
    public void postProcessBeanFactory(ConfigurableListableBeanFactory factory) throws BeansException {

        for (String beanName : factory.getBeanDefinitionNames()) {

            BeanDefinition beanDef = factory.getBeanDefinition(beanName);

            // An empty scope means the bean definition didn't set one explicitly
            String explicitScope = beanDef.getScope();

            if ("".equals(explicitScope)) {

                beanDef.setScope("prototype");
            }
        }
    }
}

The final touch is to register the post-processor in the context. This is achieved by declaring it as a simple anonymous bean.
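Such a declaration boils down to a one-liner; something along these lines should do (the package name below is an assumption, adjust it to wherever the class actually lives):

```xml
<!-- No id needed: Spring detects BeanFactoryPostProcessor beans
     and invokes them automatically; the package name is assumed -->
<bean class="com.example.spring.PrototypeScopedBeanFactoryPostProcessor" />
```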


Now, every bean whose scope is not explicitly set will be scoped prototype.

Sources for this article can be found attached in Eclipse/Maven format.

Categories: Java Tags:

DRY your Spring Beans configuration file

January 20th, 2013

It’s often only when discussing with people that something you (or they) take for granted turns out to be a closely-held secret. That’s what happened this week when I tentatively showed a trick during a training session and it started a debate.

Let’s take an example, though the idea behind this can of course be applied to many more use-cases: imagine you developed many DAO classes inheriting from the same abstract DAO Spring provides you with (JPA, Hibernate, plain JDBC, you name it). All those classes need to be injected with a datasource (or a JPA EntityManager, a Spring Session, etc.). At your first attempt, you would declare each DAO in the Spring beans definition file and wire the datasource into every single one of them.
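To make the repetition concrete, such a first attempt typically looks something like this (the DAO class names and the datasource implementation are illustrative, not from the original configuration):

```xml
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
    <!-- url, driver and credentials omitted -->
</bean>

<bean id="customerDao" class="com.example.dao.CustomerDao">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="orderDao" class="com.example.dao.OrderDao">
    <property name="dataSource" ref="dataSource" />
</bean>

<bean id="productDao" class="com.example.dao.ProductDao">
    <property name="dataSource" ref="dataSource" />
</bean>
```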

Notice a pattern here? Not only is this completely opposed to the DRY principle, it’s also a source of errors and hurts future maintainability. Most importantly, I’m lazy and I don’t like to type characters just for the fun of it.

Spring to the rescue. Spring provides a way to make beans abstract. This is not to be confused with the abstract keyword of Java. Though Spring abstract beans are never instantiated, children of these abstract beans inherit the properties of their parent abstract bean. This implies you do need a common Java parent class (though it doesn’t need to be abstract). In essence, you can shorten your Spring beans definition file considerably.
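A sketch of the shortened configuration, using an abstract parent bean named abstractDao (the DAO class names are again illustrative):

```xml
<!-- Abstract bean: never instantiated, it only carries the common property -->
<bean id="abstractDao" abstract="true">
    <property name="dataSource" ref="dataSource" />
</bean>

<!-- Children inherit the dataSource property from their parent -->
<bean id="customerDao" class="com.example.dao.CustomerDao" parent="abstractDao" />
<bean id="orderDao" class="com.example.dao.OrderDao" parent="abstractDao" />
<bean id="productDao" class="com.example.dao.ProductDao" parent="abstractDao" />
```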

The instruction to inject the data source is configured only once, on the abstractDao. Yet Spring will apply it to every DAO configured with it as its parent. DRY from the trenches…

Note: if you use two different data sources, you’ll just have to define two abstract DAOs and set the correct one as the parent of your concrete DAOs.

Categories: Java Tags: ,

The power of naming

January 13th, 2013

Before diving right into the subject of naming, and at the risk of sounding somewhat pedantic, an introduction is in order.

History shows us that naming is so important to the human race that it’s regarded as sacred (or at least magical) in most human cultures. Some non-exhaustive examples include:

  • Judaïc tradition goes to great lengths to search for and compile the names of God. In this regard, each name offers a point of view into a specific aspect of God.
  • The Christian Church gives a name when a new member is brought into the fold. The most famous example is when Jesus names his disciple Simon “Peter” (from petros – rock, in Greek). Other religions also tend to follow this tradition.
  • In sorcery (or more precisely, traditional demonology), knowing a demon’s true name gives power over it and lets the mage bind it.

From these examples, names seem to bestow some properties on the named subject and/or gives a specific insight into it.

Other aspects of naming include communication between people. Without debating whether there really are many Sami words for snow, the fact is that colors are intrinsically cultural: some cultures allow for a whole array of color names, while others only have two (such as the Dugum Dani of New Guinea). Of course, designers and artists who use color regularly have a much broader vocabulary to describe colors than us mere mortals (at least me). In the digital age, colors can even be described exactly by their RGB value.

It’s pretty self-evident that people exchanging information enjoy a much higher signal-to-noise ratio the richer their common vocabulary is, and the smaller the gap between the word and the reality it covers.

Now, how does the above translate into software development practice? Given that naming describes reality:

  • Giving a name to a class (package or method) describes to fellow programmers what the class does. If the name is misleading, time will have to be spent realigning the reality (what it really does) and the given name (what people think it does). On the contrary, the right names will tremendously boost the productivity of developers new to the project.
  • When updating a class (package or method), always consider whether the change should be reflected in its name. With modern IDEs, this won’t cost any significant time while increasing future maintainability.
  • It’s better to have an explicitly incongruous class (package or method) name such as ToBeRenamed than one slightly misaligned with reality (e.g. Yellow to describe #F5DA81), so as to avoid later confusion. This is particularly true when unsure about a newly-created class’s responsibilities.

Let someone much more experienced have the final word:

There are only two hard things in Computer Science: cache invalidation and naming things.
— Phil Karlton

Categories: Development Tags:

Pet catalog for JavaEE 6 reengineered

January 6th, 2013

Some time ago, I published the famed Pet Catalog application on GitHub. It doesn’t seem like much, but there are some hours of work (if not days) behind the scenes. I wanted to write down the objectives of this pet project (notice the joke?) and the walls I hit trying to achieve them, to prevent others from hitting them too.

Objectives were very simple at first: get the Pet Catalog to work under TomEE, the Web Profile compliant application server based on Tomcat. I also wanted to prove it by running automated integration tests, and Arquillian was my tool of choice for that. JavaEE 6, TomEE, Arquillian: this couldn’t be simpler, could it? Think again…

First of all, it may seem strange, but there’s no place on the Internet where you can find the Pet Catalog for JavaEE 6, despite it having served as an example throughout the whole JavaEE evolution; a simple Google search can easily convince you of that. I think there’s a definite lack there, but I’m nothing if not persistent. In order to get the sources, I had to get the NetBeans IDE: the Pet Catalog is available there as an example application.

The second obstacle I had to overcome was that, whatever I tried, I couldn’t find a way to make Arquillian inject my EntityManager into my TestNG test class using the @PersistenceContext annotation. I don’t know what the problem is, but switching to JUnit did the trick. This is a great (albeit dirty) lesson: when the worker has tried everything, maybe it’s time to question the tool.

The final blow was an amazingly stupid but fundamental question: how do you create a datasource in TomEE during a test? For this one I googled the entire Internet, and it took me some time to finally get it, so here it is in all its glory. Just update your arquillian.xml file to add the following snippet:


    <container qualifier="tomee">
        <configuration>
            <property name="properties">
                PetCatalog = new://Resource?type=DataSource
                PetCatalog.JdbcUrl jdbc:hsqldb:mem:petcat
                PetCatalog.JdbcDriver org.hsqldb.jdbc.JDBCDriver
                PetCatalog.UserName sa
                PetCatalog.Password
                PetCatalog.JtaManaged true
            </property>
        </configuration>
    </container>

Finally, I got the thing working and created two tests: one to check that the number of pets was right, and the other to verify the data of a specific item. If (when?) I have time, I will definitely check the Persistence and Graphene extensions of Arquillian. Or you’re welcome to fork the project.

Categories: JavaEE Tags: , ,

Do we really need the DAO?

December 23rd, 2012

This may seem like a stupid question, especially after years of carefully creating DAOs. Yet these thoughts arose in my mind when I watched the rerun of Adam Bien’s Real World JavaEE talk on Parleys.

In his talk, Adam says he doesn’t use DAOs anymore – even though he keeps one ready so as to please architects (them again). My first reaction was utter rejection: layered architecture is at the root of decoupling, and decoupling is a requirement for an evolvable design. Then I digested the information and thought: why not?

Let’s have a look at the definition of the DAO design pattern from Oracle:

Use a Data Access Object (DAO) to abstract and encapsulate all access to the data source. The DAO manages the connection with the data source to obtain and store data.

Core J2EE Patterns – Data Access Object

Back in the old days, either with plain JDBC or EJB, having DAOs was really about decoupling. Nowadays, when you think about it, isn’t that what JPA’s EntityManager is all about? In effect, most JPA DAOs just delegate their base methods to the EntityManager.

public void merge(Person person) {
    em.merge(person);
}

So, for basic CRUD operations the point seems a valid one, doesn’t it? That would be a fool’s conclusion, since the above snippet doesn’t take most real cases into account. What if a Person has an Address (or more than one)? In web applications we don’t have the whole object graph, only the Person to be merged, so we wouldn’t merge the entity but load it by its primary key and update the fields that were likely changed in the GUI layer. The above snippet should probably look like the following:

public Person merge(Person person) {
    Person original = em.find(Person.class, person.getId());
    original.setFirstName(person.getFirstName());
    original.setLastName(person.getLastName());
    em.flush();
    return original;
}

But what about queries? Picture the following:

public class PersonService {

    public List<Person> findByFirstNameAndLastName(String firstName, String lastName) {

        CriteriaBuilder builder = em.getCriteriaBuilder();

        CriteriaQuery<Person> select = builder.createQuery(Person.class);

        Root<Person> fromPerson = select.from(Person.class);

        Predicate equalsFirstName = builder.equal(fromPerson.get(Person_.firstName), firstName);

        Predicate equalsLastName = builder.equal(fromPerson.get(Person_.lastName), lastName);

        select.where(builder.and(equalsFirstName, equalsLastName));

        return em.createQuery(select).getResultList();
    }
}

The findByFirstNameAndLastName() method clearly doesn’t use any business DSL, just plain query DSL. In this case, and whatever the query strategy used (JPA-QL or Criteria), I don’t think it would be wise to use such code directly in the business layer, so the DAO still has its uses.

Both these examples lead me to think DAOs are still needed in the current state of things. Anyway, asking this kind of question is of great importance, since patterns tend to stay around even when technology improvements make them obsolete.

Categories: Java Tags:

Short-term benefits in the Information Service business

December 16th, 2012

It’s a fact that companies and their management have money in mind. However, there are many ways to achieve this goal. In my short career as an employee, I’ve been faced with two strategies: the short-term one and the long-term one.

Here are some examples I’ve been directly confronted with:

  • A manager sends a developer on an uninteresting mission far from home. At first he tells the developer it won’t last more than 3 months, but in effect it lasts more than a year. Results: the manager meets his objectives and gets his bonus; the developer is demotivated and talks very negatively about the mission (and about the manager and the company as well)
  • A business developer bids for and wins a maintenance contract on a proprietary, obsolete technology. Results: the business developer pockets a hefty chunk of the contract money; the developers tasked with the maintenance become increasingly irrelevant regarding mainstream technology
  • On joining a new company, a developer gets to choose his laptop from a catalog. The catalog contains real-life (read “not outdated”) laptops (such as a MacBook Pro) with real-life hardware (8 GB RAM). Results: the developer is highly motivated from day one; the company pays a little more than it would for lower-grade hardware
  • A developer informs his boss that there’s been an error in his latest payslip: the overtime he did the previous month was paid a second time. The boss says it’s fine and that the bonus is now permanently included in his salary. Results: the developer is happily surprised and tells his friends; the boss pays a little extra salary each month.

What these experiences have in common is a trade-off between short-term gain and long-term (or middle-term) gain.

In the first two examples, the higher-up meets his objectives, thus ensuring short-term profit. In the first example this is balanced by trust issues (and negative word-of-mouth); in the second, by a sharp decrease in the human-resources asset. On the contrary, the last two examples balance marginally higher costs with loyalty and good publicity from the developer.

I understand that stakeholders want short-term profit. What I fail to understand is that in the long run, it decreases company value: when all you have in your IS company is a bunch of demotivated developers looking to jump out of the train, you can have all the contracts in the world, you won’t be able to deliver, period. Of course you could invest massively in recruiting, but what’s the added value?

Keeping your developers happy and motivated has a couple of long-term consequences that have definite value:

  • Developers will speak highly of the company, thus making the company more attractive to fellow developers
  • Developers will speak highly of the company, thus building trust between the company’s customers and the company itself
  • Developers will be more motivated, thus more productive by themselves
  • Developers will be more motivated, hence more collaborative (and so even more productive)
  • etc.

The problem lies in the fact that those benefits are hardly quantifiable or translatable into a cash equivalent. Average companies will likely ignore them, while smart ones will build upon them. Are you working for the first kind or the second?

Note 1: this article reflects my experience in France and Switzerland. Maybe it’s different in other European countries and North America?
Note 2: in Paris (France), there’s a recent trend of companies caring more about long-term profits. I can only hope it will get stronger.

Categories: Miscellaneous Tags:

The cost and value of knowledge

December 2nd, 2012

I was a student in France when I discovered the Internet and the World Wide Web. At the time, I didn’t realize the value of the thing, even though I taught myself HTML from what was available then (I don’t remember the sites – they have probably disappeared into Limbo).

When I began my professional life, a project director of mine joked about the difference between the developer and the expert: “The expert knows about the cabinet where the documentation is”. Some time later, I found this to be completely untrue, as one of my senior developers constantly found the answers to my questions: he used Google through and through, and it helped me tremendously. The cabinet had become the entire Internet, and everything could be found given time and toil. I soon became a master of the Google query.

At that time, when I had a problem, I asked Google, found the right answer and reproduced it on my project. This was all good, but it didn’t teach you much – script kiddies can do the same. Around the same time, however, I found an excellent site for French Java developers: “Développons en Java” by Jean-Michel Doudoux. This site was the first I knew of where you could learn something step by step, not just reproduce a recipe. I eagerly devoured the whole thing, going as far as copying it onto my computer to be able to read it offline (a very common occurrence then). This was made possible by M. Doudoux’s passion and will to make his knowledge available!

Even better, a recent trend, of which Coursera has been the best marketer, sees well-established universities make standard in-the-room courses available… for free. For the first time in history, knowledge has become a costless commodity, something that can be compared to Gutenberg’s press during the Renaissance. Where previously books had to be copied by hand, thus preventing diffusion of the understanding they contained, the printing press made books available everywhere at relatively low cost.

The first natural reaction is the “Wow!” effect. I personally am very interested in foreign languages, Japan and psychology: I could learn so much about any (or all) of these subjects without even leaving my home. Moreover, so many people are kept away from education, not by a lack of intellectual prowess, but just because they weren’t born rich enough, or were born in a part of the world without a university. Imagine if all people could learn about humanities, democracy, engineering, whatever. This could well provoke a new Age of Enlightenment, this time not only in Europe but encompassing the whole globe.

However, there’s a negative side to this coin. When something is made free, people tend to give it no value, and waste is sure to follow. There are plenty of examples everywhere; the worst I know of is the free water policy for Qatari households. Thus, we have to be very careful about that – knowledge has been acquired with much effort, period. If it’s made freely available, there’s a risk that people treat it as a common commodity.

Worse, both public and private sectors invest in research and knowledge. If it isn’t sold for a profit (even a marginal one), what are the odds that both (or at least the latter) would continue their investments? The business model for such services has yet to be found. A hint could be recruiting: if the course is one of the best, recruiters might be interested in getting their paws on the contact data of those who passed with flying colors.

My conclusion to these points is that providing people with free knowledge is something very commendable, but it shouldn’t lower its value, only its cost.

Categories: Miscellaneous Tags:

Drupal 7 to Jekyll, an epic journey

November 25th, 2012

There’s a recent trend in blogging that consists of dropping PHP platforms such as WordPress and Drupal in favor of static HTML pages generated ahead of time.

Static blogs have many advantages over PHP engines:

  • They’re much faster since there’s no computation overhead at runtime for composing the rendered page
  • There’s no security issues regarding SQL injection
  • Finally, generating the site removes the need for a staging area with possibly mismatched configurations

morevaadin.com is one of my sites, built with the Open Publish distribution on top of Drupal 7. It’s designed only to display articles about the Vaadin framework: there’s no comment management, no interaction, no fancy stuff involved whatsoever. In essence, there’s no need for a full Drupal platform; a bunch of HTML files with some JavaScript would be more than enough. Here’s my journey porting the existing site, with the different steps I went through to reach my goal (spoiler: at the time of this writing, it’s not a success).

First step

Having no prior experience, I chose Jekyll as my target platform, because it’s supported by GitHub.

Before doing anything, I went through the motions of creating an Ubuntu VM with the Ruby and RubyGems packages.

sudo apt-get install ruby
sudo apt-get install rubygems

Blog migration explained

My first strategy was to follow the instructions from this blog post, which seemed to exactly match my target. In essence, I was to get the SQL dump from my Drupal site, execute it on a local MySQL database and run a Ruby script wrapping a SQL one.

The problem is that the proposed SQL script was not adapted to the structure of my database: I fixed the prefixes and the node types, but I still had no success running the script.

Other scripts

I found other scripts on the Internet, but none was adapted to the structure of my database, and I couldn’t get any of them running despite my best efforts.

Eureka!

Back to basics: given that no hand-crafted script was able to run on my database, I decided to look further into the Jekyll documentation and found there was a gem associated with a Drupal 6.1 migrator.

So, I installed Jekyll – a Ruby gem – and I could finally run the script.

sudo gem install jekyll
ruby -rubygems -e 'require "jekyll/migrators/drupal"; Jekyll::Drupal.process($MYSQL_INSTANCE, $LOGIN, $PASSWORD)'

At this point, I got no errors running the Ruby script, but no output whatsoever either. However, I tweaked the /var/lib/gems/1.8/gems/jekyll-0.11.2/lib/jekyll/migrators/drupal.rb script a bit: I commented out the lines concerning the node and entity types (30-31), as well as everything related to tags (lines 53-60 and 69), and I finally obtained some results: a _drafts and a _posts folder, the latter full of .md files!

Running Jekyll with jekyll --server finally generated a _site folder, with a load of pages.

Problems

There are some problems with what I got:

  • Some pages have unparseable sections (REXML could not parse this HTML/XML)
  • Pages have no extension despite being HTML, so the browser doesn’t recognize the MIME type when they’re served by Jekyll. The normal way would be to have an index.html under folders named after each post
  • There’s no index page

Cloning Octopress and Jekyll

Given my very limited knowledge (read: none) of what happens under the cover, and my unwillingness to further change the script, I tried the same on a ready-to-use template named Octopress, with the possible intention of migrating the few pages I had by hand (not very satisfying from a software developer’s point of view, but pragmatic enough for my needs).

sudo apt-get install git
git clone https://github.com/imathis/octopress

Unfortunately, there was an error in the script :-( So, in a desperate attempt, I decided to give Jekyll Bootstrap a try.

git clone https://github.com/plusjade/jekyll-bootstrap.git

And lo, I finally got no error, an output, and something to view in the browser at http://0.0.0.0/index.html!

Present situation

I’ve been tweaking and updating the included files (those in the _includes folder), and I’m beginning to get something bearing some similarity to the original layout.

To obtain that, I had to install Pygments, the code prettifier:

sudo apt-get install python-pygments

At this point, there are basically two options: either manually port the other posts and finalize the layout (as well as work toward the original theme), or invest in the configuration process to make it work.

At present, I’m leaning toward the first option. Feedback welcome!

Categories: Technical Tags: ,

Devoxx 2012 – Final day

November 17th, 2012

— Devoxx last day, I only slept 2 hours during the previous night. Need I really say more? —

Clustering your applications with Hazelcast by Talip Ozturk

Hazelcast is an OpenSource product used by many companies (even Apple!).

HashMap is a non-thread-safe key-value implementation. If you need thread safety, you’ll use ConcurrentHashMap. When you need to distribute your map across JVMs, you use Hazelcast.getMap(), but the rest works just as with the Map interface (even though the returned type is not the plain interface itself).

— a demo is presented during the entire session —
Hazelcast lets you add nodes very easily, basically just by starting another node (and it takes care of broadcasting).

Hazelcast alternatives include Terracotta, Infinispan and a bunch of others. However, it has some unique features: for example, it’s lightweight (a 1.7 MB jar) without any dependency. The goal is to make distributed computing very easy.

In a Hazelcast cluster there’s a master, but every node knows the topology of the cluster, so that when the master eventually dies, its responsibilities can be reassigned. Data is backed up on other nodes. A typical Hazelcast topology includes data nodes and lite members, which do not carry data.

The Enterprise edition of Hazelcast is the Community Edition plus Elastic Memory configuration and JAAS security. It’s easy to use Hazelcast in tests, thanks to the Hazelcast.newHazelcastInstance(config) statement. Besides, an API is available to query the topology. Locking is done either globally on the cluster or on a single key. Finally, messaging is also supported, though not through the JMS API (note that messages aren’t persistent). A whole event-based API is available to listen to Hazelcast-related events.

A limitation of Hazelcast is that objects put into it have to be serializable. Moreover, if an object taken from Hazelcast is then modified, you have to put it back so that it’s also updated inside the cluster.

Hazelcast doesn’t persist anything, but it eases the task of doing it on your own through the MapStore and MapLoader interfaces. There are two ways to store: write-behind, which is asynchronous, and write-through, which is synchronous. Reads are read-through, which is basically lazy loading.

About long GC-pauses:

Killing a node is good, but letting it hang around like a zombie is not

There’s a plugin to support Hibernate second-level cache and it can also be tightly integrated with Spring.

mgwt – GWT goes mobile by Daniel Kurka

The speaker’s experience has taught him that mobile apps are fated to die. In fact, he has used and developed many mobile applications. The problem is that you’re either limited by a too-small number of applications, or the bar goes higher and you cannot find them anymore. In the past, Yahoo put web pages into a catalog, while Google only crawled and ranked them. App stores look like crap: they don’t provide a way to search for the applications you’re interested in. Besides, when you’re looking for a public transport schedule, you have to install the application of the specific company. Worse, some sites force you to install the application instead of just providing the needed data.

When developing mobile applications, you have to do it for a specific platform. As Java developers, we’re used to “Develop Once, Run Anywhere”. Also, there’s already such a universal platform: the browser. PhoneGap tries to solve the problem by providing HTML and JavaScript as the sole languages. Now you get “Build Once, Wrap with PhoneGap, Run Anywhere”.
GWT is a framework where Java is compiled into JavaScript instead of bytecode. Better, it’s compiled into optimized JavaScript, and that’s important because mobile access may drain your battery. Finally, there’s a GWT-PhoneGap integration, so you can write awesome webapps, wrap them with PhoneGap and release them on a store. A PhoneGap application is composed of your app (HTML, JS and CSS) and plugins, which are native and let you access mobile device features. Two important things to remember: data passed between app and device is exchanged as strings, and calls are asynchronous. PhoneGap uses W3C standards (when possible); it’s an intermediate that will have to die when the mobile web is finally there.

Yet PhoneGap doesn’t solve the core “too many apps” problem. GWT’s compiler compiles Java into JavaScript (a single file per browser). Remember that GWT’s code is optimized in ways that would be hard to achieve manually: that’s very important when running on mobile, due to battery constraints. mgwt is about writing GWT applications aimed specifically at mobiles. In order to create great apps, be wary of performance. There are several areas of performance:

  • Startup performance: a matter of downloading, parsing, executing and rendering
  • Runtime performance, which is mostly impacted by layout. As a rule, never leave native code; if you use a JavaScript-based layout, you’re doing just that. CSS3 provides all the means necessary for layout. Likewise, animations built in JavaScript are bad; prefer CSS instead.

Both are taken into account by the GWT compiler. Note that mgwt provides plenty of themes, one for each mobile platform.

— And so ends the Devoxx 2012 edition for me. It was enriching as well as tiring. You can find the recaps of the previous days in the preceding posts. —

Categories: Event Tags:

Devoxx 2012 – Day 4

November 15th, 2012

Unitils: full stack testing solution for enterprise applications by Thomas de Rycke and Jeroen Horemans

There are different types of tests: unit tests (testing in isolation), integration tests (testing a subsystem) and system tests (testing the whole system). The higher you get in the hierarchy, the longer a test takes to execute, so you should have a lot of UT, some IT and just a few system tests.

Unit tests have to be super fast to execute and easy to read and refactor, and they should tell you what went wrong without debugging. On the other hand, you have to write a lot of them. Not writing unit tests will make you faster in the first phase, but when the application grows, their absence drastically slows your productivity down. In enterprise settings, you’re better off with UT.

In order to avoid an explosion of testing frameworks, Unitils testing framework was chosen. Either JUnit or TestNG can be hooked into Unitils. The core itself is small (so as to be fast), the architecture being based on modules.

Let’s take for example a Greetings Card Sending application. — the demoed code is nicely layered, with service and DAO layers, and uses Spring autowiring and transactions through annotations — Unitils is available from Maven, so adding it to your classpath is just a matter of adding the right dependency to the POM. Unitils provides some annotations:

  • @TestedObject to set on the target object
  • @Mock to inject the different collaborators (using EasyMock)
  • @InjectByType to hint at the injection strategy for Unitils to use

Even though testing POJOs (as well as DTOs) is not very cost-effective, a single test class can be created to check for JavaBean conventions.
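To illustrate the idea, here is a hand-rolled sketch of such a conventions check (this is not Unitils’ actual module, just a minimal reflection-based version): each writable property is round-tripped through its setter and getter.

```java
import java.beans.BeanInfo;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class JavaBeanConventionChecker {

    // Round-trip a sample value through the named property's setter and getter,
    // failing if the getter doesn't return exactly what the setter received
    public static void checkProperty(Object bean, String name, Object sample) throws Exception {
        BeanInfo info = Introspector.getBeanInfo(bean.getClass());
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            if (pd.getName().equals(name) && pd.getWriteMethod() != null) {
                pd.getWriteMethod().invoke(bean, sample);
                Object read = pd.getReadMethod().invoke(bean);
                if (!sample.equals(read)) {
                    throw new AssertionError("Property " + name + " violates JavaBean conventions");
                }
                return;
            }
        }
        throw new AssertionError("No writable property named " + name);
    }

    // Sample DTO, used only for demonstration
    public static class PersonDto {
        private String firstName;
        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
    }

    public static void main(String[] args) throws Exception {
        checkProperty(new PersonDto(), "firstName", "John");
        System.out.println("firstName round-trips correctly");
    }
}
```

In a real test class, one such check per property (or a loop over all properties) is enough to catch getters and setters that don’t match.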

Unit testing becomes trivial if you learn how to write testable code: design good OO with dependency injection, put seams in your code, and avoid global state (time, for example) at all costs. What about integration and system tests? They tend to get complex, and you’ll need a “magic” abstract superclass. Moreover, when an IT fails, you don’t know whether the code is buggy or the IT itself is. Besides, you’ll probably need to learn other frameworks (JSF comes with its own testing framework, JSFUnit).

As software developers, the same problems come back:

  • Database: is my schema up-to-date? How do I manage referential integrity when setting up and tearing down data?
  • Those operations are probably slow
  • Finally, how to deal with external problems?

Unitils provides a solution to each of these problems through a dedicated module for each. For example, a WebDriver module lets you use Selenium through Unitils, a DBUnit module does the same for DBUnit, and a Mail module creates a fake SMTP server.
— speaker tells us that with Unitils, there’s no need for a magic abstract superclass anymore: IMHO, it has been replaced by a magic black-box testing framework —
Unitils benefits from a whole set of modules, each covering a single dedicated area: Swing, Batch, File, Spring, Joda Time, etc.

Modern software development anti-patterns by Martijn Verburg and Ben Evans

There are 10 anti-patterns to cover:

Each anti-pattern was presented in three parts: the diabolical programmer’s take, the voice of reason, and what to do about it.

  • Conference-Driven Delivery. Diabolical: real pros hack code and write their slides minutes before their talks. Reason: PPPPPP. What to do: to improve your presentation skills, rehearse in front of the mirror; call it Test-Driven Presentations. Begin with a supportive crew, then grow to larger and larger audiences.
  • Mortgage-Driven Development. Diabolical: so that others can’t steal your job, don’t write any documentation; even better, control your source code, i.e. keep the source on a USB key. Reason: don’t succumb to fear, proper communication is key. Developers who communicate have the most success; if you want one of your ideas to go into source control, you’ll have to communicate about it.
  • Distracted by Shiny. Diabolical: always use the latest tech, it’ll put you ahead; consider using the latest version of Eclipse, packed full of plugins. Reason: prototype and evaluate, learn to separate myth from reality. Web frameworks, for example, are an area where you should tread carefully. What to do: code reviews are an asset, especially if everyone does them, so as to share best practices; brown-bag sessions are good for testing every possible option.
  • Design-Driven Design. Diabolical: UML code generators are awesome; print your UML diagrams on gigantic sheets, put them on the wall, and if someone asks you a question, reply that it’s obvious. Reason: design for what you need to know, don’t try to do too much upfront; successful teams navigate between the methodologies that suit them. What to do: less source code. Pay your junior developers to produce code and your seniors to remove it; the less code, the fewer chances of bugs. Be wary of maintainability, though (Clojure is a good example).
  • Pokemon Pattern. Diabolical: use all of the GoF patterns. Reason: the appropriate design pattern is your friend. Using a design pattern is adding to a language a feature it is missing: Java concurrency works, but since mutability is deeply rooted, it’s hard; you could use actors (as in Scala), but you’d definitely be adding a feature. What to do: whiteboard, whiteboard and whiteboard, to communicate with your colleagues; as a tip, carry whiteboard markers at all times (or alternatively pen and paper).
  • Tuning by Folklore. Diabolical: performance-tune by lighting black candles, removing string concatenations, adding DB connections and so on. Reason: measure, don’t guess; it’s the best approach to improving performance, and if it sounds boring, that’s because it is. Remember that the problem is probably not where you think it is. Shameless plug: go to jClarity.
  • Deity. Diabolical: all the code in one file is easier to search; VB6 is one of the most popular languages because of it. Reason: discrete components based on the SOLID principles: Single responsibility, Open to extension, Liskov substitution, etc. What to do: discuss with your team, don’t go screaming WTF. Use naming based on the domain model so that other developers can read your code.
  • Lean Startup Ninja. Diabolical: a ninja ships their code when it compiles! Reason: continuous delivery is a business enabler. What to do: tools are available to achieve continuous delivery: Ant/Gradle/Maven + Jenkins + Puppet + Vagrant.
  • CV++. Diabolical: the TIOBE index is your friend. Reason: just be good at the principles. What to do: try different languages, because what you learn in another language can be applied to the one you’re currently programming in. Being a software developer is better than being a programmer: the former understands the whole lifecycle of an application.
  • I Haz Cloud. Diabolical: nothing can go wrong if you push your applications into the cloud. Reason: evaluate and prototype. What to do: there are many providers available (EC2, Heroku, JElastic, CloudBees, OpenShift, etc.); again, a brown-bag session is a great way to start.
  • Can Haz Mobile. Diabolical: if you go into the mobile game, you’ll be a millionaire.
  • HTML5 for the Win. Diabolical: UX is so important. Reason: what are your UX needs? Think about what pisses you off…
  • Big Data. Diabolical: MongoDB is web scale. Reason: what type of data are you storing? Non-functional requirements matter. What to do: business doesn’t care about distributed databases, but about its reports!

Simplicity in Scala Design by Bill Venners

The main thing is to show how to:

  • Design to simplify tasks for your users
  • Design for busy teams. Don’t assume your users are expert in your libraries. It’s something like driving a car you’ve never driven before: it should be so obvious users shouldn’t have to check the documentation
  • Make it obvious, guessable or easy to remember, in that order of preference. The process of choosing plusOrMinus in ScalaTest is a good example.
  • Design for readers, then writers, in that order. Choosing the name for invokePrivate is a good example: better to use something semantically significant than some symbol.
  • Make errors impossible (or difficult at least). In particular, use Scala’s rich type system to achieve this. Using the compiler to flag errors is the best option.
  • Exploit familiarity. James Gosling used familiar C++ keywords in Java, but without memory management so as to leverage this principle. So is the way ScalaTest is using JUnit constructs but enforcing a label on each test.
  • Document with examples
  • Minimize redundancy. The Zen of Python states:

    “There should be one and preferably only one obvious way to do it.”

    A good practice would be to put a Recommended Usage block in each class.

  • Maximize consistency. ScalaTest offers a GivenWhenThen trait as well as the Suite class, but when you mix in the latter with the former, you do not need to set the name of the test. However, enforcing the Single Responsibility Principle and introducing Spec solves the problem.
  • Use symbols when your users are already experts in them. Scala provides + instead of plus, and * instead of multiply. On the contrary, in SBT some symbols prove to be problematic: ~=, <<=, <+=, <<++=. Likewise, it’s better to use foldLeft() than /:. However, there are situations where words would not work at all (as in algebra).
  • Use a functional style by default, but switch to an imperative style where it improves usability. For example, the test function in a suite works by side effect: calling it registers the test typed inside the braces.
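
The “make errors impossible through the type system” idea above translates to any statically typed language. A tiny Java sketch (all names invented for illustration): wrapping raw primitives in dedicated types turns unit mix-ups into compile-time errors.

```java
public class TypedUnits {

    // Dedicated wrapper types: the compiler now distinguishes currencies
    // that would otherwise both be plain longs.
    static final class Euros {
        final long cents;
        Euros(long cents) { this.cents = cents; }
    }

    static final class Dollars {
        final long cents;
        Dollars(long cents) { this.cents = cents; }
    }

    // Only accepts Euros: passing Dollars is flagged by the compiler
    static Euros addEuros(Euros a, Euros b) {
        return new Euros(a.cents + b.cents);
    }

    public static void main(String[] args) {
        Euros total = addEuros(new Euros(500), new Euros(250));
        System.out.println(total.cents); // prints 750
        // addEuros(new Euros(500), new Dollars(250)); // does not compile
    }
}
```

Scala’s richer type system pushes this much further, but the principle of using the compiler to flag errors is the same.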

Remember what tools are for: they are for solving problems, not for finding problems to solve.

Re-imagining the browser with AngularJS by Igor Minar and Misko Hevery

— I attend this session in order to open my mind about HATEOAS applications. I so do hope it’s worth it! —
Browsers were really simple at one time. Sixteen years later, browsers offer many more features, going as far as allowing people to develop through a browser! For a user, this is good; for a developer, not so much, because of the complexity.

In a static page, you tell the browser what you want. In JavaScript, you have to take care of how browsers work and of the differences between them (hello IE). As an example, remember rounded corners before CSS border-radius? In HTML and the DOM API, there has been no comparable improvement. Even if you use jQuery, you’re doing imperative programming. The solution to look for is declarative! Data binding helps with this situation: if your model changes, the view is automatically updated, and with two-way data binding, updates in the view also change the underlying model. Advanced templating can be used with collections. Actually, 90% of JavaScript code is there only to synchronize between the model and the view.
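
To make the data-binding point concrete outside the browser, here is a minimal observer sketch in Java (this is not AngularJS code; all names are invented): the “view” registers a listener and is updated automatically whenever the model changes, which is exactly the synchronization work a binding framework takes off your hands.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class BindingSketch {

    // A model property that notifies all listeners on every change
    static class Property<T> {
        private T value;
        private final List<Consumer<T>> listeners = new ArrayList<>();

        void bind(Consumer<T> listener) {
            listeners.add(listener);
        }

        void set(T newValue) {
            value = newValue;
            listeners.forEach(l -> l.accept(newValue)); // push change to the "view"
        }

        T get() {
            return value;
        }
    }

    public static void main(String[] args) {
        Property<String> name = new Property<>();
        StringBuilder view = new StringBuilder();   // stand-in for a DOM node
        name.bind(v -> view.replace(0, view.length(), "Hello " + v));
        name.set("Devoxx");
        System.out.println(view); // prints Hello Devoxx
    }
}
```

Two-way binding would simply add the reverse path, with the view pushing its edits back into the Property.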

AngularJS takes care of this today, and a specification is in the works, since this should really be part of how browsers work. HTML can be verbose; tabs are a good example: they are currently built out of div elements while they should be a tab element. More generally, AngularJS brings such components to the table. The Web Components specification is an attempt at standardizing this kind of feature.
— here comes a demo —
AngularJS works by importing the needed js files, creating a module in JS and adding custom attribute to HTML tags.

Testacular is a JavaScript testing framework adapted to AngularJS.

Apache TomEE, JavaEE 6 Web Profile on Tomcat by David Blevins

Apache TomEE tries to solve the problem where you start with Tomcat, then hit a wall and have to move to another application server. TomEE is a certified Web Profile stack: it is Tomcat plus the Apache libraries needed to bridge the gap. Its core values are being small, being certified and staying Tomcat.

What is the Web Profile anyway? In order to counter the bloated image of JavaEE, the v6 specs introduced a profile keeping only about half of the full JavaEE specs (12 out of 24). Things that are not included:

  • CMP
  • CORBA
  • JAX-RPC
  • JAX-RS
  • JAX-WS
  • JMS
  • Connectors

The last four are especially missed, so TomEE comes in three flavors: Web Profile including JavaMail (certified), JAX-RS (certified), and Plus, which includes JAX-WS, Connectors and JMS (not certified). Note that JavaEE 7 will certainly change those distributions, since profiles themselves will change. TomEE began in 2011 with 1.0.0.beta 1, and October saw the release of 1.5.0. TomEE is known for its performance, but also for its lack of extended support.
— here comes the TomEE demo, including its integration with Eclipse (which is exactly the same as for Tomcat bare —
TomEE is just Tomcat with a couple of JARs in both lib and endorsed, extra configuration in conf and a launcher for each OS in bin. There’s also a Java agent (— for some reason I couldn’t catch —).

Testing is done with Arquillian and runs within an hour. Certification tests are executed on Amazon EC2 instances, each with 613MB of memory at most (the whole run takes a hundred-plus hours): note that the TCK passes with the default JVM memory settings. The result is quite interesting: a 27MB download that takes only 64MB of memory at runtime. It would be a bad idea to integrate all the stacks yourself, not only because it’s a waste of time, but also because there’s a gap between aggregating all components individually and the integration done in TomEE.

Important features of TomEE include integration with Arquillian and Maven. A key point of TomEE is that deployment errors are all printed out at the same time, and do not force you to correct them one by one, as is the case with other application servers.

Categories: Event Tags: