
Critical analysis of frameworks comparison

Let me first say I was not at Devoxx 2010. Yet I heard about Matt Raible’s Comparing JVM Web Frameworks. Like many, when I read the final results, I was very surprised that my favorite framework (Vaadin, in my case) was not ranked first. I passed through all the stages of grief, then finally came to realize the presentation itself was much more interesting than the matrix. The problem lies not in the matrix, but in the method used to create it. Do not misunderstand me: Matt is very courageous to step into the light and launch the debate. However, IMHO, there are several points I would like to raise. Note that even though Matt’s work was the spark for this article, the same can be said of every matrix whose aim is to rank frameworks.

Numbers

The matrix uses plain, cold numbers to calculate ranks. As such, there’s a scientific feeling to the results. But it’s only that, a feeling, because each and every grade can be challenged. How do you assign them? Let’s take the multi-language criterion: it seems the maximum grade (1) is given when the framework supports Java, Groovy and Scala. From what I understand, Struts is available in JAR format, which can be called from any JVM language. So why the 0.5 for Struts?

On the other hand, I personally would assign brand new frameworks (like Play and Lift) a 0 for the degree-of-risk criterion. I would also give a flat 0 to JSF 2. It all depends on your vision.

Perimeter

This one is short: what’s the difference between the ‘Developer availability’ and ‘Job trends’ criteria? The former is the snapshot and the latter the trend? Then why don’t the ‘Plugins/addons’ or ‘Documentation’ criteria get the same treatment?

Criterion weight

Why in God’s name are all criteria assigned the same weight? What if I don’t care whether my application can easily be localized? What if I don’t have to support mobile? What if scalability is not an issue? Weighting the criteria would give entirely different results. Of course, the problem then lies in assigning the right weights, and we’re back to point 1.
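To illustrate how much weighting matters, here is a minimal sketch of a weighted scoring matrix. The criteria names, grades and weights below are purely illustrative, not taken from Matt’s actual matrix: two frameworks that tie under equal weights can swap places as soon as one criterion is zeroed out.

```java
import java.util.Map;

public class WeightedScore {

    // Weighted sum of grades: criteria missing from the weight map count as 0.
    static double score(Map<String, Double> grades, Map<String, Double> weights) {
        double total = 0.0;
        for (var e : grades.entrySet()) {
            total += e.getValue() * weights.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        // Hypothetical grades (0 to 1) for two unnamed frameworks.
        Map<String, Double> frameworkA = Map.of("i18n", 1.0, "mobile", 0.5, "scalability", 0.5);
        Map<String, Double> frameworkB = Map.of("i18n", 0.0, "mobile", 1.0, "scalability", 1.0);

        // Equal weights, as in the original matrix: both score 2.0, a tie.
        Map<String, Double> equal = Map.of("i18n", 1.0, "mobile", 1.0, "scalability", 1.0);
        System.out.println(score(frameworkA, equal)); // 2.0
        System.out.println(score(frameworkB, equal)); // 2.0

        // A project that doesn't care about localization: B now clearly wins.
        Map<String, Double> noI18n = Map.of("i18n", 0.0, "mobile", 1.0, "scalability", 1.0);
        System.out.println(score(frameworkA, noI18n)); // 1.0
        System.out.println(score(frameworkB, noI18n)); // 2.0
    }
}
```

The ranking flips with a single weight change, which is exactly why a one-size-fits-all matrix cannot answer a context-dependent question.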

Context, context and more context

My previous article advised one to think in contexts. This stays true: if you’re located in Finland, I bet ‘Job trends’ or ‘Developer availability’ shouldn’t be a concern for managers wanting to start a Vaadin project. In contrast, in Switzerland, I don’t know many people mastering (or even knowing anything about) JSF 2 or Spring MVC.

Requirements

I don’t think anyone should choose a single framework and be done with it. Think about it: if you choose Flex, you won’t be able to run on the iPhone, whereas if you choose a traditional approach, you won’t be able to run your application offline. Differing requirements mean you should have a typology of possible use cases and a framework ready for each one.

My own experience

I was confronted with the same task when we had to choose a JavaScript framework. We wrote up the candidate frameworks, listing the perceived pros and cons of each, but deliberately did not rank them. IMHO, this is more than enough to let you choose the right application framework, depending on your requirements and your context.

Conclusion

On the persistence layer, Hibernate and EclipseLink are the leaders. For Dependency Injection, Spring is the de facto standard. For the presentation layer, the problem is not a lack of choice but the abundance of alternatives, many of which have strong pros and cons. While waiting for the perfect framework that will solve each and every one of our problems, we should settle for the one best matched to the requirements and context at hand.

  1. December 6th, 2010 at 22:45 | #1

    I agree that the numbers need explanations and the criteria should be weighted. I documented why I gave each framework the rating I did on my blog. Hope this helps clarify things.

    http://raibledesigns.com/rd/entry/how_i_calculated_ratings_for
