Archive

Posts Tagged ‘web’

Quick evaluation of Twitter Bootstrap

May 13th, 2012

I must admit I suck at graphical design. It’s one of the reasons that drew me to Flex and Vaadin in the first place: out of the box, you get an application that is pleasing to the eye.

Using one of these technologies is not possible (nor relevant) in every context, and since I have a strong interest in UI, I regularly have a look at other alternatives for building clean-looking applications. The technology I studied this week is Twitter Bootstrap.

Bootstrap is a lightweight client-side framework that comes (among other things) with pre-styled “components” and a standard layout. Both are completely configurable, and you can download the customized result here. From a technical point of view, Bootstrap offers a CSS file and an optional JavaScript file that relies on jQuery (both are also available in minified flavor). I’m no big fan of fancy client-side JavaScript, so what follows focuses on what you can do with plain CSS.

Layout

Bootstrap offers a 940px-wide canvas, divided into 12 columns. What you do with those columns is up to you: most of the time, you’ll probably want to group some of them. This is easily achieved with simple CSS classes; in fact, in Bootstrap, everything is done with selectors. For example, the following snippet (using the row and span* grid classes) defines a main column that is twice the width of its left and right sidebars:

<div class="row">
    <div class="span3">Left sidebar</div>
    <div class="span6">Main</div>
    <div class="span3">Right sidebar</div>
</div>

A provided feature worth mentioning is responsive design: in essence, the layout adapts itself to the screen size, an important advantage considering the fragmentation of today’s user agents.

Components

Whatever application you’re developing, chances are high that you’ll need some components, like buttons, menus, fields (in forms), tabs, etc.

Through CSS classes, Bootstrap offers a bunch of components. For native HTML elements, it applies a visually appealing style to them. For components HTML lacks (such as menus and tabs), it achieves the rendering by styling standard markup.

Conclusion

I tried to remake some pages of More Vaadin with Bootstrap and, all in all, the result is not unpleasing to the eye. In cases where I can’t use Vaadin, I think I would be a strong proponent of Bootstrap when creating an application from scratch.

You can find the sources of my attempt here.


Critical analysis of frameworks comparison

December 5th, 2010

Let me first say I was not at Devoxx 2010. Yet, I heard about Matt Raible’s Comparing JVM Web Frameworks talk. Like many, when I read the final results, I was very surprised that my favorite framework (Vaadin, in my case) was not ranked first. I went through all the stages of grief, then finally came to realize that the presentation itself was much more interesting than the matrix. The problem lies not in the matrix, but in the method used to create it. Do not misunderstand me: Matt is very courageous to step into the light and launch the debate. However, IMHO, there are several points I would like to raise. Note that even though Matt’s work was the spark for this article, the same can be said of every matrix whose aim is to rank frameworks.

Numbers

The matrix uses plain and cold numbers to compute the ranks. As such, there’s a scientific feeling to the results. But it’s only that, a feeling, because each and every grade can be disputed. How do you assign them? Let’s take the multi-language criterion: it seems the maximum grade (1) is given when the framework supports Java, Grails and Scala. From what I understand, Struts is available in JAR format, which can be called from any JVM language. So why the 0.5 for Struts?

On the other hand, I personally would assign brand-new frameworks (like Play and Lift) a 0 for the ‘degree of risk’ criterion. I would also give a flat 0 to JSF 2. It all depends on your vision.

Perimeter

This one is short: what’s the difference between the ‘Developer availability’ and the ‘Job trends’ criteria? The former is the snapshot and the latter the trend? Then why don’t the ‘Plugins/addons’ or ‘Documentation’ criteria get the same treatment?

Criterion weight

Why in God’s name are all criteria assigned the same weight? What if I don’t care whether my application can easily be localized? What if I don’t have to support mobile? What if scalability is not an issue? Weighting the criteria would give entirely different results. But then the problem lies in assigning the right weights, and we’re back to point 1.

Context, context and more context

My previous article advised thinking in contexts. This stays true: if you’re located in Finland, I bet ‘Job trends’ or ‘Developer availability’ shouldn’t be a concern for managers wanting to start a Vaadin project. In contrast, in Switzerland, I don’t know many people mastering (or even knowing anything about) JSF 2 or Spring MVC.

Requirements

I don’t think anyone should choose a single framework and be done with it. Think about it: if you choose Flex, you won’t be able to run on the iPhone, whereas if you choose a traditional approach, you won’t be able to run your application offline. Different requirements mean you should have a typology of possible use-cases and a framework ready for each one.

My own experience

I was confronted with the same task when we had to choose a JavaScript framework. We wrote a piece on the candidate frameworks, listing the perceived pros and cons of each, but voluntarily did not rank them. IMHO, this is more than enough to let you choose the right application framework, depending on your requirements and your context.

Conclusion

On the persistence layer, Hibernate and EclipseLink are the leaders. For Dependency Injection, Spring is the de facto standard. For the presentation layer, the problem is not a lack of choices but the plethora of alternatives you’ve got, many of which have strong pros and cons. While waiting for the perfect framework that will solve each and every one of our problems, we should settle for the one best matched to the requirements and context at hand.


Managing web sessions

March 8th, 2010

In the previous article, I set up a cluster of 2 Tomcat instances in order to achieve load-balancing. It also offered failover capability. However, when using this feature, the user session was lost when changing nodes. In this article, I will show you how this side-effect can be avoided.

Reminder: the HTTP protocol is inherently disconnected (as opposed to FTP, which is connected). In HTTP, the client sends a request to a server, gets its response, and that’s the end. The server cannot natively associate a request with a previous request made by the same client. In order to use HTTP for application purposes, we need to group such requests: the session is the label for this grouping feature.

This is done through a token, passed to the client on the first request. The token is passed in a cookie if possible, or appended to the URL if not. Interestingly enough, this is the same for PHP (token named PHPSESSID) as for JEE (token named JSESSIONID). In both cases, we use a stateless protocol and tweak it so that it appears stateful. This pseudo-statefulness makes application-level features possible, such as authentication/authorization or a shopping cart.
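
To make the mechanism more concrete, here is a minimal servlet sketch (class and attribute names are mine, not from any framework): the container issues the JSESSIONID token on the first call to getSession() and transparently maps subsequent requests bearing the same token to the same HttpSession.

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class CartServlet extends HttpServlet {

    private static final long serialVersionUID = 1L;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {

        // The first call creates the session and sends the JSESSIONID token to the client
        HttpSession session = request.getSession(true);

        @SuppressWarnings("unchecked")
        List<String> cart = (List<String>) session.getAttribute("cart");

        if (cart == null) {

            cart = new ArrayList<String>();
            session.setAttribute("cart", cart);
        }

        // Subsequent requests carrying the same token land on the same HttpSession
        cart.add(request.getParameter("article"));

        response.getWriter().write("Articles in cart: " + cart.size());
    }
}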

Now, let’s take a real use case. I’m browsing an online shop and have already put some articles in my cart. When I finally decide to go to the payment page, I find myself with an empty basket! What happened? Unbeknownst to me, the node which hosted my session crashed and I was transparently rerouted to a working node, without my session ID, and thus without the content of my cart.

Such a thing cannot be allowed to happen in real life, since such a shop would probably be put out of business if it happened too often. There are basically 3 strategies to adopt in order to avoid such a loss.

Sessions are evil

From time to time, I stumble upon articles blaming sessions and labeling them as evil. While this is not a universal truth, using sessions in a bad way can indeed have negative side-effects.

The most representative bad usage of sessions is putting everything in them. I’ve seen lazy developers put collections in the session in order to manage paging: you execute a statement once, put the result in the session, and manage paging on the sessionized result. Since a collection’s size is not constrained, such use does not scale well as the number of users increases. In most cases, everything goes fine in development, but you’re soon overwhelmed by unusual response times or even OutOfMemoryError in production.
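
As an illustration, here is a minimal sketch of this anti-pattern (class and attribute names are hypothetical): the whole result set lives in the session for the sole purpose of paging.

import java.util.List;

import javax.servlet.http.HttpSession;

public class SessionPagingHelper {

    private static final int PAGE_SIZE = 20;

    /** Stores the whole result set in the session: memory usage grows with every user. */
    public void storeResult(HttpSession session, List<String> allRows) {
        session.setAttribute("allRows", allRows);
    }

    /** Pages on the sessionized result instead of querying the database again. */
    @SuppressWarnings("unchecked")
    public List<String> getPage(HttpSession session, int page) {

        List<String> allRows = (List<String>) session.getAttribute("allRows");

        int from = page * PAGE_SIZE;
        int to = Math.min(from + PAGE_SIZE, allRows.size());

        return allRows.subList(from, to);
    }
}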

If you think sessions are evil, some solutions are:

  • Store data in the database: now your data has to go from the front-end to the back-end to be saved, and then back again to be used. A classic relational database may not be the solution you’re looking for; most high-traffic, low-response-time sites take the NoSQL route, although I don’t know if they use it for session storage purposes
  • Store data on the client side through cookies: your clients need to have a cookie-enabled browser. Furthermore, the data will be sent with every request/response, so don’t overuse this option

The second option has the advantage of freeing you from session ID management. However, for both solutions, your application code needs to implement the storage part.

Besides, nothing stops you from using sessions and using the extension points of your application server to store them in cookies instead of the default behaviour (mostly in memory). I wouldn’t recommend that, though.

Server session replication

Another solution is to embrace sessions (they are a JEE feature, after all) but to use session replication in order to avoid session data loss. Session replication is not a JEE feature: it is a proprietary feature, offered by many (if not all) application servers, that is entirely independent of your code. Your code uses the session, and it is magically replicated by the server across cluster nodes.

There are two constraints common to session replication amongst all servers:

  • Use the <distributable/> tag in web.xml
  • Only put in the session instances of classes that implement java.io.Serializable (see the sketch after this list)
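
For instance, a session attribute satisfying the second constraint could look like this minimal sketch (the Cart class is hypothetical):

import java.io.Serializable;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

/** Hypothetical session attribute: serializable, so the server can replicate it across nodes. */
public class Cart implements Serializable {

    private static final long serialVersionUID = 1L;

    private final List<String> articles = new ArrayList<String>();

    public void add(String article) {
        articles.add(article);
    }

    public List<String> getArticles() {
        return Collections.unmodifiableList(articles);
    }
}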

IMHO, these rules should be enforced on all web applications, whether currently deployed on a cluster or not, since they are not very restrictive. This way, deploying an application on a cluster tends toward being a no-operation.

Strategies available for session replication are application-server dependent. However, they are usually based on the following implementations:

  • In-memory replication: each server stores every server’s session data. When updating, it broadcasts the modified session (or the delta, depending on the strategy available/used) to all nodes. This implementation heavily uses the network and memory
  • Database persistence
  • File persistence: the file system used should be available to all cluster nodes

For our simple example, I will just show you how to use in-memory session replication in Tomcat. The following are the steps one should take in order to do so. Please note that one should first follow what is described in the Tomcat clustering article.

Note: Tomcat 5.5 has the cluster configuration commented out in server.xml; Tomcat 6 does not. Here is the gist of the default clustering configuration:

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager"
         expireSessionsOnShutdown="false"
         useDirtyFlag="true"
         notifyListenersOnReplication="true">

    <Membership className="org.apache.catalina.cluster.mcast.McastService"
                mcastAddr="228.0.0.4"
                mcastPort="45564"
                mcastFrequency="500"
                mcastDropTime="3000"/>

    <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
              tcpListenAddress="auto"
              tcpListenPort="4001"
              tcpSelectorTimeout="100"
              tcpThreadCount="6"/>

    <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
            replicationMode="pooled"
            ackTimeout="15000"
            waitForAck="true"/>

    <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve"
           filter=".*\.gif;.*\.js;.*\.jpg;.*\.png;.*\.htm;.*\.html;.*\.css;.*\.txt;"/>

    <ClusterListener className="org.apache.catalina.cluster.session.ClusterSessionListener"/>
</Cluster>

First, make sure that the mcastAddr and mcastPort attributes of the <Membership> tag are the same on all nodes. This is validated when starting a second node by the following log appearing in the first:

INFO: Replication member added:org.apache.catalina.cluster.mcast.McastMember[tcp
://10.138.97.43:4001,catalina,10.138.97.43,4001, alive=0]

This ensures that all nodes of a cluster are able to communicate with each other. From this point on, considering all other configuration is left at its default, sessions are replicated in memory on all nodes of the cluster. Thus, you don’t need sticky sessions anymore. This is not enough for failover though, since removing a node from the cluster will still lead a new request to be assigned a new session ID, thus preventing access to your previous session data.

In order to also reroute session IDs, you need to specify two additional tags in <Cluster>:

<Valve className="org.apache.catalina.cluster.session.JvmRouteBinderValve" enabled="true"/>
<ClusterListener className="org.apache.catalina.cluster.session.JvmRouteSessionIDBinderListener"/>

The valve redirects requests bearing the previous session ID to another node, while the cluster listener receives session ID change events from the cluster. Now, removing a cluster node is seamless (apart from the latency of the redirection) for clients whose session was hosted on this node.

Last-minute note: the previous setup uses the standard session manager. I was recently made aware of a third-party manager that also handles session cookies when a node fails, thus reducing the configuration hassle. This product is Memcached Session Manager and is based on Memcached. Any feedback on the use of this product is welcome.

Third party session replication

The previous solution has the disadvantage of requiring server-specific configuration. Though it does not impact development, it needs to be done differently for every server type. This could be a burden if you happen to have different server types in your enterprise.

Using third-party products is a remedy to this. Terracotta is such a product: moreover, by providing a set number of Terracotta nodes, you avoid broadcasting your session changes to all server nodes, as in the previous Tomcat replication example.

In the following, the server is the Terracotta replication server and the clients are the Tomcat instances. In order to set up Terracotta, two steps are mandatory:

  • create the configuration for the server. In order to be used, name it tc-config.xml and put it in the bin directory. In our case, this is the gist of it:
    <tc:tc-config xmlns:tc="http://www.terracotta.org/config">
    ...
        <servers>
            <server host="localhost"/>
        </servers>
    ...
        <application>
            <dso>
                <web-applications>
                    <web-application>servlets-examples</web-application>
                </web-applications>
            </dso>
        </application>
    ...
    </tc:tc-config>

    Note: the default installed module is for Tomcat 6. In case you need the Tomcat 5.5 module, you have to launch the Terracotta Integration Module (TIM) management tool and use it to download the correct module. For users behind an Internet proxy, this is made possible by updating tim-get.properties with the following lines:
    org.terracotta.modules.tool.proxyUrl = …
    org.terracotta.modules.tool.proxyAuth = …

  • add the following lines to all client launch scripts (replace the placeholders between angle brackets with your own values):
    TC_INSTALL_DIR=<Terracotta installation directory>
    
    TC_CONFIG_PATH=<Terracotta server host>:<Terracotta server port>
    
    . ${TC_INSTALL_DIR}/bin/dso-env.sh -q
    
    export JAVA_OPTS="$JAVA_OPTS $TC_JAVA_OPTS"

Now we’re back to session replication and failover but the configuration is usable across different application servers.

In order to see what is stored, you can also launch the Terracotta Developer Console.

Conclusion

There are 3 basic strategies to manage session failover across cluster nodes. The first one is not to use sessions at all: it has major consequences on your development time, since your application has to handle storage directly. The second one is to look at the server documentation to see how it is done with that specific server; this ties your session management to a single product. Last but not least, you can use a third-party product. This has the advantage of moving the configuration outside the scope of your specific server, thus letting you move with less hassle from one server to the next while still enjoying the benefits of session failover. Were I a system engineer, this is the solution I would recommend, since it is the most flexible.

To go further:


Decrease your pages load time

March 17th, 2009

1. Generalities

Web applications have several advantages over traditional client-server applications: since business code comes from a single server (forget clustering here), they are always up-to-date. Moreover, deploying a new version is only a matter of minutes.

They have one big drawback, though: since HTML code is sent through the network, responsiveness is generally much lower than that of traditional applications.

The responsiveness of an interactive system describes how quickly it responds to user input (i.e. the rate of communication with the system).

From Wikipedia

Ajax overcomes some of this by loading the entire page at startup and then modifying the pertinent chunks of it. Yet, it is only part of the solution, since the initial loading of the page will be the hardest on the user. Users don’t like waiting for a page to load. As a user, you don’t like waiting. As a good rule of thumb, you should make every effort to achieve an initial page load time of less than 1 second (you can find additional information on load time steps in this very interesting article). This load time doesn’t cover the time for the entire page, style sheets and scripts to finish downloading, but only the time for the user to have pertinent information displayed and to begin interacting with the screen. This is of course very context dependent, but the conclusion is: the less time your page takes to display, the better.

2. Decreasing load time

Yahoo provides 34 best practices to speed up your site. If you use Mozilla Firefox and already use the fantastic FireBug plug-in, you should add the YSlow plugin to it (if not, download FireBug beforehand). YSlow checks the web page you’re browsing with Firefox against 13 of these rules. The important rules for the matter at hand are:

  1. Make JS and CSS external,
  2. Make fewer HTTP requests,
  3. Minify JS (and CSS).

The first point is pretty self-explanatory: instead of putting JS code and CSS information in your HTML, you put both in external files and reference these from your page. No big deal, everybody does it nowadays. The second point works against the first: for each file you have, the browser must make a separate HTTP request. So the best solution is your main HTML plus a single JavaScript file and a single CSS file. The third point is to minify those files, that is, to remove comments, extra spaces and line breaks, so as to make them lighter. Options also include renaming variables to make their names shorter (which can obfuscate the code, deliberately or not).

3. Minification at build time

Among all the tools available, a good one for minifying JS and CSS (starting from v2.0) is the YUI Compressor. It is developed in Java and can be run from the command line, so the minification can take place during the build process. The following Ant target (paths and property names are project-specific) executes the minifying process on simple.js, overwriting it:

<target name="minify">
    <java jar="${yuicompressor.jar}" fork="true">
        <arg value="-o"/>
        <arg value="${web.dir}/script/simple.js"/>
        <arg value="${web.dir}/script/simple.js"/>
    </java>
</target>

The main drawback of this solution is that you lose your original script in favor of the minified one. The YUI Compressor lets you output the minified script to a new file, hence the following Ant target (again with project-specific paths):

<target name="minify">
    <java jar="${yuicompressor.jar}" fork="true">
        <arg value="-o"/>
        <arg value="${build.dir}/script/simple-min.js"/>
        <arg value="${web.dir}/script/simple.js"/>
    </java>
</target>

This makes the process slightly better. Yet, I cannot test the minified script in my favourite IDE because the script is only correct in the WAR creation scope, not in the deploy-in-IDE scope.

4. Minification at runtime

In order to avoid this drawback, we can make the minification a runtime process. To do so, we create a filter mapped to the .js extension:

package ch.frankel.blog.jawr.web;

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.io.Reader;
import java.io.StringWriter;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;

import com.yahoo.platform.yui.compressor.JavaScriptCompressor;

/**
 * Filter that minifies JavaScript files. Can be used with CSS files with less configuration.
 *
 * @author Nicolas Fränkel
 * @since 17 mars 2009
 */
public class YuiCompressorFilter implements Filter {

    /** Configuration object. */
    private FilterConfig config;

    /** Column after which a line break is inserted (negative value for none). */
    private int linebreak;

    /** Munge. */
    private boolean munge;

    /** Verbose. */
    private boolean verbose;

    /** Should all semicolons be kept. */
    private boolean preserveAllSemiColons;

    /** Should optimization be disabled. */
    private boolean disableOptimizations;

    /**
     * This implementation does nothing.
     *
     * @see javax.servlet.Filter#destroy()
     */
    @Override
    public void destroy() {}

    /**
     * Compress JS and CSS output.
     *
     * @see javax.servlet.Filter#doFilter(javax.servlet.ServletRequest,
     *      javax.servlet.ServletResponse, javax.servlet.FilterChain)
     */
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException,
	    ServletException {
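	// chain.doFilter() is never called: the filter itself writes the minified
	// script as the whole response body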

	StringWriter stringWriter = new StringWriter();

	final PrintWriter printWriter = new PrintWriter(stringWriter);

	HttpServletRequest httpRequest = (HttpServletRequest) request;

	String path = httpRequest.getServletPath();

	InputStream stream = config.getServletContext().getResourceAsStream(path);

	Reader source = new InputStreamReader(stream);

	try {

	    // BasicErrorReporter is an ErrorReporter implementation (not shown in this listing)
	    JavaScriptCompressor compressor = new JavaScriptCompressor(source, new BasicErrorReporter());

	    compressor.compress(printWriter, linebreak, munge, verbose, preserveAllSemiColons, disableOptimizations);

	} finally {

	    // Close the underlying stream only once the compressor has read it
	    stream.close();
	}

	response.getWriter().write(stringWriter.getBuffer().toString());

	printWriter.close();
    }

    /**
     * Reads filter parameters.
     *
     * @see javax.servlet.Filter#init(javax.servlet.FilterConfig)
     */
    @Override
    public void init(FilterConfig config) throws ServletException {

	this.config = config;

	String linebreakString = config.getInitParameter("linebreak");
	String mungeString = config.getInitParameter("munge");
	String verboseString = config.getInitParameter("verbose");
	String preserveAllSemiColonsString = config.getInitParameter("preserveAllSemiColons");
	String disableOptimizationsString = config.getInitParameter("disableOptimizations");

	munge = Boolean.parseBoolean(mungeString);
	verbose = Boolean.parseBoolean(verboseString);
	preserveAllSemiColons = Boolean.parseBoolean(preserveAllSemiColonsString);
	disableOptimizations = Boolean.parseBoolean(disableOptimizationsString);

	try {

	    if (linebreakString != null) {

		linebreak = Integer.parseInt(linebreakString);
	    }

	} catch (NumberFormatException e) {

	    throw new ServletException(e);
	}
    }
}

Now our compressor works both in and out of our favorite IDE. The drawback is the performance lost by applying the filter each time the resource is requested. In order to lessen this cost, we could:

  • implement a cache that maps paths to minified scripts (as strings) in memory, as sketched below,
  • rely on the browser cache by using HTTP headers related to expiration values.
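
Here is a minimal sketch of the first option (the minify() method is a placeholder for the compression code shown above): each script is compressed once, then served from memory.

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

/** Hypothetical cache mapping servlet paths to their minified content. */
public class MinifiedScriptCache {

    private final ConcurrentMap<String, String> cache = new ConcurrentHashMap<String, String>();

    /** Compresses the script on first access only, then returns the cached result. */
    public String get(String path) throws IOException {

        String minified = cache.get(path);

        if (minified == null) {

            minified = minify(path);

            cache.putIfAbsent(path, minified);
        }

        return minified;
    }

    /** Placeholder: delegate to the YUI Compressor code of YuiCompressorFilter.doFilter(). */
    private String minify(String path) throws IOException {
        throw new UnsupportedOperationException("Plug the compression code here");
    }
}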

Still, I find this not very satisfying. First of all, the YUI Compressor was not meant to be used in this fashion: this shows in the lack of documentation, JavaDoc or otherwise, regarding the individual use of its methods. Then, the less I code, the better I feel; in this case, I had to code a whole filter, complete with initialization parameters. Last but not least, do you remember the 3 rules at the top of the article? We addressed only the 3rd point, minifying the script, but in no way did we make fewer HTTP requests by using the YUI Compressor: if we had 15 different source scripts, we would still have 15 minified scripts at the end, and thus need 15 HTTP requests to get them all.

5. Jawr

Jawr is the solution to our problem: it lets you keep different scripts during development (third-party scripts, enterprise framework scripts, application scripts, etc.) and bundles them into one big script file at application startup. The process is completely transparent to the developer and is lightweight because all the work is done at application startup.

Additional features include:

  • Bundling of any JavaScript or CSS file whatever the chosen directory structure,
  • Modifying those files dynamically, e.g. with ResourceBundle messages,
  • JavaScript and CSS minification (oh, what a surprise!) that can be done with either YUI Compressor or JSMin,
  • Caching enforcement through modification of the sent HTTP headers,
  • GZip support for compatible browsers.

In order to do so, Jawr uses 3 kinds of components:

  • a servlet to serve JavaScript and CSS content,
  • Jawr taglibs that reference the JS and CSS files,
  • a configuration file in Java properties format.

The configuration in the web deployment descriptor will look something like this:


<servlet>
    <servlet-name>JawrServlet</servlet-name>
    <servlet-class>net.jawr.web.servlet.JawrServlet</servlet-class>
    <init-param>
        <param-name>configLocation</param-name>
        <param-value>/jawr.properties</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>JawrServlet</servlet-name>
    <url-pattern>*.js</url-pattern>
</servlet-mapping>

An example properties file will look like this:

#Turning on compression when possible
jawr.gzip.on=true
#Using YUI compressor
jawr.js.bundle.factory.bundlepostprocessors=YUI
#Bundling into all.js
jawr.js.bundle.all.id=/script/all.js
#Bundled scripts
jawr.js.bundle.all.mappings=/script/simple.js, /script/sub/b.js, /script/sub/subsub/a.js

Then, you will use it like this in your page:

<%@ taglib prefix="jawr" uri="http://jawr.net/tags"%>

<html>
    <head>
        <jawr:script src="/script/all.js"/>
    </head>
    <body>
    </body>
</html>

You will find there a Jawr example project made with Eclipse along with the previous YUI Compressor example. This example shows you the main uses of Jawr, mainly:

  • bundling JavaScript files,
  • minifying them,
  • compressing the bundled file (if your browser supports it).

As an example, it will only output two JavaScript alert boxes, but it will give you the framework to build upon if you need to go further.

6. Conclusion

Minifying external files is not the only thing to think of in order to decrease your web pages’ load time. You should also decrease the number of these files as much as possible. Jawr is an open-source component that does both elegantly: I hope this article convinced you that Jawr is the right tool for the job.


(Ooops) I did it again

May 26th, 2008

I’m sorry (not really, in fact) about the title. I just got the Sun Certified Web Component Developer (CX-055-083) certification with a score of 91%. I’m happy. Life is good. Now I’m reaching for the next step, the Architect certification. Wish me luck!
