
Devoxx France 2013 – Day 3

March 30th, 2013

Classpath isn’t dead… yet by Alexis Hassler

Classpath is dead!
Mark Reinhold

What is the classpath anyway? In any code, there are basically two kinds of classes: those coming from the JRE, and those that do not (either because they are your own custom classes or because they come from third-party libraries). The classpath can be set either with the -cp argument when launching a Java class directly, or through the Class-Path entry of a JAR's embedded MANIFEST.MF.

A classloader is a class itself. It can load resources and classes. Each class knows its classloader, that is, which classloader has loaded the class inside the JVM (e.g. sun.misc.Launcher$AppClassLoader). The classloader of JRE classes is not a Java type, so the returned classloader is null for them. Classloaders are organized into a parent-child hierarchy, with delegation from bottom to top if a class is not found in the current classloader.
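
To make this concrete, a minimal sketch (plain Java, not from the talk) showing both the null classloader of JRE classes and the bottom-to-top hierarchy:

public class ClassLoaderDemo {

    public static void main(String[] args) {
        // A JRE class is loaded by the bootstrap classloader, which is not a Java type: null
        System.out.println(String.class.getClassLoader());
        // An application class is loaded by the application classloader...
        ClassLoader cl = ClassLoaderDemo.class.getClassLoader();
        // ...and we can walk up the parent-child hierarchy used for delegation
        while (cl != null) {
            System.out.println(cl); // e.g. sun.misc.Launcher$AppClassLoader, then the ext classloader
            cl = cl.getParent();    // the chain ends at the bootstrap classloader, i.e. null
        }
    }
}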

The bootstrap classpath can be respectively replaced, appended to or prepended to with -Xbootclasspath, -Xbootclasspath/a and -Xbootclasspath/p. This is a great way to override standard classes (such as String or Integer); it is also a major security hole. Use at your own risk! The endorsed directory is a way to override some APIs with a more recent version. This is the case with JAXB, for example.

ClassCastException usually comes from the same class being loaded by two different classloaders (or more…), because classes are not identified inside the JVM by the class alone, but by the tuple {classloader, class}.

Custom classloaders can be developed and then set into the provided hierarchy. This is generally done in application servers or tools such as JRebel. In Tomcat, each webapp has its own classloader, and there's a parent unified classloader for Tomcat (which has the system classloader as its parent). Nothing prevents you from developing your own: for example, consider a MavenRepositoryClassLoader that loads JARs from your local Maven repository. You just have to extend URLClassLoader.
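
A minimal sketch of what such a MavenRepositoryClassLoader could look like, assuming the default local repository layout (the class and its addArtifact() helper are illustrative, not an existing API):

import java.io.File;
import java.net.MalformedURLException;
import java.net.URL;
import java.net.URLClassLoader;

public class MavenRepositoryClassLoader extends URLClassLoader {

    // Default location of the local Maven repository
    private final File repositoryRoot =
            new File(System.getProperty("user.home"), ".m2/repository");

    public MavenRepositoryClassLoader(ClassLoader parent) {
        super(new URL[0], parent);
    }

    // Resolve groupId:artifactId:version to the standard repository layout and register the JAR
    public void addArtifact(String groupId, String artifactId, String version)
            throws MalformedURLException {
        File jar = new File(repositoryRoot,
                groupId.replace('.', '/') + '/' + artifactId + '/' + version
                        + '/' + artifactId + '-' + version + ".jar");
        addURL(jar.toURI().toURL());
    }
}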

JAR hell comes from dependency management, or more precisely the lack thereof. Dependencies are tree-like at development time but completely flat at runtime, i.e. on the classpath, so conflicts may occur if no care is taken to eliminate them beforehand.

One of the problems is JAR visibility: you either have all classes available if the JAR is present, or none if it is not. The granularity is at the JAR level, whereas it would be better to have finer-grained visibility. Several solutions are available:

  • OSGi has had an answer to these problems since 1999. With OSGi, JARs become bundles, with additional metadata set in the JAR manifest. These metadata describe visibility per package. From a pure dependency management point of view, OSGi comes with additional features (services and lifecycle) that seem overkill [I personally do not agree].
  • Project Jigsaw also provides this modularity (as well as JRE classes modularity) in the form of modules. Unfortunately, it has been delayed since Java 7, and will not be included in Java 8. Better to forget it for the moment.
  • JBoss Modules is a JBoss AS 7 subproject, inspired by Jigsaw and based on JBoss OSGi. It is already available and comes with much lower complexity than OSGi. Configuration is made through a module.xml description file. This system is included in JBoss AS 7. On the negative side, Modules can be used either with JBoss or on its own, which prevents using it inside Tomcat. An ongoing GitHub proof-of-concept achieves it though, by embedding the module JARs in the deployed webapp and overriding Tomcat's webapp classloader.
    Several problems still exist:

    • Artefacts are not modules
    • Lack of documentation

Animate your HTML5 pages with CSS3, SVG, Canvas & WebGL by Martin Gorner

Within the HTML5 specification alone, there are 4 ways to add fun animations to your pages.

CSS 3
CSS 3 transitions come through the transition property; they are triggered by user events. Animations are achieved through the animation properties. Notice the plural: you define keyframes and the browser computes the intermediate ones. 2D transformations – the transform property – include rotate, scale, skew, translate and matrix. As an advice, timing can be overridden, but the default one is quite good. CSS 3 also provides 3D transformations. Those are the same as above, but with X, Y or Z appended to the value name to specify the axis. The biggest flaw of CSS 3 animations is that they lack drawing features.
SVG + SMIL
SVG not only provides vector drawing features but also out-of-the-box animation features. SVG is described in XML, and SVG animations are much more powerful than CSS 3 ones, but also more complex: you'd better use a tool such as Inkscape to generate them. There are different ways to animate SVG, all through sub-tags: animate, animateTransform and animateMotion. Whereas CSS 3 timing is acceptable out-of-the-box, the default in SVG is linear (which is not pleasant to the eye); SVG offers timing configuration through the keySplines attribute of the previous tags. Both CSS 3 and SVG have a big limitation: animations are set in stone and cannot respond to external events, such as user input. When those are a requirement, the following two standards apply.
Canvas + JavaScript
From this point on, programmatic (as opposed to descriptive) configuration is available. Beware that JavaScript animations come at a cost: on mobile devices, they will drain the battery. As such, know about the methods (such as requestAnimationFrame) that let the browser stop animations when the page is not displayed.
WebGL + THREE.js
WebGL gives access to an OpenGL-like API (read: 3D), but it is very low-level. THREE.js comes with a full-blown high-level API. Better yet, you can import SketchUp mesh models into THREE.js. In all cases, do not forget to apply the same optimization as with the 2D canvas and stop animations when the canvas is not visible.

Tip: in order not to care about vendor prefixes, prefix.js lets us keep the original CSS and adds the prefixes at runtime. Otherwise, use LESS / SASS. Slides are readily available online, with the associated labs.
[I remember using the same 3D techniques 15 years ago when I learnt raytracing. That’s awesome!]

The Spring update by Josh Long

[Talk is shown in code snippets, rendering full-blown notes mostly moot. It is dedicated to new features of the latest Spring platform versions]

Features by version:
3.1
  • JavaConfig equivalents of XML
  • Profiles
  • Cache abstraction, with CacheManager and Cache
  • Newer backend cache adapters (Hazelcast, memcached, GemFire, etc.) in addition to EhCache
  • Servlet 3.0 support
  • Spring framework code available on GitHub
3.2
  • Gradle-based builds [Because of incompatible versions support. IMHO, this is one of the few use-case for using Gradle that I can agree with]
  • Async MVC processing through Callable (threads are managed by Spring), DeferredResult and AsyncTask (see the sketch after this list)
  • Content negotiation strategies
  • MVC Test framework server
4
  • Groovy-configuration support. Note that all available configuration ways (XML, JavaConfig, etc.) and their combinations have no impact at runtime
  • Java 8 closures support
  • JSR 310 (Date and Time API) support
  • No more need to set @PathVariable‘s value explicitly: the parameter name is obtained through the built-in JVM mechanism
  • Various support for Java EE 7
  • Backward compatibility will still include Java 5
  • Annotation-based JMS endpoints
  • WebSocket aka “server push” support
  • Web resources caching
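
To illustrate the async MVC processing mentioned under 3.2, here is a hedged sketch of a controller returning a Callable (the controller class and URL are invented):

import java.util.concurrent.Callable;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ReportController {

    // The servlet thread is released immediately; Spring runs the Callable
    // on one of its managed threads and completes the response later.
    @RequestMapping("/report")
    @ResponseBody
    public Callable<String> report() {
        return new Callable<String>() {
            @Override
            public String call() throws Exception {
                Thread.sleep(2000); // simulate a slow computation
                return "report content";
            }
        };
    }
}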

Bean validation 1.1: we’re not in Care Bears land anymore by Emmanuel Bernard

All that is written here is not set in stone: it has to be approved first. Bean Validation comes bundled with Java EE 6+, but it can also be used standalone.

Before Bean Validation, validations were executed in each different layer (client, application layers, database). This led to duplication as well as inconsistencies. The Bean Validation motto is something along the lines of:

Constrain once, run anywhere

1.0 has been released with Java EE 6. It is fully integrated with other stacks including JPA, JSF (& GWT, Wicket, Tapestry) and CDI (& Spring).

Declaring a constraint is as simple as adding a specific validation annotation. Validation can be cascaded, not only on the bean itself but also on embedded beans. Also, validation may wrap more than one property, to check that two different properties are consistent with one another. Validation can be run on the whole bean, but also on defined subsets of it, called groups. Groups are created through interfaces.
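
A hedged sketch of what such declarations look like (bean, fields and group are invented for illustration):

import javax.validation.Valid;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

// A validation group is just a plain interface
interface OnCreate {}

class Address {
    @NotNull
    String city;
}

class Customer {
    @NotNull
    @Size(min = 1, max = 50)
    String name;

    // Only checked when the OnCreate group is explicitly validated
    @NotNull(groups = OnCreate.class)
    String email;

    // Cascades validation to the embedded bean
    @Valid
    Address address;
}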

Many annotations come out-of-the-box, but you can also define your own. This is achieved by putting the @Constraint annotation on a custom annotation; it references the list of validators to use when validating. Those validators must implement the ConstraintValidator interface.
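
A minimal sketch of a custom constraint (the @LowerCase annotation and its validator are invented for illustration):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;

@Constraint(validatedBy = LowerCaseValidator.class)
@Target({ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@interface LowerCase {
    String message() default "must be lower case";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

class LowerCaseValidator implements ConstraintValidator<LowerCase, String> {

    @Override
    public void initialize(LowerCase constraint) {
        // no configuration needed for this constraint
    }

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        // null is considered valid: combine with @NotNull if needed
        return value == null || value.equals(value.toLowerCase());
    }
}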

1.1 will be included in Java EE 7. The most important thing to remember is that it is 100% open. Everything is available on GitHub, go fork it.

Now, containers are in complete control of the creation of Bean Validation components, so that they are natively compatible with CDI. Also, other DI containers, such as Spring, may plug in their own SPI implementation.

The greatest feature of 1.1 is that not only properties can be validated, but also method parameters and method return values. Constructors being specialized methods, it also applies to them. It is achieved internally with interceptors; however, this requires an interception stack – either CDI, Spring or any AOP framework – and comes with the associated limitations, such as proxies. This enables declarative Contract-Oriented Programming, with its pre- and post-conditions.
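
A hedged sketch of what this looks like in 1.1 (the service and its constraints are invented; the interception stack performs the actual checks around the call):

import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

public class OrderService {

    // Pre-conditions on the parameters, post-condition on the return value
    @NotNull
    public String createOrder(@NotNull String customerId, @Min(1) int quantity) {
        return "order-" + customerId + "-x" + quantity;
    }
}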

Conclusion

Devoxx France 2013 has been a huge success, thanks to the organization team. Devoxx is not only tech talks, it is also a time to meet new people, exchange ideas and see old friends.

See you next year, or at Devoxx 2013!

Thanks to my employer – hybris, who helped me attend this great event!


Devoxx France 2013 – Day 2

March 29th, 2013

Object and Functions, conflict without a cause by Martin Odersky

The aim of Scala is to merge features of object-oriented and functional programming. The first popular OOP language was Simula in 1967, aimed at simulations; the second one was Smalltalk, for GUIs. The reason OOP became popular is the things it let you do, not its individual features (like encapsulation). Before OOP, the data structure was fixed and well known, with an unbounded number of operations, while with OOP, the number of operations is fixed but the number of implementations is unbounded. Though it is possible for procedural languages (such as C) to be applied to the fields of simulation & GUIs, it is too cumbersome to develop with them in real-life projects.

FP has advantages over OOP, but none of them is enough to lead to mainstream adoption (remember it has been around for 50 years). What can spark this adoption is the complexity of making OOP applications ready for multicore and cloud computing. Requirements in these areas include being:

  • parallel
  • reactive
  • distributed

In each of these, mutable state is a huge liability. Shared mutable state and concurrent threads lead to non-determinism. To avoid this, just avoid mutable state :-) or at least reduce it.

The essence of FP is to concentrate on transformation of immutable values instead of stepwise updates of a single mutable data structure.

In Scala, the .par member turns a collection into a parallel collection. But then, you have to go functional and forego any side effects. With Future and Promise, non-blocking code is also possible but is hard to write (and read!), while Scala's for-expression syntax is an improvement. It also makes parallel calls very easy.

Objects are not to be put away: in fact, they are not about imperative programming, they are about modularization. There are no module systems (yet) that are on par with OOP. It feels like using FP & OOP together is like sitting between two chairs. Bridging the gap requires letting go of some luggage first.

Objects are characterized by state, identity and behavior
Grady Booch

It would be better to focus on behavior…

Ease development of offline applications in Java with GWT by Arnaud Tournier

HTML5 opens new capabilities that were previously the domain of native applications (local storage, etc.). However, it is not stable and mature yet: know that this will have a direct impact on development costs.

GWT is a tool of choice for developing complex Java applications leveraging HTML5 features. A module called “elemental” fills in the missing features. Moreover, the JSNI API makes it possible to use JavaScript directly. In GWT, one develops in Java and a compiler transforms the Java code into JavaScript instead of bytecode. The generated code is compatible with most modern browsers.

Mandatory features for offline include the application cache and local storage. The application cache is a way for browsers to store files locally for use when offline. It is based on a manifest file, which has to be referenced by the desired HTML pages (through the manifest attribute of the <html> tag). A cache management API is provided to listen to cache-related events. GWT already manages resources: we only need to provide a linker class to generate the manifest file that includes the wanted resources. Integration of the cache API is achieved through usual JSNI usage [the necessary code is not user-friendly… in fact, it is quite gory].

Local storage is a feature that stores user data on the client side. Several standards are available: WebSQL, IndexedDB, LocalStorage, etc. Unfortunately, only the latter is truly cross-browser; it is based on a key-value map of strings. Unlike the application cache, there's an out-of-the-box GWT wrapper around local storage. Stored objects being strings and everything running client-side, JSON is the serialization mechanism of choice. Beware that the standard mandates a maximum of 5MB of storage (while some browsers provide more).

We want:

  1. Offline authentication
  2. A local database to be able to run offline
  3. JPA features for developers
  4. Transparent data sync for users when coming online again

In regard to offline authentication, it is not a real problem: since local storage is not secured, we just have to store the password hash. Get a SHA-1 Java library and GWT takes care of the rest.

SQL capability is a bigger issue; there are many incomplete solutions. sql.js is a JavaScript SQLite port that provides limited SQL capabilities. As for integration, back to JSNI again ([* sigh *]). You will be responsible for developing a high-level API to ease its usage, as you have to talk either to a true JPA backend or to local storage. Note that JBoss Errai is a proposed JPA implementation to resolve this (unfortunately, it is not ready for production use – yet).

State synchronization between client and server is the final problem. It can be separated into 3 levels of ascending complexity: read-only, read-add and read-add-delete-update. For now, sync has to be done manually; only the process itself is generic. In the last case, there are no rules, only different conflict resolution strategies. What is mandatory is to have causality relation data (see Lamport timestamps).

The conclusion is that developing offline applications is a real burden today, with a large mismatch between potential HTML5 capabilities and existing tools.

Comparing JVM web frameworks by Matt Raible

Starting with the history of JVM web frameworks: it all began with PHP 1.0 in 1995. In the J2EE world, Struts replaced proprietary frameworks in 2001.

Are there too many Java web frameworks? Consider Vaadin, MyFaces, Struts2, Wicket, Play!, Stripes, Tapestry, RichFaces, Spring MVC, Rails, Sling, Grails, Flex, PrimeFaces, Lift, etc.

And now, with the SOFEA architecture, there are again many frameworks on the client side: Backbone.js, AngularJS, HTML5, etc. But “traditional” frameworks are still relevant because of client-side development limitations, including development speed and performance issues.

In order to make a relevant decision when faced with a choice, first set your goals and then evaluate each option against these goals; pick your best option, then re-set your goals. Maximizers try to make the best possible choice, satisficers try to find the first suitable choice. Note that the former are generally more unhappy than the latter.

Here is a proposed typology (non-exhaustive):

Pure web
  • Apache: Wicket, Struts, Spring, Tapestry, Click
  • GWT: SmartGWT, GXT, Vaadin, Errai
  • JSF: Mojarra (RI), MyFaces, Tomahawk, IceFaces, RichFaces, PrimeFaces
  • Miscellaneous: Spring MVC, Stripes, RIFE, ZK
Full stack
  • Rails, Grails, Play!, Lift, Spring Roo, Seam
SOFEA
  • API: RESTEasy, Jersey, CXF, vert.x, Dropwizard
  • JavaScript MVC: Backbone.js, Batman.js, JavaScript MVC, Ember.js, Sprout Core, Knockout.js, AngularJS
A matrix with fine-grained criteria over this typology is fine, but you probably have to create your own, with your own weight for each criterion. There are many ways to tweak the comparison: you can assign finer-grained grades, compare performance, lines of code, etc. Most of the time, you are influenced by your peers and by people who have used such and such framework. Interestingly enough, performance-oriented tests show that most of the time, bottlenecks appear in the database.

  • For full stack, choose by language
  • For pure web, consider Spring MVC, Struts 2, Vaadin, Wicket, Tapestry or PrimeFaces. Then eliminate further by books, job trends, available skills (i.e. LinkedIn), etc.

Fun facts: a great thing going for Grails and Spring MVC is backward compatibility. On the opposite side, Play! is the first framework whose community revived a legacy version.

Conclusion: web frameworks are not the productivity bottleneck (administrative tasks are, as shown in the JRebel productivity report); make your own opinion, and be neither a maximizer (things change too quickly) nor a picker.

Puzzlers and curiosities by Guillaume Tardif & Eric Lefevre-Ardant

[Interesting presentation on self-reference, in art and code. Impossible to summarize in written form! Wait for the Parleys video…]

Exotic data structures, beyond ArrayList, HashMap & HashSet by Sam Bessalah

If all you have is a hammer, everything looks like a nail

In some cases, problems can be solved more easily by using the right data structure instead of the one we already know. These 4 “exotic” data structures are worth knowing:

  1. Skip lists are ordered data sets. The benefit of skip lists over array lists is that every operation (insertion, removal, contains, retrieval, ranges) is in O(log n). This is achieved by adding extra levels acting as “express” lines. Within the JVM, it is even faster thanks to region localization. The structure is non-blocking (thread-safe) and included since Java 6 as ConcurrentSkipListMap and ConcurrentSkipListSet. The former is ideal for cache implementations.
  2. Tries are ordered trees. Whereas traditional trees have a complexity of O(log n), where n is the tree depth, tries have constant-time complexity whatever the depth. A specialized kind of trie is the Hash Array Mapped Trie (HAMT), a functional data structure for fast computations. Scala offers CTrie, a concurrent trie structure.
  3. Bloom filters are probabilistic data structures, designed to return very fast whether an element belongs to a data structure. There are no false negatives: they accurately report when an element does not belong to the structure. On the contrary, false positives are possible: the filter may return true when it is not the case. In order to reduce the probability of false positives, one can choose an optimal hash function (cryptographic functions are best suited), in order to avoid collisions between hashed values. To go further, one can add hash functions; the trade-off is memory consumption (see the sketch after this list).
     Because of collisions, you cannot remove elements from a Bloom filter. To achieve removal, you can enhance Bloom filters with counting, where you also store the number of elements at a specific location.
  4. Count-Min Sketches are advanced Bloom filters, designed to work best with highly uncorrelated, unstructured data. Heavy-hitter detection is based on Count-Min Sketches.
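
As promised in item 3, a toy Bloom filter to make the idea concrete. This is an illustrative sketch, not the speaker's code; the seeded hash and the sizing are arbitrary:

import java.util.BitSet;

public class BloomFilter {

    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public BloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Simulate k hash functions by seeding a simple polynomial hash
    private int hash(String value, int seed) {
        int h = seed;
        for (int i = 0; i < value.length(); i++) {
            h = 31 * h + value.charAt(i);
        }
        return Math.floorMod(h, size);
    }

    public void add(String value) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(hash(value, i + 1));
        }
    }

    // false means definitely absent (no false negatives);
    // true means probably present (false positives are possible)
    public boolean mightContain(String value) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(hash(value, i + 1))) {
                return false;
            }
        }
        return true;
    }
}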

Devoxx France 2013 – Day 1

March 28th, 2013

Rejoice people, it’s March, time for Devoxx France 2013! Here are some notes I took during the event.

Java EE 7 hands-on lab by David Delabasse & Laurent Ruaud

A hands-on lab by Oracle for good old-fashioned developers who want to check some Java EE 7 features by themselves.

This one, you can do at home. Just go to this page and follow the instructions. Note you will need at least GlassFish 4 beta 80 and the latest NetBeans (7.3).

You’d better reserve a day if you want to go beyond copy-paste and really read and understand what you’re doing. Besides, you have to have some JSF knowledge if anything goes wrong (or have a guru on call).

Angular JS by Thierry Chatel

The speaker comes from a Java developer background. He has used Swing in the past and since then, he has searched for binding features: a way to automate data exchange between model and views. Two years ago, he found AngularJS.

AngularJS is a JavaScript framework of more than 40 kLOC, weighing 77 kB minified. The first stable version was released one year ago, codenamed temporal-domination. The application developed by Google with AngularJS is DoubleClick for Publishers. Other examples include OVH’s future management console and the YouTube application on PS3. Its motto is:

HTML enhanced for web apps

What does “HTML enhanced” mean? Is it HTML6? The problem is that HTML was never designed to create applications: it is only meant to display documents and link between them. Most of the time, one-way binding between Model & Template is used to create a view. Misko Hevery’s (AngularJS founder) point of view is that instead of trying to work around this limitation, we’d better add this feature to HTML.

So, the AngularJS philosophy is to compile the View from the Template, and then 2-way bind the View & Model. AngularJS usage is easy as pie:

Your name: <input type="text" ng-model="me">
Hello {{me}}

AngularJS is a JavaScript framework that frees developers from coding too many JavaScript lines.

The framework uses simple concepts:

  • watches around expressions (properties, functions, etc.)
  • dirty checking on events (keyboard, HTTP request, etc.)

Watches are re-evaluated on each dirty check. This means expressions have to be simple (i.e. computation results instead of the computations themselves). The framework is designed to handle up to 2000 simple watches. Keep in mind that standards (as well as user agents) are evolving, and the next ECMAScript version will provide Object.observe() to handle 50 times the current number of watches.

An AngularJS app is as simple as an ng-app attribute on a tag:

<div ng-app="myApp">…</div>

This lets us have as many applications as needed on the same page. AngularJS is able to create single-page applications, with browser navigation (bookmarks, next, previous) automatically handled. There’s no such thing as global state.

AngularJS also provides core concepts like modules, services and dependency injection. There’s no need to inherit from specific classes or interfaces: any object is available for any role. As a consequence, code is easily unit-testable; the preferred tool for this is Karma (ex-Testacular). For end-to-end scenario testing, a dedicated tool is also available, based on the framework, which plays tests in defined browsers. In conclusion, AngularJS is not only a framework but a complete platform with the right level of abstraction, so that the developed code is purely business code.

There are no AngularJS UI components, but many are provided by third-party like AngularUI, AngularStrap, etc.

AngularJS is extremely structuring: it is an opinionated framework and you have to code the AngularJS way. A tutorial is readily available to let you do that, and short videos, each dedicated to a single focused theme, are available online.

Wow, this is the second talk I attend about AngularJS and it looks extremely good! My only complaints are that it follows the trend of pure client-side frameworks and that it is not designed for mobile.

Gradle, 30 minutes to change all by Sébastien Cogneau

In essence, Gradle is a Groovy DSL to automate builds. It is extensible through Java & Groovy plugins. Gradle is based on existing principles: it lets you reuse Ant tasks, it reuses Maven conventions and it is compatible with both Ivy & Maven repositories.

A typical Gradle build file looks like this:

apply plugin: 'jetty'

version = '1.0.0'

repositories {

    mavenCentral()
}

configurations {
    codeCoverage
}

sonarRunner {
    sonarProperties {
        ...
    }
}

dependencies {
    compile 'org.hibernate:hibernate-core:3.3.1.GA'
    codeCoverage 'org.jacoco....'
}

test {
    jvmArgs '...'
}

task wrapper(type:Wrapper) {
    gradleVersion = '1.5-rc3'
}

task hello(type: Exec) {
    description = 'Devoxx 2013 task'
    group = 'devoxx'
    dependsOn wrapper
    executable 'echo'
    args 'Do you have question'
}

Adding plugins adds tasks to the build. For example, by adding jetty, we get jettyStart. Moreover, plugins have dependencies, so you also get tasks from dependent plugins.

Gradle can be integrated with Jenkins, as there is an available Gradle plugin. There are two available options to run Gradle build on Jenkins:

  • either you install Gradle and configure its installation on Jenkins. From this point, you can configure your build to use this specific install
  • or you generate a Gradle wrapper and only configure your build to use this wrapper. There’s no need to install Gradle at all in this case

Gradle’s power also lets you add custom tasks, such as the aforementioned hello task.

The speaker tells us he is using Gradle because it is so flexible. But that’s exactly the reason why I’m more than reluctant to adopt it: I built with Ant for ages, then came to Maven. Now, I’m forced to use Ant again, and it takes so much time to understand a build file compared to a Maven POM.

Space chatons, bleeding-edge HTML5 by Phillipe Antoine & Pierre Gayvallet

This presentation demoed new features brought by HTML5.

It was too quick to assimilate anything, but the demos were really awesome. In particular, one of them used three.js, a 3D rendering library that you should really have a look at. When I think it took raytracing to achieve this 15 years ago…

Vaadin & GWT 2013 Paris Meetup

Key points (because at this time, I was more than a little tired):

  • My presentation on FieldGroup and Converters is available on Slideshare
  • Vaadin versions are supported for 5 years each. Vaadin 6 support will end in 2014
  • May will see the release of a headless version of Vaadin TestBench. Good for automated tests!
  • This Friday, Vaadin 7.1 will be out with server push
  • Remember that Vaadin Ltd also offers Commercial Support (and it comes with a JRebel license!)

Devoxx 2012 – Final day

November 17th, 2012

– Devoxx last day, I only slept 2 hours during the previous night. Need I really say more? –

Clustering your applications with Hazelcast by Talip Ozturk

Hazelcast is an OpenSource product used by many companies (even Apple!).

HashMap is a non-thread-safe key-value implementation. If you need thread safety, you’ll use ConcurrentHashMap. When you need to distribute your map across JVMs, you use Hazelcast.getMap(), but the rest stays the same, since the returned map implements the Map interface.
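
A minimal sketch of the idea, using the static factory of the Hazelcast 2.x era (the map name is invented):

import java.util.Map;

import com.hazelcast.core.Hazelcast;

public class DistributedMapDemo {

    public static void main(String[] args) {
        // Looks like a plain Map, but entries are partitioned across the cluster;
        // starting this class in a second JVM would automatically join the cluster
        Map<String, String> capitals = Hazelcast.getMap("capitals");
        capitals.put("France", "Paris");
        System.out.println(capitals.get("France"));
    }
}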

– a demo is presented during the entire session –
Hazelcast lets you add nodes very easily, basically just by starting another node (it takes care of broadcasting).

Hazelcast alternatives include Terracotta, Infinispan and a bunch of others. However, it has some unique features: for example, it’s lightweight (1.7 MB) without any dependency. The goal is to make distributed computing very easy.

In a Hazelcast cluster, there’s a master, but every node knows the topology of the cluster, so that when the master eventually dies, its responsibilities can be reassigned. Data is backed up on other nodes. Typical Hazelcast topologies include data nodes and lite members, which do not carry data.

The Enterprise edition of Hazelcast is the Community edition plus Elastic Memory configuration and JAAS security. It’s easy to use Hazelcast in tests with the Hazelcast.newHazelcastInstance(config) statement. Besides, an API is available to query the topology. Locking is done either globally on the cluster or on a single key. Finally, messaging is also supported, though not through the JMS API (note that messages aren’t persistent). A whole event-based API is available to listen to Hazelcast-related events.

A limitation of Hazelcast is that objects put into it have to be serializable. Moreover, if an object taken from Hazelcast is modified, you have to put it back so that it’s also updated inside the cluster.

Hazelcast doesn’t persist anything, but it eases the task of doing it on your own through the MapStore and MapLoader interfaces. There are two ways to store: write-behind, which is asynchronous, and write-through, which is synchronous. Reads are read-through, which is basically lazy loading.
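
A sketch of what such a store can look like. The persistence layer is simulated by an in-memory map standing in for a real database, and the method signatures follow the 2.x-era API from memory, so check them against your version:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import com.hazelcast.core.MapStore;

public class CustomerMapStore implements MapStore<String, String> {

    // Stand-in for the real database
    private final Map<String, String> database = new ConcurrentHashMap<String, String>();

    // MapStore part: called on writes, either write-through (sync) or write-behind (async)
    public void store(String key, String value) { database.put(key, value); }

    public void storeAll(Map<String, String> map) { database.putAll(map); }

    public void delete(String key) { database.remove(key); }

    public void deleteAll(Collection<String> keys) {
        for (String key : keys) {
            database.remove(key);
        }
    }

    // MapLoader part: called on cache misses, i.e. read-through lazy loading
    public String load(String key) { return database.get(key); }

    public Map<String, String> loadAll(Collection<String> keys) {
        Map<String, String> result = new HashMap<String, String>();
        for (String key : keys) {
            result.put(key, database.get(key));
        }
        return result;
    }

    public Set<String> loadAllKeys() {
        return database.keySet(); // returning null would disable eager pre-loading
    }
}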

About long GC-pauses:

Killing a node is good, but letting it hang around like a zombie is not

There’s a plugin to support Hibernate second-level cache and it can also be tightly integrated with Spring.

mgwt – GWT goes mobile by Daniel Kurka

The speaker’s experience has taught him that mobile apps are fated to die. In fact, he has used and developed many mobile applications. The problem is that either you’re limited to a too-small number of applications, or their number grows so high that you cannot find them anymore. In the past, Yahoo put web pages into a catalog, while Google only crawled and ranked them. App stores look like crap: they don’t provide a way to search for the applications you’re interested in. Besides, when you’re looking for a public transport schedule, you have to install the application of the specific company. Worse, some sites force you to install the application instead of providing the needed data.

When developing mobile applications, you have to do it for a specific platform. As Java developers, we’re used to “Write Once, Run Anywhere”. Also, there’s already such a universal platform: the browser. PhoneGap tries to resolve the problem by providing HTML and JavaScript as the sole languages. Now, you get “Build Once, Wrap with PhoneGap, Run Anywhere”.
GWT is a framework where Java is compiled into JavaScript instead of bytecode. Better, it’s compiled into optimized JS, and that’s important because mobile access may drain your battery. Finally, there’s GWT PhoneGap integration, so you can write awesome webapps, wrap them with PhoneGap and release them on a store. PhoneGap works with your app, composed of HTML, JS and CSS, and native plugins that let you access mobile device features. Two important things to remember: data passed between app and device is done as strings, and calls are asynchronous. PhoneGap uses W3C standards (when possible); it’s an intermediate that will have to die when the mobile web is finally there.

Yet, PhoneGap doesn’t solve the core “too many apps” problem. GWT’s compiler compiles Java into JavaScript (a single file per browser). Remember that GWT’s code is optimized in ways that would be hard to achieve manually: that’s very important when running on mobiles, due to battery constraints. mgwt is about writing GWT applications aimed specifically at mobiles. In order to create great apps, be wary of performance. There are two areas of performance:

  • Startup performance, a matter of downloading, parsing, executing and rendering
  • Runtime performance, which is really impacted by layout. As a rule, never leave native code; if you do layout in JavaScript, you’re doing just that. CSS3 provides all the means necessary for layout. Likewise, animations built in JavaScript are bad: prefer CSS instead.

Both are taken into account by the GWT compiler. Note that mgwt provides plenty of themes, one for each mobile platform.

– And so ends the Devoxx 2012 edition for me. It was enriching as well as tiring. You can find the recaps of the previous days below. –


Devoxx 2012 – Day 4

November 15th, 2012

Unitils: full stack testing solution for enterprise applications by Thomas de Rycke and Jeroen Horemans

There are different types of tests: unit tests (testing in isolation), integration tests (testing a subsystem) and system tests (testing the whole system). The higher you get in the hierarchy, the longer tests take to execute, so you should have a lot of UT, some IT and just a few system tests.

Unit tests have to be super fast to execute and easy to read and refactor; they also should tell you what went wrong without debugging. On the other hand, you have to write a lot of them. Not writing unit tests will get you faster through the first phase, but when the application grows, it will drastically slow your productivity down. In enterprise settings, you’re better off with UT.

In order to avoid an explosion of testing frameworks, the Unitils testing framework was chosen. Either JUnit or TestNG can be hooked into Unitils. The core itself is small (so as to be fast), the architecture being based on modules.

Let’s take for example a greetings card sending application. – the demoed code is nicely layered, with service and DAO, and uses Spring autowiring and transactions through annotations – Unitils is available from Maven, so adding it to your classpath is just a matter of adding the right dependency to the POM. Unitils provides some annotations:

  • @TestedObject to set on the target object
  • @Mock to inject the different collaborators (using EasyMock)
  • @InjectByType to hint at the injection strategy for Unitils to use
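
A hedged sketch of how these annotations fit together, reusing the greetings-card example (CardService and CardDao are stand-ins for the demoed layers; Unitils package names are quoted from memory):

import org.junit.Test;
import org.junit.runner.RunWith;
import org.unitils.UnitilsJUnit4TestClassRunner;
import org.unitils.easymock.EasyMockUnitils;
import org.unitils.easymock.annotation.Mock;
import org.unitils.inject.annotation.InjectByType;
import org.unitils.inject.annotation.TestedObject;

class CardDao {
    void save(String message) { /* would write to the database */ }
}

class CardService {
    private CardDao dao; // the collaborator, injected by Unitils in the test

    void send(String message) {
        dao.save(message);
    }
}

@RunWith(UnitilsJUnit4TestClassRunner.class)
public class CardServiceTest {

    @TestedObject
    private CardService service;   // the object under test

    @Mock
    @InjectByType
    private CardDao dao;           // EasyMock mock, injected into the service by type

    @Test
    public void sendShouldDelegateToDao() {
        dao.save("Happy birthday!");     // record the expected interaction
        EasyMockUnitils.replay();        // switch all mocks to replay mode
        service.send("Happy birthday!"); // exercise; verification is automatic
    }
}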

Even though testing POJOs (as well as DTOs) is not very cost-effective, a single test class can be created to check for JavaBean conventions.

Unit testing becomes trivial if you learn how to write testable code: good OO design and dependency injection, seams in your code, and avoiding global state (time, for example) at all costs. What about integration and system tests? They tend to get complex, and you’ll need a “magic” abstract superclass. Moreover, when an IT fails, you don’t know whether the code is buggy or the IT itself is. Besides, you’ll probably need to learn other frameworks (JSF comes with its own testing framework, JSFUnit).

As software developers, the same problems come back:

  • Database: is my schema up-to-date? How to manage referential integrity when setting up and tearing down data?
  • Those operations are probably slow
  • Finally, how to deal with external problems?

Unitils provides a solution for each of these problems, through a dedicated module for each. For example, a WebDriver module lets you use Selenium through Unitils, a DBUnit module does the same for DBUnit, and a Mail module creates a fake SMTP server.
– the speakers tell us that with Unitils, there’s no need for a magic abstract superclass anymore: IMHO, it has been replaced by a magic black-box testing framework –
Unitils benefits from a whole set of modules, each covering a single dedicated area: Swing, Batch, File, Spring, Joda Time, etc.

Modern software development anti-patterns by Martijn Verburg and Ben Evans

Here are the anti-patterns covered, each with the diabolical programmer’s take, the voice of reason, and what to do about it:

  • Conference-Driven Delivery. Diabolical: real pros hack code and write their slides minutes before their talks. Voice of reason: PPPPPP. What to do: in order to improve your presentation skills, rehearse in front of the mirror – call it Test-Driven Presentations; begin with a supportive crew, then grow into larger and larger audiences.
  • Mortgage-Driven Development. Diabolical: in order for others not to steal your job, don’t write any documentation; even better, control your source code, i.e. keep the source on a USB key. Voice of reason: don’t succumb to fear, proper communication is key, and developers who communicate have the most success. What to do: if you want one of your ideas to go into source control, you’ll have to communicate about it.
  • Distracted by shiny. Diabolical: always use the latest tech, it’ll put you ahead; consider using the latest version of Eclipse, packed full of plugins. Voice of reason: prototype and evaluate, learn to separate the myth from the reality; as an example, web frameworks are an area where you should tread carefully. What to do: code reviews are an asset, especially if everyone does them so as to share best practices; brown bag sessions are good, in order to test every possible option.
  • Design-Driven Design. Diabolical: UML code generators are awesome; print your UML diagrams on gigantic sheets, put them on the wall, and if someone asks you a question, reply that it’s obvious. Voice of reason: design for what you need to know, don’t try to do too much upfront; successful teams are able to navigate between the methodologies that suit them. What to do: less source code – pay your junior developers to produce code, and pay your seniors to remove it; the less code, the fewer chances of bugs. Be wary of maintainability, though (Clojure is a good example).
  • Pokemon pattern. Diabolical: use all of the GoF patterns. Voice of reason: the appropriate design pattern is your friend; using a design pattern is adding a feature to a language which is missing it. As an example, Java concurrency works, but since mutability is deeply rooted, it’s hard; of course, you could use actors (like in Scala), but you definitely would be adding a feature. What to do: whiteboard, whiteboard and whiteboard, to communicate with your colleagues; as a tip, carry whiteboard markers at all times (or alternatively pen and pencil).
  • Tuning by Folklore. Diabolical: performance-tune by lighting black candles – remove string concatenations, add DB connections and so on. Voice of reason: measure, don’t guess; it’s the best approach to improve performance, and if it sounds boring, it’s because it is. Remember that the problem is probably not where you think it is. What to do: shameless plug – go to jClarity.
  • Deity. Diabolical: all the code in one file is easier to search; VB 6 is one of the most popular languages because of it. Voice of reason: discrete components based on SOLID principles – Single Responsibility, Open to extension, Liskov substitution, etc. What to do: discuss with your team – don’t go screaming WTF, you’ll probably have issues; use naming based on the domain model so that other developers can read your code.
  • Lean Startup Ninja. Diabolical: a ninja ships his code when it compiles! Voice of reason: continuous delivery is a business enabler. What to do: tools are available to achieve continuous delivery – Ant/Gradle/Maven + Jenkins + Puppet + Vagrant.
  • CV++. Diabolical: the TIOBE index is your friend. Voice of reason: just be good at the principles; try different languages, because what you learn in another language can be applied to the one you’re currently programming in. What to do: being a Software Developer is better than being a Programmer – the former understands the whole lifecycle of an application, which is much better.
  • I haz Cloud. Diabolical: nothing can go wrong if you push your applications to the cloud. Voice of reason: evaluate and prototype. What to do: there are many providers available – EC2, Heroku, JElastic, CloudBees, OpenShift, etc.; again, a brown bag session is a great way to start.
  • Can Haz Mobile. Diabolical: if you go into the mobile game, you’ll be a millionaire.
  • HTML5 for the win. Diabolical: UX is so important. Voice of reason: what are your UX needs? Think about what pisses you off…
  • Big Data. Diabolical: MongoDB is web scale. Voice of reason: what type of data are you storing? Non-functional matters. Business doesn’t care about distributed databases, but about their reports!

Simplicity in Scala Design by Bill Venners

The main thing is to show how to:

  • Design to simplify tasks for your users
  • Design for busy teams. Don’t assume your users are expert in your libraries. It’s something like driving a car you’ve never driven before: it should be so obvious users shouldn’t have to check the documentation
  • Make it obvious, guessable or easy to remember, in that order of preference. The choice process for plusOrMinus in ScalaTest is a good example.
  • Design for readers first, then writers, in that order. Choosing the name for invokePrivate is a good example: better to use something semantically significant than some symbol.
  • Make errors impossible (or difficult at least). In particular, use Scala’s rich type system to achieve this. Using the compiler to flag errors is best.
  • Exploit familiarity. James Gosling used familiar C++ keywords in Java, but without memory management, so as to leverage this principle. In the same way, ScalaTest uses JUnit constructs but enforces a label on each test.
  • Document with examples
  • Minimize redundancy. A Zen of Python principle states:

    “There should be one and preferably only one obvious way to do it.”

    A good practice would be to put a Recommended Usage block on each class.

  • Maximize consistency. ScalaTest offers a GivenWhenThen trait as well as the Suite class, but when you mix the latter with the former, you do not need to set the name of the test. However, enforcing the Single Responsibility Principle and introducing Spec solves the problem.
  • Use symbols when your users are already experts in them. Scala provides + instead of plus, and * instead of multiply. On the contrary, in SBT some symbols prove to be problematic: ~=, <<=, <+=, <<++=. Likewise, it’s better to use foldLeft() than /:. However, there are situations where you couldn’t use words at all (like in algebra).
  • Use a functional style by default, but switch to an imperative style where it improves usability. For example, the test function in a suite registers the tests typed inside the braces.

Remember what tools are for. They are for solving problems, not finding problems to solve

Re-imagining the browser with AngularJS by Igor Minar and Misko Hevery

– I attend this session in order to open my mind about HATEOAS applications. I do so hope it’s worth it! –
Browsers were really simple at one time. 16 years later, browsers offer many more features, going as far as allowing people to develop through a browser! For a user, this is good; for a developer, not so much, because of complexity.

In a static page, you tell the browser what you want. In JavaScript, you have to take care of how browsers work and the differences between them (hello IE). As an example, remember rounded corners before CSS border-radius? In the HTML or DOM APIs, there hasn’t been such improvement. Even if you use jQuery, you’re doing imperative programming. The solution to look for is declarative! Data binding helps with this situation: if your model changes, the view should be automatically updated; conversely, with two-way data binding, updates in the view change the underlying model. Advanced templating can be used with collections. Actually, 90% of JavaScript code is only there to synchronize between the model and the view.

AngularJS takes care of this now, while a specification is under way so that this becomes part of how browsers work. HTML can be verbose; tabs are a good example: they currently use div tags while they should use a tab tag. From a more general point of view, AngularJS brings such components to the table. Also, Web Components are an attempt at standardizing this feature.
– here comes a demo –
AngularJS works by importing the needed js files, creating a module in JS and adding custom attribute to HTML tags.

Testacular is a JavaScript testing framework adapted to AngularJS.

Apache TomEE, JavaEE 6 Web Profile on Tomcat by David Blevins

Apache TomEE is trying to solve the problem where you run with Tomcat until you hit a wall and have to move to another server. TomEE is a certified Web Profile stack: it is Tomcat plus the Apache libraries that bridge the gap. Core values include being small, being certified and staying Tomcat.

What is the Web Profile anyway? In order to address the bloated image of Java EE, the v6 specification introduced it, keeping only about half the specs of the full Java EE platform (12 out of 24). Things that are not included:

  • CMP
  • CORBA
  • JAX-RPC
  • JAX-RS
  • JAX-WS
  • JMS
  • Connectors

The last 4 are especially lacking, so TomEE comes in 3 flavors: Web Profile including JavaMail (certified), JAX-RS (certified) and Plus, which includes JAX-WS, Connector and JMS (not certified). Note that Java EE 7 will certainly change those distributions, since profiles themselves will change. TomEE began in 2011 with 1.0.0-beta-1, and October saw the release of 1.5.0. TomEE is known for its performance, but also for its lack of extended support.
– here comes the TomEE demo, including its integration with Eclipse (which is exactly the same as for bare Tomcat) –
TomEE is just Tomcat with a couple of JARs in both lib and endorsed, extra configuration in conf and a launcher for each OS in bin. Also, there’s a Java agent (– for some reason I couldn’t catch –).

Testing is done with Arquillian and runs within an hour. Certification tests are executed on Amazon EC2 instances, each having 613 MB max memory (the full run takes more than a hundred hours): note that the TCK passes with default JVM memory. The result is quite interesting: a 27 MB download, with only 64 MB of memory taken at runtime. It would be a bad idea to integrate all the stacks yourself, not only because it’s a waste of time, but also because there’s a gap between aggregating all components individually and the integration that was done in TomEE.

Important features of TomEE include the integration of Arquillian and Maven. A key point of TomEE is that deployment errors are all printed out at the same time, and do not force you to correct them one by one, as is the case with other application servers.


Devoxx 2012 – Day 3

November 14th, 2012

– the last evening was a little hard on me, shall we say. I begin the day at noon, which suits me just fine –

Securing the client-side by Mike West

If you want the largest possible audience, chances are you’ll use HTML, JavaScript and CSS. Native hasn’t (yet) the same traction as those traditional technologies. Business logic is slowly moving from those big backends toward browsers.

A prerequisite to anything is to send data securely, using HTTPS. There’s no reason today not to use HTTPS. In fact, to reach a server, you’ll probably hop from server to server until you get to your target. Now, if someone gets in the middle, you’re basically toast: using HTTP, there’s no way to detect tampering. On the contrary, using HTTPS, you can prevent data from being read or modified on its way to the server (and back to you).

However, servers should still listen to HTTP – because most users type HTTP – but then redirect to HTTPS; there’s an HTTP header for that, Strict-Transport-Security. If you want to develop an application, go to https://startssl.com/ and get SSL certificates.

The most common security attack is script injection. Even though you can write benign scripts, most injected scripts will have worse effects, such as accessing cookies. Browsers normally disallow accessing cookies from other domains, but XSS bypasses exactly that. The biggest problem is that users are not aware of this. An XSS cheat sheet is available at OWASP but, in essence, in order to defend against XSS, you have to both verify each input and escape output.

Fortunately, both problems have been solved for ages. Developers just have to know about the solutions for the languages and frameworks they’re using. Unfortunately, developers may make mistakes, and browsers may contain bugs. Therefore:

Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job

HTML has a thing called Content Security Policy (CSP), a W3C specification (currently a draft). It takes the form of an HTTP header (e.g. Content-Security-Policy: script-src 'self') that whitelists valid sources for external resources. Resources from sources not specified won’t be loaded by the browser, as the latter assumes they were injected by scripts. Currently, CSP 1.0 is implemented in Chrome and Firefox. The main XSS security hole is inline scripts: in order to prevent them, CSP disallows inline JavaScript, including declarations and event handlers. Note this could be seen as a downside, but both can be declared outside the HTML, and remember inline scripts are a security hole.

Alternatively, CSP also offers a Content-Security-Policy-Report-Only header that sends reports to a URL. This lets you run a loose CSP policy along with a stricter report-only policy, so you can test the stricter one before deploying it. The HTML5 Rocks tutorial is a good source of information to set up a security policy.

Another way to protect against XSS is sandboxing. sandbox is an attribute applied to an iframe; it works by putting the iframe into a unique origin. This has some major consequences: no plugins, no scripts, no form submissions, no top-level navigation, no popups, no autoplay, no pointer lock and no seamless iframe. sandbox accepts values to loosen some of these restrictions.

In concordance with the above least-privilege principle, a possible architecture is to design a page where each component does a single thing. For example, we could use a sandboxed iframe to execute potentially dangerous scripts and wrap it behind a simple message API so the main page can get its results.

Home automation for Geeks by Kai Kreuzer and Thomas Eichstädt-Engelen

– given my initial education as an architect, I’m due to bridge the gap between it and my current job –
The talk is about fun stuff you can do with Java.

Home automation is not just about electrifying things, because they already are! The point is that none of these systems are integrated in any way: home automation has to put the emphasis on integration. There are 3 goals of home automation:

  • Comfort
  • Security (motion sensors, window contacts, etc.)
  • Energy saving (cut power off, etc.)

To create a smart home, you’ll need smart devices, a way to manage your home, and a way to visualize data. Many of these features are already available in the openHAB project. The project consists of two parts: the openHAB runtime, a headless Java application running on the JVM, and a designer which is used to configure the former. Both communicate either through a Samba shared folder or Dropbox.

The runtime is composed of:

  • Java 7
  • OSGi (Equinox variety)
  • Declarative services
  • EMF
  • and an Embedded Jetty, with JAX-RS and Atmosphere

Core concepts of openHAB include items: high-level abstractions that basically represent the things you want to control. Note that devices aren’t the right level of abstraction: as an example, a radio device could be composed of a switch item, a dimmer item and a number item.

openHAB’s architecture is a simple event bus, with bindings between events – commands (to the item) and status updates (from the item) – and the real-life device. An item registry is plugged into the bus, so as to have both a UI and some automation rules, as well as a console and a persistence layer. There are a couple of UIs available: web of course, plus native for Windows Mobile, Android and iOS.

openHAB offers many different bindings over devices, but also integrates with XMPP, OSGi and Google Calendar. The latter is very important since you can create your triggers through Google Calendar. Other features include database services and export capabilities (Sense, Cosm).
– a real-life demo starts, taking a Koubachi plant device and a doorbell as examples –
A demo server is available at demo.openHAB.org

Behavior-Driven Development on the JVM, a state of the union by John Ferguson Smart

– if you’ve read Specification by Example, this one is a must –
BDD has many definite benefits, such as fewer bugs and more productivity, but what is BDD anyway? In essence, BDD is making Business, Business Analysts, Developers and Testers talk a common language. There are no tests in BDD, there are executable specifications. In TDD, you test what your code does, while in BDD, you test what your code should do according to the specifications.

Effectively, executable specifications are not only specifications, but also tests and documentation. Given that, every line you write should be linked to a line in the specifications: this means you do not code at the micro level, but at the macro level. As a consequence, every line you code has value to the business. In summary, using BDD will:

  • build only features that add real value
  • result in less wasted effort (through traceability)
  • improve communication
  • get a higher quality, better tested product

How to apply BDD to Business Analysis

A typical application will have goals, each being decomposed into capabilities, features and stories, down to examples, each one being translated into acceptance criteria. The main point of this is to be able to trace each criterion back up.

Goal
Ask the customer what he really needs and keep asking why until you reach a high-level goal. Standard users tend to express requirements as implementations: we need to find the business need behind the suggested naive implementation.
Capability/feature
Those help reach the goals defined above. Which features do you implement first? You should implement the minimum features required to deliver the business value defined in the goal. Traditionally, this is expressed using the triplet “IN ORDER TO”, “AS A” and “I WANT”.
Story
The story takes the role of the user and describes, guess what, a user story as Agile defines it.

Tools to code

Available tools include JBehave, Cucumber, easyb and Thucydides, which does the job of reporting from the other tools.
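
To illustrate how these tools map plain-text scenarios onto code, here is a hedged JBehave-style steps class (the account scenario is invented; the $-placeholders follow JBehave’s convention):

import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Then;
import org.jbehave.core.annotations.When;

public class AccountSteps {

    private int balance;

    @Given("an account with $amount euros")
    public void anAccountWith(int amount) {
        balance = amount;
    }

    @When("the user withdraws $amount euros")
    public void userWithdraws(int amount) {
        balance -= amount;
    }

    @Then("the balance is $amount euros")
    public void balanceIs(int amount) {
        // executable specification: the Then step asserts the expected outcome
        if (balance != amount) {
            throw new AssertionError("expected " + amount + " but was " + balance);
        }
    }
}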

Remember that TDD tools include JUnit, Mockito, Spock and Arquillian. While BDD is about acceptance tests, TDD is about developer tests (unit and integration). – I tend to think that high-level integration tests begin to approach user-acceptance tests –

The speaker uses Spock as a TDD testing framework, because it respects the Given/When/Then BDD structure. As a bonus, you can mock (and stub) the needed dependencies very easily, as well as write data-driven tests in a tabular way. Also, Spock can be integrated with Arquillian. Alternatively, if you’re a Scala fan, BDD2 is the framework to use. Finally, for JavaScript, Jasmine is available.

In conclusion, it’s behavior all the way down.

Using Spring in Scala by Arjen Poutsma

The goal of the session is to see how to use Spring in the Scala language.
– since the session is mostly demo-based, I encourage the reader to go directly to the Spring Scala project –

Effective Scala by Josh Suereth

  1. Basics
    • Use expressions instead of statements. Telling the computer what to do is a statement: create a variable, set its value, return the variable. In Scala, the last computed value is the returned value. Note that match, if and others return values. Likewise, Scala for expressions let you just tell what you want, not how you want it!
    • Use the REPL (the Scala command-line evaluator). The good thing about REPL is that you’re not only shown the type of the variables you create, there’s auto-completion, so it’s a great way to find what exists. It’s a good practice to start with the REPL, then test, then code.
    • Just stay immutable. In Java, keeping references to mutable collections is a very dangerous thing to do, so you have to copy them before storing/exposing them – and endure a very big decrease in performance. In Scala, case classes with constructor “injection” make it simpler to implement immutability; additionally, you get automatic equality and hashcode for free. However, remember that local mutability is ok, as long as you don’t expose the mutable variable.
    • Use Option. In Java, you’ll have to check whether every used parameter is null and you’d better not forget one! In Scala, for expressions can be used to easily perform checks and yield the desired Option result.
  2. Object-orientation
    • Use def for abstract members. Given the way memory allocation works on the JVM, you have to either declare a block or use a def; since the former is an advanced technique, use the latter. Additionally, you can use the same def names in case classes.
    • Annotate your API when it has a non-trivial return type. Since the compiler does type inference, you may run into complex cases where the type you infer is not the type the compiler does. Do document!
    • Favor composition over inheritance. – See the famed Cake pattern
  3. Implicits
    • Implicits are used when you need to provide a default parameter in a function but let the caller decide which default he wants to use (as opposed to default value in parameters).
    • Implicits are found when “in scope”. Scope computation follows a strict (– but complex –) search pattern. A good practice is to avoid relying on imports, since it’s not explicit that importing will trigger an implicit; thus, it’s best to put implicits in companion objects.
    • Implicit views. Don’t use implicit views, whatever they are; favor implicit classes instead.
  4. Types
    • Typed traits. A best practice is to provide typed traits.
    • Type classes. There are several benefits to using type classes, most importantly to “monkey patch” classes you don’t control.
  5. Misc
    • Scala isn’t Java (nor Haskell). Learn to write Scala, don’t write it as the language(s) you know.
    • Know your collections! Traversables are at the root of the collections hierarchy, Iterables come next, etc. Indexed sequences are good when you need to access an element by its index, linear sequences when you need to access by head/tail. So, if coming from Java and wanting indexed access, use Vector instead of List.
    • Java integration. Prefer Java primitives in APIs.
    • Learn patterns from FP.

Vaadin meetup with Joonas Lehtinen and the Vaadin team

Escalante, Scala-based Application Server

Escalante is an attempt by JBoss to create a Scala application server. Scaladin is a Scala wrapper around the Vaadin API, provided as an add-on. Lift, the Scala web framework, has very fat dependencies: Escalante spares you from having to deploy the Lift JARs in each of your Lift-dependent webapps.

mGWT

GWT goes mobile, with mGWT and GWT PhoneGap. mGWT supports major mobile browsers, including iOS, Android, etc. The only major difference is that you get it on the web instead of an app store. Web performance is an important topic: compiling makes it easier, only you have to think about a few things. For example, startup performance is composed of downloading, evaluating and executing. – see the talk about web performance –

JavaScript optimization is hard to do by hand, and thus most websites don’t do it. On the contrary, the GWT compiler enforces those good practices (e.g. caching). Moreover, GWT lets you do things really not possible otherwise, such as componentizing parts of your application. Also, GWT can generate the offline manifest for the HTML5 app cache. By comparison, GWT applications are much smaller, because of choices made by Google engineers: for example, there’s no image in the iOS theme, only pure CSS. Another axis of optimization is minimal output (as for CSS). Finally, only the stuff actually used ends up in your final application: it’s called Dead Code Removal. In essence, only the Android theme is downloaded by an application running on an Android device.

In the area of runtime performance, let’s consider layout. Doing layout in JavaScript takes time! A rule is to never leave native code, in our case the CSS rendering engine. Well, mGWT uses CSS3 to lay out the page. If we consider animations, looping in JavaScript is also a pretty bad idea. The solution is the same: just use CSS, which will use GPU acceleration.

SASS theming in Vaadin 7

SASS is a CSS compiler, so think of it as a “GWT for CSS”. CSS should be easy, but it’s not for real-life applications and general-purpose themes. You want to do it the right way, because it’s gonna be there for a long time. CSS support in IDEs (Eclipse for example) is crap: there’s only auto-completion. Point is, CSS is hard to maintain with a team of more than one person (and even for a single person, it can be tedious, to say the least). On the other hand, a maintainable code structure – with imports – kills startup performance, because it requires one HTTP request per CSS file. SASS can help in these areas.

For example, you can split your CSS file into smaller ones and SASS will concatenate them into a single one. Moreover, URL paths for images in the CSS can be resolved automatically. – the rest of the presentation goes through SASS features; you can have a look at an introduction there or check the full SASS documentation –
The Vaadin team has developed its own SASS compiler in Java, since the original implementation is Ruby-based and we didn’t want any Ruby dependency in our apps, did we?

Besides SASS support, Vaadin 7 also offers a way to completely replace the default style name of a component through setPrimaryStyleName(), instead of only being able to add additional CSS class names. In combination with SASS mixins, this lets you combine parts of different themes.
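
As a quick illustration – my own sketch, not from the talk – of the difference between replacing and adding style names:

Button button = new Button("Save");
// setPrimaryStyleName() replaces the default "v-button" class entirely...
button.setPrimaryStyleName("mytheme-button");
// ...whereas addStyleName() merely appends an extra CSS class.
button.addStyleName("important");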

Important: SASS support is under active development, test and do provide feedback!

Usability in Vaadin

– Faros, a Vaadin solution partner, created the Vaadin Belgian meetup group –
Usability is a huge topic; the point is that a good interface should be self-explanatory. Usability is defined as how easy it is to accomplish a given task, while accessibility is about ensuring an equivalent user experience for people with disabilities (blindness, in most cases). Default Vaadin theming doesn’t render buttons as HTML buttons but as divs, so screenreaders cannot recognize them as such. Moreover, as Vaadin AJAX requests update the UI, it can really throw screenreaders off.

WAI-ARIA is a standard defining how to make RIAs accessible. In order to achieve this, it defines standard roles, such as navigation menus or buttons, even for elements that are not real HTML buttons. WAI-ARIA support is on the roadmap, but has been frozen due to time constraints.

Still, there’s already support for keyboard and shortcut keys.

Button button = new Button("OK");
button.setClickShortcut(KeyCode.ENTER);

A good practice is also to express sizes in relative units, so that users can increase the font size in their browsers.

Vaadin 7 and what’s next

Vaadin 7 is in beta 8 status. The decision is to deliver when it’s ready, so as not to compromise quality: it should be RC in December. Version 7 is a real major release, with huge core rewrites. Goals included empowering developers, embracing extensibility and cleaning up (the web is the only supported platform).

After Vaadin 7, the roadmap is not decided yet… The biggest mistake with Vaadin 7 was that some things were announced and too many couldn’t be released; therefore, there’s no plan for Vaadin 8 yet. However, version 7.1 is planned for February 2013 and will include:

  • Built-in push channel
  • New theme that puts SASS to full use
  • Start adding client-side APIs to some (few) Vaadin widgets.

Version 7.2 will be more focused on (client) widgets. In most cases, it’s wiser to use (server) components, but in some cases there’s a need for widgets (offline, for example). Table and combo boxes will be redesigned, since support outside GWT will be dropped. The target is summer 2013.

Other non-planned improvements include:

  • Declarative XML-based UIs: mix HTML in UI declarations, integrate with JSP as well as taglibs, and mix and match them with XML
  • Better IDE tooling: add-ons support, full-theme support and declarative UI editing mode for Visual Editor
  • Book of Vaadin covering v7 is expected in January
  • WAI-ARIA
  • On the fly translations for UI
  • Comprehensive charting library, based on HighCharts and available with a Vaadin Pro Account
  • Java EE 6 CDI with Vaadin (support for injecting UIs and views), the work in progress is available on GitHub

– nighty night, folks, it’s more than 10 PM –


Devoxx 2012 – Day 2

November 13th, 2012 3 comments

– after a few hours of sleep and a fresh mind to boot, I’m ready to start the day with some Scala. Inside, I’m wondering how much time I’ll be able to keep on understanding the talk –

Scala advanced concepts by Dick Wall and Bill Venners

Implicits are the first subject tackled. First, know that implicits are one of the reasons the Scala compiler has such a hard time doing its job. In order to illustrate implicits, a Swing anonymous class code snippet is shown:

val button = new JButton
button.addActionListener(new ActionListener {
	def actionPerformed(event: ActionEvent) {
		println("pressed")
	}
})

button.addActionListener((_: ActionEvent) => println("pressed")) // Compile-time error: type mismatch

The implicit keyword is a way for the Scala compiler to try conversions from one type to another until it compiles. The only thing that matters is the type signature, the name doesn’t. Since implicit functions are so powerful (thus dangerous), in Scala 2.10 you have to explicitly import language.implicitConversions. Moreover, other such features (e.g. macros) have to be imported in order to be used. If not imported, the compiler issues warnings, but in future versions these will be handled as errors.

Rules for using implicit are the following:

  • Marking rule: only defs marked as implicit are available
  • Scope rule: an implicit conversion must be in scope as a single identifier (or be associated with either the source or target type of the conversion). This is something akin to inviting the implicit conversion in. If the implicit is in the companion object, it will be automatically used. Another, more rational alternative is to put it in a trait, which may be brought in scope, or not.
  • One at a time rule: only a single implicit conversion must satisfy the compiler.
  • Explicit-first rule: if the code compiles without implicit, implicits won’t be used.
  • Finally, if Scala has more than one implicit conversion available, the compiler will choose the more specific one if it can. If it cannot, it will choose none and just raise an error.

Even though the compiler doesn’t care about the implicit def’s name, people reading your code do, so the name should be chosen both for explicit usage of the conversion and for explicit importing into scope. Note that some already available implicits live in the Predef object.

In Scala 2.10, implicit can be set on classes, so that implicits are even safer. As a corollary, you don’t have to import the feature. Such classes must take exactly one constructor parameter and must be defined where a method could be defined (so they cannot be top-level). Note that the generated method will have the same name as the class.

Implicit parameters can be used in curried defs and replace the last parameter.

def curried(a: Int)(implicit b: Int) = a + b

implicit val evilOne: Int = 1

curried(1)(2) // Returns 3 as expected

curried(1) // Returns 2 because evilOne is implicit and used in the def

Implicit parameters are best typed with one of your own types, which you control: don’t use Int or String as implicit parameters. It also lets you set meaningful names on the types (such as Ordered in the Scala API).

By the way, in order to ease the use of your API, you can use the @implicitNotFound annotation to provide a specific error message to the compiler. Also, when an implicit isn’t applied, the compiler doesn’t offer much debugging information. In this case, a good trick is to start passing the parameter explicitly and see what happens. In conclusion, use implicits with care: ask yourself if there’s no other way (such as inheritance or a trait).

Scala 2.10 also offers some way into duck typing through applyDynamic(s: String) (also a feature that has to be imported in order to be used). It looks much like reflection, but it lets compiler checks pass through and fails at runtime instead of compile time (if the desired method is not present).

Now is time for some good Scala coding practices.

  • First, favor immutability: in particular, if you absolutely need to use a var, use the minimum scope possible. Also, mutable collections are particularly evil. If you need mutability in this context, it’s most important not to expose it outside your method (in the signature).
  • For collections, consider map and flatMap over foreach: this tends to make you think in functional-programming terms.
  • Then, don’t chain more than a few map/filter/… calls in order to make your code more readable. Instead, declare intermediate variables with meaningfulnames.
  • Prefer partial functions to bust out tuples:
    wl.zipWithIndex.map { case (w, i) => w.toList(i) } // better than
    wl.zipWithIndex.map { t => t._1.toList(t._2) } // not a good thing
  • Consider Either as an alternative to exceptions, since exceptions disturb the flow of your code.
  • The more if and for you can eliminate from your code, the fewer cases you’ll have to take into account. Thus, it’s better to favor tail recursion and immutables over loops and mutables.
  • Have a look at Vector over List. The former offers effectively constant-time indexed access…
  • Know your collections!
  • When developing public methods, always provide a return type. Not only does it let you avoid runtime errors (code may compile with an unintended inferred type), it also nicely documents your code and makes it more readable.
  • Deeply think about case classes: you get already-defined methods (toString(), apply(), unapply(), etc.) for free.

– here my battery goes down –

Faster Websites: Crash Course on Frontend Performance by Ilya Grigorik

– Even though backend performance matters, I do tend to think, like the speaker, that in most “traditional” webapps, most of the time is spent on the frontend. I hope to increase my palette of tools available in case it happens –

The speaker is part of the Web Fast team inside of Google. The agenda is separated into three main parts: the problem, browser architecture and best practices, with context.

The problems

What’s the point of making fast websites? Google and Bing tried to measure the impact of delays, by injecting artificial delays at various points during the response. All metrics, including Revenue per User go down, and the more the delay, the sharper the decrease in a linear fashion. As a sidenote, once the delay is removed, metrics take time to go back to their original value. Another company did correlate the page load speed and the bounce rate. Mobile web is putting even more pressure on the load of the sites…

If you want to succeed with web performance, don’t view it as a technical metric. Instead, measure and correlate its impact with business metrics.

Delay User reaction
0-100ms Instant
100-300ms Feels sluggish
300-1000ms Machine is working
+1s Mental context switch
+10s I’ll come back later

Google Page Analytics shows that on the desktop web, the median page load time is roughly 3s, while on the mobile web it’s 5s (only taking into account those new Google phones). The problem is that an average page is made of 84 requests and weighs about 1MB!

Let’s have a look at the life a HTTP request: DNS lookup, socket connect, request management itself and finally content download. Each phase in the lifecycle can be optimized.

  • For example, most DNS servers are pretty bad. That’s why Google provides free public DNS servers: 8.8.4.4 and 8.8.8.8. A project named namebench can help you find the best DNS servers, in part according to your browser history.
  • Socket connection time can be reduced by moving the server closer. Alternatively, we could use a CDN.

In the case of devoxx.com, there are only 67 requests, but it’s a little less than 4MB. Those requests are not only HTML, but also CSS, JavaScript and images. Requesting all these resources can be graphically represented as a waterfall. There are two ways to optimize the waterfall: making it shorter and making it thinner. Do not underestimate the time taken by the frontend: on average, it accounts for 86% of the total time.

One could argue that improvements in network speed will save us. Most “advanced” countries have an average 5Mb/s connection speed, and it’s increasing. Unfortunately, bandwidth doesn’t matter that much: latency is much more important! Over 5Mb/s, increasing bandwidth won’t help Page Load Time (PLT) much, even though it will help for downloading movies :-) The problem is that increasing bandwidth is easy (just lay out more cables), while improving latency is expensive (read: impossible), since it’s bound by the speed of light: it would require laying out shorter cables.

HTTP 1.1 is an improvement over HTTP 1.0 in that you can keep connections alive: in the former version, one request required one connection, which is no longer the case in the latter version. Now, this means that browsers open up to 6 TCP connections to request resources. This number is basically a limit because of TCP slow start, a feature of TCP meant to probe for network performance and not overload the network. In real life, most HTTP traffic is composed of small, bursty TCP flows, because there are many requests, each of limited size (think JS files). An improvement would be to increase the TCP initial congestion window, but that requires tweaking low-level system settings.

An ongoing work is HTTP 2.0! The good thing is that SPDY is already there, and SPDY v2 is intended as a base for HTTP 2.0. HTTP 2.0’s goals are to make things better (of course), build on HTTP 1.1 and be extensible. Current workarounds to improve PLT include concatenating files (CSS, JS), image spriting and domain sharding (in order to mislead the browser into making more than 6 connections because there are “fake” domains). SPDY is intended to remove these workarounds, so that developers won’t have to sully their nails with dirty hacks anymore. You don’t need to modify your site to work with SPDY, but you can optimize it for it (stop spriting and sharding).

In essence, with SPDY, you open a single TCP connection and, inside this single connection, send multiple “streams”, which are multiplexed. SPDY also offers Server Push, so that the server can push resources to the client… as long as the client doesn’t cancel the stream. Note that inlining resources inside the main HTML page can be seen as a form of Server Push. Today, SPDY is supported across a number of browsers, including Chrome (of course), Firefox 13+ and Opera 12.10+. Additionally, a number of servers are compatible (Apache through mod_spdy, nginx, Jetty, …).

The mobile web is exploding: mobile Internet users will reach desktop Internet users in 2 years. In countries like India, it’s already the case! A good part of Internet users access it strictly from their mobile; the exact figure depends on the country, of course (25% in the US, 79% in Egypt). The heart of the matter is that though we don’t want to differentiate between desktop and mobile access, make no mistake: the physical layer is completely different. In fact, for mobile, latency is much worse, because exiting from idle state is time-consuming, both in 3G and in 4G (though numbers are lower in the latter case).

There are some solutions to address those problems:

  • Think SPDY again
  • Re-use connections, pipeline
  • Download resources in bulk to avoid waking up the radio
  • Compress resources
  • Cache once you’ve got them!

Browser architecture

The good news is that the browser is trying to help you. Chrome, for example, has the following strategies: DNS prefetch (for links on the same page), TCP preconnect (same here), pooling and re-using TCP connections, and finally caching (for resources already downloaded). More precisely, hovering the mouse over a link will likely fire DNS prefetch and TCP preconnect. Many optimization parameters are available in Chrome; just go play with them (as long as you understand them).

Google Analytics provides you with the means to get the above metrics when your site uses the associated script. An important feature is segmenting, to analyze differences between users. As an example, the audience could be grouped into Japan, Singapore and US: this showed that a PLT peak for Singapore users only was caused by the temporary unavailability of a social widget, but only for the Singapore group.

Now, how does the browser render the bytes provided by the server? The HTML parser does a number of different things: create the DOM tree, as well as the CSS tree, to produce the Render tree. By the way, HTML5 specifies how to parse HTML, which was not the case in HTML 4: bytes are tokenized, and tokens are built into the tree. However, JavaScript can change the DOM through document.write, so the browser must first run scripts before doing anything else. As a consequence, synchronous scripts block the browser; the corollary is to write async scripts, like so:


(function() {
	// do something here
})();

In HTML5, there are simpler ways to load scripts asynchronously, with script tag attributes:

  • regular, which is just standard script
  • defer tells the browser to download in the background and execute the script in order
  • async tells it to also download in the background but execute it when ready

– 3 hours talk are really too much, better take a pause and come back aware than sleep my way through –

In order to optimize a page, you can install the PageSpeed extension in Chrome, which will audit the page (– note it’s also available as a Firebug plugin –). Checks include:

  • Image compression optimization: even better, the optimized content is computed by PageSpeed so you could take it directly and replace your own.
  • Resizing to 0x0 pixels (yes, it seems to happen often enough that there’s a rule to check for it)
  • Looking for uncompressed resources. Web servers can easily be configured to use gzip compression, without updating the site itself

Alternatively, you can use the online PageSpeed Insights

Performance best practices

  • Reduce DNS lookups
  • Avoid redirects
  • Make fewer HTTP requests
  • Flush the document early so as to help the document parser discover external resources early
  • Uses a CDN to decrease latency: the content has to be put the closest to your users
  • Reduce the size of your pages
    • GZIP your text assets
    • Optimize images, pick optimal format
    • Add an Expires header
    • Add ETags
  • Optimize for fast first render, do not block the parser
    • Place stylesheets at the top
    • Load scripts asynchronously
    • Place scripts at the bottom
    • Minify and concatenate
  • Eliminate memory leaks
    • Build buttery smooth pages (scroll included)
    • Leverage hardware acceleration where possible
    • Remove JS and DOM memory leaks
    • Test on mobile devices
  • Use the right tool for the job (free!)
    • Learn about Developer Tools
    • PageSpeed Insights
    • WebPageTest.org
    • Test on mobile devices (again)

Reference: the entire slides are available there (be warned, there are more than 3)

Maven dependency puzzlers by Geoffrey de Smet

– seems like a nice change of pace, recreational and all… –

The first slide matches my experience: if you deploy a bad POM to a Maven repository, your users will feel the pain (– see the last version of log4j for what not to do –)

– he, you can win beers! –

What happens when you add an artifact with different groupId?
Maven doesn’t know it’s the same artifact so it adds both. For users, this means you have to ban outdated artifacts, and exclude transitive dependencies. For providers, use relocation. In no case should you redistribute other project’s artifacts under your own!
What happens when you define a dependency’s version different from the dependency management’s version parent?
Maven overrides the version. In order to use the parent’s dependency-management version, don’t set any version in the project.
What happens when you define a version and you get a transitive dependency with a different version?
Maven correctly detects a conflict and uses the version from the POM with the fewest relations from the project (the “nearest” definition). To fail fast, use the maven-enforcer-plugin and its enforce goal.
What happens when the number of relations is the same?
Maven chooses the version from the dependency declared first in the POM! In essence, this means dependency declarations are ordered; the version itself plays no role, whereas a good strategy would be to retain the highest version.

– hopes were met, really fun! –

JUnit rules by Jens Schauder

– though I’m more of a TestNG fanboy, it seems the concept of rules may tip the balance in favor of the latter. Worth a look at least –

Rules fix a lot of problems you encounter when testing:

  • Failing tests due to missing tear down
  • Inheriting from multiple base test classes
  • Copy-paste code in tests
  • Running with different runners

When JUnit runs a test class, it finds all methods annotated with @Test and wraps each of them in a “statement”. Rules are things that take a statement as a parameter and return a statement. Creating a rule is as simple as implementing the TestRule interface and exposing an instance of it in a public field annotated with @Rule. Thus, bundling setup and teardown code is as simple as creating a rule that does something before and after evaluating the statement. Rules can also be used to enforce a timeout, to run the test inside the Event-Dispatch thread, or even to change the environment. Additionally, rules can be used to create tests that are run only under specific conditions, for example to test Oracle-specific code in some environments and H2-specific code in others. Finally, rules may be used to provide services to test classes.
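
As an illustration – my own minimal sketch, not the speaker’s code – a rule bundling setup and teardown could look like this:

import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

public class LoggingRule implements TestRule {
	@Override
	public Statement apply(final Statement base, final Description description) {
		return new Statement() {
			@Override
			public void evaluate() throws Throwable {
				System.out.println("Before " + description.getMethodName()); // setup
				try {
					base.evaluate(); // runs the wrapped test
				} finally {
					System.out.println("After " + description.getMethodName()); // teardown
				}
			}
		};
	}
}

It would then be used in a test class through @Rule public LoggingRule logging = new LoggingRule();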

Once you start using rules, you’ll probably end up having more than one rule in a test execution. As a standard, rules shouldn’t depend on each other, so order shouldn’t be a problem. However, real-life cases prove a need for rule chaining: JUnit provides a RuleChain to achieve this.

Finally, if you need a rule to execute once for the whole class (something akin to @BeforeClass), you can use @ClassRule.

– IMHO, it seems rules are like TestNG listeners, which have been available for ages. At least, I tried –


Devoxx 2012 – Day 1

November 13th, 2012 No comments

The 2012 Devoxx Conference in Antwerp has begun. Here are some sum-ups of the sessions I’ve been attending. I’ll try to write one daily (unless I’m too lazy or tired; Devoxx has been known to have such consequences on geeks…).

– I followed no conference in the morning since I presented a hands-on lab on Vaadin 7 –

Modular JavaEE application in the cloud by B. Ertman and P. Bakker

Two current trends rule modern-day applications: on one hand, applications tend to grow bigger and thus more complex; on the other hand, agile development and refactoring become more common. Challenges that arise from these factors are manifold and include:

  • Dependency management
  • Versioning
  • Long-term maintenance
  • Deployment

When building applications for the cloud, there are a couple of non-functional requirements, such as zero-downtime and modular deployments. Once you get more customers, chances are you’ll end up having specific extensions for some of them, further aggravating the previous challenges.

In university, we learned to promote high-cohesion, low-coupling designs for object-oriented systems. How can we lessen coupling in a system? The first step is to code against interfaces. Then, we need a mechanism to hide implementations, so as to prevent someone from accidentally using our implementation classes. A module is a JAR extension component that provides such a feature (note that no such feature is available within current JVMs). Given a module, how can we instantiate a hidden class? One possible answer is dependency injection, another is service registries.

Modularity is an architecture principle. You have to design with modularity in mind: it doesn’t come with a framework. You have to enforce separation of concerns in the code. At runtime, the module is the unit of modularity.

What we need is:

  • An architectural focus on modularity
  • High-level enterprise API
  • A runtime dynamic module framework: right now, OSGi is the de facto standard since Jigsaw is not available at the time of this presentation.

– Now begins the demo –

In order to develop applications, you’ll probably have to use the BndTools plugin under Eclipse. A good OSGi practice is to separate a stable API (composed of interfaces) in a module and implementations (of concrete classes) in other ones.

OSGi metadata is included in the JAR’s META-INF/MANIFEST file. In order to make the API module usable by other modules, we need to export the desired package. As for the implementation class, there’s no specific code, although we need a couple of steps:

  • Configure the modules available to the module implementation (just like traditional classpath management)
  • Create a binding between the interface and its implementation through an Activator (see the sketch after this list). It can be done through Java code, through Java annotations or with XML.
  • Configure implementation and activator classes to be private
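
For the Java-code flavor, a minimal Activator could look like this (Greeter and GreeterImpl are hypothetical names, not from the talk):

import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class GreeterActivator implements BundleActivator {
	private ServiceRegistration<Greeter> registration;

	public void start(BundleContext context) {
		// Bind the exported interface to the private implementation
		registration = context.registerService(Greeter.class, new GreeterImpl(), null);
	}

	public void stop(BundleContext context) {
		registration.unregister();
	}
}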

Finally, we just have to develop client code for the previous modules, and not forget to tell which OSGi container will be used at runtime.

In case of exceptions, we need a way to interact with the container. In order to achieve this, a console is in order: the gogo console can be hot-deployed to the container, since it’s an OSGi module itself. Once done, we can start and stop specific modules.
– Back to the slides –
Are JavaEE and OSGi fated to be enemies? While it could have been the case before, nowadays they’re not exclusive. In essence, we have 3 options:

  1. Use OSGi core architecture and use JavaEE APIs as published services. It’s the approach taken by application servers such as GlassFish.
    • In this case, web applications become Web Application Bundles (the extension plays no part). Such modules are standard WAR files (JSF, JAX-RS, etc.) except for the Manifest. Extra entries include the Web-ContextPath directive as well as the traditional OSGi Import-Package directive.
    • The goal of EJB bundles is to use EJB and JPA code as you do now; unfortunately, it’s not a standard yet. Manifest directives include Export-EJB.
    • For CDI, the target is to use CDI within bundles. It’s currently prototyped in Weld (CDI RI).

    All application servers use OSGi bundles under the cover, with support for OSGi bundles in applications as an objective in the near future. At this time, GlassFish and WebSphere have the highest level of support for that.

  2. When confronted with an existing JavaEE codebase where modularity is needed in a localized part of the application, it’s best to use a hybrid approach. In this case, separate your application into parts: for example, one ‘administrative’ part based on JavaEE and the other, ‘dynamic’ one using OSGi, glued together by an injection bridge. In the JavaEE part, the BundleContext is injected through a standard @Resource annotation. – Note that your code is now coupled to the OSGi API, which defeats the low-coupling approach professed by OSGi –
  3. Finally, you could need modularity early in the development cycle and use OSGi ‘à la carte‘. This is a viable option if you don’t use many JavaEE APIs, for example only JAX-RS.

Note that for debugging purposes, it’s a good idea to deploy the Felix Dependency Manager module.

What happens when you have more than one implementation module of the same API module? There’s no way to be deterministic about that. However, there are ways to deal with it:

  • The first way is to use arbitrary properties on the API module, and only require such properties when querying the OSGi container for modules
  • Otherwise, you can rank your services so as to get the highest rank available. It’s very interesting to have fallback services in case the highest ranking is not available.
  • Finally, you can let your client code handle this, so that it listens to service lifecycle events for available module implementations

It’s a good OSGi practice: keep modules pretty small in most cases, probably only a few classes. Thus, in a typical deployment, it’s not uncommon for the number of bundles to amount in hundreds.

Now comes the Cloud part (from the title): in order to deploy, we’ll use Apache ACE, a provisioning server. It defines features (modules), distributions (collections of modules) and targets (servers to provision distributions to). By the way, ACE is not only built on top of OSGi but also uses Vaadin for its GUI (sorry, too good to miss). When a server registers on ACE (and is defined as a target), the latter automatically deploys the desired distributions to it.

This method lets us use auto-scaling: just use a script to register a server on the ACE server when it’s ready, and it just works!


Weld-OSGI in action by Mathieu Ancelin and Matthieu Clochard

Key annotations of CDI (Context and Dependency Injection) are @Inject to request injection, @Qualifier to specify precise injection and @Observes to listen to CDI events.
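
As a refresher – my own minimal sketch, with hypothetical names – these annotations are used like this:

import javax.enterprise.event.Observes;
import javax.inject.Inject;

public class OrderService {

	@Inject
	private PaymentProcessor processor; // injected by CDI

	// Called automatically whenever an OrderPlaced event is fired
	public void onOrderPlaced(@Observes OrderPlaced event) {
		processor.charge(event.getAmount());
	}
}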

OSGi is a modular and dynamic platform for Java: it’s stable and powerful but provides an old API. In OSGi, bundles, which are enhanced JARs, are the basis of modularity. OSGi proposes three main features:

  • bundles lifecycle management
  • dependency management
  • version management

Weld-OSGi is a CDI extension that tries to combine the best of both worlds. It’s a JBoss Weld project, developed by the SERLI company. What’s really interesting is that you don’t need to know about OSGi: it’s wrapped under CDI itself. Basically, each time a bundle is deployed in the OSGi container, a new CDI context is created. Also, CDI beans are exposed as OSGi services.

Weld-OSGi provides some annotations to achieve that, such as @OSGiService to tell CDI you need an OSGi service. Alternatively, you can require all service implementations through the @Service annotation, so as to pick the desired implementation. In both cases, you can refine the injected services through qualifier annotations, as in “true” CDI.

OSGi containers generate a lot of events, each time a bundle changes state (e.g. when a bundle starts or stops). Those events can be observed through the use of the traditional @Observes annotation, with the addition of some specific Weld-OSGI annotations. A specific API is also available for events happening between bundles.

– demo is very impressive, the UI being changed on the fly, synced with the OSGi container –



Devoxx Fr 2012 Day 2

April 19th, 2012 Comments off

Today begins with the big talk from the Devoxx Fr team, which sums up the story of the Paris JUG and Devoxx France.

Proud to be developer? by Pierre Pezziardi

In some social situations, when you are asked what you do in life, it’s hard to say you’re a computer programmer, because IT is late, costly and, well, not much loved in everyday life.

That’s not entirely untrue. The cost of implementing a feature is constantly growing: IT is the only industry where there is no productivity gain, the opposite is true! Our industry is evolving very rapidly, we are programming in languages that didn’t exist 5 years ago. And yet, some things never change. Project cycles, despite agility, are long. Moreover, there’s a clear distinction between roles, most notably between thinkers and doers. Agility was meant to address that.

Our method was to develop integrated products, and that meant our process had to be integrated and collaborative.

Steve Jobs

Yet, though Steve Jobs formalized that, though productivity decreases, though agility works, not much changes. There’s a blocker somewhere, and this blocker comes from the people themselves, who resist change. Evolving from a planned economy to agility is very hard.

We could stop at that and tell ourselves we cannot do anything; but in fact, we can do better, not by changing overnight but step by step. As an example, at BRED, customer service was handling more than 120k calls per year (80% support). 10k calls were about frozen Citrix sessions: adding a simple button on the screen helped reduce this number by 8k. Another way to reduce the load was to change a simple error message label, from “Technical error” to “Your account is under watch, please contact your local counselor”.

IT is misunderstood by people outside IT. For example, the notion of technical debt is invisible to top-level managers. In order to communicate with people outside your culture, you have to talk their language. It’s done through graphics and semantics. To go beyond productivity decreases, we’ll have to drastically change our culture, but in the meantime, we can act in small increments with added value.

Another thought is that IT is aligned with its organization: bureaucratic organizations produce bureaucratic IT. For example, Lehman Brothers implemented the Basel II standards (risk management), and yet gained nothing by it (remember?). In essence, understand not only IT complexity, but also remember IT is a support function and has to bring business value. Examples of such added-value systems are Wikipedia, Google, and so on. What do these tools have in common?

The simple, poor, transparent tool is a humble servant; the elaborate, complex, secret is an arrogant master.

Ivan Illich

In essence, those tools are based on “everything is possible but…”, instead of standard enterprise policies where “everything is forbidden but…”. The bet is on trust toward users. The approach is to empower users and let them make mistakes, but be able to correct those easily.

It may seem that we as computer programmers cannot do something about it (we only are developers), but in order to change the system, we have to change ourselves first. This is not a recent strategy, though: Gerald Weinberg describes it in his book The Psychology of Computer Programming.

In conclusion, we are working for organizations that segregate people. In IT, we reproduce the system: programmers, project managers, business analysts. Our duty as programmers (and people) is to break those barriers and talk to people outside IT.

This could be Heaven or this could be Hell by Ben Evans and Martijn Verburg

The talk is about the future of Java. There are two possible view of the future, Heaven and Hell.

In the Heaven view of things, Java 8 comes out in 2013 (as promised), Java remains the #1 language and Jigsaw (Java’s promised modularity) is largely adopted. Java 9 brings reified generics :-)

In essence, we are in the middle of a changing environment:

  • hardware is definitely multicore
  • mobile is here to stay
  • Emerging markets are going to bring many more new Java developers (millions are expected)

Open Source Software becomes really mainstream and everybody (the general public) understands the notions behind it. On the social networks side, they observe strict confidentiality policies. As a corollary, configuration for controlling those parameters is easy and readily available.

In the Hell view of things, Java 8 is not released until 2015 because the community fails to test it enough; Jigsaw is never made available. Java 9 is not released until after 2020: it’s no more #1, developers flee to the .Net platform and alternative languages die.

On the hardware side, it becomes even more fragmented. Worse, patent litigations dominate the market and OpenJDK cannot manage both. On the mobile side, Android and Java stay separated and Android fragmentation goes even further.

In Hell, millions of new developers turn toward .Net and VB is taught in school. Apple and its look-alikes continue to promote lock-in. Competitors see the benefits of this approach and go the same way. For Open Source, OSS developers become elitist and many new ideas never materialize. Today’s startups, now market leaders, are worried about new, nimbler competitors and start heavily lobbying; innovation is smothered.

Facebook and its successors dominate the Internet and privacy is completely lost. You log in with your Facebook account at work and your boss can watch what you do all day long.

In conclusion, the future can be anything, but whatever it is, it’s up to us.

Spring is dead, long live Spring by Gildas Cuisinier

Guess what, this talk is about Spring and more precisely the war between Spring and JavaEE.

Episode 1

How did Spring appear? The JCP made a platform proposal years ago that went by the name of J2EE (at the time). There were some problems: migration from one standard J2EE platform to another was not as easy as announced, and tests were coupled to the platform (no unit tests). In response, Rod Johnson created an Open Source lightweight framework to address those problems: Spring was born.

Second evolution

At first, configuration was done through XML (there weren’t so many boos against XML at the time). At its inception, a single configuration file was necessary. Then, the import command made it possible to have multiple, neatly separated configuration files. Third-party frameworks made their way into the framework, and for some, XML configuration was a nightmare (Acegi for example). Spring evolved with specific namespaces to make configuration of those frameworks easier; it’s something akin to a DSL on top of XML.

JavaEE 6 sexy again

With Java 5 bringing annotations, Spring continued its evolution and took them in. We still need a little XML configuration to scan for annotations, though. Meanwhile, the JCP learned from Spring’s successes and delivered JavaEE 6: it’s simple, testable and lightweight, and everything is achieved through annotations. Yet, having to migrate to JavaEE 6 application servers is a barrier to adoption.

Also inspired by its competition, Spring 3 brought full-annotation configuration: XML can be discarded entirely.

JavaEE in 2012 from a survey

Results from http://cyg.be/SpringJEE show the following: more than half of installed JavaEE platforms are version 5. A majority of not-yet adopters think about migrating in less than a year. More than half of Spring users do not consider migrating to JavaEE. On the contrary, new users are very fragmented: only a quarter think about JavaEE 6, while another quarter think about Spring.

On the Spring side, results show quicker migrations to newer versions. For example, half of current Spring users use v3.0, which was delivered at the same time as JavaEE 6, and around 20% already use Spring 3.1. It’s mostly because migrating is easy (while JavaEE migrations require new application servers).

Episode 3

Spring 3.0 brought the JavaConfig approach, a no-XML configuration style, but didn’t entirely close the gap; Spring 3.1 does. For example, component scanning can be done through @ComponentScan, scheduling can be enabled through @EnableScheduling and MVC is provided through @EnableWebMvc. What these annotations do is easily understood from the underlying code (it was not so easy in XML).
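
A minimal sketch of what such a JavaConfig class looks like (class and package names are hypothetical, not from the talk):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

@Configuration
@ComponentScan("com.example.service") // replaces <context:component-scan>
@EnableScheduling // replaces <task:annotation-driven>
public class AppConfig {

	// Equivalent of a <bean> declaration in XML
	@Bean
	public TransferService transferService() {
		return new TransferService();
	}
}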

A great feature of Spring is its abstraction over caching through @Cacheable; it provides no implementation, however, so you can plug different ones (ConcurrentHashMap, EhCache and GemFire) under the cover – a sketch follows the list below. The feature is also available through XML. Spring 3.1 also provides:

  • Hibernate 4 support
  • A new XML c:namespace for constructors in XML
  • JPA use without persistence.xml
  • miscellaneous improvements in Spring MVC
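
Back to the cache abstraction: a minimal, hypothetical @Cacheable usage could look like this (Book and the lookup are stand-ins, not from the talk):

import org.springframework.cache.annotation.Cacheable;

public class BookRepository {

	// The result is cached in the "books" cache, keyed by isbn;
	// later calls with the same isbn skip the method body entirely.
	@Cacheable("books")
	public Book findBook(String isbn) {
		return slowDatabaseLookup(isbn);
	}

	private Book slowDatabaseLookup(String isbn) {
		// stand-in for an expensive query; Book is a hypothetical class
		return new Book(isbn);
	}
}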

Conclusion

In conclusion, Spring is not only the Spring framework, but a whole ecosystem is readily available: Spring Data, Spring Batch, Spring Mobile, Spring Security, Spring Android, Spring Flex (ok, let’s forget this one), and many more.
– IMHO, too much time defending Spring at the beginning left not enough time to deal with the real stuff at the end. It’s not a good sign when one is on the defensive –

Java concurrency from the trenches by Alex Snaps

Having learned yesterday that we should prepare for concurrency, I decided to attend this talk in place of Client/Server Apps with Java and HTML5 [I couldn’t reproduce the code snippets].

The goal of this talk is to get a better understanding of concurrent code, do more with java.util.concurrent and, finally, make the best use of CPU cores.
What’s concurrency anyway? And what about threads, locks and actors? The main thing about concurrency is state, and how to share it. A thread-safe class is a class that obeys its contract, whether used by a single thread or by multiple threads, whatever the scheduling and without external synchronization.

  • Atomicity: no thread should expose intermediate or transient state that violates the post-conditions of a class
  • Visibility: in a multithreaded environment, there’s no guarantee that a value written to a variable by one thread can be read back by another thread just afterwards

Taking Hibernate as an example, there are multiple statistics to track: entity and query counters, max duration information, etc.

public interface Statistics {
	public long getQueryExecutionMaxTime();
	public String getQueryExecutionMaxTimeQueryString();
}

This interface (no implementation shown) cannot guarantee its contract, because the two methods are linked but there’s no easy way to enforce this.

  • In order to get thread safety, a naive implementation would be to put the synchronized keyword in front of each method. The drawback is that a single monitor shared by all methods has a huge impact on performance.
  • The next step is to have a different monitor for each variable that is accessed, and to use the same monitor for the same variable across different methods, with synchronized blocks inside the methods.
  • Java 5 brings the notion of read/write locks: there’s still only a single access for writing, but multiple concurrent read accesses are allowed. For statistics, there’s no gain, since there are many writes but only a few reads.
  • Also in Java 5, we have Atomic classes that can be used for synchronized variables. Using such atomics, tests are even slower… even though I didn’t catch why.

In conclusion, finer-grained locks can improve performance (i.e. adapt your lock granularity to your needs and context).

Better yet, there’s a no-lock solution, by creating a single State object. As an alternative, the volatile keyword can be used to guarantee the read value is up to date when accessed. Finally, another solution could be Compare-and-Swap. CAS is all about not enforcing thread-safety, but rather preparing for failure and dealing with it.

In conclusion, aim for non-blocking strategies and only try and use locks when required.

I must admit I didn’t get the most of the session, it was full of content and I probably lacked some of the background skills. Slides on Parleys will probably help me go deeper into my understanding because there was a lot of stuff.

DevOps: not only to manage servers by Jerome Bernard

Following my (re)discovery of virtualization, I thought the session could be very insightful.

The session is a feedback on how the speaker had to organize in-house teaching sessions while minimizing the impact on infrastructure. The biggest constraint was the deadline (less than two weeks, starting from bare machines).

Solutions that were rejected included:

  • Ghosting strategies (a single master redeployed on client machines before each session)
  • Complete reinstallation from the local network through scripts
  • Account creation/destruction for sessions

The final solution is a Chef server, with cookbooks being polled by Chef clients deploying to a VirtualBox VM on each client machine. This lets attendees get full admin rights on their guest system (while locking the host system).

Chef lets us automate installations through recipes and cookbooks. Chef clients poll the server every so often, so that configuration stays in sync on each machine. Chef’s advantage over Puppet is a bunch of recipes ready to be used. Finally, Chef provides an administration console that lets us follow clients’ state. On the physical machines, a Chef server was installed only to automate VirtualBox installation and Vagrant files. Also, effort was put into optimizing network usage.

VeeWee was born as a Vagrant extension but now has a life of its own. It lets us build Vagrant masters easily, through templating. In the normal course of things, there’s no need for it, since there are plenty of Vagrant base boxes available. In this case, the need was to change to the French locale from the English-based master. VeeWee launches a VirtualBox VM and simulates user clicks on it. As a side note, VeeWee recently made Windows masters available.

Master definition files were edited to achieve the previous steps:

  • definition.rb: changed the used ISO, as well as the MD5 hash
  • preseed.cfg: changed locale to French
  • postinstall.sh: removed Puppet installation

Once VeeWee finished building the image, a single command was used to create a Vagrant box.

Vagrant lets us create a VirtualBox image from the CLI, through a single text-based Vagrantfile. It can easily be integrated with Chef and Puppet. Note: we can do many other things (network management, port forwarding between the VM and the physical machine, directory sharing between host and guest, multiple-VM management (and their network configuration), …). In the case study, the Vagrantfile was dynamic (updated by Chef), so that the number of allocated CPUs was the number available on the physical machine, and so on. The Chef recipe downloaded Java, Maven and MySQL.

On the host systems, Chef Solo was also installed. Vagrant and Chef only installed the software necessary for a particular session: Eclipse (as a .tar.gz), native packages (SVN) and custom symbolic links.

Feedbacks:

  • Chef
    • DNS resolution is very important
    • The official Java Chef recipe doesn’t work anymore since Oracle now requires JavaScript and cookies to download from the site. Better get the JDK once and make it available on an internal web server
    • Packaging the Android SDK is not easy, it’s only an installer
  • Vagrant
    • Test on your notebook before deploying
    • vagrant provision/reload to update a Vagrant-managed box
    • Vagrant and VeeWee should now be more compatible
  • VeeWee
    • Keep VirtualBox versions and VirtualBox master versions in sync

Finally, other uses came up: developer machines are provisioned by the same system and user-acceptance environments are managed by Vagrant. A good idea would be to use continuous integration of VMs, so that bad surprises don’t happen at the last moment.

All in all, another session full of content that will need to be reviewed once or twice to get the most of it.

Behind the scenes of day-to-day development at Google by Petra Cross

The talk is about teams and roles, development workflows and those used at Google.

Teams and roles

There are 10k+ developers across 40+ offices, checking in every minute. At Google, roles are defined with clear-cut responsibilities: engineering directors (ENG DIR), product managers (PM), engineering managers (ENG MGR), software engineers/tech leads (SWE/TL), software engineers (SWE), software engineers in test (SET) and test engineers (TE). The team hierarchy has no more than three levels. Teams are created based on needs, for a project, not forever.

Google’s way of achieving its goal (organizing world’s information…) is by creating software. A feature developement goes like this:

Idea > Features > Planned > Worked On > In Code review > Tested > Canary > Live!

Note that “worked on” includes unit tests, so that code review can get you a “write more tests” result.

Everything that’s it in the continuous integration goes into a release. Google has a eat your own dogfood policy so that Google employees can give feedback. Then, if everything goes right, the release is pushed to the Canary: it spreads to only a fraction of the users, depending on the particular product.

How to reduce development time? Either add resources or reduce waste; Google’s way is to reduce waste.

Development workflow

Three methods are relatively well-known:

  • Waterfall is old school and comes from old manufacturing industries. No need to go further…
  • Spiral is the next step; it’s a sequence of waterfalls, each producing a prototype of the product. The iteration cycle is about 6 months.
  • Agility is the newest approach. It’s all about customer collaboration, having feedback and adapting for changes. There are different flavors: XP, Scrum and Kanban (among others)

Google workflow

Google does a bit of everything, but mostly Agile. The first lesson from “Ten things Google found to be true” is that when you focus on users, everything else falls into place. Customers think in user stories while devs think in tasks.
Example user story: “As an ATM user, I want to be able to view my balance”. Possible tasks include:

  1. Define an API which the ATM will be able to talk to the backend
  2. Implement the backend
  3. Add the GUI

In Kanban, a task can be in four different states: ICEBOX, BACKLOG, CURRENT (or WORKING ON) and DONE AND VERIFIED. Tasks in the icebox may never get done, so constant cleaning of the icebox is necessary. The backlog should hold tasks for one to one-and-a-half iterations, to guarantee no one runs out of work. Tasks in the backlog are things everyone agreed to work on; they are picked by developers. Everyone having visibility on the backlog renders daily meetings useless, because when picking a task and passing it to the CURRENT state, you put your name on it. On whiteboards, there’s a single swimlane for each state (save ICEBOX) and tasks are written on post-its. When passing a task to the CURRENT swimlane, you put your name (and perhaps picture) next to it. You can also pair program; it’s irrelevant to the workflow.

As said before, daily standups are optional but weekly team meetings are mandatory, as well as monthly retrospective meetings (see later). In the weekly meeting, you estimate tasks from top to bottom, by planning poker. When a consensus is not reached, the lowest and highest estimators have to explain their reasons. An important point is that there’s no multi-directional discussion, in order to avoid involving emotions. Then you re-estimate until a majority agrees. Estimation is done in points, not in time, because estimating in absolute terms is hard: it’s easier to tell that a task will be done quicker (or slower) than another one. Likewise, velocity is computed in points: when a PM asks for a feature, it’s broken down into tasks and the engineering manager knows whether it can be achieved given his/her team’s velocity.

Note: when stuck, developers are expected to learn from materials and ask questions so as to avoid island of knowledge situations. Besides, managers and tech lead don’t care how you achieve your task, only that the task is done.

There’s also an emphasis on retrospective once a month. Everyone expresses: what went well, what went not so well and what could be done better.

Ok, there’s no magic involved… A bit disappointed, because I was probably expecting to have life explained to me. Seems Google manages its projects like everyone else, the success must come from other factors: culture (though I don’t see how I could apply the “get stuck and learn from it” approach), editor approach, or something else I cannot point my finger on.


Devoxx Fr 2012 Day 1

April 18th, 2012 No comments

Though your humble writer is a firm believer in abstraction from the underlying mechanics, I chose the following conference because there are use-cases where one is confronted with performance issues that come from a mismatch between the developer’s intent and its implementation. Matching them requires a better understanding of what really happens.

From Runnable and synchronize to parallel() and atomically() by Jose Paumard

The talk is all about concurrent programming and its associated side-effects, performance and bugs. Title explanation comes from the past and the future: Runnable is from JDK 1.1 (1995) and parallel from JDK 8 (2014?). In particular, we’ll see how hardware has an impact on software and on how software developers work.

The central question is how to leverage processors’ power. As a corollary, what are the problems of parallel applications (think about concurrency, synchronization and immutability)? Finally, what are the tools available to Java developers, today and in the near future, to help us develop parallel applications?

Multithreading

In 1995, processors had a single core. Multithreading is about sharing the processor’s time between execution threads. A processor executes another thread when the current thread has finished, or when it’s waiting for a slow or locked resource. At this time, parallelizing means all processors executing the same instruction at the same time, but on different data. This is very fast because threads don’t need to interact with each other (no need to pass data around).

From an API point of view, Java provides Runnable and Thread. Keywords synchronized and volatile are available. Until 2004, there’s no need to go beyond that.

As early as 2005, multicore processors became generally available. Also, from this point on, power doesn’t come from an increase in core frequency, but from doubling the number of cores (Moore’s Law continues by other means). To address that, Java 5 brings java.util.concurrent to the table. As a side note, there’s a backport, EDU.oswego, for Java 4. Many new notions are available in this package.

Today, every simple computer has from 2 to 8 cores. Professional hardware, such as the SPARC T3, has 128 hardware threads. Tomorrow, the scale will be in tens of thousands of cores, or even more.

Problems of multithreading

  • The increase in raw data to process makes it necessary; deal with it
  • Race conditions. Here comes the demonstration of a race condition in the Singleton pattern and the double-check locking pattern (which doesn’t work).
    The Java language specification defines a principle: a read from a variable should return the last written value of this variable. In order to have determinism, there should be a “happens-before” link between accesses of the same variable. When there’s no such link, there’s no way to know the control flow for sure: oops, here comes a nice definition of a bug. The solution is to put the volatile keyword on the variable to create the “happens-before” link (see the sketch after this list). Unfortunately, it drastically reduces performance, down to the level of a single core.
    As for the only way to have a Singleton implementation is:

    public enum Singleton {
    	instance;
    }
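
And for reference, a sketch of the double-checked locking idiom discussed above: without volatile it is broken (a thread may observe a non-null but partially constructed instance); with volatile and the Java 5 memory model it becomes correct, at the performance cost mentioned:

    public class LazySingleton {
        // volatile creates the "happens before" link on reads and writes
        private static volatile LazySingleton instance;

        private LazySingleton() {}

        public static LazySingleton getInstance() {
            if (instance == null) {                     // first check, without lock
                synchronized (LazySingleton.class) {
                    if (instance == null) {             // second check, with lock
                        instance = new LazySingleton();
                    }
                }
            }
            return instance;
        }
    }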

Now, when you get down to the hardware level, it’s even more complex. Each core has two levels of cache (L1 and L2) and there’s a single cache shared between all cores (L3). Here are the approximate times needed to move data between a core and the different memory levels:

  • L1: 1 ns
  • L2: 3 ns
  • L3: 15 ns
  • RAM: 80 ns

In conclusion: without the need to synchronize, a program can execute at its optimal speed. With it, you have to take into account the way the processor manages memory (including caches), which is wired into the processor’s architecture! That’s exactly what Java tries to shield us from (write once, run anywhere). Perhaps Java is not ready to go down this path.

java.util.concurrent

Java 5’s java.util.concurrent brings a new way to launch threads. Before, we instantiated a Runnable, created a new Thread wrapper around it and started the latter. The run() method of Thread has no return value and cannot throw checked exceptions.

With Java 5, we instantiate a Callable instead: its call() method both returns a value and can throw an exception. We pass it to an executor service, which manages its own thread pool (and probably reuses an already existing thread): we thus delegate thread management to a service that can abstract from (and adapt to) the underlying hardware.
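
A minimal sketch of the Java 5 idiom (names are mine, for illustration):

    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class CallableDemo {
        public static void main(String[] args) throws Exception {
            ExecutorService executor = Executors.newFixedThreadPool(4);
            Callable<Integer> task = new Callable<Integer>() {
                public Integer call() throws Exception {
                    return 21 + 21; // unlike run(), call() returns a value and may throw
                }
            };
            Future<Integer> future = executor.submit(task);
            System.out.println(future.get()); // blocks until the result is available
            executor.shutdown();
        }
    }

Beyond executors, other notions are also available: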

  • Lock: locks a block for a single thread. It provides a way to acquire the lock in one method and release it in another. Besides, there are two methods to acquire a lock, lock() and tryLock(). Finally, there’s ReadWriteLock, which allows as many concurrent readers as desired but a single writer that blocks all readers: reads are not synchronized, the write is (see the sketch after this list).
  • Semaphore: locks a block for n threads
  • Barrier: synchronizes several threads at a common point
  • Latch: opens/closes a code block for all threads
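
As announced above, a sketch of a ReadWriteLock-protected counter (the class is mine, for illustration):

    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ReadMostlyCounter {
        private final ReadWriteLock lock = new ReentrantReadWriteLock();
        private int value;

        public int get() {
            lock.readLock().lock();   // any number of readers may hold this at once
            try {
                return value;
            } finally {
                lock.readLock().unlock();
            }
        }

        public void increment() {
            lock.writeLock().lock();  // exclusive: blocks both readers and writers
            try {
                value++;
            } finally {
                lock.writeLock().unlock();
            }
        }
    }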

Also, new collections are available: BlockingQueue (and BlockingDeque), bounded collections with timeout-based methods. They are thread-safe and are used for producer/consumer implementations, as sketched below.
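
A minimal producer/consumer sketch with a bounded BlockingQueue and a timeout-based read:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    public class ProducerConsumer {
        public static void main(String[] args) throws InterruptedException {
            final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10);

            new Thread(new Runnable() {
                public void run() {
                    try {
                        queue.put("hello"); // blocks if the queue is full
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }).start();

            // poll() with a timeout instead of blocking forever
            System.out.println(queue.poll(1, TimeUnit.SECONDS));
        }
    }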

Finally, CopyOnWriteArrayList (and CopyOnWriteArraySet) provide concurrency-friendly collections that let every thread read without locking, and lock only when writing. They are only efficient when writes are rare, though: the backing array is copied and the memory pointer swapped on every write. Hash and set structures can be implemented to operate the same way (copy, then swap the pointer).
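
A short illustration of the copy-on-write behavior: iterators work on a snapshot, so reading never locks and never fails, even while the list is being modified:

    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class CopyOnWriteDemo {
        public static void main(String[] args) {
            List<String> list = new CopyOnWriteArrayList<String>();
            list.add("a");
            list.add("b");
            for (String s : list) {    // iterates over the snapshot [a, b]
                list.add(s + "!");     // each write copies the backing array
            }
            System.out.println(list);  // prints [a, b, a!, b!]
        }
    }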

Alternatives to synchronize

Memory transactions

Akka provides a Software Transactional Memory API. With this API, we don’t manipulate objects directly, but references to objects; besides, Atomic objects manage the STM for us. Memory is managed with optimistic locking: proceed as if nothing has changed, and commit only if that turns out to be true. The best use-case is low concurrency of access, because failed transactions are replayed: under highly concurrent access, there will be a lot of replays.
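
Akka’s own STM API is not reproduced here; as a rough illustration of the optimistic principle (proceed, then commit only if nothing changed, otherwise replay), here is the same idea with a plain JDK compare-and-set loop:

    import java.util.concurrent.atomic.AtomicInteger;

    public class OptimisticCounter {
        private final AtomicInteger value = new AtomicInteger();

        public int increment() {
            for (;;) {
                int current = value.get();              // read without locking
                int next = current + 1;                 // compute as if nothing changed
                if (value.compareAndSet(current, next)) {
                    return next;                        // commit succeeded
                }
                // another thread changed the value in between: replay,
                // exactly like a failed STM transaction
            }
        }
    }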

Some implementations use locks, others do not; those locks don’t have the same semantics as the ones in the JLS. Also, some implementations are very memory hungry, with consumption that scales with the number of concurrent transactions. Intel recently announced that the Haswell architecture will support Transactional Synchronization Extensions: this will probably have an impact on the Java platform, sooner or later.

Actors

Actors are components that receive data in the form of read-only messages. They compute a result and may send it as a message to other actors (possibly their caller). This strategy is entirely based on immutability, so no synchronization is needed.
Akka also provides an Actor API: writing an Akka actor is very similar to writing a Java 5 Callable. Besides, CPU usage is about the same, as is execution time (to be precise, with no significant difference). So why use actors instead of executor services? To bring in the power of STM, through transactional actors. The provided example is money transfers between bank accounts.
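
A minimal actor sketch, assuming the Akka 2.x Java API (the exact factory methods vary across Akka versions, and the class names are mine):

    import akka.actor.ActorRef;
    import akka.actor.ActorSystem;
    import akka.actor.Props;
    import akka.actor.UntypedActor;

    // An actor receives immutable messages and may reply to its sender
    public class Squarer extends UntypedActor {
        public void onReceive(Object message) {
            if (message instanceof Integer) {
                int n = (Integer) message;
                getSender().tell(n * n, getSelf());
            } else {
                unhandled(message);
            }
        }

        public static void main(String[] args) {
            ActorSystem system = ActorSystem.create("demo");
            ActorRef squarer = system.actorOf(Props.create(Squarer.class), "squarer");
            squarer.tell(7, ActorRef.noSender()); // fire-and-forget, message is immutable
        }
    }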

Parallel computing

In Java 7, there’s the Fork/Join framework. Alternatively, there are parallel arrays, and Java 8 will provide the parallel() method. The goal is to automate parallel computing over arrays and collections.
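
Java 8 was not final at the time of the talk; in the version that eventually shipped, parallel() ended up on the Stream API:

    import java.util.Arrays;
    import java.util.List;

    public class ParallelDemo {
        public static void main(String[] args) {
            List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5, 6, 7, 8);
            // parallel() runs the same pipeline on the common Fork/Join pool
            int sum = numbers.stream()
                             .parallel()
                             .mapToInt(Integer::intValue)
                             .sum();
            System.out.println(sum); // 36
        }
    }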

Fork Join

It’s based on tasks. If a task decides it’s too big, it splits off smaller subtasks: this is known as forking. A system-managed thread pool is available, each thread owning a task queue. When its queue is empty, a thread is capable of stealing tasks from other threads’ queues. When a task is finished, a blocking operation known as join aggregates the results of the subtasks (previously forked).
As a consequence, a task implementation must not only describe its business code, but also know how to decide whether it’s too big and how to fork itself (see the sketch below). Moreover, Fork/Join can be used in a recursive way or an iterative way; the latter approach shows a 30% performance gain compared to the former. The conclusion is to only use recursion when you do not know the boundaries of your problem in advance.
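
A minimal sketch of such a task, summing an array with Java 7’s RecursiveTask (the threshold and names are mine, for illustration):

    import java.util.concurrent.ForkJoinPool;
    import java.util.concurrent.RecursiveTask;

    public class SumTask extends RecursiveTask<Long> {
        private static final int THRESHOLD = 1000;
        private final long[] array;
        private final int from, to;

        public SumTask(long[] array, int from, int to) {
            this.array = array;
            this.from = from;
            this.to = to;
        }

        protected Long compute() {
            if (to - from <= THRESHOLD) {         // small enough: compute directly
                long sum = 0;
                for (int i = from; i < to; i++) {
                    sum += array[i];
                }
                return sum;
            }
            int middle = (from + to) / 2;         // too big: fork into two subtasks
            SumTask left = new SumTask(array, from, middle);
            SumTask right = new SumTask(array, middle, to);
            left.fork();                          // run the left half asynchronously
            return right.compute() + left.join(); // join aggregates the results
        }

        public static void main(String[] args) {
            long[] numbers = new long[1000000];
            java.util.Arrays.fill(numbers, 1L);
            System.out.println(new ForkJoinPool().invoke(new SumTask(numbers, 0, numbers.length)));
        }
    }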

Parallel arrays

Parallel arrays are described in JSR 166y. [And there end my notes on the talk, along with my battery life #fail]

Conclusion

The speaker not only knows his subject, he knows how to keep an audience captivated. Aside from the points described in the talk, I think the most important point of the whole thing is his answer to the last question:

“Given the levels of abstraction Java puts between developers and the hardware, isn’t it logical to use alternative languages better suited to the new multicore architectures?
- Probably, yes.”

[UPDATE]

Reduce pressure on memory allocation by Olivier Lamy and Benoit Perroud

The goal of Apache DirectMemory is to decrease the latency caused by the garbage collector. It’s currently in the Apache Incubator.

Cache off Heap

As a reminder, Java’s memory is segmented into different spaces (young, tenured and perm). Some parts are allocated to the JVM; the rest is native memory, used for networking and the like. The problem with the heap is that it’s managed by the JVM: in order to free unused objects, a garbage collector has to clean up. That process freezes program execution and makes the program non-deterministic (you don’t know when the GC will start and stop).

Since RAM is very cheap nowadays, many applications use a local cache to increase performance. Two kinds of cache are possible:

  • On-heap: think of a big hash map. It increases GC processing time.
  • Off-heap: objects are stored in native memory through serialization (with the associated penalty), but heap memory usage, and hence garbage collection time, decreases.

Apache Direct Memory

ADM is based on the ByteBuffer class: buffers are allocated en masse, then split up on demand. The architecture is layered, with a Cache API on top.

Strategies to allocate memory have to address exactly the same problems as disk-space fragmentation: either merge adjacent byte buffers, or use fixed-size byte buffers (some memory ‘waste’ but no fragmentation).
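
This is not ADM’s actual code, just a rough sketch of the underlying JDK mechanism: allocate one big direct buffer, then carve fixed-size slices out of it on demand:

    import java.nio.ByteBuffer;

    public class OffHeapSlab {
        public static void main(String[] args) {
            // One big slab of native (off-heap) memory, invisible to the GC
            ByteBuffer slab = ByteBuffer.allocateDirect(1024 * 1024);

            // Carve a fixed-size slice out of the slab
            slab.position(0);
            slab.limit(4096);
            ByteBuffer slice = slab.slice();      // independent view over bytes [0, 4096)

            slice.putLong(0, 42L);                // cached data must be serialized to bytes
            System.out.println(slice.getLong(0)); // 42
        }
    }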

In real life, there should be a small portion of cached objects on-heap, and the rest off-heap. This is very similar to what Terracotta’s BigMemory does. This means that one can use EhCache’s API and delegate to ADM’s implementation.

Next steps include JSR 107 implementation, benchmarks, integration into other components (Cassandra, Lucene, Tomcat), hot configuration, monitoring and management features, etc. For more information, see the ADM site.

Note that ADM only uses Java’s public API under the covers (ByteBuffer), with no com.sun hacks.

A very interesting talk, and a nice alternative to Terracotta’s product. I don’t think ADM is production-ready yet, but it’s definitely something to keep an eye on.

Though I’m no fan of applications choked by JavaScript, fact is, they are here. Better to have some tools to manage them, instead of suffering from them. Perhaps a well-managed JavaScript application can be the equal of a well-managed Java application (***evil laugh***).

Take care of your JavaScript code by Romain Linsolas

In essence, this talk is about how to write tests for JavaScript code, how to analyze that code, and how to manage it with Jenkins CI. In reality, only a small fraction of teams already do this for JavaScript.

Testing JavaScript

Jasmine is a behavior-driven testing library for JavaScript, while Underscore.js is a utility library. Both will be used to create our test harness. Akquinet Maven archetypes can help us bootstrap our project.

describe introduces a test “class”, while it introduces a test function. expect lets us write assertions, and beforeEach plays the role of the @Before annotations in Java (JUnit or TestNG). Other Jasmine functions mimic their Java equivalents.

Analyzing JavaScript

Let us keep proven tools here. If you use the aforementioned Maven archetype, it’s already compatible with Sonar! In order to measure test coverage, a nice library, js-test-driver, is available: it’s a JUnit look-alike. No need to throw away our previous Jasmine tests; we just need to launch them with js-test-driver, which implies some prerequisites and thus some POM updates:

  • a Jasmine adapter
  • the JAR to compute coverage
  • and an available port to launch a web server

The metrics are readily available in Sonar! What is done through the Maven CLI can easily be reproduced in Jenkins.

A nice talk, though developing JavaScript is antagonistic to my approach to applications. At least, I will be ready to tackle those JavaScript-based applications when I encounter them.
