Arquillian on legacy servers

May 27th, 2012

In most contexts, when something doesn't work, you just Google the error and you're basically done. One good thing about working for organizations that lag behind technology-wise is that it's generally more challenging and you're bound to be creative. Me, I'm stuck on JBoss 5.1 EAP, but that doesn't stop me from trying to use modern approaches in software engineering. In the quality domain, one such attempt is to provide my developers with a way to test their code in-container. Since we are newcomers in the EJB3 realm, that means they will need true integration testing.

Given the stacks available at the time of this writing, I've decided on Arquillian, which seems the best (and only?) tool to achieve this. With this framework (as well as my faithful TestNG), I was set to test Java EE components on JBoss 5.1 EAP. This article describes how to do just that (and mentions the pitfalls I had to overcome).

Arquillian basics

Arquillian basically provides a way for developers to manage the lifecycle of an application server and deploy a JavaEE artifact in it, in an automated way integrated with your favorite testing engine (read TestNG – or JUnit if you must). Arquillian's architecture is based on a generic engine plus adapters for specific application servers. If no adapter is available for your application server, that's tough luck. If you're feeling playful, try searching for a WebSphere adapter… then developing one. The first difficulty was that at the time of my work, there was no JBoss EAP 5.1 adapter; I read on the web that it sits between a 5.0 GA and a 6.0 GA, so a lengthy trial-and-error process took place (now there's at least some documentation, thanks to the Arquillian guys hearing my plea on Twitter).

Server type and version are not enough to get the right adapter; you'll also need to choose how you will interact with the application server:

  • Embedded: you download all dependencies, provide a configuration and presto, Arquillian magically creates a running embedded application server. Pro: you don't have to install the app server on each of your devs' computers; cons: configuring it may well be a nightmare, and there may be huge differences with the final platform, defeating the purpose of integration testing.
  • Remote: use an existing, running application server. Choose wisely between a dedicated server for all devs (who will share the same configuration, but also the same resources) and one app server per dev (who said maintenance nightmare?).
  • Managed: same as remote, except tests will start and stop the server. Better suited to one app server per dev.
  • Local: I've found traces of this one, even though it seems to be buried deep. It appears to be the same as managed, but it uses the filesystem instead of the wire to do its job (starting/stopping by calling scripts, deploying by copying archives to the expected location, …), hence the term local.

Each adapter can be configured according to your needs through the arquillian.xml file. For Maven users, it goes at the root of src/test/resources. Don't worry, each adapter's configuration parameters are aptly documented. Mine looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <container qualifier="jboss" default="true">
        <configuration>
            <property name="javaHome">${jdk.home}</property>
            <property name="jbossHome">${jboss.home}</property>
            <property name="httpPort">8080</property>
            <property name="rmiPort">1099</property>
            <property name="javaVmArguments">-Xmx512m -Xmx768m -XX:MaxPermSize=256m
                -Djava.net.preferIPv4Stack=true
                -Djava.util.logging.manager=org.jboss.logmanager.LogManager
                -Djava.endorsed.dirs=${jboss.home}/lib/endorsed
                -Djboss.boot.server.log.dir=${jboss.home}
                -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
            </property>
        </configuration>
    </container>
</arquillian>

Finally, the test class itself has to extend org.jboss.arquillian.testng.Arquillian (or if you stick with JUnit, use @RunWith). The following snippet shows an example of a TestNG test class using Arquillian:

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.testng.Arquillian;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.testng.annotations.Test;

public class WebIT extends Arquillian {

    @Test
    public void testMethod() { ... }

    @Deployment
    public static WebArchive createDeployment() { ... }
}

Creating the archive

Arquillian's tactic regarding in-container testing is to deploy only what you need. As such, it comes with a tool named ShrinkWrap that lets you package only the relevant parts of what is to be tested.

For example, the following snippet creates an archive named web-test.war, bundling the ListPersonServlet class and the web deployment descriptor. More advanced uses would include libraries.

WebArchive archive = ShrinkWrap.create(WebArchive.class, "web-test.war")
    .addClass(ListPersonServlet.class)
    .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"));

The Arquillian framework will look for a static method, annotated with @Deployment that returns such an archive, in order to deploy it to the application server (through the adapter).

Tip: if you have errors, use the toString() method on the archive to see what it contains.
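
For instance (ShrinkWrap's toString(true) prints the archive's content as a tree):

// Dump the archive's content to the console before deploying
System.out.println(archive.toString(true));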

Icing on the Maven cake

Even if you’re no Maven fan, I think this point deserves some attention. IMHO, integration tests shouldn’t be part of the build process since they are by essence more fragile: a simple configuration error on the application server and your build fails without real cause.

Thus, I would recommend putting your tests in a separate module that is not called by its parent.

Besides, I also use the Maven Failsafe plugin instead of the Surefire one, so that I get a clean separation of unit tests and integration tests (even with separate modules) and thus different metrics for both (in Sonar for example). In order to get that separation out-of-the-box, just ensure your test classes end with IT (as in Integration Test). Note that integration tests are bound to the integration-test lifecycle phase, just before install.
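
A minimal sketch of the corresponding POM snippet; the plugin version is only illustrative:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.12</version>
    <executions>
        <execution>
            <goals>
                <!-- run *IT classes during integration-test, fail the build during verify -->
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>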

Here comes JBoss EAP

Now comes the hard part, actual deployment on JBoss 5.1 EAP. This stage will probably last a long, long time and involve a lot of going back and forth. In order to be as productive as possible, the first thing to do is to locate the logs. If you kept the above configuration, they are in the JBoss <profile>/log directory. If you feel the logs are too verbose for your taste (I did), just change the root priority from ${jboss.server.log.threshold} to INFO in the <profile>/conf/jboss-log4j.xml file.

On my machine, I kept getting JVM port binding errors. Thus, I changed the guilty connector port in the server.xml file (FYI, it was the AJP connector, and JBoss stopped complaining when I changed 8009 to 8010).

One last step for EAP is to disable security. Since it's enterprise-oriented, security is enabled by default in EAP: in testing contexts, I couldn't care less about authenticating to deploy my artifacts. Open the <profile>/deploy/profileservice-jboss-beans.xml configuration file and search for "comment this list to disable auth checks for the profileservice". Do as you're told :-) Alternatively, you could walk the hard path and configure authentication: detailed instructions are available on the adapter page.

Getting things done, on your own

Until this point, we more or less followed instructions here and there. Now, we have to sully our nails and use some of our gray matter.

  • The first thing to address is a strange java.lang.IllegalStateException when launching a test. Strangely enough, this is caused by Arquillian missing some libraries that have to be ShrinkWrapped along with your real code. In my case, I had to add the following snippet to my web archive:
    MavenDependencyResolver resolver = DependencyResolvers.use(MavenDependencyResolver.class);
    archive.addAsLibraries(
        resolver.artifact("org.jboss.arquillian.testng:arquillian-testng-container:1.0.0.Final")
                .artifact("org.jboss.arquillian.testng:arquillian-testng-core:1.0.0.Final")
                .artifact("org.testng:testng:6.5.2").resolveAsFiles());
  • The next error is much more vicious and comes from Arquillian's inner workings.
    java.util.NoSuchElementException
        at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:375)
        at java.util.LinkedHashMap$KeyIterator.next(LinkedHashMap.java:384)
        at org.jboss.arquillian.container.test.spi.util.TestRunners.getTestRunner(TestRunners.java:60)

    When you look at the code, you see Arquillian uses the Service Provider feature (for more info, see here). But much to my chagrin, it doesn't configure the implementation the org.jboss.arquillian.container.test.spi.TestRunner service should use and thus fails miserably. We have to create such a configuration manually: the file should only contain org.jboss.arquillian.testng.container.TestNGTestRunner (for such is the power of the Service Provider). Don't forget to package it along with the archive to have any chance at success:

    archive.addAsResource(new File("src/test/resources/META-INF/services"), "META-INF/services");
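
    For reference, the provider configuration file is named after the service interface: src/test/resources/META-INF/services/org.jboss.arquillian.container.test.spi.TestRunner, containing the single line:

    org.jboss.arquillian.testng.container.TestNGTestRunner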

Update [28th May]: the two points above can be abandoned if you use the correct Maven dependency (the Arquillian BOM). Check the POMs in the attached project.

At the end, everything should work fine except for a final log message in the test:

Shutting down server: default
Writing server thread dump to /path/to/jboss-as/server/default/log/threadDump.log

This means Arquillian cannot shut down the server because it can't authenticate. This would have no consequence whatsoever, but it marks the test as failed and thus needs to be corrected. Edit the <profile>/conf/props/jmx-console-users.properties file and uncomment the admin = admin line.
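
After the edit, the file's relevant content boils down to the single active line:

admin=admin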

Conclusion

The full previous steps took me about half a week's work spread over a week (it seems I'm more productive when not working full-time, as my brain launches some background threads to solve problems). This was not easy, but I'm proud to have beaten the damn thing. Moreover, a couple of proprietary configuration settings were omitted from this article. In conclusion, Arquillian seems to be a nice in-container testing framework, but it seems to need some polish around the corners: I think using TestNG may be the culprit here.

You can find the sources for this article here, in Eclipse/Maven format.

EJB3 façade over Spring services

May 20th, 2012

As a consultant, you seldom get to voice your opinions regarding the technologies used by your customers, and it's even more extraordinary when you're heard. My current context belongs to the usual case: I'm stuck with Java 6 running on JBoss 5.1 EAP with no chance of moving forward in the near future (and I consider myself happy, since a year and a half ago it was Java 5 with JOnAS 4). Sometimes others wonder if I'm working in a museum, but I see myself more as an archaeologist than a curator.

Context

I recently got my hands on an application that had to be migrated from a proprietary framework to more perennial technologies. The application consists of a web front-office and a Swing back-office. The key difficulty was to make the Swing part communicate with the server part, since both live in two different network zones, separated by a firewall (with some open ports for RMI and HTTP). Moreover, our Security team requires that such communications be secured.

The hard choice

The following factors played a part in my architecture choice:

  • My skills: I've plenty more experience in Spring than in EJB3
  • My team's skills, more oriented toward Swing
  • Reusing as much as possible the existing code or at least interfaces
  • Existing requirements toward web services:
      • Web service security is implemented through the reverse proxy, and its reliability is not the best I've ever seen (to put it mildly)
      • Web service applications have to be located on dedicated infrastructure
  • Mature EJB culture
  • Available JAAS LoginModule for secure EJB calls and web-services from Swing

Now, it basically boils down to exposing either Spring services over HTTP or EJB3. In the first case, cons include no experience of Spring remoting, performance (or even reliability) issues, and deployment on different servers, thus more complex packaging for the dev team. In the second case, they include a slightly higher complexity (yes, EJB3 is easier than EJB 2.1, but still), a higher ramp-up time, and me not being able to properly support my team when a difficulty is met.

In the end, I decided to use Spring services to the fullest, but to put them behind an EJB3 façade. That may seem strange, but I think I get the best of both worlds: EJB3 skills are kept to a bare minimum (transactions will be managed by Spring), while the technology gets me directly through the reverse proxy. I'm open to suggestions and arguments toward other solutions given the above factors, but the quicker the better :-)

Design

To ease the design to the maximum, each Spring service will have exactly one EJB3 façade, which will delegate calls to the underlying service. Most IDEs will be able to take care of the boilerplate delegating code (hell, you could even use Project Lombok with @Delegate; I'm considering it, and a hedged sketch follows).
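
Here's what the Lombok option might look like; a minimal sketch only, assuming Lombok's @Delegate annotation (whose package has moved across Lombok versions) on the injected Spring bean:

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

import lombok.Delegate;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor;

import ch.frankel.blog.ejb.spring.service.client.RandomGeneratorService;

@Stateless
@Interceptors(SpringBeanAutowiringInterceptor.class)
public class RandomGeneratorBean implements RandomGeneratorService {

    // Lombok generates the delegating methods at compile time
    @Autowired
    @Delegate
    private ch.frankel.blog.ejb.spring.service.RandomGeneratorService delegate;
}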

On the class level, the following class design will be used:

This is only standard EJB design, with the added Spring implementation.

On the module level, this means we will need a somewhat convoluted packaging for the service layer:

  • A Business Interfaces module
  • A Spring Implementations module
  • An EJB3 module, including remote interfaces and session beans (thanks to the Maven EJB plugin, it will produce two different artifacts; see the sketch below)
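
A hedged sketch of the plugin configuration in the EJB module's POM; generateClient makes the plugin emit a second, client-side artifact holding the interfaces:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-ejb-plugin</artifactId>
    <configuration>
        <ejbVersion>3.0</ejbVersion>
        <!-- also produce the ejb-client artifact -->
        <generateClient>true</generateClient>
    </configuration>
</plugin>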

How-to

Finally, developing the EJB3 façade and injecting it with Spring beans is ridiculously simple. The magic lies in the JavaEE 5 Interceptors annotation on top of the session bean class, referencing the SpringBeanAutowiringInterceptor class, which kicks in Spring injection after instantiation (as well as after activation) for every referenced dependency.

The only dependency in our case is the delegate Spring bean, which has to be annotated with the good old @Autowired.

import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor;

import ch.frankel.blog.ejb.spring.service.client.RandomGeneratorService;

@Stateless
@Interceptors(SpringBeanAutowiringInterceptor.class)
public class RandomGeneratorBean implements RandomGeneratorService {

    @Autowired
    private ch.frankel.blog.ejb.spring.service.RandomGeneratorService delegate;

    @Override
    public int generateNumber(int lowerLimit, int upperLimit) {
        return delegate.generateNumber(lowerLimit, upperLimit);
    }
}

In order to work, we have to use a specific Spring configuration file, which references the Spring application context defined in our services module and activates annotation configuration.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
    http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd">

  <context:annotation-config />

  <bean class="org.springframework.context.support.ClassPathXmlApplicationContext">
    <constructor-arg>
      <list>
        <value>classpath:ejb-service.xml</value>
      </list>
    </constructor-arg>
  </bean>

</beans>

Warning: it’s mandatory to name this file beanRefContext.xml and to make it available at the root of the EJB JAR (and of course to set the Spring service module as a dependency).

Conclusion

Sometimes, you have to make interesting architectural choices that are pertinent only in your context. In these cases, it's good to know somebody paved the road for you: such is the case with an EJB3 façade over Spring services.


Quick evaluation of Twitter Bootstrap

May 13th, 2012

I must admit I suck at graphical design. It's one of the reasons that led me to Flex and Vaadin in the first place: out-of-the-box, you get an application that is pleasing to the eye.

Using one of these technologies is not possible (nor relevant) in all contexts, and since I've got a strong interest in UI, I regularly have a look at alternatives for clean-looking applications. The technology I studied this week is Twitter Bootstrap.

Bootstrap is a lightweight client-side framework that comes (among other things) with pre-styled "components" and a standard layout. Both are completely configurable and you can download the customized result here. From a technical point of view, Bootstrap offers a CSS file and an optional JavaScript file that relies on jQuery (both are also available in minified flavors). I'm no big fan of fancy client-side JavaScript, so what follows focuses on what you can do with plain CSS.

Layout

Bootstrap offers a 940px-wide canvas, divided into 12 columns. What you do with those columns is up to you: most of the time, you'll probably want to group some of them. It's easily achieved by using simple CSS classes; in fact, in Bootstrap, everything is done with selectors. For example, the following snippet offers a main column that has twice the width of its left and right sidebars:

<div class="row">
    <div class="span3">Left sidebar</div>
    <div class="span6">Main</div>
    <div class="span3">Right sidebar</div>
</div>

One provided feature worth mentioning is responsive design: in essence, the layout adapts itself to the screen size, an important advantage considering the fragmentation of today's user agents.

Components

Whatever application you’re developing, chances are high that you’ll need some components, like buttons, menus, fields (in forms), tabs, etc.

Through CSS classes, Bootstrap offers a bunch of components. For existing HTML components, it applies a visually appealing style to them. For non-existing components (such as menus and tabs), it tweaks the rendering of standard HTML, as the sketch below shows.
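
A hedged illustration using Bootstrap 2 class names (btn for buttons, nav-tabs for tabs):

<!-- a pre-styled button -->
<button class="btn btn-primary">Save</button>

<!-- tabs built from a plain list -->
<ul class="nav nav-tabs">
    <li class="active"><a href="#home">Home</a></li>
    <li><a href="#profile">Profile</a></li>
</ul>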

Conclusion

I tried to remake some pages of More Vaadin with Bootstrap and, all in all, the result is not unpleasing to the eye. In cases where I can't use Vaadin, I think I would be a strong proponent of Bootstrap when creating an application from scratch.

You can find the sources of my attempt here.


Specification by Example review

This review is about Specification by Example by Gojko Adzic from Manning.

Facts

  • 18 chapters, 254 pages, $29.21
  • This book covers Specification by Example (you could have guessed it from the title). In effect, SBE is a way to build the right software (for the customers), as opposed to building the software right (which is our trade as engineers).

Specification by Example is a set of process patterns that facilitate change in software products to ensure that the right product is delivered efficiently.

Pros

  • Not all methods are adequate for every context: each hint described is made contextual.
  • Do's are detailed all right, with their respective benefits, but don'ts are also characterized, along with the reasons why you shouldn't go down their path.
  • Full of real-life examples, with people and teams talking about what they did and why.
  • 6 case studies

Cons

  • Missing a tooling section to explain how to put these ideas in action.

Conclusion

As software engineers, we are on the lower part of the software creation chain, and it shows. We focus on building the software right… yet we fail to address problems that find their origin in the upper part of the chain; agile methodologies are meant to prevent this. This book comes from a simple idea: in most projects, specifications, tests and software are all but synchronized. It lists a collection of agile recipes to remedy that.

Whether you’re a business analyst, a project manager or just a technical guy who wants to build better software, this book is for you. In any cases, you’ll want to keep it under arm’s reach.


Why Eclipse WTP doesn’t publish libraries when using m2e

April 29th, 2012

Lately, I noticed my libraries weren't published to Tomcat when I used a Maven project in Eclipse, even though it was standard war packaging. Since I mostly use Vaadin, I didn't care much: I published the single vaadin-x.y.z.jar to the deployed WEB-INF/lib manually and was done with it.

Then I realized it happened on two different instances of Eclipse, and since I used 3 different libraries for the writing of Develop Vaadin apps with Scala, I wanted to correct the problem. At first, I blamed it on the JRebel plugin I recently installed; then I began investigating further and found out that I needed a special Maven connector that was present in neither of my Eclipse instances: the WTP connector. I had installed this kind of connector back when Maven integration was done by Sonatype's m2eclipse, but had forgotten about it. Now the integration is called m2e, and I have to go the whole nine yards again…

The diagnosis can be hard, but the solution is very simple.

Go to the Window -> Preferences menu and from this point on go to Maven -> Discovery.

Click on Open Catalog.

Search for "wtp", select Maven Integration for WTP and click on Finish. Restart and be pleased. Notice that most Maven plugin integrations in Eclipse can be resolved the same way.

For a summary of my thoughts about the current state of m2e, please see here.


Food for my thoughts after a week at conferences

April 22nd, 2012

This week, I attended both Jenkins User Conference Paris and Devoxx France. Though I’ve already reported notes about the sessions I went to (Jenkins, Devoxx day 1, day 2 and day 3), there were some underlying trends I wanted to write down in order to clear my thoughts.

Power to the programmers
Programmers form the basis of the software industry. Like many of their brethren in other industries, this means they are subject to unlimited exploitation (at least, that's what I've observed in France) and are not well regarded.
A trend I've seen at Devoxx is for developers to reclaim their rightful part in the value production chain. Interestingly, it's not an individual movement but a collective one.
Evolve or die
In contrast, with the rapid changes occurring in both hardware and software, programmers have a duty to themselves to constantly learn new things or become irrelevant the next day.
As a consequence, developers should be jacks-of-all-trades and not one-trick ponies. In particular, those who have no prior experience of concurrency should get some post-haste, or they will face unexpected bugs coming from today's multicore architectures.
The decline of JavaEE
Although Java is still seen as "the" language, which is a good thing at a Java conference, there weren't many talks on JavaEE. It's as if the platform is becoming a thing of the past, without much ado (none, in fact).
Polyglot
Some say the JVM will be Java's legacy, in no small part due to the recent explosion of languages. Likewise, there's a growing share of applications with a very short lifespan. Both factors make me predict that in the near future, the emphasis will be not on how to implement features, but on the features themselves.
The cloud is here to stay
In parallel to JavaEE's fall, the cloud is rising… very fast, whether it's IaaS, PaaS or SaaS: Google App Engine, Jelastic, Cloud Foundry, Heroku and CloudBees are only some remarkable examples. All these solutions enable very short time-to-market, a necessity in today's world.

For me, this translates into some paths worth exploring in the following months:

  • Check Java 1.4, 5, 6 and 7 ways of handling concurrency. Search for third-party libraries that complement them (such as Akka)
  • Go deeper into Scala. And start learning Groovy, in earnest
  • Get me a laptop with a killer configuration and play with DevOps tools. I was amazed to learn in a talk how these tools were used to create a training environment
  • Think globally about code quality: JavaScript is code too, and the code base is becoming larger and larger (much to my chagrin, but I have to deal with it nonetheless). Do the same for PHP
  • Stay in touch with all those nice cloud platforms and their offerings

Devoxx France 2012 day 3

April 20th, 2012

Last day of Devoxx France 2012! Waking up was not so easy; I had to drink a coffee to get going (and I never do).

Trends in mobile application development by Greg Truty

I didn’t take notes on this one because my computer rebooted :-/

Mobile is becoming strategic to the business: it's beginning to be the equivalent of CRM, portals and other such assets.
On the development side, mobile development is akin to traditional development. There are a couple of differences, however: development cycles are very short (weeks) because time-to-market is very quick, hardware manufacturers release more often, leading to a very fragmented market, and we are back to client-server architectures.

  • Native applications require a full new set of skills in the enterprise. The fragmentation forces you to target specific platforms and forget the others. Other requirements are creeping in (security, scalability, integration in the enterprise architecture, …).
  • Another approach would be to take the standard web route; familiarity and experience are already available on this subject. Unfortunately, you cannot leverage platform-native capabilities.
  • The best of both worlds comes in the form of the hybrid approach, with tools like Apache Cordova (formerly PhoneGap). In this case, the only disadvantage is slower performance compared to native applications (sometimes, at least).
  • Finally, what if we could write to a common API and then compile for each platform to produce native applications? This is the cross-platform programming model. We get help from Appcelerator and its kind here.

IBM recently acquired Worklight, a framework to address all the previous strategies, which also deals with security, SSO, integration and so on. Worklight provides an IDE, a server, runtime components and a web-based console to monitor mobile applications and infrastructure.

The guy is not bad and there's some interesting info, but a sales pitch at 9 in the morning makes me regret not having stayed in bed later.

Portrait of the developer in “The Artist” by Patrick Chanezon

The story is about George, a JavaEE developer who works for top companies. After 3 years of work, the AZERTY project is put into production: the GUI is bad, the IT director is happy, users hate it. George is happy: he loves complexity and nobody understands how the system works, save him. George is promoted Project Manager, then Project Director. One day, when George goes to an OSS meeting, he hears new terms: Hibernate, agility, and such. Yet, instead of learning those new techs, he learns golf and codes no more. Since he now has a budget, he brings agile coaches into his company: it fails abysmally. Finally, he becomes IT director… Users reject his new AZERTY 3.0 project and use Google Apps behind his back. His boss now demands iPhone and Android applications. He cannot code them and is finally fired.

What happened while George learned to play golf? The last 2 years saw a paradigm shift that happens every 15 years:

  • in the 60s, there were mainframes (server-side)
  • in the 80s, it was client-server architecture
  • in the 90s, the web put everything on the server side again
  • in the 2010s, it's the Cloud, HTML5 and mobile; it feels like we put things back into the client again

Standard analysts segment the Cloud into SaaS, PaaS and IaaS, as a pyramid. For developers, PaaS is the most important: it's something akin to the operating system. The trend is now Developing as a Service. Interestingly enough, though platforms become more industrialized, software development moves toward craftsmanship.

Historically speaking, the Cloud began with consumer websites solving their own needs (like Google, Amazon and others). Nowadays, IaaS is becoming mainstream, but you still need to build and maintain your own software infrastructure. For example, when Amazon failed last year, 3 startups survived the outage: those that had built their own platform.
The software development process is about to change. Business-to-consumer applications are becoming like haute couture, with new applications coming out every day. Some applications even have a very short life expectancy (like the Devoxx France application). Thus, development cycles have to be shortened in parallel, hence agility. The Cloud lets us become more agile by cutting us off from infrastructure.

The main risk of Cloud infrastructure is vendor lock-in. Cloud Foundry solves the problem by providing an Open Source product. For example, AppFog uses Cloud Foundry and added PHP. Likewise, you can create your own private cloud to host it on your own infrastructure.

There are a couple of things to learn from that:

  • Software is becoming like fashion, design rules
  • Use the best tool for the task at hand
  • Learn, learn, learn!

In conclusion, George buys an iPhone, codes a little bit every day, reads programming books and is happy again! Besides, he stopped playing golf. The rest is up to you…

Wow, the man is really good, and so was the session. If you missed the keynote, see it on Parleys ASAP.

Abstraction Distractions for France by Neal Ford

As developers, we think in abstractions; if there weren't any, we would have to deal with 0s and 1s and couldn't get anything done. And yet, by using abstractions, we sometimes forget they are only abstractions and tend to consider them the real thing.

Let's take a simple document. How can you ensure that a chocolate cake recipe can still be read in a hundred years? Put it in a Word document? Probably not… A text format would be better, but raises encoding issues. If you're a government agency, the stakes are higher. The goal is not to save the document, but the information.

For example, emacs tapes are still available, but there's nothing left to read them with.

[Other fun examples are omitted because I couldn't type fast enough and the whole thing is based on pictures]

  • Lesson #1, don’t mistake the abstraction for the real thing
  • Lesson #2, understand 1 level down your usual abstraction
  • Lesson #3, once internalized, abstractions are very hard to cut off
  • Lesson #4, abstractions are both walls and prisons
  • Lesson #5, don’t name things that expose underlying details
  • Lesson #6, good APIs are not merely high-level or low-level, they are both at once
  • Lesson #7, generalize the 80% cases, and get away from the rest
  • Lesson #8, don’t be distracted by your abstractions

Dietzler's law: using a tool like Access (or any contextual tool), 80% of what the user wants is very easy, 10% is hard to implement, and the last 10% cannot be achieved because you would have to go beyond the abstraction, and that cannot be done.

Any tool (like Maven) has a tipping point; once you've gone beyond it, things will never be wonderful again. At this point, you've got to cut and run, but it will be hard because you'll be imprisoned by the abstraction.

Wow! I missed Neal's session on the first day and regretted it at the time; now I regret it even more. The previous keynote raised the bar very high, and Neal passed it with flying colors. Anyone wanting a real show (with content to boot) should attend Neal's sessions given the chance. Besides, you could even get his wife's chocolate cake recipe :-)

Kotlin, by Dmitry Jemerov

New languages seem to bloom every so often, but the last year saw announcements from established companies (Red Hat's Ceylon, JetBrains' Kotlin and Google's Dart). Here's my chance to hear a talk from someone very close to Kotlin's inception.
JetBrains' motto is "develop with pleasure". There are other products besides IntelliJ IDEA (TeamCity, PHPStorm, and others).
Kotlin is statically typed, runs on the JVM, is OSS and is developed by JetBrains with help from the community. Why Kotlin? Because IntelliJ IDEA is based on Java, and though Java is not the best fit, other languages are not better.
Goals include:

  • Full Java interoperability
  • compilation and runtime as fast as Java's
  • more concise than Java, but preventing more kinds of errors (such as NPEs)
  • simple (no PhD level)

Performance gains are to be found in different areas: no autoboxing, no constant runtime overhead in generated code and low compilation times. Kotlin is easy to learn; there's no distinction between library developers and 'standard' developers (as in Scala). It focuses on best practices taken from other languages. This leads to no ground-breaking features, only common sense.
[Follow a bunch of code snippets]
At first, it seems akin to Scala, with fun replacing def: there are Scala-like constructors, vals and vars, traits, and reversed-order type/name declarations (separated by a colon).

Extension functions seem nice enough: you can extend not only types but also functions, without bothering to inherit the whole nine yards. Also, in Kotlin, functions are first-class citizens (in short, you get closures with type inference). Interestingly enough, it is by default the implicit name of the sole parameter of a one-argument function literal (as in Groovy).

An interesting feature of Kotlin is the use of the dollar sign to reference variables inside Strings, so as to have variable substitution at runtime.

There's a limited form of operator overloading: only a subset of operators can be overloaded, as it needs to be done through a specifically-named function. For example, to overload -, you need to override the minus function.

In order to manage nullability, Kotlin offers the ? operator to only allow safe calls (akin to Groovy again). For example, the following snippet returns the list size, or 0 if the list is null:

f.listFiles()?.size() ?: 0

The question mark marks the return type as nullable; otherwise, it's not nullable and the compiler complains. It makes the code more concise and provides a safety net for handling null values.

Kotlin also provides pattern matching (as in Scala, the syntax being very similar). An improvement over Groovy's builders: Kotlin's are typesafe; for example, the HTML builder checks for validity at compile time!

More features include:

  • Inline functions
  • First-class delegation, rendering delegation easier with no code to write; it's taken care of by the compiler
  • Modularization as part of the language
  • Declaration site variance [???]
  • And more to come [This frightens me to no end, what will happen to backward compatibility]

Not an uninteresting talk, but there was some passion lacking in the speaker. The get-what-works-from-other-languages strategy is a very interesting one: at least, you get the best of many worlds out-of-the-box. I see a big problem, however: how does JetBrains plan to ensure backwards compatibility if Kotlin reaches critical mass? The answer from the speaker is:

JetBrains intends to allow for compiler extensions.

Food for thought (directly from the speaker): if JetBrains had known that Ceylon was being developed, it probably wouldn't have started the project.

Discovering JavaFX

OK, I'm a bit of a masochist considering the presales talk from IBM, but I'm really interested in GUIs, and perhaps JavaFX's next version is the right one (considering Flex's fate).
The talk begins with the speaker asking us to drop all misconceptions about JavaFX (good idea). JavaFX is a complement to Swing, SWT and HTML5. On Android and iOS, it works! The latest addition to the JavaFX ecosystem is the JavaFX Scene Builder.

JavaFX goals were to achieve:

  • a client API
  • a modular architecture (both for mobile and embedded)
  • a migration solution for UIs in Java (since Swing is more than 10 years old)
  • advanced development tooling
  • a cross-platform solution (across both OS and hardware type)

JavaFX 1.0 received bad feedback from developers: they didn't want to learn a new scripting language. The current version is JavaFX 2, available since 2011 on Windows (Mac OS and Linux are nearing GA completion). On the tooling side, NetBeans 7.1, 7.2 and Scene Builder are provided. The community has already taken care of some integration plugins for Eclipse, while JetBrains has made contact to integrate JavaFX development into IntelliJ IDEA.

Currently achieved goals include inclusion in the OpenJDK (through OpenJFX) and a common license with the JDK. For the latter, note that JavaFX will be part of JavaSE from JDK 8 on. Oracle's policy is to submit a JSR to the JCP for standardization.

With JavaFX, you can use the tools you want, benefit from all Java features (generics, annotations, etc.) or even use your preferred language that runs on the JVM. In JavaFX 2, the FXML markup language is available for UI definition (just like MXML in Flex). Alternatively, you can still code your UI programmatically. FXML can be laced with scripting languages, such as JavaScript and Groovy.

On the graphics side, the architecture (Prism) makes it possible to use the best of hardware acceleration. The new windowing toolkit is called Glass; its use makes it possible to access advanced functions (buffering, etc.). Supported media formats include VP6, H.264, MP3 and AAC (very akin to supported HTML5 media formats). Moreover, JavaFX embeds the WebKit rendering engine, thus making it possible to interpret HTML and JavaScript in Java applications. Bidirectional communication is possible between the Java and the WebKit parts. With WebKit's usage, you get multiplatform testing for free.

Swing and SWT can be gradually migrated to JavaFX, through seamless integration of the latter in the existing UI. A big difference with Swing/SWT is the ability to use CSS to style your JavaFX components, hence you can reuse the same stylesheet on the web and on JavaFX. As a sidenote, there’s an already existing community around OpenJFX. Other contributions include Eclipse and OSGi plugins, DataFX (datasource access), ScalaFX and GroovyFX, frameworks (already!) and JFXtras (additional UI controls and extensions).

Last week, the JavaFX Scene Builder release was announced. It's a visual development tool for GUIs; drag-and-drop is supported. Integration with NetBeans comes out of the box, but it can be used with other IDEs (since it's a standalone tool). It's entirely developed in JavaFX and available for Mac OS X and Windows.

The roadmap is the following:

  • End 2011, JavaFX 2.0.1 bundled with JDK 7
  • First quarter 2012, JavaFX 2.1 GA for Windows (Linux in beta)
  • Mid-2012, GA for all OS
  • Mid-2013, JavaFX 3.0 included in the JDK 8

[Talk ends with a 3-tier demo, with the GUI developed in JavaFX with CSS, and Scene Builder demonstrating drag-and-drop]

Monotone speaker; I had to fight against sleep. On a more positive note, this wasn't a sales pitch, and there were some interesting pieces of information. The speaker also presented an iPad running a JavaFX application using the hardware accelerometer.

Java caching with Guava by Charles Fry

The session is about the inner workings of caching. Guava is an OSS collection of low-level Java libraries. The package com.google.common.cache contains simple caching.

LoadingCache automatically loads entries when a cache miss occurs, while Cache doesn’t. The delegate CacheLoader interface knows how to load entries.
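
As an illustration, a minimal sketch of a loading cache; the key/value types and the computation are made up for the example:

import java.util.concurrent.ExecutionException;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class LoadingCacheExample {

    // The CacheLoader is invoked on cache misses only
    private static final LoadingCache<String, Integer> LENGTHS = CacheBuilder.newBuilder()
            .build(new CacheLoader<String, Integer>() {
                @Override
                public Integer load(String key) {
                    return key.length(); // stand-in for an expensive computation
                }
            });

    public static void main(String[] args) throws ExecutionException {
        System.out.println(LENGTHS.get("devoxx")); // miss: computes and stores 6
        System.out.println(LENGTHS.get("devoxx")); // hit: returns the cached value
    }
}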

Concurrency is handled internally very much like in ConcurrentHashMap. In the case of exceptions occurring during load, you have to catch runtime exceptions and rethrow checked IOExceptions. For keys that are transient, a fluent weakKeys() method on the builder allows for weak references. In contrast, to allow the garbage collector to collect cached values, the softValues() method is used (there have been performance issues, however, so this use is discouraged).

  • To allow for cache eviction, the cache builder can be given a maximum size; entries will be evicted in approximate LRU order (evictions occur on write operations).
  • Alternatively, if the eviction strategy is based on weight, a Weigher interface lets you specify the weight computation and is passed around while building the cache loader. In this case, the eviction works exactly the same.
  • Another strategy is time-to-idle; it's achieved through the expireAfterAccess() method on the builder
  • Finally, the time-to-live eviction strategy is provided by the expireAfterWrite() method on the builder

The next step is to be able to analyze the cache: just use the recordStats() method. A CacheStats object is returned by the stats() method on the cache and allows for querying. Having analyzed the cache, it's then possible to tweak the configuration: you create a string containing key-value pairs and pass it to the builder.
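
A hedged sketch combining the eviction and statistics settings above (the values are arbitrary):

import java.util.concurrent.TimeUnit;

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

public class EvictionStatsExample {

    public static void main(String[] args) {
        LoadingCache<String, Integer> cache = CacheBuilder.newBuilder()
                .maximumSize(1000)                       // approximate-LRU, size-based eviction
                .expireAfterAccess(10, TimeUnit.MINUTES) // time-to-idle
                .expireAfterWrite(1, TimeUnit.HOURS)     // time-to-live
                .recordStats()                           // enables stats()
                .build(new CacheLoader<String, Integer>() {
                    @Override
                    public Integer load(String key) {
                        return key.length();
                    }
                });

        cache.getUnchecked("devoxx");
        System.out.println(cache.stats().hitRate()); // e.g. feed this into monitoring
    }
}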

Guava also provides an event-listener model around the cache. Builders allow for listener registration through removalListener(), which takes a RemovalListener instance as a parameter. Of course, you can have multiple listeners by registering more than one object. A listener is called whenever an entry is evicted, whichever the strategy used. By default, listeners are called synchronously; it's advised to make listeners asynchronous if they are heavyweight (or to wrap them with RemovalListeners.asynchronous()).

Guava also provides two different ways for refreshing the cache:

  • manual refresh by calling the refresh() method on the cache (backed by CacheLoader.reload(), which can be implemented asynchronously)
  • automatic refresh through the refreshAfterWrite() method on the builder

Asynchronous refresh should be implemented to avoid blocking user threads; just use Java's FutureTask.
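
A hedged sketch of what such a loader could look like, using Guava's ListenableFutureTask (a FutureTask subclass) and a background executor:

import java.util.concurrent.Callable;
import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

import com.google.common.cache.CacheLoader;
import com.google.common.util.concurrent.ListenableFuture;
import com.google.common.util.concurrent.ListenableFutureTask;

public class AsyncReloadingLoader extends CacheLoader<String, Integer> {

    private final Executor executor = Executors.newFixedThreadPool(2);

    @Override
    public Integer load(String key) {
        return key.length(); // stand-in for an expensive computation
    }

    @Override
    public ListenableFuture<Integer> reload(final String key, Integer oldValue) {
        // Recompute in the background; the old value keeps being served meanwhile
        ListenableFutureTask<Integer> task = ListenableFutureTask.create(new Callable<Integer>() {
            @Override
            public Integer call() {
                return load(key);
            }
        });
        executor.execute(task);
        return task;
    }
}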

Sometimes, it's more efficient for a cache loader to load a set of entries at once rather than one at a time. This is accomplished by overriding CacheLoader.loadAll() and calling LoadingCache.getAll(). The latter does not block multiple requests for the same key.

So far, the cache has been fed by the cache loader. However, how can we handle specific use-cases and manually populate the cache? This is simply done with getIfPresent() to query whether a key is present and put() to insert an entry into the cache.

In order to implement non-loading caches, we don't need a CacheLoader at all: when calling CacheBuilder.build() without one, you get such a cache. In order to disable caching entirely, the current canonical way is to set maximumSize to 0. This can be done at runtime through the CacheBuilderSpec class.
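
A short sketch of such a manually populated cache:

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class ManualCacheExample {

    public static void main(String[] args) {
        // No CacheLoader: entries are only ever added manually
        Cache<String, Integer> cache = CacheBuilder.newBuilder().build();

        cache.put("devoxx", 2012);                    // manual population
        Integer value = cache.getIfPresent("devoxx"); // returns null on a miss
        System.out.println(value);
    }
}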

Caches can be iterated over like maps with Cache.asMap(). All ConcurrentMap write operations are implemented, but it’s discouraged to use them (use CacheLoader or a Future instead). Be warned that both get() methods have similar signatures but they are different (Map.get() is in fact a “get if present”).

Future works include:

  • AsyncLoadingCache, returning a Future<V> from get(K)
  • A CacheBuilder.withBackingCache() to provide cache layering (such as L1 + L2)
  • Performance optimizations

A very nice talk, 'nuff said. A bit disappointed by the Guava team's lack of knowledge of the Java Caching API (a JSR), though.

.Net for the Java developer, an inspiration by Cyrille Martraire and Rui Carvalho

This talk is about what .Net is and what's lacking in Java.

For a Java developer, .Net was something akin to the blue screen of death. From a more rational point of view, let's first remember that Sun and Microsoft were neighbors:

  1. Sun created a superb tool named Java and thought about putting it into Mosaic (those were applets)
  2. Microsoft responded with its own alternative, ActiveX. And to go beyond that, it added Active Server Pages (ASP) to generate HTML pages dynamically
  3. Sun answered with the JavaEE platform.
  4. Disappointed by Sun's success, Microsoft delivered .Net in turn

Remarkably, Sun took a very long time to ship Java 5. The .Net framework matched Java's features in version 2.0 and soon went beyond them. .Net principles are exactly the same; only some coding standards are somewhat different. .Net is polyglot from the start (C#, IronRuby, IronPython, F#, …). In order to seduce C++ programmers, Microsoft provided low-level APIs.

Some features are really better in C# than in Java (or more precisely, already available):

  • The yield keyword lets us avoid creating temporary collections when returning such a result
  • Ranges are provided through the in keyword
  • Functions are first-class citizens and inline functions are possible (lambdas, and guess what, the => operator). Moreover, lambda parameters are statically typed; hence we can create fluent APIs
  • C# provides anonymous objects, while Java makes us create new types even when we only need them in a single place
  • LINQ is a DSL that is a SQL look-alike
  • With C# 4, dynamic gives developers a special extensible type (not a dynamic proxy) [I confess I didn't understand all of this, apart from the benefit of not creating a data structure when there's no need]
  • Razor is a hybrid syntax between HTML and C#

On the portability side, the Mono project lets developers program in .Net and deploy on Linux and Mac OS X. Still, in order to maximize decoupling, be careful with proprietary products (SQL Server for example). Though .Net is seen as autocratic, alternative communities exist, such as Alt.Net.

As a conclusion, here are some messages:

  • Do not forget that Java is not the JVM. Look to other languages while keeping your environment
  • The language is only a small part of development; developers share much more (patterns, GUI, xDD, etc.)

A definitely good presentation on the benefits brought by looking over the gap, thanks to the well-oiled tongue-in-cheek exchanges between the two speakers (real pleasant teamwork, guys).

And that's the end of Devoxx France 2012. See you next year, guys, and thanks to everyone for the great time there.


Devoxx Fr 2012 Day 2

April 19th, 2012

Today begins with the big talk from the Devoxx Fr team, which sums up the story of the Paris JUG and Devoxx France.

Proud to be developer? by Pierre Pezziardi

In some social situations, when you're asked what you do in life, it's hard to say you're a computer programmer, because IT is late, costly and, well, not very present in everyday life.

That's not entirely untrue. The cost of implementing a feature is constantly growing: IT is the only industry with no productivity gains; the opposite is true! Our industry is evolving very rapidly: we are programming in languages that didn't exist 5 years ago. And yet, some things never change. Project cycles, despite agility, are long. Moreover, there's a clear distinction between roles, most notably between thinkers and doers. Agility was meant to address that.

Our method was to develop integrated products, and that meant our process had to be integrated and collaborative.

Steve Jobs

Yet, though Steve Jobs formalized that, though productivity decreases, though agility works, not much changes. There's a block somewhere, and this block comes from the people themselves, who resist change. Evolving from a planned economy to agility is very hard.

We could stop at that and tell ourselves nothing can be done, but in fact we can do better, not by changing overnight but step by step. As an example, at BRED, customer service was handling more than 120k calls per year (80% support). 10k calls were about Citrix sessions freezing: adding a simple button on the screen helped reduce this number by 8k. Another way to reduce it was to change a simple error message label, from "Technical error" to "Your account is under watch, please contact your local counselor".

IT is misunderstood by people outside of it. For example, the notion of technical debt is invisible to top-level managers. In order to communicate with people outside your culture, you have to talk their language; it's done through graphics and semantics. To go beyond productivity decreases, we'll have to drastically change our culture, but in the meantime we can act in small increments with added value.

Another thought is that IT is aligned with its organization: bureaucratic organizations produce bureaucratic IT. For example, Lehman Brothers implemented the Basel II standards (risk management) and yet gained nothing by it (remember?). In essence, understand not only IT complexity, but also remember IT is a support function that has to bring business value. Examples of such added-value systems are Wikipedia, Google, and so on. What do these tools have in common?

The simple, poor, transparent tool is a humble servant; the elaborate, complex, secret is an arrogant master.

Ivan Illich

In essence, those tools are based on "everything is possible but", instead of standard enterprise policies where "everything is forbidden but". The bet is on trust toward users. The approach is to empower users and let them make mistakes, but be able to correct those easily.

It may seem that we as computer programmers cannot do anything about it (we are only developers), but in order to change the system, we have to change ourselves first. This is not a recent strategy, though: Gerald Weinberg describes it in his book The Psychology of Computer Programming.

In conclusion, we are working for organizations that segregate people. In IT, we reproduce the system: programmers, project managers, business analysts. Our duty as programmers (and people) is to break those barriers and talk to people outside IT.

This could be Heaven or this could be Hell by Ben Evans and Martijn Verburg

The talk is about the future of Java. There are two possible views of the future: Heaven and Hell.

In the Heaven view of things, Java 8 comes out in 2013 (as promised), Java remains the #1 language and Jigsaw (Java's promised modularity) is largely adopted. Java 9 brings reified generics :-)

In essence, we are in the middle of a changing environment:

  • hardware is definitely multicore
  • mobile is here to stay
  • Emerging markets are going to bring many more new Java developers (millions are expected)

Open Source Software becomes a real part of the mainstream, and everybody (the general public) understands the notions behind it. On the social network side, strict confidentiality policies are observed. As a corollary, the configuration for controlling those parameters is easy and readily available.

In the Hell view of things, Java 8 is not released until 2015 because the community fails to test it enough, and Jigsaw is never made available. Java 9 is not released until after 2020: Java is no longer #1, developers flee to the .Net platform and alternative languages die.

On the hardware side, it becomes even more fragmented. Worse, patent litigations dominate the market and OpenJDK cannot manage both. On the mobile side, Android and Java stay separated and Android fragmentation goes even further.

In Hell, millions of new developers turn toward .Net, and VB is taught in school. Apple and its look-alikes continue to promote lock-in; competitors see the benefits of this approach and take the same way. On the Open Source side, OSS developers become elitist and many new ideas never materialize. Former startups, now market leaders, are worried about new, nimbler competitors and start heavily lobbying; innovation is smothered.

Facebook and its successors dominate the Internet, and privacy is completely lost. You log in with your Facebook account at work, and your boss can watch what you do all day.

In conclusion, the future can be anything, but whatever it is, it's up to us.

Spring is dead, long live Spring by Gildas Cuisinier

Guess what, this talk is about Spring and more precisely the war between Spring and JavaEE.

Episode 1

How did Spring appear? The JCP made a platform proposal years ago that went by the name of J2EE (at the time). There were some problems: migration from one standard J2EE platform to another was not as easy as announced, and tests were coupled to the platform (no unit tests). In response, Rod Johnson created an Open Source lightweight framework to address those problems: Spring was born.

Second evolution

At first, initial configuration was done through XML (there weren't so many boos against XML at the time). At its inception, a single configuration file was necessary. Then, the import command made it possible to have multiple, neatly separated configuration files. Third-party frameworks made their way into the framework and, for some, XML configuration was a nightmare (Acegi for example). Spring evolved with specific namespaces to make configuration for those frameworks easier; it's something akin to a DSL over XML.

JavaEE 6 sexy again

With Java 5 bringing annotations, Spring continued its evolution and took them in. We still need a little XML configuration to scan for annotations, though. Meanwhile, the JCP learned from Spring's successes and delivered JavaEE 6: it's simple, testable and lightweight, all achieved through annotations. Yet, migrating to JavaEE 6 remains a barrier to adoption.

Also inspired by its competition, Spring 3 brought full-annotation configuration; XML can be discarded entirely.

JavaEE in 2012 from a survey

Results from http://cyg.be/SpringJEE show the following: more than half of installed JavaEE servers are version 5. A majority of not-yet adopters think about migrating in less than a year. More than half of Spring users do not consider migrating to JavaEE. On the contrary, new users are very fragmented: only a quarter think about JavaEE 6, while another quarter think about Spring.

On the Spring side, results show quicker migrations to newer versions. For example, half of current Spring users use v3.0, while it was delivered at the same time as JavaEE 6, and around 20% use Spring 3.1. It's mostly because migrating is easy (while JavaEE migrations require new application servers).

Episode 3

Spring 3.0 brought the JavaConfig approach, a no-XML configuration, but didn't entirely close the gap; Spring 3.1 does. For example, component scanning can be done through @ComponentScan, scheduling can be enabled through @EnableScheduling and MVC is provided through @EnableMVC. What these annotations do is easily understood from the underlying code (it was not so easy in XML).
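
A minimal sketch of such an annotation-only configuration class; the package name is hypothetical:

import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

@Configuration
@ComponentScan("ch.frankel.blog")
@EnableScheduling
public class ApplicationConfig {
    // beans are discovered through scanning; @Scheduled methods get activated
}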

A great feature of Spring is its abstraction over caching through @Cacheable (a sketch follows the list below); it provides no implementation itself, so you can plug different implementations (ConcurrentHashMap, EhCache and GemFire) under the covers. The feature is also available through XML. Spring 3.1 also provides:

  • Hibernate 4 support
  • A new c: XML namespace for constructor arguments
  • JPA use without persistence.xml
  • miscellaneous improvements in Spring MVC
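
As promised, a hedged sketch of the cache abstraction; the service, the "quotes" cache name and the lookup are made up for the example:

import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class QuoteService {

    // Runs only on a cache miss; the result is stored in the "quotes" cache,
    // whose backing store (ConcurrentHashMap, EhCache, ...) is configured elsewhere
    @Cacheable("quotes")
    public String quoteOfTheDay(String day) {
        return expensiveLookup(day);
    }

    private String expensiveLookup(String day) {
        return "Quote for " + day;
    }
}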

Conclusion

In conclusion, Spring is not only the Spring framework; a whole ecosystem is readily available: Spring Data, Spring Batch, Spring Mobile, Spring Security, Spring Android, Spring Flex (OK, let's forget this one), and many more.

IMHO, there was too much defending of Spring at the beginning, which left not enough time for the real stuff at the end. It's not a good sign when one is on the defensive.

Java concurrency from the trenches by Alex Snaps

Having learned yesterday that we should prepare for concurrency, I decided to attend this talk in place of Client/Server Apps with Java and HTML5 [I couldn't reproduce the code snippets].

The goal of this talk is to have a better understanding of concurrent code, do more with java.util.concurrent and finally make best use of CPU cores.
What's concurrency anyway? And what about threads, locks and actors? The main thing about concurrency is state and how to share it. A thread-safe class is one that obeys its contract, whether used by a single thread or by multiple threads, whatever the scheduling, and without external synchronization.

  • Atomicity: no thread should expose intermediate or transient state that violates the post-conditions of a class
  • Visibility: in a multithreaded environment, there's no guarantee that I can read back a value from a variable just after having written it

Taking Hibernate as an example, there are multiple statistics to track: entity and query counters, max duration information, etc.

public interface Statistics {

    public long getQueryExecutionMaxTime();

    public String getQueryExecutionMaxTimeQueryString();
}

This interface (independently of any implementation) cannot guarantee its contract: the two methods are related (the query string should be the one that produced the max time), but there’s no easy way to enforce that both values are read consistently.

  • In order to get thread safety, a naive implementation would be to put the synchronized keyword in front of each method. The drawback is that a single monitor shared by every method has a huge impact on performance.
  • The next step is to have a different monitor for each variable that is accessed, and to use the same monitor for a given variable across the different methods, with synchronized blocks inside the methods (see the sketch below).
  • Java 5 brings the notion of read/write locks: there’s still only a single writer at a time, but multiple concurrent readers are allowed. For statistics, there’s little benefit since there are many writes but only a few reads.
  • Also in Java 5, Atomic classes can be used for synchronized variables. Using such atomics, tests are even slower… even though I didn’t catch why.
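The per-variable monitor approach could look like this (a sketch of my own, not the speaker’s code); note how it also repairs the broken contract above, since the two related fields are always read and written under the same lock:

public class SynchronizedStatistics {

    private final Object maxTimeMonitor = new Object(); // guards the two related fields below

    private long queryExecutionMaxTime;
    private String queryExecutionMaxTimeQueryString;

    public void report(String query, long time) {
        synchronized (maxTimeMonitor) { // same monitor wherever the pair is touched
            if (time > queryExecutionMaxTime) {
                queryExecutionMaxTime = time;
                queryExecutionMaxTimeQueryString = query;
            }
        }
    }

    public long getQueryExecutionMaxTime() {
        synchronized (maxTimeMonitor) {
            return queryExecutionMaxTime;
        }
    }

    public String getQueryExecutionMaxTimeQueryString() {
        synchronized (maxTimeMonitor) {
            return queryExecutionMaxTimeQueryString;
        }
    }
}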

In conclusion, finer-grained locks can improve performance (i.e. adapt your lock granularity to your needs and context).

Better yet, there’s a no-lock solution: gather all related values into a single immutable State object that is swapped atomically. As an alternative, the volatile keyword can be used to guarantee that the value read is the last one written. Finally, another solution is Compare-and-Swap (CAS). CAS is not about enforcing thread-safety upfront, but rather about preparing for failure and dealing with it.
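Here is a sketch of my own combining the two ideas, an immutable State object updated through CAS:

import java.util.concurrent.atomic.AtomicReference;

public class CasStatistics {

    // All related values live in one immutable snapshot...
    private static final class State {
        private final long maxTime;
        private final String query;

        private State(long maxTime, String query) {
            this.maxTime = maxTime;
            this.query = query;
        }
    }

    // ...swapped atomically, so readers always see a consistent pair, without any lock
    private final AtomicReference<State> state = new AtomicReference<State>(new State(0, null));

    public void report(String query, long time) {
        for (;;) { // prepare for failure and deal with it
            State current = state.get();
            if (time <= current.maxTime) {
                return; // nothing new to publish
            }
            if (state.compareAndSet(current, new State(time, query))) {
                return; // our snapshot won the race
            }
            // CAS failed: another thread published a fresher snapshot, loop and retry
        }
    }

    public long getQueryExecutionMaxTime() {
        return state.get().maxTime;
    }

    public String getQueryExecutionMaxTimeQueryString() {
        return state.get().query;
    }
}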

In conclusion, aim for non-blocking strategies and only use locks when required.

I must admit I didn’t get the most out of the session: it was full of content and I probably lacked some of the background skills. The slides on Parleys will probably help me deepen my understanding, because there was a lot of stuff.

DevOps: not only to manage servers by Jerome Bernard

Following my (re)discovery of virtualization, I thought the session could be very insightful.

The session is feedback on how the speaker had to organize in-house training sessions while minimizing the impact on infrastructure. The biggest constraint was time (less than two weeks, starting from bare machines).

Solutions that were rejected included:

  • Ghosting strategies (a single master redeployed on client machines before each session)
  • Complete reinstallation from the local network through scripts
  • Account creation/destruction for sessions

The final solution is a Chef server, with cookbooks polled by Chef clients that deploy to a VirtualBox VM on each client machine. This lets attendees get full admin rights on their guest system (while the host system stays locked down).

Chef lets you automate installations through recipes and cookbooks. Chef clients poll the server every so often, so that configuration stays in sync on each client. Chef’s advantage over Puppet is its stock of ready-to-use recipes. Finally, Chef provides an administration console that lets you follow each client’s state. On the physical machines, Chef was only used to automate the VirtualBox installation and the Vagrant files. Also, effort was put into optimizing network usage.

VeeWee was born as a Vagrant extension but now has a life of its own. It lets you build Vagrant masters easily, through templating. In the normal course of things there’s no need for it, since there are plenty of Vagrant base boxes available. In this case, the need was to switch to a French locale from the English-based master. VeeWee launches a VirtualBox VM and simulates user clicks on it. As a side note, VeeWee recently made Windows masters available.

Master definition files were edited to achieve the previous steps:

  • definition.rb: changed the ISO used, as well as its MD5 hash
  • preseed.cfg: changed locale to French
  • postinstall.sh: removed Puppet installation

Once VeeWee finished building the image, a single command was used to create a Vagrant box.

Vagrant lets you create a VirtualBox image from the CLI, through a single text-based Vagrantfile. It integrates easily with Chef and Puppet. Note: it can do many other things (network management, port forwarding between the VM and the physical machine, directory sharing between host and guest, multiple-VM management and their network configuration, etc.). In the case study, the Vagrantfile was dynamic (updated by Chef) so that, for instance, the number of allocated CPUs matched the number available on the physical machine. The Chef recipes downloaded Java, Maven and MySQL.

On the host systems, Chef Solo was also installed. Vagrant and Chef only installed the software necessary for a particular session: Eclipse (as a .tar.gz), native packages (SVN) and custom symbolic links.

Feedback:

  • Chef
    • DNS resolution is very important
    • The official Java Chef recipe doesn’t work anymore, since Oracle now requires JavaScript and cookies to download from its site. Better to fetch the JDK once and make it available on an internal web server
    • Packaging the Android SDK is not easy, it’s only an installer
  • Vagrant
    • Test on your notebook before deploying
    • vagrant provision/reload to update a Vagrant-managed box
    • Vagrant and VeeWee should now be more compatible
  • VeeWee
    • Keep VirtualBox versions and VirtualBox master versions in sync

Finally, other uses came up: developer machines are provisioned by the same system, and user-acceptance environments are managed by Vagrant. A good idea would be to apply Continuous Integration to the VMs themselves, so that bad surprises don’t happen at the last moment.

All in all, another session so full of content that it will need to be reviewed once or twice to get the most out of it.

Behind the scenes of day to day development at Google by Petra Cross

The talk is about teams and roles, development workflows in general, and those used at Google.

Teams and roles

There are 10k+ developers across 40+ offices, checking in code every minute. At Google, roles are defined with clear-cut responsibilities: engineering directors (ENG DIR), product managers (PM), engineering managers (ENG MGR), software engineers/tech leads (SWE/TL), software engineers (SWE), software engineers in test (SET) and test engineers (TE). The team hierarchy has no more than three levels. Teams are created based on needs, for a project, not forever.

Google’s way of achieving its goal (organizing the world’s information…) is by creating software. A feature’s development goes like this:

Idea > Features > Planned > Worked On > In Code review > Tested > Canary > Live!

Note that “worked on” includes unit tests, so that code review can send back a “write more tests” verdict.

Everything that’s in continuous integration goes into a release. Google has an eat-your-own-dogfood policy, so that Google employees can give feedback. Then, if everything goes right, the release is pushed to the canary: it reaches only a fraction of the users, depending on the particular product.

How to reduce development time? Either add resources or reduce waste; Google’s way is to reduce waste.

Development workflow

Three methods are relatively well-known:

  • Waterfall is old school and comes from the manufacturing industries. No need to go further…
  • Spiral is the next step; it’s a sequence of waterfalls, each producing a prototype of the product. The iteration cycle is about 6 months.
  • Agile is the newest approach. It’s all about customer collaboration, getting feedback and adapting to change. There are different flavors: XP, Scrum and Kanban (among others)

Google workflow

Google does a bit of everything, but mostly Agile. The first of the “Ten things Google found to be true” is that when you focus on users, everything else falls into place. Customers think in user stories while devs think in tasks.
Example user story: “As an ATM user, I want to be able to view my balance”. Possible tasks include:

  1. Define an API through which the ATM will talk to the backend
  2. Implement the backend
  3. Add the GUI

In Kanban, a task can be in four different states: ICEBOX, BACKLOG, CURRENT (or WORKING ON) and DONE AND VERIFIED. Tasks in the icebox may never get done, so constant cleaning of the icebox is necessary. The backlog should hold tasks for one to one-and-a-half iterations, to guarantee no one runs out of work. Tasks in the backlog are things everyone agreed to work on; they are picked up by developers. Everyone having visibility on the backlog makes daily meetings useless, because when picking a task and moving it to the CURRENT state, you put your name on it. On whiteboards, there’s a single swimlane for each state (except ICEBOX) and tasks are written on post-its. When a task moves to the CURRENT swimlane, you put your name (and perhaps your picture) next to it. You can also pair program; it’s irrelevant to the workflow.

As said before, daily standups are optional, but weekly team meetings are mandatory, as are monthly retrospective meetings (see later). In the weekly meeting, you estimate tasks from top to bottom, by planning poker. When consensus is not reached, the lowest and highest estimators have to explain their reasons. An important point is that there’s no multi-directional discussion, in order to avoid involving emotions. Then you re-estimate until a majority agrees. Estimation is done in points, not in time, because estimating in absolute terms is hard: it’s easier to tell that a task will be done quicker (or slower) than another one. Likewise, velocity is computed in points: when a PM asks for a feature, it’s broken down into tasks and the engineering manager knows whether it can be achieved given his/her team’s velocity.

Note: when stuck, developers are expected to learn from available materials and to ask questions, so as to avoid islands of knowledge. Besides, managers and tech leads don’t care how you achieve your task, only that the task gets done.

There’s also an emphasis on a retrospective once a month. Everyone expresses what went well, what went not so well and what could be done better.

Ok, there’s no magic involved… A bit disappointing, because I was probably expecting to have life explained to me. It seems Google manages its projects like everyone else; the success must come from other factors: culture (though I don’t see how I could apply the “get stuck and learn from it” approach), the editor approach, or something else I cannot put my finger on.

Categories: Miscellaneous Tags:

Devoxx Fr 2012 Day 1

April 18th, 2012 No comments

Though your humble writer is a firm believer in abstracting away the underlying mechanics, I chose the following conference because there are use-cases where one is confronted with performance issues that come from a mismatch between the developer’s intent and its implementation. Fixing them requires a better understanding of what really happens.

From Runnable and synchronized to parallel() and atomically() by Jose Paumard

The talk is all about concurrent programming and its associated side-effects, performance and bugs. The title spans past and future: Runnable dates from JDK 1.1 (1995) and parallel() from JDK 8 (2014?). In particular, we’ll see how hardware has an impact on software and on how software developers work.

The central question is how to leverage processor power. As a corollary, what are the problems of parallel applications (think concurrency, synchronization and immutability)? Finally, what are the tools available to Java developers, today and in the near future, to help us develop parallel applications?

Multithreading

In 1995, processors had a single core. Multithreading is about sharing the processor’s time between execution threads. A processor switches threads when the current thread has finished, or when it’s waiting for a slow or locked resource. At that time, parallelizing meant all processors executing the same instruction at the same time, but on different data. This is very fast because threads don’t need to interact with each other (no need to pass data around).

From an API point of view, Java provides Runnable and Thread, plus the keywords synchronized and volatile. Until 2004, there was no need to go beyond that.

As soon as 2005, multicore processors became generally available. From this point on, power no longer comes from an increase in core frequency, but from doubling the number of cores. To address that, Java 5 brought java.util.concurrent to the table. As a side note, there’s a backport (EDU.oswego) for Java 1.4. Many new notions are available in this package.

Today, every basic computer has 2 to 8 cores. Professional machines, such as the SPARC T3, reach 128 hardware threads. Tomorrow, the scale will be in the tens of thousands of cores, or even more.

Problems of multithreading

  • The increase in raw data volume makes parallelism necessary; deal with it
  • Race conditions. Here comes the demonstration of a race condition in the Singleton pattern, and of the double-checked locking pattern (which doesn’t work; a sketch follows the enum below).
    The Java language specification defines the principle that a read of a variable should return the last value written to it. In order to have determinism, there must be a “happens-before” link between accesses to the same variable. When there’s no such link, there’s no way to know the control flow for sure: there’s a nice definition of a bug. The solution is to declare the variable volatile, which creates the “happens-before” link. Unluckily, it drastically reduces performance, down to the level of a single core.
    In the end, the only safe and simple way to implement a Singleton is:

    public enum Singleton {
        instance; // initialized once by the JVM, thread-safely, on first use
    }
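For contrast, here is a sketch (my reconstruction, not the speaker’s exact code) of the broken double-checked locking version: without volatile on the field, there is no “happens-before” link, and another thread may observe a partially constructed instance.

public class BrokenSingleton {

    private static BrokenSingleton instance; // not volatile: no happens-before link

    public static BrokenSingleton getInstance() {
        if (instance == null) { // first check, without the lock
            synchronized (BrokenSingleton.class) {
                if (instance == null) { // second check, with the lock
                    instance = new BrokenSingleton(); // may be seen half-constructed elsewhere
                }
            }
        }
        return instance;
    }
}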

Now, when you get down to the hardware level, it’s even more complex. Each core has two levels of cache (L1 and L2) and there’s a single cache shared between all cores (L3). Here is a table displaying the time needed to move data between a core and the different memory levels:

Destination   Time (ns)
L1            1
L2            3
L3            15
RAM           80

In conclusion, without the need to synchronize, a program can execute at its optimal speed. With it, you have to take into account the way the processor manages memory (including its caches), which is wired into the processor’s architecture! That’s exactly what Java was designed to shield us from (write once, run anywhere). Perhaps Java is not ready to go down this path.

java.util.concurrent

Java 5’s java.util.concurrent brings a new way to launch threads. Before, we instantiated a Runnable, wrapped it in a new Thread and started the latter. The run() method returns no value and cannot throw a checked exception.

With Java 5, we instantiate a Callable instead: its call() method has a return value and can throw an exception. We pass it to an executor service, which manages its own thread pool (and probably reuses an already existing thread). We thereby delegate thread management to a service that can abstract over (and adapt to) the underlying hardware.
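A minimal sketch of the pattern (my own illustration):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableSample {

    public static void main(String[] args) throws Exception {
        // Size the pool to the hardware: the service adapts, the caller doesn't care
        ExecutorService executor = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());

        // Unlike Runnable.run(), call() returns a value and may throw an exception
        Future<Long> future = executor.submit(new Callable<Long>() {
            public Long call() {
                long sum = 0;
                for (int i = 0; i < 1000000; i++) {
                    sum += i;
                }
                return sum;
            }
        });

        System.out.println(future.get()); // blocks until a pooled thread has computed the result
        executor.shutdown();
    }
}

Beyond executors, other notions are also available: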

  • Lock: locks a block for one thread. It provides a way to acquire the lock in one method and release it in another. Besides, there are two methods to acquire a lock, lock() and tryLock(). Finally, there’s ReadWriteLock, which allows as many concurrent readers as desired, but a single writer, which blocks reads. Reads are not synchronized between themselves; writes are.
  • Semaphore: locks a block for n threads
  • Barrier: synchronizes threads
  • Latch: opens/closes a code block for all threads

Also, new collections are available: BlockingQueue (and BlockingDeque), fixed-size collections with timeout methods. They are thread-safe and are typically used for producer/consumer implementations.
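For instance (a sketch of mine, not from the talk):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ProducerConsumer {

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10); // fixed capacity

        new Thread(new Runnable() {
            public void run() {
                try {
                    queue.put("work item"); // blocks while the queue is full
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        // Timeout flavor: give up after one second instead of blocking forever
        String item = queue.poll(1, TimeUnit.SECONDS);
        System.out.println(item);
    }
}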

Finally, CopyOnWriteArrayList (and CopyOnWriteArraySet) provide concurrency-friendly collections that let every thread read without locking and lock only when writing. Beware: they are only efficient when writes are rare (the backing array is copied and the reference swapped on each write). Hash and set structures can be implemented to operate the same way (copy and swap the reference).

Alternatives to synchronize

Memory transactions

Akka provides a Software Transactional Memory API. With this API, we don’t manipulate objects directly, but references to objects. Besides, we need Atomic blocks that manage the STM for us. Memory is managed with optimistic locking: proceed as if nothing had changed, and commit if that’s indeed the case; failed transactions are replayed. The best use-case is low concurrent access: with highly concurrent access, there will be a lot of replays.

Some implementations use locks, others do not. Those are not the same lock semantics as in the JLS. Also, some implementations are very memory hungry, and memory use scales with the number of concurrent transactions. Intel recently announced that the Haswell architecture will support Transactional Synchronization Extensions: this will probably have an impact on the Java platform, sooner or later.

Actors

Actors are components that receive data in the form of read-only messages. They compute a result and may send it as a message to other actors (possibly their caller). The strategy is entirely based on immutability, so no locking is needed.
Akka also provides an Actor API: writing an Akka actor is very similar to writing a Java 5 Callable. Besides, CPU usage is about the same, as is execution time (no significant difference). So why use actors instead of executor services? To bring in the power of STM, through transactional actors. The example provided was bank account transfers.
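A small sketch of what an actor looks like, assuming Akka 2.0’s Java API (the names are mine):

import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class Sizer extends UntypedActor {

    public void onReceive(Object message) {
        if (message instanceof String) { // messages are read-only by convention
            // The result is itself sent as a message (here to the sender, possibly dead letters)
            getSender().tell(((String) message).length());
        } else {
            unhandled(message);
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("demo");
        ActorRef sizer = system.actorOf(new Props(Sizer.class), "sizer");
        sizer.tell("how long am I?"); // fire-and-forget send
        system.shutdown(); // real code would wait for a reply before shutting down
    }
}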

Parallel computing

In Java 7, there’s the Fork/Join framework. Alternatively, there are parallel arrays, and Java 8 will provide the parallel() method. The goal is to automate parallel computing on arrays and collections.

Fork/Join

It’s based on tasks. If a task decides it’s too big, it spawns smaller subtasks: this is known as fork. A system-managed thread pool is available, each thread having its own task queue. When a thread’s queue is empty, it is capable of stealing tasks from other threads’ queues. When a task is finished, a blocking operation known as join aggregates the results of the previously forked subtasks.
As a consequence, a task implementation must not only describe its business code, but also know how to decide whether it’s too big and how to fork itself. Moreover, Fork/Join can be used in a recursive way or an iterative way. The latter approach shows a 30% performance gain over the former. The conclusion is to use recursion only when you don’t know the boundaries of your system in advance.
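A minimal sketch of the recursive flavor (my own illustration, not the speaker’s code):

import java.util.Arrays;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Sums a slice of an array, forking while the slice is "too big"
public class SumTask extends RecursiveTask<Long> {

    private static final int THRESHOLD = 10000;
    private final long[] data;
    private final int from, to;

    public SumTask(long[] data, int from, int to) {
        this.data = data;
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= THRESHOLD) { // small enough: compute directly
            long sum = 0;
            for (int i = from; i < to; i++) {
                sum += data[i];
            }
            return sum;
        }
        int middle = (from + to) / 2;
        SumTask left = new SumTask(data, from, middle);
        SumTask right = new SumTask(data, middle, to);
        left.fork();                          // fork: schedule the left half asynchronously
        return right.compute() + left.join(); // join: wait for and aggregate the forked half
    }

    public static void main(String[] args) {
        long[] data = new long[1000000];
        Arrays.fill(data, 1L);
        System.out.println(new ForkJoinPool().invoke(new SumTask(data, 0, data.length)));
    }
}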

Parallel arrays

Parallel arrays are described in JSR 166y. [And so ended my notes on the talk, along with my battery life #fail]

Conclusion

The speaker not only knows his subject, he knows how to hold an audience captive. Aside from the points described in the talk, I think the most important part of the whole thing is his answer to the last question:

“Given the levels of abstraction Java puts between developers and the hardware, isn’t it logical to use alternative languages to address these new multicore architectures?
- Probably yes”

[UPDATE]

Reduce pressure on memory allocation by Olivier Lamy and Benoit Perroud

The goal of Apache DirectMemory is to decrease the latency provoked by the garbage collector. It’s currently in the Apache Incubator.

Cache off Heap

As a reminder, the JVM’s memory is segmented into different spaces (young, tenured and perm). Part of the memory is allocated to the heap; the rest is native, used for networking and such. The problem with the heap is that it’s managed by the JVM: in order to free unused objects, a garbage collector has to clean up. This process freezes program execution and makes the program non-deterministic (you don’t know when the GC will start and stop).

Since RAM is very cheap nowadays, many applications use a local cache to increase performance. Two kinds of cache are possible:

  • On-heap: think of a big hash map. It increases GC processing time.
  • Off-heap: objects are stored in serialized form in native memory (with an associated serialization penalty), which decreases heap usage and hence garbage collection time

Apache DirectMemory

ADM is based on the ByteBuffer class: buffers are allocated en masse, then handed out on demand. The architecture is layered, with a Cache API on top.
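To illustrate the underlying mechanism only (this is not ADM’s actual API, just a sketch of mine on top of plain NIO):

import java.nio.ByteBuffer;

public class OffHeapSketch {

    public static void main(String[] args) {
        // Allocated once in native memory: the GC never scans (nor moves) this area
        ByteBuffer pool = ByteBuffer.allocateDirect(1024 * 1024);

        byte[] value = "some serialized object".getBytes();
        int offset = pool.position();
        pool.put(value); // "store": copy the serialized form off-heap

        // "load": read it back through a duplicate, leaving the pool's cursor untouched
        ByteBuffer reader = pool.duplicate();
        reader.position(offset).limit(offset + value.length);
        byte[] copy = new byte[value.length];
        reader.get(copy);
        System.out.println(new String(copy));
    }
}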

Strategies to allocate memory face exactly the same problems as disk fragmentation: either merge adjacent free byte buffers, or use fixed-size byte buffers (some memory is ‘lost’, but there’s no fragmentation).

In real life, a small portion of cached objects should stay on-heap, and the rest off-heap. This is very similar to what Terracotta’s BigMemory does. This means one can use EhCache’s API and delegate to ADM’s implementation.

Next steps include a JSR 107 implementation, benchmarks, integration with other components (Cassandra, Lucene, Tomcat), hot configuration, monitoring and management features, etc. For more information, see the ADM site.

Note that ADM only uses Java’s public API (ByteBuffer) under the cover, with no com.sun hacks.

Very interesting talk, and a nice alternative to Terracotta’s product. I don’t think ADM is production-ready yet, but it’s definitely something to keep under watch.

Though I’m no fan of applications choked by JavaScript, fact is, they are here. Better to have some tools to manage them than to suffer from them. Perhaps a well-managed JavaScript application can be the equal of a well-managed Java application (***evil laugh***).

Take care of your JavaScript code by Romain Linsolas

In essence, this talk is about how to write tests for JavaScript, how to analyze the code, and how to manage it all with Jenkins. In reality, only a small fraction of teams already do this for their JavaScript code.

Testing JavaScript

Jasmine is a behavior-driven testing library for JavaScript, while Underscore.js is a utility library. Both will be used to create our test harness. Akquinet Maven archetypes can help us bootstrap our project.

describe introduces a test “class”, while it introduces a test function. Also, expect lets us assert, and beforeEach plays the role of the Before annotations in Java (JUnit or TestNG). Other Jasmine functions mimic their Java equivalents.

Analyzing JavaScript

Let’s keep proven tools here. If you use the aforementioned Maven archetype, it’s already compatible with Sonar! In order to measure test coverage, a nice library called js-test-driver is available. It’s a JUnit look-alike. No need to throw away our previous Jasmine tests; we just need to launch them with js-test-driver, hence some prerequisites implying a few POM updates:

  • a Jasmine adapter
  • the JAR to compute coverage
  • and an available port to launch a web server

The metrics are readily available in Sonar! What is done through the Maven CLI can easily be reproduced in Jenkins.

Nice talk, though developing JavaScript is antagonistic to my approach to applications. At least I’ll be ready to tackle those JavaScript-based applications when I encounter them.

Categories: Event Tags:

Jenkins Users Conference Paris

April 18th, 2012 No comments

I was lucky enough to attend the Jenkins Users Conference Paris 2012, and here are some of the notes I took during the different talks. Take into account that I’m French and not used to hearing much spoken English ;-)

Welcome talk by Kohsuke Kawaguchi.

A little bit of history first. Jenkins began innocently enough, because Kohsuke broke one build too many. In 2004, in order to monitor the success/failure of his builds, he created two little shell scripts. The idea was born and soon he used the JavaEE stack to build the soon-to-be Jenkins (formerly Hudson). Since its inception, Jenkins has had cultural principles that are still upheld today: weekly release cycles, a plugin-based architecture, a low entry barrier and a backward compatibility policy.

From a usage point of view, many big companies use Jenkins and, strangely enough, Europe is a big user base. Today, there are 535 plugins available, with a nice pace of growth. Then followed the evolution of bugs and bugfixes, in constant progression (and a taunt toward Hudson, whose same figures show a break following the fork).

The most important part, to me at least, is the new features. Most are focused on UI and usability (and to be blunt, there was room for improvement):

  • You can now install plugins without restarting. No need to wait for all your jobs to finish to get a new feature!
  • The Save button sticks to the bottom of the page. When you had many plugins installed, the page was long to scroll to get to the damned button…
  • Saving itself is now handled by AJAX and you’re notified by a message at the top: you stay on the page
  • You get a breadcrumb at the top of each page
  • From this breadcrumb (and also from other places), you get a contextual menu. This menu provides a way to navigate to the sections of the job configuration page (and the more plugins you have installed, the more sections there are)
  • Finally, the UI now uses Twitter Bootstrap and later versions of some client-side JavaScript libraries, which bring some added usability features to the table (like the ability to display which block will be deleted when hovering over a Delete button)

Getting by Nicolas de Loof and Matthieu Ancelin

Jenkins is a cron on steroids

Jenkins is now used to do all things continuous: Integration, Delivery (being able to deploy into production) and Deployment (actually deploying into production). In order to automate the path to production, you need a deployment pipeline, from SCM to production. Undertaken in a naive way, this process could take hours or days: you have to rethink your pipeline to streamline it:

  • Pass binaries from job to job (no need to rebuild packages in every step)
  • Parallelize tasks
  • Cleanup in case of failure
  • Manage resources when unavailable

If you want to do that in Jenkins, there are some Jenkins features and plugins to use (Naginator, Promotion, and so on). This is clearly not the way to go if you want to manage dozens of project builds, because you’ll quickly lose sight of what you’re doing. There’s no way to get a simple overview of each pipeline, because there’s no central definition of the build workflow. Moreover, there can be side-effects and, worse, complex interactions between all the plugins and jobs.

The answer to these problems is Build Flow, a dedicated pipeline plugin. Build Flow provides a new kind of job that lets us describe the chaining of jobs in a DSL (build, guard, rescue, retry, parallel, etc.). The console output displays the workflow of jobs. Strangely enough, the demo worked :-) Best of all, there’s an attempt at providing an ordered display of run jobs, with older jobs at the top and parallel jobs on the same horizontal line. To be frank, there’s a load of improvements to be made in this area, but a discussion after the talk convinced me it’s on the way.

Nicolas sure knows how to hold an audience captive! Besides, it’s a plugin I wasn’t aware of, worth considering when you have a lot of chaining between jobs.

Build Trust in Your Build to Development Flow with JenkinsCI by Fred Simon

Announcement: JFrog offers a repository manager for all Jenkins artifacts, currently used for snapshots, soon for releases.

The speaker is from JFrog: the example doesn’t use Maven but Gradle (sigh). An interesting point is that with the Artifactory Gradle plugin, you can override the configuration of what’s inside the Gradle script, including the deployment repository, so that this configuration can be managed inside the build environment rather than inside the script. I think that’s a nice step up the improvement ladder.

Things of note:

  • A nice touch of the Artifactory plugin is that you can check the licenses of your dependencies, to be alerted when something bad happens, such as a GPL dependency creeping into your build.
  • Maven deploys module by module, and so does Gradle, but the Artifactory plugin doesn’t: it publishes only when all module builds are successful.
  • Artifactory computes all artifact metadata when an artifact is deployed in it: it ignores the metadata sent by Maven (it has trust issues). Also, a remark was made about Nexus’ filesystem storage strategy (whereas Artifactory uses a database – Jackrabbit, if I remember well). As an example, copying from filesystem to filesystem can have really bad performance at the Gb scale.

One opinion of the speaker:

Releases are not about recompiling and such but about renaming, changing metadata and changing dependencies.

I’m not sure I agree.

Frankly, at the end I didn’t pay much attention: there was much talk about Artifactory itself, and not enough about the Jenkins-Artifactory integration. I guess that’s the price to pay for having JFrog as a sponsor. That’s not to say I have anything against Artifactory (it was my first repository server and its UI was light years ahead at the time), just that I expected something else.

Jenkins at the Apache Software Foundation by Olivier Lamy

The Apache Software Foundation has around 100 top-level projects and 2500 committers, so CI is really important. Continuum, BuildBot and Jenkins are all used, but the most used is Jenkins. There are two build instances, one for normal builds, the other for Sonar builds, for security reasons.

The first one runs on Apache Tomcat 6, clustered on 20 nodes, and hosts about 750 jobs. Building some projects (such as Lucene and Hadoop) requires hours. Check directly here.

Lunch

Yes, a geek eats… But I’ve overheard about

Thanks to all the courageous speakers who tried to hold our attention while we had lunch.

Advanced Continuous Deployment with Jenkins by Benoit Moussaud

The session begins with a reminder of what CI is: compile, test and package. Often, we would like deployment too, so as to test the deployment process and the deployed application on the target platform. This is what Continuous Deployment is: including deployment in the build itself.

This allows us to run, in ascending order:

  • Smoke tests
  • Functional tests
  • Performance tests

The idea is to sync the deployment cycle with the development cycle. Two requirements that developers must fulfill in order to allow that:

For each version of the application, I shall provide one single package definition containing all the artifacts and the resource definitions.

The package should be independent of the target environment.

This essentially means one thing: shipped artifacts should be self-sufficient, e.g. include the JDBC driver, the datasource and so on. There are challenges to overcome: application servers’ specifics, secure credentials management, etc. XebiaLabs provides a solution named DeployIt. Of course, a Jenkins plugin is available to achieve that.

In Jenkins, you assemble what constitutes a package (webapps, datasources, SQL scripts, test packages, etc.) and choose the environment to deploy to. Jenkins handles scheduling and launching. The deployment itself is managed by DeployIt: it’s done through plugins, each dedicated to a single task (deployment, copy, running SQL, etc.).

For access control, credentials are securely stored inside DeployIt, so there’s no need for an identity management nightmare on each platform.

Finally, DeployIt started from JavaEE environments but now includes PHP and .Net.

All in all, a very interesting talk that raises many questions, such as how to get the product into some clients; but it’s a path worth investigating for open-minded or agile ones.

Jenkins at Sfeir by Bruno Guedes

Bruno presents the following case study: the decision to implement a software forge from scratch is made at the end of August and the service should open on the 1st of September. What are the components of a successful solution? The key is the Cloud, which brings the following benefits/critical points:

Benefit                          Critical point
No need to order the machines    Service-level agreement
Skip the setup time              Extensibility
Cost reduction                   Location, as near as possible to avoid network latency
Instant development              Security providers
Quick shipping
And still evolutive

Then follows a demo of the CloudBees platform, given an existing project. CloudBees offers both a “cloudified” forge (through Jenkins) and a “cloudified” application server (through Tomcat or JBoss).

Though there weren’t too many demo effects, and the application was deployed and available in the cloud in the end, I’m strangely enough still not convinced by the model. Perhaps I’m too narrow-minded (or too old)? Only the future will tell…

Getting Started with Jenkins by Harpreet Singh, Nicolas De Loof & Stephen Connolly

I didn’t want to go hear about ClearCase in the competing session, so I preferred to hear about the basics, so as to be sure I hadn’t missed anything. A few facts I actually learned:

  • Jenkins has built-in cluster management. The good practice is to use the master node to manage configuration, but to delegate builds to slave nodes
  • The number of executors should be set to the number of CPU cores by default
  • The project description field accepts HTML
  • CVS commit units are files (not changesets), so you’d better configure the job’s quiet period or you’ll launch a commit storm on poor Jenkins
  • Some plugins, including the Maven plugin, are tied to the Jenkins version. Do not update one without the other, or you’ll be (very) sorry.

There are a couple of best practices that need to be mentioned:

  • Use a memorable URL
  • Share port 80 with other applications
  • Use virtual hosts to distinguish multiple applications, not the context path: it’s more flexible in case you have to change the infrastructure
  • Prepare for disk usage growth, or you’ll regret it later: plan for the worst (or the best, depending on your point of view – it means Jenkins will be part of your enterprise culture)

The rest of the session was lost on me, since I couldn’t create a CloudBees account :-( The worst kind of demo effect: sorry, guys.

Integrating PHP Projects with Jenkins by Sebastian Bergmann

It may seem strange, but my interest lies in better builds, and since my current company develops PHP applications, I thought this talk might be interesting.

The session begins with a short history of CI in the PHP world. It’s very similar to Java’s, starting with CruiseControl. The biggest problem with CruiseControl is that it’s very monolithic: you had to get the source code, hack it and build it. Eventually, this approach gave way in the PHP world to phpUnderControl. Moreover, it seems the CruiseControl community was not so friendly toward PHP developers.

Hudson is built on top of a plugin architecture, so it’s much easier to extend (and the community is nicer…). A lot of tools are available to govern quality in PHP. Associated with Jenkins plugins, they can get the job done.

  • PHP_CodeSniffer is a static code analysis tool for PHP, JavaScript and CSS files and outputs a Checkstyle XML file
  • Checkstyle plugin to report the former results
  • CloverPHP
  • PHPUnit
  • PHPCPD
  • PHP_Depend is a port of JDepend and outputs a JDepend XML file
  • JDepend plugin to report previous results
  • PHP Mess Detector (PHPMD) is a spin-off of PHP_Depend and aims to be the PMD of the PHP world
  • PMD plugin to report previous results
  • Violations plugin is used as a central place to aggregate the results of CodeSniffer, CPD and PMD
  • PHP_CodeBrowser generates a browsable representation of PHP code where violations found by CodeSniffer or PMD are highlighted
  • PHPLOC, a tool for measuring simple metrics of your project (lines of code, classes, etc.)
  • Plot Jenkins plugin to plot the metrics produced by the previous tool
  • Jenkins PHP offers templates for Ant scripts and Jenkins jobs for PHP projects

At that time, I had to go. Having no real experience with PHP projects, this material can keep me going for months.

Conclusion

The Jenkins Users Conference Paris 2012 was a great success and I found most sessions very interesting.

Categories: Event Tags: