I was lucky to attend the Jenkins User Conference Paris 2012, and here are some of the notes I took from the different talks. Bear in mind that I'm French and not used to hearing much spoken English 😉
Welcome talk by Kohsuke Kawaguchi.
A little bit of history first. Jenkins began innocently enough, because Kohsuke broke one build too many. In 2004, in order to monitor the success/failure of his builds, he created two little shell scripts. The idea was born, and soon he used the JavaEE stack to build the soon-to-be Jenkins (formerly Hudson). Since its inception, Jenkins has had cultural principles that are still upheld today: a weekly release cycle, a plugin-based architecture, a low barrier to entry and a backward-compatibility policy.
From a usage point of view, many big companies use Jenkins and, strangely enough, Europe is a big user base. Today, there are 535 plugins available, growing at a nice pace. Then followed the evolution of bugs and bugfixes, in constant progression (and a taunt toward Hudson, whose same figure shows a break following the fork).
The most important part, to me at least, is the new features. Most are focused on UI and usability (and to be blunt, there was room for improvement):
- You can now install plugins without restarting. No need to wait for all your jobs to finish to be able to get a new feature!
- The Save button now sticks to the bottom of the page. When you had many plugins installed, the page required a long scroll to reach the damned button…
- Saving itself is now handled by AJAX and you're notified by a message at the top: you stay on the page
- You get a breadcrumb at the top of each page
- From this breadcrumb (and also from other places), you get a contextual menu. This menu provides a way to navigate to the sections of the job configuration page (and the more plugins you have installed, the more sections there are)
Getting by Nicolas de Loof and Matthieu Ancelin
Jenkins is a cron on steroids
Jenkins is now used to do all things continuous: Integration, Delivery (being able to deploy into production) and Deployment (actually deploying into production). In order to automate all the way to production, you need a deployment pipeline, from SCM to production. Undertaken in a naive way, this process could take hours or days: you have to rethink your pipeline to streamline it:
- Pass binaries from job to job (no need to rebuild packages in every step)
- Parallelize tasks
- Cleanup in case of failure
- Manage resources when unavailable
If you want to do that in Jenkins, there are some Jenkins features and plugins to use (Naginator, Promotion, and so on). This is clearly not the way to go if you want to manage dozens of project builds, because you'll quickly lose sight of what you're doing. There's no way to get a simple overview of each pipeline, because there is no central definition of the build workflow. Moreover, there can be side effects and, worse, complex interactions between all the plugins and jobs.
The answer to these problems is Build Flow, a dedicated pipeline plugin. Build Flow provides a new kind of job that lets us describe the chaining of jobs in a DSL (parallel, etc.). The console output displays the workflow of jobs. Strangely enough, the demo worked 🙂 Best of all, there's an attempt at providing an ordered display of run jobs, with older jobs at the top and parallel jobs on the same horizontal line. To be frank, there's a load of improvement to be made in this area, but a discussion after the talk convinced me it's on the way.
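To give an idea of what such a flow looks like, here is a minimal sketch of a Build Flow definition, which is a short Groovy script using the plugin's DSL (the job names are hypothetical):

```groovy
// Build Flow DSL sketch — job names are made up for illustration
build("compile")                      // run the compile job first

parallel (                            // then fan out in parallel
    { build("unit-tests") },
    { build("integration-tests") }
)

guard {                               // guard/rescue: cleanup in case of failure
    build("deploy-to-staging")
} rescue {
    build("cleanup-staging")
}
```

The script lives in the Build Flow job's configuration, so the whole pipeline has the central definition the talk complained was missing.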
Nicolas sure knows how to hold an audience captive! In any case, this is a plugin I wasn't aware of that's worth considering when you have a lot of chaining between jobs.
Build Trust in Your Build to Development Flow with JenkinsCI by Fred Simon
Announcement: JFrog offers a repository manager for all Jenkins artifacts, currently used for snapshots and soon for releases.
The speaker is from JFrog: the example doesn't use Maven but Gradle (sigh). An interesting point is that with the Gradle plugin, you can override the configuration of what's inside the Gradle script, including the deployment repository, so that you can manage this configuration inside the build environment and not inside the script. I think that's a nice step up the improvement ladder.
Things of note:
- A nice touch of the Artifactory plugin is that it can check the licenses of your dependencies and alert you when something bad happens, such as a GPL dependency creeping into your build.
- Maven deploys module by module, and so does Gradle, but the Artifactory plugin doesn't: it publishes only when all module builds are successful.
- Artifactory computes all artifact metadata when an artifact is deployed to it: it ignores the metadata sent by Maven (it has trust issues). Also, a remark is made about Nexus's filesystem strategy (whereas Artifactory uses a database - Jackrabbit, if I remember correctly). As an example, copying from filesystem to filesystem can have really bad performance at the GB scale.
One opinion of the speaker:
Releases are not about recompiling and such but about renaming, changing metadata and changing dependencies.
I’m not sure I agree.
Frankly, toward the end I didn't pay much attention: there was much talk about Artifactory itself and not enough about the Jenkins-Artifactory integration. I guess that's the price to pay for having JFrog as a sponsor. That's not to say I have something against Artifactory (it was my first repository server and its UI was light years ahead at the time), just that I expected something else.
Jenkins at the Apache Software Foundation by Olivier Lamy
The Apache Software Foundation has around 100 top-level projects and 2,500 committers, so CI is really important. Continuum, BuildBot and Jenkins are used, but the most used is Jenkins. There are two build instances: one for normal builds, the other for Sonar builds, for security reasons.
The first one runs on Apache Tomcat 6 clustered on 20 nodes and hosts about 750 jobs. Building some projects (such as Lucene and Hadoop) requires hours. Check it directly here.
Advanced Continuous Deployment with Jenkins by Benoit Moussaud
The session begins with a reminder of what CI is: compile, test and package. Often, we would also like deployment, so as to test the deployment process and to test the deployed application on the target platform. This is what Continuous Deployment is: including deployment in the build itself.
This will allow us to pass, in ascending order:
- Smoke tests
- Functional tests
- Performance tests
The idea is to sync the deployment cycle with the development cycle. Developers must fulfill two requirements to allow that:
For each version of the application, I shall provide one single package definition containing all the artifacts and the resource definitions. The package should be independent of the target environment.
This essentially means one thing: shipped artifacts should be self-sufficient, including e.g. the JDBC driver, the datasource and so on. There are challenges to overcome: application servers' specifics, secure credentials management, etc. XebiaLabs provides a solution named DeployIt. Of course, there's a Jenkins plugin available to achieve that.
In Jenkins, you assemble what constitutes a package (webapps, datasources, SQL scripts, test packages, etc.) and you choose the environment to deploy to. Jenkins handles scheduling and launch. Finally, the deployment itself is managed by DeployIt: it's done through plugins, each dedicated to a single task (deployment, copying, running SQL, etc.).
For access control, credentials are securely stored inside DeployIt, so there's no identity-management nightmare on each platform.
Finally, DeployIt started from JavaEE environments but now includes PHP and .Net.
All in all, a very interesting talk that raises many questions, such as how to get the product into some clients, but a path worth investigating for open-minded or agile ones.
Jenkins at Sfeir by Bruno Guedes
Bruno presents the following case study: in order to implement a software forge from scratch, the decision is made at the end of August and the service should open on the 1st of September. What are the components of a successful solution? The key is the Cloud, which brings the following benefits/critical points:
- No need to order the machines
- Skip the setup time
- A location as near as possible, to avoid network latency
- And it can still evolve
A demo follows of using the CloudBees platform on an existing project. CloudBees offers both a "cloudified" forge (through Jenkins) and a "cloudified" application server (through Tomcat or JBoss).
Though there weren't too many demo glitches, and the application was deployed and available in the cloud in the end, I'm strangely enough still not convinced by the model. Perhaps I'm too narrow-minded (or too old)? Only the future will tell…
Getting Started with Jenkins by Harpreet Singh, Nicolas De Loof & Stephen Connolly
I didn't want to go hear about ClearCase in the competing session, so I preferred to hear about the basics, so as to be sure I hadn't missed anything. A few facts I actually learned:
- Jenkins has built-in cluster management. The good practice is to use the master node to manage configuration but to delegate builds to slave nodes
- The number of executors should be set to the number of CPU cores by default
- Project’s description field accepts HTML
- CVS commits are per-file (not batched), so you'd better configure the job's quiet period or a multi-file commit will launch a build storm on poor Jenkins
- Some plugins, including the Maven plugin, are tied to the Jenkins version. Do not update them or you’ll be (very) sorry.
There are a couple of best practices that need to be mentioned:
- Use a memorable URL
- Share port 80 with other applications
- Use a virtual host to distinguish multiple applications, not the context path: it's more flexible in case you have to change the infrastructure
- Prepare for disk usage growth, or you'll regret it later: plan for the worst (or the best, depending on your point of view - it means Jenkins will be part of your enterprise culture)
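For the virtual-host advice above, here is a minimal sketch of what a front end proxying port 80 to Jenkins could look like, using nginx (the hostname and Jenkins port are assumptions; Apache httpd with mod_proxy would work just as well):

```nginx
# Hypothetical virtual host for Jenkins; assumes Jenkins listens on 127.0.0.1:8080
server {
    listen 80;
    server_name jenkins.example.com;   # distinguish by host name, not context path

    location / {
        proxy_pass http://127.0.0.1:8080;
        # forward the original host and client address to Jenkins
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Moving Jenkins to another machine later then only means changing the `proxy_pass` target, not every bookmark and job URL.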
The rest of the session was lost on me since I couldn't create a CloudBees account 🙁 The worst demo glitch: sorry, guys.
Integrating PHP Projects with Jenkins by Sebastian Bergmann
It may seem strange, but my interest lies in better builds, and since my current company develops PHP applications, I thought this talk might be interesting.
The session begins with a short history of CI in the PHP world. It's very similar to Java, with CruiseControl. The biggest problem with CruiseControl is that it's very monolithic: you had to get the source code, hack it and build it. Eventually, this approach in the PHP world gave way to phpUnderControl. Moreover, it seems the CruiseControl community was not so friendly to PHP developers.
Hudson is built on top of a plugin architecture, so it's much easier to extend (and the community is nicer…). A lot of tools are available to govern quality in PHP. Associated with Jenkins plugins, they can get the job done.
- PHP_CodeSniffer detects coding-standard violations and outputs them in Checkstyle's XML format
- Checkstyle plugin to report the former results
- PHP_Depend is a port of JDepend and outputs a JDepend XML file
- JDepend plugin to report previous results
- PHP Mess Detector (PHPMD) is a spin-off of PHP_Depend and aims to be the PMD of the PHP world
- PMD plugin to report previous results
- Violations plugin is used as a central place to aggregate the results of CodeSniffer, CPD and PMD
- PHP_CodeBrowser generates a browsable representation of PHP code where violations found by CodeSniffer or PMD are highlighted
- PHPLOC, a tool for measuring simple metrics on your project (lines of code, number of classes, etc.)
- Plot plugin to chart the metrics measured by the previous tool
- Jenkins PHP offers templates for Ant scripts and Jenkins jobs for PHP projects
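The glue between these tools and Jenkins is an Ant script that runs each tool and writes its log where the matching plugin expects to find it. A minimal sketch, assuming phploc and phpmd are on the PATH and that sources live in `src` (the paths and rule sets are made up, and the real jenkins-php template covers many more tools):

```xml
<!-- Hypothetical minimal build.xml in the jenkins-php style -->
<project name="php-ci" default="static-analysis" basedir=".">
  <target name="prepare">
    <mkdir dir="build/logs"/>
  </target>

  <target name="static-analysis" depends="prepare">
    <!-- size metrics, logged as CSV for the Plot plugin -->
    <exec executable="phploc">
      <arg value="--log-csv"/>
      <arg value="build/logs/phploc.csv"/>
      <arg path="src"/>
    </exec>
    <!-- mess detection, logged in PMD's XML format for the PMD plugin -->
    <exec executable="phpmd">
      <arg path="src"/>
      <arg value="xml"/>
      <arg value="codesize,unusedcode"/>
      <arg value="--reportfile"/>
      <arg value="build/logs/pmd.xml"/>
    </exec>
  </target>
</project>
```

The Jenkins job then simply invokes the Ant target and points each reporting plugin at the corresponding file under `build/logs`.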
At that point, I had to go. Having no real experience with PHP projects, this material can keep me going for months.
Jenkins Conference Paris 2012 was a great success and I found most sessions very interesting.