Forget the language, what matters is the tooling

November 1st, 2015 No comments

Not a week passes without my stumbling upon a post claiming language X is superior to all others, that it offers you things you cannot do in other languages, that it will make your kitchenware shine brighter and sometimes even return lost love. I wouldn’t mind these claims, because some features really open my Java developer’s mind to what is lacking in the language I use now, but in general, such posts are just bashing another language – usually Java.

For those that love to bitch, here’s a quote that might be of interest:

There are only two kinds of programming languages: those people always bitch about and those nobody uses.

— Bjarne Stroustrup

That said, my current situation spawned some thinking. I was trying to migrate the Android application I’m developing in my spare time to Kotlin. I used Android Studio to do that, as JetBrains provides a migration tool through the Kotlin plugin. The process is quite straightforward, requiring only minor adjustments for some files. This made me realize that the language is not the most important thing: the tooling is. You can have the best language in the world – and it seems each person has their own definition of “best” – but if the tooling is lacking, it amounts to nothing.

Take Scala for example. I don’t pretend to be an expert in Scala, but I know enough to know it’s very powerful. However, if you don’t have a tool to handle advanced language features, such as implicit parameters, you’re in for a lot of trouble: you’d better have an advanced IDE to display where they come from. To go beyond mere languages, the same can be said about technologies such as Dependency Injection – whether achieved through Spring, CDI or aspects from AOP.

Another fitting example would be XML. It might not be everyone’s opinion, but XML is still very much used in the so-called enterprise world. However, beyond a hundred lines and a few namespaces, XML becomes quite hard to read without help. Enter Eclipse or XMLSpy and presto, XML files can be displayed in a very handy tree-like representation.

Conversely, successful languages (and technologies) come with tooling. Look around and see for yourself. Still, I don’t pretend to know the cause from the consequence: are languages successful because of their tooling, or are tools built around successful languages? Perhaps it’s both…

Previously, I didn’t believe Kotlin and its brethren had much chance of success. Given that JetBrains is behind both Kotlin and IntelliJ IDEA, I now believe Kotlin might have a very bright future ahead.

Categories: Development Tags:

Using a framework or not?

October 25th, 2015 4 comments

If you haven’t read Uncle Bob’s latest blog post, you should do so now. For impatient readers, it can be summed up as:

tl;dr: You don’t need frameworks, just write the appropriate code.

It’s not the first time I find myself at odds with Uncle Bob’s writing, which feels awkward given his experience and aura. Another post that drew my attention was about JavaScript. Again, you can read it… or:

tl;dr: Better to understand the underlying layer than become a framework specialist.

That I can understand. I mean, in the JavaScript ecosystem, a new framework pops up every 6 months and becomes the new hype. (Fortunately, I’m a Java guy and it doesn’t happen so much in our ecosystem.) Yet, is it so strange that recruiters want someone productive in the framework from day 1? To draw a parallel, I’d really like the pilot of my plane to be experienced in that exact type of plane, not a random pilot who can learn “on the fly” (pun intended).

As for software, why would anyone in their sane mind still use a framework? Because frameworks are about being productive, i.e. saving precious time! Something like Joda-Time might not be the best thing since sliced bread, but it’s nicely designed and readily available. Could I cook up something like it? Probably, given enough time. But here lies the rub: in a project, you don’t get time to create your own low-level libraries. There’s a budget and you’d better not overspend. Anyway, should I have time left, I would prefer to increase my test coverage, run some mutation testing or refactor that part I’m not very proud of.
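As an aside on the Joda-Time point: its ideas were later standardized as java.time in Java 8, and a few lines show the kind of well-designed date API you would rather reuse than rewrite yourself (a sketch using only the JDK):

```java
import java.time.LocalDate;
import java.time.Period;

public class DateDemo {

    public static void main(String[] args) {
        // Immutable dates with fluent arithmetic - the design Joda-Time pioneered
        LocalDate published = LocalDate.of(2015, 10, 25);
        LocalDate nextPost = published.plusWeeks(2);
        System.out.println(nextPost); // 2015-11-08

        // Human-oriented difference between two dates
        Period gap = Period.between(published, nextPost);
        System.out.println(gap.getDays()); // 14
    }
}
```

Getting calendar arithmetic, time zones and leap years right on your own would easily blow the budget mentioned above.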

Refusing to reuse even has a name in software engineering: the NIH (Not Invented Here) syndrome. There have been several examples of developers developing their own custom framework instead of using an existing one, because they thought they understood the problem better than others. It usually ends up in a tangled mess of half-baked code, with zero documentation and a huge learning cost. Interestingly enough, the examples I personally witnessed first-hand were all about data persistence. Folks, don’t do this at home! Or better, do it at home for learning purposes, just don’t do it at work…

There are a couple of hints that let you spot NIH:

  • A company policy prevents you (or hinders you so much it effectively prevents you) from using a third-party library for “security” reasons – even though Open Source code is arguably safer than closed source, thanks to the power of review. You’re welcome to read this piece of art from Oracle stating the opposite, which has since been removed.
  • A person enforces the use of the code he developed – even though it’s not documented or tested and a solid Open Source framework does the same thing – and his position of power prevents anyone from doing otherwise

Of course, when you need to use a framework, that doesn’t mean you should use the first one you stumble upon. I firmly believe that knowing a couple of reliable frameworks in different domains (logging, reactive, UI, metrics, etc.) should be part of any well-rounded software engineer’s portfolio.

One has to look at a few things before choosing a framework, but they can be summed up in one question: “Will it be maintained?”. This criterion can be evaluated through a couple of factors:

  • Number of committers
  • Frequency of commits
  • Date of last commit
  • Documentation
  • Nature of the licence
  • Size and activity of the community
  • etc.

All the frameworks I use (SLF4J for logging, Vaadin for UI, Spring for Dependency Injection, etc.) score highly in those areas and will probably outlive the applications I use them in.

I agree with Uncle Bob on one point, though: one has to understand how things work underneath… I remember 10 years ago, when the Java EE world (J2EE at the time) revolved around Struts. Junior developers were brought in, trained in Struts and released into the wild to code Struts applications – and most of them didn’t care about knowing the Servlet API underneath! I was shocked that as a software developer, one would not be curious enough to understand what happens under the covers.

But you have to stop somewhere. Software development has a history of adding abstraction layers upon layers to get closer to the domain model. So it sure is nice to know that Vaadin is built upon AJAX and JSON, that it produces HTML, and that everything is sent over HTTP. But frankly, I will personally stop there. I don’t need to understand how every bit of memory is handled in the end, do I? I’m afraid dogma doesn’t handle real-life constraints well. That said, the more you know, the better…

In conclusion, I’d suggest everyone form their own opinion: try using frameworks, try doing without, and see how it goes depending on the context. But more importantly, don’t use anyone else’s opinion as a substitute for arguments when taking one stance or the other.

Categories: Development Tags:

My case for Event Bus

October 18th, 2015 3 comments

Note: as I attended the Joker conference this week, I have to take a break from my “Going the microservices way” series. However, I hope you, dear readers, will still enjoy this article.

Informal talks with colleagues around the coffee machine are a great way to improve your developer skills. Most of the time, people don’t agree, and that’s a good way to learn about thinking in context. One of the latest subjects was the Event Bus. Though I’m no Android developer, I’m part of a Mobile team that uses an Event Bus to dispatch events among the different components of the application. Amazingly enough (at least for me), one team member doesn’t like the Event Bus. As coincidences go, I just happened to read this post a few days later.

I guess it’s my cue to make a case for Event Bus.

There are several abstractions available to implement messaging, one of the first that comes to mind probably being the Observer pattern. It’s implemented in a variety of UI frameworks, such as AWT, Swing, Vaadin, you name it. Though it reverses the caller responsibility from the Subscriber (aka Observer) to the Observable (aka Subject), this pattern has a big drawback: in order for Subscribers to be notified of the Observable’s event firing, the Observable has to hold a reference to a collection of Subscribers. This directly translates into strong coupling between them, though not in the usual sense of the term.

To decrease coupling, the usual solution is to insert a third-party component between the two. You know where I’m headed: this component is the Event Bus. Now, instead of Observers registering with the Subject, they register with the Event Bus, and the Subject publishes events to the Event Bus.

The coupling between Subject and Observer is now replaced by a coupling between Subject and Event Bus: the Observable doesn’t need to implement a register() method with the Subscriber’s type as a parameter. Thus, the event can come from any source, and that source can change over time with no refactoring needed on the Subscriber’s part.

In the Java world, I know of 3 different libraries implementing Event Bus:

In the latter two, the implementation of the notify() method is replaced by the use of annotations, while in the former, it’s based on an agreed-upon method signature (i.e. onEvent(Event)). Thus, any type can be notified, removing the need to implement an Observer interface, so that coupling is reduced even further.
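To make the idea concrete, here’s a minimal, hypothetical sketch of annotation-driven dispatch in plain Java – it mimics the spirit of the libraries above but is not any of their actual APIs. The bus scans registered objects for methods bearing a @Subscribe annotation and invokes those whose single parameter matches the event type:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Marks a method as an event handler (illustrative, not a real library's annotation)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Subscribe {}

class AnnotationEventBus {

    private final List<Object> subscribers = new ArrayList<>();

    // Any object can subscribe - no interface required
    void register(Object subscriber) {
        subscribers.add(subscriber);
    }

    // Deliver the event to every @Subscribe method whose parameter type matches
    void post(Object event) {
        for (Object subscriber : subscribers) {
            for (Method m : subscriber.getClass().getDeclaredMethods()) {
                if (m.isAnnotationPresent(Subscribe.class)
                        && m.getParameterCount() == 1
                        && m.getParameterTypes()[0].isInstance(event)) {
                    try {
                        m.setAccessible(true);
                        m.invoke(subscriber, event);
                    } catch (ReflectiveOperationException e) {
                        throw new RuntimeException(e);
                    }
                }
            }
        }
    }
}
```

A subscriber is then any class with an annotated method, e.g. @Subscribe void onMessage(String s) { … } – no interface to implement, and no reference from the publisher to the subscriber.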

Finally, all those implementations offer a way to pass an additional payload in the form of an event. This is the final design:

Advantages of this design are manifold:

  • No need to reference both the Subscriber and the Observable in a code fragment to make the latter register the former
  • No need to implement an Observer interface
  • Ability to pass additional payload
  • Many-to-many relations: one Subscriber can subscribe to different events, and an Observable can send different kinds of events

The only con – though whether it’s an issue depends on your context – is the order in which events are published. In that case, a traditional coupled method-calling approach is better suited. In all other cases – especially in the user interface layer, and even more so when it’s designed with true graphical components (AWT, Swing, Android, Vaadin, etc.) – you should give the Event Bus a shot, given all its benefits.

For example, here’s an old but still relevant article on integrating Guava Event Bus with Vaadin.

Going the microservices way – part 3

October 11th, 2015 No comments

In the first post of this series, I created a simple microservice based on a Spring Boot + Data JPA stack to display a list of available products in JSON format. In the second part, I demoed how this app could be deployed on Pivotal Cloud Foundry. In this post, I’ll demo the changes required to deploy it on Heroku.


As with PCF, Heroku requires a dedicated local tool; for Heroku, it’s called the Toolbelt. Once it’s installed, one needs to log in to one’s account through the toolbelt:

heroku login

The next step is to create the application on Heroku. The main difference between Cloud Foundry and Heroku is that the former deploys ready binaries while Heroku builds them from source. Creating the Heroku app will also create a remote Git repository to push to, as Heroku uses Git to manage code.

heroku create
Creating salty-harbor-2168... done, stack is cedar-14
Git remote heroku added

The heroku create command also updates the .git config by adding a new heroku remote. Let’s push to the remote repository:

git push heroku master
Counting objects: 21, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (14/14), done.
Writing objects: 100% (21/21), 2.52 KiB | 0 bytes/s, done.
Total 21 (delta 1), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote: -----> Java app detected
remote: -----> Installing OpenJDK 1.8... done
remote: -----> Installing Maven 3.3.3... done
remote: -----> Executing: mvn -B -DskipTests=true clean install
remote:        [INFO] Scanning for projects...
// Here the app is built remotely with Maven
remote:        [INFO] ------------------------------------------------------------------------
remote:        [INFO] BUILD SUCCESS
remote:        [INFO] ------------------------------------------------------------------------
remote:        [INFO] Total time: 51.955 s
remote:        [INFO] Finished at: 2015-10-03T09:13:31+00:00
remote:        [INFO] Final Memory: 32M/473M
remote:        [INFO] ------------------------------------------------------------------------
remote: -----> Discovering process types
remote:        Procfile declares types -> (none)
remote: -----> Compressing... done, 74.8MB
remote: -----> Launching... done, v5
remote: deployed to Heroku
remote: Verifying deploy.... done.
 * [new branch]      master -> master

At this point, one needs to tell Heroku how to launch the newly built app. This is done through a Procfile located at the root of the project. It should contain a hint to Heroku that we are running a web application – reachable through http(s) – and the standard Java command line. Also, the web port should be bound to the $PORT environment variable documented by Heroku:

web: java -Dserver.port=$PORT -jar target/microservice-sample.jar

The application doesn’t work yet, as the log will tell you. To display the remote log, type:

heroku logs --tail

There should be something like that:

Web process failed to bind to $PORT within 60 seconds of launch

That hints that a web node is required – or in Heroku’s parlance, a dyno. Let’s start one:

heroku ps:scale web=1

The application should now be up (be patient: since it’s a free plan, the app is probably sleeping). Yet, something is still missing: the app is still running on the embedded H2 database. We should switch to a more resilient one. By default, Heroku provides a free PostgreSQL development database. Creating such a database can be done either through the user interface or the command line.

Spring Boot’s documentation describes the Spring Cloud Connectors library, which is able to automatically detect that the app is running on Heroku and to create a datasource bound to the PostgreSQL database provided by Heroku. However, despite my best efforts, I haven’t been able to make this work, running each time into the following: No unique service matching interface javax.sql.DataSource found. Expected 1, found 0.

Time to get a little creative. Instead of sniffing the database and its configuration, let’s configure it explicitly. This requires creating a dedicated profile configuration properties file for Heroku, src/main/resources/application-heroku.yml:

    spring:
      datasource:
        url: jdbc:postgresql://<url>
        username: <username>
        password: <password>
        driver-class-name: org.postgresql.Driver
      jpa:
        database-platform: org.hibernate.dialect.PostgreSQLDialect

You can find the exact database connection settings in the Heroku user interface. Note that the dialect seems to be required. Also, the POM should be updated to add the PostgreSQL driver to the dependencies.
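The dependency addition might look like the following (treat the exact coordinates as an assumption to verify against Maven Central; the runtime scope reflects that the driver is only needed when the app runs):

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
</dependency>
```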

Finally, to activate the heroku profile on Heroku, change the Procfile as such:

web: java -Dserver.port=$PORT -Dspring.profiles.active=heroku -jar target/microservice-sample.jar

Commit and push to Heroku again; the application should be updated to use the provided PostgreSQL database.

Next week, I’ll create a new shopping bag microservice that depends on this one, and design for failure.

Categories: Java Tags:

Going the microservices way – part 2

October 4th, 2015 No comments

In my previous post, I developed a simple microservice REST application based on Spring Boot in a few lines of code. Now it’s time to put this application in the cloud. In the rest of the article, I assume you already have an account configured with the provider.

Pivotal Cloud Foundry

Pivotal Cloud Foundry is the cloud offering from Pivotal, based on Cloud Foundry. By registering, you get a 60-day free trial, which I happily use.

Pivotal CF requires a local executable to push apps. There’s one such executable for each major platform (Windows, Mac, Linux deb and Linux rpm). Once installed, it requires authentication with the same credentials as your Pivotal CF account.

To push, one uses the cf push command. This command needs at least the application name, which can be provided either on the command line (cf push myapp) or via a manifest.yml file. If one doesn’t provide a manifest, every file (from where the command is launched) will be pushed recursively to the cloud; however, Pivotal CF won’t be able to understand the format of the app in this case. Let’s be explicit and provide both a name and the path to the executable JAR in the manifest.yml:

applications:
-   name: product
    path: target/microservice-sample.jar

Notice I didn’t put the version in the JAR name. By default, Maven will generate a standard JAR with only the app’s resources and, through the Spring Boot plugin, a fat JAR that includes all the necessary libraries. This last JAR is the one to be pushed to Pivotal CF. This requires a tweak to the POM to remove the version from the generated file name:
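The usual Maven way to achieve this is the finalName element in the build section – a minimal sketch:

```xml
<build>
    <finalName>microservice-sample</finalName>
</build>
```

With this, the Spring Boot repackaged JAR ends up as target/microservice-sample.jar, matching the path declared in the manifest.yml.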


Calling cf push at this point will push the fat JAR to Pivotal CF. The output should read something like this:

Using manifest file /Volumes/Data/IdeaProjects/microservice-sample/manifest.yml

Updating app product in org frankel / space development as [email protected]..

Uploading product...
Uploading app files from: /Volumes/Data/IdeaProjects/microservice-sample/target/microservice-sample.jar
Uploading 871.1K, 109 files
Done uploading               

Stopping app product in org frankel / space development as [email protected]..

Starting app product in org frankel / space development as [email protected]..
-----> Downloaded app package (26M)
-----> Java Buildpack Version: v3.2
-----> Downloading Open Jdk JRE 1.8.0_60 from (0.9s)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.0s)
-----> Downloading Open JDK Like Memory Calculator 2.0.0_RELEASE from (0.0s)
       Memory Settings: -XX:MetaspaceSize=104857K -Xmx768M -XX:MaxMetaspaceSize=104857K -Xss1M -Xms768M
-----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from (0.0s)

-----> Uploading droplet (71M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


App product was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.0_RELEASE -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,native:10,stack:5 -memoryInitials=heap:100%,metaspace:100% -totMemory=$MEMORY_LIMIT) && SERVER_PORT=$PORT $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/ $CALCULATED_MEMORY org.springframework.boot.loader.JarLauncher`

Showing health and status for app product in org frankel / space development as [email protected]..

requested state: started
instances: 1/1
usage: 1G x 1 instances
last uploaded: Sun Sep 27 09:05:49 UTC 2015
stack: cflinuxfs2
buildpack: java-buildpack=v3.2- java-main open-jdk-like-jre=1.8.0_60 open-jdk-like-memory-calculator=2.0.0_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu    memory         disk           details   
#0   running   2015-09-27 11:06:35 AM   0.0%   635.5M of 1G   150.4M of 1G 

With the modified configuration, Pivotal CF is able to detect it’s a Java Spring Boot application and deploy it. The app is now accessible online!

However, this application doesn’t reflect a real-world one, for it still uses the embedded H2 database. Let’s decouple the database from the application, but only in the cloud, as H2 should still be used locally.

The first step is to create a new database service in Pivotal CF. Nothing fancy here: as JPA is used, we will use a standard SQL database. The ClearDB MySQL service is available in the marketplace, and its Spark DB plan is free. Let’s create a new instance for our app and give it a relevant name (mysql-service in my case). This can be done either in the Pivotal CF GUI or via the command line.

The second step is to bind the service to the app; this can also be achieved either way, or even by declaring it in the manifest.yml.

The third step is to actually use it, and thanks to Pivotal CF’s auto-reconfiguration feature, there’s nothing to do at this point! During deployment, the platform will introspect the app and replace the existing H2 datasource with one wrapping the MySQL database. This can be asserted by checking the actuator (provided it has been added as a dependency).
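For reference, adding the actuator is a single dependency in the POM, its version being managed by the Spring Boot parent:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```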

This is it for pushing the Spring Boot app to Pivotal CF. The code is available on Github. Next week, let’s deploy to Heroku and check if there are any differences.

Going the microservices way – part 1

September 27th, 2015 1 comment

Microservices are trending right now, whether you like it or not. There are good reasons for that, as they resolve many issues organizations are faced with. They also open a Pandora’s box, as new issues pop up every now and then… But that is a story for another day: in the end, microservices are here to stay.

In this series of articles, I’ll take a simple Spring Boot app ecosystem and turn it into microservices to deploy them into the cloud. As an example, I’ll use an e-commerce shop, which requires different services such as:

  • an account service
  • a product service
  • a cart service
  • etc.

This week is dedicated to creating a sample REST application that lists products. It is based on Spring Boot, because Boot makes developing such an application a breeze, as we will see in the rest of this article.

Let’s use Maven. The relevant part of the POM is the following:


Easy enough:

  1. Use Java 8
  2. Inherit from Spring Boot parent POM
  3. Add Spring Data JPA & Spring Data REST dependencies
  4. Add a database provider, h2 for now
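Following the four points above, the relevant POM fragment looks roughly like this (the parent version is illustrative – check the Github repo for the exact one):

```xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.2.6.RELEASE</version>
</parent>

<properties>
    <java.version>1.8</java.version>
</properties>

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-rest</artifactId>
    </dependency>
    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
    </dependency>
</dependencies>
```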

The next step is the application entry point: the standard Spring Boot main class:

@SpringBootApplication
public class ProductApplication {

    public static void main(String... args) {
        SpringApplication.run(ProductApplication.class, args);
    }
}
It’s very straightforward, thanks to Spring Boot inferring which dependencies are on the classpath.

Besides that, it’s just adding a simple Product entity class and a Spring Data repository interface:

@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = AUTO)
    private long id;

    private String name;

    // Constructors and getters
}

public interface ProductRepository extends JpaRepository<Product, Long> {}

At this point, we’re done. Spring Data REST will automatically provide a REST endpoint to the repository. After executing the main class of the application, browsing to http://localhost:8080/products will yield the list of available products in JSON format.

If you don’t believe how easy this is, you’re welcome to take a look at the Github repo and just launch the app with mvn spring-boot:run. A script to populate initial data is included.

Next week, I’ll try to deploy the application to the cloud (probably one of Cloud Foundry, Heroku and YaaS).

Categories: Java Tags:

SpringOne2GX 2015

September 20th, 2015 No comments

This week, I had the privilege to speak at SpringOne2GX in Washington D.C., in not one but 2 talks:

Apart from preparing and rehearsing, I also used the occasion to attend some talks. Here follows a summary of the best.

Spring Framework, the ultimate configuration battle

The talk compared 3 methods to configure a Spring application: good old XML, JavaConfig and Groovy. There were a number of use cases, and each speaker implemented them in his dedicated way. The talk was very entertaining, as the 3 speakers seemed to really enjoy their show. I think JavaConfig was downplayed in some aspects – such as not using an anonymous inner class – to make Groovy shinier. However, I learned that XML and Groovy produce bean definitions that are instantiated later, while JavaConfig returns the beans directly.

Hands on Spring Security

A nice talk from Spring Security’s lead himself, with plenty of demos highlighting different security problems. For me, the magical moment was when the browser displayed an image that triggered a JS script – thanks to Internet Explorer’s content sniffing. I think security is often undervalued, and that talk reminds you of it.

Intro to Spring Boot for the web tier

Spring Boot developers showed how to kick-start a web application, from an empty app to a complete, fully-configured one. A step-by-step demo, just as I like them, with each step the basis for the next one. Special mention to the error page: it’s worth a look.

Apache Spark for Big Data Processing

The talk was separated into two parts, with a speaker for each: the first presented raw Apache Spark, and it was quite possible to follow thanks to the many drawings and a demo; the second described Spring XD, but unfortunately, slides with bullet points were not enough given my existing knowledge.

Comparing Hot JavaScript Frameworks: AngularJS, Ember.js and React.js

This had nothing to do with Spring, but since I don’t do JavaScript, I decided to go there to get an overview of the beasts. Plus, the speaker was Matt Raible, who has kind of specialized in comparing frameworks, and he’s a good speaker. The talk was entertaining, but the conclusion was the usual one: do as you want – which kind of defeats the purpose.

Reactive Web Applications

Last but certainly not least, this talk really blew my mind. The good folks at Spring are aware of the Reactive wave and are surfing it at full speed. In this session, the presenters demoed a web application that implemented the Reactive paradigm using different APIs (I remember RxJava and Netty, but I’m afraid there were more): the server served content in a non-blocking way, depending on the speed of the subscribing client! I think when this becomes generally available, web developers – myself included – will have to rethink the way they model client-server interactions. No more request-response, but infinite streams…


The 3 main ideas I come back with are Spring Boot, Cloud Foundry and Reactive. They are either already in place and evolving at light speed, or soon to be implemented. Watch for them: they’re going to be the next big things (or already are).
However, as always, talks come only second to social interactions, meeting conference pals or getting acquainted with new people and their ideas. It has been fun, folks; hope to see you next year!

Categories: Event Tags:

True singletons with Dagger 2

September 6th, 2015 No comments

I wrote about Dagger 2 some time ago. However, I still don’t understand every nook and cranny. In particular, the @Singleton annotation can be quite misleading, as user Zhuiden was kind enough to point out:

If you create a new ApplicationComponent each time you inject, you will get a new instance in every place where you inject; and you will not actually have singletons where you expect singletons. An ApplicationComponent should be managed by the Application and made accessible throughout the application, and the Activity should have nothing to do with its creation.

After a quick check, I could only agree. The Singleton pattern is only applied within the context of a specific @Component, and a new component is created each time you call:


Thus, the only problem is to instantiate the component once and store it in a scope available to every class in the application. Guess what, this scope exists; only 2 simple steps are required.

  1. Extend the Application class and, in the onCreate() method, create the XXXComponent instance and store it as a static attribute:
    public class Global extends Application {
        private static ApplicationComponent applicationComponent;
        public static ApplicationComponent getApplicationComponent() {
            return applicationComponent;
        }
        @Override
        public void onCreate() {
            super.onCreate();
            applicationComponent = DaggerApplicationComponent.create();
        }
    }
  2. The next step is to hook the created class into the application lifecycle in the Android manifest:
    <?xml version="1.0" encoding="utf-8"?>
    <manifest xmlns:android="" package="ch.frankel.todo">
      <application android:name=".shared.Global"/>
    </manifest>

At this point, usage is quite straightforward: wherever the first snippet of this article created a new component, call Global.getApplicationComponent() instead to retrieve the single stored instance.


This achieves real singletons in Dagger 2.

Categories: Java Tags: , ,

Running your domain on HTTPS for free

August 23rd, 2015 1 comment

This blog has run on plain HTTP for a long time: since no transaction takes place, no security is needed (I use SFTP for file transfers). However, since Google’s latest search algorithm change, I’ve noticed a sharp decrease in the number of monthly visits, from more than 20k to around 13k.

While my goal has never been to have the highest number of visits, it’s still good feedback (as well as a nice warm feeling). As turning on HTTPS didn’t seem like a big deal, I thought “Why not?” and asked my hosting provider about it. The answer was that I had to purchase both a fixed IP and the Rapid SSL certificate acquired through them. As the former is a limitation of their infrastructure and the latter simply a Domain Validated certificate sold with a comfortable margin, I thought I could do without both. But first, I wanted to try on a simpler site. The good thing is I have such a site already available: it’s a static site generated offline with Jekyll and themed with Jekyll Bootstrap. The advantages of such an offline process are that the site is served very fast to users and that the security risks are very low (no SQL injection, etc.).

After a quick poll among my colleagues, I found out that Github was the most suitable for my use-case. Github provides Github Pages: basically, you push Markdown documents to Github, and not only does it generate the corresponding HTML for you, but it also makes it available online. My previous publishing process was to generate the pages on my laptop with Jekyll and upload them through SFTP, but Github Pages does that for me; I just have to push the Jekyll-flavored Markdown pages. Plus, Github is available over HTTPS out-of-the-box.

Github Pages provides two kinds of sites: user/organization sites and project sites. There are a couple of differences, but I chose to create the morevaadin Github organization to neatly organize my repository. That means that everything I push to the master branch of the organization’s Pages repository ends up exposed on the domain, with HTTPS as icing on the cake (and the reason I started the whole thing).

Yet, I was not finished, as the remaining issue was to set my own domain on top of that. The first move was to choose a provider that would direct the traffic to Github while providing an SSL certificate at low cost: I found Cloudflare, which achieves just that for… $0. Since I didn’t know about that business, I was a little wary at first, but it seems I’m not their first non-paying customer, and I’m happy so far. Once registered, you have to point your domain to their name servers.

The second step takes care of the routing by adding a DNS record, and it depends on your custom domain. If you’re using an apex domain, you have to add two A records, named after your domain, that point to the Github Pages IPs. If you’re using a subdomain, you have to add a single CNAME record, named after your subdomain, that points to your Github Pages host.

The next step is to enable HTTPS: in front of your DNS record, click on the cloud so that it becomes orange. This means Cloudflare will handle the request; conversely, a grey cloud means Cloudflare will only do the routing. On the Crypto tab, enable SSL (with SPDY) by choosing Flexible. That's it for Cloudflare.

Now you have to tell Github Pages that it will serve its content for another domain. At the root of your project, create a file named CNAME that contains your custom domain (but no protocol). Check this example. You also have to tell Jekyll what domain will be used in order for absolute link generation to work. This is achieved by setting the BASE_PATH to your URL (this time with the protocol) in your _config.yml. Check what I did.
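Concretely, still assuming the hypothetical domain www.example.com, the two files would contain:

```text
# CNAME (at the repository root): the custom domain, without protocol
www.example.com

# _config.yml (Jekyll Bootstrap): the same domain, this time with the protocol
BASE_PATH: https://www.example.com
```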

As an added bonus, I added the following:

  • Force HTTPS traffic (redirect everything to HTTPS)
  • Minify JavaScript, CSS and HTML

This should end your journey to free HTTPS. Basically, it boils down to letting Cloudflare act as a proxy between users and your site, adding its SSL magic along the way. Note that if you're interested in security, this doesn't solve everything: the path between Cloudflare and your content is not secured in this architecture.

Now that I know the ropes, I have several options available:

  • Just keep my current hosting provider and put Cloudflare in front
  • Choose another provider and put Cloudflare in front. Do any of you know about a decent and cheap WordPress provider?
  • Take on my current provider’s offer
  • Migrate from WordPress to Jekyll and put Cloudflare in front

I will think about these this week…

Creative Rx usage in testing setup

August 16th, 2015 No comments

I don’t know whether events became part of software engineering with Graphical User Interface interactions, but they are certainly a very convenient way of modeling those interactions. With more and more interconnected systems, asynchronous event management has become an important issue to tackle. With Functional Programming also on the rise, this gave birth to libraries such as RxJava. However, modeling a problem as the handling of a stream of events shouldn’t be restricted to system events. It can also be used in testing, in many different ways.

One common use-case in testing setup is to launch a program, for example an external dependency such as a mock server. In this case, we need to wait until the program has been successfully launched. Conversely, the test should stop as soon as the external program’s launch fails. If the program has a Java API, that’s easy. However, this is rarely the case, and more basic APIs are generally used, such as ProcessBuilder or Runtime.getRuntime().exec():

private ProcessBuilder builder;
private Process process;

@Override
protected void setUp() throws IOException {
    builder = new ProcessBuilder().command("");
    process = builder.start();
}

@Override
protected void tearDown() {
    process.destroy();
}
The traditional way to handle this problem was to put a big Thread.sleep() just after the launch. Not only was it system-dependent, as the launch time changed from system to system, it didn’t handle the case where the launch failed. In this latter case, precious computing time as well as manual relaunch time were lost. Better solutions exist, but they involve many lines of code and some (or even a high) degree of complexity. Wouldn’t it be nice to have a simple and reliable way to start the program and, depending on the output, either continue the setup or fail the test? Rx to the rescue!
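For illustration, here is a minimal sketch of that brittle traditional approach, with echo standing in for a hypothetical external program:

```java
import java.io.IOException;

public class SleepBasedSetup {
    public static void main(String[] args) throws IOException, InterruptedException {
        // "echo" stands in for the real external program (e.g. a mock server)
        Process process = new ProcessBuilder("echo", "ready").start();
        // the traditional fix: sleep an arbitrary, system-dependent delay and hope
        // the program is up; a failed launch goes completely unnoticed here
        Thread.sleep(1000);
        System.out.println("still running: " + process.isAlive());
    }
}
```

Whatever the actual launch time, the sleep is always either too long (wasted time) or too short (flaky tests), which is exactly what the Rx approach below avoids.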

The first step is to create an Observable around the launched process’s input stream:

Observable<String> observable = Observable.create(subscriber -> {
    InputStream stream = process.getInputStream();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
        String line;
        while ((line = reader.readLine()) != null) {
            subscriber.onNext(line);
        }
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
});
In the above snippet:

  • Each time the process writes on the output, a next event is sent
  • When there is no more output, a complete event is sent
  • Finally, if an exception happens, it’s an error event

By Rx definition, any of the last two events mark the end of the sequence.

Once the observable has been created, it just needs to be observed for events. A simple script that emits a single event can be listened to with the following snippet:

BlockingObservable<String> blocking = observable.toBlocking();
blocking.first();

The thing of interest here is the wrapping of the Observable instance in a BlockingObservable. While the former can be combined with other observables, the latter adds methods to manage events. At this point, the first() method will listen to the first (and single) event.

For a more complex script that emits a random number of regular events terminated by a single end event, a code snippet could be:

BlockingObservable<String> blocking = observable
        .filter(line -> line.contains("started"))  // hypothetical end-event marker
        .toBlocking();
blocking.first();

In this case, whatever the number of regular events, the filter() method provides the way to listen to the only event we are interested in.
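To make the mechanics concrete, here is an Rx-free sketch of the same filtering idea using only the JDK: a background thread pumps process output into a queue, and the setup blocks until a (hypothetical) marker line appears or a timeout expires. The marker and command are assumptions for the example:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class WaitForReady {
    // Block until the process emits a line containing the marker, or time out
    public static String awaitLine(Process process, String marker, long timeoutSeconds)
            throws Exception {
        BlockingQueue<String> lines = new LinkedBlockingQueue<>();
        Thread pump = new Thread(() -> {
            try (BufferedReader reader =
                    new BufferedReader(new InputStreamReader(process.getInputStream()))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    lines.put(line); // each output line becomes an "event"
                }
            } catch (Exception ignored) {
            }
        });
        pump.setDaemon(true);
        pump.start();
        long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(timeoutSeconds);
        String line;
        while ((line = lines.poll(Math.max(1, deadline - System.nanoTime()),
                TimeUnit.NANOSECONDS)) != null) {
            if (line.contains(marker)) {
                return line; // the one event we are interested in
            }
        }
        throw new IllegalStateException("marker not seen before timeout");
    }

    public static void main(String[] args) throws Exception {
        // "echo" stands in for the real setup script
        Process process = new ProcessBuilder("echo", "server started").start();
        System.out.println(awaitLine(process, "started", 5));
    }
}
```

Comparing this with the Rx version above shows what the library buys you: the thread, the queue, and the timeout bookkeeping all disappear behind filter() and toBlocking().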

The previous cases do not reflect reality, however. Most of the time, setup scripts should start before and run in parallel to the tests, i.e. the end event is never sent – at least not until after the tests have finished. In this case, there is some threading involved. Rx lets us handle that quite easily:

BlockingObservable<String> blocking = observable
        .subscribeOn(Schedulers.newThread())
        .take(5)
        .toBlocking();

There is a single difference here: the subscriber will listen on a new thread, thanks to the subscribeOn() method. Alternatively, events could have been emitted on another thread with the observeOn() method. Note I replaced the filter() method with take(), to pretend to be interested only in the first 5 events.

At this point, the test setup is finished. Happy testing!

Sources for this article can be found here in IDEA/Maven format.

Categories: Java