Archive for the ‘Java’ Category

Going the microservices way – part 3

October 11th, 2015

In the first post of this series, I created a simple microservice based on a Spring Boot + Data JPA stack to display a list of available products in JSON format. In the second part, I demoed how this app could be deployed on Pivotal Cloud Foundry. In this post, I'll demo the changes required to deploy on Heroku.


As with PCF, Heroku requires a dedicated local tool; Heroku's is called the Toolbelt. Once installed, one needs to log in to one's account through it:

heroku login

The next step is to create the application on Heroku. The main difference between Cloud Foundry and Heroku is that the former deploys ready binaries while Heroku builds them from source. Creating the Heroku app will also create a remote Git repository to push to, as Heroku uses Git to manage code.

heroku create
Creating salty-harbor-2168... done, stack is cedar-14
Git remote heroku added

The heroku create command also updates the .git config by adding a new heroku remote. Let’s push to the remote repository:

git push heroku master
Counting objects: 21, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (14/14), done.
Writing objects: 100% (21/21), 2.52 KiB | 0 bytes/s, done.
Total 21 (delta 1), reused 0 (delta 0)
remote: Compressing source files... done.
remote: Building source:
remote: -----> Java app detected
remote: -----> Installing OpenJDK 1.8... done
remote: -----> Installing Maven 3.3.3... done
remote: -----> Executing: mvn -B -DskipTests=true clean install
remote:        [INFO] Scanning for projects...
// Here the app is built remotely with Maven
remote:        [INFO] ------------------------------------------------------------------------
remote:        [INFO] BUILD SUCCESS
remote:        [INFO] ------------------------------------------------------------------------
remote:        [INFO] Total time: 51.955 s
remote:        [INFO] Finished at: 2015-10-03T09:13:31+00:00
remote:        [INFO] Final Memory: 32M/473M
remote:        [INFO] ------------------------------------------------------------------------
remote: -----> Discovering process types
remote:        Procfile declares types -> (none)
remote: -----> Compressing... done, 74.8MB
remote: -----> Launching... done, v5
remote: deployed to Heroku
remote: Verifying deploy.... done.
 * [new branch]      master -> master

At this point, one needs to tell Heroku how to launch the newly built app. This is done through a Procfile located at the root of the project. It should contain a hint to Heroku that this is a web application – reachable through http(s) – along with the standard Java command line. Also, the web port should be bound to the $PORT environment variable documented by Heroku:

web: java -Dserver.port=$PORT -jar target/microservice-sample.jar

The application doesn’t work yet as the log will tell you. To display the remote log, type:

heroku logs --tail

The output should contain something like this:

Web process failed to bind to $PORT within 60 seconds of launch

That hints that a web node is required, or in Heroku’s parlance, a dyno. Let’s start one:

heroku ps:scale web=1

The application should now be working at its Heroku-provided URL – be patient: since it's a free plan, the app is probably sleeping. Yet, something is still missing: the application still runs on the embedded H2 database. We should switch to a more resilient database. By default, Heroku provides a free PostgreSQL development database. It can be created either through the user interface or the command-line.
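Creating it from the command-line goes through the add-on system – a sketch, assuming the free development plan name in use at the time:

```shell
# Provision the free development PostgreSQL database (plan name may differ)
heroku addons:create heroku-postgresql:hobby-dev

# The generated connection settings end up in the DATABASE_URL config variable
heroku config:get DATABASE_URL
```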

The Spring Boot documentation describes the Spring Cloud Connectors library, which is able to automatically detect that the app runs on Heroku and to create a data source bound to the PostgreSQL database Heroku provides. However, despite my best efforts, I haven't been able to make this work, running each time into the following error: No unique service matching interface javax.sql.DataSource found. Expected 1, found 0.

Time to get a little creative. Instead of letting the platform sniff the database and its configuration, let's configure it explicitly. This requires creating a dedicated profile configuration properties file for Heroku, src/main/resources/application-heroku.yml:

spring:
    datasource:
        url: jdbc:postgresql://<url>
        username: <username>
        password: <password>
        driver-class-name: org.postgresql.Driver
    jpa:
        database-platform: org.hibernate.dialect.PostgreSQLDialect

The exact database connection settings can be found in the Heroku user interface. Note that the dialect seems to be required. Also, the POM should be updated to add the PostgreSQL driver to the dependencies.

Finally, to activate the profile on Heroku, change the Procfile as such:

web: java -Dserver.port=$PORT -Dspring.profiles.active=heroku -jar target/microservice-sample.jar

Commit and push to Heroku again, the application should be updated to use the provided Postgres.

Next week, I’ll create a new shopping bag microservice that depend on this one and design for failure.


Going the microservices way – part 2

October 4th, 2015

In my previous post, I developed a simple microservices REST application based on Spring Boot in a few lines of code. Now is the time to put this application in the cloud. In the rest of this article, I assume you already have an account configured with the provider.

Pivotal Cloud Foundry

Pivotal Cloud Foundry is the cloud offering from Pivotal, based on Cloud Foundry. Registering gives you a 60-day free trial, which I happily used.

Pivotal CF requires a local executable to push apps. There’s one such executable for each major platform (Windows, Mac, Linux deb and Linux rpm). Once installed, it requires authentication with the same credentials as with your Pivotal CF account.

To push, one uses the cf push command. This command needs at least the application name, which can be provided either on the command line – cf push myapp – or via a manifest.yml file. If one doesn't provide a manifest, every file (from where the command is launched) will be pushed recursively to the cloud; however, Pivotal CF won't be able to understand the format of the app in that case. Let's be explicit and provide both a name and the path to the executable JAR in the manifest.yml:

applications:
-   name: product
    path: target/microservice-sample.jar

Notice I didn’t put the version in the JAR name. By default, Maven will generate a standard JAR will only the apps resources and through the Spring Boot plugin, a fat JAR that includes all the necessary libraries. This last JAR is the one to be pushed to Pivotal CF. This requires a tweak to the POM to removing the version name in it:


Calling cf push at this point will push the fat JAR to Pivotal CF. The output should read something like that:

Using manifest file /Volumes/Data/IdeaProjects/microservice-sample/manifest.yml

Updating app product in org frankel / space development as [email protected]..

Uploading product...
Uploading app files from: /Volumes/Data/IdeaProjects/microservice-sample/target/microservice-sample.jar
Uploading 871.1K, 109 files
Done uploading               

Stopping app product in org frankel / space development as [email protected]..

Starting app product in org frankel / space development as [email protected]..
-----> Downloaded app package (26M)
-----> Java Buildpack Version: v3.2
-----> Downloading Open Jdk JRE 1.8.0_60 from (0.9s)
       Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.0s)
-----> Downloading Open JDK Like Memory Calculator 2.0.0_RELEASE from (0.0s)
       Memory Settings: -XX:MetaspaceSize=104857K -Xmx768M -XX:MaxMetaspaceSize=104857K -Xss1M -Xms768M
-----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from (0.0s)

-----> Uploading droplet (71M)

0 of 1 instances running, 1 starting
1 of 1 instances running

App started


App product was started using this command `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.0_RELEASE -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,native:10,stack:5 -memoryInitials=heap:100%,metaspace:100% -totMemory=$MEMORY_LIMIT) && SERVER_PORT=$PORT $PWD/.java-buildpack/open_jdk_jre/bin/java -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/ $CALCULATED_MEMORY org.springframework.boot.loader.JarLauncher`

Showing health and status for app product in org frankel / space development as [email protected]..

requested state: started
instances: 1/1
usage: 1G x 1 instances
last uploaded: Sun Sep 27 09:05:49 UTC 2015
stack: cflinuxfs2
buildpack: java-buildpack=v3.2- java-main open-jdk-like-jre=1.8.0_60 open-jdk-like-memory-calculator=2.0.0_RELEASE spring-auto-reconfiguration=1.10.0_RELEASE

     state     since                    cpu    memory         disk           details   
#0   running   2015-09-27 11:06:35 AM   0.0%   635.5M of 1G   150.4M of 1G 

With the modified configuration, Pivotal CF is able to detect it's a Java Spring Boot application and deploy it. The app is now accessible at the route Pivotal CF assigned!

However, this application doesn't reflect a real-world one, for it still uses the embedded H2 database. Let's decouple the database from the application – but only in the cloud, as H2 should still be used locally.

The first step is to create a new database service in Pivotal CF. Nothing fancy here: as JPA is used, we will use a standard SQL database. The ClearDB MySQL service is available in the marketplace, and its Spark DB plan is free. Let's create a new instance for our app and give it a relevant name (mysql-service in my case). It can be done either in the Pivotal CF GUI or via the command-line.

The second step is to bind the service to the app; this can also be achieved either way, or even by declaring it in the manifest.yml.
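From the command-line, the two steps could look like this (service and plan names as they appear in the marketplace):

```shell
# Create a ClearDB MySQL instance on the free Spark plan
cf create-service cleardb spark mysql-service

# Bind it to the application (alternatively, list it under services: in manifest.yml)
cf bind-service product mysql-service
```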

The third step is to actually use it, and thanks to Pivotal CF's auto-reconfiguration feature, there's nothing to do at this point! During deployment, the platform will introspect the app and replace the existing H2 data source with one wrapping the MySQL database. This can be checked through the actuator (provided it has been added as a dependency).

This is it for pushing the Spring Boot app to Pivotal CF. The code is available on Github. Next week, let’s deploy to Heroku to check if there are any differences.


Going the microservices way – part 1

September 27th, 2015

Microservices are trending right now, whether you like it or not. There are good reasons for that, as they resolve many issues organizations are faced with. They also open a Pandora's box, as new issues pop up every now and then... But that is a story for another day: in the end, microservices are here to stay.

In this series of articles, I'll take a simple Spring Boot app ecosystem and turn it into microservices to deploy them into the cloud. As an example, I'll use an e-commerce shop that requires different services, such as:

  • an account service
  • a product service
  • a cart service
  • etc.

This week is dedicated to creating a sample REST application that lists products. It is based on Spring Boot because Boot makes developing such an application a breeze, as we will see in the rest of this article.

Let’s use Maven. The relevant part of the POM is the following:


Easy enough:

  1. Use Java 8
  2. Inherit from Spring Boot parent POM
  3. Add Spring Data JPA & Spring Data REST dependencies
  4. Add a database provider, h2 for now

The next step is the application entry point; it's the standard Spring Boot main class:

@SpringBootApplication
public class ProductApplication {

    public static void main(String... args) {
        SpringApplication.run(ProductApplication.class, args);
    }
}

It’s very straightforward, thanks to Spring Boot inferring which dependencies are on the classpath.

Besides that, it’s just adding a simple Product entity class and a Spring Data repository interface:

@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = AUTO)
    private long id;

    private String name;

    // Constructors and getters
}

public interface ProductRepository extends JpaRepository<Product, Long> {}

At this point, we’re done. Spring Data REST will automatically provide a REST endpoint to the repository. After executing the main class of the application, browsing to http://localhost:8080/products will yield the list of available products in JSON format.

If you don’t believe how easy this, then you’re welcome to take a look at the Github repo and just launch the app with mvn spring-boot:run. There’s a script to populate initial data present.

Next week, I’ll try to upload the application in the cloud (probably among Cloud Foundry, Heroku and YaaS).


True singletons with Dagger 2

September 6th, 2015

I’ve written some time ago about Dagger 2. However, I still don’t understand every nook and cranny. In particular, the @Singleton annotation can be quite misleading as user Zhuiden was kind enough to point out:

If you create a new ApplicationComponent each time you inject, you will get a new instance in every place where you inject; and you will not actually have singletons where you expect singletons. An ApplicationComponent should be managed by the Application and made accessible throughout the application, and the Activity should have nothing to do with its creation.

After a quick check, I could only agree. The Singleton pattern is only applied in the context of a specific @Component, and a new component is created on each call:
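The call in question is the direct instantiation of the component, something along these lines (using the names from the snippets below):

```java
ApplicationComponent applicationComponent = DaggerApplicationComponent.create();
applicationComponent.inject(this);
```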


Thus, the only problem is to instantiate the component once and store it in a scope available from every class in the application. Guess what, this scope exists: only 2 simple steps are required.

  1. Extend the Application class and, in the onCreate() method, create a new XXXComponent instance and store it as a static attribute:
    public class Global extends Application {
        private static ApplicationComponent applicationComponent;
        public static ApplicationComponent getApplicationComponent() {
            return applicationComponent;
        }
        @Override
        public void onCreate() {
            super.onCreate();
            applicationComponent = DaggerApplicationComponent.create();
        }
    }
  2. The next step is to hook the created class into the application lifecycle in the Android manifest:
    <?xml version="1.0" encoding="utf-8"?>
    <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="ch.frankel.todo">
      <application android:name=".shared.Global">
        <!-- activities... -->
      </application>
    </manifest>

At this point, usage is quite straightforward. Replace the first snippet of this article with:
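Assuming the Global class from step 1 above, the call becomes:

```java
Global.getApplicationComponent().inject(this);
```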


This achieves real singletons in Dagger 2.


Creative Rx usage in testing setup

August 16th, 2015

I don’t know if events became part of software engineering since Graphical-User Interface interactions, but for sure they are a very convenient way of modeling them. With more and more interconnected systems, asynchronous event management has become an important issue to tackle. With Functional-Programming also on the raise, this gave birth to libraries such as RxJava. However, modeling a problem as the handling of a stream of events shouldn’t be restricted to system events handling. It can also be used in testing in many different ways.

One common use-case in testing setup is to launch a program, for example an external dependency such as a mock server. In this case, we need to wait until the program has been successfully launched. On the contrary, the test should stop as soon as the external program's launch fails. If the program has a Java API, that's easy. However, this is rarely the case, and more basic APIs are generally used, such as ProcessBuilder or Runtime.getRuntime().exec():

ProcessBuilder builder;
Process process;

@Override
protected void setUp() throws IOException {
    builder = new ProcessBuilder().command("");
    process = builder.start();
}

@Override
protected void tearDown() {
    process.destroy();
}

The traditional way to handle this problem was to put a big Thread.sleep() just after the launch. Not only was it system-dependent, as the launch time changes from system to system, it didn't tackle the case where the launch failed. In that latter case, precious computing time as well as manual relaunch time were lost. Better solutions exist, but they involve a lot of lines of code and some (or a high) degree of complexity. Wouldn't it be nice if we had a simple and reliable way to start the program and, depending on the output, either continue the setup or fail the test? Rx to the rescue!
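To appreciate what Rx will buy us, here's a plain-Java sketch of the same idea – block until the launched process prints a ready marker. The script and marker are hypothetical; echo stands in for the real setup program:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class LaunchWatcher {

    // Reads the process output line by line and returns the first line
    // containing the marker, or null if the stream ends without it
    static String waitForLine(Process process, String marker) throws IOException {
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.contains(marker)) {
                    return line;
                }
            }
            return null;
        }
    }

    public static void main(String[] args) throws IOException {
        // echo stands in for the real setup script
        Process process = new ProcessBuilder("echo", "server started").start();
        System.out.println(waitForLine(process, "started"));
    }
}
```

This works, but offers no timeout and no clean way to keep listening in the background – exactly the gaps the Rx version fills.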

The first step is to create an Observable around the launched process’s input stream:

Observable<String> observable = Observable.create(subscriber -> {
    InputStream stream = process.getInputStream();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
        String line;
        while ((line = reader.readLine()) != null) {
            subscriber.onNext(line);
        }
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
});

In the above snippet:

  • Each time the process writes on the output, a next event is sent
  • When there is no more output, a complete event is sent
  • Finally, if an exception happens, it’s an error event

By Rx definition, either of the last two events marks the end of the sequence.

Once the observable has been created, it just needs to be observed for events. A simple script that emits a single event can be listened to with the following snippet:

BlockingObservable<String> blocking = observable.toBlocking();
blocking.first();

The thing of interest here is the wrapping of the Observable instance in a BlockingObservable. While the former can be composed, the latter adds methods to manage events. At this point, the first() method will listen to the first (and single) event.

For a more complex script that emits a random number of regular events terminated by a single end event, a code snippet could be:

BlockingObservable<String> blocking = observable
        .filter(line -> line.contains("end"))  // illustrative predicate matching the end event
        .toBlocking();
blocking.first();

In this case, whatever the number of regular events, the filter() method provides the way to listen to the only event we are interested in.

The previous cases do not reflect reality, however. Most of the time, setup scripts should start before and run in parallel to the tests, i.e. the end event is never sent – at least not until after the tests have finished. In this case, there is some threading involved. Rx lets us handle that quite easily:

BlockingObservable<String> blocking = observable
        .subscribeOn(Schedulers.newThread())
        .take(5)
        .toBlocking();

There is a simple difference there: the subscriber will listen on a new thread thanks to the subscribeOn() method. Alternatively, events could have been emitted on another thread with the observeOn() method. Note I replaced the filter() method with take() to pretend to be interested only in the first 5 events.

At this point, the test setup is finished. Happy testing!

Sources for this article can be found here in IDEA/Maven format.


Compile-time dependency injection tradeoffs in Android

August 2nd, 2015

As a backend software developer, I’m used to Spring as my favorite Dependency Injection engine. Alternatives include Java EE’s CDI which achieves the same result – in a different way. However, both inject at runtime: that means that there’s a definite performance cost to pay at the start of the application, the time it takes for all dependencies to be fulfilled. On an application server, where the application lifespan is measured in days (if not weeks), the start time overhead is acceptable. It is even fully transparent if the server is but a node in a large cluster.

As an Android user, I'm not happy when I start an app and it lags for several seconds before opening. It would be very bad in terms of user-friendliness if we were to add several more seconds to that time. Even worse, the memory consumption of a DI engine would be a disaster. That's the reason why Square developed a compile-time dependency injection mechanism called Dagger. Note that Dagger 2 is currently under development by Google. Before going further, I must admit that the documentation of Dagger 2 is succinct – at best. But it's a great opportunity for another blog post :-)

Dagger 2 works with an annotation processor: at compile time, it analyzes your annotated code and produces the wiring code between your components. The good thing is that this code is pretty similar to what you would write yourself if you were to do it manually; there's no secret black magic (as opposed to runtime DI and its proxies). The following code displays a class to be injected:

public class TimeSetListener implements TimePickerDialog.OnTimeSetListener {

    private final EventBus eventBus;

    public TimeSetListener(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
        eventBus.post(new TimeSetEvent(hourOfDay, minute));
    }
}

Notice the code is completely independent of Dagger in every way. One cannot infer how it will be injected in the end. The interesting part is how to use Dagger to inject the required eventBus dependency. There are two steps:

  1. Get a reference to an eventBus instance in the context
  2. Call the constructor with the relevant parameter

The wiring configuration itself is done in a so-called module:

@Module
public class ApplicationModule {

    @Provides
    @Singleton
    public TimeSetListener timeSetListener(EventBus eventBus) {
        return new TimeSetListener(eventBus);
    }
}


Notice that the EventBus is passed as a parameter to the method, and it’s up to the context to provide it. Also, the scope is explicitly @Singleton.

The binding to the factory occurs in a component, which references the required module (or more):

@Component(modules = ApplicationModule.class)
public interface ApplicationComponent {
    TimeSetListener timeListener();
}

It’s quite straightforward… until one notices that some – if not most objects in Android have a lifecycle managed by Android itself, with no call to our injection-friendly constructor. Activities are such objects: they are instantiated and launched by the framework. Only through dedicated lifecycle methods like onCreate() can we hook our code into the object. This use-case looks much worse as field injection is mandatory. Worse, it is also required to call Dagger: in this case, it acts as a plain factory.

public class EditTaskActivity extends AbstractTaskActivity {

    @Inject TimeSetListener timeListener;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DaggerApplicationComponent.create().inject(this);
    }
}

For the first time we see a coupling to Dagger, and it's a big one. What is DaggerApplicationComponent? An implementation of the former ApplicationComponent, as well as a factory providing instances of it. And since it doesn't provide an inject() method out of the box, we have to declare one in our interface:

@Component(modules = ApplicationModule.class)
public interface ApplicationComponent {
    TimeSetListener timeListener();
    void inject(EditTaskActivity editTaskActivity);
}

For the record, the generated class looks like:

public final class DaggerApplicationComponent implements ApplicationComponent {
  private Provider<EventBus> eventBusProvider;
  private Provider<TimeSetListener> timeSetListenerProvider;
  private MembersInjector<EditTaskActivity> editTaskActivityMembersInjector;

  private DaggerApplicationComponent(Builder builder) {
    assert builder != null;
    initialize(builder);
  }

  public static Builder builder() {
    return new Builder();
  }

  public static ApplicationComponent create() {
    return builder().build();
  }

  private void initialize(final Builder builder) {
    this.timeSetListenerProvider = ScopedProvider.create(ApplicationModule_TimeSetListenerFactory.create(builder.applicationModule, eventBusProvider));
    this.editTaskActivityMembersInjector = TimeSetListener_MembersInjector.create((MembersInjector) MembersInjectors.noOp(), timeSetListenerProvider);
  }

  public EventBus eventBus() {
    return eventBusProvider.get();
  }

  public void inject(EditTaskActivity editTaskActivity) {
    editTaskActivityMembersInjector.injectMembers(editTaskActivity);
  }

  public static final class Builder {
    private ApplicationModule applicationModule;

    private Builder() {
    }

    public ApplicationComponent build() {
      if (applicationModule == null) {
        this.applicationModule = new ApplicationModule();
      }
      return new DaggerApplicationComponent(this);
    }

    public Builder applicationModule(ApplicationModule applicationModule) {
      if (applicationModule == null) {
        throw new NullPointerException("applicationModule");
      }
      this.applicationModule = applicationModule;
      return this;
    }
  }
}
There’s no such thing as a free lunch. Despite compile-time DI being very appealing at first glance, it becomes much less so when used on objects outside which lifecycle is not managed by our code. The downsides become apparent: coupling to the DI framework and more importantly an increased difficulty to unit-test the class. However, considering Android constraints, this might be the best that can be achieved.


Fully automated Android build pipeline

July 12th, 2015

In the course of my current job, I had to automate jobs for building Android applications. This post describes the pain points I encountered, so that you, readers, don't waste your time if you intend to do the same.

The environment is the following:

  • Puppet to automate the infrastructure
  • Jenkins for the CI server
  • The Android project
  • A Gradle build file to build it
  • Robolectric as the main testing framework

Puppet and Jenkins

My starting point was quite sound, indeed. Colleagues had already automated the installation of the Jenkins server, the required packages – including Java, as well as provided reusable Puppet classes for job creations. Jenkins jobs rely on a single config.xml file, that is an assembly of different sections. Each section is handled by a dedicated template. At this point, I thought creating a simple Gradle job would be akin to a walk in the park, that it would take a few days at most and that I would soon be assigned another task.

The first step was easy enough: just update an existing Puppet manifest to add the Gradle plugin to Jenkins.

The Gradle wrapper

Regular readers of this blog know my opinion about Gradle. However, I admit that guaranteeing a build that works regardless of the installed tool version is something that Maven lacks – and should have. To achieve that, Gradle provides a so-called wrapper mechanism through a JAR, a shell script and a properties file, the latter containing the URL of the Gradle ZIP distribution. All three need to be stored in the SCM.

This was the beginning of my troubles. Having to download anything in an enterprise environment means going through – and authenticating to – the proxy. The simplest option would be to set everything in the job configuration... including the proxy credentials. However, going this way is not very sound from a security point of view, as anyone with access to the Jenkins interface or the filesystem would be able to read those credentials. Another option was needed.

The customer already had a working Nexus repository with a configured proxy. It was easy as pie to upload the required Gradle distribution there and update the gradle-wrapper.properties to point to it.

The Android SDK

The Android SDK is just a ZIP. I reused the same tactic: download it, then upload it to Nexus. At this point, an existing Puppet script took care of downloading it, extracting it and setting the right permissions.

This step is the beginning of the real problems, however. Android developers know that the Android SDK is just a manager: one has to manually check the desired platforms and tools to download them to the local filesystem. What is a simple step for an Android developer on his machine is a nightmare to automate, though there's a command-line equivalent to install/update packages through the SDK (with the --no-ui parameter). For a full description, please check this link.

Google engineers failed to provide two important parameters:

  • Proxy credentials – login/password
  • Accepting license agreements

There’s a lot of non-working answers on the Web, the most alluring being a configuration file. I found none of them to work. However, I found a creative solution using the expect command. Expect is a nifty command that reads the standard output and fill in the standard input accordingly. The good thing about expect is that it accepts regexp. So, when it asks for proxy login, your type the login, when it asks for the password, you do likewise, and when it asks for license acceptance, you type ‘y’. It’s pretty straightforward – though it took me a lot of trial and error to achieve the desired results.

My initial design was to have all necessary Android packages installed with Puppet as part of the server provisioning. For standard operations, such as file creation or system package installation, Puppet is able to determine whether provisioning is necessary: e.g. if the file exists, there's no need to create it, and likewise if the package is installed. In the end, Puppet reports each operation it performed in a log. At first, I tried to implement this sort of caching by telling Puppet which packages were created during provisioning, since the Android SDK creates one folder per package. The first problem is that Puppet only accepts a single folder to verify. Then, for some packages, there's no version information (for example, this is the case for Google Play services).

Thus, a colleague had the idea to move this update from Puppet provisioning to a pre-step in each job. This fixes the non-idempotence issue. Besides, it makes running the update configurable per job.


Robolectric

At this point, I thought I was done. Unfortunately, that wasn't the case, due to a library: Robolectric.

I didn’t know about Robolectric at this time, I just knew it was a testing library for Android that provided a way to run tests without a connected physical device. While trying to run the build on Jenkins, I stumbled upon an “interesting” issue: although Roboletric provides a POM complete with dependencies, the MavenDependencyResolver class hard-codes the repository where to download from.

The only provided workaround is to extend the above class with your own implementation. Mine used the enterprise Nexus repository mentioned above.

Upload and release tasks

The end of the implementation was relatively easy. Only were missing the upload of the artifacts to the Nexus repository and the tag of the release in the SCM.

In order to achieve the former, I added a custom Gradle task to get the Nexus settings from the settings.xml (provisioned by Puppet), then made the upload task depend on it. Finally, for every flavour of the assemble task, I added the output file to the set of artifacts to upload. This way, the following command uploads only flavours XXX and YYY, regardless of which flavours are configured in the build file:

./gradlew assembleXXX assembleYYY upload

For the release, it’s even simpler: the only required thing was to set this Gradle plugin, which adds a release task, akin to Maven’s deploy.


As a backend developer, I’m used to Continuous Integration setup and was nearly sure I could handle Android CI process in a few days. I’ve been quite surprised at the lack of maturity of the Android ecosystem regarding CI. Every step is painful, badly documented (if at all) and solutions seem more like hacks than anything else. If you want to go down this path, you’ve been warned… and I wish you the best of luck.


Improve your tests with Mockito’s capture

July 5th, 2015

Unit testing mandates testing the unit in isolation. In order to achieve that, the general consensus is to design our classes in a decoupled way using DI. In this paradigm, whether using a framework or not, whether wiring at compile time or at runtime, object instantiation is the responsibility of dedicated factories. In particular, this means the new keyword should only be used in those factories.

Sometimes, however, having a dedicated factory just doesn't fit. This is the case when injecting a narrow-scope instance into a wider-scope instance. A use-case I stumbled upon recently concerns an event bus, with code like this one:

public class Sample {

    private EventBus eventBus;

    public Sample(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void done() {
        Result result = computeResult();
        eventBus.post(new DoneEvent(result));
    }

    private Result computeResult() {
        ...
    }
}

With a runtime DI framework – such as the Spring framework – and if DoneEvent had no argument, this could be changed to use the lookup method pattern:

public void done() {
    eventBus.post(getDoneEvent());
}

public abstract DoneEvent getDoneEvent();

Unfortunately, the argument just prevents us from using this nifty trick. And it cannot be done with runtime injection anyway. It doesn’t mean the done() method shouldn’t be tested, though. The problem is not only how to assert that a new DoneEvent is posted to the bus when the method is called, but also how to check the wrapped result.

Experienced software engineers probably know about the Mockito.any(Class) method. This could be used like this:

public void doneShouldPostDoneEvent() {
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(Mockito.any(DoneEvent.class));
}

In this case, we make sure an event of the right kind has been posted to the bus, but we cannot be sure what the result was. And if the result cannot be asserted, confidence in the code decreases. Mockito to the rescue: Mockito provides captors, which act as placeholders for parameters. The above code can be changed like this:

public void doneShouldPostDoneEventWithExpectedResult() {
    ArgumentCaptor<DoneEvent> captor = ArgumentCaptor.forClass(DoneEvent.class);
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(captor.capture());
    DoneEvent event = captor.getValue();
    assertThat(event.getResult(), is(expectedResult));
}

At line 2, we create a new ArgumentCaptor. At line 6, we replace the any() usage with captor.capture() and the trick is done. The argument is then captured by Mockito and available through captor.getValue() at line 7. The final line – using Hamcrest – makes sure the result is the expected one.

Categories: Java Tags: ,

Connection is a leaky abstraction

April 26th, 2015 2 comments

As junior Java developers, we learn very early in our career about the JDBC API. We learn it’s a very important abstraction because it allows changing the underlying database in a transparent manner. I’m afraid what appeared as a good idea is just over-engineering, because:

  1. I’ve never seen such a database migration happen in more than 10 years
  2. Most of the time, the SQL written is not database independent

Still, there’s no denying that JDBC is at the bottom of every database interaction in Java. However, I recently stumbled upon another trap hidden very deep at the core of the java.sql.Connection interface. Basically, you perhaps have been told to close the Statement returned by the Connection? And also to close the ResultSet returned by the Statement? But perhaps you have also been told that closing the Connection will close all underlying objects – Statement and ResultSet?

So, which one is true? Well, “it depends” and there’s the rub…

  • On the one hand, if the connection is returned from the DriverManager, calling Connection.close() will close the physical connection to the database and all underlying objects.
  • On the other hand, if the connection is returned from a DataSource, calling Connection.close() will only return it to the pool and you’ll need to close statements yourself.

In the latter case, if you don’t close those underlying statements, database cursors will stay open, the RDBMS limit will be reached at some point, and new statements won’t be executed. Conclusion: always close statement objects (as I already wrote about)! Note that the result set will be closed when the statement is.

If you’re lucky enough to use Java 7 – and don’t use a data access framework – the code to use is the following:

try (PreparedStatement ps = connection.prepareStatement("Put SQL here")) {
    try (ResultSet rs = ps.executeQuery()) {
        // Do something with ResultSet
    }
} catch (SQLException e) {
    // Handle exception
}

And if you want to make sure cursors will be closed even with faulty code, good old Tomcat provides the StatementFinalizer interceptor for that. Just configure it in the server.xml configuration file when you declare your Resource:

<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
 jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" />

Note: while you’re there, you can also check the ResetAbandonedTimer interceptor. It can be used in conjunction with the removeAbandonedTimeout attribute, which configures the time after which an in-use connection will be considered abandoned and returned to the pool. If the attribute’s value is too low, connections actually in use might be reclaimed; with the interceptor, each use of the connection resets the timer.
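As a sketch – the resource name and timeout value are illustrative, not from the original – combining both interceptors (semicolon-separated) with abandoned-connection handling could look like this:

```xml
<!-- Hypothetical example: interceptors are chained with ';', timeout is in seconds -->
<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
 removeAbandoned="true" removeAbandonedTimeout="60"
 jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer" />
```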

Categories: Java Tags: ,

What’s the version of my deployed application?

April 6th, 2015 2 comments

In my career, I’ve noticed many small and inexpensive features that didn’t find their way into the Sprint backlog because they didn’t provide business value. However, they would have provided plenty of ROI during the life of the application, but that was completely overlooked due to short-sighted objectives (set by short-sighted management). Those include, but are not limited to:

  • Monitoring in general, and more specifically metrics, health checks, etc. Spare 5 days now and spend 10 times that later (or more…) because you don’t know how your application works.
  • Environment data, e.g. development, test, production, etc. It’s especially effective when associated with a coloured banner dependent on the environment. If you don’t do that, I’m not responsible if I just deleted all your production data from the last 10 days because I thought it was the test environment. That mistake is quite easy to make, especially if logins/passwords are the same for all environments – yes, LDAP setup is complex, so let’s have only one.
  • Application build data, most importantly the version number, and if possible the build number and the build time. Having to SSH into the server (if possible at all) or search the Wiki (if it’s up-to-date, a most unlikely occurrence) just to know the version is quite cumbersome when you need the info right now.

Among them, I believe the most basic one is the latter. We are used to checking the About dialog in desktop applications, and unless you deliver many (many many) times a day, such information is necessary for any real-world enterprise-grade application. In the realm of SOA and microservices, it means this info should also be part of responses.

With Maven, this is quite easy to achieve, as only a simple properties file is needed and the maven-resources-plugin will work its magic. Maven provides a filtering feature, meaning placeholders can be set in any resource and they will be replaced by their values at build time. Filtering is not enabled by default. To activate it, the following snippet will do:


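A sketch of what that activation typically looks like in the POM (the directory is the conventional one; adapt as needed):

```xml
<build>
  <resources>
    <resource>
      <directory>src/main/resources</directory>
      <filtering>true</filtering>
    </resource>
  </resources>
</build>
```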
I wrote about placeholders above but didn’t specify which ones. Simple: any data set in the POM – as well as a few special values – can be used as a placeholder. Just use the DOM path prefixed with $ and wrapped in curly braces, like this: ${dom.path}.

Here’s an example, with a property file:


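A hypothetical properties file illustrating such placeholders (the file and property names are my own, not from the original):

```properties
# Filtered by Maven at build time
application.name=${project.name}
application.version=${project.version}
```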
If this snippet is put in a file inside the src/main/resources directory, Maven will generate a similarly named file, with the values filtered in, inside the target/classes directory just after the process-resources phase. Specific values depend of course on the POM, but here’s a sample output:


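Assuming a source file containing application.name=${project.name} and application.version=${project.version}, the filtered output might look like:

```properties
application.name=myapp
application.version=1.0.0-SNAPSHOT
```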
Things are unfortunately not as straightforward, as there’s a bug in Maven regarding ${}. It cannot be filtered directly and requires adding an indirection level:


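For example, assuming the property concerned is maven.build.timestamp (a classic case of this bug – an assumption on my part), the indirection consists of declaring a standard POM property that holds its value:

```xml
<properties>
  <!-- Indirection: ${build.timestamp} can be filtered, the special property cannot -->
  <build.timestamp>${maven.build.timestamp}</build.timestamp>
</properties>
```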
The following properties file will now work as expected:


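As a sketch, assuming a build.timestamp POM property was declared as an indirection for maven.build.timestamp:

```properties
application.version=${project.version}
build.date=${build.timestamp}
```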
At this point, it’s just a matter of reading this property file when required. This includes:

In webapps
  • Provide a dedicated About page, as in desktop applications. I’ve rarely seen that, and never implemented it
  • Add a footer with the info. For added user-friendliness, set the text color to the background color, so that users are not disturbed by the info. Only people who know (or are curious) can access it – it’s not confidential anyway.
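A hypothetical sketch of that footer trick (the values shown are illustrative):

```html
<!-- Text color matches the background, so the info is invisible unless selected -->
<footer style="color: #ffffff; background-color: #ffffff;">
  Version 1.0.0-SNAPSHOT – built on 2015-04-06
</footer>
```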
In services
Whether SOAP or REST, XML or JSON, there are also a few options:

  • As a dedicated service endpoint (e.g. /about or /version). This is the simplest to implement. It’s even better if the same endpoint is used throughout the organization.
  • As additional info on all endpoints. This is required when each endpoint can be built and assembled separately. The easiest path then is to put it in HTTP headers, while a harder one is to put it in the data itself. The latter will probably require some interceptor approach, as well as an operation on the schema (if any).
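As a minimal sketch of the reading part – the class, file, and property names are my own assumptions, and the web layer (servlet, controller, etc.) is left out:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class VersionInfo {

    private final Properties props = new Properties();

    // Loads the Maven-filtered properties file from the classpath
    public VersionInfo(String resourceName) {
        try (InputStream in = VersionInfo.class.getResourceAsStream(resourceName)) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            // Version info is nice to have, not critical: fall back to the default
        }
    }

    public String getVersion() {
        return props.getProperty("application.version", "unknown");
    }

    public static void main(String[] args) {
        // A dedicated /version endpoint would simply return this string
        System.out.println(new VersionInfo("/version.properties").getVersion());
    }
}
```

A /version endpoint then only has to instantiate this class once and return getVersion() as its response body.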

There are two additional options:

  1. To differentiate between snapshot versions, use the Maven Build Number plugin. I have no experience of actual usage; note, however, that it requires correct configuration of the SCM information.
  2. Some applications not only display version information but also environment information (e.g. development, integration, staging, etc.). I’ve seen it done either through a specific banner or through the background color. This requires a dedicated Java system property set at JVM launch time or a system environment variable.

The filtering stuff can be done in 1 hour max in a greenfield environment. The cost of using the data is more variable, but in a simple webapp footer case, it can be done in less than a day. Compare that to the time lost getting the version during the lifetime of the application…

Categories: Java Tags: ,