Running your domain on HTTPS for free

August 23rd, 2015 1 comment

This blog has been running on HTTP for a long time, as there are no transactions taking place, so no security is needed (I use SFTP for file transfers). However, since Google’s latest search algorithm change, I’ve noticed a sharp decrease in the number of monthly visits, from more than 20k to around 13k.

While my goal has never been to have the highest number of visits, it’s still good feedback to me (as well as a nice warm feeling). As turning on HTTPS didn’t seem like a big deal, I thought “Why not?” and asked my hosting provider about it. The answer was that I had to purchase both a fixed IP and a RapidSSL certificate acquired through them. As the former is a limitation of their infrastructure and the latter simply a Domain Validated certificate sold with a comfortable margin, I thought I could do without. But first, I wanted to try on a simpler site. The good thing is I already have such a site available: morevaadin.com.

morevaadin.com is a static site generated offline with Jekyll and themed with Jekyll Bootstrap. The advantages of such an offline process are that the site is served very fast to users and that the security risks are very low (no SQL injection, etc.).

After a quick poll among my colleagues, I found out that Github was the most suitable for my use case. Github provides Github Pages: basically, you push Markdown documents to Github and not only does it generate the corresponding HTML for you, it also makes it available online. My previous publishing process was to generate the pages on my laptop with Jekyll and upload them through SFTP, but Github Pages does that for me: I just have to push the Jekyll-flavored Markdown pages. Plus, Github is available over HTTPS out of the box.

Github Pages provides two kinds of sites: user/organization sites and project sites. There are a couple of differences, but I chose to create the morevaadin Github organization to neatly organize my repository. That means that everything I push to the master branch of a repository named morevaadin.github.io ends up exposed on the https://morevaadin.github.io domain, with HTTPS as icing on the cake (and the reason I started the whole thing).

Yet, I was not finished, as the remaining issue was to set my own domain on top of that. The first move was to choose a provider that would direct the traffic to Github while providing an SSL certificate at low cost: I found Cloudflare, which does just that for… $0. Since I didn’t know about that business, I was a little wary at first, but it seems I’m not their first non-paying customer and I’m happy so far. Once registered, you have to point your domain to their name servers: art.ns.cloudflare.com and val.ns.cloudflare.com.

The second step takes care of the routing by adding DNS records, and depends on your custom domain. If you’re using an apex domain (i.e. morevaadin.com), you have to add two A records, named after your domain, that point to the Github Pages IPs, 192.30.252.153 and 192.30.252.154. If you’re using a subdomain (i.e. www.morevaadin.com), you have to add a single CNAME record, named after your subdomain, that points to your Github Pages site (i.e. morevaadin.github.io).
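For illustration, here’s a hedged sketch of what the resulting records could look like in zone-file notation – Cloudflare’s dashboard presents them as form fields instead (the A records apply to the apex domain, the CNAME to a subdomain):

morevaadin.com.        A      192.30.252.153
morevaadin.com.        A      192.30.252.154
www.morevaadin.com.    CNAME  morevaadin.github.io.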

The next step is to enable HTTPS: in front of your DNS record, click on the cloud so that it becomes orange. This means Cloudflare will handle the requests; a grey cloud, on the opposite, means Cloudflare will only do the routing. On the Crypto tab, enable SSL (with SPDY) by choosing Flexible. That’s it for Cloudflare.

Now you have to tell Github Pages that it will serve its content for another domain. At the root of your project, create a file named CNAME that contains your custom domain (but no protocol). You also have to tell Jekyll which domain will be used, so that absolute link generation works: this is achieved by setting the BASE_PATH to your URL (this time with the protocol) in your _config.yml.
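For example – and this is only a sketch assuming the morevaadin.com domain used above – the CNAME file contains a single line:

morevaadin.com

while the _config.yml entry looks something like the following (the exact key location depends on your Jekyll Bootstrap setup, and the protocol should match what you serve):

BASE_PATH: https://morevaadin.com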

As an added bonus, I added the following:

  • Force HTTPS traffic (redirect everything to HTTPS)
  • Minify JavaScript, CSS and HTML

This should end your journey to free HTTPS. Basically, it boils down to letting Cloudflare act as a proxy between users and your site and add its SSL magic. Note that if you’re interested in security, it doesn’t solve everything, as the path between Cloudflare and your content is not secured in this architecture.

Now that I know the ropes, I have several options available:

  • Just keep my current hosting provider and put Cloudflare in front
  • Choose another provider and put Cloudflare in front. Any of you know about a decent and cheap WordPress provider besides wordpress.com?
  • Take on my current provider’s offer
  • Migrate from WordPress to Jekyll and put Cloudflare in front

I will think about these this week…

Creative Rx usage in testing setup

August 16th, 2015 No comments

I don’t know if events became part of software engineering with Graphical User Interface interactions, but they are certainly a very convenient way of modeling them. With more and more interconnected systems, asynchronous event management has become an important issue to tackle. With Functional Programming also on the rise, this gave birth to libraries such as RxJava. However, modeling a problem as the handling of a stream of events shouldn’t be restricted to system events. It can also be used in testing in many different ways.

One common use case in test setup is to launch a program, for example an external dependency such as a mock server. In this case, we need to wait until the program has been successfully launched. Conversely, the test should stop as soon as the external program fails to launch. If the program has a Java API, that’s easy. However, this is rarely the case, and more basic APIs are generally used, such as ProcessBuilder or Runtime.getRuntime().exec():

ProcessBuilder builder;
Process process;

@BeforeMethod
protected void setUp() throws IOException {
    builder = new ProcessBuilder().command("script.sh");
    process = builder.start();
}

@AfterMethod
protected void tearDown() {
    process.destroy();
}

The traditional way to handle this problem was to put a big Thread.sleep() just after the launch. Not only was it system dependent, as the launch time changes from system to system, it also didn’t tackle the case where the launch fails. In that case, precious computing time as well as manual relaunch time were lost. Better solutions exist, but they involve many lines of code as well as some (or a high) degree of complexity. Wouldn’t it be nice if we could have a simple and reliable way to start the program and, depending on the output, either continue the setup or fail the test? Rx to the rescue!

The first step is to create an Observable around the launched process’s input stream:

Observable<String> observable = Observable.create(subscriber -> {
    InputStream stream = process.getInputStream();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
        String line;
        while ((line = reader.readLine()) != null) {
            subscriber.onNext(line);
        }
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
});

In the above snippet:

  • Each time the process writes on the output, a next event is sent
  • When there is no more output, a complete event is sent
  • Finally, if an exception happens, it’s an error event

By Rx definition, any of the last two events mark the end of the sequence.

Once the observable has been created, it just needs to be observed for events. A simple script that emits a single event can be listened to with the following snippet:

BlockingObservable<String> blocking = observable.toBlocking();
blocking.first();

The thing of interest here is the wrapping of the Observable instance in a BlockingObservable. While the former can be combined with other observables, the latter adds methods to manage events. At this point, the first() method will listen for the first (and single) event.

For a more complex script that emits a random number of regular events terminated by a single end event, a code snippet could be:

BlockingObservable<String> blocking = observable
    .filter("Done"::equals)
    .toBlocking();
blocking.first();

In this case, whatever the number of regular events, the filter() method provides a way to listen only to the event we are interested in.

The previous cases do not reflect reality, however. Most of the time, setup scripts should start before and run in parallel to the tests, i.e. the end event is never sent – at least not until after the tests have finished. In this case, there is some threading involved. Rx lets us handle that quite easily:

BlockingObservable<String> blocking = observable
    .subscribeOn(Schedulers.newThread())
    .take(5)
    .toBlocking();
blocking.first();

There is a single difference here: the subscription happens on a new thread thanks to the subscribeOn() method. Alternatively, events could have been observed on another thread with the observeOn() method. Note I replaced the filter() method with take() to pretend to be interested only in the first 5 events.
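Putting it all together, here is a hedged sketch of a complete setup method combining the snippets above; the "Done" marker and the 30-second timeout are assumptions to adapt to your own script:

private Process process;

@BeforeMethod
protected void setUp() throws IOException {
    process = new ProcessBuilder().command("script.sh").start();
    Observable<String> observable = Observable.create(subscriber -> {
        InputStream stream = process.getInputStream();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
            String line;
            while ((line = reader.readLine()) != null) {
                subscriber.onNext(line);
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    });
    observable
        .subscribeOn(Schedulers.newThread()) // read the script output on its own thread
        .filter("Done"::equals)              // only the end-of-startup marker is of interest
        .timeout(30, TimeUnit.SECONDS)       // fail the setup if the marker never shows up
        .toBlocking()
        .first();                            // block until the marker, a timeout or an error
}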

At this point, the test setup is finished. Happy testing!

Sources for this article can be found here in IDEA/Maven format.

Categories: Java

Compile-time dependency injection tradeoffs in Android

August 2nd, 2015 No comments

As a backend software developer, I’m used to Spring as my favorite Dependency Injection engine. Alternatives include Java EE’s CDI which achieves the same result – in a different way. However, both inject at runtime: that means that there’s a definite performance cost to pay at the start of the application, the time it takes for all dependencies to be fulfilled. On an application server, where the application lifespan is measured in days (if not weeks), the start time overhead is acceptable. It is even fully transparent if the server is but a node in a large cluster.

As an Android user, I’m not happy when I start an app and it lags for several seconds before opening. It would be very bad in terms of user-friendliness to add several more seconds to that time. Even worse, the memory consumption of a runtime DI engine would be a disaster. That’s the reason why Square developed a compile-time dependency injection mechanism called Dagger. Note that Dagger 2 is currently under development by Google. Before going further, I must admit that the documentation of Dagger 2 is succinct – at best. But it’s a great opportunity for another blog post :-)

Dagger 2 works with an annotation processor: when compiling, it analyzes your annotated code and produces the wiring code between your components. The good thing is that this code is pretty similar to what you would write yourself if you were to do it manually; there’s no secret black magic (as opposed to runtime DI and its proxies). The following code displays a class to be injected:

public class TimeSetListener implements TimePickerDialog.OnTimeSetListener {

    private final EventBus eventBus;

    public TimeSetListener(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
        eventBus.post(new TimeSetEvent(hourOfDay, minute));
    }
}

Notice the code is completely independent of Dagger in every way. One cannot infer how it will be injected in the end. The interesting part is how to use Dagger to inject the required eventBus dependency. There are two steps:

  1. Get a reference to an eventBus instance in the context
  2. Call the constructor with the relevant parameter

The wiring configuration itself is done in a so-called module:

@Module
public class ApplicationModule {

    @Provides
    @Singleton
    public TimeSetListener timeSetListener(EventBus eventBus) {
        return new TimeSetListener(eventBus);
    }

    ...
}

Notice that the EventBus is passed as a parameter to the method, and it’s up to the context to provide it. Also, the scope is explicitly @Singleton.
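For completeness, here’s a hedged sketch of how the module could provide the EventBus itself – the no-argument constructor is an assumption, to be adapted to the actual event bus implementation:

@Provides
@Singleton
public EventBus eventBus() {
    // Assumption: the event bus exposes a no-arg constructor (e.g. Guava's EventBus does)
    return new EventBus();
}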

The binding to the factory occurs in a component, which references the required module (or more):

@Component(modules = ApplicationModule.class)
@Singleton
public interface ApplicationComponent {
    TimeSetListener timeListener();
    ...
}

It’s quite straightforward… until one notices that some – if not most – objects in Android have a lifecycle managed by Android itself, with no call to our injection-friendly constructor. Activities are such objects: they are instantiated and launched by the framework. Only through dedicated lifecycle methods like onCreate() can we hook our code into the object. This use case looks much worse, as field injection becomes mandatory. Worse, it is also required to call Dagger explicitly: in this case, it acts as a plain factory.

public class EditTaskActivity extends AbstractTaskActivity {

    @Inject TimeSetListener timeListener;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState); // the framework lifecycle call must come first
        DaggerApplicationComponent.create().inject(this);
    }
    ...
}

For the first time we see a coupling to Dagger, and it’s a big one. What is DaggerApplicationComponent? An implementation of the former ApplicationComponent, as well as a factory providing instances of it. And since the component doesn’t declare an inject() method yet, we have to add it to our interface:

@Component(modules = ApplicationModule.class)
@Singleton
public interface ApplicationComponent {
    TimeSetListener timeListener();
    void inject(EditTaskActivity editTaskActivity);
    ...
}

For the record, the generated class looks like:

@Generated("dagger.internal.codegen.ComponentProcessor")
public final class DaggerApplicationComponent implements ApplicationComponent {
  private Provider<TimeSetListener> timeSetListenerProvider;
  private MembersInjector<EditTaskActivity> editTaskActivityMembersInjector;

  ...

  private DaggerApplicationComponent(Builder builder) {  
    assert builder != null;
    initialize(builder);
  }

  public static Builder builder() {  
    return new Builder();
  }

  public static ApplicationComponent create() {  
    return builder().build();
  }

  private void initialize(final Builder builder) {  
    this.timeSetListenerProvider = ScopedProvider.create(ApplicationModule_TimeSetListenerFactory.create(builder.applicationModule, eventBusProvider));
    this.editTaskActivityMembersInjector = TimeSetListener_MembersInjector.create((MembersInjector) MembersInjectors.noOp(), timeSetListenerProvider);
  }

  @Override
  public EventBus eventBus() {  
    return eventBusProvider.get();
  }

  @Override
  public void inject(EditTaskActivity editTaskActivity) {  
    editTaskActivityMembersInjector.injectMembers(editTaskActivity);
  }

  public static final class Builder {
    private ApplicationModule applicationModule;
  
    private Builder() {  
    }
  
    public ApplicationComponent build() {  
      if (applicationModule == null) {
        this.applicationModule = new ApplicationModule();
      }
      return new DaggerApplicationComponent(this);
    }
  
    public Builder applicationModule(ApplicationModule applicationModule) {  
      if (applicationModule == null) {
        throw new NullPointerException("applicationModule");
      }
      this.applicationModule = applicationModule;
      return this;
    }
  }
}

There’s no such thing as a free lunch. While compile-time DI is very appealing at first glance, it becomes much less so when used on objects whose lifecycle is not managed by our code. The downsides become apparent: coupling to the DI framework and, more importantly, an increased difficulty to unit test the class. However, considering Android’s constraints, this might be the best that can be achieved.

Categories: Java

More DevOps for Spring Boot

July 19th, 2015 2 comments

I think Spring Boot brings something new to the table, especially concerning DevOps – and I’ve already written a post about it. However, there’s more to it than metrics and healthchecks.

In another of my previous posts, I described how to provide versioning information for Maven-built applications. This article will describe how that approach is not necessary anymore when using Spring Boot.

As a reminder, just adding the spring-boot-starter-actuator dependency to the POM (see the snippet after the list) enables many endpoints, among them:

  • /metrics for monitoring the application
  • /health to check the application can deliver the expected service
  • /beans lists all Spring beans in the context
  • /configprops lists the collated @ConfigurationProperties of the running application
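For reference, the dependency declaration itself looks like the following sketch; the version is typically inherited from the Spring Boot parent POM or its dependency management:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>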

Among those, one is of specific interest: /info. By default, it displays… nothing – or more precisely, the string representation of an empty JSON object.

However, any property prefixed with info. that is set in the application.properties file (or one of its profile flavors) will find its way into the page. For example:

Property file:

info.application.name=My Demo App

Output:

{
  "application" : {
    "name" : "My Demo App"
  }
}

Setting static info is nice, but our objective is to expose the version of the application through Spring Boot. application.properties files are automatically filtered by Spring Boot during the process-resources build phase. Any property from the POM can be used: it just needs to be set between @ characters. For example:

Property file:

info.application.version=@project.version@

Output:

{
  "application" : {
    "version" : "0.0.1-SNAPSHOT"
  }
}

Note that the Spring Boot Maven plugin will remove the generated resources, and thus the application will use the unfiltered properties file from the sources. In order to keep (and use) the generated resources instead, configure the plugin in the POM like this:

<build>
  <plugins>
    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <addResources>false</addResources>
      </configuration>
    </plugin>
  </plugins>
</build>

At this point, we have the equivalent of the previous article, but we can go even further. The maven-git-commit-id-plugin generates a git.properties file stuffed with all possible Git-related information (a hedged sketch of the plugin declaration follows the example). The following snippet is an example of the produced file:

#Generated by Git-Commit-Id-Plugin
#Fri Jul 10 23:36:40 CEST 2015
git.tags=
git.commit.id.abbrev=bf4afbf
git.commit.user.email=nicolas@frankel.ch
git.commit.message.full=Initial commit\n
git.commit.id=bf4afbf167d51909bd984c35ad5b85a66b9c44b9
git.commit.id.describe-short=bf4afbf
git.commit.message.short=Initial commit
git.commit.user.name=Nicolas Frankel
git.build.user.name=Nicolas Frankel
git.commit.id.describe=bf4afbf
git.build.user.email=nicolas@frankel.ch
git.branch=master
git.commit.time=2015-07-10T23\:34\:46+0200
git.build.time=2015-07-10T23\:36\:40+0200
git.remote.origin.url=Unknown
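The file above is produced by declaring the plugin in the POM. Here’s a hedged sketch of that declaration – coordinates and configuration as I recall them, version omitted on purpose, so double-check against the plugin’s documentation:

<plugin>
  <groupId>pl.project13.maven</groupId>
  <artifactId>git-commit-id-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>revision</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- Assumed necessary so that git.properties ends up on the classpath -->
    <generateGitPropertiesFile>true</generateGitPropertiesFile>
  </configuration>
</plugin>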

From all of this data, only the following are used in the endpoint:

Keys:

git.branch
git.commit.id
git.commit.time

Output:

{
  "git" : {
    "branch" : "master",
    "commit" : {
      "id" : "bf4afbf",
      "time" : "2015-07-10T23:34:46+0200"
    }
  }
}

Since the path and the formatting are consistent, you can devise a cron job to query all your applications and generate a wiki page with all this information, per server/environment. No more having to SSH into the server and dig into the filesystem to uncover the version.

Thus, the /info endpoint can be a very powerful asset in your organization, whether you’re into DevOps yourself or only willing to help your Ops. More detailed information can be found in the Spring Boot documentation.

Categories: Development

Fully automated Android build pipeline

July 12th, 2015 No comments

In the course of my current job, I had to automate jobs for building Android applications. This post aims at describing the pain points I encountered, so that you readers don’t waste your time if you intend to do the same.

The environment is the following:

  • Puppet to automate the infrastructure
  • Jenkins for the CI server
  • The Android project
  • A Gradle build file to build it
  • Robolectric as the main testing framework

Puppet and Jenkins

My starting point was quite sound, indeed. Colleagues had already automated the installation of the Jenkins server and the required packages – including Java – as well as provided reusable Puppet classes for job creation. Jenkins jobs rely on a single config.xml file, which is an assembly of different sections; each section is handled by a dedicated template. At this point, I thought creating a simple Gradle job would be akin to a walk in the park, that it would take a few days at most and that I would soon be assigned another task.

The first step was easy enough: just update an existing Puppet manifest to add the Gradle plugin to Jenkins.

The Gradle wrapper

Regular readers of this blog know my opinion about Gradle. However, I admit that guaranteeing a build that works regardless of the installed tool version is something that Maven lacks – and should have. To achieve that, Gradle provides a so-called wrapper mechanism through a JAR, a shell script and a properties file, the latter containing the URL of the Gradle ZIP distribution. All three need to be stored in the SCM.

This was the beginning of my troubles. Having to download anything in an enterprise environment means going through – and authenticating to – the proxy. The simplest option would be to set everything in the job configuration… including the proxy credentials. However, going this way is not very sound from a security point of view, as anyone having access to the Jenkins interface or the filesystem would be able to read those credentials. Another option was needed.

The customer already had a working Nexus repository with a configured proxy. It was easy as pie to upload the required Gradle distribution there and update the wrapper’s gradle-wrapper.properties to point to it.

The Android SDK

The Android SDK is just a ZIP. I reused the same tactic: download it, then upload it to Nexus. At this point, an existing Puppet script took care of downloading it, extracting it and setting the right permissions.

This step is where the real problems begin, however. Android developers know that the Android SDK is just a manager: one has to manually check the desired platforms and tools to download them to the local filesystem. What is a simple step for an Android developer on his machine is a nightmare to automate, even though there’s a command-line equivalent to install/update packages through the SDK (with the --no-ui parameter). For a full description, please check this link.

Google’s engineers failed to provide two important parameters:

  • Proxy credentials – login/password
  • Accepting license agreements

There are a lot of non-working answers on the Web, the most alluring being a configuration file; I found none of them to work. However, I found a creative solution using the expect command. Expect is a nifty tool that reads the standard output and fills in the standard input accordingly. The good thing about expect is that it accepts regexps. So, when the SDK manager asks for the proxy login, you type the login; when it asks for the password, you do likewise; and when it asks for license acceptance, you type ‘y’. It’s pretty straightforward – though it took me a lot of trial and error to achieve the desired result.

My initial design was to have all necessary Android packages installed with Puppet as part of the server provisioning. For standard operations, such as file creation or system package installation, Puppet is able to determine whether provisioning is necessary: e.g. if the file exists, there’s no need to create it, and likewise if the package is already installed. In the end, Puppet reports each operation it performed in a log. At first, I tried to implement this sort of caching by telling Puppet which folders were created during the provisioning, since the Android SDK creates one folder per package. The first problem is that Puppet only accepts a single folder to verify. Then, for some packages, there’s no version information (for example, this is the case for Google Play services).

Thus, a colleague had the idea of moving this update from Puppet provisioning to a pre-step in each job. This fixes the non-idempotency issue. Besides, it makes running the update configurable per job.

Robolectric

At this point, I thought I would be done. Unfortunately, that wasn’t the case, due to a library – Robolectric.

I didn’t know about Robolectric at the time; I just knew it was a testing library for Android that provided a way to run tests without a connected physical device. While trying to run the build on Jenkins, I stumbled upon an “interesting” issue: although Robolectric provides a POM complete with dependencies, the MavenDependencyResolver class hard-codes the repository to download from.

The only provided workaround is to extend the above class with your own implementation. Mine used the enterprise Nexus repository mentioned above.

Upload and release tasks

The end of the implementation was relatively easy. The only missing pieces were the upload of the artifacts to the Nexus repository and the tagging of the release in the SCM.

In order to achieve the former, I just added a custom Gradle task to get the Nexus settings from the settings.xml (provisioned by Puppet), then made the upload task depend on it. Finally, for every flavor of the assemble task executed, I added the output file to the set of artifacts to be uploaded. This way, the following command uploads only flavors XXX and YYY, regardless of which flavors are configured in the build file:

./gradlew assembleXXX assembleYYY upload

For the release, it’s even simpler: the only required thing was to apply this Gradle plugin, which adds a release task, akin to Maven’s deploy.

Conclusion

As a backend developer, I’m used to setting up Continuous Integration and was nearly sure I could handle the Android CI process in a few days. I’ve been quite surprised by the lack of maturity of the Android ecosystem regarding CI. Every step is painful, badly documented (if at all), and solutions seem more like hacks than anything else. If you want to go down this path, you’ve been warned… and I wish you the best of luck.

Improve your tests with Mockito’s capture

July 5th, 2015 No comments

Unit testing mandates testing the unit in isolation. In order to achieve that, the general consensus is to design our classes in a decoupled way using DI. In this paradigm, whether using a framework or not, whether using compile-time or runtime injection, object instantiation is the responsibility of dedicated factories. In particular, this means the new keyword should be used only in those factories.

Sometimes, however, having a dedicated factory just doesn’t fit. This is the case when injecting a narrow-scoped instance into a wider-scoped one. A use case I stumbled upon recently concerns event buses, with code like this one:

 public class Sample {

    private EventBus eventBus;

    public Sample(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void done() {
        Result result = computeResult();
        eventBus.post(new DoneEvent(result));
    }

    private Result computeResult() {
        ...
    }
}

With a runtime DI framework – such as the Spring framework – and if the DoneEvent had no argument, this could be changed to a lookup method pattern:

public void done() {
    eventBus.post(getDoneEvent());
}

public abstract DoneEvent getDoneEvent();

Unfortunately, the argument just prevents us from using this nifty trick. And it cannot be done with runtime injection anyway. It doesn’t mean the done() method shouldn’t be tested, though. The problem is not only how to assert that a new DoneEvent is posted to the bus when the method is called, but also how to check the wrapped result.

Experienced software engineers probably know about the Mockito.any(Class) method. This could be used like this:

public void doneShouldPostDoneEvent() {
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(Mockito.any(DoneEvent.class));
}

In this case, we make sure an event of the right kind has been posted to the bus, but we cannot be sure what the result was. And if the result cannot be asserted, confidence in the code decreases. Mockito to the rescue. Mockito provides argument captors, which act as placeholders for parameters. The above code can be changed like this:

public void doneShouldPostDoneEventWithExpectedResult() {
    ArgumentCaptor<DoneEvent> captor = ArgumentCaptor.forClass(DoneEvent.class);
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(captor.capture());
    DoneEvent event = captor.getValue();
    assertThat(event.getResult(), is(expectedResult));
}

At line 2, we create a new ArgumentCaptor. At line 6, we replace the any() usage with captor.capture() and the trick is done. The posted event is then captured by Mockito and available through captor.getValue() at line 7. The final line – using Hamcrest – makes sure the result is the expected one.

Categories: Java

Spring 2015 European conferences tour

May 25th, 2015 No comments

I’ve just finished my Spring 2015 European conferences tour. I’ve talked about Integration Testing, Mutation Testing and Spring Boot for Devops at Spring IO (Spain), GeeCon, DevIT and JEEConf.

This is a summary of the sessions I attended and liked. Sessions I was not part of, lost time in, or slept through are not mentioned.

Spring I/O Barcelona (Spain)

Boot your Search with Spring
This is a nice introductory talk on the search feature brought by the Spring Data abstraction, over the Solr, Elasticsearch and MongoDB NoSQL stores.
Is Groovy better for testing than Java?
The title sums it all: checking whether Spock can/should be used for testing. I was pleasantly surprised to see the talk was well balanced and not an advertisement for Spock. I had already seen a talk on Spock and discarded it as inconclusive. Now, I should probably give it a try.
Master Spring Boot auto-configuration
Probably the best talk of the conference, it explains in a very comprehensive way how you can create your own Spring Boot module with auto-configuration capability.
Testing with Spring 4.x
Testing with Spring is not only very interesting to me, it will also be the subject of my talk at Spring One with Sam Brannen. Good thing, since the speaker was Sam himself: this was a good occasion to experience first-hand the way he speaks at conferences.
Document like the Spring team using Asciidoctor
Though not related to Spring, this talk was an enlightenment! I recently finished writing my latest book in simple Markdown, and Asciidoctor would have made the writing process so much easier! Now I want to write another book just for the chance to use it.

GeeCon – Krakow (Poland)

A Survival Guide to Resilient Reactive Application
Scopes what monitoring is and defines the related terms – the reactive part is not the most important.
G1 Garbage Collector: details and tuning
I’m not a system engineer, but now and then I try to attend related talks to get a feeling for what is going on. Most of the time I end up disappointed – because the talk targets experts – but this time I was not. The talk was clear and the speaker was entertaining.
HTTP/2 & Java Current Status
Same speaker, different subject. Good introduction to HTTP/2.
Analysing GitHub commits with R and Azure
I came to this talk by chance, because no other talk during this timeframe really attracted me. A nice Data Mining example using Github as a use case.

At this time, I had to take my plane to go to…

DevIT – Thessaloniki (Greece)

The future of responsive web design: web component queries
Very interesting introduction to some important features of HTML5: shadow DOM, templates, web components, etc. This talk really made me want to try them myself!
Your Service is not Rest
This talk defined what REST is and what it is not, and showed that, given the definition, most APIs provided are plain HTTP, not REST. Due to lack of time, the speaker couldn’t answer my question: how does HATEOAS convey which HTTP methods are available for which resources? Guess I’ll have to return next year.

Due to a lack of sleep caused by my late flight from Krakow, I’m afraid my attention was less than optimal during the rest of the talks.

JEEConf – Kyiv (Ukraine)

Pragmatic Functional Refactoring with Java 8
Some of Java 8’s features, including functions, currying, immutability and Optional.
Painfree Object-Document Mapping for MongoDB
Description of the Xenia library, a Java ODM for MongoDB. I put it on my list of available tools in case I ever face such a problem.
Making This Rhinoceros Thunder
I went to this talk because no other talk was in English, and I was pleasantly surprised. The speaker works on the Nashorn engine and explained not only how to make the compilation of JavaScript faster on the JVM but also what challenges the team faced in the implementation, and how they solved them.

Initially, I had two talks at JEEConf. Because a speaker had medical issues and couldn’t make it, I had the privilege of being invited by Josh Long to host a last-minute talk with him on Spring Boot and Vaadin. Then he also proposed that I join the Spring panel as a speaker. All in all, between the preparation of my talks and the talks proper, I couldn’t manage to attend any other sessions.

This was a great experience again, with many occasions to meet new people and see conference buddies again. Many thanks to the teams of those conferences for their organization and their time! See you soon.

Categories: Event

Quality Tools: humble servants or tyrants?

May 10th, 2015 No comments

I’ve always been an ardent proponent of internal quality in software, because in my various experiences, I’ve had more than my share of crappy codebases to maintain. I believe that quality tools can increase the internal quality of the code, thus decreasing maintenance costs in the long run. However, I don’t think that such tools are the only way to achieve that – I’m also a firm believer in code reviews.

Regarding quality tools, I started with Checkstyle, then with PMD, both static analysis tools. I’ve used FindBugs, a tool that doesn’t check the source code but the bytecode itself, but only sparingly for it seemed to me it reported too many false positives.

Finally, I found SonarQube (called Sonar at the time). I didn’t immediately fall in love with it, and it took me some months to get rid of my former Checkstyle and PMD companions. As soon as I did, however, I wanted to put it in place in every project I worked on – and on others too. When it added a timeline to see the trend regarding violations and other metrics, I knew it was the quality tool to use.

Now that the dust has finally settled, I don’t see many organizations where no quality tool is used, and that is good. I can’t imagine working without one: whether as a developer or team lead, whether using Sonar or simpler tools, their added value is simply too big to ignore.

On the other hand, I’m very wary of a rising trend: it seems as if once Sonar is in place, developers and managers alike treat its reports as the word of God. I can expect it from managers, but I definitely don’t want my fellow developers to set their brains aside and delegate their responsibilities to a tool, whatever the tool. Things become even worse when metrics from those rules are used as build breakers: the build fails because the project failed to achieve some pre-defined metrics.

Of course, there are some ways to mitigate the problem:

  • Use only a subset of Sonar rules. For example, the violation that checks for a private static final serialVersionUID attribute if the class directly or transitively implements Serializable is completely useless IMHO.
  • Use the NOSONAR comment
  • Configure each project. For example, Vaadin projects should exclude graphical classes from unit test coverage, as they probably have no behavior and thus no associated tests (do you unit test your JSPs?).

I’m afraid those are only ways to work around the limits. Every tool comes with a severe limitation: it cannot distinguish between contexts, and applies the same rules regardless. As a side note, notice this is also the case in big companies… The funniest part is that software engineers are in general the most active opponents of metrics-driven management – then they put SonarQube in place to assert code quality and are stubborn when it comes to contextualizing the results.

Quality tools are a big asset toward a more maintainable code base, but blindly applying a rule because the tool said so – or even worse, riddling your code base with // NOSONAR comments – is a serious mistake. I’m in favor of using tools, not of tools ruling me. Know what I mean?

Categories: Development

Connection is a leaky abstraction

April 26th, 2015 2 comments

As junior Java developers, we learn very early in our career about the JDBC API. We learn it’s a very important abstraction because it allows changing the underlying database in a transparent manner. I’m afraid what appeared as a good idea is just over-engineering, because:

  1. I’ve never seen such a database migration happen in more than 10 years
  2. Most of the time, the SQL written is not database independent

Still, there’s no denying that JDBC is at the bottom of every database interaction in Java. However, I recently stumbled upon another trap hidden very deep at the core of the java.sql.Connection interface. Basically, you perhaps have been told to close the Statement returned by the Connection? And also to close the ResultSet returned by the Statement? But perhaps you also have been told that closing the Connection will close all underlying objects – Statement and ResultSet?

So, which one is true? Well, “it depends” and there’s the rub…

  • On the one hand, if the connection is obtained from the DriverManager, calling Connection.close() will close the physical connection to the database and all underlying objects.
  • On the other hand, if the connection is obtained from a DataSource, calling Connection.close() will only return it to the pool, and you’ll need to close statements yourself.

In the latter case, if you don’t close those underlying statements, database cursors will stay open, the RDBMS limit will be reached at some point and new statements won’t be executed. Conclusion: always close statement objects (as I already wrote about)! Note the result set will be closed when the statement is.
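For those stuck on earlier Java versions – and without a data access framework – a minimal hedged sketch of the manual cleanup could look like this:

PreparedStatement ps = null;
try {
    ps = connection.prepareStatement("Put SQL here");
    ResultSet rs = ps.executeQuery();
    // Do something with ResultSet
} catch (SQLException e) {
    // Handle exception
    e.printStackTrace();
} finally {
    if (ps != null) {
        try {
            // Closing the statement also closes its result set
            ps.close();
        } catch (SQLException e) {
            // Nothing more can be done at this point
        }
    }
}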

If you’re lucky enough to use Java 7 – and don’t use a data access framework – the code to use is the following:

try (PreparedStatement ps = connection.prepareStatement("Put SQL here")) {
    try (ResultSet rs = ps.executeQuery()) {
        // Do something with ResultSet
    }
} catch (SQLException e) {
    // Handle exception
    e.printStackTrace();
}

And if you want to make sure cursors will be closed even with faulty code, good old Tomcat provides the StatementFinalizer interceptor for that. Just configure it in the server.xml configuration file when you declare your Resource:

<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
 jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" />

Note: while you’re there, you can also check the ResetAbandonedTimer interceptor. It can be used in conjunction with the removeAbandonedTimeout attribute, which configures the time after which a connection is considered abandoned and forcibly returned to the pool. If the attribute’s value is too low, connections still in use might be reclaimed; with the interceptor, each use of the connection resets the timer.

Categories: Java

Polyglot everywhere – part 2

April 19th, 2015 No comments

Last week, we set up a new project using the YAML flavor of Polyglot Maven. Now is the time for some server-side code!

As a long-time Vaadin advocate, let’s create a very simple Vaadin application. This will have the added advantage of letting us hack something on the client side as well in the last part of this series. As we are fully polyglot, we will avoid the old Java language and use something very cool instead. As I’ve been to some conferences with its number one advocate, I settled on the Kotlin language.

Disclaimer: I’m far from being a Kotlin user – and this is not about Kotlin anyway, so please pardon my mistakes in the following.

Java is still Maven’s first-class citizen, so the first step is to add some configuration for the Kotlin compiler to kick in. It’s quite easy, especially given the previous work on the polyglot POM:

build:
    sourceDirectory: ${project.basedir}/src/main/kotlin
    plugins:
        -   groupId: org.jetbrains.kotlin
            artifactId: kotlin-maven-plugin
            version: 0.11.91.1
            executions:
            -   id: compile
                phase: compile
                goals:
                    - compile
        # Other plugins go there

The next step is to configure the web application. It can be done either with the old XML web deployment descriptor or, since Servlet 3.0, with annotations. Since XML is far from polyglot, let’s use the Kotlin way. Besides, Kotlin annotations are cooler than Java’s:

WebServlet (
    name = "VaadinServlet",
    urlPatterns = array("/*"),
    initParams = array(
        WebInitParam(
            name = "UI",
            value = "ch.frankel.blog.polyglot.MainUi"
        )
    )
)
VaadinServletConfiguration(
    productionMode = false,
    ui = javaClass<MainUi>()
)
class KotlinServlet: VaadinServlet()

Regular Vaadin users know about the final step. A UI needs to be created. Again, this is quite straightforward:

class MainUi: UI() {
    override fun init(request: VaadinRequest) {
        val label = Label("Hello from Polyglot Everywhere")
        val layout = VerticalLayout(label)
        layout.setMargin(true)
        layout.setSpacing(true)
        setContent(layout)
    }
}

And with these steps, we’ve achieved a polyglot webapp!
The next article in this series will add a client-side component for Vaadin. Don’t miss it!

Categories: Development