Archive for the ‘Java’ Category

Creative Rx usage in testing setup

August 16th, 2015

I don’t know if events became part of software engineering with Graphical User Interface interactions, but they are certainly a very convenient way of modeling them. With more and more interconnected systems, asynchronous event management has become an important issue to tackle. With Functional Programming also on the rise, this gave birth to libraries such as RxJava. However, modeling a problem as the handling of a stream of events shouldn’t be restricted to system events. It can also be used in testing in many different ways.

One common use-case for testing setup is to launch a program, for example an external dependency such as a mock server. In this case, we need to wait until the program has been successfully launched. Conversely, the test should stop as soon as the external program fails to launch. If the program has a Java API, that’s easy. However, this is rarely the case, and more basic APIs are generally used, such as ProcessBuilder or Runtime.getRuntime().exec():

ProcessBuilder builder;
Process process;

@BeforeMethod
protected void setUp() throws IOException {
    builder = new ProcessBuilder().command("script.sh");
    process = builder.start();
}

@AfterMethod
protected void tearDown() {
    process.destroy();
}

The traditional way to handle this problem was to put a big Thread.sleep() just after the launch. Not only was it system-dependent – launch time changes from system to system – it also didn’t tackle the case where the launch failed. In that latter case, precious computing time as well as manual relaunch time were lost. Better solutions exist, but they involve a lot of lines of code as well as some (or a high) degree of complexity. Wouldn’t it be nice if we could have a simple and reliable way to start the program and, depending on the output, either continue the setup or fail the test? Rx to the rescue!

The first step is to create an Observable around the launched process’s input stream:

Observable<String> observable = Observable.create(subscriber -> {
    InputStream stream = process.getInputStream();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
        String line;
        while ((line = reader.readLine()) != null) {
            subscriber.onNext(line);
        }
        subscriber.onCompleted();
    } catch (Exception e) {
        subscriber.onError(e);
    }
});

In the above snippet:

  • Each time the process writes on the output, a next event is sent
  • When there is no more output, a complete event is sent
  • Finally, if an exception happens, it’s an error event

By Rx definition, either of the last two events marks the end of the sequence.

Once the observable has been created, it just needs to be observed for events. A simple script that emits a single event can be listened to with the following snippet:

BlockingObservable<String> blocking = observable.toBlocking();
blocking.first();

The thing of interest here is the wrapping of the Observable instance in a BlockingObservable. While the former can be combined with others, the latter adds methods to manage events. At this point, the first() method will listen to the first (and single) event.

For a more complex script that emits a random number of regular events terminated by a single end event, a code snippet could be:

BlockingObservable<String> blocking = observable
    .filter("Done"::equals)
    .toBlocking();
blocking.first();

In this case, whatever the number of regular events, the filter() method provides a way to listen only to the event we are interested in.

The previous cases do not reflect reality, however. Most of the time, setup scripts should start before the tests and run in parallel to them, i.e. the end event is never sent – at least not until after the tests have finished. In this case, there is some threading involved. Rx lets us handle that quite easily:

BlockingObservable<String> blocking = observable
    .subscribeOn(Schedulers.newThread())
    .take(5)
    .toBlocking();
blocking.first();

There is a single difference here: the subscriber will listen on a new thread thanks to the subscribeOn() method. Alternatively, the observeOn() method could have been used to control the thread on which events are observed. Note I replaced the filter() method with take() to pretend to be interested only in the first 5 events.
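
Putting the pieces together, the whole setup could look something like the following sketch (RxJava 1.x API; the script name and the "Done" ready marker are assumptions carried over from the previous snippets):

@BeforeMethod
protected void setUp() throws IOException {
    builder = new ProcessBuilder().command("script.sh");
    process = builder.start();
    Observable<String> observable = Observable.create(subscriber -> {
        InputStream stream = process.getInputStream();
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
            String line;
            while ((line = reader.readLine()) != null) {
                subscriber.onNext(line);
            }
            subscriber.onCompleted();
        } catch (Exception e) {
            subscriber.onError(e);
        }
    });
    // Block until the script prints its "Done" ready marker,
    // while it keeps running in the background for the duration of the tests
    observable.filter("Done"::equals)
              .subscribeOn(Schedulers.newThread())
              .toBlocking()
              .first();
}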

At this point, the test setup is finished. Happy testing!

Sources for this article can be found here in IDEA/Maven format.


Compile-time dependency injection tradeoffs in Android

August 2nd, 2015

As a backend software developer, I’m used to Spring as my favorite Dependency Injection engine. Alternatives include Java EE’s CDI which achieves the same result – in a different way. However, both inject at runtime: that means that there’s a definite performance cost to pay at the start of the application, the time it takes for all dependencies to be fulfilled. On an application server, where the application lifespan is measured in days (if not weeks), the start time overhead is acceptable. It is even fully transparent if the server is but a node in a large cluster.

As an Android user, I’m not happy when I start an app and it lags for several seconds before opening. It would be very bad in terms of user-friendliness if we were to add several more seconds to that time. Even worse, the memory consumption of a DI engine would be a disaster. That’s the reason why Square developed a compile-time dependency injection mechanism called Dagger. Note that Dagger 2 is currently under development by Google. Before going further, I must admit that the documentation of Dagger 2 is succinct – at best. But it’s a great opportunity for another blog post :-)

Dagger 2 works with an annotation processor: when compiling, it analyzes your annotated code and produces the wiring code between your components. The good thing is that this code is pretty similar to what you would write yourself if you were to do it manually; there’s no secret black magic (as opposed to runtime DI and its proxies). The following code displays a class to be injected:

public class TimeSetListener implements TimePickerDialog.OnTimeSetListener {

    private final EventBus eventBus;

    public TimeSetListener(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
        eventBus.post(new TimeSetEvent(hourOfDay, minute));
    }
}

Notice the code is completely independent of Dagger in every way. One cannot infer how it will be injected in the end. The interesting part is how to use Dagger to inject the required eventBus dependency. There are two steps:

  1. Get a reference to an eventBus instance in the context
  2. Call the constructor with the relevant parameter

The wiring configuration itself is done in a so-called module:

@Module
public class ApplicationModule {

    @Provides
    @Singleton
    public TimeSetListener timeSetListener(EventBus eventBus) {
        return new TimeSetListener(eventBus);
    }

    ...
}

Notice that the EventBus is passed as a parameter to the method, and it’s up to the context to provide it. Also, the scope is explicitly @Singleton.
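
For completeness, the eventBus parameter itself has to come from somewhere: another @Provides method in the module can supply it. A minimal sketch, assuming an EventBus implementation with a no-argument constructor:

@Provides
@Singleton
public EventBus eventBus() {
    return new EventBus();
}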

The binding to the factory occurs in a component, which references the required module (or more):

@Component(modules = ApplicationModule.class)
@Singleton
public interface ApplicationComponent {
    TimeSetListener timeListener();
    ...
}

It’s quite straightforward… until one notices that some – if not most – objects in Android have a lifecycle managed by Android itself, with no call to our injection-friendly constructor. Activities are such objects: they are instantiated and launched by the framework. Only through dedicated lifecycle methods like onCreate() can we hook our code into the object. This use-case looks much less appealing, as field injection is mandatory. Worse, it is also required to call Dagger explicitly: in this case, it acts as a plain factory.

public class EditTaskActivity extends AbstractTaskActivity {

    @Inject TimeSetListener timeListener;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        DaggerApplicationComponent.create().inject(this);
    }
    ...
}

For the first time we see a coupling to Dagger, and it’s a big one. What is DaggerApplicationComponent? An implementation of the former ApplicationComponent, as well as a factory to provide instances of it. And since the generated component only exposes what the interface declares, we have to add the inject() method to our interface:

@Component(modules = ApplicationModule.class)
@Singleton
public interface ApplicationComponent {
    TimeSetListener timeListener();
    void inject(EditTaskActivity editTaskActivity);
    ...
}

For the record, the generated class looks like:

@Generated("dagger.internal.codegen.ComponentProcessor")
public final class DaggerApplicationComponent implements ApplicationComponent {
  private Provider<TimeSetListener> timeSetListenerProvider;
  private MembersInjector<EditTaskActivity> editTaskActivityMembersInjector;

  ...

  private DaggerApplicationComponent(Builder builder) {  
    assert builder != null;
    initialize(builder);
  }

  public static Builder builder() {  
    return new Builder();
  }

  public static ApplicationComponent create() {  
    return builder().build();
  }

  private void initialize(final Builder builder) {  
    this.timeSetListenerProvider = ScopedProvider.create(ApplicationModule_TimeSetListenerFactory.create(builder.applicationModule, eventBusProvider));
    this.editTaskActivityMembersInjector = TimeSetListener_MembersInjector.create((MembersInjector) MembersInjectors.noOp(), timeSetListenerProvider);
  }

  @Override
  public EventBus eventBus() {  
    return eventBusProvider.get();
  }

  @Override
  public void inject(EditTaskActivity editTaskActivity) {  
    editTaskActivityMembersInjector.injectMembers(editTaskActivity);
  }

  public static final class Builder {
    private ApplicationModule applicationModule;
  
    private Builder() {  
    }
  
    public ApplicationComponent build() {  
      if (applicationModule == null) {
        this.applicationModule = new ApplicationModule();
      }
      return new DaggerApplicationComponent(this);
    }
  
    public Builder applicationModule(ApplicationModule applicationModule) {  
      if (applicationModule == null) {
        throw new NullPointerException("applicationModule");
      }
      this.applicationModule = applicationModule;
      return this;
    }
  }
}

There’s no such thing as a free lunch. Compile-time DI is very appealing at first glance, but it becomes much less so when used on objects whose lifecycle is not managed by our code. The downsides become apparent: coupling to the DI framework and, more importantly, an increased difficulty to unit-test the class. However, considering Android constraints, this might be the best that can be achieved.


Fully automated Android build pipeline

July 12th, 2015

In the course of my current job, I had to automate jobs for building Android applications. This post aims at describing the pain points I encountered, so that you readers don’t waste your time if you intend to do the same.

The environment is the following:

  • Puppet to automate the infrastructure
  • Jenkins for the CI server
  • The Android project
  • A Gradle build file to build it
  • Robolectric as the main testing framework

Puppet and Jenkins

My starting point was quite sound, indeed. Colleagues had already automated the installation of the Jenkins server and the required packages – including Java – and provided reusable Puppet classes for job creation. Jenkins jobs rely on a single config.xml file, which is an assembly of different sections, each handled by a dedicated template. At this point, I thought creating a simple Gradle job would be akin to a walk in the park, that it would take a few days at most and that I would soon be assigned another task.

The first step was easy enough: just update an existing Puppet manifest to add the Gradle plugin to Jenkins.

The Gradle wrapper

Regular readers of this blog know my opinion about Gradle. However, I admit that guaranteeing a build that works regardless of the installed tool version is something that Maven lacks – and should have. To achieve that, Gradle provides a so-called wrapper mechanism through a JAR, a shell script and a properties file, the latter containing the URL of the Gradle ZIP distribution. All three need to be stored in the SCM.

This was the beginning of my troubles. Having to download in an enterprise environment means going through, and authenticating to, the proxy. The simplest option would be to set everything in the job configuration… including the proxy credentials. However, going this way is not very sound from a security point of view, as anyone with access to the Jenkins interface or the filesystem would be able to read those credentials. There was a need for another option.

The customer already had a working Nexus repository with a configured proxy. It was easy as pie to upload the required Gradle distribution there and update the wrapper properties file to point to it.

The Android SDK

The Android SDK is just a ZIP. I reused the same tactic: download it, then upload it to Nexus. At this point, an existing Puppet script took care of downloading it, extracting it and setting the right permissions.

This step is the beginning of the real problems, however. Android developers know that the Android SDK is just a manager: one has to manually check the desired platforms and tools to download them to the local filesystem. What is a simple step for an Android developer on his machine is a nightmare to automate, though there’s a command-line equivalent to install/update packages through the SDK (with the --no-ui parameter). For a full description, please check this link.

Google engineers failed to provide two important parameters:

  • Proxy credentials – login/password
  • Accepting license agreements

There are a lot of non-working answers on the Web, the most alluring being a configuration file. I found none of them to work. However, I found a creative solution using the expect command. Expect is a nifty command that reads the standard output and fills in the standard input accordingly. The good thing about expect is that it accepts regexps. So, when it asks for the proxy login, you type the login; when it asks for the password, you do likewise; and when it asks for license acceptance, you type ‘y’. It’s pretty straightforward – though it took me a lot of trial and error to achieve the desired results.

My initial design was to have all necessary Android packages installed with Puppet as part of the server provisioning. For standard operations, such as file creation or system package installation, Puppet is able to determine whether provisioning is necessary, e.g. if the file exists, there’s no need to create it, and likewise if the package is installed. In the end, Puppet reports each operation it performed in a log. At first, I tried to implement this sort of caching by telling Puppet which folders were created during the provisioning, since the Android SDK creates one folder per package. The first problem is that Puppet only accepts a single folder to verify. Then, for some packages, there’s no version information (for example, this is the case for Google Play services).

Thus, a colleague had the idea to move this update from Puppet provisioning to a pre-step in each job. This fixes the non-idempotency issue. Besides, it makes running the update configurable per job.

Robolectric

At this point, I thought I would be done. Unfortunately, that wasn’t the case, due to a library – Robolectric.

I didn’t know about Robolectric at the time; I just knew it was a testing library for Android that provided a way to run tests without a connected physical device. While trying to run the build on Jenkins, I stumbled upon an “interesting” issue: although Robolectric provides a POM complete with dependencies, the MavenDependencyResolver class hard-codes the repository to download them from.

The only provided workaround is to extend the above class to hack your own implementation. Mine used the enterprise Nexus repository mentioned above.

Upload and release tasks

The end of the implementation was relatively easy. Only the upload of the artifacts to the Nexus repository and the tagging of the release in the SCM were missing.

In order to achieve the former, I just added a custom Gradle task to get Nexus settings from the settings.xml (provisioned by Puppet). Then I made the upload task depend on this one. Finally, for every flavor of the assemble task executed, I added the output file to the set of artifacts to be uploaded. This way, the following command uploads only flavors XXX and YYY, regardless of which flavors are configured in the build file:

./gradlew assembleXXX assembleYYY upload

For the release, it’s even simpler: the only required thing was to set this Gradle plugin, which adds a release task, akin to Maven’s deploy.

Conclusion

As a backend developer, I’m used to Continuous Integration setups and was nearly sure I could handle the Android CI process in a few days. I’ve been quite surprised by the lack of maturity of the Android ecosystem regarding CI. Every step is painful, badly documented (if at all) and solutions seem more like hacks than anything else. If you want to go down this path, you’ve been warned… and I wish you the best of luck.


Improve your tests with Mockito’s capture

July 5th, 2015

Unit Testing mandates testing the unit in isolation. In order to achieve that, the general consensus is to design our classes in a decoupled way using DI. In this paradigm, whether using a framework or not, whether wiring at compile time or at runtime, object instantiation is the responsibility of dedicated factories. In particular, this means the new keyword should be used only in those factories.

Sometimes, however, having a dedicated factory just doesn’t fit. This is the case when injecting a narrow-scope instance into a wider-scope instance. A use-case I stumbled upon recently concerns the event bus, with code like this one:

public class Sample {

    private EventBus eventBus;

    public Sample(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void done() {
        Result result = computeResult();
        eventBus.post(new DoneEvent(result));
    }

    private Result computeResult() {
        ...
    }
}

With a runtime DI framework – such as the Spring framework – and if the DoneEvent had no argument, this could be changed to a lookup method pattern.

public void done() {
    eventBus.post(getDoneEvent());
}

public abstract DoneEvent getDoneEvent();

Unfortunately, the argument prevents us from using this nifty trick. And it cannot be done with compile-time injection anyway. It doesn’t mean the done() method shouldn’t be tested, though. The problem is not only how to assert that a new DoneEvent is posted to the bus when the method is called, but also how to check the wrapped result.

Experienced software engineers probably know about the Mockito.any(Class) method. This could be used like this:

public void doneShouldPostDoneEvent() {
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(Mockito.any(DoneEvent.class));
}

In this case, we make sure an event of the right kind has been posted to the bus, but we cannot be sure what the result was. And if the result cannot be asserted, confidence in the code decreases. Mockito to the rescue! Mockito provides argument captors, which act like placeholders for parameters. The above code can be changed like this:

public void doneShouldPostDoneEventWithExpectedResult() {
    ArgumentCaptor<DoneEvent> captor = ArgumentCaptor.forClass(DoneEvent.class);
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(captor.capture());
    DoneEvent event = captor.getValue();
    assertThat(event.getResult(), is(expectedResult));
}

At line 2, we create a new ArgumentCaptor. At line 6, we replace the any() usage with captor.capture() and the trick is done. The argument is captured by Mockito and made available through captor.getValue() at line 7. The final line – using Hamcrest – makes sure the result is the expected one.


Connection is a leaky abstraction

April 26th, 2015

As junior Java developers, we learn very early in our career about the JDBC API. We learn it’s a very important abstraction because it allows changing the underlying database in a transparent manner. I’m afraid what appears to be a good idea is just over-engineering because:

  1. I’ve never seen such a database migration happen in more than 10 years
  2. Most of the time, the SQL written is not database independent

Still, there’s no denying that JDBC is at the bottom of every database interaction in Java. However, I recently stumbled upon another trap hidden very deeply at the core of the java.sql.Connection interface. Basically, you have perhaps been told to close the Statement returned by the Connection? And also to close the ResultSet returned by the Statement? But perhaps you have also been told that closing the Connection will close all underlying objects – Statement and ResultSet?

So, which one is true? Well, “it depends” and there’s the rub…

  • On the one hand, if the connection is returned from the DriverManager, calling Connection.close() will close the physical connection to the database and all underlying objects.
  • On the other hand, if the connection is returned from a DataSource, calling Connection.close() will only return it to the pool and you’ll need to close statements yourself.

In the latter case, if you don’t close those underlying statements, database cursors will stay open, the RDBMS limit will be reached at some point and new statements won’t be executed. Conclusion: always close statement objects (as I already wrote about)! Note the result set will be closed when the statement is.

If you’re lucky enough to use Java 7 – and don’t use a data access framework – the code to use is the following:

try (PreparedStatement ps = connection.prepareStatement("Put SQL here")) {
    try (ResultSet rs = ps.executeQuery()) {
        // Do something with ResultSet
    }
} catch (SQLException e) {
    // Handle exception
    e.printStackTrace();
}
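
For reference, before Java 7 – or without try-with-resources – the same guarantee requires explicit finally blocks. A minimal sketch:

PreparedStatement ps = null;
ResultSet rs = null;
try {
    ps = connection.prepareStatement("Put SQL here");
    rs = ps.executeQuery();
    // Do something with ResultSet
} catch (SQLException e) {
    // Handle exception
    e.printStackTrace();
} finally {
    if (rs != null) {
        try { rs.close(); } catch (SQLException ignored) { }
    }
    if (ps != null) {
        try { ps.close(); } catch (SQLException ignored) { }
    }
}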

And if you want to make sure cursors will be closed even with faulty code, good old Tomcat provides the StatementFinalizer interceptor for that. Just configure it in the server.xml configuration file when you declare your Resource:

<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
 jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" />

Note: while you’re there, you can also check the ResetAbandonedTimer interceptor. It can be used in conjunction with the removeAbandonedTimeout attribute: this configures the time after which a borrowed connection is considered abandoned and returned to the pool. If the attribute’s value is too low, connections still in use might be reclaimed. With the interceptor, each use of the connection resets the timer.


What’s the version of my deployed application?

April 6th, 2015

In my career, I’ve noticed many small and inexpensive features that didn’t find their way into the Sprint backlog because they didn’t provide business value. However, they would have provided plenty of ROI during the life of the application; that was completely overlooked due to short-sighted objectives (set by short-sighted management). Those include, but are not limited to:

  • Monitoring in general, and more specifically metrics, health checks, etc. Spare 5 days now and spend 10 times that later (or more…) because you don’t know how your application works.
  • Environment data, e.g. development, test, production, etc. It’s especially effective when associated with a coloured banner dependent on the environment. If you don’t do that, I’m not responsible if I just deleted the last 10 days of your production data because I thought it was the test environment. Such a mistake is quite easy to make, especially if logins/passwords are the same for all environments – yes, LDAP setup is complex, so let’s have only one.
  • Application build data, most importantly the version number, and if possible the build number and the build time. Having to SSH into the server (if possible at all) or search the Wiki (if it’s up to date, a most unlikely occurrence) just to know the version is quite cumbersome when you need the info right now.

Among them, I believe the most basic one is the latter. We are used to checking the About dialog in desktop applications, and unless you deliver many (many many) times a day, an equivalent is necessary for any real-world enterprise-grade application. In the realm of SOA and micro-services, it means this info should also be part of responses.

With Maven, it’s quite easy to achieve: only a simple properties file is needed and the maven-resources-plugin will work its magic. Maven provides a filtering feature, meaning placeholders can be set in any resource and they will be replaced by their values at build time. Filtering is not enabled by default. To activate it, the following snippet will do:

<build>
    <resources>
        <resource>
            <directory>${basedir}/src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>

I wrote about placeholders above, but I didn’t specify which. Simple: any data set in the POM – as well as a few special properties – can be used as a placeholder. Just use the DOM path inside ${ and }, like this: ${dom.path}.

Here’s an example, with a property file:

application.version=${project.version}
build.date=${maven.build.timestamp}

If this snippet is put in a file inside the src/main/resources directory, Maven will generate a similarly-named file with the values filtered, inside the target/classes directory, just after the process-resources phase. The specific values depend of course on the POM, but here’s a sample output:

application.version=1.0.0-SNAPSHOT
build.date=20150303-1335

Things are unfortunately not as straightforward, as there’s a bug in Maven regarding ${maven.build.timestamp}. It cannot be filtered directly and requires adding a level of indirection:

<properties>
    <build.timestamp>${maven.build.timestamp}</build.timestamp>
</properties>
<build>
    <resources>
        <resource>
            <directory>${basedir}/src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>

The following properties file will now work as expected:

application.version=${project.version}
build.date=${build.timestamp}

At this point, it’s just a matter of reading this property file when required (a loading sketch follows the list below). This includes:

In webapps
  • Provide a dedicated About page, as in desktop applications. I’ve rarely seen that, and never implemented it.
  • Add a footer with the info. For added user-friendliness, set the text color to the background color, so that users are not disturbed by the info. Only people who know (or are curious) can access it – it’s not confidential anyway.
In services
Whether SOAP or REST, XML or JSON, there are also a few options:

  • As a dedicated service endpoint (e.g. /about or /version). This is the simplest to implement. It’s even better if the same endpoint is used throughout the organization.
  • As additional info on all endpoints. This is required when each endpoint can be built separately and assembled. The easiest path then is to put it in HTTP headers, while it is harder to put it in the data. The latter will probably require some interceptor approach as well as an operation on the schema (if any).
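
In both cases, reading the filtered file boils down to loading it from the classpath. A minimal sketch – the build.properties file name is an assumption, use whatever name you gave the resource:

Properties buildInfo = new Properties();
try (InputStream in = getClass().getResourceAsStream("/build.properties")) {
    if (in != null) {
        buildInfo.load(in);
    }
} catch (IOException e) {
    // Handle exception
}
String version = buildInfo.getProperty("application.version");
String buildDate = buildInfo.getProperty("build.date");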

There are two additional options:

  1. To differentiate between snapshot versions, use the Maven Build Number plugin. I have no experience of actual usage. However, it requires correct configuration of SCM information.
  2. Some applications not only display version information but also environment information (e.g. development, integration, staging, etc.). I’ve seen it used either through a specific banner or through a background color. This requires a dedicated Java property set at JVM launch time or a system environment variable (see the sketch below).
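
A minimal sketch of that last option – the app.environment property and APP_ENV variable names are made up for illustration:

// Read -Dapp.environment=staging set at JVM launch, or fall back to an APP_ENV environment variable
String environment = System.getProperty("app.environment");
if (environment == null) {
    environment = System.getenv("APP_ENV");
}
if (environment == null) {
    environment = "development";
}
// Use the value to pick e.g. the banner text or background color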

The filtering stuff can be done in 1 hour max in a greenfield environment. The cost of using the data is more variable, but in a simple webapp footer case, it can be done in less than a day. Compare that to the time lost getting the version during the lifetime of the application…


Better developer-to-developer collaboration with Bintray

March 29th, 2015

I recently got interested in Spring Social, and as part of my learning path, I tried to integrate its GitHub module, which is still in incubator mode. Unfortunately, this module seems to have been left behind, and it depends on an old version of the core module. Since I use the latest version of the core, Maven resolves one version to put in the WEB-INF/lib folder of the WAR package. Unfortunately, it doesn’t work so well at runtime.

The following diagram shows this situation:

Dependencies original situation

 

I could have excluded the old version from the transitive dependencies, but I’m lazy and Maven doesn’t make it easy (yet). Instead, I decided to just upgrade the GitHub module to the latest version and install it in my local repository. That proved to be quite easy, as there was no incompatibility with the newest version of the core – I even created a pull request. This is the updated situation:

Dependencies final situation

Unfortunately, if I now decide to distribute this version of my application, nobody will be able to either build or run it, since only I have the “patched” (latest) version of the GitHub module available in my local repo. I could distribute the updated sources along with it, but that would mean you would have to build and install them into your local repo before using my app.

Bintray to the rescue! Bintray is a binary repository, able to host any kind of binaries: JARs, WARs, DEBs, anything. It is hosted online, and free for open-source projects, which nicely suits my use-case. This is how I uploaded my artifact to Bintray.

Create an account
Bintray makes it quite easy to create such an account, using one of the available authentication providers – GitHub, Twitter or Google+. Alternatively, one can create an old-style account, with a password.
Create an artifact
Once authenticated, an artifact needs to be created. Select your default Maven repository; it can be found at https://bintray.com//maven. Then, click on the big Add New Package button located on the right border. On the opening page, fill in the required information. The package can be named whatever you want; I chose to use the Maven artifact identifier: spring-social-github.
Create a version
Files can only be added to a version, so a version needs to be created first. On the package detail page, click on the New Version link (second column, first line). On the opening page, fill in the version name. Note that snapshots are not accepted, and this is only checked through the -SNAPSHOT suffix. I chose to use 1.0.0.BUILD.
Upload files
Once the version is created, files can finally be uploaded. In the top bar, click the Upload Files button. Drag and drop all desired files – of course the main JAR and the POM, but this can also include source and javadoc JARs. Notice the Target Repository Path field: it should be set to the logical path to the Maven artifact, including groupId, artifactId and version separated by slashes. For example, my use-case resolves to org/springframework/social/spring-social-github/1.0.0.BUILD. Note that instead of filling in this field, you can wait for the files to be uploaded, as Bintray will detect the upload, analyze the POM and propose to set it automatically: if this fits – and it probably does – just accept the proposal.
Publish
Uploading files is not enough, as those files remain temporary until publication. A big notice warns about it: just click on the Publish link located on the right border.

At this point, you only need to add the Bintray repository to the POM.

<repositories>
    <repository>
        <id>bintray</id>
        <url>http://dl.bintray.com/nfrankel/maven</url>
        <releases>
            <enabled>true</enabled>
        </releases>
    </repository>
</repositories>

Become a DevOps with Spring Boot

March 8th, 2015

Have you ever found yourself in the situation of finishing a project and being about to deliver it to the Ops team? You’re so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file – and documented, and you even made sure logging has been implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don’t know how to monitor the new application. And you missed that… Sure, you could hack something together to fulfill this requirement, but the project is already over budget. In some (most?) companies, this means someone will have to be blamed, and chances are the developer will bear all the burden. Time for some sleepless nights.

Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are among the features known to most. However, I think there’s a hidden gem that should be much more advertised. The actuator module provides metrics and health checks out-of-the-box, as well as an easy way to add your own. In this article, we’ll see how to access those metrics from HTTP and send them to JMX and Graphite.

As an example application, let’s use an update of the Spring Pet Clinic made with Boot – thanks to Arnaldo Piccinelli for his work. The starting point is commit 790e5d0. Now, let’s add some metrics in no time.

The first step is to add the actuator module starter to the Maven POM and let Boot do its magic:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics (note that the path is protected by Spring Security, credentials are user/password) to see something like the following:

{
  "mem" : 562688,
  "mem.free" : 328492,
  "processors" : 8,
  "uptime" : 26897,
  "instance.uptime" : 18974,
  "heap.committed" : 562688,
  "heap.init" : 131072,
  "heap.used" : 234195,
  "heap" : 1864192,
  "threads.peak" : 20,
  "threads.daemon" : 17,
  "threads" : 19,
  "classes" : 9440,
  "classes.loaded" : 9443,
  "classes.unloaded" : 3,
  "gc.ps_scavenge.count" : 16,
  "gc.ps_scavenge.time" : 104,
  "gc.ps_marksweep.count" : 2,
  "gc.ps_marksweep.time" : 152
}

As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root, new metrics appear:

{
  "counter.status.200.metrics" : 1,
  "counter.status.200.root" : 2,
  "counter.status.304.star-star" : 4,
  "counter.status.304.webjars.star-star" : 1,
  "gauge.response.metrics" : 72.0,
  "gauge.response.root" : 16.0,
  "gauge.response.star-star" : 8.0,
  "gauge.response.webjars.star-star" : 11.0,
  ...
}

Those metrics are more functional in nature, and they are separated into two groups (custom metrics can also be recorded – see the sketch after this list):

  • Gauges are the simplest metrics and return a numeric value e.g. gauge.response.root is the time (in milliseconds) of the last response from the /metrics path
  • Counters are metrics which can be incremented/decremented e.g. counter.status.200.metrics is the number of times the /metrics path returned a HTTP 200 code

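As mentioned above, custom metrics can also be recorded by injecting Boot’s CounterService and GaugeService (from the org.springframework.boot.actuate.metrics package). A minimal sketch – the controller and metric names are made up for illustration:

@Controller
public class VisitController {

    private final CounterService counterService;
    private final GaugeService gaugeService;

    @Autowired
    public VisitController(CounterService counterService, GaugeService gaugeService) {
        this.counterService = counterService;
        this.gaugeService = gaugeService;
    }

    @RequestMapping("/visits")
    public String visits() {
        counterService.increment("visits");           // shows up as counter.visits
        gaugeService.submit("lastVisitDuration", 42);  // shows up as gauge.lastVisitDuration
        return "visits";
    }
}
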
At this point, your Ops team could probably scrape the returned JSON and make something out of it. It will be their responsibility to regularly poll the URL and to use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by putting these metrics in JMX.

Spring Boot integrates easily with Dropwizard metrics. By just adding the following dependency to the POM, Boot is able to provide a MetricRegistry, a Dropwizard registry for all metrics:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>4.0.0-SNAPSHOT</version>
</dependency>
Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public JmxReporter jmxReporter() {
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
        return reporter;
    }
}

Launching jconsole lets us check it works alright. The Ops team now just needs to get the metrics from JMX and push them into their preferred graphical display tool, such as Graphite. One way to achieve this is through jmxtrans. However, it’s also possible to send metrics directly to the Graphite server with just a few different API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot").build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}

The result is quite interesting given the few lines of code. Note that going through the JMX route makes things easier in development environments, as there’s no need for a dedicated Graphite server there.


Final release of Integration Testing from the Trenches

March 1st, 2015
Writing a book is a journey. At the beginning of the journey, you mostly know where you want to go, but you have only a vague notion of the way to get there and the time it will take. I’ve finally released the paperback version of Integration Testing from the Trenches on Amazon, and that means this specific journey is at an end.

The book starts with a very generic discussion about testing and continues by defining Integration Testing in comparison to Unit Testing. The next chapter compares the respective merits of JUnit and TestNG. It is followed by a complete description of how to make a design testable: what works for Unit Testing also works for Integration Testing. Testing in software relies on automation, so the specific usage of the Maven build tool – as well as Gradle – is described in regard to Integration Testing. Dependencies on external resources make integration tests more fragile, so faking those makes them more robust. Those resources include databases, the file system, SOAP and REST web services, etc. The most important dependency in any application is the container. The last chapters are dedicated to the Spring framework, including Spring MVC, and to Java EE.

In this journey, I also dared ask Josh Long of Spring fame and Aslak Knutsen, team lead of the Arquillian project to write a foreword to the book – and I’ve been delighted to have them both answer positively. Thank you guys!

I’ve also talked on the subject at some JUG and European conferences: JavaDay Kiev, Joker, Agile Tour London, and JUG Lyon and will again at JavaLand, DevIt, TopConf Romania and GeeCon. I hope that by doing so, Integration Testing will be used more effectively on projects and with bigger ROI.

Should you want to go further, the book is available in multiple formats:

  1. A paperback version on Amazon for $49.99
  2. Electronic versions for Mac, Kindle and plain old PDF. The pricing here is more open, starting from $21.10 with a suggested price of $31.65. Note you can get it in all formats to read on all your devices.

If you’re already a reader and you like it, please feel free to recommend it. If you don’t, I welcome your feedback in the comments section. Of course, if you’re neither – I encourage you to get the book and see for yourself!


Avoid sequences of if…else statements

February 15th, 2015

Adding a feature to legacy code while trying to improve it can be quite challenging, but also quite straightforward. Nothing angers me more (ok, I might be exaggerating a little) than stumbling upon such a pattern:

public Foo getFoo(Bar bar) {
    if (bar instanceof BarA) {
        return new FooA();
    } else if (bar instanceof BarB) {
        return new FooB();
    } else if (bar instanceof BarC) {
        return new FooC();
    } else if (bar instanceof BarD) {
        return new FooD();
    }
    throw new BarNotFoundException();
}

Apply Object-Oriented Programming

The first reflex when writing such a thing – yes, please don’t wait for the poor guy coming after you to clean up your mess – should be to ask yourself whether applying basic Object-Oriented Programming couldn’t help you. In this case, you would have multiple child classes of:

public interface FooBarFunction<T extends Bar, R extends Foo> extends Function<T, R>

For example:

public class FooBarAFunction implements FooBarFunction<BarA, FooA> {
    public FooA apply(BarA bar) {
        return new FooA();
    }
}

Note: not enjoying the benefits of Java 8 is no reason not to use this approach: just create your own Function interface or use Guava’s.

Use a Map

I must admit that not only does this scatter tightly related code across multiple files (this is Java…), it’s also unfortunately not always possible to easily apply OOP. In that case, it’s quite easy to initialise a map that returns the correct type.

public class FooBarFunction {
    private static final Map<Class<? extends Bar>, Foo> MAPPINGS = new HashMap<>();
    static {
        MAPPINGS.put(BarA.class, new FooA());
        MAPPINGS.put(BarB.class, new FooB());
        MAPPINGS.put(BarC.class, new FooC());
        MAPPINGS.put(BarD.class, new FooD());
    }
    public Foo getFoo(Bar bar) {
        Foo foo = MAPPINGS.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}

Note this is only a basic example, and users of Dependency Injection can easily pass the map in the object constructor instead.
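
A minimal sketch of that injected variant – the FooBarFactory name is just for illustration, and building the map is left to your DI configuration:

public class FooBarFactory {

    private final Map<Class<? extends Bar>, Foo> mappings;

    // The map is built once by the DI configuration and passed in
    public FooBarFactory(Map<Class<? extends Bar>, Foo> mappings) {
        this.mappings = mappings;
    }

    public Foo getFoo(Bar bar) {
        Foo foo = mappings.get(bar.getClass());
        if (foo == null) {
            throw new BarNotFoundException();
        }
        return foo;
    }
}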

More than a return

The previous Map trick works quite well with return statements, but not with code snippets. In that case, you need to use the map to return an enum and associate it with a switch…case.

public class FooBarFunction {

    private enum BarEnum {
        A, B, C, D
    }

    private static final Map<Class<? extends Bar>, BarEnum> MAPPINGS = new HashMap<>();

    static {
        MAPPINGS.put(BarA.class, BarEnum.A);
        MAPPINGS.put(BarB.class, BarEnum.B);
        MAPPINGS.put(BarC.class, BarEnum.C);
        MAPPINGS.put(BarD.class, BarEnum.D);
    }

    public void handle(Bar bar) {
        BarEnum barEnum = MAPPINGS.get(bar.getClass());
        if (barEnum == null) {
            throw new BarNotFoundException();
        }
        switch (barEnum) {
            case A:
                // Do something
                break;
            case B:
                // Do something
                break;
            case C:
                // Do something
                break;
            case D:
                // Do something
                break;
        }
    }
}

Note that not only do I believe this code is more readable, it also has better performance: the switch is evaluated once, as opposed to each if…else being evaluated in sequence until one matches.

Note the code is not expected to go directly in each case statement; use dedicated method calls instead.

Conclusion

I hope that at this point, you’re convinced there are better ways than sequences of if…else. The rest is in your hands.
