Improve your tests with Mockito’s capture

July 5th, 2015 No comments

Unit Testing mandates testing the unit in isolation. In order to achieve that, the general consensus is to design our classes in a decoupled way using DI. In this paradigm, whether using a framework or not, whether wiring happens at compile time or at runtime, object instantiation is the responsibility of dedicated factories. In particular, this means the new keyword should be used only in those factories.

Sometimes, however, having a dedicated factory just doesn’t fit. This is the case when injecting a narrow-scoped instance into a wider-scoped one. A use-case I stumbled upon recently concerns an event bus, with code like the following:

 public class Sample {

    private EventBus eventBus;

    public Sample(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void done() {
        Result result = computeResult();
        eventBus.post(new DoneEvent(result));
    }

    private Result computeResult() {
        ...
    }
}

With a runtime DI framework – such as the Spring framework – and if the DoneEvent had no argument, this could be changed to a lookup method pattern:

public void done() {
    eventBus.post(getDoneEvent());
}

public abstract DoneEvent getDoneEvent();

Unfortunately, the argument prevents us from using this nifty trick – and it cannot be done with compile-time injection anyway. It doesn’t mean the done() method shouldn’t be tested, though. The problem is not only to assert that a new DoneEvent is posted to the bus when the method is called, but also to check the Result it wraps.
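
For the sake of the example, here’s a minimal, hypothetical sketch of what such an event might look like – a simple wrapper around the computed result:

public class DoneEvent {

    private final Result result;

    public DoneEvent(Result result) {
        this.result = result;
    }

    // the tests below assert on this getter
    public Result getResult() {
        return result;
    }
}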

Experienced software engineers probably know about the Mockito.any(Class) method. This could be used like this:

@Test
public void doneShouldPostDoneEvent() {
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(Mockito.any(DoneEvent.class));
}

In this case, we make sure an event of the right kind has been posted to the bus, but we have no idea what the wrapped result was. And if the result cannot be asserted, the confidence in the code decreases. Mockito to the rescue: it provides argument captors, which act as placeholders for parameters. The above code can be changed like this:

@Test
public void doneShouldPostDoneEventWithExpectedResult() {
    ArgumentCaptor<DoneEvent> captor = ArgumentCaptor.forClass(DoneEvent.class);
    EventBus eventBus = Mockito.mock(EventBus.class);
    Sample sample = new Sample(eventBus);
    sample.done();
    Mockito.verify(eventBus).post(captor.capture());
    DoneEvent event = captor.getValue();
    assertThat(event.getResult(), is(expectedResult));
}

First, we create a new ArgumentCaptor typed to DoneEvent. Then, in the verification, we replace the any() usage with captor.capture() and the trick is done: the posted event is captured by Mockito and made available through captor.getValue(). The final line – using Hamcrest – makes sure the result is the expected one.

Categories: Java Tags: ,

Spring 2015 European conferences tour

May 25th, 2015 No comments

I’ve just finished my Spring 2015 European conferences tour. I’ve talked about Integration Testing, Mutation Testing and Spring Boot for Devops at Spring IO (Spain), GeeCon, DevIT and JEEConf.

This is a summary of the sessions I attended and liked. Sessions I was not part of, lost time in, or slept through are not mentioned.

Spring I/O Barcelona (Spain)

Boot your Search with Spring
This is a nice introductory talk on the search feature brought by the Spring Data abstraction, over the Solr, Elasticsearch and MongoDB NoSQL stores.
Is Groovy better for testing than Java?
The title sums it all: checking whether Spock can/should be used for testing. I was pleasantly surprised to see the talk was well balanced and not an advertisement for Spock. I had already seen a talk on Spock and discarded it as inconclusive. Now, I should probably give it a try.
Master Spring Boot auto-configuration
Probably the best talk of the conference, it explains in a very comprehensive way how you can create your own Spring Boot module with auto-configuration capability.
Testing with Spring 4.x
Testing with Spring is not only very interesting to me, it will also be the subject of my talk at SpringOne with Sam Brannen. Good thing, since the speaker was Sam himself. This was a good occasion to experience first-hand the way he speaks at conferences.
Document like the Spring team using Asciidoctor
Though not related to Spring, this talk was an enlightenment! I’ve recently finished writing my latest book in simple Markdown, and Asciidoctor would have made the writing process so much easier! Now, I want to write another book just for the chance to use it.

GeeCon – Krakow (Poland)

A Survival Guide to Resilient Reactive Application
Scopes what monitoring is and defines related terms – the reactive part is not the most important.
G1 Garbage Collector: details and tuning
I’m not a system engineer, but now and then I try to attend such talks to get a feel for what is going on. Most of the time I end up disappointed because the talk targets experts; this time I was not. The talk was clear and the speaker was entertaining.
HTTP/2 & Java Current Status
Same speaker, different subject. Good introduction to HTTP/2.
Analysing GitHub commits with R and Azure
I came to this talk by chance, because none during this timeframe really attracted me. Nice Data Mining example using Github as a use-case.

At this time, I had to take my plane to go to…

DevIT – Thessaloniki (Greece)

The future of responsive web design: web component queries
Very interesting introduction to some important features of HTML5: shadow DOM, templates, web components, etc. This talk really made me want to try them myself!
Your Service is not Rest
This talk defined what REST is and what it is not, and proved, given the definition, that most provided APIs are simple HTTP, not REST. Due to lack of time, the speaker couldn’t answer my question: how does HATEOAS provide details about which HTTP methods are available for which resources? Guess I’ll have to return next year.

Because of a lack of sleep caused by my late flight from Krakow, I’m afraid my attention was less than optimal during the rest of the talks.

JEEConf – Kyiv (Ukraine)

Pragmatic Functional Refactoring with Java 8
Some of Java 8’s features, including functions, currying, immutability and Optional.
Painfree Object-Document Mapping for MongoDB
Description of the Xenia library, a Java ODM for MongoDB. I put this into my list of available tools in case I ever face such a problem.
Making This Rhinoceros Thunder
I went to this talk because no other talk was in English, and I was pleasantly surprised. The speaker works on the Nashorn engine and explained not only how to make the compilation of JavaScript faster on the JVM but also what challenges the team faced in the implementation, and how they solved them.

Initially, I had 2 talks at JEEConf. Because a speaker had medical issues and couldn’t make it, I had the privilege of being invited by Josh Long to host a last-minute talk with him on Spring Boot and Vaadin. Then, he also asked me to join the Spring panel as a speaker. All in all, between the preparation of my talks and the talks proper, I couldn’t manage to attend any other talks.

This was a great experience again, with many occasions to meet new people and see again conference buddies. Many thanks to the teams of those conferences for their organization and their time! See you soon again.

Categories: Event Tags: , , ,

Quality Tools: humble servants or tyrants?

May 10th, 2015 No comments

I’ve always been an ardent proponent of internal quality in software, because in my various experiences, I’ve had more than my share of crappy codebases to maintain. I believe that quality tools can increase the internal quality of the code, thus decreasing maintenance costs in the long run. However, I don’t think that such tools are the only way to achieve that – I’m also a firm believer in code reviews.

Regarding quality tools, I started with Checkstyle, then with PMD, both static analysis tools. I’ve used FindBugs, a tool that doesn’t check the source code but the bytecode itself, but only sparingly for it seemed to me it reported too many false positives.

Finally, I found SonarQube (called Sonar at the time). I didn’t immediately fall in love with it, and it took me some months to get rid of my former Checkstyle and PMD companions. As soon as I did, however, I wanted to put it in place in every project I worked on – and on others too. When it added a timeline to see the trend regarding violations and other metrics, I knew it was the quality tool to use.

Now that the dust has finally settled, I don’t see many organizations where no quality tool is used, and that is a good thing. I can’t imagine working without one: whether as a developer or a team lead, whether using Sonar or simpler tools, their added value is simply too big to ignore.

On the other hand, I’m very wary of a rising trend: it seems as if once Sonar is in place, developers and managers alike treat its reports as the word of God. I can expect it from managers, but I definitely don’t want my fellow developers to set their brains aside and delegate their responsibilities to a tool, whatever the tool. Things get even worse when metrics from those rules are used as build breakers: the build fails because the project failed to achieve some pre-defined metrics.

Of course, there are some ways to mitigate the problem:

  • Use only a subset of Sonar rules. For example, the rule that checks for a private static final serialVersionUID attribute if the class directly or transitively implements Serializable is completely useless IMHO.
  • Use the NOSONAR comment (see the sketch after this list)
  • Configure each project. For example, Vaadin projects should exclude graphical classes from the unit test coverage, as they probably have no behavior, thus no associated tests (do you unit test your JSPs?).
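
As an illustration of the second point, a // NOSONAR comment tells SonarQube to ignore violations reported on that particular line. A contrived sketch (the class and methods are made up for illustration):

public class LegacyAdapter implements java.io.Serializable { // NOSONAR - serialVersionUID deliberately omitted

    public void process() {
        try {
            doWork();
        } catch (Exception e) { // NOSONAR - failures are expected and safely ignored in this context
        }
    }

    private void doWork() throws Exception {
        // legacy call that may fail
    }
}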

I’m afraid those are only ways to work around the limits. Every tool comes with a severe limitation: it cannot distinguish between contexts, and applies the same rules regardless. As a side note, notice this is also the case in big companies… The funniest part is that software engineers are in general the most active opponents of metrics-driven management – yet they put SonarQube in place to assert code quality and are stubborn when it comes to contextualizing the results.

Quality tools are a big asset toward a more maintainable code base, but stupidly applying one rule because the tool said so – or even worse, riddling your code base with // NOSONAR comments, is a serious mistake. I’m in favor of using tools, not tools ruling me. Know what I mean?

 

 

Categories: Development Tags: ,

Connection is a leaky abstraction

April 26th, 2015 2 comments

As junior Java developers, we learn very early in our careers about the JDBC API. We learn it’s a very important abstraction because it allows changing the underlying database in a transparent manner. I’m afraid what appears to be a good idea is just over-engineering because:

  1. I’ve never seen such a database migration happen in more than 10 years
  2. Most of the time, the SQL written is not database independent

Still, there’s no denying that JDBC is at the bottom of every database interaction in Java. However, I recently stumbled upon another trap hidden very deeply at the core of the java.sql.Connection interface. Basically, you may have been told to close the Statement returned by the Connection, and also to close the ResultSet returned by the Statement. But perhaps you have also been told that closing the Connection will close all underlying objects – Statement and ResultSet?

So, which one is true? Well, “it depends” and there’s the rub…

  • On the one hand, if the connection is returned from the DriverManager, calling Connection.close() will close the physical connection to the database and all underlying objects.
  • On the other hand, if the connection is returned from a DataSource, calling Connection.close() will only return it to the pool and you’ll need to close statements yourself.

In the latter case, if you don’t close those underlying statements, database cursors will stay open, the RDBMS limit will be reached at some point and new statements won’t be executed. Conclusion: always close statement objects (as I already wrote about)! Note the result set will be closed when the statement is.
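
Before Java 7, this closing had to be done by hand in finally blocks. A minimal sketch, using the same connection and placeholder SQL as the snippet below:

PreparedStatement ps = null;
ResultSet rs = null;
try {
    ps = connection.prepareStatement("Put SQL here");
    rs = ps.executeQuery();
    // Do something with ResultSet
} catch (SQLException e) {
    // Handle exception
    e.printStackTrace();
} finally {
    // closing the statement also closes its result set, but being explicit does no harm
    if (rs != null) {
        try { rs.close(); } catch (SQLException ignored) { }
    }
    if (ps != null) {
        try { ps.close(); } catch (SQLException ignored) { }
    }
}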

If you’re lucky enough to use Java 7 – and don’t use a data access framework – the code to use is the following:

try (PreparedStatement ps = connection.prepareStatement("Put SQL here")) {
    try (ResultSet rs = ps.executeQuery()) {
        // Do something with ResultSet
    }
} catch (SQLException e) {
    // Handle exception
    e.printStackTrace();
}

And if you want to make sure cursors will be closed even with faulty code, good old Tomcat provides the StatementFinalizer interceptor for that. Just configure it in the server.xml configuration file when you declare your Resource:

<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
 jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" />

Note: while you’re there, you can also check the ResetAbandonedTimer interceptor. It can be used in conjunction with the removeAbandonedTimeout attribute, which configures the time after which a borrowed connection is considered abandoned and returned to the pool. If the attribute’s value is too low, connections still in use might be reclaimed; with the interceptor, each use of the connection resets the timer.
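
For the record, here is a sketch of what such a Resource declaration might look like with both interceptors chained and abandoned-connection handling enabled – the attribute values are illustrative only:

<Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
  removeAbandoned="true" removeAbandonedTimeout="60"
  jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer" />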

Categories: Java Tags: ,

Polyglot everywhere – part 2

April 19th, 2015 No comments

Last week, we set up a new project using the YAML flavor of Polyglot Maven. Now is time for some server-side code!

As a long-time Vaadin advocate, let’s create a very simple Vaadin application. This will have the added advantage of letting us hack something on the client side as well for the last part of this series. As we are fully polyglot, we will avoid the old Java language and use something very cool instead. Since I’ve been to some conferences with its number one advocate, I settled on the Kotlin language.

Disclaimer: I’m far from a Kotlin user – and this is not about Kotlin anyway, so please pardon my mistakes in the following.

Java is still Maven’s first-class citizen, so the first step is to add some configuration for the Kotlin compiler to kick in. It’s quite easy, especially given the previous work on the polyglot POM:

build:
    sourceDirectory: ${project.basedir}/src/main/kotlin
    plugins:
        -   groupId: org.jetbrains.kotlin
            artifactId: kotlin-maven-plugin
            version: 0.11.91.1
            executions:
            -   id: compile
                phase: compile
                goals:
                    - compile
        # Other plugins go there

The next step is to configure the web application. It can be done either with the old XML web deployment descriptor or since Servlet 3.0 with annotations. Since XML is far from polyglot, let’s use the Kotlin way. Besides, Kotlin annotations are cooler than in Java:

@WebServlet(
    name = "VaadinServlet",
    urlPatterns = array("/*"),
    initParams = array(
        WebInitParam(
            name = "UI",
            value = "ch.frankel.blog.polyglot.MainUi"
        )
    )
)
@VaadinServletConfiguration(
    productionMode = false,
    ui = javaClass<MainUi>()
)
class KotlinServlet: VaadinServlet()

Regular Vaadin users know about the final step. A UI needs to be created. Again, this is quite straightforward:

class MainUi: UI() {
    override fun init(request: VaadinRequest) {
        val label = Label("Hello from Polyglot Everywhere")
        val layout = VerticalLayout(label)
        layout.setMargin(true)
        layout.setSpacing(true)
        setContent(layout)
    }
}

And with these steps, we’ve achieved a polyglot webapp!
The next article in this series will add a client-side component for Vaadin. Don’t miss it!

Categories: Development Tags: , ,

Polyglot everywhere – part 1

April 12th, 2015 No comments

This is the era of polyglot! Proponents of this practice spread the word that you have to choose the language best adapted to the problem at hand. And with a single team dedicated to a microservice, this might make sense.

My pragmatic side tells me it means that developers get to choose the language they develop with and don’t care how it will be maintained when they leave… On the other hand, my shiny-loving side just wants to try – albeit in a more controlled environment, such as this blog!

Introduction

In this 3-part series, I’ll try to use polyglot on a project:

  • The first part is about the build system
  • The second part will be about the server side
  • The final part will be about the client-side

My example will use a Vaadin project built with Maven and using a simple client-side extension. You can follow the project on Github.

Polyglot Maven

Though it may have been largely ignored, Maven can now talk many different languages since its version 3.3.1 thanks to an improved extension mechanism. In the end, the system is quite easy:

  • Create a .mvn folder at the root of your project
  • Create an extensions.xml file in this folder
  • Set the type of language you’d like to use:

    <extensions>
        <extension>
            <groupId>io.takari.polyglot</groupId>
            <artifactId>polyglot-yaml</artifactId>
            <version>0.1.8</version>
        </extension>
    </extensions>

    Here, I set the build “language” as YAML.

In the end, the translation from XML to YAML is very straightforward:

modelVersion: 4.0.0
groupId: ch.frankel.blog.polyglot
artifactId: polyglot-example
packaging: war
version: 1.0.0-SNAPSHOT

dependencies:
    - { groupId: com.vaadin, artifactId: vaadin-spring, version: 1.0.0.beta2 }

build:
    plugins:
        - artifactId: maven-compiler-plugin
          version: 3.1
          configuration:
            source: 1.8
            target: 1.8
        - artifactId: maven-war-plugin
          version: 2.2
          configuration:
            failOnMissingWebXml: false

The only problem I had was in the YAML syntax itself: just make sure to align the elements of the plugin to the plugin declaration (e.g. align version with artifactId).

Remember to check the POM on Github with each new part of the series!

Categories: Development Tags: , ,

What’s the version of my deployed application?

April 6th, 2015 2 comments

In my career, I’ve noticed many small and inexpensive features that didn’t find their way into the Sprint backlog because they didn’t provide business value. However, they provide plenty of ROI during the life of the application, but that is completely overlooked due to short-sighted objectives (set by short-sighted management). Those include, but are not limited to:

  • Monitoring in general, and more specifically metrics, health checks, etc. Spare 5 days now and spend 10 times that later (or more…) because you don’t know how your application works.
  • Environment data e.g. development, test, production, etc. It’s especially effective when it’s associated with a coloured banner dependent on the environment. If you don’t do that, I’m not responsible if I just deleted all your production data from the last 10 days because I thought it was the test environment. This is quite easy, especially if login/passwords are the same for all environments – yes, LDAP setup is complex so let’s have only one.
  • Application build data, most importantly the version number, and if possible the build number and the build time. Having to SSH into the server (if possible at all) or search the wiki (if it’s up to date, a most unlikely occurrence) to find out the version is quite cumbersome when you need the info right now.

Among them, I believe the most basic one is the latter. We are used to checking the About dialog in desktop applications, and unless you deliver many (many many) times a day, the same information is necessary for any real-world enterprise-grade application. In the realm of SOA and microservices, it means this info should also be part of responses.

With Maven, this is quite easy to achieve, as only a simple properties file is needed and the maven-resources-plugin will work its magic. Maven provides a filtering feature, meaning placeholders can be set in any resource and will be replaced by their values at build time. Filtering is not enabled by default. To activate it, the following snippet will do:

<build>
    <resources>
        <resource>
            <directory>${basedir}/src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>

I wrote about placeholders above but I didn’t specify which. Simple: any data set in the POM – as well as a few special properties – can be used as a placeholder. Just use the DOM path, wrapped in ${}, like this: ${dom.path}.

Here’s an example, with a property file:

application.version=${project.version}
build.date=${maven.build.timestamp}

If this snippet is put in a file inside the src/main/resources directory, Maven will generate a similarly named file with the values filtered, inside the target/classes directory, just after the process-resources phase. Specific values depend of course on the POM, but here’s a sample output:

application.version=1.0.0-SNAPSHOT
build.date=20150303-1335

Things are unfortunately not as straightforward, as there’s a bug in Maven regarding ${maven.build.timestamp}: it cannot be filtered directly and requires adding an indirection level:

<properties>
    <build.timestamp>${maven.build.timestamp}</build.timestamp>
</properties>

<build>
    <resources>
        <resource>
            <directory>${basedir}/src/main/resources</directory>
            <filtering>true</filtering>
        </resource>
    </resources>
</build>

The following properties file will now work as expected:

application.version=${project.version}
build.date=${build.timestamp}

At this point, it’s just a matter of reading this property file when required (a minimal reading sketch follows the list below). This includes:

In webapps
  • Provide a dedicated About page, as in desktop applications. I’ve rarely seen that, and never implemented it
  • Add a footer with the info. For added user-friendliness, set the text color to the background color, so that users are not disturbed by the info. Only people who know (or are curious) can access it – it’s not confidential anyway.
In services
Whether SOAP or REST, XML or JSON, there are also a few options:

  • As a dedicated service endpoint (e.g. /about or /version). This is the simplest to implement. It’s even better if the same endpoint is used throughout the organization.
  • As additional info on all endpoints. This is required when each endpoint can be built separately and assembled. The easiest path then is to put it in HTTP headers, while putting it in the data is harder. The latter will probably require some interceptor approach as well as an operation on the schema (if any).
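
As for reading the file back at runtime, a minimal sketch could look like the following – the file name application.properties is an assumption, adapt it to whatever name you chose:

import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public final class BuildInfo {

    private static final Properties PROPS = new Properties();

    static {
        // the filtered file is assumed to sit at the root of the classpath
        try (InputStream in = BuildInfo.class.getResourceAsStream("/application.properties")) {
            if (in != null) {
                PROPS.load(in);
            }
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    private BuildInfo() {}

    public static String version() {
        return PROPS.getProperty("application.version", "unknown");
    }

    public static String buildDate() {
        return PROPS.getProperty("build.date", "unknown");
    }
}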

There are two additional options:

  1. To differentiate between snapshot versions, use the Maven Build Number plugin. I have no experience of actual usage. However, it requires correct configuration of SCM information.
  2. Some applications not only display version information but also environment information (e.g. development, integration, staging, etc.). I’ve seen it used either through a specific banner or through a background color. This requires a dedicated Java property set at JVM launch time or a system environment variable.

The filtering stuff can be done in 1 hour max in a greenfield environment. The cost of using the data is more variable, but in a simple webapp footer case, it can be done in less than a day. Compare that to the time lost getting the version during the lifetime of the application…

Categories: Java Tags: ,

Better developer-to-developer collaboration with Bintray

March 29th, 2015 1 comment

I recently got interested in Spring Social, and as part of my learning path, I tried to integrate their Github module which is still in Incubator mode. Unfortunately, this module seems to have been left behind, and its dependency on the core module uses an old version of it. And since I use the latest version of this core, Maven resolves one version to put in the WEB-INF/lib folder of the WAR package. Unfortunately, it doesn’t work so well at runtime.

The following diagram shows this situation:

Dependencies original situation

 

I could have excluded the old version from the transitive dependencies, but I’m lazy and Maven doesn’t make it easy (yet). Instead, I decided to just upgrade the Github module to the latest version and install it in my local repository. That proved to be quite easy as there was no incompatibility with the newest version of the core – I even created a pull request. This is the updated situation:

Dependencies final situation

Unfortunately, if I now decide to distribute this version of my application, nobody will be able to either build or run it, since only I have the “patched” (latest) version of the Github module available in my local repo. I could distribute the updated sources along with it, but that would mean you would have to build and install it into your local repo first before using my app.

Bintray to the rescue! Bintray is a binary repository, able to host any kind of binaries: jars, wars, deb, anything. It is hosted online, and free for OpenSource projects, which nicely suits my use-case. This is how I uploaded my artifact on Bintray.

Create an account
Bintray makes it quite easy to create such an account, using one of the available authentication providers – Github, Twitter or Google+. Alternatively, one can create an old-style account, with a password.
Create an artifact
Once authenticated, an artifact needs to be created. Select your default Maven repository; it can be found at https://bintray.com//maven. Then, click on the big Add New Package button located on the right border. On the opening page, fill in the required information. The package can be named whatever you want; I chose to use the Maven artifact identifier: spring-social-github.
Create a version
Files can only be added to a version, so a version needs to be created first. On the package detail page, click on the New Version link (second column, first line). On the opening page, fill in the version name. Note that snapshots are not accepted, and this is only checked through the -SNAPSHOT suffix. I chose to use 1.0.0.BUILD.
Upload files
Once the version is created, files can finally be uploaded. In the top bar, click the Upload Files button. Drag and drop all desired files, of course the main JAR and the POM, but it can also include source and javadoc JARs. Notice the Target Repository Path field: it should be set to the logical path to the Maven artifact, including groupId, artifactId and version separated by slashes. For example, my use-case should resolve to org/springframework/social/spring-social-github/1.0.0.BUILD. Note that instead of filling this field, you can wait for the files to be uploaded as Bintray will detect this upload, analyze the POM and propose to set it automatically: if this fits – and it probably does, just accept the proposal.
Publish
Uploading files is not enough, as those files are temporary until publication. A big notice warns about it: just click on the Publish link located on the right border.

At this point, you only need to add the Bintray repository to the POM:

<repositories>
    <repository>
        <id>bintray</id>
        <url>http://dl.bintray.com/nfrankel/maven</url>
        <releases>
            <enabled>true</enabled>
        </releases>
    </repository>
</repositories>
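
With the repository declared, the patched module can be consumed like any other dependency. Assuming the coordinates described above (they follow from the Target Repository Path), the declaration would look like this:

<dependency>
    <groupId>org.springframework.social</groupId>
    <artifactId>spring-social-github</artifactId>
    <version>1.0.0.BUILD</version>
</dependency>
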
Categories: Java Tags: ,

Become a DevOps with Spring Boot

March 8th, 2015 1 comment

Have you ever found yourself in the situation of finishing a project and being about to deliver it to the Ops team? You’re so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file – and documented – and you even made sure logging has been implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don’t know how to monitor the new application. And you missed that… Sure, you could hack something to fulfill this requirement, but the project is already over budget. In some (most?) companies, this means someone will have to be blamed, and chances are the developer will bear all the burden. Time for some sleepless nights.

Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are among the features known to most. However, I think there’s a hidden gem that should be advertised much more. The actuator module provides metrics and health checks out of the box, as well as an easy way to add your own. In this article, we’ll see how to access those metrics over HTTP and send them to JMX and Graphite.

As an example application, let’s use an update of the Spring Pet Clinic made with Boot – thanks to Arnaldo Piccinelli for his work. The starting point is commit 790e5d0. Now, let’s add some metrics in no time.

The first step is to add the actuator module starter to the Maven POM and let Boot do its magic:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics (note that the path is protected by Spring Security, credentials are user/password) to see something like the following:

{
  "mem" : 562688,
  "mem.free" : 328492,
  "processors" : 8,
  "uptime" : 26897,
  "instance.uptime" : 18974,
  "heap.committed" : 562688,
  "heap.init" : 131072,
  "heap.used" : 234195,
  "heap" : 1864192,
  "threads.peak" : 20,
  "threads.daemon" : 17,
  "threads" : 19,
  "classes" : 9440,
  "classes.loaded" : 9443,
  "classes.unloaded" : 3,
  "gc.ps_scavenge.count" : 16,
  "gc.ps_scavenge.time" : 104,
  "gc.ps_marksweep.count" : 2,
  "gc.ps_marksweep.time" : 152
}

As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root, new metrics appear:

{
  "counter.status.200.metrics" : 1,
  "counter.status.200.root" : 2,
  "counter.status.304.star-star" : 4,
  "counter.status.304.webjars.star-star" : 1,
  "gauge.response.metrics" : 72.0,
  "gauge.response.root" : 16.0,
  "gauge.response.star-star" : 8.0,
  "gauge.response.webjars.star-star" : 11.0,
  ...
}

Those metrics are more functional in nature, and they are separated into two groups:

  • Gauges are the simplest metrics and return a numeric value e.g. gauge.response.root is the time (in milliseconds) of the last response from the /metrics path
  • Counters are metrics which can be incremented/decremented e.g. counter.status.200.metrics is the number of times the /metrics path returned a HTTP 200 code
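
Boot also makes it easy to add your own metrics to this list. A minimal sketch, assuming Boot 1.x’s CounterService and GaugeService and a made-up visit-tracking service:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.metrics.CounterService;
import org.springframework.boot.actuate.metrics.GaugeService;
import org.springframework.stereotype.Service;

@Service
public class VisitService {

    private final CounterService counterService;
    private final GaugeService gaugeService;

    @Autowired
    public VisitService(CounterService counterService, GaugeService gaugeService) {
        this.counterService = counterService;
        this.gaugeService = gaugeService;
    }

    public void recordVisit(long durationInMillis) {
        // should show up as counter.visits in the /metrics output (Boot prefixes counters automatically)
        counterService.increment("visits");
        // should show up as gauge.visits.duration in the /metrics output
        gaugeService.submit("visits.duration", durationInMillis);
    }
}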

At this point, your Ops team could probably scrape the returned JSON and make something out of it. It will be their responsibility to regularly poll the URL and to use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by putting these metrics in JMX.

Spring Boot integrates easily with Dropwizard metrics. By just adding the following dependency to the POM, Boot is able to provide a MetricRegistry, a Dropwizard registry for all metrics:

<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>4.0.0-SNAPSHOT</version>
</dependency>

Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public JmxReporter jmxReporter() {
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
        return reporter;
    }
}

Launching jconsole lets us check it works alright. The Ops team now just needs to get the metrics from JMX and push them into their preferred graphical display tool, such as Graphite. One way to achieve this is through jmxtrans. However, it’s also possible to send metrics directly to the Graphite server with just a few different API calls:

@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot").build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}

The result is quite interesting given the few lines of code. Note that going through JMX rather than straight to Graphite makes things easier, as there’s no need for a dedicated Graphite server in development environments.

Categories: Java Tags: , ,

Final release of Integration Testing from the Trenches

March 1st, 2015 2 comments
Writing a book is a journey. At the beginning of the journey, you mostly know where you want to go, but have only a vague notion of the way to get there and the time it will take. I’ve finally released the paperback version of Integration Testing from the Trenches on Amazon, and that means this specific journey is at an end.

The book starts with a very generic discussion about testing and continues by defining Integration Testing in comparison to Unit Testing. The next chapter compares the respective merits of JUnit and TestNG. It is followed by a complete description of how to make a design testable: what works for Unit Testing also works for Integration Testing. Testing in software relies on automation, so specific usage of the Maven build tool is described in regard to Integration Testing – as well as Gradle. Dependencies on external resources make integration tests more fragile, so faking those resources makes the tests more robust. Those resources include databases, the file system, SOAP and REST web services, etc. The most important dependency in any application is the container. The last chapters are dedicated to the Spring framework, including Spring MVC, and to Java EE.

In this journey, I also dared ask Josh Long, of Spring fame, and Aslak Knutsen, team lead of the Arquillian project, to write a foreword to the book – and I’ve been delighted to have them both answer positively. Thank you guys!

I’ve also talked on the subject at some JUGs and European conferences: JavaDay Kiev, Joker, Agile Tour London and JUG Lyon, and will again at JavaLand, DevIt, TopConf Romania and GeeCon. I hope that by doing so, Integration Testing will be used more effectively on projects and with a bigger ROI.

Should you want to go further, the book is available in multiple formats:

  1. A paperback version on Amazon for $49.99
  2. Electronic versions for Mac, Kindle and plain old PDF. The pricing here is more open, starting from $21.10 with a suggested price of $31.65. Note you can get it in all formats to read on all your devices.

If you’re already a reader and you like it, please feel free to recommend it. If you don’t, I welcome your feedback in the comments section. Of course, if neither – I encourage you to get a book and see for yourself!