• First release of Integration Testing from the Trenches

    My job as a software architect is to make sure the builds I provide have the best possible quality, and more specifically internal quality. While Unit Testing certainly helps create fewer regressions, relying on it alone is akin to testing a car by testing its nuts and bolts. Integration Testing is about getting the car on a circuit.

    Last week, I finally released the first version of Integration Testing from the Trenches. As its name implies, this book is about Integration Testing. It is organized in the following chapters:

    Chapter 1 - Foundations of testing
    This is an introductory chapter, laying out the foundations for the rest of the book. It describes Unit Testing, Integration Testing and Functional Testing, as well as their associated notions.
    Chapter 2 - Developer testing tools
    This chapter covers both the JUnit and TestNG testing frameworks. Tips and tricks on how to use them for Integration Testing are also included.
    Chapter 3 - Test-Friendly Design
    This chapter details Dependency Injection, DI-compatible design and which objects should be set as dependencies during test execution. This includes definitions of Test Doubles, such as Dummy, Fake and Mock, along with an explanation of Mockito, a mocking framework, as well as Spring Test and Mockrunner, two open-source Fake libraries.
    Chapter 4 - Automated testing
    It covers how to get our carefully crafted Integration Tests to run through automated build tools, like Maven and Gradle.
    Chapter 5 - Infrastructure Resources Integration
    This chapter concerns itself with Integration Testing applied to infrastructure resources such as databases, mail servers, FTP servers and others. Tools and techniques for each resource type are explained.
    Chapter 6 - Web Services Integration
    This chapter is solely dedicated to Integration Testing with Web Services, either in SOAP or REST flavor.
    Chapter 7 - Spring in-container testing
    In this chapter, testing recipes for Spring and Spring MVC applications are described. It also includes coverage of the Spring Test library.
    Chapter 8 - JavaEE testing
    Last but not least, this chapter covers testing of Java EE applications, including the Arquillian testing framework.

    There’s a free sample chapter for you, kind reader, if you want to go further. Here’s a 10% discount valid for the whole week, so you have something to read on the beach during vacations!

    In any case, I’ll take excerpts from the book and publish them on this blog in the following weeks.

    Categories: Java Tags: integration testing
  • The right bean at the right place

    Among the different customers I worked for, I noticed a widespread misunderstanding regarding the use of Spring contexts in Spring MVC.

    Basically, you have contexts, in a parent-child relationship:

    • The main context is where service beans are hosted. By convention, it is spawned from the /WEB-INF/applicationContext.xml file, but this location can be changed through the contextConfigLocation context parameter. Alternatively, one can use the AbstractAnnotationConfigDispatcherServletInitializer; in this case, configuration classes should be part of the array returned by the getRootConfigClasses() method.
    • Web context(s) are where Spring MVC dispatcher servlet configuration beans and controllers can be found. Each is spawned from <servlet-name>-servlet.xml or, if using the JavaConfig above, from the classes returned by the getServletConfigClasses() method.

    As in every parent-child relationship, there’s a catch:

    Beans from the child contexts can access beans from the parent context, but not the opposite.

    That makes sense if you picture it: I want my controllers to be injected with services, but not the other way around (it would be a funny idea to inject controllers into services). Besides, you could have multiple Spring servlets, each with its own web context, all sharing the same main context as parent. When it goes beyond controllers and services, one should decide in which context a bean should go. For some, that’s pretty evident: view resolvers, message sources and the like go into the web context; for others, one has to spend some time thinking about it.

    A good rule of thumb to decide in which context a bean should go is the following: if you had multiple servlets (even if you do not), what would you want to share and what not?

    Note: this way of thinking should not be tied to your application itself, as otherwise you’d probably end up sharing message sources in the main application context, which is a (really) bad idea.

    This modularization lets you put the right bean at the right place, promoting bean reusability.

    Categories: JavaEE Tags: bean, context, spring
  • Back to basics: encapsulating collections

    When I was younger, I learned there were 3 properties of the Object-Oriented paradigm:

    • Encapsulation
    • Inheritance
    • Polymorphism

    In Java, encapsulation is implemented through the use of private attributes with accessor methods commonly known as getters and setters. Whether this is proper encapsulation is subject to debate and is outside the scope of this article. However, using this method to attain encapsulation when the attribute is a collection (of type java.util.Collection, java.util.Map or their subtypes) is just plain wrong.

    The code I see most of the time is the following:

    public class MyBean {
        private Collection collection;
        public Collection getCollection() {
            return collection;
        }
        public void setCollection(Collection collection) {
            this.collection = collection;
        }
    }

    This design has been popularized by ORM frameworks such as Hibernate. Many times, when I raise this point, the next proposal is an immutable one:

    public class MyBean {
        private Collection collection;
        public MyBean(Collection collection) {
            this.collection = collection;
        }
        public Collection getCollection() {
            return collection;
        }
    }

    No proper encapsulation

    However, in the case of collections, this changes nothing as Java collections are mutable themselves. Obviously, both passing a reference to the collection in the constructor and returning a reference to it is no encapsulation at all. Real encapsulation is only possible if no reference to the collection is kept nor returned.

    List list = new ArrayList();
    MyBean mybean = new MyBean(list);
    list.add(new Object()); // We just modified the collection outside my bean

    Not possible to use a specific subtype

    Besides, my bean could require a more specific collection of its own, such as List or Set. With the following code snippet, passing a Set is simply not possible.

    public class MyBean {
        private List collection;
        public List getCollection() {
            return collection;
        }
        public void setCollection(List collection) {
            this.collection = collection;
        }
    }

    No choice of the concrete implementation

    As a corollary of the last point, using the provided reference prevents us from using our own (perhaps more efficient) type, e.g. an Apache Commons FastArrayList.

    An implementation proposal

    The starting point of any true encapsulation is the following:

    public class MyBean {
        private List collection = new ArrayList();
        public MyBean(Collection collection) {
            this.collection.addAll(collection);
        }
        public Collection getCollection() {
            return Collections.unmodifiableList(collection);
        }
    }

    This fixes the aforementioned cons:

    1. No reference to the collection is passed in the constructor, thus preventing any subsequent changes from outside the object
    2. Freedom to use the chosen collection implementation, with complete isolation - leaving room for change
    3. No reference to the wrapped collection is passed in the collection returned by the getter

    Note: previous snippets do not use generics for easier readability, please do use them
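    To make the behavior concrete, here is a self-contained variant of the proposal, with generics and a hypothetical main() demonstrating that neither the constructor argument nor the getter result can be used to mutate the bean’s internal state:

    ```java
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.Collections;
    import java.util.List;

    public class MyBean {
        private final List<Object> collection = new ArrayList<>();

        public MyBean(Collection<Object> collection) {
            // Defensive copy: no reference to the caller's collection is kept
            this.collection.addAll(collection);
        }

        public Collection<Object> getCollection() {
            // Read-only view: no reference to the wrapped collection escapes
            return Collections.unmodifiableList(collection);
        }

        public static void main(String[] args) {
            List<Object> list = new ArrayList<>();
            MyBean bean = new MyBean(list);
            list.add(new Object()); // Does NOT affect the bean's internal state
            System.out.println(bean.getCollection().size()); // 0
            try {
                bean.getCollection().add(new Object());
            } catch (UnsupportedOperationException e) {
                System.out.println("read-only"); // Mutation through the getter fails
            }
        }
    }
    ```
    
    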

    Categories: Java Tags: collections
  • A single simple rule for easier Exception hierarchy design

    Each new project usually requires setting up an Exception hierarchy, nearly always the same one.

    I will not go into details whether we should extend RuntimeException or directly Exception, or whether the hierarchy roots should be FunctionalException/TechnicalException or TransientException/PersistentException. Those will be rants for another time as my current problem is completely unrelated.

    The situation is the following: when something bad happens deep in the call stack (e.g. an authentication failure from the authentication provider), a new FunctionalException is created with a known error code, say 123.

    public class FunctionalException extends RuntimeException {
        private long errorCode;
        public FunctionalException(long errorCode) {
            this.errorCode = errorCode;
        }
        // Other constructors
    }

    At this point, there are some nice advantages to this approach: the error code can be both logged and shown to the user with an adequate error message.

    The downside is that analyzing where the authentication failure exception is actually used in the code is completely impossible. As I’m stuck with the task of adding new features to this codebase, I must say this sucks big time. Dear readers, when you design an Exception hierarchy, please add the following:

    public class AuthenticationFailureException extends FunctionalException {
        public AuthenticationFailureException() {
            super(123); // the known error code from the example above
        }
        // Other constructors
    }

    This is slightly more verbose, of course, but you’ll keep all the aforementioned advantages while letting poor maintainers like me analyze code much less painfully. Many thanks in advance!
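    As a hypothetical sketch of what this buys the maintainer: call sites become searchable by subclass name, and catch blocks can target the specific failure while the error code stays available for logging (123 being the known error code from above):

    ```java
    public class ExceptionDemo {
        static class FunctionalException extends RuntimeException {
            private final long errorCode;
            FunctionalException(long errorCode) { this.errorCode = errorCode; }
            long getErrorCode() { return errorCode; }
        }

        // Dedicated subclass: searchable, catchable, self-documenting
        static class AuthenticationFailureException extends FunctionalException {
            AuthenticationFailureException() { super(123); }
        }

        static void authenticate() {
            throw new AuthenticationFailureException();
        }

        public static void main(String[] args) {
            try {
                authenticate();
            } catch (AuthenticationFailureException e) {
                // Specific handling, with the error code still available for logging
                System.out.println("Authentication failed, code " + e.getErrorCode());
            }
        }
    }
    ```
    
    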

    Categories: Java Tags: design, exception
  • Scala on Android and stuff: lessons learned

    I have been playing role-playing games since I was eleven, and my posse and I still play once or twice a year. Recently, they decided to play Earthdawn again, a game we hadn’t played in more than 15 years! That triggered my desire to create an application to roll all those strangely-shaped dice. And to combine the useful with the pleasant, I decided to use technologies I’m not really familiar with: the Scala language, the Android platform and the Gradle build system.

    The first step was to design a simple and generic die-rolling API in Scala; that was the subject of a former article of mine. The second step was to build upon this API to construct something more specific to Earthdawn and to design the GUI. Here’s the write-up of my musings during this development.

    Here’s a general component overview:

    Component overview

    Reworking the basics

    After much internal debate, I finally changed the return type of the roll method from (Rollable[T], T) to simply T, following a relevant comment on reddit. I concluded that it’s up to the caller to get hold of the die itself, and to return it if it wants to. That’s what I did in the Earthdawn domain layer.

    Scala specs

    Using Scala meant I also dived into the Scala Specs 2 framework. Specs 2 offers a Behavior-Driven way of writing tests, as well as integration with JUnit through runners and with Mockito through traits. Test instances can be initialized separately in a dedicated class, isolated from all others.

    My biggest challenge was to configure the Maven Surefire plugin to execute Specs 2 tests:
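    A minimal sketch of such a configuration, assuming the Specs 2 test classes follow a *Spec naming convention (Surefire’s default includes only pick up Test*-style names):

    ```xml
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <configuration>
            <includes>
                <!-- Assumption: Specs 2 classes are named XxxSpec and run via the JUnit runner -->
                <include>**/*Spec.java</include>
            </includes>
        </configuration>
    </plugin>
    ```
    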


    Scala + Android = Scaloid

    Though it may seem surprising at first, running Scala on Android is feasible enough. Normal Java is compiled to bytecode and then converted to Dalvik-compatible Dex files. Since Scala is also compiled to bytecode, the same process can be applied. Not only can Scala easily be ported to Android, frameworks to do so are available online: the one which seemed the most mature was Scaloid.

    Scaloid’s most important feature is to eschew the traditional declarative XML-based layout in favor of a Scala-based Domain-Specific Language, with the help of implicits:

    val layout = new SLinearLayout {
      // widget definitions go here
    }

    Scaloid also offers:

    • Lifecycle management
    • Implicit conversions
    • Trait-based hierarchy
    • Improved getters and setters
    • etc.

    If you want to do Scala on Android, this is the project you have to look at!

    Some more bitching about Gradle

    I’m not known to be a big fan of Gradle - to say the least. The biggest reason, however, is not that I think Gradle is bad, but that choosing a tool based on its hype level is the worst reason I can think of.

    I used Gradle for the API project, and I must admit it is more concise than Maven. For example, instead of adding the whole maven-scala-plugin XML snippet, it’s enough to tell Gradle to use it with:

    apply plugin: 'scala'

    However the biggest advantage is that Gradle keeps the state of the project, so that unnecessary tasks are not executed. For example, if code didn’t change, compilation will not be executed.

    Now to the things that are - to put it in a politically correct way - less than optimal:

    • First, interestingly enough, Gradle does not output test results to the console. For a Maven user, this is somewhat unsettling. But even without this flaw, I'm afraid any potential user is interested in the test output. Yet, this can be remedied with some Groovy code:

      ```groovy
      test {
          onOutput { descriptor, event ->
              logger.lifecycle("Test: " + descriptor + ": " + event.message)
          }
      }
      ```
    • Then, as I wanted to install the resulting package into my local Maven repository, I had to add the Maven plugin: easy enough. This also required Maven coordinates: quite expected. But why am I allowed to install without executing the test phase? This is not only disturbing, I cannot accept any rationale for allowing installation without testing.
    • Neither of the previous points proved to be a show stopper, however. For the Scala-Android project, you might think I just needed to apply both the scala and android plugins and be done with it. Well, this is wrong! It seems that despite Android being showcased as the use-case for Gradle, the scala and android plugins are not compatible. This was the end of my Gradle adventure, and probably for the foreseeable future.

    Testing with ease and Genymotion

    The build process, i.e. transforming every class file into Dex, packaging them into an .apk and signing it, takes ages. It would be even worse if using the emulator from Google’s Android SDK. But rejoice, for Genymotion is an advanced Android emulator that is not only very fast but also easy as pie to use.

    Instead of doing an adb install, installing an apk on Genymotion can be achieved by just drag’n’dropping it on the emulated device. Even better, it doesn’t require first uninstalling the old version and it launches the application directly. Easy as pie I told you!


    I now have a working Android application, complete with tests and a repeatable build. It’s not much, but it gets the job done and it taught me some new stuff I didn’t know previously. I can only encourage you to do the same: pick a small application and develop it with languages/tools/platforms that you don’t use in your day-to-day job. In the end, you will have learned stuff and have a working application. Doesn’t it make you feel warm inside?


    Categories: Java Tags: android, genymotion, gradle, maven, scala
  • My summary of JEEConf 2014

    2014 saw my first participation in JEEConf (Kiev, Ukraine) as well as my farthest travel East so far. I’m so glad I could attend! As a speaker, I was not only shown Kiev during a guided tour, I also had the privilege to attend a true traditional Ukrainian banya (bath), complete with wet leaves, cold water and full head-to-toe scrubbing. I do not know if there’s a tradition of Ukrainian hospitality, but if there is one, it was more than upheld! As conferences go, I was also very happy to meet new people from all over the world.

    Apart from these nice asides, I also followed the following sessions:

    Easy Distributed Systems using Hazelcast by Peter Veentjer
    A presentation of Hazelcast, an in-memory data grid. This presentation was focused on Hazelcast’s main capabilities and usage: caching and clustering using distributed data structures. The format was especially interesting, as the speaker used small shell scripts showing examples of each capability just presented. As I attended a presentation about Hazelcast more than one year ago, this talk served as a nice refresher for me.
    Mobile functional testing with Arquillian Droidium by Stefan Miklosovic
    Given I'm currently writing Integration Testing from the Trenches, I admit I was expecting much from this talk, since Integration Testing and End-to-end Testing are somehow related. I learned about new tools related to end-to-end testing on mobile that I didn't know of previously but, to be frank, 50 minutes were too long for the information provided: I could have learned the same amount from a blog post much quicker. I don't know if this feeling comes from my high expectations or from the speaker's delivery, but the fact is we didn't meet.
    Holding down your Technical Debt with SonarQube by Patroklos Papapetrou
    I think I know SonarQube very well, as I'm an early adopter, but I attended this talk nonetheless because I wanted to see Patroklos Papapetrou's performance first-hand. I was not disappointed: not only does he know the subject deeply, he's also a very good presenter of Code Quality Analysis in general and SonarQube in particular. Icing on the cake, it appears we share opinions on the subject, though we had never met before nor talked about it. A highly recommended talk to attend!
    Reflection Madness by Heinz Kabutz
    A list of reflection tricks, this talk's conclusion was not to use reflection unless really really (really!) required, as it can lead to doing stuff that crosses the boundaries of the language (e.g. adding a new enum value at runtime). It must have been a very interesting presentation but, at this point, the banya of the day before had worn me out and I confess I slept through most of the talk.

    I also had the privilege of presenting two talks myself.

    I think both received a warm welcome from the audience, but I cannot be the judge of that, of course.

    JEEConf is really a one of a kind conference. For example, one of the sponsors had the crazy idea to bring a live raccoon (though I wouldn’t have dared kiss it like the guy did: the trainer had numerous bite marks on his hands). This is something highly unusual in a tech conference! I enjoyed the 2014 edition and I really hope to join JEEConf 2015: since the date is already set to May 22-23, you know what to do if you want it too!

    PS: now I have this song playing in my head endlessly - bonus points for a good translation (Google Translate is not a good one)

    Categories: Event Tags: JEEConf
  • Dead simple API design for Dice Rolling

    I wanted to create a small project where I could achieve results fairly quickly with technologies I never (or rarely) use. At the Mix-IT conference, I realized the few things I had learned in Scala had been quickly forgotten. I also wanted to give Gradle a try, despite my regular bitching about it. Since my Role-Playing crew wants to play Earthdawn (we stopped like 20 years ago), I decided to create a Dice Roller app in Scala, running on Android (all of my friends have Android devices) and built with Gradle (I promised it at Devoxx).

    I soon realized that there was a definite Dice Rolling API that could be isolated from the rest. Rolling a die in Earthdawn has a definite quirk: if you roll the highest result, you re-roll and add it to the previous result, and so on until you roll something other than the highest. The basics of rolling a die, however, are similar in every system. It was time to design an extensible API. By extensible, I mean something I could re-use in every RPG system.

    I’d already been trying to create such an API, and the root of it is the roll() method signature. Before, I used 2 methods in conjunction: one call to roll the die, then a second call to read the result.


    This is a big mistake, as implementations will require state handling! This created plenty of problems, among them:

    • creating instances each time I needed to roll
    • calling 2 different methods in the right order, also known as time coupling

    This time, having learned from my mistake, I replaced that with a single roll() method returning the result directly.


    This time, implementation can (and will) do without state.

    The next design decision is about the result type. I’m not really sure about this point, but I formerly returned only the result. This time, I return the result as well as the object itself, so I can pass both along, letting users know which die was rolled as well as the result. This is possible without creating a new class structure thanks to Scala’s out-of-the-box Tuple2. My final interface looks like this:

    trait Rollable[T] {
      def roll: (Rollable[T], T)
    }

    As I’m aiming at RPGs, I just need the number of sides as a parameter (to be able to create those strange 12- and 20-sided dice). A naive implementation is very straightforward:

    class Die(val sides: Int) extends Rollable[Int] {
      val random = new SecureRandom
      override def roll: (Die, Int) = (this, random.nextInt(sides) + 1)
    }

    Seems good enough. However, how can we test this design? If I need to build upon this, I will need to be able to set desired results: in this case, this is not cheating, it’s faking! As it turns out, I need to pass the random as a constructor parameter (but without a getter).

    class Die(val sides: Int, random: Random) extends Rollable[Int] {
      override def roll: (Die, Int) = (this, random.nextInt(sides) + 1)
    }

    That’s better, but only marginally. With this code, I need to pass the random parameter each time I create a new instance. A slightly better option would be to add a constructor with a default SecureRandom instance. However, what if the next Java version offers an even better Random implementation? Or if the API user prefers to rely on an external secure entropy source? He would still have to pass the new improved random on each call: back to square one. Fortunately, Scala offers a nice language feature called implicit. With implicits, API users only have to reference the random generator once in a file to use it everywhere. The improved design now looks like this:

    class Die(val sides: Int)(implicit random: Random) extends Rollable[Int] {
      override def roll: (Die, Int) = (this, random.nextInt(sides) + 1)
    }

    object SecureDie {
      implicit val random = new SecureRandom
    }

    Callers then just need to import the provided random, or use their own, and call the constructor with the desired number of sides:

    import SecureDie.random // or import MyQuantumRandomGenerator.random
    val die = new Die(6)

    The final step is to create Scala objects (singletons) in order to offer a convenient API. This is only possible because we designed our classes with no state:

    import SecureDie.random
    object d3 extends Die(3)
    object d4 extends Die(4)
    object d6 extends Die(6)
    object d8 extends Die(8)
    object d10 extends Die(10)
    object d12 extends Die(12)
    object d20 extends Die(20)
    object d100 extends Die(100)

    Users now just need to call d6.roll to roll dice!

    With a simple domain, I showed how one of Scala’s most basic features can really help in getting a clean design. Results are available on Github.

    In the next article I will detail how I got Scala to run on Android and the pitfalls I stumbled upon. Spoiler: there will be some Gradle involved…

    Categories: Development Tags: api, scala
  • Playing with constructors

    Immutability is a property I aim for when designing most of my classes. Achieving immutability requires:

    • A constructor initializing all attributes
    • No setter for those attributes

    However, this design either prevents testing or makes it more complex. In order to allow (or ease) testing, a public no-arg constructor is needed.

    Other use-cases requiring usage of a no-arg constructor include:

    • De-serialization of serialized objects
    • Sub-classing with no constructor invocation of parent classes
    • etc.

    There are a couple of solutions to this.

    1. Writing a public no-arg constructor

    The easiest way is to create a public no-arg constructor, then add a big bright Javadoc comment warning developers not to use it. As you can imagine, in this case easy doesn’t mean it enforces anything: you are basically relying on developers’ willingness to follow instructions (or even more on their ability to read them in the first place - a risky bet).

    The biggest constraint, however, is that you need to be able to change the class code.

    2. Writing a package-visible no-arg constructor

    A common approach used for testing is to change the visibility of a class’s private methods to package-visible, so they can be tested by test classes located in the same package. The same approach can be used in our case: write a package-visible no-arg constructor.

    This requires the test class to be in the same package as the class whose constructor has been added. As in case 1 above, you also need to be able to change the class code.
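    As a sketch (the class and field names are made up for illustration), the package-visible constructor sits next to the real one:

    ```java
    public class ImmutableBean {
        private final String name;

        public ImmutableBean(String name) {
            this.name = name;
        }

        /** Package-visible, for testing purposes only. */
        ImmutableBean() {
            this.name = null;
        }

        public String getName() {
            return name;
        }

        public static void main(String[] args) {
            // Only callable from the same package, e.g. a test class
            System.out.println(new ImmutableBean().getName());
        }
    }
    ```

    A test class located in the same package can then call `new ImmutableBean()` directly, while code in other packages only sees the public constructor.
    
    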

    3. Playing it Unsafe

    The JDK is like a buried treasure: it contains many hidden and shiny features; the sun.misc.Unsafe class is one of them. Of course, as both its name and package imply, its usage is extremely discouraged. Unsafe offers an allocateInstance(Class<?>) method to create new instances without calling any constructor whatsoever, nor any initializers.

    Note that Unsafe only has instance methods and its only constructor is private… but it offers a private singleton attribute. Getting a reference to this attribute requires a bit of reflection black magic, as well as a security manager lenient enough to allow it (the default one is).

    Field field = Unsafe.class.getDeclaredField("theUnsafe");
    field.setAccessible(true); // the reflection black magic: the field is private
    Unsafe unsafe = (Unsafe) field.get(null);
    java.sql.Date date = (java.sql.Date) unsafe.allocateInstance(java.sql.Date.class);

    Major constraints of this approach include:

    • Relying on a class outside the public API
    • Using reflection to access a private field
    • Only available in Oracle's HotSpot JVM
    • Setting a lenient enough security manager

    4. Objenesis

    Objenesis is a framework which sole goal is to create new instances without invoking constructors. It offers an abstraction layer upon the Unsafe class in Oracle’s HotSpot JVM. Objenesis also works on different JVMs including OpenJDK, Oracle’s JRockit and Dalvik (i.e. Android) in many different versions by using strategies adapted to each JVM/version pair.

    The above code can be replaced with the following:

    Objenesis objenesis = new ObjenesisStd();
    ObjectInstantiator instantiator = objenesis.getInstantiatorOf(java.sql.Date.class);
    java.sql.Date date = (java.sql.Date) instantiator.newInstance();

    Running this code on Oracle’s HotSpot will still require a lenient security manager as Objenesis will use the above Unsafe class. However, such requirements will be different from JVM to JVM, and handled by Objenesis.


    Though a very rare and specialized requirement, creating instances without constructor invocation might sometimes be necessary. In this case, the Objenesis framework offers a portable and abstract way to achieve this, at the cost of a single additional dependency.

    Categories: Java Tags: objenesis
  • The Visitor design pattern

    I guess many people know about the Visitor design pattern, described in the Gang of Four’s Design Patterns: Elements of Reusable Object-Oriented Software book. The pattern itself is not very complex (as many design patterns go):

    Visitor UML class diagram

    I’ve known about Visitor for ages, but I’ve never needed it… yet. Java handles polymorphism natively: the method executed is based upon the runtime type of the target object, not on its compile-time type.

    interface Animal {
        void eat();
    }
    public class Dog implements Animal {
        public void eat() {
            System.out.println("Gnaws bones");
        }
    }
    Animal a = new Dog();
    a.eat(); // Prints "Gnaws bones"

    However, this doesn’t work so well (i.e. at all) for parameter types:

    public class Feeder {
        public void feed(Dog d) { ... }
        public void feed(Cat c) { ... }
    }
    Feeder feeder = new Feeder();
    Object o = new Dog();
    feeder.feed(o); // Cannot compile!

    This issue is called double dispatch, as it requires dispatching a call based on both the instance type and the parameter type, which Java doesn’t handle natively. In order to make the above compile, the following code is required:

    if (o instanceof Dog) {
        feeder.feed((Dog) o);
    } else if (o instanceof Cat) {
        feeder.feed((Cat) o);
    } else {
        throw new RuntimeException("Invalid type");
    }
    This gets even more complex with more overloaded methods available - and exponentially so with more parameters. In the maintenance phase, adding more overloaded methods requires reading the whole if block and updating it. Multiple parameters are handled through nested ifs, which is even worse regarding maintainability. The Visitor pattern is an elegant way to achieve the same with no ifs, at the expense of a single additional method on the Animal interface.

    public interface Animal {
        void eat();
        void accept(Visitor v);
    }
    public interface Visitor {
        void visit(Cat c);
        void visit(Dog d);
    }
    public class Cat implements Animal {
        public void eat() { ... }
        public void accept(Visitor v) {
            v.visit(this);
        }
    }
    public class Dog implements Animal {
        public void eat() { ... }
        public void accept(Visitor v) {
            v.visit(this);
        }
    }
    public class FeederVisitor implements Visitor {
        public void visit(Cat c) {
            new Feeder().feed(c);
        }
        public void visit(Dog d) {
            new Feeder().feed(d);
        }
    }

    • No evaluation logic anywhere
    • The adherence between Animal and FeederVisitor is limited to the visit() method
    • As a corollary, when adding new Animal subtypes, the Feeder type is left untouched
    • When adding new Animal subtypes, the FeederVisitor type may implement an additional method to handle it
    • Other cross-cutting logic may follow the same pattern, e.g. a training visitor to teach animals new tricks

    It might seem overkill to go to such lengths for such a simple example. However, my experience has taught me that simple designs like the one above are fated to become more complex as time passes.
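    For completeness, here is a self-contained runnable sketch of the pattern, with hypothetical print statements standing in for the elided feeding logic:

    ```java
    interface Animal {
        void accept(Visitor v);
    }

    interface Visitor {
        void visit(Cat c);
        void visit(Dog d);
    }

    class Cat implements Animal {
        public void accept(Visitor v) { v.visit(this); } // double dispatch happens here
    }

    class Dog implements Animal {
        public void accept(Visitor v) { v.visit(this); }
    }

    class FeederVisitor implements Visitor {
        public void visit(Cat c) { System.out.println("Feeding the cat"); }
        public void visit(Dog d) { System.out.println("Feeding the dog"); }
    }

    public class VisitorDemo {
        public static void main(String[] args) {
            Animal[] animals = { new Dog(), new Cat() };
            FeederVisitor feeder = new FeederVisitor();
            for (Animal a : animals) {
                a.accept(feeder); // no instanceof checks needed
            }
        }
    }
    ```
    
    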

    Categories: Java Tags: design pattern
  • Introduction to Mutation Testing

    Last week, I took some days off to attend Devoxx France 2014, the conference’s 3rd edition. As with oysters, the biggest talks do not necessarily contain the prettiest pearls. During this year’s edition, my revelation came from a 15-minute talk by my friend Alexandre Victoor, who introduced me to the wonders of Mutation Testing. Since I’m currently writing about Integration Testing, I’m very much interested in testing flavors I don’t know about.

    Experienced software developers know not to put too much faith in code coverage metrics. Reasons include:

    • Some asserts may have been forgotten (purposely or not)
    • Code with no value, such as getters and setters, may have been tested
    • And so on...

    Mutation Testing tries to go beyond code coverage metrics in order to increase one’s faith in tests. Here is how this is achieved: random code changes called mutations are introduced into the tested code. If a test still succeeds despite a code change, something is definitely fishy, as the test is worth nothing. As an example is worth a thousand words, here is a snippet that needs to be tested:

    public class DiscountEngine {
        public Double apply(Double discount, Double price) {
            return (1 - discount.doubleValue()) * price.doubleValue();
        }
    }

    The testing code would be akin to:

    public class DiscountEngineTest {
        private DiscountEngine discounter;
        protected void setUp() {
            discounter = new DiscountEngine();
        }
        public void should_apply_discount() {
            Double price = discounter.apply(new Double(0.5), new Double(10));
            assertEquals(price, new Double(5));
        }
    }

    Now, imagine the assertEquals() line was forgotten: DiscountEngineTest would still pass. In that case, however, wrong code updates in DiscountEngine would not be detected either. That’s where mutation testing enters the arena: if DiscountEngine can be changed and DiscountEngineTest still passes, it means nothing is really tested.

    PIT is a Java tool offering mutation testing. In order to achieve this, PIT creates a number of altered classes called mutants from the initial, un-mutated source class. Those mutants are then run against the existing tests targeting the original class. If the tests still pass, there’s a problem and the mutant is said to have survived; if not, everything is fine, as the mutant has been killed. For a single class, this goes on until either the mutant gets killed or all tests targeting the class have been executed with the mutant still surviving.
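    To make the idea concrete, here is a hand-rolled sketch of a single mutant (PIT generates such mutants automatically, at the bytecode level; the mutated operator below is my own choice for illustration):

    ```java
    public class MutationDemo {
        // Original logic from DiscountEngine
        static double original(double discount, double price) {
            return (1 - discount) * price;
        }

        // A typical mutant: the '-' operator replaced with '+'
        static double mutant(double discount, double price) {
            return (1 + discount) * price;
        }

        public static void main(String[] args) {
            // A test WITH an assertion kills the mutant: the results differ
            System.out.println(original(0.5, 10) == 5.0); // true
            System.out.println(mutant(0.5, 10) == 5.0);   // false: assertEquals would fail
            // A test WITHOUT assertions lets the mutant survive: nothing is checked
        }
    }
    ```
    
    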

    Mutation testing in general, and PIT in particular, has a big disadvantage: the higher the number of mutants for a class, the higher the confidence in the results, but also the longer the time required to execute the tests. Therefore, it is advised to run Mutation Testing only in nightly builds. However, this cost is nothing in comparison to having trust in your tests again…

    Out-of-the-box, PIT offers:

    • Maven integration
    • Ant integration
    • Command-line

    Also, Alexandre has written a dedicated plugin for Sonar.

    Source code for this article can be found in IntelliJ/Maven format there.

    Categories: Java Tags: mutation testing, test