Archive for the ‘Java’ Category

Spring configuration modularization for Integration Testing

July 27th, 2014

Object-Oriented Programming advocates modularization in order to build small and reusable components. There are, however, other benefits to it. In the case of the Spring framework, modularization enables Integration Testing: the ability to test the system or parts of it, including the assembly configuration.

Why is it so important to test the system assembled with the final configuration? Let’s take a simple example: the making of a car. Unit Testing the car would be akin to testing every nut and bolt of the car separately, while Integration Testing the car would be like driving it on a circuit. If only the car’s components are tested separately, selling the assembled car is a huge risk, as nothing guarantees it will behave correctly in real-life conditions.

Now that we have established that Integration Testing is necessary to guarantee an adequate level of internal quality, it’s time to enable it with the Spring framework. Integration Testing is based on the notion of the SUT (System Under Test). Defining the SUT means drawing the boundaries between what is tested and its dependencies. In nearly all cases, the test setup will require providing some kind of test double for each required dependency. Configuring those test doubles can only be achieved by modularizing the Spring configuration, so that they can replace the dependent beans located outside the SUT.

Fig. 1 – Sample bean dependency diagram

Spring’s DI configuration comes in 3 different flavors: XML – the legacy way – autowiring, and the newest JavaConfig. We’ll have a look at how modularization can be achieved with each flavor. Modularization of mixed DI configurations can be deduced from each separate section.

Autowiring

Autowiring is an easy way to assemble Spring applications. It is achieved through the use of either @Autowired or @Inject. Let’s cover autowiring quickly: as injection is implicit, there’s no easy way to modularize the configuration. Applications using autowiring will just have to migrate to another DI flavor to allow for Integration Testing.
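
As an illustration, here’s what implicit wiring looks like (a minimal, hypothetical sketch): the dependency is resolved by type at container startup, so there’s no configuration artifact in which a test double could be substituted.

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Repository;

@Repository
public class OrderRepository {

    // Resolved by type at runtime: the binding appears nowhere in an
    // editable configuration fragment, so a test cannot swap it
    @Autowired
    private DataSource dataSource;
}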

XML

XML is the legacy way to inject dependencies, but is still in use. Consider the following monolithic XML configuration file:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
     xmlns:jee="http://www.springframework.org/schema/jee"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
     xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
      http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee.xsd">
  <jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource" />
  <bean id="productRepository" class="ProductRepository">
    <constructor-arg ref="dataSource" />
  </bean>
  <bean id="customerRepository" class="CustomerRepository">
    <constructor-arg ref="dataSource" />
  </bean>
  <bean id="orderRepository" class="OrderRepository">
    <constructor-arg ref="dataSource" />
  </bean>
  <bean id="orderService" class="OrderService">
    <constructor-arg ref="productRepository" index="0" />
    <constructor-arg ref="customerRepository" index="1" />
    <constructor-arg ref="orderRepository" index="2" />
  </bean>
</beans>

At this point, Integration Testing orderService is not as easy as it should be. In particular, we need to:

  • Download the application server
  • Configure the server for the jdbc/MyDataSource data source
  • Deploy all classes to the server
  • Start the server
  • Stop the server after the test(s)

Of course, all the previous tasks have to be automated! Though this is not impossible, thanks to tools such as Arquillian, it runs contrary to the KISS principle. Overcoming this problem and making our life (as well as test maintenance) easier requires both tooling and design. On the tooling side, we’ll be using a local database – usually of the in-memory kind, e.g. H2. On the design side, this requires separating our beans into two different configuration fragments: one solely dedicated to the datasource to be faked, the other for the beans constituting the SUT.

Then, we’ll use a Maven classpath trick: Maven puts the test classpath in front of the main classpath when executing tests. This way, files found in the test classpath “override” similarly-named files in the main classpath. Let’s create two configuration file fragments:

  • The “real” JNDI datasource as in the monolithic configuration
    <beans ...>
      <jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource" />
    </beans>
  • The Fake datasource
    <beans ...>
      <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource">
          <property name="driverClassName" value="org.h2.Driver" />
          <property name="url" value="jdbc:h2:~/test" />
          <property name="username" value="sa" />
          <property name="maxActive" value="1" />
      </bean>
    </beans>

    Note we are using a Tomcat datasource object; this requires the org.apache.tomcat:tomcat-jdbc:jar library on the test classpath. Also note the maxActive property: it sets the maximum number of connections to the database. It is advised to always set it to 1 in test scenarios, so that connection pool exhaustion bugs can be caught as early as possible.

The final layout is the following:

Fig. 2 – Project structure for Spring XML configuration Integration Testing

  1. JNDI datasource
  2. Other beans
  3. Fake datasource

The final main-config.xml file looks like:

<?xml version="1.0" encoding="UTF-8"?>
<beans ...>
  <import resource="classpath:datasource-config.xml" />
  <!-- other beans go here -->
</beans>

Such a structure is the foundation for enabling Integration Testing.
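
For example, a TestNG-based integration test can then simply reference the main configuration file; the fake datasource fragment is picked up from the test classpath automatically. A minimal sketch, assuming the file and bean names used above (the test class itself is hypothetical):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
import org.testng.annotations.Test;

@ContextConfiguration(locations = "classpath:main-config.xml")
public class OrderServiceIntegrationTest extends AbstractTestNGSpringContextTests {

    // Wired to the fake H2 datasource, since the test classpath's
    // datasource-config.xml shadows the main one
    @Autowired
    private OrderService orderService;

    @Test
    public void should_create_order() {
        // Exercise orderService against the in-memory database
    }
}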

JavaConfig

JavaConfig is the most recent way to configure Spring applications, bringing both the compile-time safety of autowiring and the explicitness of XML configuration.

The above datasource fragments can be “translated” into Java as follows:

  • The “real” JNDI datasource as in the monolithic configuration
    @Configuration
    public class DataSourceConfig {
    
        @Bean
        public DataSource dataSource() throws Exception {
            Context ctx = new InitialContext();
            return (DataSource) ctx.lookup("jdbc/MyDataSource");
        }
    }
  • The Fake datasource
    @Configuration
    public class FakeDataSourceConfig {
    
        @Bean
        public DataSource dataSource() {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:~/test");
            dataSource.setUsername("sa");
            dataSource.setMaxActive(1);
            return dataSource;
        }
    }

However, there are two problems that appear when using JavaConfig.

  1. It’s not possible to use the same classpath trick with an import as with XML previously, as Java forbids having 2 (or more) classes with the same qualified name loaded by the same classloader (which is the case with Maven). Therefore, JavaConfig configuration fragments shouldn’t explicitly import other fragments, but should leave the fragment assembly responsibility to their users (application or tests) so that names can be different, e.g.:
    @ContextConfiguration(classes = {MainConfig.class, FakeDataSourceConfig.class})
    public class SimpleDataSourceIntegrationTest extends AbstractTestNGSpringContextTests {
    
        @Test
        public void should_check_something_useful() {
            // Test goes there
        }
    }
  2. The main configuration fragment uses the datasource bean from the other configuration fragment. This requires the former to have a reference to the latter, which is obtained through the @Autowired annotation (one of its few relevant usages), as sketched after this list.
    @Configuration
    public class MainConfig {
    
        @Autowired
        private DataSource dataSource;
    
        // Other beans go there. They can use dataSource!
    }
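
For instance, fleshing out MainConfig, the repository beans from the XML example can be declared with the injected datasource (a sketch):

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MainConfig {

    // Provided by whichever datasource fragment is assembled alongside
    @Autowired
    private DataSource dataSource;

    @Bean
    public ProductRepository productRepository() {
        return new ProductRepository(dataSource);
    }
}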

Summary

In this article, I showed how Integration Testing against a Fake datasource can be achieved by modularizing a monolithic Spring configuration into different configuration fragments, whether XML or JavaConfig.

However, the realm of Integration Testing – with Spring or without – is vast. Should you want to go further, I’ll be giving a talk on Integration Testing at Agile Tour London on Oct. 24th and at Java Days Kiev on Oct. 17th-18th.

This article is an abridged version of a part of the Spring chapter of Integration Testing from the Trenches. Have a look at it; there’s even a free sample chapter!



Easier Spring version management

July 6th, 2014

Earlier on, Spring migrated from a monolithic approach – the whole framework – to a modular one – bean, context, test, etc. – so that one could decide to use only the required modules. This modularity came at a cost, however: in the Maven build configuration (or the Gradle one, for that matter), one had to specify the version of each module used.

<?xml version="1.0" encoding="UTF-8"?>
<project ...>
    ...
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>4.0.5.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>4.0.5.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>4.0.5.RELEASE</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Of course, professional Maven users would improve this POM with the following:

<?xml version="1.0" encoding="UTF-8"?>
<project ...>
    ...
    <properties>
        <spring.version>4.0.5.RELEASE</spring.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <version>${spring.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Since version 3.2.6, though, there’s a more concise way to achieve the same through a BOM-typed POM (see the section on the import scope).

<?xml version="1.0" encoding="UTF-8"?>
<project ...>
    ...
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-framework-bom</artifactId>
                <type>pom</type>
                <version>4.0.5.RELEASE</version>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

Note that Spring’s BOM only sets versions, not scopes: the latter still have to be set in each user POM.

Spring very recently released the Spring IO platform, which also includes a BOM. This BOM not only includes Spring dependencies, but also third-party libraries.

<?xml version="1.0" encoding="UTF-8"?>
<project ...>
    ...
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>io.spring.platform</groupId>
                <artifactId>platform-bom</artifactId>
                <type>pom</type>
                <version>1.0.0.RELEASE</version>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-jdbc</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.testng</groupId>
            <artifactId>testng</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>

There’s one single problem with the Spring IO platform’s BOM: there’s no simple mapping from the BOM version to the declared dependency versions. For example, the BOM’s 1.0.0.RELEASE maps to Spring 4.0.5.RELEASE.



First release of Integration Testing from the Trenches

June 29th, 2014

My job as a software architect is to make sure the builds I provide have the best possible quality, and more specifically internal quality. While Unit Testing certainly helps create fewer regressions, relying only on it is akin to testing a car by testing its nuts and bolts. Integration Testing is about getting the car on a circuit.

Last week, I finally released the first version of Integration Testing from the Trenches. As its name implies, this book is about Integration Testing. It is organized in the following chapters:

Chapter 1 – Foundations of testing
This is an introductory chapter, laying out the foundations for the rest of the book. It describes Unit Testing, Integration Testing and Functional Testing, as well as their associated notions.
Chapter 2 – Developer testing tools
This chapter covers both the JUnit and TestNG testing frameworks. Tips and tricks on how to use them for Integration Testing are also included.
Chapter 3 – Test-Friendly Design
This chapter details Dependency Injection, DI-compatible design, and which objects should be set as dependencies during test execution. This includes definitions of Test Doubles such as Dummy, Fake and Mock, along with an explanation of Mockito, a mocking framework, and of Spring Test and Mockrunner, two available Open Source Fake libraries.
Chapter 4 – Automated testing
It covers how to get our carefully crafted Integration Tests to run through automated build tools, like Maven and Gradle.
Chapter 5 – Infrastructure Resources Integration
This chapter concerns itself with Integration Testing applied to infrastructure resources such as databases, mail servers, FTP servers and others. Tools and techniques for each resource type are explained.
Chapter 6 – Web Services Integration
This chapter is solely dedicated to Integration Testing with Web Services, in either their SOAP or REST flavor.
Chapter 7 – Spring in-container testing
In this chapter, testing recipes for Spring and Spring MVC applications are described. It also includes coverage of the Spring Test library.
Chapter 8 – JavaEE testing
Last but not least, this chapter covers testing of Java EE applications, including the Arquillian testing framework.

There’s a free sample chapter for you, kind reader, if you want to go further. And here’s a 10% discount, valid for the whole week, so you have something to read on the beach during vacation!

In any case, I’ll take excerpts from the book and publish them on this blog in the following weeks.


The right bean at the right place

June 22nd, 2014

Among the different customers I worked for, I noticed a widespread misunderstanding regarding the use of Spring contexts in Spring MVC.

Basically, you have two kinds of contexts, in a parent-child relationship:

  • The main context is where service beans are hosted. By convention, it is spawned from the /WEB-INF/applicationContext.xml file, but this location can be changed with the contextConfigLocation context parameter. Alternatively, one can use the AbstractAnnotationConfigDispatcherServletInitializer; in this case, configuration classes should be part of the array returned by the getRootConfigClasses() method.
  • Web context(s) are where the Spring MVC dispatcher servlet’s configuration beans and controllers can be found. Each is spawned from <servlet-name>-servlet.xml or, if using the JavaConfig approach above, from the classes returned by the getServletConfigClasses() method (see the sketch after this list).
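
As an illustration, here’s a minimal sketch of such an initializer; MainConfig and WebConfig are hypothetical configuration classes standing in for the root and web contexts respectively:

import org.springframework.web.servlet.support.AbstractAnnotationConfigDispatcherServletInitializer;

public class WebAppInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {

    @Override
    protected Class<?>[] getRootConfigClasses() {
        // Main context: services, repositories and other shared beans
        return new Class<?>[] { MainConfig.class };
    }

    @Override
    protected Class<?>[] getServletConfigClasses() {
        // Web context: controllers, view resolvers, message sources
        return new Class<?>[] { WebConfig.class };
    }

    @Override
    protected String[] getServletMappings() {
        return new String[] { "/" };
    }
}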

As in every parent-child relationship, there’s a catch:

Beans from the child contexts can access beans from the parent context, but not the opposite.

That makes sense if you picture this: I want my controllers to be injected with services, but not the other way around (it would be a funny idea to inject controllers into services). Besides, you could have multiple Spring servlets, each with its own web context, all sharing the same main context as their parent. When it goes beyond controllers and services, one has to decide in which context a bean should go. For some, that’s pretty obvious: view resolvers, message sources and such go into the web context; for others, one has to spend some time thinking about it.

A good rule of thumb to decide in which context a bean should go is the following: if you had multiple servlets (even if you don’t), what would you want to share and what not?

Note: this reasoning should not be tied to your application as it currently stands, as otherwise you’d probably end up sharing message sources in the main application context, which is a (really) bad idea.

This modularization lets you put the right bean at the right place, promoting bean reusability.


Back to basics: encapsulating collections

June 15th, 2014

When I was younger, I learned that there were 3 properties of the Object-Oriented paradigm:

  • Encapsulation
  • Inheritance
  • Polymorphism

In Java, encapsulation is implemented through the use of private attributes with accessor methods, commonly known as getters and setters. Whether this is proper encapsulation is subject to debate and outside the scope of this article. However, using this method to attain encapsulation when the attribute is a collection (of type java.util.Collection, java.util.Map or one of their subtypes) is just plain wrong.

The code I see most of the time is the following:

public class MyBean {
    private Collection collection;

    public Collection getCollection() {
        return collection;
    }

    public void setCollection(Collection collection) {
        this.collection = collection;
    }
}

This design has been popularized by ORM frameworks such as Hibernate. Many times, when I raise this point, the next proposal is an immutable one:

public class MyBean {
    private Collection collection;

    public MyBean(Collection collection) {
        this.collection = collection;
    }

    public Collection getCollection() {
        return collection;
    }
}

No proper encapsulation

However, in the case of collections, this changes nothing, as Java collections are themselves mutable. Obviously, both passing a reference to the collection in the constructor and returning a reference to it amount to no encapsulation at all. Real encapsulation is only possible if no reference to the collection is either kept or returned:

List list = new ArrayList();
MyBean mybean = new MyBean(list);
list.add(new Object()); // We just modified the collection outside my bean

Not possible to use a specific subtype

Besides, my bean could require a more specific collection of its own, such as a List or a Set. With the following code snippet, passing a Set is simply not possible:

public class MyBean {
    private List collection;

    public List getCollection() {
        return collection;
    }

    public void setCollection(List collection) {
        this.collection = collection;
    }
}

No choice of the concrete implementation

As a corollary to the previous point, keeping the provided reference prevents us from using our own (perhaps more efficient) type, e.g. an Apache Commons FastArrayList.

An implementation proposal

The starting point of any true encapsulation is the following:

public class MyBean {
    private List collection = new ArrayList();

    public MyBean(Collection collection) {
        this.collection.addAll(collection);
    }

    public Collection getCollection() {
        return Collections.unmodifiableList(collection);
    }
}

This fixes the aforementioned cons:

  1. No reference to the collection is passed in the constructor, thus preventing any subsequent changes from outside the object
  2. Freedom to use the chosen collection implementation, with complete isolation – leaving room for change
  3. The getter returns an unmodifiable view, so no usable reference to the wrapped collection escapes

Note: the previous snippets do not use generics for easier readability; please do use them in real code.
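
For reference, here’s a sketch of the same proposal with generics added:

import java.util.ArrayList;
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class MyBean<T> {

    private final List<T> collection = new ArrayList<T>();

    public MyBean(Collection<? extends T> collection) {
        // Defensive copy: no reference to the original collection is kept
        this.collection.addAll(collection);
    }

    public List<T> getCollection() {
        // Unmodifiable view: callers cannot change the wrapped list
        return Collections.unmodifiableList(collection);
    }
}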


A single simple rule for easier Exception hierarchy design

June 9th, 2014

Each new project usually requires setting up an Exception hierarchy, which usually ends up being the same.

I will not go into details about whether we should extend RuntimeException or Exception directly, or whether the hierarchy roots should be FunctionalException/TechnicalException or TransientException/PersistentException. Those will be rants for another time, as my current problem is completely unrelated.

The situation is the following: when something bad happens deep in the call stack (e.g. an authentication failure from the authentication provider), a new FunctionalException is created with a known error code, say 123.

public class FunctionalException extends RuntimeException {
    private long errorCode;
    public FunctionalException(long errorCode) {
        this.errorCode = errorCode;
    }
    // Other constructors
}

At this point, there are some nice advantages to this approach: the error code can be both logged and shown to the user with an adequate error message.

The downside is that analyzing where the authentication failure exception is effectively used in the code is completely impossible. As I’m stuck with the task of adding new features to this codebase, I must say this sucks big time. Dear readers, when you design an Exception hierarchy, please add the following:

public class AuthenticationFailureException extends FunctionalException {
    public AuthenticationFailureException() {
        super(123L);
    }
    // Other constructors
}

This is slightly more verbose, of course, but you’ll keep all the aforementioned advantages while letting poor maintainers like me analyze the code much less painfully. Many thanks in advance!
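
As an illustration, call sites can then handle the failure by type instead of inspecting an error code (authenticationService is hypothetical here):

try {
    authenticationService.authenticate(login, password);
} catch (AuthenticationFailureException e) {
    // A simple Find Usages on the type lists every handling site,
    // something a bare FunctionalException carrying code 123 cannot offer
}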


Scala on Android and stuff: lessons learned

June 1st, 2014

I’ve been playing Role-Playing Games since I was eleven, and my posse and I still play once or twice a year. Recently, they decided to play Earthdawn again, a game we hadn’t played in more than 15 years! That triggered my desire to create an application to roll all those strangely-shaped dice. And to combine the useful with the pleasant, I decided to use technologies I’m not really familiar with: the Scala language, the Android platform and the Gradle build system.

The first step was to design a simple and generic die-rolling API in Scala; that was the subject of one of my former articles. The second step was to build upon this API to construct something more specific to Earthdawn and to design the GUI. Here’s the write-up of my musings during this development.

Here’s a general component overview:

Component overview

Reworking the basics

After much internal debate, I finally changed the return type of the roll method from (Rollable[T], T) to simply T, following a relevant comment on Reddit. I concluded that it’s up to the caller to get hold of the die itself and to return it if wanted. That’s what I did in the Earthdawn domain layer.

Scala specs

Using Scala meant I also dived into the Scala Specs2 framework. Specs2 offers a Behavior-Driven way of writing tests, as well as integration with JUnit through runners and with Mockito through traits. Test instances can be initialized separately in a dedicated class, isolated from all others.

My biggest challenge was to configure the Maven Surefire plugin to execute Specs2 tests:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.17</version>
    <configuration>
        <argLine>-Dspecs2.console</argLine>
        <includes>
            <include>**/*Spec.java</include>
        </includes>
    </configuration>
</plugin>

Scala + Android = Scaloid

Though perhaps disappointing at first, running Scala on Android is feasible enough. Normal Java is compiled to bytecode and then converted to Dalvik-compatible Dex files. Since Scala is also compiled to bytecode, the same process can be applied. Not only can Scala easily be ported to Android, frameworks to help do so are available online: the one that seemed the most mature was Scaloid.

Scaloid’s most important feature is to eschew the traditional declarative XML-based layout in favor of a Scala-based Domain-Specific Language, with the help of implicits:

val layout = new SLinearLayout {
  SButton("Click").<<.Weight(1.0f).>>
}

Scaloid also offers:

  • Lifecycle management
  • Implicit conversions
  • Trait-based hierarchy
  • Improved getters and setters
  • etc.

If you want to do Scala on Android, this is the project you have to look at!

Some more bitching about Gradle

I’m not known to be a big fan of Gradle – to say the least. The biggest reason, however, is not that I think Gradle is bad, but that choosing a tool based on its hype level is the worst reason I can think of.

I used Gradle for the API project, and I must admit it is more concise than Maven. For example, instead of adding the whole maven-scala-plugin XML snippet, it’s enough to tell Gradle to use it with:

apply plugin: 'scala'

The biggest advantage, however, is that Gradle keeps the state of the project, so that unnecessary tasks are not executed. For example, if the code didn’t change, compilation will not be executed.

Now on to the things that are – to put it in a politically correct way – less than optimal:

  • First, interestingly enough, Gradle does not output test results to the console. For a Maven user, this is somewhat unsettling. But even setting habits aside, I’m afraid any potential user is interested in the test output. Yet, this can be remedied with some Groovy code:
    test {
        onOutput { descriptor, event ->
            logger.lifecycle("Test: " + descriptor + ": " + event.message )
        }
    }
  • Then, as I wanted to install the resulting package into my local Maven repository, I had to add the Maven plugin: easy enough. This also required Maven coordinates – quite expected. But why am I allowed to install without executing the test phase? This is not only disturbing: I cannot accept any rationale for allowing installation without testing.
  • Neither of the previous points proved to be a show-stopper, however. For the Scala-Android project, you might think I just needed to apply both the scala and android plugins and be done with it. Well, this is wrong! It seems that despite Android being showcased as the use-case for Gradle, the scala and android plugins are not compatible. This was the end of my Gradle adventure, probably for the foreseeable future.

Testing with ease and Genymotion

The build process – i.e. transforming every class file into Dex, packaging them into an .apk and signing it – takes ages. It would be even worse if using the emulator from Google’s Android SDK. But rejoice: Genymotion is an advanced Android emulator that is not only very fast, but also easy as pie to use.

Instead of doing an adb install, installing an APK on Genymotion can be achieved by just drag’n’dropping it onto the emulated device. Even better, it doesn’t require uninstalling the old version first, and it launches the application directly. Easy as pie, I told you!

Conclusion

I now have a working Android application, complete with tests and a repeatable build. It’s not much, but it gets the job done, and it taught me some new stuff I didn’t know previously. I can only encourage you to do the same: pick a small application and develop it with languages/tools/platforms you don’t use in your day-to-day job. In the end, you will have learned stuff and have a working application. Doesn’t that make you feel warm inside?


Playing with constructors

May 4th, 2014

Immutability is a property I look for when designing most of my classes. Achieving immutability requires:

  • A constructor initializing all attributes
  • No setter for those attributes

However, this design prevents testing or makes it more complex. In order to allow (or ease) testing, a public no-arg constructor is needed.

Other use-cases requiring usage of a no-arg constructor include:

  • De-serialization of serialized objects
  • Sub-classing with no constructor invocation of parent classes
  • etc.

There are a couple of solutions to this.

1. Writing a public no-arg constructor

The easiest way is to create a public no-arg constructor, then add a big bright Javadoc comment to warn developers not to use it. As you can imagine, in this case easy doesn’t mean it enforces anything: you are basically relying on developers’ willingness to follow instructions (or, even more, on their ability to read them in the first place – a risky bet).

The biggest constraint, however, is that you need to be able to change the class’ code.

2. Writing a package-visible no-arg constructor

A common approach used for testing is to change the visibility of a class’ private methods to package-visible, so they can be tested by test classes located in the same package. The same approach can be used in our case: write a package-visible no-arg constructor.

This requires the test class to be in the same package as the class whose constructor has been created. As in case 1 above, you also need to be able to change the class’ code.
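
A minimal sketch of this approach, on a hypothetical immutable class:

public class Customer {

    private final String name;

    public Customer(String name) {
        this.name = name;
    }

    /** No-arg constructor reserved for tests located in the same package. */
    Customer() {
        this.name = null;
    }
}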

3. Playing it Unsafe

The JDK is like a buried treasure: it contains many hidden and shiny features, and the sun.misc.Unsafe class is one of them. Of course, as both its name and package imply, its usage is extremely discouraged. Unsafe offers an allocateInstance(Class<?>) method to create new instances without calling any constructor whatsoever, nor any initializers.

Note that Unsafe only has instance methods and its only constructor is private… but it offers a private singleton attribute. Getting a reference to this attribute requires a bit of reflection black magic, as well as a lenient enough security manager (which is the case by default):

Field field = Unsafe.class.getDeclaredField("theUnsafe");
field.setAccessible(true);
Unsafe unsafe = (Unsafe) field.get(null);

java.sql.Date date = (java.sql.Date) unsafe.allocateInstance(java.sql.Date.class);
System.out.println(date);

Major constraints of this approach include:

  • Relying on a class outside the public API
  • Using reflection to access a private field
  • Only available in Oracle’s HotSpot JVM
  • Setting a lenient enough security manager

4. Objenesis

Objenesis is a framework whose sole goal is to create new instances without invoking constructors. It offers an abstraction layer over the Unsafe class on Oracle’s HotSpot JVM. Objenesis also works on different JVMs, including OpenJDK, Oracle’s JRockit and Dalvik (i.e. Android), in many different versions, by using strategies adapted to each JVM/version pair.

The above code can be replaced with the following:

Objenesis objenesis = new ObjenesisStd();
ObjectInstantiator instantiator = objenesis.getInstantiatorOf(java.sql.Date.class);

java.sql.Date date = (java.sql.Date) instantiator.newInstance();
System.out.println(date);

Running this code on Oracle’s HotSpot will still require a lenient enough security manager, as Objenesis uses the above Unsafe class there. However, such requirements differ from JVM to JVM and are handled by Objenesis.

Conclusion

Though it’s a very rare and specialized requirement, creating instances without constructor invocation might sometimes be necessary. In this case, the Objenesis framework offers a portable abstraction to achieve it, at the cost of a single additional dependency.


The Visitor design pattern

April 27th, 2014

I guess many people know about the Visitor design pattern, described in the Gang of Four’s Design Patterns: Elements of Reusable Object-Oriented Software book. The pattern itself is not very complex (as design patterns go):

Visitor UML class diagram

I’ve known about Visitor for ages, but I’ve never needed it… yet. Java handles polymorphism natively: the method call is based upon the runtime type of the calling object, not on its compile-time type.

interface Animal {
    void eat();
}
public class Dog implements Animal {
    public void eat() {
        System.out.println("Gnaws bones");
    }
}

Animal a = new Dog();
a.eat(); // Prints "Gnaws bones"

However, this doesn’t work so well (i.e. at all) for parameter types:

public class Feeder {
    public void feed(Dog d) {
        d.eat();
    }
    public void feed(Cat c) {
        c.eat();
    }
}

Feeder feeder = new Feeder();
Object o = new Dog();
feeder.feed(o); // Cannot compile!

This issue is called double dispatch, as it requires choosing the method based on both the instance and the parameter types, which Java doesn’t handle natively. In order to make the code compile, the following is required:

if (o instanceof Dog) {
    feeder.feed((Dog) o);
} else if (o instanceof Cat) {
    feeder.feed((Cat) o);
} else {
    throw new RuntimeException("Invalid type");
}

This gets even more complex as more overloaded methods become available – and exponentially so with more parameters. In the maintenance phase, adding more overloaded methods requires reading the whole if block and updating it. Multiple parameters are implemented through nested ifs, which is even worse for maintainability. The Visitor pattern is an elegant way to achieve the same result, with no ifs, at the expense of a single additional method on the Animal interface.

public interface Animal {
    void eat();
    void accept(Visitor v);
}

public interface Visitor {
    void visit(Cat c);
    void visit(Dog d);
}

public class Cat implements Animal {
    public void eat() { ... }
    public void accept(Visitor v) {
        v.visit(this);
    }
}

public class Dog implements Animal {
    public void eat() { ... }
    public void accept(Visitor v) {
        v.visit(this);
    }
}

public class FeederVisitor implements Visitor {
    public void visit(Cat c) {
        new Feeder().feed(c);
    }
    public void visit(Dog d) {
        new Feeder().feed(d);
    }
}
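
The earlier snippet that wouldn’t compile can now be written as a double dispatch:

Animal o = new Dog();
o.accept(new FeederVisitor()); // Dispatches to visit(Dog), which feeds the dog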

Benefits:

  • No evaluation logic anywhere
  • The coupling between Animal and FeederVisitor is limited to the visit() methods
  • As a corollary, when adding new Animal subtypes, the Feeder type is left untouched
  • When adding a new Animal subtype, the FeederVisitor type may implement an additional method to handle it
  • Other cross-cutting logic may follow the same pattern, e.g. a training feature to teach animals new tricks

It might seem overkill to go to such lengths for such a simple example. However, my experience has taught me that simple stuff like the above is fated to become more complex as time passes.


Introduction to Mutation Testing

April 20th, 2014

Last week, I took some days off to attend the 3rd edition of Devoxx France. As with oysters, the largest talks do not necessarily contain the prettiest pearls. During this year’s edition, my revelation came from a 15-minute talk by my friend Alexandre Victoor, who introduced me to the wonders of Mutation Testing. Since I’m currently writing about Integration Testing, I’m very much interested in testing flavors I don’t know about.

Experienced software developers know not to put too much faith in code coverage metrics. Reasons include:

  • Some asserts may have been forgotten (purposely or not)
  • Code with no value, such as getters and setters, may have been tested
  • And so on…

Mutation Testing tries to go beyond code coverage metrics in order to increase one’s faith in tests. Here’s how this is achieved: random code changes called mutations are introduced into the tested code. If a test still succeeds despite a code change, something is definitely fishy, as such a test is worth nothing. As an example is worth a thousand words, here is a snippet that needs to be tested:

public class DiscountEngine {

    public Double apply(Double discount, Double price) {

        return (1 - discount.doubleValue()) * price.doubleValue();
    }
}

The testing code would be akin to:

public class DiscountEngineTest {

    private DiscountEngine discounter;

    @BeforeMethod
    protected void setUp() {

        discounter = new DiscountEngine();
    }

    @Test
    public void should_apply_discount() {

        Double price = discounter.apply(new Double(0.5), new Double(10));

        assertEquals(price, new Double(5));
    }
}

Now, imagine the assertEquals() check was forgotten: DiscountEngineTest would still pass. In that case, however, erroneous changes to DiscountEngine would not be detected. That’s where Mutation Testing enters the arena: if mutating DiscountEngine still lets DiscountEngineTest pass, it means nothing is really tested.

PIT is a Java tool offering Mutation Testing. In order to achieve this, PIT creates a number of altered classes called mutants from the initial, un-mutated source class. Those mutants are then run against the existing tests targeting the original class. If the tests still pass, well, there’s a problem: the mutant is considered to have survived; if not, everything is fine, as the mutant has been killed. For each mutant, this goes on until either it gets killed or all tests targeting the class have been executed while it still survives.
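
As an illustration, one of PIT’s mutation operators replaces arithmetic operators with their counterparts; a hand-written equivalent of such a mutant of DiscountEngine could look like the following. With the assertEquals() in place, should_apply_discount() fails and the mutant is killed; without it, the mutant survives.

public class DiscountEngine {

    public Double apply(Double discount, Double price) {

        // Mutant: '-' has been replaced by '+'
        return (1 + discount.doubleValue()) * price.doubleValue();
    }
}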

Mutation Testing in general, and PIT in particular, has one big disadvantage: the higher the number of mutants for a class, the higher the confidence in the results, but also the longer the time required to execute the tests. Therefore, it is advised to run Mutation Testing only in nightly builds. However, this cost is nothing compared to having trust in your tests again…

Out-of-the-box, PIT offers:

  • Maven integration
  • Ant integration
  • Command-line

Also, Alexandre has written a dedicated plugin for Sonar.

Source code for this article can be found in IntelliJ/Maven format there.
