Archive

Posts Tagged ‘testng’

PowerMock, features and use-cases

October 7th, 2012 No comments

Even if you don’t like it, your job sometimes requires you to maintain legacy applications. It has happened to me (fortunately rarely), and the first thing I look for before writing so much as a single character in a legacy codebase is a unit testing harness. Most of the time, I focus on the code coverage of the part of the application I need to change and try to improve it as much as possible, even when unit tests are completely lacking.

Real trouble happens when the design isn’t state-of-the-art. Unit testing requires mocking, which only works when dependency injection (as well as initialization methods) makes classes unit-testable: this isn’t the case with some legacy applications, where there is a mess of private and static methods or initialization code in constructors, not to mention static blocks.

For example, consider the following code:

public class ExampleUtils {

    public static void doSomethingUseful() { ... }
}

public class CallingCode {

    public void codeThatShouldBeTestedInIsolation() {

        ExampleUtils.doSomethingUseful();

        ...
    }
}

It’s impossible to properly unit test the codeThatShouldBeTestedInIsolation() method since it depends on a static method of another class that cannot be mocked. Of course, there are proven techniques to overcome this obstacle. One such technique is to create a “proxy” class that wraps the call to the static method and to inject this class into the calling class, like so:

public class UsefulProxy {

    public void doSomethingUsefulByProxy() {

        ExampleUtils.doSomethingUseful();
    }
}

public class CallingCodeImproved {

    private UsefulProxy proxy;

    public void codeThatShouldBeTestedInIsolation() {

        proxy.doSomethingUsefulByProxy();

        ...
    }
}

Now I can inject a mock UsefulProxy and finally test my method in isolation. There are several drawbacks to ponder, though:

  • The produced code doesn’t provide any tests, only a way to make tests possible.
  • While writing this little workaround, you still haven’t produced a single test: at this point, you have achieved nothing.
  • You changed code before testing it and took the risk of breaking behavior! Granted, the example implies no complexity, but such is not always the case in real-life applications.
  • You made the code more testable, but only at the cost of an additional layer of complexity.

For all these reasons, I would recommend this approach only as a last resort. Even worse, some designs are completely closed to simple refactoring, such as the following example with a static initializer:

public class ClassWithStaticInitializer {

    static { ... }
}

As soon as the ClassWithStaticInitializer class is loaded by the class loader, the static block will be executed, for good or ill (in the light of unit testing, it probably will be the latter).

My mocking framework of choice is Mockito. Its designers deliberately left out features such as static method mocking, and I thank them for that: it means that if I cannot use Mockito, it’s a design smell. Unfortunately, as we’ve seen previously, tackling legacy code may require such features. That’s when PowerMock enters the picture (and only then – using PowerMock in a standard development process is also a sure sign the design is fishy).

With PowerMock, you can leave the initial code untouched and still write tests, so that you can begin changing the code with confidence. Here’s the test code for the first legacy snippet, using Mockito and TestNG:

@PrepareForTest(ExampleUtils.class)
public class CallingCodeTest {

    private CallingCode callingCode;

    @BeforeMethod
    protected void setUp() {

        mockStatic(ExampleUtils.class);

        callingCode = new CallingCode();
    }

    @ObjectFactory
    public IObjectFactory getObjectFactory() {

        return new PowerMockObjectFactory();
    }

    @Test
    public void callingMethodShouldntRaiseException() {

        callingCode.codeThatShouldBeTestedInIsolation();

        assertEquals(getInternalState(callingCode, "i"), 1);
    }
}

There isn’t much to do, namely:

  • Annotate the test class (or individual test methods) with @PrepareForTest, which references classes or whole packages. This tells PowerMock to allow byte-code manipulation of those classes, the actual mocking instruction happening in the following step.
  • Mock the desired methods with the available palette of mockXXX() methods.
  • Provide the object factory in a method that returns IObjectFactory and is annotated with @ObjectFactory.
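Going one step further than the test above, the mocked static call itself can also be verified. Here’s a minimal sketch, assuming PowerMockito’s static imports (mockStatic(), verifyStatic()):

callingCode.codeThatShouldBeTestedInIsolation();

// tells PowerMockito that the next static call is the one to verify
verifyStatic();
ExampleUtils.doSomethingUseful();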

Also note that with the help of the Whitebox class, we can access the class’s internal state (i.e. private variables). Even though this is bad practice, the alternative – taking chances with the legacy code without a test harness – is worse: remember, our goal is to lessen the chance of introducing new bugs.
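For the record, getInternalState() in the test above is a static import from PowerMock’s Whitebox class; the fully-qualified equivalent would look like this:

// reads the private int field "i" of the callingCode instance through reflection
int i = Whitebox.<Integer>getInternalState(callingCode, "i");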

The list of PowerMock features available with Mockito can be found here. Note that suppressing static blocks is not possible with TestNG right now.

You can find the sources for this article here in Maven format.


Database unit testing with DBUnit, Spring and TestNG

June 3rd, 2012 4 comments

I really like Spring, so I tend to use its features to the fullest. However, in some dark corners of its philosophy, I tend to disagree with some of its assumptions. One such assumption is the way database testing should work. In this article, I will explain how to configure your projects to make Spring Test and DBUnit play nice together in a multi-developers environment.

Context

My basic need is to be able to test some complex queries: before integration tests, I have to validate that those queries get me the right results. These are not unit tests per se, but let’s assimilate them as such. To achieve this, I have been using a framework named DBUnit for a while. Although it hasn’t been maintained since late 2010, I haven’t found a replacement yet (be my guest for proposals).

I also have some constraints:

  • I want to use TestNG for all my test classes, so that new developers don’t have to think about which test framework to use
  • I want to be able to use Spring Test, so that I can inject my test dependencies directly into the test class
  • I want to be able to see for myself the database state at the end of any of my tests, so that if something goes wrong, I can execute my own queries to discover why
  • I want every developer to have his own isolated database instance/schema

Considering the last point, our organization provides each developer with a dedicated Oracle schema for those “unit tests”.

Basic set up

Spring provides the AbstractTestNGSpringContextTests class out-of-the-box. Extending it means we can apply TestNG annotations as well as @Autowired on child classes. It also means we have access to the underlying applicationContext, but I prefer not to use it (and don’t need to in any case).

The structure of such a test would look like this:

@ContextConfiguration(locations = "classpath:persistence-beans.xml")
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Test
    public void whenXYZThenTUV() {
        ...
    }
}

Readers familiar with Spring and TestNG shouldn’t be surprised here.

Bringing in DBUnit

DbUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs. […] DbUnit has the ability to export and import your database data to and from XML datasets. Since version 2.0, DbUnit can also work with very large datasets when used in streaming mode. DbUnit can also help you to verify that your database data match an expected set of values.

DBUnit being a JUnit extension, the expected usage is to extend the provided parent class org.dbunit.DBTestCase. In my context, I have to redefine some setup and teardown operations to fit into the Spring inheritance hierarchy. Luckily, DBUnit’s developers thought about that and offer relevant documentation.

Among the different strategies available, my taste leans toward the CLEAN_INSERT and NONE operations on setup and teardown respectively. This way, I can check the database state directly if my test fails. This updates my test class like so:

@ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Autowired
    private IDatabaseTester databaseTester;

    @BeforeMethod
    protected void setUp() throws Exception {

        // Get the XML and set it on the databaseTester
        // Optional: get the DTD and set it on the databaseTester

        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.setTearDownOperation(DatabaseOperation.NONE);
        databaseTester.onSetup();
    }

    @Test
    public void whenXYZThenTUV() {
        ...
    }
}
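The “get the XML” step above could look like the following minimal sketch, using DBUnit’s FlatXmlDataSetBuilder (the dataset file name is hypothetical):

// build an IDataSet from a flat XML dataset located on the test classpath
IDataSet dataSet = new FlatXmlDataSetBuilder()
    .build(getClass().getResourceAsStream("/MyDaoTest-dataset.xml"));

databaseTester.setDataSet(dataSet);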

Per-user configuration with Spring

Of course, we need to have a specific Spring configuration file to inject the databaseTester. As an example, here is one:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

        <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
            <property name="location" value="${user.name}.database.properties" />
        </bean>

        <bean name="dataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource">
             <property name="driverClass" value="oracle.jdbc.driver" />
             <property name="username" value="${db.username}" />
             <property name="password" value="${db.password}" />
             <property name="url" value="jdbc:oracle:thin:@<server>:<port>/${db.schema}" />
        </bean>

        <bean name="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
            <constructor-arg ref="dataSource" />
        </bean>
</beans>

However, there’s more than meets the eye. Notice the databaseTester has to be fed a datasource. Since a requirement is to have a database per developer, there are basically two options: either use an in-memory database, or use the same database product as in production and provide one schema per developer. I lean toward the latter solution (when possible) since it decreases the differences between the testing environment and the production environment.

Thus, in order for each developer to use his own schema, I use Spring’s ability to resolve Java system properties at runtime: each developer is characterized by a different user.name. Then, I configure a PropertyPlaceholderConfigurer that looks for a ${user.name}.database.properties file, which will look like so:

db.username=myusername1
db.password=mypassword1
db.schema=myschema1

This lets me achieve my goal of having each developer use his own Oracle schema. If you want to use this strategy, do not forget to provide a specific database.properties for the Continuous Integration server.

Huh oh?

Finally, the whole testing chain is configured down to the database tier. Yet, when the previous test is run, everything is fine (or not), but when checking the database, it looks untouched. Strangely enough, if you load some XML dataset and assert against it during the test, it does behave accordingly: this bears all the symptoms of a transaction issue. In fact, when you look closely at Spring’s documentation, everything becomes clear: Spring’s vision is that the database should be left untouched by running tests, in complete contradiction to DBUnit’s. This is achieved by simply rolling back all changes at the end of each test by default.

In order to change this behavior, the only thing to do is annotate the test class with @TransactionConfiguration(defaultRollback = false). Note this doesn’t prevent us from specifying, on a case-by-case basis, methods that shouldn’t affect the database state, with the @Rollback annotation.
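For instance, such a per-method override could look like this minimal sketch (the test method is hypothetical):

@Test
@Rollback // rolls this method's transaction back despite the class-level defaultRollback = false
public void whenTestDataShouldNotBePersisted() {
    ...
}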

The test class becomes:

@ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
@TransactionConfiguration(defaultRollback = false)
public class MyDaoTest extends AbstractTestNGSpringContextTests {

    @Autowired
    private MyDao myDao;

    @Autowired
    private IDatabaseTester databaseTester;

    @BeforeMethod
    protected void setUp() throws Exception {

        // Get the XML and set it on the databaseTester
        // Optional: get the DTD and set it on the databaseTester

        databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
        databaseTester.setTearDownOperation(DatabaseOperation.NONE);
        databaseTester.onSetup();
    }

    @Test
    public void whenXYZThenTUV() {
        ...
    }
}

Conclusion

Though Spring’s and DBUnit’s views on database testing are opposed, Spring’s configuration versatility lets us make it fit our needs (and benefit from DI). Of course, other improvements are possible: pushing up common code into a parent test class, etc.



Arquillian on legacy servers

May 27th, 2012 No comments

In most contexts, when something doesn’t work, you just Google the error and you’re basically done. One good thing about working for organizations that lag behind technology-wise is that it’s generally more challenging and you’re bound to be creative. As for me, I’m stuck on JBoss 5.1 EAP, but that doesn’t stop me from trying to use modern approaches in software engineering. In the quality domain, one such attempt is to provide my developers with a way to test their code in-container. Since we are newcomers in the EJB3 realm, that means they will need true integration testing.

Given the stacks available at the time of this writing, I’ve decided on Arquillian, which seems the best (and only?) tool to achieve this. With this framework (as well as my faithful TestNG), I was set to test Java EE components on JBoss 5.1 EAP. This article describes how to do just that (and mentions the pitfalls I had to overcome).

Arquillian basics

Arquillian basically provides a way for developers to manage the lifecycle of an application server and deploy a Java EE artifact into it, in an automated way, integrated with your favorite testing engine (read TestNG – or JUnit if you must). Arquillian’s architecture is based on a generic engine and adapters for specific application servers. If no adapter is available for your application server, that’s tough luck. If you’re feeling playful, try searching for a WebSphere adapter… then develop one. The first difficulty was that at the time of my work, there was no JBoss EAP 5.1 adapter; I only read on the web that EAP 5.1 sits somewhere between the 5.0 GA and the 6.0 GA, so a lengthy trial-and-error process took place (now there’s at least some documentation, thanks to the Arquillian guys hearing my plea on Twitter – thanks!).

Server type and version are not enough to pick the right adapter; you’ll also need to choose how you will interact with the application server:

  • Embedded: you download all dependencies, provide a configuration and presto, Arquillian magically creates a running embedded application server. Pro: you don’t have to install the app server on each of your developers’ computers; cons: configuration may well be a nightmare, and there may be huge differences with the final platform, defeating the purpose of integration testing.
  • Remote: use an existing and running application server. Choose wisely between a dedicated server for all devs (who will share the same configuration, but also the same resources) and one app server per dev (who said maintenance nightmare?).
  • Managed: same as remote, but tests will start and stop the server. Better suited for one app server per dev.
  • Local: I’ve found traces of this one, even though it seems to be buried deep. It appears to be the same as managed, but it uses the filesystem instead of the wire to do its job (starting/stopping by calling scripts, deploying by copying archives to the expected location, …), hence the term local.

Each adapter can be configured according to your needs through the arquillian.xml file. For Maven users, it goes at the root of src/test/resources. Don’t worry, each adapter is aptly documented regarding configuration parameters. Mine looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<arquillian xmlns="http://jboss.org/schema/arquillian"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
    <container qualifier="jboss" default="true">
        <configuration>
            <property name="javaHome">${jdk.home}</property>
            <property name="jbossHome">${jboss.home}</property>
            <property name="httpPort">8080</property>
            <property name="rmiPort">1099</property>
            <property name="javaVmArguments">-Xmx512m -Xmx768m -XX:MaxPermSize=256m
                -Djava.net.preferIPv4Stack=true
                -Djava.util.logging.manager=org.jboss.logmanager.LogManager
                -Djava.endorsed.dirs=${jboss.home}/lib/endorsed
                -Djboss.boot.server.log.dir=${jboss.home}
                -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
            </property>
        </configuration>
    </container>
</arquillian>

Finally, the test class itself has to extend org.jboss.arquillian.testng.Arquillian (or, if you stick with JUnit, use @RunWith). The following snippet shows an example of a TestNG test class using Arquillian:

public class WebIT extends Arquillian {

    @Test
    public void testMethod() { ... }

    @Deployment
    public static WebArchive createDeployment() { ... }
}

Creating the archive

Arquillian’s tactic regarding in-container testing is to deploy only what you need. As such, it comes with a tool named ShrinkWrap that lets you package only the relevant parts of what is to be tested.

For example, the following snippet creates an archive named web-test.war, bundling the ListPersonServlet class and the web deployment descriptor. More advanced uses would include libraries.

WebArchive archive = ShrinkWrap.create(WebArchive.class, "web-test.war")
    .addClass(ListPersonServlet.class)
    .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"));

The Arquillian framework will look for a static method, annotated with @Deployment that returns such an archive, in order to deploy it to the application server (through the adapter).

Tip: use the toString() method on the archive to see what it contains if you have errors.
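For instance, a quick way to dump the archive content (the boolean parameter asks ShrinkWrap for a verbose, recursive listing):

// prints the archive's internal structure to the console
System.out.println(archive.toString(true));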

Icing on the Maven cake

Even if you’re no Maven fan, I think this point deserves some attention. IMHO, integration tests shouldn’t be part of the standard build process since they are inherently more fragile: a simple configuration error on the application server and your build fails for no real reason.

Thus, I would recommend putting your tests in a separate module that is not called by its parent.

Besides, I use the Maven Failsafe plugin instead of the Surefire one, so that I get a clean separation of unit tests and integration tests even across separate modules, and thus different metrics for both (in Sonar, for example). To get that separation out-of-the-box, just ensure your integration test classes end with IT (as in Integration Test). Note that integration tests are bound to the integration-test lifecycle phase, just before install.
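For reference, a minimal sketch of the Failsafe plugin declaration in the integration-test module’s POM (the version number is only indicative):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.12</version>
    <executions>
        <execution>
            <goals>
                <!-- runs *IT classes during integration-test, then checks results during verify -->
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>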

Here comes JBoss EAP

Now comes the hard part: actual deployment on JBoss 5.1 EAP. This stage will probably last a long, long time and involve a lot of back and forth. In order to be as productive as possible, the first thing to do is to locate the logs. If you kept the above configuration, they are in the JBoss <profile>/logs directory. If you find the logs too verbose for your taste (I did), just change the root priority from ${jboss.server.log.threshold} to INFO in the <profile>/conf/jboss-log4j.xml file.
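The change itself is a one-liner in that file; roughly (a sketch of the root category from memory):

<root>
    <!-- was: <priority value="${jboss.server.log.threshold}"/> -->
    <priority value="INFO"/>
    ...
</root>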

On my machine, I kept getting JVM port binding errors, so I changed the guilty connector’s port in the server.xml file (FYI, it was the AJP connector, and JBoss stopped complaining when I changed 8009 to 8010).

One last step for EAP is to disable security. Since it’s enterprise-oriented, security is enabled by default in EAP; in a testing context, I couldn’t care less about authenticating to deploy my artifacts. Open the <profile>/deploy/profileservice-jboss-beans.xml configuration file and search for “comment this list to disable auth checks for the profileservice”. Do as you’re told :-) Alternatively, you could walk the hard path and configure authentication: detailed instructions are available on the adapter page.

Getting things done, on your own

Until this point, we more or less followed instructions found here and there. Now we have to get our hands dirty and use some of our gray matter.

  • The first thing to address is a strange java.lang.IllegalStateException when launching a test. Strangely enough, this is caused by Arquillian missing some libraries that have to be ShrinkWrapped along with your real code. In my case, I had to add the following snippet to my web archive:
    MavenDependencyResolver resolver = DependencyResolvers.use(MavenDependencyResolver.class);
    archive.addAsLibraries(
        resolver.artifact("org.jboss.arquillian.testng:arquillian-testng-container:1.0.0.Final")
                .artifact("org.jboss.arquillian.testng:arquillian-testng-core:1.0.0.Final")
                .artifact("org.testng:testng:6.5.2").resolveAsFiles());
  • The next error is much more vicious and comes from Arquillian’s inner workings.
    java.util.NoSuchElementException
        at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:375)
        at java.util.LinkedHashMap$KeyIterator.next(LinkedHashMap.java:384)
        at org.jboss.arquillian.container.test.spi.util.TestRunners.getTestRunner(TestRunners.java:60)

    When you look at the code, you see Arquillian uses the Service Provider feature (for more info, see here). But much to my chagrin, it doesn’t configure which implementation the org.jboss.arquillian.container.test.spi.TestRunner service should use and thus fails miserably. We have to create such a configuration manually: the file should only contain org.jboss.arquillian.testng.container.TestNGTestRunner (for such is the power of the Service Provider) – the concrete file layout is sketched right after this list. Don’t forget to package it along with the archive to have any chance at success:

    archive.addAsResource(new File("src/test/resources/META-INF/services"), "META-INF/services");
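For the record, the service provider configuration is nothing more than a plain text file named after the SPI interface and containing just the implementation’s fully qualified name:

src/test/resources/META-INF/services/org.jboss.arquillian.container.test.spi.TestRunner

org.jboss.arquillian.testng.container.TestNGTestRunner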

Update [28th May]: the two points above can be abandoned if you use the correct Maven dependency (the Arquillian BOM). Check the POMs in the attached project.

In the end, everything should work fine, except for a final log message in the test:

Shutting down server: default
Writing server thread dump to /path/to/jboss-as/server/default/log/threadDump.log

This means Arquillian cannot shut down the server because it can’t authenticate. This would have no consequence whatsoever, but it marks the test as failed and thus needs to be corrected. Edit the <profile>/conf/props/jmx-console-users.properties file and uncomment the admin = admin line.

Conclusion

The full previous steps took me about half a week’s work spread over a week (it seems I’m more productive when not working full-time, as my brain launches some background threads to solve problems). This was not easy, but I’m proud to have beaten the damn thing. Moreover, a couple of proprietary configuration settings were omitted from this article. In conclusion, Arquillian seems to be a nice in-container testing framework, but it still needs some polishing around the corners: I think using TestNG may be the culprit here.

You can find the sources for this article here, in Eclipse/Maven format.



TestNG, FEST et CDI

October 2nd, 2011 No comments

No, those are not ingredients for a new fruit salad recipe. These are just the components I used in one of my pet projects: it’s a Swing application in which I wanted to try out CDI. I ended up with Weld SE, which is the CDI reference implementation from JBoss.

The application was tested alright with TestNG (regular readers know about my preference for TestNG over JUnit), save for the Swing GUI. A little browsing on the Net convinced me the FEST Swing testing framework was the right solution:

  • It offers a DSL for end-to-end functional testing from GUI.
  • It has a utility class that checks that Swing component methods are called on the Event Dispatch Thread (EDT).
  • It can check calls to System.exit().
  • It has a bunch of verify methods such as requireEnabled(), requireVisible(), requireValue() and many others that depend on the component’s type.

The challenge was to make TestNG, FEST and CDI work together. Luckily, FEST already integrates with TestNG in the form of the FestSwingTestngTestCase class. This utility class checks for point 2 above (the EDT usage rule) and creates a “robot” that can simulate events on the GUI.

Fixture

FEST manages GUI interaction through fixtures, wrappers around components that drive the tests. So, just declare your fixture as a test class attribute that will be set in the setup sequence.

Launch Weld in tests

FEST offers an initialization hook in the form of the onSetUp() method, called by FestSwingTestngTestCase. In order to launch Weld at test setup, use the following implementation:

protected void onSetUp() {

    // container is of type WeldContainer; it should be declared as a class attribute
    // so that it can be cleanly shut down in the tear-down step
    container = new Weld().initialize();

    MainFrame frame = GuiActionRunner.execute(new GuiQuery<MainFrame>() {

        @Override
        protected MainFrame executeInEDT() throws Throwable {

            return container.instance().select(MainFrame.class).get();
        }
    });

    // window is a test class attribute
    window = new FrameFixture(robot(), frame);

    window.show();
}

This will display the window fixture that will wrap the application’s main window.

Generate screenshots on failure

For GUI testing, test failure messages are not enough. Fortunately, FEST lets us generate screenshots when a test fails. Just annotate the test class:

@GUITest
@Listeners(org.fest.swing.testng.listener.ScreenshotOnFailureListener.class)
public abstract class MainFrameTestCase extends FestSwingTestngTestCase {
	...
}

Best practices

Clicking a button on the frame fixture is just a matter of calling the click() method on the fixture, passing the button label as a parameter. During development, however, I realized it would be better to create a method for each button, so that tests are easier for developers to read.
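Such helper methods could look like the following minimal sketch (the component names are hypothetical; window is the FrameFixture from the setup above):

private void enterFirstName(String firstName) {
    // looks the text field up by its component name and types the value into it
    window.textBox("firstNameField").enterText(firstName);
}

private void selectCivility(Civility civility) {
    // selects the matching item, assuming the combo box displays the enum's toString()
    window.comboBox("civilityCombo").selectItem(civility.toString());
}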

Expanding this best practice can lead to functional-like testing:

selectCivility(Civility.MISTER);
enterFirstName("Nicolas");
enterLastName("Frankel");

Conclusion

I was very wary at first of testing the CDI-wired GUI: I thought it would be hard and too time-consuming given the expected benefits. I was wrong. Uniting TestNG and CDI is a breeze thanks to FEST. Having written a bunch of tests, I uncovered some nasty bugs. Life is good!


Top Eclipse plugins I wouldn’t go without

August 8th, 2009 5 comments

Using an IDE to develop is a necessity today, but any IDE worth its salt can be enhanced with additional features. NetBeans, IntelliJ IDEA and Eclipse all have this kind of mechanism. In this article, I will mention the Eclipse plugins I couldn’t develop without and advocate for each one.

m2eclipse

Maven has been my build tool of choice for about two years. It adds some very nice features compared to Ant, mainly dependency management, inheritance and variable filtering. Configuring the POM gets kind of hard once you’ve reached a fairly high number of lines. The Sonatype m2eclipse plugin (formerly hosted by Codehaus) gives you a tab-oriented view of every aspect of the POM:

  • An Overview tab neatly grouped into: Artifact, Parent, Properties, Modules, Project, Organization, SCM, Issue Management and Continuous Integration,
  • (screenshot: the m2eclipse Overview tab)

Read more…


The unit test war: JUnit vs TestNG

June 1st, 2008 5 comments

What is unit testing anyway?

If you’re in contact with Java development, as a developer, an architect or a project manager, you can’t have missed unit testing, as every quality method enforces its use. From Wikipedia:

“Unit testing is a test (often automated) that validates that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. Ideally, each test case is independent from the others.”

Unit tests have much added value beyond validating that your classes work. If you have to change classes that you didn’t develop yourself, you should be glad to have them… because if they fail, you are instantly alerted to the regression: you won’t discover the nasty NullPointerException you just coded in production!

Before the advent of testing frameworks (more on that later), the usual way of testing was: Read more…
