Archive

Posts Tagged ‘testng’
  • PowerMock, features and use-cases

    Even if you don’t like it, your job sometimes requires you to maintain legacy applications. It has happened to me (fortunately rarely), and the first thing I look for before writing so much as a single character in a legacy codebase is a unit testing harness. Most of the time, I tend to focus on the code coverage of the part of the application I need to change, and try to improve it as much as possible, even when unit tests are completely lacking.

    Real trouble happens when the design isn’t state-of-the-art. Unit testing requires mocking, which only works well when dependency injection (as well as initialization methods) makes classes unit-testable: this isn’t the case with some legacy applications, where there is a mess of private and static methods or initialization code in constructors, not to mention static blocks.

    For example, consider the following code:

    public class ExampleUtils {
        public static void doSomethingUseful() { ... }
    }
    
    public class CallingCode {
        public void codeThatShouldBeTestedInIsolation() {
            ExampleUtils.doSomethingUseful();
            ...
        }
    }
    

    It’s impossible to properly unit test the codeThatShouldBeTestedInIsolation() method, since it depends on an unmockable static method of another class. Of course, there are proven techniques to overcome this obstacle. One such technique is to create a “proxy” class that wraps the call to the static method, and to inject this class into the calling class, like so:

    public class UsefulProxy {
        public void doSomethingUsefulByProxy() {
            ExampleUtils.doSomethingUseful();
        }
    }
    
    public class CallingCodeImproved {
        private UsefulProxy proxy;
        public void codeThatShouldBeTestedInIsolation() {
            proxy.doSomethingUsefulByProxy();
            ...
        }
    }
    

    Now I can inject a mock UsefulProxy and finally test my method in isolation. There are several drawbacks to ponder, though:

    • While writing this little workaround, you didn't produce any tests: you only made tests possible. At this point, you've achieved nothing yet.
    • You changed the code before testing it and took the risk of breaking its behavior! Granted, the example doesn't imply any complexity, but such is not always the case in real-life applications.
    • You made the code more testable, but only at the cost of an additional layer of complexity.

    For all these reasons, I would recommend this approach only as a last resort. Even worse, some designs are completely closed to simple refactoring, such as the following example, which features a static initializer:

    public class ClassWithStaticInitializer {
        static { ... }
    }
    

    As soon as the ClassWithStaticInitializer class is loaded by the class loader, the static block is executed, for good or ill (in the light of unit testing, it will probably be the latter).

    My mock framework of choice is Mockito. Its designers made sure features such as static method mocking weren’t available, and I thank them for that: it means that if I cannot use Mockito, it’s a design smell. Unfortunately, as we’ve seen previously, tackling legacy code may require such features. That’s when PowerMock enters the picture (and only then - using PowerMock in a standard development process is also a sure sign the design is fishy).

    With PowerMock, you can leave the initial code untouched and still test it, which lets you begin changing the code with confidence. Here’s the test code for the first legacy snippet, using Mockito and TestNG:

    import static org.powermock.api.mockito.PowerMockito.mockStatic;
    import static org.powermock.reflect.Whitebox.getInternalState;
    import static org.testng.Assert.assertEquals;
    
    @PrepareForTest(ExampleUtils.class)
    public class CallingCodeTest {
    
        private CallingCode callingCode;
    
        @BeforeMethod
        protected void setUp() {
            mockStatic(ExampleUtils.class);
            callingCode = new CallingCode();
        }
    
        @ObjectFactory
        public IObjectFactory getObjectFactory() {
            return new PowerMockObjectFactory();
        }
    
        @Test
        public void callingMethodShouldntRaiseException() {
            callingCode.codeThatShouldBeTestedInIsolation();
            assertEquals(getInternalState(callingCode, "i"), 1);
        }
    }
    

    There isn’t much to do, namely:

    • Annotate test classes (or individual test methods) with @PrepareForTest, which references classes or whole packages. This tells PowerMock to allow for byte-code manipulation of those classes, the effective instruction being done in the following step.
    • Mock the desired methods with the available palette of mockXXX() methods.
    • Provide the object factory in a method that returns an IObjectFactory and is annotated with @ObjectFactory.

    Also note that with the help of the Whitebox class, we can access the class's internal state (i.e. private variables). Even though this is bad, the alternative - taking chances with the legacy code without a test harness - is worse: remember, our goal is to lessen the chance of introducing new bugs.

    The list of PowerMock features available for Mockito can be found here. Note that suppressing static blocks is not possible with TestNG right now.

    You can find the sources for this article here in Maven format.

    Categories: Java Tags: mock, powermock, testng, unit testing
  • Database unit testing with DBUnit, Spring and TestNG

    I really like Spring, so I tend to use its features to the fullest. However, in some dark corners of its philosophy, I disagree with some of its assumptions. One such assumption is the way database testing should work. In this article, I will explain how to configure your projects so that Spring Test and DBUnit play nice together in a multi-developer environment.

    Context

    My basic need is to be able to test some complex queries: before integration tests, I have to validate that those queries get me the right results. These are not unit tests per se, but let’s assimilate them as such. To achieve this, I have been using for a while a framework named DBUnit. Although it hasn’t been maintained since late 2010, I haven’t yet found a replacement (proposals are welcome).

    I also have some constraints:

    • I want to use TestNG for all my test classes, so that new developers don't have to think about which test framework to use
    • I want to be able to use Spring Test, so that I can inject test dependencies directly into the test class
    • I want to be able to see for myself the database state at the end of any of my tests, so that if something goes wrong, I can execute my own queries to discover why
    • I want every developer to have his own isolated database instance/schema

    Considering the last point, our organization lets us benefit from a single Oracle schema per developer for those “unit tests”.

    Basic set up

    Spring provides the AbstractTestNGSpringContextTests class out-of-the-box. In turn, this means we can apply TestNG annotations as well as @Autowired on child classes. It also means we have access to the underlying applicationContext, but I prefer not to use it (and don’t need to in any case).

    The structure of such a test would look like this:

    @ContextConfiguration(locations = "classpath:persistence-beans.xml")
    public class MyDaoTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private MyDao myDao;
    
        @Test
        public void whenXYZThenTUV() {
            ...
        }
    }
    

    Readers familiar with Spring and TestNG shouldn’t be surprised here.

    Bringing in DBUnit

    DbUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs. [...] DbUnit has the ability to export and import your database data to and from XML datasets. Since version 2.0, DbUnit can also work with very large datasets when used in streaming mode. DbUnit can also help you to verify that your database data match an expected set of values.

    DBUnit being a JUnit extension, you are expected to extend the provided parent class, org.dbunit.DBTestCase. In my context, I have to redefine some setup and teardown operations to use Spring's inheritance hierarchy instead. Luckily, DBUnit's developers thought about that and offer relevant documentation.

    Among the different strategies available, my tastes tend toward the CLEAN_INSERT and NONE operations respectively on setup and teardown. This way, I can check the database state directly if my test fails. This updates my test class like so:

    @ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
    public class MyDaoTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private MyDao myDao;
    
        @Autowired
        private IDatabaseTester databaseTester;
    
        @BeforeMethod
        protected void setUp() throws Exception {
            // Get the XML and set it on the databaseTester
            // Optional: get the DTD and set it on the databaseTester
            databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
            databaseTester.setTearDownOperation(DatabaseOperation.NONE);
            databaseTester.onSetup();
        }
    
        @Test
        public void whenXYZThenTUV() {
            ...
        }
    }
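
    The two comments in the setUp() method above elide the actual dataset loading. A minimal sketch of that step, assuming a flat XML dataset named my-dataset.xml at the root of the test classpath (the file name is a placeholder):

```java
// Sketch: load a DBUnit flat XML dataset and hand it to the injected
// IDatabaseTester before calling onSetup(). FlatXmlDataSetBuilder is
// part of DBUnit 2.4+; my-dataset.xml is a hypothetical file name.
IDataSet dataSet = new FlatXmlDataSetBuilder()
        .build(getClass().getResourceAsStream("/my-dataset.xml"));
databaseTester.setDataSet(dataSet);
```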
    

    Per-user configuration with Spring

    Of course, we need to have a specific Spring configuration file to inject the databaseTester. As an example, here is one:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
            <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
                <property name="location" value="${user.name}.database.properties" />
            </bean>
            <bean name="dataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource">
                 <property name="driverClassName" value="oracle.jdbc.driver.OracleDriver" />
                 <property name="username" value="${db.username}" />
                 <property name="password" value="${db.password}" />
                 <property name="url" value="jdbc:oracle:thin:@<server>:<port>/${db.schema}" />
            </bean>
            <bean name="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
                <constructor-arg ref="dataSource" />
            </bean>
    </beans>
    

    However, there’s more than meets the eye. Notice the databaseTester has to be fed a datasource. Since a requirement is to have a database per developer, there are basically two options: either use an in-memory database, or use the same database product as in production and provide one schema per developer. I tend toward the latter solution (when possible), since it decreases the differences between the testing environment and the production environment.

    Thus, in order for each developer to use his own schema, I use Spring’s ability to resolve Java system properties at runtime: each developer is identified by a different user.name. Then, I configure a PropertyPlaceholderConfigurer that looks for a ${user.name}.database.properties file, which will look like so:

    db.username=myusername1
    db.password=mypassword1
    db.schema=myschema1
    

    This lets me achieve my goal of each developer using his own Oracle schema. If you want to use this strategy, do not forget to provide a specific database.properties file for the Continuous Integration server.
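
    For illustration, the substitution Spring performs here can be mimicked in a few lines of plain Java. This is a simplified sketch of what a PropertyPlaceholderConfigurer does with the ${user.name} placeholder, not Spring's actual implementation:

```java
import java.util.Properties;

public class PlaceholderDemo {

    // Resolve ${...} placeholders against a set of properties,
    // mimicking (in a much simplified way) Spring's placeholder resolution.
    static String resolve(String template, Properties props) {
        StringBuilder sb = new StringBuilder(template);
        int start;
        while ((start = sb.indexOf("${")) >= 0) {
            int end = sb.indexOf("}", start);
            String key = sb.substring(start + 2, end);
            sb.replace(start, end + 1, props.getProperty(key, ""));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // Stand-in for the real user.name system property
        props.setProperty("user.name", "jdoe");
        // Each developer thus gets his own properties file name
        System.out.println(resolve("${user.name}.database.properties", props));
        // prints jdoe.database.properties
    }
}
```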

    Uh oh?

    Finally, the whole testing chain is configured up to the database tier. Yet, when the previous test is run, everything seems fine (or not), but when checking the database, it looks untouched. Strangely enough, if you load some XML dataset and assert it during the test, it does behave accordingly: this bears all the symptoms of a transaction issue. In fact, when you look closely at Spring’s documentation, everything becomes clear. Spring’s vision is that the database should be left untouched by running tests, in complete contradiction to DBUnit’s. It’s achieved simply by rolling back all changes at the end of the test by default.

    In order to change this behavior, the only thing to do is annotate the test class with @TransactionConfiguration(defaultRollback=false). Note this doesn’t prevent us from marking methods that shouldn’t affect the database state on a case-by-case basis with the @Rollback annotation.

    The test class becomes:

    @ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
    @TransactionConfiguration(defaultRollback=false)
    public class MyDaoTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private MyDao myDao;
    
        @Autowired
        private IDatabaseTester databaseTester;
    
        @BeforeMethod
        protected void setUp() throws Exception {
            // Get the XML and set it on the databaseTester
            // Optional: get the DTD and set it on the databaseTester
            databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
            databaseTester.setTearDownOperation(DatabaseOperation.NONE);
            databaseTester.onSetup();
        }
    
        @Test
        public void whenXYZThenTUV() {
            ...
        }
    }
    

    Conclusion

    Though Spring's and DBUnit's views on database testing are opposed, Spring’s configuration versatility lets us make it fit our needs (and benefit from DI). Of course, other improvements are possible: pushing up common code into a parent test class, etc.

    Categories: Java Tags: dbunit, spring, test, testng
  • Arquillian on legacy servers

    In most contexts, when something doesn’t work, you just Google the error and you’re basically done. One good thing about working for organizations that lag behind technology-wise is that it’s generally more challenging, and you’re bound to be creative. Me, I’m stuck on JBoss 5.1 EAP, but that doesn’t stop me from trying to use modern approaches to software engineering. In the quality domain, one such attempt is to provide my developers a way to test their code in-container. Since we are newcomers in the EJB3 realm, that means they will need true integration testing.

    Given the stacks available at the time of this writing, I’ve settled on Arquillian, which seems the best (and only?) tool to achieve this. With this framework (as well as my faithful TestNG), I was set to test Java EE components on JBoss 5.1 EAP. This article describes how to do just that (and mentions the pitfalls I had to overcome).

    Arquillian basics

    Arquillian basically provides a way for developers to manage the lifecycle of an application server and deploy a JavaEE artifact into it, in an automated way integrated with your favorite testing engine (read TestNG - or JUnit if you must). Arquillian's architecture is based on a generic engine plus adapters for specific application servers. If no adapter is available for your application server, that’s tough luck. If you’re feeling playful, try searching for a WebSphere adapter… then develop one. The first difficulty was that at the time of my work, there was no JBoss EAP 5.1 adapter, but I read on the web that it sits between the 5.0 GA and the 6.0 GA ones: a lengthy trial-and-error process took place (now there’s at least some documentation, thanks to the Arquillian guys hearing my plea on Twitter - thanks!).

    Server type and version are not enough to pick the right adapter; you’ll also need to choose how you will interact with the application server:

    • Embedded: you download all dependencies, provide a configuration and presto, Arquillian magically creates a running embedded application server. Pro: you don’t have to install the app server on each of your devs' computers; cons: configuration may well be a nightmare, and there may be huge differences with the final platform, defeating the purpose of integration testing.
    • Remote: use an existing, running application server. Choose wisely between a dedicated server for all devs (who will share the same configuration, but also the same resources) and one app server per dev (who said maintenance nightmare?).
    • Managed: same as remote, but tests start and stop the server. Better suited to one app server per dev.
    • Local: I’ve found traces of this one, even though it seems to be buried deep. It seems to be the same as managed, but it uses the filesystem instead of the wire to do its job (starting/stopping by calling scripts, deploying by copying archives to the expected location, …), hence the term local.

    Each adapter can be configured to your needs through the arquillian.xml file. For Maven users, it goes at the root of src/test/resources. Don’t worry, each adapter's configuration parameters are aptly documented. Mine looks like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <arquillian xmlns="http://jboss.org/schema/arquillian"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://jboss.org/schema/arquillian http://jboss.org/schema/arquillian/arquillian_1_0.xsd">
        <container qualifier="jboss" default="true">
            <configuration>
                <property name="javaHome">${jdk.home}</property>
                <property name="jbossHome">${jboss.home}</property>
                <property name="httpPort">8080</property>
                <property name="rmiPort">1099</property>
                <property name="javaVmArguments">-Xms512m -Xmx768m -XX:MaxPermSize=256m
                    -Djava.net.preferIPv4Stack=true
                    -Djava.util.logging.manager=org.jboss.logmanager.LogManager
                    -Djava.endorsed.dirs=${jboss.home}/lib/endorsed
                    -Djboss.boot.server.log.dir=${jboss.home}
                    -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000
                </property>
            </configuration>
        </container>
    </arquillian>
    

    Finally, the test class itself has to extend org.jboss.arquillian.testng.Arquillian (or if you stick with JUnit, use @RunWith). The following snippet shows an example of a TestNG test class using Arquillian:

    public class WebIT extends Arquillian {
    
        @Test
        public void testMethod() { ... }
    
        @Deployment
        public static WebArchive createDeployment() { ... }
    }
    

    Creating the archive

    Arquillian's tactic regarding in-container testing is to deploy only what you need. As such, it comes with a tool named ShrinkWrap that lets you package only the relevant parts of what is to be tested.

    For example, the following snippet creates an archive named web-test.war, bundling the ListPersonServlet class and the web deployment descriptor. More advanced uses would include libraries.

    WebArchive archive = ShrinkWrap.create(WebArchive.class, "web-test.war")
        .addClass(ListPersonServlet.class)
        .setWebXML(new File("src/main/webapp/WEB-INF/web.xml"));
    

    The Arquillian framework will look for a static method, annotated with @Deployment that returns such an archive, in order to deploy it to the application server (through the adapter).

    Tip: use the toString() method on the archive to see what it contains if you have errors.

    Icing on the Maven cake

    Even if you’re no Maven fan, I think this point deserves some attention. IMHO, integration tests shouldn’t be part of the build process, since they are by essence more fragile: a simple configuration error on the application server and your build fails without a real cause.

    Thus, I would recommend putting your tests in a separate module that is not called by its parent.

    Besides, I also use the Maven Failsafe plugin instead of Surefire, so that I get a clean separation of unit tests and integration tests, even with separate modules, and thus different metrics for both (in Sonar, for example). To get that separation out-of-the-box, just ensure your test class names end with IT (as in Integration Test). Note that integration tests are bound to the integration-test lifecycle phase, just before install.
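
    As a sketch, binding the Failsafe plugin in the module's POM looks like this (version omitted; the coordinates are the standard Apache ones):

```xml
<!-- Runs *IT.java classes during the integration-test phase,
     and fails the build during verify if any of them failed -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```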

    Here comes JBoss EAP

    Now comes the hard part: actual deployment on JBoss 5.1 EAP. This stage will probably last a long, long time and involve a lot of going back and forth. In order to be as productive as possible, the first thing to do is to locate the logs. If you kept the above configuration, they are in the JBoss <profile>/log directory. If you feel the logs are too verbose for your tastes (I did), just change the root priority from ${jboss.server.log.threshold} to INFO in the <profile>/conf/jboss-log4j.xml file.

    On my machine, I kept getting JVM port binding errors. Thus, I changed the offending connector port in the server.xml file (FYI, it was the AJP connector, and JBoss stopped complaining when I changed 8009 to 8010).

    One last step for EAP is to disable security. Since it’s enterprise-oriented, security is enabled by default in EAP: in a testing context, I couldn’t care less about authenticating to deploy my artifacts. Open the <profile>/deploy/profileservice-jboss-beans.xml configuration file and search for “comment this list to disable auth checks for the profileservice”. Do as you’re told :-) Alternatively, you could walk the hard path and configure authentication: detailed instructions are available on the adapter page.

    Getting things done, on your own

    Until this point, we more or less followed instructions here and there. Now, we have to sully our nails and use some of our gray matter.

    • The first thing to address is a strange java.lang.IllegalStateException when launching a test. Strangely enough, this is caused by Arquillian missing some libraries that have to be ShrinkWrapped along with your real code. In my case, I had to add the following snippet to my web archive:
    MavenDependencyResolver resolver = DependencyResolvers.use(MavenDependencyResolver.class);
    archive.addAsLibraries(
        resolver.artifact("org.jboss.arquillian.testng:arquillian-testng-container:1.0.0.Final")
                .artifact("org.jboss.arquillian.testng:arquillian-testng-core:1.0.0.Final")
                .artifact("org.testng:testng:6.5.2").resolveAsFiles());
    
    • The next error is much more vicious and comes from Arquillian's inner workings.
    java.util.NoSuchElementException
        at java.util.LinkedHashMap$LinkedHashIterator.nextEntry(LinkedHashMap.java:375)
        at java.util.LinkedHashMap$KeyIterator.next(LinkedHashMap.java:384)
        at org.jboss.arquillian.container.test.spi.util.TestRunners.getTestRunner(TestRunners.java:60)
    

    When you look at the code, you see Arquillian uses the Service Provider feature (for more info, see here). But much to my chagrin, it doesn’t configure the implementation the org.jboss.arquillian.container.test.spi.TestRunner service should use, and thus fails miserably. We have to create such a configuration manually: the file should only contain org.jboss.arquillian.testng.container.TestNGTestRunner (for such is the power of the Service Provider). Don’t forget to package it along with the archive to have any chance of success: archive.addAsResource(new File("src/test/resources/META-INF/services"), "META-INF/services");
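
    The Service Provider mechanism itself is plain JDK machinery and can be demonstrated in isolation. The following self-contained sketch builds a META-INF/services entry in a temporary directory and loads the declared implementation through java.util.ServiceLoader, much as Arquillian does for its TestRunner service (the Cloneable/ArrayList pairing is contrived, chosen only because both classes ship with the JDK):

```java
import java.io.File;
import java.io.FileWriter;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.util.ServiceLoader;

public class ServiceProviderDemo {

    // Demonstrates the Service Provider mechanism: a file named after the
    // service type, placed under META-INF/services, lists the implementation
    // class that ServiceLoader should instantiate.
    public static String loadProviderClassName() throws Exception {
        File dir = Files.createTempDirectory("spi-demo").toFile();
        File services = new File(dir, "META-INF/services");
        services.mkdirs();
        // Declare java.util.ArrayList as a provider of java.lang.Cloneable
        try (FileWriter writer = new FileWriter(new File(services, "java.lang.Cloneable"))) {
            writer.write("java.util.ArrayList");
        }
        // Expose the temporary directory as a classpath entry
        URLClassLoader loader = new URLClassLoader(new URL[] { dir.toURI().toURL() });
        for (Cloneable provider : ServiceLoader.load(Cloneable.class, loader)) {
            return provider.getClass().getName();
        }
        return null;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(loadProviderClassName());
    }
}
```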

    Update [28th May]: the two points above can be abandoned if you use the correct Maven dependency (the Arquillian BOM). Check the POMs in the attached project.

    At the end, everything should work fine, except for a final log message in the test:

    Shutting down server: default
    Writing server thread dump to /path/to/jboss-as/server/default/log/threadDump.log
    

    This means Arquillian cannot shut down the server because it can’t authenticate. This would have no consequence whatsoever, but it marks the test as failed and thus needs to be corrected. Edit the <profile>/conf/props/jmx-console-users.properties file and uncomment the admin=admin line.

    Conclusion

    The previous steps took me about half a week's work spread over a week (it seems I’m more productive when not working full-time, as my brain launches some background threads to solve problems). This was not easy, but I’m proud to have beaten the damn thing. Moreover, a couple of proprietary configuration settings were omitted from this article. In conclusion, Arquillian seems to be a nice in-container testing framework, but it seems to need polishing around some corners: I think using TestNG may be the culprit here.

    You can find the sources for this article here, in Eclipse/Maven format.

    Categories: JavaEE Tags: arquillian, integration testing, jboss, testng
  • TestNG, FEST et CDI

    No, those are not the ingredients for a new fruit salad recipe. These are just the components I used in one of my pet projects: it’s a Swing application in which I wanted to try out CDI. I ended up with Weld SE, which is the CDI reference implementation from JBoss.

    The application was tested alright with TestNG (regular readers know about my preference for TestNG over JUnit), save for the Swing GUI. A little browsing on the Net convinced me the FEST Swing testing framework was the right solution:

    • It offers a DSL for end-to-end functional testing from GUI.
    • It has a utility class that checks that Swing component methods are called on the Event Dispatch Thread (EDT).
    • It may check calls to System.exit().
    • It has a bunch of verify methods such as requireEnabled(), requireVisible(), requireValue() and many others that depend on the component’s type.

    The challenge was to make TestNG, FEST and CDI work together. Luckily, FEST already integrates with TestNG in the form of the FestSwingTestngTestCase class. This utility class checks for point 2 above (the EDT use rule) and creates a “robot” that can simulate events on the GUI.

    Fixture

    FEST manages GUI interaction through fixtures, wrappers around components that pilot tests. So, just declare your fixture as a test class attribute that will be set in the setup sequence.

    Launch Weld in tests

    FEST offers an initialization hook in the form of the onSetUp() method, called by FestSwingTestngTestCase. In order to launch Weld at test setup, use the following implementation:

    protected void onSetUp() {
    	// container is of type WeldContainer
    	// it should be declared as a class attribute in order to be cleanly shut down in the tear-down step
    	container = new Weld().initialize();
    
    	MainFrame frame = GuiActionRunner.execute(new GuiQuery<MainFrame>() {
    		@Override
    		protected MainFrame executeInEDT() throws Throwable {
    			return container.instance().select(MainFrame.class).get();
    		}
    	});
    
    	// window is a test class attribute
    	window = new FrameFixture(robot(), frame);
    	window.show();
    }
    

    This will display the window fixture that will wrap the application’s main window.

    Generate screenshots on failure

    For GUI testing, test failure messages are not enough. Fortunately, FEST lets us generate screenshots when a test fails. Just annotate the test class:

    @GUITest
    @Listeners(org.fest.swing.testng.listener.ScreenshotOnFailureListener.class)
    public abstract class MainFrameTestCase extends FestSwingTestngTestCase {
    	...
    }
    

    Best practices

    Clicking a button on the frame fixture is just a matter of calling the click() method on the fixture, passing the button label as a parameter. During development, however, I realized it would be better to create a method for each button, so that tests are easier for developers to read.

    Expanding this best practice can lead to functional-like testing:

    selectCivility(Civility.MISTER);
    enterFirstName("Nicolas");
    enterLastName("Frankel");
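
    Such helper methods are thin wrappers around the fixture. A sketch, assuming the text fields were given the component names firstName and lastName (hypothetical names; FEST looks components up by their name):

```java
// Hypothetical helpers in the test case: each wraps a fixture call
// so that tests read like a functional script.
protected void enterFirstName(String firstName) {
    window.textBox("firstName").enterText(firstName);
}

protected void enterLastName(String lastName) {
    window.textBox("lastName").enterText(lastName);
}
```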
    

    Conclusion

    I was very wary at first of testing the CDI-wired GUI. I thought it would be hard and too time-consuming given the expected benefits. I was wrong. Uniting TestNG and CDI is a breeze thanks to FEST. Having written a bunch of tests, I uncovered some nasty bugs. Life is good!

    Categories: Java Tags: CDI, fest, testng
  • Top Eclipse plugins I wouldn't go without

    Using an IDE for development is a necessity today, but any IDE worth its salt can be enhanced with additional features. NetBeans, IntelliJ IDEA and Eclipse all have this kind of mechanism. In this article, I will mention the plugins I couldn’t develop without in Eclipse, and advocate for each one.

    m2eclipse

    Maven has been my build tool of choice for about two years. It adds some very nice features compared to Ant, mainly dependency management, inheritance and variable filtering. Configuring the POM gets kind of hard once you’ve reached a fairly high number of lines. The Sonatype m2eclipse plugin (formerly hosted by Codehaus) gives you a tab-oriented view of every aspect of the POM:

    • An Overview tab neatly grouped into: Artifact, Parent, Properties, Modules, Project, Organization, SCM, Issue Management and Continuous Integration,

      m2eclipse Overview tab

    • A Dependencies tab for managing (guess what) dependencies and dependency management. For each of the former, you can even exclude dependent artifacts. This tab is mostly filled at the start of the project, since its information shouldn't change during the lifecycle,
    • A Repositories tab to deal with repositories, plugin repositories, distribution, site and relocation (an often underused feature that enables you to change an artifact's location without breaking builds, a.k.a. a level of indirection),
    • A Build tab for customizing Maven default folders (a usually very bad idea),
    • A Plugins tab to configure and/or execute Maven plugins. This is one of the most important tabs, since it's here that you will configure the maven-compiler-plugin to use Java 6, and such,
    • A Report tab to manage the <reporting> part,
    • A Profiles tab to cope with profiles,
    • A Team tab to fill out team-oriented data such as developers and contributors information,
    • The most useful and important tab (in my opinion) graphically displays the dependency tree. Even better, each scope is represented differently, and you can filter out unwanted scopes.

      m2eclipse Dependencies tab

    • Last but not least, the last tab enables you to directly edit the underlying XML.

    Moreover, m2eclipse adds a new Maven build Run configuration that is the equivalent of the command line:

    m2eclipse Run Configuration

    With this, you can easily configure the -X option (Debug Output) or the -Dmaven.test.skip option (Skip Tests).

    More importantly, you can set up the plugin to resolve dependencies from within the workspace during Eclipse builds; that is, instead of using the artifact from your repository, Eclipse will use the project’s classpath (provided it is in the desired version). It removes the need to rebuild an artifact each time it is modified because another project won’t compile after the change. It merely duplicates the legacy Eclipse dependency management.

    I advise against checking Resolve Workspace artifacts in the previous Run configuration, because it would use this workspace behaviour. In Maven builds, I want to distance myself from the IDE, using only the tool's features.

    TestNG plugin

    For those not knowing TestNG: it is very similar to JUnit 4. It was the first to bring Java 5 annotations (even before JUnit), so I adopted the tool. Now as to why I keep it even though JUnit 4 also uses annotations: it has one important feature JUnit has not. You can make a test method dependent on another, so as to build test scenarios. I know this is not pure unit testing anymore; still, I like using some scenarios in my testing packages in order to detect build breaks as early as possible.
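
    As an illustration of that feature, here is a sketch of such a scenario (hypothetical method names; dependsOnMethods is TestNG's way of ordering dependent tests):

```java
import org.testng.annotations.Test;

public class AccountScenarioTest {

    @Test
    public void createAccount() {
        // runs first
    }

    // Skipped (not failed) by TestNG if createAccount fails
    @Test(dependsOnMethods = "createAccount")
    public void depositOnAccount() {
        // runs only after createAccount succeeded
    }
}
```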

    FYI, Maven knows about TestNG and runs TestNG tests as readily as JUnit ones.

    The TestNG plugin for Eclipse works just like the integrated JUnit plugin, whether for configuration, runs, or test results.

    TestNG Plugin Run configuration

    Emma

    When developing, and if one writes tests, one should know one's test coverage of the code. I used to use the Cobertura Maven plugin: I configured it in the POM and, every once in a while, ran a simple mvn cobertura:cobertura. Unfortunately, that is not very convenient. I searched for an Eclipse plugin offering the same feature; alas, there's none. However, I found the EclEmma Eclipse plugin, which brings the same functionality. It uses Emma (an open source code coverage tool) under the hood and, though I searched thoroughly, Emma has no Maven 2 plugin. Since I value IDE code coverage during development and Maven code coverage during nightly builds (on a Continuous Integration infrastructure) equally, you're basically stuck with two different products. So?

    EclEmma line highlighting

    EclEmma provides a 4th run button (in addition to Run, Debug and External Tools) that launches the desired run configuration (JUnit, TestNG or what have you) in enhanced mode, the latter providing the code coverage feature. In the above screenshot, you can see that line 20 was not run during the test.

    Even better, the plugin provides an aggregated view of code coverage percentages. This view can be broken down at the project, source path, package and class levels.

    EclEmma statistics

    Spring IDE

    Spring needs no introduction. Whether it will be ousted by Java EE 5 dependency injection annotations remains to be seen. Plenty of projects still use Spring, and that's a fact. Still, XML configuration in Spring is very awkward in a number of cases:

    • referencing a qualified class name: it is neither easy nor productive to type, and the same is true for properties,
    • understanding complex or big configuration files,
    • finding a Spring bean in a configuration file a hundred or more lines long,
    • refactoring a class name or a property name without breaking the configuration files,
    • knowing whether a class is a Spring bean in a project and, if so, where it is used.
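    To illustrate the first points, here is a typical (hypothetical) snippet where every qualified class name and property has to be typed by hand:

    ```xml
    <bean id="customerService"
          class="com.mycompany.business.impl.DefaultCustomerService">
        <property name="customerDao" ref="customerDao"/>
    </bean>

    <bean id="customerDao"
          class="com.mycompany.persistence.jdbc.JdbcCustomerDao"/>
    ```

    A single typo in either fully qualified class name only shows up at runtime, which is exactly where an IDE's auto-completion and validation help.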

    Luckily, Spring IDE provides features that make such tasks a breeze:

    • auto-completion on XML configuration files
    • graphic view of such files

      Spring IDE Graph View

    • an internal index so that refactoring takes the XML files into account (though I suspect some bugs are hidden somewhere, for I regularly get errors),
    • an enhanced Project Explorer view to display where a bean is used.

      Spring Project Explorer View

    This whole package guarantees much better productivity when using XML configuration in Spring than a plain old XML editor. Of course, you can still use annotation-based configuration, though I'm reluctant to do so (more on that in a later post).

    In conclusion, these 4 integration plugins make me feel much more comfortable using their underlying tools. If I were in an environment where I couldn't update my Eclipse as I choose, I would seriously reconsider using these tools at all (except Maven), or would use annotation-based configuration for Spring. However exceptional your products or frameworks, they have to integrate seamlessly into your IDE(s) to really add value: think about the fate of EJB v2!

  • The unit test war : JUnit vs TestNG

    What is unit testing anyway?

    If you’re in contact with Java development, as a developer, an architect or a project manager, you can't have avoided hearing about unit testing, since every quality methodology enforces its use. From Wikipedia:

    "Unit testing is a test (often automated) that validates that individual units of source code are working properly. A unit is the smallest testable part of an application. In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is a method, which may belong to a base/super class, abstract class or derived/child class. Ideally, each test case is independent from the others."

    Unit tests add much value beyond validating that your classes work. If you have to change classes that you didn't develop yourself, you should be glad to have them… because if they fail, they will instantly alert you to the regression: you won't discover in production the nasty NullPointerException you just coded!

    Before the coming of testing frameworks (more on that later), testing was usually done:

    • either manually, possibly recorded in an external document,
    • in the main method of the class.

    Both options were dead wrong. In the first case, it was impossible to know which tests had passed, or whether the record was up-to-date. In the second case, all your classes would become executable in the production environment, which is not a very good design.
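    The second, in-main style can be sketched like this (Calculator is a made-up example):

    ```java
    public class Calculator {

        public int add(int a, int b) {
            return a + b;
        }

        // Pre-framework "unit testing": ad hoc checks living in main(),
        // shipped inside the very class they are supposed to test.
        public static void main(String[] args) {
            Calculator calc = new Calculator();
            if (calc.add(2, 3) != 5) {
                throw new AssertionError("add(2, 3) should return 5");
            }
            System.out.println("All checks passed");
        }
    }
    ```

    Every production class ends up with its own entry point and its own hand-rolled reporting, which is precisely the design smell described above.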

    The coming of JUnit

    Then came Erich Gamma (yes, the same one who wrote about design patterns!) and his magical framework, JUnit. JUnit industrializes your unit tests without messing with your code base proper. You just write a class the right way and, presto, you can launch JUnit to run the tests. The test classes can be located anywhere you please, so you don't have to ship them along with your application.
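    In JUnit 3, "the right way" means extending TestCase and naming test methods testXxx(); a minimal hypothetical sketch, assuming JUnit on the classpath:

    ```java
    import junit.framework.TestCase;

    // JUnit 3 conventions: extend TestCase, prefix test methods with "test".
    public class StringUtilsTest extends TestCase {

        public void testUpperCase() {
            assertEquals("ABC", "abc".toUpperCase());
        }

        public void testEmptyStringStaysEmpty() {
            assertEquals("", "".toUpperCase());
        }
    }
    ```

    The framework discovers the test methods by reflection and reports each one separately, with no main() method anywhere.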

    Even better, now that JUnit has gained widespread recognition, every tool proposes to plug into it:

    • most IDEs (Eclipse, NetBeans, what have you) propose integrated plugins,
    • Ant provides a JUnit task that can run from your build,
    • Even better, if you put your tests in a specific Maven folder, namely src/test/java, they are run automatically, and if a test fails, it breaks the build.

    Cons

    Well, it can’t get better, can it? Yet, when using JUnit extensively, you can't help noticing some drawbacks.

    Parameters

    First of all, JUnit doesn't support parameterized test methods. For example, let's say you want to test the following generic method:

    public <T extends Number> T add(T p1, T p2) {...}
    

    In the JUnit framework, you would have to implement at least two test methods, one for integers and one for decimal numbers, or better, one for each Number subtype. The more rational approach would be to code a single test method and call it many times, each time passing a different set of parameters.
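    This is precisely what TestNG (discussed below) offers with data providers; a hedged sketch, assuming TestNG on the classpath (AddTest and the MathUtils.add() call are made-up names):

    ```java
    import org.testng.Assert;
    import org.testng.annotations.DataProvider;
    import org.testng.annotations.Test;

    public class AddTest {

        // Each row is one parameter set: p1, p2 and the expected sum.
        @DataProvider(name = "numbers")
        public Object[][] numbers() {
            return new Object[][] {
                { 1.0, 2.0, 3.0 },
                { 1.5, 2.5, 4.0 },
                { -1.0, 1.0, 0.0 },
            };
        }

        // A single test method, run once per row of the provider.
        @Test(dataProvider = "numbers")
        public void addShouldSumItsOperands(double p1, double p2, double expected) {
            Assert.assertEquals(MathUtils.add(p1, p2), expected, 0.0001);
        }
    }
    ```

    Adding a new case is one new row, not one new test method.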

    Grouping

    JUnit test classes can be grouped into packages like any regular classes. But I would like to group my tests into a nicely organized hierarchy to know precisely where my tests are failing. For example:

    • a functional group,
    • a technical group
    • and a gui group.

    JUnit doesn’t provide an answer.
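    For comparison, TestNG (discussed below) answers this with test groups; a sketch with made-up test names, assuming TestNG on the classpath:

    ```java
    import org.testng.annotations.Test;

    public class CheckoutTest {

        // Tagged so that suites can include or exclude whole groups.
        @Test(groups = { "functional" })
        public void orderTotalIncludesShipping() {
        }

        @Test(groups = { "gui" })
        public void payButtonIsDisabledWhileProcessing() {
        }
    }
    ```

    Groups cut across packages and classes, and the suite file decides which ones to run, so a failing report tells you immediately whether the functional, technical or gui layer is broken.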

    Time cost of running tests

    Followers of Agile methodologies preach that builds, including unit tests, should run in less than 10 minutes. I'm not sure that's the right limit, but I agree that a build should be as fast as possible.

    What about failed tests? In my opinion, if your build fails because of failed tests, it should fail as early as possible. What's the point of running more tests if the build is fated to fail? That's only a waste of time.

    More precisely, I don't want my framework to run all my database tests if a test method informs me the database can't be accessed, whatever the reason. A nice feature would be to have such a dependency mechanism natively in the testing framework.

    Where do we go from here?

    I recently discovered another testing framework, named TestNG: its authors freely acknowledge that their framework is directly inspired by JUnit. It provides every functionality of JUnit and solves all the aforementioned problems.

    Moreover:

    • the Java 5 version of TestNG uses exactly the same annotation names as JUnit,
    • if you want to keep your JUnit tests, TestNG can run JUnit tests,
    • if you don't want to keep them, specific converter classes make the migration a breeze,
    • there are Eclipse and IDEA plugins available,
    • TestNG tests are run natively by Maven.
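    For instance, the database-dependency wish expressed earlier maps directly onto TestNG's group dependencies; a sketch with hypothetical names:

    ```java
    import org.testng.annotations.Test;

    public class DatabaseTest {

        // A cheap connectivity check, placed in its own group.
        @Test(groups = { "db-connect" })
        public void databaseIsReachable() {
            // open and close a connection here
        }

        // Skipped (not failed) if anything in "db-connect" failed,
        // so a dead database doesn't trigger a cascade of failures.
        @Test(dependsOnGroups = { "db-connect" })
        public void insertedCustomerCanBeReadBack() {
            // real database work here
        }
    }
    ```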

    In conclusion, I have nothing against JUnit; on the contrary, it did a marvelous job of making unit testing easy, and we developers owe its authors a great deal. But it is not a finished product: it can be enhanced (see all the extensions people have written). TestNG is such an enhancement, and I can't recommend its use enough.

    Categories: Java Tags: junit, testng, unit test