Archive

Posts Tagged ‘unit testing’
  • Should tests be ordered or not?

    Everything ordered and tidy

    Most of our day-to-day job is learned through mentorship and experience and not based upon scientific research. Once a dogma has permeated a significant minority of practitioners, it becomes very hard to challenge it.

    Yet, in this post, I’ll attempt not only to challenge the dogma that tests must never be ordered, but to prove that sometimes they should be, through different use-cases.

    Your tests shall not be ordered (shall they?)

    Some of my conference talks are more or less related to testing, and I never fail to point out that TestNG is superior to JUnit if only because it allows for test method ordering. At that point, I’m regularly asked at the end of the talk why method ordering matters. It’s a widespread belief that tests shouldn’t be ordered. Here are some samples found here and there:

    Of course, well-written test code would not assume any order, but some do.
    --JUnit Wiki - Test execution order
    Each test runs in its own test fixture to isolate tests from the changes made by other tests. That is, tests don't share the state of objects in the test fixture. Because the tests are isolated, they can be run in any order.
    --JUnit FAQ - How do I use a test fixture?
    You’ve definitely taken a wrong turn if you have to run your tests in a specific order [...]
    --Writing Great Unit Tests: Best and Worst Practices
    Always Write Isolated Test Cases
    The order of execution has to be independent between test cases. This gives you the chance to rearrange the test cases in clusters (e.g. short-, long-running) and retest single test cases.
    --Top 12 Selected Unit Testing Best Practices

    And so it goes on, ad nauseam.

    In most cases, this makes perfect sense. If I’m testing an add(int, int) method, there’s no reason why one test case should run before another. However, this is hardly a one-size-fits-all rule. The following use-cases take advantage of test ordering.

    Tests should fail for a single reason

    Let’s start with a simple example: the code consists of a controller that stores a list of x Foo entities in the HTTP request under the key bar.

    The naive approach

    The first approach would be to create a test method that asserts the following:

    1. a value is stored under the key bar in the request
    2. the value is of type List
    3. the list is not empty
    4. the list has size x
    5. the list contains no null entities
    6. the list contains only Foo entities

    Using AssertJ, the code looks like the following:

    {% highlight java %}
    // 1: asserts can be chained through the API
    // 2: AssertJ features can make the code less verbose
    @Test
    public void should_store_list_of_x_Foo_in_request_under_bar_key() {
        controller.doStuff();
        Object key = request.getAttribute("bar");
        assertThat(key).isNotNull();                // #1
        assertThat(key).isInstanceOf(List.class);   // #2
        List list = (List) key;
        assertThat(list).isNotEmpty();              // #3
        assertThat(list).hasSize(x);                // #4
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();             // #5
            assertThat(it).isInstanceOf(Foo.class); // #6
        });
    }
    {% endhighlight %}

    If this test method fails, the reason can lie in any of the previous steps. A cursory glance at the failure report is not enough to tell exactly which one.

    Single tests result

    To know that, one has to analyze the stack trace then the source code.

    java.lang.AssertionError: 
    Expecting actual not to be null
    
    	at ControllerTest.should_store_list_of_x_Foo_in_request_under_bar_key(ControllerTest.java:31)
    

    A test method per assertion

    An alternative could be to refactor each assertion into its own test method:

    {% highlight java %}
    @Test
    public void bar_should_not_be_null() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isNotNull();
    }

    @Test
    public void bar_should_be_of_type_list() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isInstanceOf(List.class);
    }

    @Test
    public void list_should_not_be_empty() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).isNotEmpty();
    }

    @Test
    public void list_should_be_of_size_x() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).hasSize(x);
    }

    @Test
    public void instances_should_be_of_type_foo() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();
            assertThat(it).isInstanceOf(Foo.class);
        });
    }
    {% endhighlight %}

    Now, every failing test is correctly displayed. But if the bar attribute is not found in the request, every test will still run and still fail, whereas they should merely be skipped.

    Unordered tests results

    Even if the waste is small, it still takes time to run unnecessary tests. Worse, it’s a waste of time to analyze the cause of each failure.

    A private method per assertion

    It seems ordering the tests makes sense. But ordering is bad, right? Let’s try to abide by the rule, by having a single test calling private methods:

    {% highlight java %}
    @Test
    public void should_store_list_of_x_Foo_in_request_under_bar_key() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        bar_should_not_be_null(bar);
        bar_should_be_of_type_list(bar);
        List<?> list = (List) bar;
        list_should_not_be_empty(list);
        list_should_be_of_size_x(list);
        instances_should_be_of_type_foo(list);
    }

    private void bar_should_not_be_null(Object bar) {
        assertThat(bar).isNotNull();
    }

    private void bar_should_be_of_type_list(Object bar) {
        assertThat(bar).isInstanceOf(List.class);
    }

    private void list_should_not_be_empty(List<?> list) {
        assertThat(list).isNotEmpty();
    }

    private void list_should_be_of_size_x(List<?> list) {
        assertThat(list).hasSize(x);
    }

    private void instances_should_be_of_type_foo(List<?> list) {
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();
            assertThat(it).isInstanceOf(Foo.class);
        });
    }
    {% endhighlight %}

    Unfortunately, it’s back to square one: it’s not possible to tell at a glance in which step the test failed.

    Single test result

    At least the stack trace conveys a little more information:

    java.lang.AssertionError: 
    Expecting actual not to be null
    
    	at ControllerTest.bar_should_not_be_null(ControllerTest.java:40)
    	at ControllerTest.should_store_list_of_x_Foo_in_request_under_bar_key(ControllerTest.java:31)
    

    How can one skip unnecessary tests and easily know the exact reason for the failure?

    Ordering it is

    Like it or not, there’s no way to achieve skipping and easy analysis without ordering:

    // Ordering is achieved using TestNG
    
    @Test
    public void bar_should_not_be_null() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isNotNull();
    }
    
    @Test(dependsOnMethods = "bar_should_not_be_null")
    public void bar_should_be_of_type_list() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isInstanceOf(List.class);
    }

    @Test(dependsOnMethods = "bar_should_be_of_type_list")
    public void list_should_not_be_empty() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).isNotEmpty();
    }
    
    @Test(dependsOnMethods = "list_should_not_be_empty")
    public void list_should_be_of_size_x() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).hasSize(x);
    }
    
    @Test(dependsOnMethods = "list_should_be_of_size_x")
    public void instances_should_be_of_type_foo() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();
            assertThat(it).isInstanceOf(Foo.class);
        });
    }
    

    The result is the following:

    Ordered tests results

    Of course, the same result is achieved when the test is run with Maven:

    Tests run: 5, Failures: 1, Errors: 0, Skipped: 4, Time elapsed: 0.52 sec <<< FAILURE!
    bar_should_not_be_null(ControllerTest)  Time elapsed: 0.037 sec  <<< FAILURE!
    java.lang.AssertionError: 
    Expecting actual not to be null
    	at ControllerTest.bar_should_not_be_null(ControllerTest.java:31)
    
    Results :
    
    Failed tests: 
    	ControllerTest.bar_should_not_be_null:31 
    Expecting actual not to be null
    
    Tests run: 5, Failures: 1, Errors: 0, Skipped: 4
    

    In this case, by ordering unit test methods, one can optimize both testing time and failure analysis by skipping tests that are bound to fail anyway.

    Unit testing and Integration testing

    In my talks about Integration testing, I usually use the example of a prototype car. Unit testing is akin to testing every nut and bolt of the car, while Integration testing is like taking the prototype on a test drive.

    No project manager would take the risk of sending the car on a test drive without having made sure its pieces are of good enough quality. It would be too expensive to fail just because of a faulty screw; test drives are supposed to validate higher-level concerns, those that cannot be checked by Unit testing.

    Hence, unit tests should be run first, and integration tests only afterwards. In that case, one can rely on the Maven Failsafe plugin to run Integration tests later in the Maven lifecycle.

    Integration testing scenarios

    What might be seen as a corner-case in unit-testing is widespread in integration tests, and even more so in end-to-end tests. In the latter case, an example I regularly use is the e-commerce application. The steps of a typical scenario are as follows:

    1. Browse the product catalog
    2. Put one product in the cart
    3. Display the summary page
    4. Fill in the delivery address
    5. Choose a payment type
    6. Enter payment details
    7. Get order confirmation

    In a context with no ordering, this has several consequences:

    • Step X+1 depends on step X, e.g. to enter payment details, one must have chosen a payment type first, requiring that the latter works
    • Steps X+1 and X+2 both need to set up step X. This leads either to code duplication - as setup code is copy-pasted into all required steps - or to common setup code, which increases maintenance cost (yes, sharing is caring, but it’s also more expensive).
    • The initial state of step X+1 is the final state of step X, i.e. at the end of testing step X, the system is ready to start testing step X+1
    • Trying to test step X+n if step X already failed is wasted time, both in terms of server execution time and of failure analysis time. Of course, the higher n, the more waste.

    This is very similar to the section above about unit test ordering. Given this, I have no doubt that ordering steps in an integration testing scenario is far from a bad practice; it is good judgement.

    Conclusion

    As in many cases in software development, a rule has to be contextualized. While in general, it makes no sense to have ordering between tests, there are more than a few cases where it does.

    Software development is hard because the “real” stuff is not learned by sitting on university benches but through repeated practice and experimenting under the tutorship of more senior developers. If enough more-senior-than-you devs tend to hold the same opinion on a subject, chances are you’ll take it for granted as well. At some point, one should single out such an opinion and challenge it, to check whether it holds in one’s own context.

  • On PowerMock abuse

    PowerMock logo

    Still working on my legacy application, and still trying to improve unit tests.

    This week, I noticed how much PowerMock was used throughout the tests, to mock either static or private methods. In one specific package, removing it improved test execution time by one order of magnitude (from around 20 seconds to 2). That’s clearly abuse: I saw three main reasons for using PowerMock.

    Lack of knowledge of the API

    There must have been good reasons at the time, but some uses of PowerMock could have been avoided if developers had just checked the underlying code. One example of such code was the following:

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(SecurityContextHolder.class)
    public class ExampleTest {
    
        @Mock private SecurityContext securityContext;
    
        @Before
        public void setUp() throws Exception {
            mockStatic(SecurityContextHolder.class);
            when(SecurityContextHolder.getContext()).thenReturn(securityContext);
        }
    
        // Rest of the test
    }
    

    Just a quick glance at Spring’s SecurityContextHolder reveals it has a setContext() method, so that the previous snippet can easily be replaced with:

    @RunWith(MockitoJUnitRunner.class)
    public class ExampleTest {
    
        @Mock private SecurityContext securityContext;
    
        @Before
        public void setUp() throws Exception {
            SecurityContextHolder.setContext(securityContext);
        }
    
        // Rest of the test
    }
    

    Another common snippet I noticed was the following:

    @RunWith(PowerMockRunner.class)
    @PrepareForTest(WebApplicationContextUtils.class)
    public class ExampleTest {
    
        @Mock private WebApplicationContext wac;
    
        @Before
        public void setUp() throws Exception {
            mockStatic(WebApplicationContextUtils.class);
            when(WebApplicationContextUtils.getWebApplicationContext(any(ServletContext.class))).thenReturn(wac);
        }
    
        // Rest of the test
    }
    

    While slightly harder than the previous example, looking at the source code of WebApplicationContextUtils reveals that it retrieves the context from a servlet context attribute.

    The test code can easily be changed to remove PowerMock:

    @RunWith(MockitoJUnitRunner.class)
    public class ExampleTest {
    
        @Mock private WebApplicationContext wac;
        @Mock private ServletContext sc;
    
        @Before
        public void setUp() throws Exception {
            when(sc.getAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE)).thenReturn(wac);
        }
    
        // Rest of the test
    }
    

    Too strict visibility

    As seen above, good frameworks - such as Spring - make it easy to use them in tests. Unfortunately, the same cannot always be said of our own code. In this case, I removed PowerMock by widening the visibility of methods and classes from private (or package) to public.

    You could argue that breaking encapsulation to improve tests is wrong, but in this case, I tend to agree with Uncle Bob:

    Tests trump Encapsulation.

    In fact, you think your encapsulation prevents other developers from misusing your code. Yet, you break it with reflection within your tests… What guarantees developers won’t use reflection the same way in production code?

    A pragmatic solution is to compromise your design a bit but - and that’s the heart of the matter - document it. The Guava and FEST libraries both have a @VisibleForTesting annotation that I find quite convenient. The icing on the cake would be for IDEs to recognize it and not propose the annotated members in auto-completion inside the src/main/java folder.
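
    As a sketch of the idea - assuming a hand-rolled stand-in for Guava’s annotation (the real one lives in com.google.common.annotations), and a made-up PriceCalculator class with an example tax rate - widening visibility while documenting it could look like this:

    ```java
    import java.lang.annotation.Documented;

    // Stand-in for Guava's com.google.common.annotations.VisibleForTesting,
    // declared here only to keep the snippet self-contained
    @Documented
    @interface VisibleForTesting { }

    class PriceCalculator {

        // 20% tax rate is a made-up example value
        public double priceWithTax(double price) {
            return round(price * 1.2);
        }

        // Was private; widened to package visibility so tests in the same
        // package can check the rounding rule directly - and documented as such
        @VisibleForTesting
        static double round(double value) {
            return Math.round(value * 100) / 100.0;
        }
    }
    ```

    IDE support aside, the annotation at least signals to reviewers that the widened visibility is deliberate, not an oversight.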

    Direct usage of static methods

    This last point has been explained time and time again, but some developers still fail to apply it correctly. Some very common APIs offer only static methods with no alternatives, e.g. Locale.getDefault() or Calendar.getInstance(). Such methods shouldn’t be called directly in your production code, or they’ll make your design testable only with PowerMock.

    public class UntestableFoo {
        public void doStuff() {
            Calendar cal = Calendar.getInstance();
            // Do stuff on calendar;
        }
    }
    
    @RunWith(PowerMockRunner.class)
    @PrepareForTest(Calendar.class)
    public class UntestableFooTest {
    
        @Mock private Calendar cal;
        private UntestableFoo foo;
    
        @Before
        public void setUp() {
            mockStatic(Calendar.class);
            when(Calendar.getInstance()).thenReturn(cal);
            // Stub cal accordingly
    
            foo = new UntestableFoo();
        }
    
        // Add relevant test methods
    }
    
    

    To fix this design flaw, simply use injection - more precisely, constructor injection:

    public class TestableFoo {
    
        private final Calendar calendar;
    
        public TestableFoo(Calendar calendar) {
            this.calendar = calendar;
        }
    
        public void doStuff() {
            // Do stuff on calendar;
        }
    }
    
    @RunWith(MockitoJUnitRunner.class)
    public class TestableFooTest {
    
        @Mock
        private Calendar cal;
    
        private TestableFoo foo;
    
        @Before
        public void setUp() {
            // Stub cal accordingly
    
            foo = new TestableFoo(cal);
        }
    
        // Add relevant test methods
    }
    
    

    At this point, the only question is how to create the instance in the first place. Quite easily, depending on your injection method: Spring @Bean methods, CDI @Inject Provider methods, or calling the getInstance() method in a method of your own. Here's the Spring way:

    @Configuration
    public class MyConfiguration {
    
        @Bean
        public Calendar calendar() {
            return Calendar.getInstance();
        }
    
        @Bean
        public TestableFoo foo() {
            return new TestableFoo(calendar());
        }
    }
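
    For the last option mentioned above - calling Calendar.getInstance() in a method of your own - a minimal no-framework sketch could look like the following (the currentYear() method is mine, added just to have something observable):

    ```java
    import java.util.Calendar;

    class TestableFoo {

        private final Calendar calendar;

        TestableFoo(Calendar calendar) {
            this.calendar = calendar;
        }

        int currentYear() {
            return calendar.get(Calendar.YEAR);
        }

        // Hand-rolled wiring, playing the role of the Spring @Bean method above:
        // production code calls this factory, while tests call the constructor
        // directly with a stubbed or fixed Calendar
        static TestableFoo getInstance() {
            return new TestableFoo(Calendar.getInstance());
        }
    }
    ```

    In a test, `new TestableFoo(fixedCalendar)` gives full control over time-dependent behavior without any PowerMock.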
    

    Conclusion

    PowerMock is a very powerful and useful tool. But it should only be used when it’s strictly necessary as it has a huge impact on test execution time. In this article, I’ve tried to show how you can do without it in 3 different use-cases: lack of knowledge of the API, too strict visibility and direct static method calls. If you notice your test codebase full of PowerMock usages, I suggest you try the aforementioned techniques to get rid of them.

    Note: I’ve never been a fan of TDD (probably the subject of another article), but I believe the last 2 points could easily have been avoided had TDD been used.

    Categories: Java Tags: powermock, unit testing
  • Initializing your Mockito mocks

    Mockito logo

    Maintenance projects are not fun compared to greenfield projects, but they sure provide most of the meat for this blog. This week saw me not checking the production code but the tests. What you see in tests reveals much of how the production code itself is written. And it’s a way to change things for the better, with less risks.

    At first, I only wanted to remove as much PowerMock uses as possible. Then I found out most Mockito spies were not necessary. Then I found out that Mockito mocks were initialized in 3 different ways in the same file; then usages of both given() and when() in the same class, then usages of when() and doReturn() in the same class… And many more areas that could be improved.

    In this post, I’ll limit myself to summing up the 3 ways to initialize your mocks in your test class: pick and choose the one you like best, but please stick with it in your class (if not your project). Consistency is one of the pillars of maintainability.

    Reminder

    Unit-testing is an important foundation of Software Quality (but not the only one!). Since Object-Oriented Design consists of multiple components, each with a dedicated responsibility, it’s crucial to make sure each of those components performs its tasks in an adequate manner. Hence, we have to feed a component with known inputs and check the validity of its outputs. Testing the component in isolation thus requires a way to replace its dependencies with input providers and output receivers under our control. Mockito is one framework (among others) that lets you achieve that.
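
    Before reaching for a framework, the idea can be shown with a hand-rolled stub; the names below (WeatherService, Greeter) are made up for illustration and not part of any real API:

    ```java
    // A dependency we want to control during the test
    interface WeatherService {
        int temperature();
    }

    // The component under test: its output depends on the injected service
    class Greeter {

        private final WeatherService weather;

        Greeter(WeatherService weather) {
            this.weather = weather;
        }

        String greet() {
            return weather.temperature() > 25 ? "Hot today!" : "Nice day.";
        }
    }

    class HandRolledStubDemo {
        public static void main(String[] args) {
            // The lambda is a hand-rolled stub providing a known input
            Greeter greeter = new Greeter(() -> 30);
            System.out.println(greeter.greet()); // prints "Hot today!"
        }
    }
    ```

    Mockito automates exactly this kind of substitution, without requiring you to write a stub class per dependency.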

    Creating those mocks in the first place can be done in different ways (I’ll use Mockito wording in place of the standard one).

    One-by-one explicit mocking

    The first and most straightforward way is to use Mockito’s mock() static method.

    public class FooTest {
    
        private Foo foo;
    
        @Before
        public void setUp() {
            foo = Mockito.mock(Foo.class);
        }
    
        @Test
        public void test_foo() {
            Bar bar = Mockito.mock(Bar.class);
            
            // Testing code go there
        }
    }
    

    It might be the most verbose option, but it’s also the easiest to understand, as the API is quite explicit. Plus, it’s not dependent on a testing framework, and mocks can be created anywhere, including as local variables. On the downside, if you’ve got a high number of dependencies, you need to initialize them one by one - but perhaps having such a high number is a sign of bad design?

    All-in-one general mocking call

    The second option aims to fix this problem (the number of calls, not the design…). It replaces all method calls by a single call that will mock every required attribute. In order to tell which attribute should be a mock, the @Mock annotation must be used.

    public class FooTest {
    
        @Mock private Foo foo;
    
        @Mock private Bar bar;
    
        @Before
        public void setUp() {
            MockitoAnnotations.initMocks(this);
        }
    
        @Test
        public void test_foo() {
            // Testing code go there
        }
    }
    

    Here, the 2 Mockito.mock() calls (or more) were replaced by a single call. On the downside, a local variable had to be promoted to an attribute in order for Mockito to inject the mock. Besides, you have to understand how the initMocks() method works.

    JUnit runner

    The last option replaces the explicit initMocks() call with a JUnit runner.

    @RunWith(MockitoJUnitRunner.class)
    public class FooTest {
    
        @Mock private Foo foo;
    
        @Mock private Bar bar;
    
        @Test
        public void test_foo() {
            // Testing code go there
        }
    }
    

    This option alleviates the pain of writing even the single method call. But it has the downsides of option 2, plus it binds you to JUnit, foregoing alternatives such as TestNG.

    Conclusion

    As a maintainer, my preferences are in decreasing order:

    1. explicit method calls: because they're the most explicit, thus the most readable
    2. initMocks(): because it might hide the fact that you've got too many dependencies... and having to replace local variables with attributes sucks
    3. runner: it's even worse than the previous one. Didn't you read all my claims about TestNG?

    But the most important bit is, whatever option you choose, stick to it in the same class - if not the same project. Nothing makes code more unreadable than a lack of consistency.

    Categories: Java Tags: mockito, unit testing
  • Integration tests from the trenches

    This post is the written form of one of my submissions for Devoxx France 2013. As it was only chosen as a backup, I lacked the necessary motivation to prepare it. The subject is important though, so I finally decided to write it down.

    In 2013, if you’re a standard developer, it is practically a given that you test your code. Whether you’re a practitioner of TDD or write your tests afterwards, most realize a robust automated test harness is not optional but mandatory.

    Unit Test vs Integration Test

    There’s something implicit behind the notion of ‘test’, though. Depending on the person you ask, there may be differences in what a test is. Let us provide two definitions to prevent potential future misunderstanding.

    Unit test
    A unit test tests a method in isolation. In this case, the unit is the method. Outside dependencies should be mocked to provide known input(s). These inputs can be set to different values, including border-case values, to check all code branches.
    Integration test
    An integration test tests the collaboration of two (or more) units. In this case, the unit is the class. This means that as soon as you want to check the execution result of more than one class, it is an integration test.

    On one hand, achieving 100% UT success with 100% code coverage is not enough to guarantee a bug-free application. On the other hand, a complete IT harness is not feasible in the context of a real-world project, since it would cost too much. The consequence is that both are complementary.

    Knowing the good IT & UT ratio is part of being a successful developer. It depends on many factors, including some outside the pure technical, such as the team skills and experience, application lifetime and so on.

    Fail-fast tests

    Most IT are inherently slower than UT, as they require some kind of initialization. In modern-day applications, for example, this directly translates to some DI framework bootstrap.

    It is highly unlikely that IT succeed while UT fail. Therefore, in order to have the shortest feedback loop in case of a failure, it is a good idea to execute the fastest tests first, check whether they fail, and only then execute the slowest ones.

    Maven provides the way to achieve this with the following plugins combination:

    1. maven-surefire-plugin to execute unit tests; convention is that their names end with Test
    2. maven-failsafe-plugin to execute integration tests; names end with IT

    By default, the former is executed during the test phase of the Maven build lifecycle. Unfortunately, configuration of the latter is mandatory:

    <project>
      ...
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-failsafe-plugin</artifactId>
            <version>2.14.1</version>
            <executions>
              <execution>
                <goals>
                  <goal>integration-test</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </project>
    

    Note the Maven mojo is already bound to the integration-test phase, so that it is not necessary to configure it explicitly.

    This way, UT - whose name pattern is *Test - are executed during the test phase, while IT - whose name pattern is *IT - are executed during the integration-test phase.

    System under test boundaries

    The first step before writing IT is to define the SUT boundaries. Inside lies everything that we can manage, outside the rest. For example, the database clearly belongs to the SUT because we can easily provide test data, whereas a web service does not.

    Inside the SUT, mocking or initializing subsystems depends on the needed/wanted integration level:

    1. To UT a use-case, one could mock DAOs in the service class
    2. Alternatively, one could initialize a test database and use real world data. DBUnit is the framework to use in this case, providing ways to initialize data during set up and check results afterwards.

    Doing both would let us test corner cases in the mocking approach (faster) and the standard case in the initialization one (slower).

    Outside the SUT, there is an important decision to make regarding IT. If external dependencies are not up and stable most of the time, the build will break too often and throw too many red herrings. Those IT will probably be deactivated very quickly.

    In order to still benefit from those IT without letting them get ignored (or even worse, removed), they should be put in a dedicated module outside the standard build. When using a continuous integration server, you should set up two jobs: one for the standard build and one for those dependent IT.

    In Maven, it is easily achieved by creating a module at the parent root and not referencing it in the parent POM. This way, running mvn integration-test will only launch tests that are in the standard modules, those that are reliable enough.
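
    Assuming hypothetical module names, the parent POM would simply not list the fragile module, so the standard build skips it while a dedicated CI job builds it alone:

    ```xml
    <!-- parent pom.xml: the fragile-it module exists on disk next to the
         other modules but is intentionally NOT listed here -->
    <modules>
      <module>core</module>
      <module>webapp</module>
      <!-- <module>fragile-it</module> is built only by its own CI job -->
    </modules>
    ```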

    Fragile tests

    As seen above, fragility in IT comes from dependency on external systems. Those fragile tests are best handled as a separate module, outside the standard build process.

    However, another cause of fragility is instability. And in any application, there’s a layer that is fundamentally unstable: the presentation layer. Whatever the chosen technology, this layer is exposed to end-users, and you can be sure there will be many changes there. Whereas your service APIs are your own, the GUI is subject to users' whims, period.

    IT that use the GUI - whether dedicated GUI tests or end-to-end tests - should thus also be considered fragile and be isolated in a dedicated module, as above.

    My personal experience, coupled with some writings by Gojko Adzic, taught me to bypass the GUI layer and start my tests at the servlet layer. Spring Test provides a bunch of fakes for the Servlet API.

    Tests ordering

    Developers unfamiliar with IT will probably be astounded by this section title. Indeed, UT should be context-independent and never ordered.

    For IT, things are a little different: I like to define some IT as user stories. For example, the user logs in, then performs some action, then another. Those steps can of course be defined in the same test method.

    The negative side of this is that if the test fails, we have no easy way of knowing what went wrong and during which step. We can remedy that by isolating each step in its own method and ordering those methods.

    TestNG - a JUnit offshoot fully integrated with Maven and Spring - lets us do that easily:

    public class MyScenarioIT {
    
        @Test
        public void userLogsIn() {
            ...
        }
    
    @Test(dependsOnMethods = "userLogsIn")
        public void someAction() {
            ...
        }
    
    @Test(dependsOnMethods = "someAction")
        public void anotherAction() {
            ...
        }
    }
    

    This way, when an error occurs or an assert fails, the log displays the faulty method: we just need adequate names for each method.

    Framework specifics UT

    Java EE in-container UT

    Up until recently, automated UT were executed independently of any container. For Spring applications this was no big deal, but for container-intensive Java EE applications, you had to fake and mock dependencies. In the end, there was no guarantee that really running the application inside the container would produce the expected results.

    Arquillian brought a paradigm shift, with the ability to produce UT for applications using Java EE. If you do not use it already, know that Arquillian lets you automate the creation of the desired archive and its deployment to one or more configured application servers.

    Those servers can either be existing installations, or be downloaded, extracted and configured during UT execution. The former category needs to be segregated into a separate module so as to prevent breaking the build, while the latter can safely be assimilated to regular UT.

    Spring UT

    Spring Test provides a feature to assemble different Spring configuration fragments (either XML or Java classes). This means that by designing our application with enough modularity, we can combine production-ready and test-only configuration fragments as desired.

    As an example, by separating our data source and DAO in two different fragments, we can reuse the regular DAO fragment in UT and use a test fragment declaring an in-memory database. With Maven, this means having the former in src/main/resources and the latter in src/test/resources.

    @ContextConfiguration(locations = {"classpath:datasource.xml", "classpath:dao.xml"})
    public class MyCustomUT extends AbstractTestNGSpringContextTests {
        ...
    }
    
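    As an illustration of such a test-only fragment, here is a minimal sketch of what src/test/resources/datasource.xml could look like, assuming HSQLDB as the in-memory database (bean id, file name and credentials are only examples):

    ```xml
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!-- Test-only fragment: declares an in-memory HSQLDB data source -->
    <beans>
        <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
            <property name="driverClassName" value="org.hsqldb.jdbcDriver" />
            <property name="url" value="jdbc:hsqldb:mem:testdb" />
            <property name="username" value="sa" />
            <property name="password" value="" />
        </bean>
    </beans>
    ```

    Since the regular DAO fragment only references the data source bean by id, this file is the single thing that changes between production and tests.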

    Conclusion

    This is the big picture of my personal experience with UT. As always, these shouldn't be seen as hard and fast rules but should be adapted to your specific project context.

    However, tools listed in the article should be a real asset in all cases.

    Categories: Java Tags: integration testing, test, unit testing
  • PowerMock, features and use-cases

    Even if you don’t like it, your job sometimes requires you to maintain legacy applications. It happened to me (fortunately rarely), and the first thing I look for before writing as much as a single character in a legacy codebase is a unit testing harness. Most of the time, I tend to focus on the code coverage of the part of the application I need to change, and try to improve it as much as possible, even when unit tests are completely lacking.

    Real trouble happens when the design isn't state-of-the-art. Unit testing requires mocking, which only works when dependency injection (as well as initialization methods) makes classes unit-testable: this isn't the case in some legacy applications, where there is a mess of private and static methods or initialization code in constructors, not to mention static blocks.

    For example, consider the following code:

    public class ExampleUtils {
        public static void doSomethingUseful() { ... }
    }
    
    public class CallingCode {
        public void codeThatShouldBeTestedInIsolation() {
            ExampleUtils.doSomethingUseful();
            ...
        }
    }
    

    It’s impossible to properly unit test the codeThatShouldBeTestedInIsolation() method since it has a dependency on another unmockable static method of another class. Of course, there are proven techniques to overcome this obstacle. One such technique would be to create a “proxy” class that would wrap the call to the static method and inject this class in the calling class like so:

    public class UsefulProxy {
        public void doSomethingUsefulByProxy() {
            ExampleUtils.doSomethingUseful();
        }
    }
    
    public class CallingCodeImproved {
        private UsefulProxy proxy;
        public void codeThatShouldBeTestedInIsolation() {
            proxy.doSomethingUsefulByProxy();
            ...
        }
    }
    

    Now I can inject a mock UsefulProxy and finally test my method in isolation. There are several drawbacks to ponder, though:

    • The produced code provides no tests by itself, only a way to make tests possible.
    • While writing this little workaround, you didn't produce any tests. At this point, you have achieved nothing.
    • You changed code before testing it and took the risk of breaking behavior! Granted, the example doesn't imply any complexity, but such is not always the case in real-life applications.
    • You made the code more testable, but only at the cost of an additional layer of complexity.

    For all these reasons, I would recommend this approach only as a last resort. Even worse, some designs are completely closed to simple refactoring, such as the following example, which contains a static initializer:

    public class ClassWithStaticInitializer {
        static { ... }
    }
    

    As soon as the ClassWithStaticInitializer class is loaded by the class loader, the static block will be executed, for good or ill (in the light of unit testing, it probably will be the latter).
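    To make the problem concrete, here is a minimal, self-contained sketch (the configuration check inside the static block is invented for illustration): the block runs the first time any test merely touches the class, and its failure surfaces as an ExceptionInInitializerError that no mock can intercept.

    ```java
    // Hypothetical illustration: a static initializer that fails at class-load time
    class ClassWithStaticInitializer {
        static {
            // Imagine this reads a config file or opens a connection; here it just fails
            if (System.getProperty("app.config") == null) {
                throw new IllegalStateException("missing configuration");
            }
        }

        static int answer() {
            return 42;
        }
    }

    public class StaticInitDemo {
        public static void main(String[] args) {
            try {
                // The first reference to the class triggers the static block
                ClassWithStaticInitializer.answer();
            } catch (ExceptionInInitializerError e) {
                System.out.println("class load failed: " + e.getCause().getMessage());
            }
        }
    }
    ```
    
    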

    My mock framework of choice is Mockito. Its designers made sure features such as static method mocking weren’t available, and I thank them for that: it means that if I cannot use Mockito, it’s a design smell. Unfortunately, as we’ve seen previously, tackling legacy code may require such features. That’s when PowerMock enters the stage (and only then: using PowerMock in a standard development process is also a sure sign the design is fishy).

    With PowerMock, you can leave the initial code untouched and still test it, so you can begin changing the code with confidence. Here’s the test code for the first legacy snippet, using Mockito and TestNG:

    import static org.powermock.api.mockito.PowerMockito.mockStatic;
    import static org.powermock.reflect.Whitebox.getInternalState;
    import static org.testng.Assert.assertEquals;

    import org.powermock.core.classloader.annotations.PrepareForTest;
    import org.powermock.modules.testng.PowerMockObjectFactory;
    import org.testng.IObjectFactory;
    import org.testng.annotations.BeforeMethod;
    import org.testng.annotations.ObjectFactory;
    import org.testng.annotations.Test;

    @PrepareForTest(ExampleUtils.class)
    public class CallingCodeTest {
    
        private CallingCode callingCode;
    
        @BeforeMethod
        protected void setUp() {
            mockStatic(ExampleUtils.class);
            callingCode = new CallingCode();
        }
    
        @ObjectFactory
        public IObjectFactory getObjectFactory() {
            return new PowerMockObjectFactory();
        }
    
        @Test
        public void callingMethodShouldntRaiseException() {
            callingCode.codeThatShouldBeTestedInIsolation();
            assertEquals(getInternalState(callingCode, "i"), 1);
        }
    }
    

    There isn’t much to do, namely:

    • Annotate test classes (or individual test methods) with @PrepareForTest, which references classes or whole packages. This tells PowerMock to allow byte-code manipulation of those classes; the actual mocking is set up in the following steps.
    • Mock the desired methods with the available palette of mockXXX() methods.
    • Provide the object factory in a method that returns IObjectFactory and is annotated with @ObjectFactory.

    Also note that with the help of the Whitebox class, we can access the class's internal state (i.e. private variables). Even though this is bad practice, the alternative of taking chances with the legacy code without a test harness is worse: remember, our goal is to lessen the chance of introducing new bugs.

    The list of PowerMock features available for Mockito can be found here. Note that suppressing static blocks is not possible with TestNG right now.

    You can find the sources for this article here in Maven format.

    Categories: Java Tags: mock, powermock, testng, unit testing
  • Dependency Injection on GUI components

    For some, it goes without saying, but a recent exchange made me wonder about the dangers of leaving it implicit.

    In my book Learning Vaadin, I showed how to integrate the Vaadin web framework with the Spring DI framework: in order to make my point, I wired Vaadin UI components together. I did this because I didn’t want to go into the intricacies of creating a service layer. I had some comments questioning whether injecting UI components was relevant. The question applies equally to Vaadin and Swing. IMHO, it is not, and if there’s a second version of Learning Vaadin, I will either write a warning in big bold font or choose a more complex example.

    Why so? Here’s my reasoning so far. There are some reasons to use DI. To just name a few:

    • If I wanted to be controversial, I'd say that some use it because it's hype (or because they've always done so)... but people that know me also know I'm not this kind of guy.
    • Some developers (or, Heaven forbid, architects) use DI because it breaks dependencies
    • As for myself, the main reason I use DI is to make my classes more testable.

    Now, UI components may have to collaborate with three other kinds of components: other UI components, GUI behavior and services. Since I consider the Vaadin framework to be top quality, I won’t unit test the available components, only the behavior I’ve coded (more info on how to separate UI components and their behavior here) and services. Therefore, wiring UI components together has no real value for me.

    Injecting behavior and services is another matter: since those will likely need to be unit-tested and have dependencies themselves (either on the service layer or the data access layer), I need them to be provided by the DI context so they’ll be injectable themselves. At this point, there are basically two solutions:

    • Either wire only services and behaviors. This leads to a dependency on the DI framework in the UI components.
    • Or wire all components at the root and let the framework handle injection. This may be overkill in some cases. This is the choice I made when writing the example for Learning Vaadin.

    I hope this post will clarify the whole inject UI components thing for you dear readers.

    Categories: Development Tags: unit testing, user interface
  • Should you change your design for testing purposes?

    As Dependency Injection frameworks go, the standard is currently CDI. When switching from Spring, one has to consider the following problem: how do you unit test your injected classes?

    In Spring, DI is achieved through either constructor injection or setter injection. Both allow for simple unit testing by providing the dependencies and calling either the constructor or the desired setter. Now, how do you unit test the following code, which uses field injection:

    public class MyMainClass {
      @Inject
      private MyDependencyClass dependency;
      // dependency is used somewhere else
      ...
    }
    

    Of course, there are some available options:

    1. you can provide a setter for the dependency field, just as you did in Spring
    2. you can use reflection to access the field
    3. you can use a testing framework that does the reflection for you (PrivateAccessor from JUnit-addons or PowerMock come to mind)
    4. you can increase the field visibility to package (i.e. default) and put the test case in the same package

    Amazingly enough, when Googling through the web, the vast majority of code samples that unit test field-injected classes demonstrate the increased-visibility option. Do check if you do not believe me: they do not display the private visibility (here, here and here, for example). Granted, it’s a rather rhetorical question, but what annoys me is that it’s implicit, whereas IMHO it should be a deeply thought-out decision.

    My point is, changing your design for testing purposes is like shooting yourself in the foot. Design shouldn’t be sold cheaply. Otherwise, what will be the next step: changing your design for build purposes? Feels like a broken window to me. As long as it’s possible, I would rather stick to a testing framework that enables private field access. It achieves exactly the same purpose, but at least it keeps my initial design intact.
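    Option 2 above, plain reflection, can be sketched as follows; the field stays private and no production code changes. The stand-in classes mirror the hypothetical example above (the greet()/delegate() methods are invented for the demo):

    ```java
    import java.lang.reflect.Field;

    // Stand-ins for the classes above; greet()/delegate() are invented for the demo
    class MyDependencyClass {
        String greet() {
            return "real";
        }
    }

    class MyMainClass {
        private MyDependencyClass dependency; // normally set by @Inject

        String delegate() {
            return dependency.greet();
        }
    }

    public class FieldInjectionDemo {
        public static void main(String[] args) throws Exception {
            MyMainClass main = new MyMainClass();

            // The test double we want injected
            MyDependencyClass stub = new MyDependencyClass() {
                @Override
                String greet() {
                    return "stub";
                }
            };

            // Inject via reflection, leaving the field's visibility untouched
            Field field = MyMainClass.class.getDeclaredField("dependency");
            field.setAccessible(true);
            field.set(main, stub);

            System.out.println(main.delegate()); // prints "stub"
        }
    }
    ```
    
    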

    Categories: Development Tags: unit testing
  • Mockito's spy() method and Spring

    Mockito is a mocking framework (see Two different mocking approaches) that is an offshoot of EasyMock. Whatever the mocking framework one uses, a common feature is the ability to mock interfaces, through the JDK Proxy class. This is all well and good, but one has to explicitly mock every method that one wants to use in the course of the test.

    What if I want to mock an already existing implementation, with some methods providing behaviors that suit me? Today, I ran across this case: I had a legacy helper class I wanted to reuse. This class used commons-http-client to ease the process of calling a URL. It had property accessors, like any good old POJO, that I really needed even in the test scope, and a method that made the real call using previously set properties (such as the URL). It implemented no interface, though. Here’s what it looked like:

    public class LegacyHelper {
    
        // Various attributes
        ...
    
        // Various accessors to get/set these properties
        ...
    
        // One big method that uses external resources (very bad for Unit Testing)
        public int callUrl() {
            ...
        }
    }
    

    Although Mockito lacks exhaustive documentation (a trait shared by many Google Code projects, much to my dismay), I happened to run across the Mockito.spy() method. This magic method creates a proxy (hence the name spy) around a real object. It delegates method calls to the proxied object unless those methods are stubbed. It means I could rely on the getters/setters doing their work while neutralizing the legacy method that broke test isolation.

    public class MyTest {
    
        // This I don't want to test but my class uses it
        private LegacyHelper helper;
    
        @BeforeMethod
        public void setUp() {
            helper = Mockito.spy(new LegacyHelper());
            Mockito.when(helper.callUrl()).thenReturn(0);
        }
    
        @Test
        public void testCall() {
            // Now I can use helper without it really calling anything
            helper.callUrl();
            // Do real testing here
           ...
        }
    }
    

    This is only the first step. What if I need to provide spied objects throughout the entire application? Spring certainly helps here with the FactoryBean interface. When Spring creates a new instance, it either calls the new operator or, if the referenced class is of type FactoryBean, the getObject() method. Our spy factory looks like this:

    import org.mockito.Mockito;
    import org.springframework.beans.factory.FactoryBean;

    public class SpyFactoryBean implements FactoryBean {

        // Real object to wrap in a spy
        private Object real;

        public void setReal(Object object) {
            real = object;
        }

        public boolean isSingleton() {
            return false;
        }

        public Class getObjectType() {
            return real.getClass();
        }

        public Object getObject() {
            return Mockito.spy(real);
        }
    }
    

    To use it in a Spring context file:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <beans>
        <bean id="legacyHelper" class="LegacyHelper" />
        <bean id="mockHelper" class="SpyFactoryBean" dependency-check="objects">
            <property name="real" ref="legacyHelper" />
        </bean>
    </beans>
    

    Now you’ve got a factory of spies that you can reuse across your project’s tests or, even better, ship for use in all your enterprise’s projects. The only thing left is not to forget to stub the methods that may have side effects.