
Designing your own Spring Boot starter – part 1

February 7th, 2016

Since its release, Spring Boot has been a huge success: it boosts developer productivity with its convention-over-configuration philosophy. However, sometimes it just feels too magical. I have always been an opponent of autowiring for this very reason. And when something doesn’t work, it’s hard to get back on track.

This is the reason why I wanted to dig deeper into the Spring Boot starter mechanism – to understand every nook and cranny. This post is the first part and focuses on analyzing how starters work. The second part will be a case study on creating one.

spring.factories

At the root of every Spring Boot starter lies the META-INF/spring.factories file. Let’s check the content of this file in the spring-boot-autoconfigure.jar. Here’s an excerpt of it:

...
# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.autoconfigure.admin.SpringApplicationAdminJmxAutoConfiguration,\
org.springframework.boot.autoconfigure.aop.AopAutoConfiguration,\
org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,\
org.springframework.boot.autoconfigure.MessageSourceAutoConfiguration,\
org.springframework.boot.autoconfigure.PropertyPlaceholderAutoConfiguration,\
org.springframework.boot.autoconfigure.batch.BatchAutoConfiguration,\
...
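Note that spring.factories uses the standard Java properties format, trailing-backslash line continuations included. As an illustration only – this is not Spring’s SpringFactoriesLoader, and the com.example class names are made up – such a comma-separated list can be parsed with nothing but java.util.Properties:

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class FactoriesParser {

    // Parses one key of a spring.factories-style properties document into class names
    public static List<String> parse(String content, String key) {
        Properties properties = new Properties();
        try {
            // Properties handles the trailing-backslash line continuations natively
            properties.load(new StringReader(content));
        } catch (IOException e) {
            throw new UncheckedIOException(e); // cannot happen with a StringReader
        }
        return Arrays.asList(properties.getProperty(key, "").split(","));
    }

    public static void main(String[] args) {
        // The com.example.* names are made up for the demo
        String factories = "org.springframework.boot.autoconfigure.EnableAutoConfiguration=\\\n"
                + "com.example.FirstAutoConfiguration,\\\n"
                + "com.example.SecondAutoConfiguration";
        System.out.println(parse(factories, "org.springframework.boot.autoconfigure.EnableAutoConfiguration"));
    }
}
```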

Now let’s have a look at one of those auto-configuration classes. For example, here’s the JpaRepositoriesAutoConfiguration class:

@Configuration
@ConditionalOnBean(DataSource.class)
@ConditionalOnClass(JpaRepository.class)
@ConditionalOnMissingBean({ JpaRepositoryFactoryBean.class, JpaRepositoryConfigExtension.class })
@ConditionalOnProperty(prefix = "spring.data.jpa.repositories", name = "enabled", havingValue = "true", matchIfMissing = true)
@Import(JpaRepositoriesAutoConfigureRegistrar.class)
@AutoConfigureAfter(HibernateJpaAutoConfiguration.class)
public class JpaRepositoriesAutoConfiguration {}

There are a couple of interesting things to note:

  1. It’s a standard Spring @Configuration class
  2. The class contains no “real” code but imports another configuration – JpaRepositoriesAutoConfigureRegistrar, which contains the “real” code
  3. There are a couple of @ConditionalOnXXX annotations used
  4. There seems to be some sort of ordering management with @AutoConfigureAfter

Points 1 and 2 are self-explanatory and point 4 is rather straightforward, so let’s focus on point 3.

@Conditional annotations

Unless you started working with Spring yesterday, you probably know about the @Profile annotation. Profiles are a way to mark a bean-returning method as optional: when a profile is activated, the matching profile-annotated method is called and the returned bean is contributed to the bean factory.

Some time ago, @Profile looked like this:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Profile {
    String[] value();
}

Interestingly enough, @Profile has been rewritten to use the new @Conditional annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@Documented
@Conditional(ProfileCondition.class)
public @interface Profile {
    String[] value();
}

Basically, a @Conditional annotation just points to a Condition implementation. In turn, a condition is a simple interface with a single method returning a boolean: if it returns true, the @Conditional-annotated method is executed by Spring and the returned object is added to the context as a bean.
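To make this concrete, here’s a toy re-implementation of the mechanism in plain Java – emphatically not Spring’s code, just a sketch of how a container could evaluate a condition before registering a bean:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Toy model of the conditional mechanism – NOT Spring's actual code
interface Condition {
    boolean matches();
}

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Conditional {
    Class<? extends Condition> value();
}

// Matches when java.lang.String is on the classpath (always true)
class OnClassCondition implements Condition {
    @Override
    public boolean matches() {
        try {
            Class.forName("java.lang.String");
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}

// Matches when java.lang.String is NOT on the classpath (always false)
class OnMissingClassCondition implements Condition {
    @Override
    public boolean matches() {
        try {
            Class.forName("java.lang.String");
            return false;
        } catch (ClassNotFoundException e) {
            return true;
        }
    }
}

class Config {
    @Conditional(OnClassCondition.class)
    public String present() { return "present bean"; }

    @Conditional(OnMissingClassCondition.class)
    public String absent() { return "absent bean"; }
}

public class MiniContext {

    // Calls each bean method unless its condition evaluates to false
    public static Map<String, Object> beansFrom(Object config) {
        Map<String, Object> beans = new HashMap<>();
        try {
            for (Method m : config.getClass().getDeclaredMethods()) {
                Conditional conditional = m.getAnnotation(Conditional.class);
                boolean matches = conditional == null
                        || conditional.value().getDeclaredConstructor().newInstance().matches();
                if (matches) {
                    beans.put(m.getName(), m.invoke(config));
                }
            }
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
        return beans;
    }

    public static void main(String[] args) {
        System.out.println(beansFrom(new Config()).keySet()); // prints [present]
    }
}
```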

There are a lot of conditions available out-of-the-box with Spring Boot:

  • OnBeanCondition – checks if a bean is in the Spring factory
  • OnClassCondition – checks if a class is on the classpath
  • OnExpressionCondition – evaluates a SpEL expression
  • OnJavaCondition – checks the version of Java
  • OnJndiCondition – checks if a JNDI branch exists
  • OnPropertyCondition – checks if a property exists
  • OnResourceCondition – checks if a resource exists
  • OnWebApplicationCondition – checks if a WebApplicationContext exists

Those can be combined with boolean conditions:

  • AllNestedConditions – AND operator
  • AnyNestedCondition – OR operator
  • NoneNestedConditions – NOT operator

Dedicated @Conditional annotations point to those conditions. For example, @ConditionalOnMissingBean points to the OnBeanCondition class.

Time to experiment

Let’s create a configuration class annotated with @Configuration.

The following method will run in all cases:

@Bean
public String string() {
    return "string()";
}

This one won’t, for java.lang.String is part of Java’s API:

@Bean
@ConditionalOnMissingClass("java.lang.String")
public String missingClassString() {
    return "missingClassString()";
}

And this one will, for the same reason:

@Bean
@ConditionalOnClass(String.class)
public String classString() {
    return "classString()";
}

Analysis of the previous configuration

Armed with this new knowledge, let’s analyze the above JpaRepositoriesAutoConfiguration class.

This configuration will be enabled if – and only if – all the following conditions are met:

@ConditionalOnBean(DataSource.class)
There’s a bean of type DataSource in the Spring context
@ConditionalOnClass(JpaRepository.class)
The JpaRepository class is on the classpath i.e. the project has a dependency on Spring Data JPA
@ConditionalOnMissingBean
There are no beans of type JpaRepositoryFactoryBean nor JpaRepositoryConfigExtension in the context
@ConditionalOnProperty
Either the spring.data.jpa.repositories.enabled property is absent from the configuration (e.g. application.properties), or its value is true – matchIfMissing = true covers the absent case

Additionally, the configuration will run after HibernateJpaAutoConfiguration (if the latter is referenced).

Conclusion

I hope I demonstrated that Spring Boot starters are no magic. Join me next week for a simple case study.


Why you shouldn’t trust the HTML password input

January 31st, 2016

This week, I wanted to make a simple experiment. Sure, all the applications we develop use HTTPS to encrypt the login/password in transit – but what happens before they are sent?

Let’s say I’ve typed my login/password but, before sending them, I get called by a colleague and leave my computer unattended. My password is protected by the HTML password input, right? It shows stars instead of the real characters. Well, it’s stupidly easy to circumvent: if you work on a developer workstation and have developer tools in your browser, just live-edit the page and change type="password" to type="text". Guess what? The input now displays the password in clear text. You have been warned!


The Java Security Manager: why and how?

January 17th, 2016

Generally, security concerns are boring for developers. I hope this article is entertaining enough for you to read it until the end since it tackles a very serious issue on the JVM.

Quiz

Last year, at Joker conference, my colleague Volker Simonis showed a snippet that looked like the following:

import java.lang.reflect.Field;

public class StrangeReflectionExample {

    public Character aCharacter;

    public static void main(String... args) throws Exception {
        StrangeReflectionExample instance = new StrangeReflectionExample();
        Field field = StrangeReflectionExample.class.getField("aCharacter");
        Field type = Field.class.getDeclaredField("type");
        type.setAccessible(true);
        type.set(field, String.class);
        field.set(instance, 'A');
        System.out.println(instance.aCharacter);
    }
}

Now a couple of questions:

  1. Does this code compile?
  2. If yes, does it run?
  3. If yes, what does it display?

Answers below (dots to let you think before checking them).
.
..

….
…..
……
…….
……..
………
……….
………..
…………
………….
…………..
……………
…………….
……………..
This code compiles just fine. In fact, it uses the so-called reflection API (located in the java.lang.reflect package) which is fully part of the JDK.

Executing this code leads to the following exception:

Exception in thread "main" java.lang.IllegalArgumentException: Can not set java.lang.String field ch.frankel.blog.securitymanager.StrangeReflectionExample.aCharacter to java.lang.Character
	at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167)
	at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171)
	at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
	at java.lang.reflect.Field.set(Field.java:764)
	at ch.frankel.blog.securitymanager.StrangeReflectionExample.main(StrangeReflectionExample.java:15)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

So, despite the fact that we defined the type of the aCharacter attribute as a Character at development time, the reflection API is able to change its type to String at runtime! Hence, trying to set it to 'A' fails.
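Reflection is powerful even without such internal tricks. Here’s a tamer, self-contained example that rewrites a private field at runtime – exactly the kind of operation the Security Manager discussed below can veto through the setAccessible() check:

```java
import java.lang.reflect.Field;

public class ReflectionBypass {

    static class Credentials {
        private String password = "secret"; // private, but not safe from reflection
    }

    // Overwrites the private field and returns its new value, read reflectively
    static String overwritePassword(Credentials credentials, String newValue) {
        try {
            Field field = Credentials.class.getDeclaredField("password");
            field.setAccessible(true); // this is the call a Security Manager can veto
            field.set(credentials, newValue);
            return (String) field.get(credentials);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(overwritePassword(new Credentials(), "pwned")); // prints "pwned"
    }
}
```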

Avoiding nasty surprises with the Security Manager

Reflection is not the only risky operation one might want to keep in check on the JVM. Reading or writing files also belongs to the set of potentially dangerous operations. Fortunately, the JVM has a mechanism to restrict those operations. Unfortunately, it’s not enabled by default.

In order to activate the SecurityManager, just launch the JVM with the java.security.manager system property, i.e. java -Djava.security.manager. At this point, the JVM will use the default JRE policy, configured in the file located at $JAVA_HOME/lib/security/java.policy (for Java 8). Here’s a sample of this file:

grant codeBase "file:${{java.ext.dirs}}/*" {
        permission java.security.AllPermission;
};

grant {
        permission java.lang.RuntimePermission "stopThread";
        permission java.net.SocketPermission "localhost:0", "listen";
        permission java.util.PropertyPermission "java.version", "read";
        permission java.util.PropertyPermission "java.vendor", "read";
        ...
}

The first section – grant codeBase – grants permissions to code loaded from a specific location; the second – a plain grant – grants specific permissions to all code.

Regarding the initial reflection problem mentioned above, the second part is the most relevant. One can read the following in the source of the AccessibleObject.setAccessible() method:

SecurityManager sm = System.getSecurityManager();
if (sm != null) sm.checkPermission(ACCESS_PERMISSION);
setAccessible0(this, flag);

Every security-sensitive method in the Java API embeds the same kind of check through the Security Manager. You can verify that for yourself in the source of the following methods:

  • Thread.stop()
  • Socket.bind()
  • System.getProperty()
  • etc.
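The same guard pattern can be applied to one’s own sensitive operations. A minimal sketch – the readGreeting operation and its permission name are invented for the example, but the guard mirrors the checks found in the JDK:

```java
public class GuardedOperation {

    public String readGreeting() {
        SecurityManager sm = System.getSecurityManager();
        if (sm != null) {
            // Throws SecurityException unless the active policy grants this permission
            sm.checkPermission(new RuntimePermission("readGreeting"));
        }
        return "hello";
    }

    public static void main(String[] args) {
        // Without -Djava.security.manager, no manager is installed and the call succeeds
        System.out.println(new GuardedOperation().readGreeting()); // prints "hello"
    }
}
```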

Using an alternate java.policy file

Using the JRE’s policy file is not convenient when the same JRE is shared by different applications. Given the current micro-services trend, each application may well run on its own JRE. Still, with automated provisioning, it might be more convenient to always provision the same JRE over and over and let each application provide its own specific policy file.

To add another policy file in addition to the default JRE’s, thus adding more permissions, launch the JVM with:
java -Djava.security.manager -Djava.security.policy=/path/to/other.policy

To replace the default policy file with your own, launch the JVM with:
java -Djava.security.manager -Djava.security.policy==/path/to/other.policy
Note the double equal sign.

Configuring your own policy file

Security configuration can be based on either a:

Black list
In a black-list scenario, everything is allowed by default, and exceptions can be configured to disallow some operations.
White list
In a white-list scenario, on the contrary, only operations that are explicitly configured are allowed; by default, all operations are disallowed.

If you want to create your own policy file, I suggest you start with a blank one and then launch your application. As soon as you get a security exception, add the necessary permission to the policy file. Repeat until you have all the necessary permissions. Following this process leaves you with only the minimal set of permissions required to run the application, thus implementing the least-privilege security principle.
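A white-list policy grown this way typically stays very small. For example (every permission below is illustrative, not taken from a real application):

```
grant {
    permission java.util.PropertyPermission "user.timezone", "read";
    permission java.io.FilePermission "/var/log/app/-", "read,write";
    permission java.net.SocketPermission "db.internal:5432", "connect";
};
```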

Note that if you’re using a container or a server, you’ll probably require a lot of those permissions, but this is the price to pay to secure your JVM against abuse.

Conclusion

I never checked policy files in production, but since I never heard any complaints, I assume JVM policies were never secured. This is a very serious problem! I hope this article will raise awareness regarding that lack of hardening – especially since, with the latest JVMs, you can create and compile Java code on the fly, leading to even more threats.


Playing with Spring Boot, Vaadin and Kotlin

January 10th, 2016

It’s no mystery that I’m a fan of both Spring Boot and Vaadin. When the Spring Boot Vaadin add-on became GA, I was ecstatic. Lately, I became interested in Kotlin, a JVM-based language offered by JetBrains. Thus, I wanted to check how I could develop a small Spring Boot Vaadin demo app in Kotlin – and learn something in the process. Here are my discoveries, in no particular order.

Spring needs non-final stuff

It seems Spring needs @Configuration classes and @Bean methods to be non-final. As my previous Spring projects were in Java, I never became aware of that, because I never use the final keyword. However, Kotlin classes and methods are final by default: hence, you have to mark them with the open keyword.

@Configuration
open class AppConfiguration {
    @Bean
    @UIScope
    open fun mainScreen() = MainScreen()
}

No main class

Spring Boot applications require a public static void main(String... args) method that references a class annotated with @SpringBootApplication. In general, those two are one and the same class.

Kotlin has no concept of static methods, but offers top-level functions and objects instead. I tried to be creative by having an annotated class referenced by a top-level function, both in the same file.

@SpringBootApplication
open class BootVaadinKotlinDemoApplication

fun main(vararg args: String) {
    SpringApplication.run(arrayOf(BootVaadinKotlinDemoApplication::class.java), args)
}

Different entry-point reference

Since the main function is not attached to a class, there’s no main class to reference when launching from the IDE. Yet, Kotlin compiles top-level functions into a class named after the file, suffixed with Kt.

My file is named BootVaadinKotlinDemoApplication.kt, hence the generated class name is BootVaadinKotlinDemoApplicationKt.class. This is the class to reference to launch the application in the IDE. Note that there’s no need to bother about that when using mvn spring-boot:run on the command-line, as Spring Boot seems to scan for the main method.

Short and readable bean definition

Java syntax is often seen as verbose. I don’t think it’s a big issue when the redundancy is low compared to the amount of useful code. However, in some cases, even I have to admit it’s a lot of ceremony for not much. One such case is defining beans with the Java syntax:

@Bean @UIScope
public MainScreen mainScreen() {
    return new MainScreen();
}

Kotlin cuts through all of the ceremony to keep only the meat:

  • No semicolon required
  • No new keyword
  • Block replaced with an equal sign since the body consists of a single expression
  • No return keyword required as there’s no block
  • No return type required as it can easily be inferred
@Bean @UIScope
fun mainScreen() = MainScreen()

Spring configuration files are generally quite long and hard to read. Kotlin makes them much shorter, without sacrificing readability.

The init block is your friend

In Java, the constructor is used for different operations:

  1. storing arguments into attributes
  2. passing arguments to the super constructor
  3. other initialization code

The first operation is a no-brainer, because attributes are declared as part of the class signature in Kotlin. Likewise, calling the super constructor is handled by the class signature. The rest of the initialization code is not part of the class signature and goes into an init block. Most applications don’t need one, but Vaadin needs to set up the layout and related components.

class MainScreenPresenter(tablePresenter: TablePresenter,
                          buttonPresenter: NotificationButtonPresenter,
                          view: MainScreen, eventBus: EventBus) : Presenter<MainScreen>(view, eventBus) {

    init {
        view.setComponents(tablePresenter.view, buttonPresenter.view)
    }
}

Use the apply method

Kotlin has a standard library offering small dedicated functions. One of them is apply, defined as inline fun <T> T.apply(f: T.() -> Unit): T. It’s an extension function, meaning every type gets it as soon as it’s imported into scope. It takes a function argument that returns nothing; inside that function, the apply-ed object is accessible as this (implicitly, as in standard Java code). It allows code like this:

VerticalLayout(button, table).apply {
    setSpacing(true)
    setMargin(true)
    setSizeFull()
}
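For comparison, a rough Java approximation of apply can be written with a generic helper method – a hypothetical utility, just to illustrate what the Kotlin function does:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ApplyDemo {

    // Rough Java analogue of Kotlin's apply: run the block against the receiver,
    // then return the receiver itself for further chaining
    public static <T> T apply(T receiver, Consumer<T> block) {
        block.accept(receiver);
        return receiver;
    }

    public static void main(String[] args) {
        List<String> list = apply(new ArrayList<>(), l -> {
            l.add("spacing");
            l.add("margin");
        });
        System.out.println(list); // the fully configured receiver
    }
}
```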

Factor view and presenter into same file

Kotlin makes code extremely compact, so some files might be only a line long (not counting imports). Opening different files to check related classes is a waste of time. Packages are one way to organize code; in Kotlin, files might be another. For example, a Vaadin view and its presenter can be put into the same file.

class NotificationButton: Button("Click me")

class NotificationButtonPresenter(view: NotificationButton, eventBus: EventBus): Presenter<NotificationButton>(view, eventBus) { ... }

Lambdas make great listeners

As of Java 8, single-method interfaces implemented as anonymous inner classes can be replaced with lambdas. Kotlin offers the same feature, plus:

  • parentheses can be omitted if the lambda is the only argument
  • if the lambda has a single parameter, its default name is it and it doesn’t need to be declared

Both make for a very readable syntax when used in conjunction with the Vaadin API:

    view.addValueChangeListener {
        val rowId = it.property.value
        val rowItem = view.containerDataSource.getItem(rowId)
        eventBus.publish(SESSION, rowItem)
    }

Note: more complex logic should still be put into its own function.


Refactoring code for testability: an example

December 20th, 2015

Working on a legacy project these last weeks gave me plenty of material to write about tests, Mockito and PowerMock. Last week, I wrote about abusing PowerMock. However, this doesn’t mean you should never use PowerMock; only that if its usage is commonplace, it’s a code smell. In this article, I’d like to show an example of how one can refactor legacy code to a more testable design with the temporary help of PowerMock.

Let’s check how we can do that using the following code as an example:

public class CustomersReader {

    public JSONObject read() throws IOException {
        String url = Configuration.getCustomersUrl();
        CloseableHttpClient client = HttpClients.createDefault();
        HttpGet get = new HttpGet(url);
        try (CloseableHttpResponse response = client.execute(get)) {
            HttpEntity entity = response.getEntity();
            String result = EntityUtils.toString(entity);
            return new JSONObject(result);
        }
    }
}

Note that the Configuration class is outside our reach, in a third-party library. Also, for brevity’s sake, I cared only about the happy path; real-world code would probably be much more complex with failure handling.

Obviously, this code reads an HTTP URL from the configuration, queries that URL and returns the output wrapped into a JSONObject. The problem is that it’s pretty hard to test, so we’d better refactor it to a more testable design. However, refactoring is a huge risk, so we first have to create tests to ensure non-regression. Worse, unit tests don’t help in this case, as refactoring will change classes and break existing tests.

Before anything else, we need tests to verify the existing behavior – whatever we can hack together, even if they don’t adhere to good practices. Two alternatives are possible:

  • Fakes: set up an HTTP server to answer the HTTP client and a database/file for the configuration class to read (depending on the exact implementation)
  • Mocks: create mocks and stub their behavior as usual

Though PowerMock is dangerous, it’s less fragile and easier to set up than fakes. So let’s start with PowerMock, but only as a temporary measure. The goal is to refine design and tests in parallel, so that at the end PowerMock can be removed. This test is a good start:

@RunWith(PowerMockRunner.class)
public class CustomersReaderTest {

    @Mock private CloseableHttpClient client;
    @Mock private CloseableHttpResponse response;
    @Mock private HttpEntity entity;

    private CustomersReader customersReader;

    @Before
    public void setUp() {
        customersReader = new CustomersReader();
    }

    @Test
    @PrepareForTest({Configuration.class, HttpClients.class})
    public void should_return_json() throws IOException {
        mockStatic(Configuration.class, HttpClients.class);
        when(Configuration.getCustomersUrl()).thenReturn("crap://test");
        when(HttpClients.createDefault()).thenReturn(client);
        when(client.execute(any(HttpUriRequest.class))).thenReturn(response);
        when(response.getEntity()).thenReturn(entity);
        InputStream stream = new ByteArrayInputStream("{ \"hello\" : \"world\" }".getBytes());
        when(entity.getContent()).thenReturn(stream);
        JSONObject json = customersReader.read();
        assertThat(json.has("hello")).isTrue();
        assertThat(json.get("hello")).isEqualTo("world");
    }
}

At this point, the test harness is in place and the design can change bit by bit, with the tests guarding against regressions.

The first problem is the call to Configuration.getCustomersUrl(). Let’s introduce a ConfigurationService class as a simple broker between the CustomersReader class and the Configuration class.

public class ConfigurationService {

    public String getCustomersUrl() {
        return Configuration.getCustomersUrl();
    }
}

Now, let’s inject this service into our main class:

public class CustomersReader {

    private final ConfigurationService configurationService;

    public CustomersReader(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    public JSONObject read() throws IOException {
        String url = configurationService.getCustomersUrl();
        // Rest of code unchanged
    }
}

Finally, let’s change the test accordingly:

@RunWith(PowerMockRunner.class)
public class CustomersReaderTest {

    @Mock private ConfigurationService configurationService;
    @Mock private CloseableHttpClient client;
    @Mock private CloseableHttpResponse response;
    @Mock private HttpEntity entity;

    private CustomersReader customersReader;

    @Before
    public void setUp() {
        customersReader = new CustomersReader(configurationService);
    }

    @Test
    @PrepareForTest(HttpClients.class)
    public void should_return_json() throws IOException {
        when(configurationService.getCustomersUrl()).thenReturn("crap://test");
        // Rest of code unchanged
    }
}

The next step is to cut the dependency on the static HttpClients.createDefault() call. In order to do that, let’s move this call outside the class and inject the client instance into ours.

public class CustomersReader {

    private final ConfigurationService configurationService;
    private final CloseableHttpClient client;

    public CustomersReader(ConfigurationService configurationService, CloseableHttpClient client) {
        this.configurationService = configurationService;
        this.client = client;
    }

    public JSONObject read() throws IOException {
        String url = configurationService.getCustomersUrl();
        HttpGet get = new HttpGet(url);
        try (CloseableHttpResponse response = client.execute(get)) {
            HttpEntity entity = response.getEntity();
            String result = EntityUtils.toString(entity);
            return new JSONObject(result);
        }
    }
}

The final step is to remove PowerMock altogether. Easy as pie:

@RunWith(MockitoJUnitRunner.class)
public class CustomersReaderTest {

    @Mock private ConfigurationService configurationService;
    @Mock private CloseableHttpClient client;
    @Mock private CloseableHttpResponse response;
    @Mock private HttpEntity entity;

    private CustomersReader customersReader;

    @Before
    public void setUp() {
        customersReader = new CustomersReader(configurationService, client);
    }

    @Test
    public void should_return_json() throws IOException {
        when(configurationService.getCustomersUrl()).thenReturn("crap://test");
        when(client.execute(any(HttpUriRequest.class))).thenReturn(response);
        when(response.getEntity()).thenReturn(entity);
        InputStream stream = new ByteArrayInputStream("{ \"hello\" : \"world\" }".getBytes());
        when(entity.getContent()).thenReturn(stream);
        JSONObject json = customersReader.read();
        assertThat(json.has("hello")).isTrue();
        assertThat(json.get("hello")).isEqualTo("world");
    }
}

No trace of PowerMock whatsoever, neither for mocking static methods nor in the runner. We achieved a 100% testing-friendly design, per our initial goal. Of course, this is a very simple example and real-life code is much more intricate. However, by changing the code bit by bit with the temporary help of PowerMock, it’s possible to achieve a clean design in the end.

The complete source code for this article is available on Github.


On PowerMock abuse

December 13th, 2015

Still working on my legacy application, and still trying to improve the unit tests.

This week, I noticed how much PowerMock was used throughout the tests to mock either static or private methods. In one specific package, removing it improved test execution time by an order of magnitude (from around 20 seconds to 2). That’s clearly abuse. I identified three main reasons for using PowerMock.

Lack of knowledge of the API

There must have been reasons at the time, but some PowerMock uses could have been avoided if developers had just checked the underlying code. One example was the following:

@RunWith(PowerMockRunner.class)
@PrepareForTest(SecurityContextHolder.class)
public class ExampleTest {

    @Mock private SecurityContext securityContext;

    @Before
    public void setUp() throws Exception {
        mockStatic(SecurityContextHolder.class);
        when(SecurityContextHolder.getContext()).thenReturn(securityContext);
    }

    // Rest of the test
}

Just a quick glance at Spring’s SecurityContextHolder reveals it has a setContext() method, so that the previous snippet can easily be replaced with:

@RunWith(MockitoJUnitRunner.class)
public class ExampleTest {

    @Mock private SecurityContext securityContext;

    @Before
    public void setUp() throws Exception {
        SecurityContextHolder.setContext(securityContext);
    }

    // Rest of the test
}

Another common snippet I noticed was the following:

@RunWith(PowerMockRunner.class)
@PrepareForTest(WebApplicationContextUtils.class)
public class ExampleTest {

    @Mock private WebApplicationContext wac;

    @Before
    public void setUp() throws Exception {
        mockStatic(WebApplicationContextUtils.class);
        when(WebApplicationContextUtils.getWebApplicationContext(any(ServletContext.class))).thenReturn(wac);
    }

    // Rest of the test
}

While slightly harder to figure out than the previous example, looking at the source code of WebApplicationContextUtils reveals that it simply looks up the context as an attribute of the servlet context.

The test code can easily be changed to remove PowerMock:

@RunWith(MockitoJUnitRunner.class)
public class ExampleTest {

    @Mock private WebApplicationContext wac;
    @Mock private ServletContext sc;

    @Before
    public void setUp() throws Exception {
        when(sc.getAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE)).thenReturn(wac);
    }

    // Rest of the test
}

Too strict visibility

As seen above, good frameworks – such as Spring – make it easy to use them in tests. Unfortunately, the same cannot always be said of our own code. In this case, I removed PowerMock by widening the visibility of methods and classes from private (or package-private) to public.

You could argue that breaking encapsulation to improve tests is wrong, but in this case, I tend to agree with Uncle Bob:

Tests trump Encapsulation.

In fact, you may think your encapsulation prevents other developers from misusing your code. Yet, you break it with reflection within your tests… What guarantees developers won’t use reflection the same way in production code?

A pragmatic solution is to compromise your design a bit but – and that’s the heart of the matter – document it. The Guava and FEST libraries both provide a @VisibleForTesting annotation that I find quite convenient. The icing on the cake would be for IDEs to recognize it and not propose annotated members in auto-completion inside the src/main/java folder.
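For projects that don’t want a Guava or FEST dependency, such an annotation is trivial to write yourself. A sketch – the Cache class and the retention choice are mine; Guava’s own annotation uses class retention:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Home-made documentation annotation; runtime retention is chosen here so it
// can be detected reflectively (Guava's own annotation uses class retention)
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.CONSTRUCTOR})
@interface VisibleForTesting {}

public class Cache {

    // Package-private instead of private, only so tests in the same package can call it
    @VisibleForTesting
    void evictAll() {
        // eviction logic would go here
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Cache.class.getDeclaredMethod("evictAll")
                .isAnnotationPresent(VisibleForTesting.class)); // prints "true"
    }
}
```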

Direct usage of static methods

This last point has been explained time and time again, but some developers still fail to apply it. Some very common APIs offer only static methods with no alternative, e.g. Locale.getDefault() or Calendar.getInstance(). Such methods shouldn’t be called directly in your production code, or they’ll make your design testable only with PowerMock.

public class UntestableFoo {

    public void doStuff() {
        Calendar cal = Calendar.getInstance();
        // Do stuff on calendar;
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(Calendar.class)
public class UntestableFooTest {

    @Mock
    private Calendar cal;

    private UntestableFoo foo;

    @Before
    public void setUp() {
        mockStatic(Calendar.class);
        when(Calendar.getInstance()).thenReturn(cal);
        // Stub cal accordingly
        foo = new UntestableFoo();
    }

    // Add relevant test methods
}

To fix this design flaw, simply use injection and more precisely constructor injection:

public class TestableFoo {

    private final Calendar calendar;

    public TestableFoo(Calendar calendar) {
        this.calendar = calendar;
    }

    public void doStuff() {
        // Do stuff on calendar;
    }
}

@RunWith(MockitoJUnitRunner.class)
public class TestableFooTest {

    @Mock
    private Calendar cal;

    private TestableFoo foo;

    @Before
    public void setUp() {
        // Stub cal accordingly
        foo = new TestableFoo(cal);
    }

    // Add relevant test methods
}

At this point, the only question left is how to create the instance in the first place. Quite easily, depending on your injection framework: Spring @Bean methods, CDI producer methods, or calling the static method in a factory of your own. Here’s the Spring way:

@Configuration
public class MyConfiguration {

    @Bean
    public Calendar calendar() {
        return Calendar.getInstance();
    }

    @Bean
    public TestableFoo foo() {
        return new TestableFoo(calendar());
    }
}

Conclusion

PowerMock is a very powerful and useful tool, but it should only be used when strictly necessary, as it has a huge impact on test execution time. In this article, I’ve tried to show how you can do without it in 3 different cases: lack of knowledge of the API, too-strict visibility and direct static method calls. If your test codebase is full of PowerMock usages, I suggest you try the aforementioned techniques to get rid of them.

Note: I’ve never been a fan of TDD (probably the subject of another article) but I believe the last 2 points could easily have been avoided if TDD would have been used.

Categories: Java Tags: ,

The danger of @InjectMocks

December 6th, 2015 No comments

Last week, I wrote about the ways to initialize your Mockito mocks and my personal preferences. I'm still working on my legacy project, and I wanted to go deeper into some of Mockito's features that are used.

For example, Mockito’s developers took a real strong opinionated stance on the design: Mockito can only mock public non-final instance methods. That’s something I completely endorse. To go outside this scope, you’d have to use PowerMock (which I wrote about a while ago). That’s good, because for me, spotting PowerMock on the classpath is a sure sign of a code smell. Either you’re using a library that needs some design improvement… or you code definitely should.

However, I think Mockito slipped some dangerous abilities into its API, akin to PowerMock's. One such feature is the ability to inject your dependencies' dependencies through reflection. That's not clear? Let's have an example with the following class hierarchy:

Rules of unit testing would mandate that when testing ToTest, we mock the dependencies DepA and DepB. Let's stretch our example further and assume DepA and DepB are classes that are:

  • Out of our reach, because they come from a third-party library/framework
  • Designed in a way that they are difficult to mock i.e. they require a lot of mocking behavior

In this case, we would not unit test our class alone but integration test the combined behavior of ToTest, DepA and DepB. This is not the Grail, but it's acceptable given the limitations described above.

Now let’s imagine one more thing: DepA and DepB are themselves dependent on other classes. And since they are badly designed, they rely on field injection through @Autowiring – no constructor or even setter injection is available. In this case, one would have to use reflection to set those dependencies, either through the Java API or some utility class like Spring’s ReflectionTestUtils. In both cases, this is extremely fragile as it’s based on the name of the attribute:

DepA depA = new DepA();
DepX depX = new DepX();
DepY depY = new DepY();
ReflectionTestUtils.setField(depA, "depX", depX);
ReflectionTestUtils.setField(depA, "depY", depY);
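Under the hood, such utilities boil down to raw java.lang.reflect calls. A self-contained sketch (with hypothetical DepA/DepX classes mirroring the names above, not Spring's actual implementation) shows exactly why the string-based lookup is fragile:

```java
import java.lang.reflect.Field;

// What ReflectionTestUtils.setField amounts to, stripped down: a string-based
// field lookup plus a forced write. Renaming the 'depX' attribute in DepA
// only fails at runtime, never at compile time.
class ReflectionInjection {

    static class DepX {}

    static class DepA {
        private DepX depX; // field-injected in production, e.g. via @Autowired
        DepX getDepX() { return depX; }
    }

    static void setField(Object target, String name, Object value) {
        try {
            Field field = target.getClass().getDeclaredField(name);
            field.setAccessible(true);
            field.set(target, value);
        } catch (ReflectiveOperationException e) {
            // This is the fragile part: a typo or rename only surfaces here
            throw new IllegalStateException("Cannot set field " + name, e);
        }
    }
}
```

The compiler has no way to check the "depX" string against DepA's actual attributes.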

Mockito offers an easy alternative to this method: by using @InjectMocks, Mockito is able to automatically inject mocked dependencies that are in context.

@RunWith(MockitoJUnitRunner.class)
public class Test {

    @InjectMocks
    private DepA depA = new DepA();

    @Mock
    private DepX depX;

    @Mock
    private DepY depY;

    // tests follow
}

Since depX and depY are mocked by Mockito, they are in context and thus can automatically be injected into depA by Mockito. And because they are mocks, they can be stubbed for behavior.

There are a couple of drawbacks though. The most important one is that you lose explicit injection – also the reason why I don't use autowiring. In this case, your IDE might report depX and depY as unused. Or even worse, changes in the internal structure of DepA won't trigger any warning about unused fields. Finally, as with any reflection, those changes may result in runtime exceptions.

The most important problem of @InjectMocks, however, is that it's very easy to use – too easy. @InjectMocks hides the problems of both field injection and of having too many dependencies. Those should hurt, but with @InjectMocks they don't anymore. When applied to dependencies from libraries – like DepA and DepB – there's no choice; but if you start using it for your own classes, aka ToTest, it sure looks like a code smell.
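For your own classes like ToTest, the alternative is plain constructor injection, which keeps every dependency visible in the signature. A sketch, with hypothetical stand-ins for the DepA/DepB classes discussed above:

```java
// With constructor injection, ToTest's dependencies are part of its public
// contract: adding, removing or renaming one breaks the build, not the runtime.
class ToTest {

    static class DepA {}
    static class DepB {}

    private final DepA depA;
    private final DepB depB;

    ToTest(DepA depA, DepB depB) {
        this.depA = depA;
        this.depB = depB;
    }

    boolean isFullyWired() {
        return depA != null && depB != null;
    }
}
```

In a test, depA and depB would simply be Mockito mocks passed to the constructor, no @InjectMocks required.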

Categories: Java Tags:

Initializing your Mockito mocks

November 29th, 2015 3 comments

Maintenance projects are not fun compared to greenfield projects, but they sure provide most of the meat for this blog. This week saw me checking not the production code but the tests. What you see in tests reveals much of how the production code itself is written. And it's a way to change things for the better, with fewer risks.

At first, I only wanted to remove as much PowerMock uses as possible. Then I found out most Mockito spies were not necessary. Then I found out that Mockito mocks were initialized in 3 different ways in the same file; then usages of both given() and when() in the same class, then usages of when() and doReturn() in the same class… And many more areas that could be improved.

In this post, I’ll limit myself to sum up the 3 ways to initialize your mocks in your test class: pick and choose the one you like best, but please stick with it in your class (if not your project). Consistency is one of the pillar of maintainability.

Reminder

Unit testing is an important foundation of Software Quality (but not the only one!). Since Object-Oriented Design is about multiple components, each with a dedicated responsibility, it's crucial to make sure each of those components performs its tasks in an adequate manner. Hence, we have to feed a component with known inputs and check the validity of the outputs. Testing the component in isolation thus requires a way to replace its dependencies with input providers and output receivers that are under our control. Mockito is one framework (among others) that lets you achieve that.

Creating those mocks in the first place can be done in different ways (I'll use Mockito's wording in place of the standard one).

One-by-one explicit mocking

The first and most straightforward way is to use Mockito’s mock() static method.

public class FooTest {
    private Foo foo;

    @Before
    public void setUp() {
        foo = Mockito.mock(Foo.class);
    }

    @Test
    public void test_foo() {
        Bar bar = Mockito.mock(Bar.class);
        // Testing code go there
    }
}

It might be the most verbose one, but it's also the easiest to understand, as the API is quite explicit. Plus it's not dependent on a testing framework. On the downside, if you've got a high number of dependencies, you need to initialize them one by one – but perhaps having such a high number is a sign of bad design?

All-in-one general mocking call

The second option aims to fix this problem (the number of calls, not the design…). It replaces all method calls by a single call that will mock every required attribute. In order to tell which attribute should be a mock, the @Mock annotation must be used.

public class FooTest {
    @Mock private Foo foo;
    @Mock private Bar bar;

    @Before
    public void setUp() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void test_foo() {
        // Testing code go there
    }
}

Here, the 2 Mockito.mock() calls (or more) were replaced by a single call. On the downside, a local variable had to be moved to an attribute in order for Mockito to inject the mock. Besides, you have to understand how the initMocks() method works.
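To demystify that last point, here's a simplified sketch of the mechanism behind initMocks(this). This is not Mockito's actual code: the FakeMock annotation and the plain new instance stand in for the real @Mock annotation and the generated mock objects. The general idea is just scanning annotated fields and filling them reflectively.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

// Illustration only: Mockito's real initMocks(this) scans the test instance
// for @Mock fields and assigns generated mocks to them.
class MiniInitMocks {

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface FakeMock {}

    static void initMocks(Object testInstance) {
        for (Field field : testInstance.getClass().getDeclaredFields()) {
            if (field.isAnnotationPresent(FakeMock.class)) {
                try {
                    field.setAccessible(true);
                    // Mockito would generate a dynamic proxy here,
                    // not call the real constructor
                    field.set(testInstance,
                            field.getType().getDeclaredConstructor().newInstance());
                } catch (ReflectiveOperationException e) {
                    throw new IllegalStateException(e);
                }
            }
        }
    }

    static class Foo {}

    static class SampleTest {
        @FakeMock Foo foo;
    }
}
```

This also explains the downside above: the fields must be attributes of the test instance, since that's what gets scanned.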

JUnit runner

The last option replaces the explicit initMocks() call with a JUnitRunner.

@RunWith(MockitoJUnitRunner.class)
public class FooTest {
    @Mock private Foo foo;
    @Mock private Bar bar;

    @Test
    public void test_foo() {
        // Testing code go there
    }
}

This option alleviates the pain of writing even the single method call. But it has the downsides of option 2, plus it binds you to JUnit, foregoing other alternatives such as TestNG.

Conclusion

As a maintainer, my preferences are in decreasing order:

  1. explicit method calls: because they’re the most explicit, thus readable
  2. initMocks: because it might hide the fact that you’ve too many dependencies… and having to replace local variables by attributes sucks
  3. runner: for it’s even worse than the previous one. Didn’t you read all my claims about TestNG?

But the most important bit is, whatever option you choose, stick to it in the same class – if not the same project. Nothing makes code more unreadable than a lack of consistency.

Categories: Java Tags: ,

Semantics: state or dependency, not both

November 15th, 2015 3 comments

The more I program, the easier it gets. However, more and more questions arise regarding programming itself.

This week, my thinking was about Object-Oriented Programming. I've been told that OOP is about encapsulating state and behavior in a single isolated unit.

In languages I know, state translates into attributes and behavior into methods.

public class Cat {
    private Color color;
    public void mew() { ... }
}

That said, unit testing requires the ability to test in isolation: this has been enabled by Dependency Injection (whether manual or framework-based). Dependencies are also translated into attributes, though most (if not all) never change through the instance lifecycle.

public class MechanicalCat {
    private final RepairmentManager repairmentMgr;
    public MechanicalCat(RepairmentManager repairmentMgr) {
        this.repairmentMgr = repairmentMgr;
    }
}

Now, we have both state and dependencies handled as attributes. For Java, it makes sense because Dependency Injection became mainstream much later than the language. More modern JVM languages introduced the concept of val, which makes an attribute immutable. Here’s a Kotlin port of the above code:

class Cat(var color: Color) {
    fun mew() { ... }
}

class MechanicalCat(val repairmentMgr:RepairmentManager)

However, these semantics are only about mutability/immutability. Java already had the same semantics with a different syntax – the final keyword.

I was wondering if we could have different semantics for state and for dependencies. Any language creator reading this blog who thinks separating them might be a good idea?


Forget the language, the important is the tooling

November 1st, 2015 No comments

There’s not one week passing without stumbling upon a post claiming language X is superior to all others, and offers you things you cannot do in other languages, even make your kitchenware shine brightier and sometimes even return lost love. I wouldn’t mind these claims, because some features really open my Java developer mind to the lacking of what I’m using now, but in general, they are just bashing another language – usually Java.

For those that love to bitch, here’s a quote that might be of interest:

There are only two kinds of programming languages: those people always bitch about and those nobody uses.

— Bjarne Stroustrup

That said, my current situation spawned some thinking. I was trying to migrate the Android application I'm developing in my spare time to Kotlin. I used Android Studio to do that, for which JetBrains provides a migration tool through the Kotlin plugin. The process is quite straightforward, requiring only minor adjustments for some files. This made me realize that the language is not what matters most – the tooling is. You can have the best language in the world – and it seems each person has his own personal definition of "best" – but if the tooling lacks, it amounts to just nothing.

Take Scala for example. I don't pretend to be an expert in Scala, but I know enough to know it's very powerful. However, if you don't have a tool to handle advanced language features, such as implicit parameters, you're in for a lot of trouble: you'd better have an advanced IDE to display where they come from. To go beyond mere languages, the same can be said about technologies such as Dependency Injection – whether achieved through Spring or CDI – or aspects from AOP.

Another fitting example would be XML. It might not be everyone's opinion, but XML is still very much used in the so-called enterprise world. However, beyond a hundred lines and a few namespaces, XML becomes quite hard to read without help. Come Eclipse or XMLSpy and presto, XML files can be displayed in a very handy tree-like representation.

On the opposite side, successful languages (and technologies) come with tooling. Look around and see for yourself. Still, I don't pretend to know which is the cause and which the consequence: are languages successful because of their tooling, or are tools built around successful languages? Perhaps it's both…

Previously, I didn’t believe Kotlin and its brethren had many chances for success. Given that Jetbrains is behind both Kotlin and IntelliJ IDEA, I believe Kotlin might have a very bright future ahead.

Categories: Development Tags: , ,