Archive for the ‘Java’ Category

Feedback on customizing Vaadin HTML template

March 28th, 2016

Last week, my post was about how to customize the default Vaadin HTML template so that you could, for example, add a lang attribute. I didn’t ask for feedback, but I got some anyway, so let’s use the occasion to analyze it.

First, I must admit that my solution was not Vaadin-esque, as I used AOP to get the job done. My friend Matti Tahvonen from the Vaadin team was kind enough to provide not one but two alternatives to my proposal. Thanks Matti!

Alternative solutions

The first solution – also the last one I received – was submitted by AMahdy AbdElAziz:

Evil but it works and couldn’t be simpler!

The second solution uses Matti’s own framework, Viritin, to achieve this. I haven’t looked at the framework yet (but it’s on my todo list) so I cannot comment on it, but here is the snippet:

The standard Vaadin-esque server-side way is a bit more convoluted:

However, this one I can analyze and comment on 🙂

Flow decomposition

Note: readers who are more interested in possible improvements can skip directly to that section.

The first part is quite easy. It just registers a new SessionInitListener (the proposed code uses a lambda to do that):

The second part happens when a request is made and Vaadin notices a new session must be created:

The end of the previous sequence diagram is designed in a generic way in Vaadin. In our specific case, it’s rendered as such:

Improvements

I think a couple of improvements can be made.

  • The code is awfully verbose – even using Java 8’s lambdas
  • Two objects are created each time a session is initialized: the session listener and the bootstrap listener
  • Declaring the servlet class as an inner class of the UI sends shivers down my spine. Though I understand this is a Gist, this is unfortunately what you get by using the Vaadin Maven archetype. It’s very far from the Single-Responsibility Principle.

Since my initial example uses both Spring Boot and Kotlin, here’s my version:

With Spring Boot, I can manage the SessionInitListener as a singleton-scoped bean. By passing it as a servlet parameter, I can create only a single instance each of SessionInitListener and BootstrapListener. Of course, this is only possible because the language value is set in stone.

Thanks to Kotlin, I co-located the overridden servlet class within the configuration file but outside the configuration class. Since the servlet is used only by the configuration, it makes sense to put them together… but not too much.

Finally, note that SessionInitListener is a functional interface, meaning it has a single method. This single method is equivalent to a function taking a SessionInitEvent and returning nothing. In Kotlin, the signature is (SessionInitEvent) -> Unit. Instead of creating an anonymous inner class, I preferred using a function. It’s not an improvement, but a more functional alternative. At runtime, both alternatives allocate the same amount of memory.

The complete source code can be found on GitHub in the manage-lang branch.

Categories: JavaEE Tags: ,

Customizing Vaadin HTML template

March 20th, 2016

This week, I had an interesting question on Twitter: “How in Vaadin do you add a lang attribute to the html element?”, like this:

<html lang="fr">

While it’s quite easy to customize individual components on the page, the outer html tag is outside our control. In this article, I’ll describe a possible solution to this problem. Note that I think this topic is too specialized for morevaadin.com, which should still be your reference site for all things Vaadin.

My first (wrong) thought was that the html element was generated client-side by the UI widget. That’s not the case – there’s no such widget. In order to find the answer, I had to understand how the framework works. Here’s a short summary of the sequence that occurs when one makes a request to a Vaadin application.

Basically, the Vaadin Servlet delegates to the Vaadin Service. In turn, the service loops over its internal list of Request Handlers until one of them is able to handle the request. By default, the Vaadin Service registers the following handlers:

  1. ConnectorResourceHandler
  2. UnsupportedBrowserHandler: returns a specific message if the browser is not supported by Vaadin
  3. UidlRequestHandler: handles RPC communication between client and server
  4. FileUploadHandler: handles file uploads achieved with the FileUpload component
  5. HeartbeatHandler: handles heartbeat requests
  6. PublishedFileHandler
  7. SessionRequestHandler: delegates in turn to request handlers that have been registered in the session

This is implemented in VaadinService.createRequestHandlers(). However, notice that VaadinService is abstract. There’s a subtle change introduced in the concrete VaadinServletService subclass that is actually used: it registers an additional request handler – the ServletBootstrapHandler – so that Vaadin can run transparently in both a servlet and a portlet context. In the latter case, only a page fragment is generated.

The servlet bootstrap handler is responsible for generating the initial HTML page at the start of the application, including the desired outer html tag. Subsequent requests to the application just update part of the page via AJAX, but the outer HTML won’t change. Thus, this is the class that needs to be updated if one wishes to add a lang attribute. A quick glance at the class makes it very clear that it’s quite hard to extend. Besides, it delegates HTML generation to JSoup, and more precisely to the Document.createShell() static method.

At this point, you have 3 options:

  1. Forget about it – what is it worth anyway?
  2. Rewrite a large portion of the BootstrapHandler and register it before the original in the handlers sequence
  3. Be lazy and apply Occam’s razor: just use a simple AOP aspect

In order to be of any use, I chose the last option. It’s quite straightforward and very concise; you only need the following steps.

Create the aspect itself. Note that I’m using Kotlin to be consistent with the existing project, but Java or any other JVM-based language would work the same.

import org.aspectj.lang.annotation.AfterReturning
import org.aspectj.lang.annotation.Aspect
import org.aspectj.lang.annotation.Pointcut
import org.jsoup.nodes.Document

@Aspect
open class UpdateLangAspect {

    // Matches the static JSoup factory method that creates the empty HTML document
    @Pointcut("execution(static * org.jsoup.nodes.Document.createShell(..))")
    fun callDocumentCreateShell(): Unit {}

    // After the shell is created, set the lang attribute on its html element
    @AfterReturning(pointcut = "callDocumentCreateShell()", returning = "document")
    fun setLangAttribute(document: Document): Unit {
        document.childNode(0).attr("lang", "fr")
    }
}

As the method to instrument is static, it’s not possible to use simple Spring proxies; we need AspectJ class instrumentation via load-time weaving (LTW). In order to activate it in Spring Boot, I just have to update the application.properties file:

spring.aop.proxy-target-class=true

Also, one has to tell AspectJ’s weaver what needs to be instrumented, in META-INF/aop.xml:

<?xml version="1.0" encoding="UTF-8"?>
<aspectj>
  <aspects>
    <aspect name="ch.frankel.blog.bootvaadinkotlin.UpdateLangAspect" />
  </aspects>
  <weaver options="-XnoInline -Xlint:ignore -verbose -showWeaveInfo">
    <include within="org.jsoup.nodes.Document" />
    <include within="ch.frankel.blog.bootvaadinkotlin.UpdateLangAspect" />
  </weaver>
</aspectj> 

The first section declares the aspect; the second lists the classes that need to be instrumented. Note that the weaver section needs to reference the aspect too.

Finally, one has to update the POM so that launching the application through the Spring Boot Maven plugin will also use the aspect.

<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <agent>${settings.localRepository}/org/aspectj/aspectjweaver/1.8.8/aspectjweaver-1.8.8.jar</agent>
  </configuration>
</plugin>

For the sake of completeness, the code is available on GitHub with the manage-langv1 tag.

At this point, generating the page will display the desired change. Job done!

Categories: JavaEE Tags: ,

Log management in Spring Boot

February 21st, 2016

Logging is for sure not a glamorous subject, but it’s a critical one – at least for DevOps and Ops teams. While there is plenty of material on the Web describing how to change your ASCII banner, there is not much on how to efficiently manage the log output.

By default, Spring Boot logs to the console and doesn’t use any file at all.

However, it’s possible to tell Spring Boot to log in an output file. At the simplest level, the path where Spring Boot will put the spring.log file can be easily configured in the application.properties under the logging.path key:

logging.path=/var/log

Note that another alternative is to use the logging.file key in order to not only set the file path but also the file name.

logging.file=/var/log/myapp.log

While this works very well for development purposes, it’s not an acceptable process for the Ops team to unzip the final JAR, update the application.properties file and repackage it – and this for each and every environment.

Spring Boot allows overriding the value set in the packaged file (if any) on the command line, as a standard system property, when launching the JAR:

java -jar -Dlogging.path=/tmp myapp.jar

Finally, it’s also possible to override this value when invoking the Spring Boot Maven plugin on the command line. However, directly using the system property doesn’t work, because the plugin will spawn another JVM. One has to use the run.jvmArguments system property and pass it the wanted value:

mvn spring-boot:run -Drun.jvmArguments="-Dlogging.path=/tmp"

Note that this works for every available property!

Categories: Java Tags: ,

Designing your own Spring Boot starter – part 2

February 14th, 2016

In the last post, I tried to describe the internal workings of a Spring Boot starter. It’s now time to develop our own!

As an example, we will use XStream, a no-fluff, just-stuff XML/JSON (de)serializer from ThoughtWorks. Readers who only use JAXB or Jackson are advised to have a look at XStream: it’s extremely efficient and its API is quite easy to use.

As seen in our last post, the entry-point of a starter lies in the META-INF/spring.factories file. Let’s create such a file, with the adequate content:

org.springframework.boot.autoconfigure.EnableAutoConfiguration=ch.frankel.blog.xstream.XStreamAutoConfiguration

Now, let’s create the class referenced above. As we have seen previously, an auto-configuration class is just a regular configuration class. It’s OK to keep it empty for the time being.

@Configuration
public class XStreamAutoConfiguration {}

XStream is built around the aptly-named XStream class, which is the entry point into its serialization features. ThoughtWorks designed it to avoid static methods, so you need an XStream instance. Creating one is a boring, repetitive task with no added value: it looks like a perfect target for a Spring bean. Let’s create this instance in the auto-configuration class as a singleton bean so that client applications can use it. Our configuration class becomes:

@Configuration
public class XStreamAutoConfiguration {

    @Bean
    public XStream xstream() {
        return new XStream();
    }
}

There are other alternatives to the XStream no-args constructor documented on XStream’s website, e.g. one for StAX, another for JSON, etc. Our starter should allow client applications to use their own instance, thus creating one only if none is provided in the context. That sounds a lot like a conditional on a missing bean:

@Configuration
public class XStreamAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(XStream.class)
    public XStream xstream() {
        return new XStream();
    }
}

XStream is based on converters, a way to convert from one typed value to a JSON/XML formatted string (and the other way around). There are a lot of out-of-the-box pre-registered converters but clients can register their own. In that case, it should be possible to provide them in the context so that they get registered with the provided instance.

To do that, create a @Bean method that takes both an XStream instance and a collection of converters as injected arguments. This method should only be called if there’s at least one Converter instance in the context. This can easily be configured with the @ConditionalOnBean annotation.

@Bean
@ConditionalOnBean(Converter.class)
public Collection<Converter> converters(XStream xstream, Collection<Converter> converters) {
    converters.forEach(xstream::registerConverter);
    return converters;
}

At this point, any custom converter provided in the Spring context by client applications will be registered in the XStream instance.

This concludes this post on creating Spring Boot starters. It’s quite easy and straightforward! Before rolling your own, don’t forget to check existing starters: there are already quite a lot, provided out-of-the-box and by the community.

UPDATE: the code is available on GitHub

Categories: Java Tags:

Designing your own Spring Boot starter – part 1

February 7th, 2016

Since its release, Spring Boot has been a huge success: it boosts developer productivity with its convention-over-configuration philosophy. However, sometimes it just feels too magical. I have always been an opponent of autowiring for this exact reason. And when something doesn’t work, it’s hard to get back on track.

This is the reason why I wanted to dig deeper into the Spring Boot starter mechanism – to understand every nook and cranny. This post is the first part and will focus on analyzing how it works. The second part will be a case study on creating a starter.

spring.factories

At the root of every Spring Boot starter lies the META-INF/spring.factories file. Let’s check the content of this file in the spring-boot-autoconfigure.jar. Here’s an excerpt of it:

...
# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
org.springframework.boot.autoconfigure.admin.SpringApplicationAdminJmxAutoConfiguration,\
org.springframework.boot.autoconfigure.aop.AopAutoConfiguration,\
org.springframework.boot.autoconfigure.amqp.RabbitAutoConfiguration,\
org.springframework.boot.autoconfigure.MessageSourceAutoConfiguration,\
org.springframework.boot.autoconfigure.PropertyPlaceholderAutoConfiguration,\
org.springframework.boot.autoconfigure.batch.BatchAutoConfiguration,\
...

Now let’s have a look at their content. For example, here’s the JpaRepositoriesAutoConfiguration class:

@Configuration
@ConditionalOnBean(DataSource.class)
@ConditionalOnClass(JpaRepository.class)
@ConditionalOnMissingBean({ JpaRepositoryFactoryBean.class,  JpaRepositoryConfigExtension.class })
@ConditionalOnProperty(prefix = "spring.data.jpa.repositories", name = "enabled", havingValue = "true",  matchIfMissing = true)
@Import(JpaRepositoriesAutoConfigureRegistrar.class)
@AutoConfigureAfter(HibernateJpaAutoConfiguration.class)
public class JpaRepositoriesAutoConfiguration {}

There are a couple of interesting things to note:

  1. It’s a standard Spring @Configuration class
  2. The class contains no “real” code but imports another configuration – JpaRepositoriesAutoConfigureRegistrar, which contains the “real” code
  3. There are a couple of @ConditionalOnXXX annotations used
  4. There seems to be order dependency management of some sort with @AutoConfigureAfter

Points 1 and 2 are self-explanatory, point 4 is rather straightforward so let’s focus on point 3.

@Conditional annotations

If you didn’t start working with Spring yesterday, you might know about the @Profile annotation. Profiles are a way to mark a bean-returning method as optional. When a profile is activated, the relevant profile-annotated method is called and the returned bean is contributed to the bean factory.

Some time ago, @Profile looked like that:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface Profile {
    String[] value();
}

Interestingly enough, @Profile has been rewritten to use the new @Conditional annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@Documented
@Conditional(ProfileCondition.class)
public @interface Profile {
    String[] value();
}

Basically, a @Conditional annotation just points to a Condition. In turn, a condition is a functional interface with a single method that returns a boolean: if true, the @Conditional-annotated method is executed by Spring and its returned object is added to the context as a bean.
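The mechanism can be mimicked in a few lines of plain Java – a deliberately simplified sketch, not Spring’s actual classes: a condition gates a bean-producing supplier, and the bean joins the context only when the condition’s single boolean-returning method says so.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified mimic of conditional bean registration (not Spring's real API):
// the producer runs and the bean is contributed only if the condition matches.
public class ConditionalRegistration {

    interface Condition {
        boolean matches();
    }

    static class MiniContext {
        final Map<String, Object> beans = new HashMap<>();

        void register(String name, Condition condition, Supplier<?> producer) {
            if (condition.matches()) {
                beans.put(name, producer.get());
            }
        }
    }

    // Stand-in for OnClassCondition: is the named class on the classpath?
    static Condition onClass(String className) {
        return () -> {
            try {
                Class.forName(className);
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        };
    }

    public static void main(String[] args) {
        MiniContext context = new MiniContext();
        // java.lang.String is always present, so this bean is registered
        context.register("present", onClass("java.lang.String"), () -> "I'm here");
        // this class doesn't exist, so the producer is never even called
        context.register("absent", onClass("com.example.DoesNotExist"), () -> "never");
        System.out.println(context.beans.keySet()); // prints [present]
    }
}
```

Spring’s real conditions receive a ConditionContext with much richer information (classpath, environment, bean factory), but the gating principle is the same.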

There are a lot of conditions available out-of-the-box with Spring Boot:

  • OnBeanCondition: checks if a bean is in the Spring factory
  • OnClassCondition: checks if a class is on the classpath
  • OnExpressionCondition: evaluates a SpEL expression
  • OnJavaCondition: checks the version of Java
  • OnJndiCondition: checks if a JNDI branch exists
  • OnPropertyCondition: checks if a property exists
  • OnResourceCondition: checks if a resource exists
  • OnWebApplicationCondition: checks if a WebApplicationContext exists

Those can be combined with boolean conditions:

  • AllNestedConditions: AND operator
  • AnyNestedCondition: OR operator
  • NoneNestedConditions: NOT operator

Dedicated @Conditional annotations point to those conditions. For example, @ConditionalOnMissingBean points to the OnBeanCondition class.

Time to experiment

Let’s create a configuration class annotated with @Configuration.

The following method will run in all cases:

@Bean
public String string() {
    return "string()";
}

This one won’t, for java.lang.String is part of Java’s API:

@Bean
@ConditionalOnMissingClass("java.lang.String")
public String missingClassString() {
    return "missingClassString()";
}

And this one will, for the same reason:

@Bean
@ConditionalOnClass(String.class)
public String classString() {
    return "classString()";
}

Analysis of the previous configuration

Armed with this new knowledge, let’s analyze the above JpaRepositoriesAutoConfiguration class.

This configuration will be enabled if – and only if – all of the following conditions are met:

  • @ConditionalOnBean(DataSource.class): there’s a bean of type DataSource in the Spring context
  • @ConditionalOnClass(JpaRepository.class): the JpaRepository class is on the classpath, i.e. the project has a dependency on Spring Data JPA
  • @ConditionalOnMissingBean: there are no beans of type JpaRepositoryFactoryBean or JpaRepositoryConfigExtension in the context
  • @ConditionalOnProperty: the spring.data.jpa.repositories.enabled property is either absent or set to true (because of matchIfMissing = true)

Additionally, the configuration will run after HibernateJpaAutoConfiguration (if the latter is referenced).

Conclusion

I hope I demonstrated that Spring Boot starters are no magic. Join me next week for a simple case study.

Categories: Java Tags:

The Java Security Manager: why and how?

January 17th, 2016

Generally, security concerns are boring to developers. I hope this article is entertaining enough for you to read it until the end, since it tackles a very serious issue on the JVM.

Quiz

Last year, at the Joker conference, my colleague Volker Simonis showed a snippet that looked like the following:

import java.lang.reflect.Field;

public class StrangeReflectionExample {

    public Character aCharacter;

    public static void main(String... args) throws Exception {
        StrangeReflectionExample instance = new StrangeReflectionExample();
        Field field = StrangeReflectionExample.class.getField("aCharacter");
        Field type = Field.class.getDeclaredField("type");
        type.setAccessible(true);
        type.set(field, String.class);
        field.set(instance, 'A');
        System.out.println(instance.aCharacter);
    }
}

Now a couple of questions:

  1. Does this code compile?
  2. If yes, does it run?
  3. If yes, what does it display?

Answers below (dots to let you think before checking them).
.
..

….
…..
……
…….
……..
………
……….
………..
…………
………….
…………..
……………
…………….
……………..
This code compiles just fine. In fact, it uses the so-called reflection API (located in the java.lang.reflect package) which is fully part of the JDK.

Executing this code leads to the following exception:

Exception in thread "main" java.lang.IllegalArgumentException: Can not set java.lang.String field ch.frankel.blog.securitymanager.StrangeReflectionExample.aCharacter to java.lang.Character
	at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:167)
	at sun.reflect.UnsafeFieldAccessorImpl.throwSetIllegalArgumentException(UnsafeFieldAccessorImpl.java:171)
	at sun.reflect.UnsafeObjectFieldAccessorImpl.set(UnsafeObjectFieldAccessorImpl.java:81)
	at java.lang.reflect.Field.set(Field.java:764)
	at ch.frankel.blog.securitymanager.StrangeReflectionExample.main(StrangeReflectionExample.java:15)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:497)
	at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)

So, despite the fact that we declared the type of the aCharacter attribute as Character at development time, the reflection API is able to change it to String at runtime! Hence, trying to set the field to 'A' fails.

Avoiding nasty surprises with the Security Manager

Reflection is not the only risky operation one might want to keep in check on the JVM. Reading or writing a file also belongs to the set of potentially dangerous operations. Fortunately, the JVM offers a system to restrict those operations. Unfortunately, it’s not enabled by default.

In order to activate the SecurityManager, just launch the JVM with the java.security.manager system property, i.e. java -Djava.security.manager. At this point, the JVM will use the default JRE policy, configured in the file located at %JAVA_HOME%/lib/security/java.policy (for Java 8). Here’s a sample of this file:

grant codeBase "file:${{java.ext.dirs}}/*" {
        permission java.security.AllPermission;
};

grant {
        permission java.lang.RuntimePermission "stopThread";
        permission java.net.SocketPermission "localhost:0", "listen";
        permission java.util.PropertyPermission "java.version", "read";
        permission java.util.PropertyPermission "java.vendor", "read";
        ...
}

The first section – grant codeBase – is about which code can be executed; the second – grant – is about specific permissions.

Regarding the reflection problem mentioned above, the second part is the most relevant. One can read the following in the source of the AccessibleObject.setAccessible() method:

SecurityManager sm = System.getSecurityManager();
if (sm != null) sm.checkPermission(ACCESS_PERMISSION);
setAccessible0(this, flag);

Every sensitive method of the Java API performs the same check through the Security Manager. You can verify that for yourself in the following methods:

  • Thread.stop()
  • Socket.bind()
  • System.getProperty()
  • etc.

Using an alternate java.policy file

Using the JRE’s policy file is not convenient when one uses the same JRE for different applications. Given the current micro-service trend, this might not be the case. Still, with automated provisioning, it might be more convenient to always provision the same JRE over and over and let each application provide its own specific policy file.

To add another policy file in addition to the default JRE’s, thus adding more permissions, launch the JVM with:
java -Djava.security.manager -Djava.security.policy=/path/to/other.policy

To replace the default policy file with your own, launch the JVM with:
java -Djava.security.manager -Djava.security.policy==/path/to/other.policy
Note the double equal sign.

Configuring your own policy file

Security configuration can be based either on a:

Black list
In a black list scenario, everything is allowed, and exceptions can be configured to disallow some operations.
White list
Conversely, in a white list scenario, only operations that are explicitly configured are allowed; by default, all operations are disallowed.

If you want to create your own policy file, it’s suggested you start with a blank one and then launch your app. As soon as you get a security exception, add the necessary permission to the policy. Repeat until you have all the necessary permissions. Following this process will leave you with only the minimal set of permissions needed to run the application, thus implementing the least-privilege security principle.
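A first iteration of this process might yield a policy file like the following. The permissions shown are purely illustrative – yours will depend on exactly which operations the SecurityManager rejects for your application:

```
grant {
    // added after a SecurityException on System.getProperty("user.dir")
    permission java.util.PropertyPermission "user.dir", "read";
    // added after a SecurityException when opening the application's log file
    permission java.io.FilePermission "/var/log/myapp.log", "write";
};
```

Keeping a comment next to each permission explaining why it was added makes later audits of the policy much easier.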

Note that if you’re using a container or a server, you’ll probably require a lot of those permissions, but this is the price to pay to secure your JVM against abuse.

Conclusion

I never checked policy files in production, but since I never had any complaints, I assume the JVM’s policy was never secured. This is a very serious problem! I hope this article will raise awareness regarding that lack of hardening – especially since with the latest JVMs, you can create and compile Java code on the fly, leading to even more threats.

Categories: Java Tags: ,

Playing with Spring Boot, Vaadin and Kotlin

January 10th, 2016

It’s no mystery that I’m a fan of both Spring Boot and Vaadin. When the Spring Boot Vaadin add-on became GA, I was ecstatic. Lately, I became interested in Kotlin, a JVM-based language offered by JetBrains. Thus, I wanted to check how I could develop a small Spring Boot Vaadin demo app in Kotlin – and learn something in the process. Here are my discoveries, in no particular order.

Spring needs non-final stuff

It seems Spring needs @Configuration classes and @Bean methods to be non-final. As my previous Spring projects were in Java, I never noticed, because I never use the final keyword. However, Kotlin classes and methods are final by default: hence, you have to use the open keyword in Kotlin.

@Configuration
open class AppConfiguration {
    @Bean
    @UIScope
    open fun mainScreen() = MainScreen()
}

No main class

Spring Boot applications require a class with a public static void main(String... args) method that references a class annotated with @SpringBootApplication. In general, those two classes are the same.

Kotlin has no concept of such static methods, but offers top-level functions and objects. I tried to be creative by having an annotated object referenced by a top-level function, both in the same file.

@SpringBootApplication
open class BootVaadinKotlinDemoApplication

fun main(vararg args: String) {
    SpringApplication.run(arrayOf(BootVaadinKotlinDemoApplication::class.java), args)
}

Different entry-point reference

Since the main function is not attached to a class, there’s no main class to reference in order to launch the application inside the IDE. Yet, Kotlin creates a .class with the same name as the file name, suffixed with Kt.

My file is named BootVaadinKotlinDemoApplication.kt; hence the generated class name is BootVaadinKotlinDemoApplicationKt.class. This is the class to reference to launch the application in the IDE. Note that there’s no need to bother about that when using mvn spring-boot:run on the command line, as Spring Boot scans for the main method.

Short and readable bean definition

Java syntax is seen as verbose. I don’t think it’s a big issue, as its redundancy is very low compared to the amount of useful code. However, in some cases, even I have to admit it’s a lot of ceremony for not much. One such case is defining beans with the Java syntax:

@Bean @UIScope
public MainScreen mainScreen() {
    return new MainScreen();
}

Kotlin cuts through all of the ceremony to keep only the meat:

  • No semicolon required
  • No new keyword
  • Block replaced with an equal sign since the body consists of a single expression
  • No return keyword required as there’s no block
  • No return type required as it can easily be inferred

@Bean @UIScope
fun mainScreen() = MainScreen()

Spring configuration files are generally quite long and hard to read. Kotlin makes them much shorter, without sacrificing readability.

The init block is your friend

In Java, the constructor is used for different operations:

  1. storing arguments into attributes
  2. passing arguments to the super constructor
  3. other initialization code

The first operation is a no-brainer, because attributes are part of the class signature in Kotlin. Likewise, calling the super constructor is handled by the class signature. The rest of the initialization code is not part of the class signature and should go into an init block. Most applications do not need this part, but Vaadin needs to set up the layout and related stuff.

class MainScreenPresenter(tablePresenter: TablePresenter,
                          buttonPresenter: NotificationButtonPresenter,
                          view: MainScreen, eventBus: EventBus) : Presenter<MainScreen>(view, eventBus) {

    init {
        view.setComponents(tablePresenter.view, buttonPresenter.view)
    }
}

Use the apply method

Kotlin has a standard library offering small dedicated functions. One of them is apply, defined as inline fun T.apply(f: T.() -> Unit): T (source). It’s an extension function, meaning every type has it as soon as it’s imported into scope. This function takes as argument a function that returns nothing. Inside this function, the object that has been apply-ed is accessible as this (and this is implicit, as in standard Java code). It allows code like this:

VerticalLayout(button, table).apply {
    setSpacing(true)
    setMargin(true)
    setSizeFull()
}

Factor view and presenter into same file

Kotlin makes code extremely compact; some files might be only a line long (not counting imports). Opening different files to check related classes is wasteful. Packages are one way to organize your code; I think files might be another in Kotlin. For example, Vaadin views and presenters can be put into the same file.

class NotificationButton: Button("Click me")

class NotificationButtonPresenter(view: NotificationButton, eventBus: EventBus): Presenter<NotificationButton>(view, eventBus) { ... }

Lambdas make great listeners

As of Java 8, single-method interfaces implemented as anonymous inner classes can be replaced with lambdas. Kotlin offers the same feature, plus:

  • it allows omitting parentheses if the lambda is the only argument
  • if the lambda has a single argument, its default name is it and it doesn’t need to be declared

Both make for a very readable syntax when used in conjunction with the Vaadin API:

    view.addValueChangeListener {
        val rowId = it.property.value
        val rowItem = view.containerDataSource.getItem(rowId)
        eventBus.publish(SESSION, rowItem)
    }

Note: still, more complex logic should be put into its own function.

Categories: JavaEE Tags: , ,

Refactoring code for testability: an example

December 20th, 2015

Working on a legacy project these last weeks gave me plenty of material to write about tests, Mockito and PowerMock. Last week, I wrote about abusing PowerMock. However, this doesn’t mean that you should never use PowerMock; only that if its usage is commonplace, it’s a code smell. In this article, I’d like to show an example of how one can refactor legacy code to a more testable design with the temporary help of PowerMock.

Let’s check how we can do that using the following code as an example:

import java.io.IOException;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.json.JSONObject;

public class CustomersReader {

    public JSONObject read() throws IOException {
        String url = Configuration.getCustomersUrl();
        CloseableHttpClient client = HttpClients.createDefault();
        HttpGet get = new HttpGet(url);
        try (CloseableHttpResponse response = client.execute(get)) {
            HttpEntity entity = response.getEntity();
            String result = EntityUtils.toString(entity);
            return new JSONObject(result);
        }
    }
}

Note that the Configuration class is outside our reach, in a third-party library. Also, for brevity’s sake, I cared only about the happy path; real-world code would probably be much more complex with failure handling.

Obviously, this code reads an HTTP URL from the configuration, fetches the URL and returns its output wrapped in a JSONObject. The problem is that it’s pretty hard to test, so we’d better refactor it to a more testable design. However, refactoring is a huge risk, so we first have to create tests to ensure non-regression. Worse, unit tests do not help in this case, as refactoring will change classes and break existing tests.

Before anything, we need tests to verify the existing behavior – whatever we can hack together, even if they don’t adhere to good practices. Two alternatives are possible:

  • Fakes: set up an HTTP server to answer the HTTP client and a database/file for the configuration class to read (depending on the exact implementation)
  • Mocks: create mocks and stub their behavior as usual
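
For the record, the fake alternative is less magical than it sounds: the JDK ships with a minimal HTTP server (com.sun.net.httpserver) that can stand in for the remote endpoint. The class name and the /customers route below are made up for illustration:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A fake server bound to a random free port; the URL returned by the (faked)
// configuration would then point to http://localhost:<port>/customers
public class FakeCustomersServer {

    public static HttpServer start(String json) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/customers", exchange -> {
            byte[] body = json.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```

The test would start the server in @Before, point the configuration at it, and stop it in @After – no mocking framework involved, at the price of more setup code.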

Though PowerMock is dangerous, it’s less fragile and easier to set up than fakes. So let’s start with PowerMock, but only as a temporary measure. The goal is to refine both design and tests in parallel, so that at the end, PowerMock will be removed. This test is a good start:

@RunWith(PowerMockRunner.class)
public class CustomersReaderTest {

    @Mock private CloseableHttpClient client;
    @Mock private CloseableHttpResponse response;
    @Mock private HttpEntity entity;

    private CustomersReader customersReader;

    @Before
    public void setUp() {
        customersReader = new CustomersReader();
    }

    @Test
    @PrepareForTest({Configuration.class, HttpClients.class})
    public void should_return_json() throws IOException {
        mockStatic(Configuration.class, HttpClients.class);
        when(Configuration.getCustomersUrl()).thenReturn("crap://test");
        when(HttpClients.createDefault()).thenReturn(client);
        when(client.execute(any(HttpUriRequest.class))).thenReturn(response);
        when(response.getEntity()).thenReturn(entity);
        InputStream stream = new ByteArrayInputStream("{ \"hello\" : \"world\" }".getBytes());
        when(entity.getContent()).thenReturn(stream);
        JSONObject json = customersReader.read();
        assertThat(json.has("hello")).isTrue();
        assertThat(json.get("hello")).isEqualTo("world");
    }
}

At this point, the test harness is in place and the design can change bit by bit (to ensure non-regression).

The first problem is the call to Configuration.getCustomersUrl(). Let’s introduce a ConfigurationService class as a simple broker between the CustomersReader class and the Configuration class.

public class ConfigurationService {

    public String getCustomersUrl() {
        return Configuration.getCustomersUrl();
    }
}

Now, let’s inject this service into our main class:

public class CustomersReader {

    private final ConfigurationService configurationService;

    public CustomersReader(ConfigurationService configurationService) {
        this.configurationService = configurationService;
    }

    public JSONObject read() throws IOException {
        String url = configurationService.getCustomersUrl();
        // Rest of code unchanged
    }
}

Finally, let’s change the test accordingly:

@RunWith(PowerMockRunner.class)
public class CustomersReaderTest {

    @Mock private ConfigurationService configurationService;
    @Mock private CloseableHttpClient client;
    @Mock private CloseableHttpResponse response;
    @Mock private HttpEntity entity;

    private CustomersReader customersReader;

    @Before
    public void setUp() {
        customersReader = new CustomersReader(configurationService);
    }

    @Test
    @PrepareForTest(HttpClients.class)
    public void should_return_json() throws IOException {
        when(configurationService.getCustomersUrl()).thenReturn("crap://test");
        // Rest of code unchanged
    }
}

The next step is to cut the dependency on the static HttpClients.createDefault() call. In order to do that, let’s delegate this call to another class and inject the resulting instance into ours.

public class CustomersReader {

    private final ConfigurationService configurationService;
    private final CloseableHttpClient client;

    public CustomersReader(ConfigurationService configurationService, CloseableHttpClient client) {
        this.configurationService = configurationService;
        this.client = client;
    }

    public JSONObject read() throws IOException {
        String url = configurationService.getCustomersUrl();
        HttpGet get = new HttpGet(url);
        try (CloseableHttpResponse response = client.execute(get)) {
            HttpEntity entity = response.getEntity();
            String result = EntityUtils.toString(entity);
            return new JSONObject(result);
        }
    }
}

The final step is to remove PowerMock altogether. Easy as pie:

@RunWith(MockitoJUnitRunner.class)
public class CustomersReaderTest {

    @Mock private ConfigurationService configurationService;
    @Mock private CloseableHttpClient client;
    @Mock private CloseableHttpResponse response;
    @Mock private HttpEntity entity;

    private CustomersReader customersReader;

    @Before
    public void setUp() {
        customersReader = new CustomersReader(configurationService, client);
    }

    @Test
    public void should_return_json() throws IOException {
        when(configurationService.getCustomersUrl()).thenReturn("crap://test");
        when(client.execute(any(HttpUriRequest.class))).thenReturn(response);
        when(response.getEntity()).thenReturn(entity);
        InputStream stream = new ByteArrayInputStream("{ \"hello\" : \"world\" }".getBytes());
        when(entity.getContent()).thenReturn(stream);
        JSONObject json = customersReader.read();
        assertThat(json.has("hello")).isTrue();
        assertThat(json.get("hello")).isEqualTo("world");
    }
}

No trace of PowerMock whatsoever, neither in mocking static methods nor in the runner. We achieved a 100% testing-friendly design, according to our initial goal. Of course, this is a very simple example; real-life code is much more intricate. However, by changing code bit by bit with the help of PowerMock, it’s possible to achieve a clean design in the end.

The complete source code for this article is available on GitHub.


On PowerMock abuse

December 13th, 2015 No comments

Still working on my legacy application, and still trying to improve unit tests.

This week, I noticed how much PowerMock was used throughout the tests, to mock either static or private methods. In one specific package, removing it improved test execution time by an order of magnitude (from around 20 seconds to 2). That’s clearly abuse: I saw three main reasons for using PowerMock.

Lack of knowledge of the API

There probably were good reasons at the time, but some of the PowerMock uses could have been avoided if developers had just checked the underlying code. One example of such code was the following:

@RunWith(PowerMockRunner.class)
@PrepareForTest(SecurityContextHolder.class)
public class ExampleTest {

    @Mock private SecurityContext securityContext;

    @Before
    public void setUp() throws Exception {
        mockStatic(SecurityContextHolder.class);
        when(SecurityContextHolder.getContext()).thenReturn(securityContext);
    }

    // Rest of the test
}

Just a quick glance at Spring’s SecurityContextHolder reveals it has a setContext() method, so the previous snippet can easily be replaced with:

@RunWith(MockitoJUnitRunner.class)
public class ExampleTest {

    @Mock private SecurityContext securityContext;

    @Before
    public void setUp() throws Exception {
        SecurityContextHolder.setContext(securityContext);
    }

    // Rest of the test
}

Another common snippet I noticed was the following:

@RunWith(PowerMockRunner.class)
@PrepareForTest(WebApplicationContextUtils.class)
public class ExampleTest {

    @Mock private WebApplicationContext wac;

    @Before
    public void setUp() throws Exception {
        mockStatic(WebApplicationContextUtils.class);
        when(WebApplicationContextUtils.getWebApplicationContext(any(ServletContext.class))).thenReturn(wac);
    }

    // Rest of the test
}

While slightly harder than the previous example, looking at the source code of WebApplicationContextUtils reveals that it fetches the context from a well-known servlet context attribute.

The testing code can easily be changed to remove PowerMock:

@RunWith(MockitoJUnitRunner.class)
public class ExampleTest {

    @Mock private WebApplicationContext wac;
    @Mock private ServletContext sc;

    @Before
    public void setUp() throws Exception {
        when(sc.getAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE)).thenReturn(wac);
    }

    // Rest of the test
}

Too strict visibility

As seen above, good frameworks, such as Spring, make it easy to use them in tests. Unfortunately, the same cannot always be said of our own code. In this case, I removed PowerMock by widening the visibility of methods and classes from private (or package) to public.

You could argue that breaking encapsulation to improve tests is wrong, but in this case, I tend to agree with Uncle Bob:

Tests trump Encapsulation.

In fact, you think your encapsulation prevents other developers from misusing your code, yet you break it with reflection within your tests… What guarantees that developers won’t use reflection the same way in production code?

A pragmatic solution is to compromise your design a bit but – and that’s the heart of the matter – document it. Both the Guava and FEST libraries provide a @VisibleForTesting annotation that I find quite convenient. The icing on the cake would be for IDEs to recognize it and not propose auto-completion in the src/main/java folder.
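
For instance, a widened method would look like the sketch below. The PriceCalculator class is invented for the example, and Guava’s annotation is replaced by a local stand-in so the snippet compiles without the library on the classpath:

```java
// Stand-in for Guava's com.google.common.annotations.VisibleForTesting
@interface VisibleForTesting {}

public class PriceCalculator {

    public double priceWithTax(double price) {
        return price + computeTax(price);
    }

    // Widened from private to package-private so tests in the same package
    // can call it; the annotation documents why the visibility was relaxed
    @VisibleForTesting
    double computeTax(double price) {
        return price * 0.2;
    }
}
```

The design compromise is visible and intentional: production code in other packages still only sees priceWithTax().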

Direct usage of static methods

This last point has been explained time and time again, but some developers still fail to apply it correctly. Some very common APIs offer only static methods and have no alternatives, e.g. Locale.getDefault() or Calendar.getInstance(). Such methods shouldn’t be called directly in your production code, or they’ll make your design testable only with PowerMock.

public class UntestableFoo {

    public void doStuff() {
        Calendar cal = Calendar.getInstance();
        // Do stuff on calendar;
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(Calendar.class)
public class UntestableFooTest {

    @Mock
    private Calendar cal;

    private UntestableFoo foo;

    @Before
    public void setUp() {
        mockStatic(Calendar.class);
        when(Calendar.getInstance()).thenReturn(cal);
        // Stub cal accordingly
        foo = new UntestableFoo();
    }

    // Add relevant test methods
}

To fix this design flaw, simply use injection and more precisely constructor injection:

public class TestableFoo {

    private final Calendar calendar;

    public TestableFoo(Calendar calendar) {
        this.calendar = calendar;
    }

    public void doStuff() {
        // Do stuff on calendar;
    }
}

@RunWith(MockitoJUnitRunner.class)
public class TestableFooTest {

    @Mock
    private Calendar cal;

    private TestableFoo foo;

    @Before
    public void setUp() {
        // Stub cal accordingly
        foo = new TestableFoo(cal);
    }

    // Add relevant test methods
}

At this point, the only question left is how to create the instance in the first place. Quite easily, depending on your injection framework: Spring @Bean methods, CDI producer methods, or calling the getInstance() method in one of your own. Here’s the Spring way:

@Configuration
public class MyConfiguration {

    @Bean
    public Calendar calendar() {
        return Calendar.getInstance();
    }

    @Bean
    public TestableFoo foo() {
        return new TestableFoo(calendar());
    }
}
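
The same injection principle applies to Java 8’s java.time API: inject a Clock instead of calling LocalDate.now() directly, and tests can pin the time with Clock.fixed(). The WeekendChecker class below is a made-up illustration of the technique:

```java
import java.time.Clock;
import java.time.DayOfWeek;
import java.time.LocalDate;

public class WeekendChecker {

    private final Clock clock;

    // The time source is an explicit dependency instead of a hidden static call
    public WeekendChecker(Clock clock) {
        this.clock = clock;
    }

    public boolean isWeekend() {
        DayOfWeek day = LocalDate.now(clock).getDayOfWeek();
        return day == DayOfWeek.SATURDAY || day == DayOfWeek.SUNDAY;
    }
}
```

In production, wire Clock.systemDefaultZone() as a @Bean; in tests, pass Clock.fixed(…) – no mocking needed at all.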

Conclusion

PowerMock is a very powerful and useful tool. But it should only be used when strictly necessary, as it has a huge impact on test execution time. In this article, I’ve tried to show how you can do without it in 3 different use-cases: lack of knowledge of the API, too strict visibility and direct static method calls. If you notice your test codebase is full of PowerMock usages, I suggest you try the aforementioned techniques to get rid of them.

Note: I’ve never been a fan of TDD (probably the subject of another article) but I believe the last 2 points could easily have been avoided had TDD been used.


The danger of @InjectMocks

December 6th, 2015 No comments

Last week, I wrote about the ways to initialize your Mockito mocks and my personal preferences. I’m still working on my legacy project, and I wanted to go deeper into some of the Mockito features that are used there.

For example, Mockito’s developers took a really strong, opinionated stance on the design: Mockito can only mock public non-final instance methods. That’s something I completely endorse. To go outside this scope, you’d have to use PowerMock (which I wrote about a while ago). That’s good, because for me, spotting PowerMock on the classpath is a sure sign of a code smell. Either you’re using a library that needs some design improvement… or your code definitely does.

However, I think Mockito slipped some dangerous abilities into its API, akin to PowerMock. One such feature is the ability to inject your dependencies’ dependencies through reflection. That’s not clear? Let’s have an example with the following class hierarchy:

Rules of unit testing would mandate that when testing ToTest, we mock the dependencies DepA and DepB. Let’s stretch our example further: DepA and DepB are classes that are:

  • Out of our reach, because they come from a third-party library/framework
  • Designed in a way that they are difficult to mock i.e. they require a lot of mocking behavior

In this case, we would not unit test our class alone but integration test the behavior of ToTest, DepA and DepB together. This is not the Grail, but it’s acceptable because of the limitations described above.

Now let’s imagine one more thing: DepA and DepB are themselves dependent on other classes. And since they are badly designed, they rely on field injection through @Autowired – no constructor or even setter injection is available. In this case, one would have to use reflection to set those dependencies, either through the plain Java API or some utility class like Spring’s ReflectionTestUtils. In both cases, this is extremely fragile, as it’s based on the name of the attribute:

DepA depA = new DepA();
DepX depX = new DepX();
DepY depY = new DepY();
ReflectionTestUtils.setField(depA, "depX", depX);
ReflectionTestUtils.setField(depA, "depY", depY);
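
To see why this is fragile, here’s roughly what such a utility does under the hood through the plain Reflection API (a bare-bones sketch, not Spring’s actual implementation, which also walks the class hierarchy):

```java
import java.lang.reflect.Field;

public class FieldInjector {

    // Looks the field up by its string name: rename the attribute and this
    // compiles fine but blows up at runtime with NoSuchFieldException
    public static void setField(Object target, String name, Object value) throws Exception {
        Field field = target.getClass().getDeclaredField(name);
        field.setAccessible(true);
        field.set(target, value);
    }
}
```

The compiler cannot help: the link between the test and the field exists only as a string literal.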

Mockito offers an easy alternative to this method: by using @InjectMocks, Mockito is able to automatically inject mocked dependencies that are in context.

@RunWith(MockitoJUnitRunner.class)
public class DepATest {

    @InjectMocks
    private DepA depA = new DepA();

    @Mock
    private DepX depX;

    @Mock
    private DepY depY;

    // tests follow
}

Since depX and depY are mocked by Mockito, they are in context and thus can automatically be injected into depA by Mockito. And because they are mocks, they can be stubbed for behavior.

There are a couple of drawbacks, though. The most important one is that you lose explicit injection – also the reason why I don’t use autowiring. In this case, your IDE might report depX and depY as unused. Or even worse, changes in the structure of DepA won’t trigger any warning about unused fields. Finally, as with any reflection, those changes may result in runtime exceptions.

The biggest problem with @InjectMocks, however, is that it’s very easy to use, too easy… @InjectMocks hides the problems of both field injection and too many dependencies. Those should hurt, but they don’t anymore when using @InjectMocks. For dependencies from libraries – like DepA and DepB – there’s no other choice; but if you start using it for your own classes, like ToTest, it surely looks like a code smell.
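
By contrast, if DepA were under our control and redesigned with constructor injection, neither reflection nor @InjectMocks would be needed – plain hand-written stubs do the job. The fetch()/store()/copy() methods below are invented for the sake of the example:

```java
interface DepX { String fetch(); }
interface DepY { void store(String value); }

class DepA {

    private final DepX depX;
    private final DepY depY;

    // Dependencies are explicit: a test constructs DepA with stubs,
    // and the compiler flags any structural change immediately
    DepA(DepX depX, DepY depY) {
        this.depX = depX;
        this.depY = depY;
    }

    String copy() {
        String value = depX.fetch();
        depY.store(value);
        return value;
    }
}
```

A test then reads new DepA(stubX, stubY) – no magic, and adding a third dependency breaks the test at compile time instead of at runtime.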
