Posts Tagged ‘Spring Boot’
  • Migrating a Spring Boot application to Java 9 - Modules

    Java 9 Logo

Last week, I tried to link:/migrating-to-java-9/1/[make a Spring Boot app^] - the famous Pet Clinic - Java 9 compatible. It was not easy. I had to let go of a lot of features along the way, and all in all, the only benefit I got was the improved String memory management.

    This week, I want to continue the migration by fully embracing the Java 9 module system.

    == Configuring module meta-data

    Module information in Java 9 is implemented through a file. The first step is to create such a file at the root of the source directory, with the module name:


[source,java]
----
module org.springframework.samples.petclinic {
}
----

The rest of the journey can be heaven or hell. I'm fortunate to benefit from an IntelliJ IDEA license: the IDE tells exactly which class it cannot read, and a wizard lets you put the relevant module in the module file. In the end, it looks like this:


[source,java]
----
module org.springframework.samples.petclinic {
    requires java.xml.bind;
    requires javax.transaction.api;
    requires validation.api;
    requires hibernate.jpa;
    requires hibernate.validator;
    requires spring.beans;
    requires spring.core;
    requires spring.context;
    requires spring.tx;
    requires spring.web;
    requires spring.webmvc;
    requires;
    requires;
    requires spring.boot;
    requires spring.boot.autoconfigure;
    requires cache.api;
}
----

    Note that module configuration in the maven-compiler-plugin and maven-surefire-plugin can be removed.

=== Configuration, the unassisted way

    If you happen to be in a less than ideal environment, the process is the following:

. Run `mvn clean test`
. Analyze the error in the log to get the offending package
. Locate the JAR of said package
. If the JAR is a module, add the module name to the list of required modules
. If not, compute the automatic module name, and add it to the list of required modules

    For example:

[source]
----
[ERROR] ~/spring-petclinic/src/main/java/org/springframework/samples/petclinic/system/[21,16] package javax.cache is not visible
[ERROR]   (package javax.cache is declared in the unnamed module, but module javax.cache does not read it)
----

javax.cache is located in cache-api-1.0.0.jar. It's not a module, since there's no module-info in the JAR. The automatic module name is cache.api. Write it as a requires directive in the module file. Rinse and repeat.
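The derivation rule behind this can be sketched in a few lines. The following is a simplified sketch of the specification (drop the extension, drop the trailing version, turn separator runs into dots); the real rules also honor an Automatic-Module-Name manifest entry, which takes precedence:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Simplified sketch of the automatic-module-name derivation rule from the
// java.lang.module specification.
public class AutomaticModuleName {

    private static final Pattern TRAILING_VERSION = Pattern.compile("-(\\d+(\\.|$))");

    public static String fromJarName(String jarName) {
        // 1. Drop the .jar extension
        String name = jarName.substring(0, jarName.length() - ".jar".length());
        // 2. Drop the trailing version, e.g. "-1.0.0"
        Matcher matcher = TRAILING_VERSION.matcher(name);
        if (matcher.find()) {
            name = name.substring(0, matcher.start());
        }
        // 3. Replace every remaining non-alphanumeric run with a single dot
        name = name.replaceAll("[^A-Za-z0-9]+", ".");
        // 4. Strip any leading or trailing dot
        return name.replaceAll("^\\.|\\.$", "");
    }

    public static void main(String[] args) {
        System.out.println(fromJarName("cache-api-1.0.0.jar")); // prints cache.api
    }
}
```

This explains why cache-api-1.0.0.jar maps to cache.api, and hibernate-validator to hibernate.validator.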

    == ASM failure

Since the first part of this post, I've been made aware that[Spring Boot 1.5^] won't be made Java 9-compatible: the fix is to move to Spring Boot 2. Let's do it.

    Bumping Spring Boot to 2.0.0.M5 requires some changes in the module dependencies:

    • hibernate.validator to org.hibernate.validator
    • validation.api to java.validation

    And just when you think it might work:

[source]
----
Caused by: java.lang.RuntimeException
    at org.springframework.asm.ClassVisitor.visitModule(
----

    This issue has already been[documented^]. At this point, explicitly declaring the main class resolves the issue.


[source,xml]
----
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <jvmArguments>--add-modules java.xml.bind</jvmArguments>
        <mainClass>org.springframework.samples.petclinic.PetClinicApplication</mainClass>
    </configuration>
    ...
</plugin>
----

    == Javassist failure

    The app is now ready to be tested with mvn clean spring-boot:run. Unfortunately, a new exception comes our way:

[source]
----
2017-10-16 17:20:22.552  INFO 45661 --- [           main] utoConfigurationReportLoggingInitializer :

Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
2017-10-16 17:20:22.561 ERROR 45661 --- [           main] o.s.boot.SpringApplication               : Application startup failed

org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactory' defined in class path resource [org/springframework/boot/autoconfigure/orm/jpa/HibernateJpaAutoConfiguration.class]: Invocation of init method failed; nested exception is org.hibernate.boot.archive.spi.ArchiveException: Could not build ClassFile
----

A quick search redirects to[an incompatibility^] between Java 9 and Javassist. Javassist is the culprit here. The dependency is required by Spring Data JPA, transitively via Hibernate. To fix it, exclude the dependency and add the latest version explicitly:


[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.javassist</groupId>
            <artifactId>javassist</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.javassist</groupId>
    <artifactId>javassist</artifactId>
    <version>3.22.0-GA</version>
    <scope>runtime</scope>
</dependency>
----

    Fortunately, versions are compatible - at least for our usage.

    == It works!

    We did it! If you arrived at this point, you deserve a pat on the shoulder, a beer, or whatever you think you deserve.

    Icing on the cake, the Dev Tools dependency can be re-added.

    == Conclusion

Migrating to Java 9 requires using Jigsaw, whether you like it or not. At the very least, it means a painful trial-and-error search-for-the-next-fix process, and removing important steps from the build process. While it's interesting for library/framework developers to add an additional layer of access control to their internal APIs, it's much less so for application developers. At this stage, it's not worth moving to Java 9.

    I hope to conduct this experiment again in some months and to notice an improvement in the situation.

    Categories: Java Tags: Java 9, Spring Boot, modules, Jigsaw
  • Migrating a Spring Boot application to Java 9 - Compatibility

    Java 9 Logo

With the coming of Java 9, there is a lot of buzz on how to migrate applications to use the module system. Unfortunately, most of the articles written focus on simple Hello world applications. Or worse, regarding Spring applications, the sample app uses legacy practices - like XML, for example. This post aims to correct that by providing a step-by-step migration guide for a non-trivial, modern Spring Boot application. The sample app chosen for this is the[Spring Pet Clinic^].

There are basically two steps to use Java 9: first, be compatible; then, use the fully-fledged module system. This post aims at the former; a future post will consider the latter.

    == Bumping the Java version

    Once JDK 9 is available on the target machine, the first move is to bump the java.version from 8 to 9 in the POM:


[source,xml]
----
<properties>
    <java.version>9</java.version>
</properties>
----

Now, let's `mvn clean compile`.

== Cobertura's failure

The first error along the way is the following:

----
[ERROR] Failed to execute goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean (default) on project spring-petclinic: Execution default of goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean failed: Plugin org.codehaus.mojo:cobertura-maven-plugin:2.7 or one of its dependencies could not be resolved: Could not find artifact com.sun:tools:jar:0 at specified path /Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/../lib/tools.jar -> [Help 1]
----

[quote,]
____
Cobertura is a free Java code coverage reporting tool.
____

It requires access to the _tools.jar_ that is part of JDK 8 (and earlier). One of the changes in Java 9 is the removal of that library, hence the incompatibility. This is already logged as an[issue^]. Given the[last commit^] on the Cobertura repository is one year old, just comment out the Cobertura Maven plugin - and think about replacing Cobertura with JaCoCo instead.

== Wro4J's failure

The next error is the following:

----
[ERROR] Failed to execute goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run (default) on project spring-petclinic: Execution default of goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run failed: An API incompatibility was encountered while executing ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run: java.lang.ExceptionInInitializerError: null
----

[quote,]
____
wro4j is a free and Open Source Java project which will help you to easily improve your web application page loading time. It can help you to keep your static resources (js & css) well organized, merge & minify them at run-time (using a simple filter) or build-time (using maven plugin) and has a dozen of features you may find useful when dealing with web resources.
____

This is referenced as a[Github issue^]. Changes have been merged, but the issue is still open; Java 9 compatibility should be part of the 2.0 release. Let's comment out Wro4J for the moment.

== Compilation failure

Compiling the project at this point yields the following compile-time errors:

----
/Users/i303869/projects/private/spring-petclinic/src/main/java/org/springframework/samples/petclinic/vet/
    Error:(30, 22) java: package javax.xml.bind.annotation is not visible
    (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
/Users/i303869/projects/private/spring-petclinic/src/main/java/org/springframework/samples/petclinic/vet/
    Error:(21, 22) java: package javax.xml.bind.annotation is not visible
    (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    Error:(22, 22) java: package javax.xml.bind.annotation is not visible
    (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
----

That means code on the classpath cannot access this module by default. It needs to be manually added with the `--add-modules` option of Java 9's `javac`. Within Maven, it can be set by using the _maven-compiler-plugin_:

[source,xml]
----
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.7.0</version>
    <configuration>
        <compilerArgs>
            <arg>--add-modules</arg>
            <arg>java.xml.bind</arg>
        </compilerArgs>
    </configuration>
</plugin>
----

Now the project can compile.

== Test failure

The next step sees unit tests fail with `mvn test`. The cause is the same, but it's a bit harder to find: it requires checking the Surefire reports. Some contain exceptions with the following line:

----
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException
----

Again, test code cannot access the module. This time, however, the _maven-surefire-plugin_ needs to be configured:

[source,xml]
----
<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.20.1</version>
    <configuration>
        <argLine>--add-modules java.xml.bind</argLine>
    </configuration>
</plugin>
----

This makes the tests work.

== Packaging failure

If one thinks this is the end of the road, think again. The packaging phase also fails, with a rather cryptic error:

----
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar (default-jar) on project spring-petclinic: Execution default-jar of goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar failed: An API incompatibility was encountered while executing org.apache.maven.plugins:maven-jar-plugin:2.6:jar: java.lang.ExceptionInInitializerError: null
...
Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    at
----

This one is even harder to track down: it requires a Google search to stumble upon[the solution^]. The `plexus-archiver` is to blame. Simply bumping the `maven-jar-plugin` to the latest version - 3.0.2 at the time of this writing - will make use of a Java 9 compatible version of the archiver and will solve the issue:

[source,xml]
----
<plugin>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.0.2</version>
</plugin>
----

== Spring Boot plugin failure

At this point, the project finally can be compiled, tested and packaged. The next step is to run the app through the Spring Boot Maven plugin, _i.e._ `mvn spring-boot:run`. But it fails again...:

----
[INFO] --- spring-boot-maven-plugin:1.5.1.RELEASE:run (default-cli) @ spring-petclinic ---
[INFO] Attaching agents: []
Exception in thread "main" java.lang.ClassCastException: java.base/jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to java.base/
    at o.s.b.devtools.restart.DefaultRestartInitializer.getUrls(
    at o.s.b.devtools.restart.DefaultRestartInitializer.getInitialUrls(
    at o.s.b.devtools.restart.Restarter.<init>(
    at o.s.b.devtools.restart.Restarter.initialize(
    at o.s.b.devtools.restart.RestartApplicationListener.onApplicationStartingEvent(
    at o.s.b.devtools.restart.RestartApplicationListener.onApplicationEvent(
    at o.s.c.event.SimpleApplicationEventMulticaster.invokeListener(
    at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(
    at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(
    at o.s.b.context.event.EventPublishingRunListener.starting(
    at o.s.b.SpringApplicationRunListeners.starting(
    at
    at
    at
    at org.springframework.samples.petclinic.PetClinicApplication.main(
----

This is a[documented issue^]: Spring Boot Dev Tools v1.5 is *not* compatible with Java 9. Fortunately, this bug is fixed in Spring Boot 2.0.0.M5. Unfortunately, this specific version is not yet available at the time of this writing. So for now, let's remove the Dev Tools and try to run again. It fails again, but this time the exception is a familiar one:

----
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException
----

Let's add the required argument to the _spring-boot-maven-plugin_:

[source,xml]
----
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <jvmArguments>--add-modules java.xml.bind</jvmArguments>
    </configuration>
    ...
</plugin>
----

The app can finally be launched and is accessible!

== Conclusion

Running a non-trivial legacy application on JDK 9 requires some effort. Worse, some important features had to be left by the wayside: code coverage and web performance enhancements. On the other side, the only meager benefit is the String memory space improvement.
In the next blog post, we will try to improve the situation to actually make use of modules in the app.
    Categories: Java Tags: Java 9, Spring Boot, modules, Jigsaw
  • What archive format should you use, WAR or JAR?

    Female hitting a male with boxing gloves on

    :page-liquid:
    :experimental:

    Some time ago, RAM and disk space were scarce resources. At that time, the widespread strategy was to host different applications onto the same platform. That was the golden age of the application server. I wrote in an link:{% post_url 2014-10-26-on-resources-scarcity-application-servers-and-micro-services %}[earlier post^] that the current tendency toward cheaper resources will make it obsolete, in the short or long term. However, a technology trend might bring it back in favor.

Having an application server is good when infrastructure resources are expensive and sharing them across apps brings a significant cost decrease. On the downside, it requires deep insight into the load of each application sharing the same resources, as well as skilled sysadmins who can deploy compatible applications on the same app server. For old-timers, does an application requiring to run alone because it mismanages resources ring a bell? When infrastructure costs decrease, laziness and aversion to risk take precedence, and hosting a single app on an application server becomes the norm. At that point, the next logical step is to question why application servers are still required as dedicated components. It seems the Spring guys came to the same conclusion, for Spring Boot applications' default mode is to package executable JARs - also known as Fat JARs. Those apps can be run with `java -jar fat.jar`. Hence the famous:

[quote, Josh Long]
____
Make JAR, not WAR
____

I'm still not completely sold on that, as I believe it too easily discards the expertise of most Ops teams regarding application server management. However, one compelling argument for Fat JARs is that since the booting technology is in charge of app management from the start, it can handle class loading in any way it wants. For example, with[Dev Tools^], Spring Boot provides a mechanism based on two classloaders, one for libraries and one for classes, so that classes can be changed and reloaded without restarting the whole JVM - a neat trick that gives a very fast feedback loop at each code change.
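The two-classloader idea can be pictured with plain JDK classloaders. This is a toy sketch of the concept, not Dev Tools' actual implementation, and the paths are illustrative:

```java
import java.net.URL;
import java.net.URLClassLoader;

// Toy illustration of reloading without a JVM restart: a long-lived "base"
// loader holds the libraries, a throwaway "restart" loader holds the
// application's own classes.
public class TwoClassLoaders {

    public static void main(String[] args) throws Exception {
        URL[] libraryJars = {new URL("file:lib/some-library.jar")};
        URL[] applicationClasses = {new URL("file:target/classes/")};

        try (URLClassLoader baseLoader =
                     new URLClassLoader(libraryJars, TwoClassLoaders.class.getClassLoader());
             URLClassLoader restartLoader =
                     new URLClassLoader(applicationClasses, baseLoader)) {
            // On a code change, only restartLoader is closed and re-created:
            // the JVM, the libraries and their already-loaded classes stay warm.
            System.out.println(restartLoader.getParent() == baseLoader); // prints true
        }
    }
}
```

Since libraries change far less often than application code, throwing away only the child loader keeps restarts cheap.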

I wrongly thought that application server providers were still stuck with the legacy way - thanks to[Ivar Grimstad^] for making me aware of this option (a good reason to attend talks that do not necessarily target your interests at conferences).[WildFly^],[TomEE^] and other app server implementations can be configured to package Fat JARs as well, albeit with one huge difference: there's nothing like Spring Dev Tools, so a restart of the whole app server is still required when code changes. The only alternative for faster feedback regarding those changes is to work at a lower level, e.g. JRebel licenses for the whole team. However, there's still one reason to use WAR archives, and that reason is Docker. By providing a common app server Docker image as a base image, one just needs to add one's WAR on top of it, thus making the WAR image quite lightweight. And this cannot be achieved (yet?) with the JAR approach.

Note that it's not Spring Boot vs. JavaEE but mostly JAR vs. WAR, as Spring Boot is perfectly able to package either format, and many app server providers are as well. As I pointed out above, the only missing piece is for the latter to reload classes instead of restarting the whole JVM when a change occurs - but I believe it will happen at some point.

Choosing between the WAR and the JAR approaches depends highly on whether the company values fast feedback cycles during development more, or more optimized and manageable Docker images.

    Categories: Java Tags: WAR, JAR, JavaEE, Spring Boot, archive
  • Fixing my own Spring Boot starter demo

    Spring Boot logo

    :page-liquid:
    :experimental:

For a year or so, I've been trying to show the developer community that there's no magic involved in Spring Boot, just straightforward software engineering. This is achieved with link:{% post_url 2016-02-07-designing-your-own-spring-boot-starter-part-1 %}[blog^] link:{% post_url 2016-02-14-designing-your-own-spring-boot-starter-part-2 %}[posts^] and[conference^][talks^]. At jDays,[Stéphane Nicoll^] was nice enough to attend my talk and pointed out an issue in the code. I didn't fix it then, and it came back to bite me last week during a Pivotal webinar. Since a lesson learned is only as useful as its audience, I'd like to share my mistake with the world, or at least with you, dear readers.

    The context is that of a Spring Boot starter for the[XStream^] library:

    [quote] XStream is a simple library to serialize objects to XML and back again.

. (De)serialization capabilities are implemented as instance methods of the XStream class
. Customization of the (de)serialization process is implemented through converters registered in the XStream instance

    The goal of the starter is to ease the usage of the library. For example, it creates an XStream instance in the context if there’s none already:


[source,java]
----
@Configuration
public class XStreamAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(XStream.class)
    public XStream xStream() {
        return new XStream();
    }
}
----

    Also, it will collect all custom converters from the context and register them in the existing instance:


[source,java]
----
@Configuration
public class XStreamAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(XStream.class)
    public XStream xStream() {
        return new XStream();
    }

    @Bean
    public Collection<Converter> converters(XStream xstream, Collection<Converter> converters) {
        converters.forEach(xstream::registerConverter);
        return converters;
    }
}
----

The previous snippet achieves the objective, as Spring obediently injects both the xstream and converters dependency beans into the method. Yet, a problem happened during the webinar demo: can you spot it?

If you answered that an extra converters bean gets registered into the context, you are right. But the real issue occurs if there's another converters bean registered by the client code, and it's of a different type, i.e. not a Collection. Hence, registration of converters must not happen in bean methods, but only in a @PostConstruct one.
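To make the clash concrete, here is a toy model in plain Java - not Spring's actual bean registry - of what happens when two definitions share a name:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A Spring context is, at its heart, a map from bean names to instances.
// If the client app registers its own "converters" bean of an unrelated type,
// one definition shadows the other, and any code expecting a Collection under
// that name breaks.
public class BeanNameClash {

    public static void main(String[] args) {
        Map<String, Object> context = new HashMap<>();
        context.put("converters", Arrays.asList("converterA", "converterB")); // starter's bean
        context.put("converters", "a client bean of another type");           // client's bean

        Object bean = context.get("converters");
        System.out.println(bean instanceof List); // prints false - the starter's bean is gone
    }
}
```

The real failure mode in Spring is richer (it involves bean definition overriding and type mismatches at injection points), but the root cause is the same: a shared name with incompatible types.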

    • Option 1
+
The first option is to convert the @Bean method to a @PostConstruct one.
+
[source,java]
----
@Configuration
public class XStreamAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(XStream.class)
    public XStream xStream() {
        return new XStream();
    }

    @PostConstruct
    public void register(XStream xstream, Collection<Converter> converters) {
        converters.forEach(xstream::registerConverter);
    }
}
----
+
Unfortunately, `@PostConstruct` doesn't allow methods to have arguments. The code doesn't compile.

    • Option 2
+
The alternative is to inject both beans into attributes of the configuration class, and use them in the @PostConstruct-annotated method.
+
[source,java]
----
@Configuration
public class XStreamAutoConfiguration {

    @Autowired
    private XStream xstream;

    @Autowired
    private Collection<Converter> converters;

    @Bean
    @ConditionalOnMissingBean(XStream.class)
    public XStream xStream() {
        return new XStream();
    }

    @PostConstruct
    public void register() {
        converters.forEach(xstream::registerConverter);
    }
}
----
+
This compiles fine, but Spring enters a cycle, trying both to inject the XStream instance into the configuration and to create it as a bean at the same time.

    • Option 3
+
The final (and only valid) option is to learn from the master and use another configuration class - a nested one. Looking at the Spring Boot source code, it's obviously a pattern. The final code looks like this:
+
[source,java]
----
@Configuration
public class XStreamAutoConfiguration {

    @Bean
    @ConditionalOnMissingBean(XStream.class)
    public XStream xStream() {
        return new XStream();
    }

    @Configuration
    public static class XStreamConverterAutoConfiguration {

        @Autowired
        private XStream xstream;

        @Autowired
        private Collection<Converter> converters;

        @PostConstruct
        public void registerConverters() {
            converters.forEach(converter -> xstream.registerConverter(converter));
        }
    }
}
----

    The fixed code is available on[Github^]. And grab the[webinar^] while it’s hot.

    Categories: Java Tags: Spring Boot, design
  • Fully configurable mappings for Spring MVC

    Spring Boot logo

    :page-liquid:
    :experimental:
    :imagesdir: /assets/resources/fully-configurable-mappings-spring-mvc/

As I wrote some weeks link:{% post_url 2017-03-05-use-case-spring-component-scan %}[earlier^], I'm trying to implement features of the Spring Boot actuator in a non-Boot Spring MVC application. Developing the endpoints themselves is quite straightforward. Much more challenging, however, is being able to configure the mapping in a properties file, like in the actuator. This led me to check more closely how it's done in the current code. This post sums up my "reverse-engineering" attempt around the subject.

    == Standard MVC

    === Usage

    In Spring MVC, in a class annotated with the @Controller annotation, methods can be in turn annotated with @RequestMapping. This annotation accepts a value attribute (or alternatively a path one) to define from which path it can be called.

The path can be fixed, e.g. /user, but can also accept variables, e.g. /user/{id}, filled at runtime. In that case, parameters can be mapped to method parameters via @PathVariable:


[source,kotlin]
----
@RequestMapping(path = "/user/{id}", method = arrayOf(RequestMethod.GET))
fun getUser(@PathVariable("id") id: String) = repository.findUserById(id)
----

While well adapted to REST APIs, this has two important limitations regarding configuration:

    • The pattern is set during development time and does not change afterwards
    • The filling of parameters occurs at runtime

    === Implementation

    With the above, mappings in @Controller-annotated classes will get registered during context startup through the DefaultAnnotationHandlerMapping class. Note there’s a default bean of this type registered in the context. This is summed up in the following diagram:

    image::defaultannotationhandlermapping.png[DefaultAnnotationHandlerMapping sequence diagram,714,533,align="center"]

    In essence, the magic applies only to @Controller-annotated classes. Or, to be more strict, quoting the DefaultAnnotationHandlerMapping’s Javadoc:

[quote]
____
Annotated controllers are usually marked with the Controller stereotype at the type level. This is not strictly necessary when RequestMapping is applied at the type level (since such a handler usually implements the org.springframework.web.servlet.mvc.Controller interface). However, Controller is required for detecting RequestMapping annotations at the method level if RequestMapping is not present at the type level.
____

    == Actuator

    === Usage

Spring Boot actuator allows configuring the path associated with each endpoint in the application.properties file (or using alternative methods for Boot configuration).

For example, the metrics endpoint is available by default via the metrics path. But it's possible to configure a completely different path:

[source]
----
endpoints.metrics.path=/mymetrics
----

Also, actuator endpoints are by default accessible directly under the root, but it's possible to group them under a dedicated sub-context:

[source]
----
management.context-path=/manage
----

With the above configuration, the metrics endpoint is now available under the /manage/mymetrics path.

    === Implementation

Additional actuator endpoints should implement the MvcEndpoint interface. Methods annotated with @RequestMapping will work in the exact same way as for the standard controllers above. This is achieved via a dedicated handler mapping, EndpointHandlerMapping, in the Spring context.

[quote]
____
HandlerMapping to map Endpoints to URLs via Endpoint.getId(). The semantics of @RequestMapping should be identical to a normal @Controller, but the endpoints should not be annotated as @Controller (otherwise they will be mapped by the normal MVC mechanisms).
____

    The class hierarchy is the following:

    image::mvcendpoint.png[MvcEndpoint class diagram,834,476,align="center"]

    This diagram shows what’s part of Spring Boot and what’s not.

    == Conclusion

Actuator endpoints reuse some of the existing Spring MVC code to handle @RequestMapping. It's done in a dedicated mapping class, so as to separate standard MVC controllers from Spring Boot's actuator endpoint class hierarchy. This is the part of the code to study, duplicate and adapt, should one want fully configurable mappings in Spring MVC.

    Categories: JavaEE Tags: Spring MVC, actuator, Spring Boot
  • Signing and verifying a standalone JAR


    :revdate: 2017-02-05 16:00:00 +0100
    :page-liquid:
    :experimental:

    link:{% post_url 2017-02-05-proposal-java-policy-files-crafting-process %}[Last week^], I wrote about the JVM policy file that explicitly lists allowed sensitive API calls when running the JVM in sandboxed mode. This week, I’d like to improve the security by signing the JAR.

    == The nominal way

**This way doesn't work. Readers more interested in the solution than the process should skip it.**

    === Create a keystore

    The initial step is to create a keystore if none is already available. There are plenty of online tutorials showing how to do that.


[source]
----
keytool -genkey -keyalg RSA -alias selfsigned -keystore /path/to/keystore.jks -storepass password -validity 360
----

    Fill in information accordingly.

    === Sign the application JAR

    Signing the application JAR must be part of the build process. With Maven, the[JAR signer plugin^] is dedicated to that. Its usage is quite straightforward:


[source,xml]
----
<plugin>
    <artifactId>maven-jarsigner-plugin</artifactId>
    <version>1.4</version>
    <executions>
        <execution>
            <id>sign</id>
            <goals>
                <goal>sign</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <keystore>/path/to/keystore.jks</keystore>
        <alias>selfsigned</alias>
        <storepass>${store.password}</storepass>
        <keypass>${key.password}</keypass>
    </configuration>
</plugin>
----

    To create the JAR, invoke the usual command-line and pass both passwords as system properties:


[source]
----
mvn package -Dstore.password=password -Dkey.password=password
----

    Alternatively, Maven’s[encryption capabilities^] can be used to store passwords in a dedicated settings-security.xml to further improve security.
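For reference, the two Maven commands involved in that encryption workflow look like this (the password values are placeholders):

```shell
# Create the master password; store the output in ~/.m2/settings-security.xml
mvn --encrypt-master-password 'masterpassword'
# Encrypt an individual password with it, for use in settings.xml
mvn --encrypt-password 'password'
```

Both flags are standard Maven CLI options; the encrypted values can then replace the plain-text passwords passed on the command line above.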

    === Configure the policy file

    Once the JAR is signed, the policy file can be updated to make use of it. This requires only the following configuration steps:

    1. Point to the keystore
    2. Configure the allowed alias


[source]
----
keystore "keystore.jks";

grant signedBy "selfsigned" codeBase "file:target/spring-petclinic-1.4.2.jar" {
    ...
};
----

    Notice the signedBy keyword followed by the alias name - the same one as in the keystore above.

    === Launching the JAR with the policy file

    The same launch command can be used without any change:


[source]
----
java -jar target/spring-petclinic-1.4.2.jar
----

Unfortunately, it doesn't work, even though this particular permission had already been configured!


    Caused by: access denied (“java.lang.reflect.ReflectPermission” “suppressAccessChecks”) at at at java.lang.SecurityManager.checkPermission( at java.lang.reflect.AccessibleObject.setAccessible( at org.springframework.util.ReflectionUtils.makeAccessible( at org.springframework.beans.BeanUtils.instantiateClass( at org.springframework.boot.SpringApplication.createSpringFactoriesInstances( —-

    The strangest part is that permissions requested before this one work all right. The reason is to be found in the particular structure of the JAR created by the Spring Boot plugin: JAR dependencies are packaged untouched in a BOOT-INF/lib folder in the executable JAR. Then Spring Boot code uses custom class-loading magic to load required classes from there.

    JAR signing works by creating a specific hash for each class, and by writing them into the JAR manifest file. During the verification phase, the hash of a class is computed and compared to the hash of the manifest. Hence, permissions related to classes located in the BOOT-INF/classes folder work as expected.
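The digest computation itself is simple. Here's a minimal sketch of it - the computation only, not jarsigner itself - assuming SHA-256 digests, Base64-encoded as in the manifest:

```java
import java.security.MessageDigest;
import java.util.Base64;

// jarsigner stores the Base64-encoded digest of each entry in the manifest;
// verification recomputes the digest and compares it with the stored value.
public class EntryDigest {

    public static String digestOf(byte[] entryBytes) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return Base64.getEncoder().encodeToString(md.digest(entryBytes));
    }

    public static void main(String[] args) throws Exception {
        byte[] original = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBE};
        byte[] tampered = {(byte) 0xCA, (byte) 0xFE, (byte) 0xBA, (byte) 0xBF};
        // A single changed byte yields a completely different digest,
        // so verification against the manifest entry fails.
        System.out.println(digestOf(original).equals(digestOf(tampered))); // prints false
    }
}
```

The key point for what follows: a hash only exists for entries the signer actually walked over, which is why nested JARs cause trouble.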

    However, the org.springframework.boot.SpringApplication class mentioned in the stack trace above is part of the spring-boot.jar located under BOOT-INF/lib: verification fails as there’s no hash available for the class in the manifest.

    Thus, usage of the Spring Boot plugin for JAR creation/launch is not compatible with JAR signing.

    == The workaround

Aside from Spring Boot, there's a legacy way to create standalone JARs: the[Maven Shade plugin^]. It extracts every class of every dependency into the final JAR. This works for Spring Boot apps too, but it requires some slight changes to the POM:

    1. In the POM, remove the Spring Boot Maven plugin
    2. Configure the main class in the Maven JAR plugin:
+
[source,xml]
----
<plugin>
    <artifactId>maven-jar-plugin</artifactId>
    <version>3.0.2</version>
    <configuration>
        <archive>
            <manifest>
                <mainClass>org.springframework.samples.petclinic.PetClinicApplication</mainClass>
            </manifest>
        </archive>
    </configuration>
</plugin>
----
    3. Finally, add the Maven Shade plugin to work its magic:
+
[source,xml]
----
<plugin>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <configuration>
        <createDependencyReducedPom>true</createDependencyReducedPom>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>
----

    WARNING: The command-line to launch the JAR doesn’t change but permissions depend on the executed code, coupled to the JAR structure. Hence, the policy file should be slightly modified.

    == Lessons learned

While it requires one to be a little creative, it's entirely possible to sign Spring Boot JARs using the same techniques as for any other JAR.

    To go further:

    •[Jarsigner Plugin^]
    •[Shade Plugin^]
    •[Spring Boot plugin^]
    •[Spring Boot executable JAR format^]
    Categories: Java Tags: JVM, security, JAR, Spring Boot, policy
  • Proposal for a Java policy files crafting process

    Security guy on an escalator

    :revdate: 2017-02-05 16:00:00 +0100
    :page-liquid:
    :experimental:

I've link:{% post_url 2016-01-17-java-security-manager %}[already^] link:{% post_url 2017-01-29-compilation-java-code-on-the-fly %}[written^] about the JVM security manager, and why it should be used - despite that rarely being the case, if ever. However, just advocating for it won't change the harsh reality unless some guidelines are provided. This post has the ambition to be the basis of such guidelines.

As a reminder, the JVM can run in two different modes, standard and sandboxed. In the former, all APIs are available with no restriction; in the latter, some API calls deemed sensitive are forbidden. In that case, explicit permissions to allow some of those calls can be configured in a dedicated policy file.

NOTE: Though running the JVM in sandbox mode is important, it doesn't stop there, e.g. executing only digitally-signed code is also part of securing the JVM. This post is the first in a two-part series regarding JVM security.
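A minimal probe makes the difference between the two modes tangible. This is a sketch for illustration, not part of the case study below:

```java
// Run it as-is and the sensitive call never happens; run it with
// -Djava.security.manager and an empty policy, and checkPermission throws
// java.security.AccessControlException.
public class SandboxProbe {

    public static void main(String[] args) {
        if (System.getSecurityManager() == null) {
            System.out.println("standard mode: sensitive calls are unrestricted");
        } else {
            // Under a security manager, this requires an explicit grant
            System.getSecurityManager()
                  .checkPermission(new RuntimePermission("getProtectionDomain"));
            System.out.println("sandboxed mode: permission was granted");
        }
    }
}
```

Every sensitive JDK API funnels through such a `checkPermission` call, which is exactly what the policy file feeds.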

    == Description

    The process is based on the[principle of least privilege^]. That directly translates into the following process:

    1. Start with a blank policy file
    2. Run the application
    3. Check the thrown security exception
    4. Add the smallest-grained permission possible in the policy file that allows to pass step 2
    5. Return to step 2 until the application can be run normally
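    The loop of steps 2 to 4 is mechanical enough that part of it can be scripted. As a quick illustration (a hypothetical helper, not part of the original process), here’s how an “access denied” message could be turned into the corresponding policy-file line:

    ```java
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Hypothetical helper: derives a policy-file line from an "access denied" message.
    public class PermissionSuggester {

        // Matches e.g. access denied ("java.lang.RuntimePermission" "getProtectionDomain")
        //          or access denied ("java.util.PropertyPermission" "java.protocol.handler.pkgs" "read")
        private static final Pattern DENIED = Pattern.compile(
                "access denied \\(\"([^\"]+)\" \"([^\"]+)\"(?: \"([^\"]+)\")?\\)");

        public static String suggest(String message) {
            Matcher m = DENIED.matcher(message);
            if (!m.find()) {
                return null;
            }
            String clazz = m.group(1);   // permission class
            String target = m.group(2);  // permission target name
            String actions = m.group(3); // optional actions list
            return actions == null
                    ? String.format("permission %s \"%s\";", clazz, target)
                    : String.format("permission %s \"%s\", \"%s\";", clazz, target, actions);
        }

        public static void main(String[] args) {
            System.out.println(suggest(
                    "access denied (\"java.lang.RuntimePermission\" \"getProtectionDomain\")"));
        }
    }
    ```

    The resulting line still has to be reviewed by a human and placed in the proper grant block, of course.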

    Relevant system properties include:

    • activates the Java Security Manager
    • points to the desired policy file
    • last but not least, activates debugging information when an absent privilege is required. There are a ton of[options^].

    That sounds easy enough, but let’s detail how it works with an example.

    == A case study

    As a sample application, we will be using the[Spring Pet Clinic^], a typical albeit small-sized Spring Boot application.

    === First steps

    Once the application has been built, launch it with the security manager:


    ----
    java \
        -jar target/spring-petclinic-1.4.2.jar
    ----

    This, of course, fails. The output is the following:

    ----
    Exception in thread "main" java.lang.IllegalStateException: access denied ("java.lang.RuntimePermission" "getProtectionDomain")
    	at org.springframework.boot.loader.ExecutableArchiveLauncher.<init>(…)
    	at org.springframework.boot.loader.JarLauncher.<init>(…)
    	at org.springframework.boot.loader.JarLauncher.main(…)
    Caused by: access denied ("java.lang.RuntimePermission" "getProtectionDomain")
    	at…)
    	at…)
    	at java.lang.SecurityManager.checkPermission(…)
    	at java.lang.Class.getProtectionDomain(…)
    	at org.springframework.boot.loader.Launcher.createArchive(…)
    	at org.springframework.boot.loader.ExecutableArchiveLauncher.<init>(…)
    	... 2 more
    ----

    Let’s add the permission relevant to the above “access denied” exception to the policy file:


    ----
    grant codeBase "file:target/spring-petclinic-1.4.2.jar" {
        permission java.lang.RuntimePermission "getProtectionDomain";
    };
    ----

    Notice the path pointing to the JAR. It prevents other, potentially malicious, archives from executing critical code. Onto the next blocker.

    ----
    Exception in thread "main" access denied ("java.util.PropertyPermission" "java.protocol.handler.pkgs" "read")
    ----

    This can be fixed by adding the below line to the policy file:


    ----
    grant codeBase "file:target/spring-petclinic-1.4.2.jar" {
        permission java.lang.RuntimePermission "getProtectionDomain";
        permission java.util.PropertyPermission "java.protocol.handler.pkgs", "read";
    };
    ----

    Next please.

    ----
    Exception in thread "main" access denied ("java.util.PropertyPermission" "java.protocol.handler.pkgs" "write")
    ----

    Looks quite similar, but it needs a write permission in addition to the read one. Sure it can be fixed by adding one more line, but there’s a shortcut available. Just specify all necessary attributes of the permission on the same line:


    ----
    grant codeBase "file:target/spring-petclinic-1.4.2.jar" {
        permission java.lang.RuntimePermission "getProtectionDomain";
        permission java.util.PropertyPermission "java.protocol.handler.pkgs", "read,write";
    };
    ----
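    That a permission with combined actions implies each single-action one can be verified programmatically with the permission classes themselves (a standalone sketch, not from the original post):

    ```java
    import java.util.PropertyPermission;

    public class CombinedActionsDemo {
        public static void main(String[] args) {
            // One permission with "read,write" covers both single-action permissions
            PropertyPermission combined =
                    new PropertyPermission("java.protocol.handler.pkgs", "read,write");
            System.out.println(combined.implies(
                    new PropertyPermission("java.protocol.handler.pkgs", "read")));  // true
            System.out.println(combined.implies(
                    new PropertyPermission("java.protocol.handler.pkgs", "write"))); // true
        }
    }
    ```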

    Rinse and repeat. Without further ado, the (nearly) final policy can be found[online^]: a whopping ~1,800 lines of configuration for the Spring Boot Pet Clinic as an executable JAR.

    Now that the general approach has been explained, it just needs to be followed until the application functions properly. The next sections describe some specific glitches along the way.

    === Securing Java logging

    At some point, nothing gets printed in the console anymore. The command-line just returns, that’s it. The system property, described above, helps resolve the issue:


    ----
    java \,stacktrace \
        -jar target/spring-petclinic-1.4.2.jar
    ----

    That yields the following stack:

    ----
    java.lang.Exception: Stack trace
    	at java.lang.Thread.dumpStack(…)
    	at…)
    	at…)
    	at java.lang.SecurityManager.checkPermission(…)
    	at java.util.logging.LogManager.checkPermission(…)
    	at java.util.logging.Logger.checkPermission(…)
    	at java.util.logging.Logger.setLevel(…)
    	at java.util.logging.LogManager.resetLogger(…)
    	at java.util.logging.LogManager.reset(…)
    	at java.util.logging.LogManager$…)
    ----

    It’s time for some real software engineering (also known as Google Search). The LogManager’s[Javadoc^] mentions the LoggingPermission that needs to be added to the existing list of permissions:


    ----
    grant codeBase "file:target/spring-petclinic-1.4.2.jar" {
        permission java.lang.RuntimePermission "getProtectionDomain";
        …
        permission java.util.PropertyPermission "PID", "read,write";
        permission java.util.logging.LoggingPermission "control";
    };
    ----

    That makes it possible to go further.

    === Securing the reading of system properties and environment variables

    It’s even possible to watch the Spring Boot log… until one realizes it’s made entirely of error messages about not being able to read a boatload of system properties and environment variables. Here’s an excerpt:

    ----
    2017-01-22 00:30:17.118  INFO 46549 --- [           main] o.s.w.c.s.StandardServletEnvironment : Caught AccessControlException when accessing system environment variable [logging.register_shutdown_hook]; its value will be returned [null]. Reason: access denied ("java.lang.RuntimePermission" "getenv.logging.register_shutdown_hook")
    2017-01-22 00:30:17.118  INFO 46549 --- [           main] o.s.w.c.s.StandardServletEnvironment : Caught AccessControlException when accessing system property [logging_register-shutdown-hook]; its value will be returned [null]. Reason: access denied ("java.util.PropertyPermission" "logging_register-shutdown-hook" "read")
    ----

    I will spare you, dear readers, a lot of trouble: there’s no sense in configuring every property one by one, as JCache requires read and write permissions on all properties. So just remove every fine-grained PropertyPermission added so far and replace it with a catch-all coarse-grained one:


    ----
    permission java.util.PropertyPermission "*", "read,write";
    ----


    Seems like security was not one of the JCache developers’ first priorities. The following snippet is the[code excerpt] for javax.cache.Caching.CachingProviderRegistry.getCachingProviders():


    ----
    if (System.getProperties().containsKey(JAVAX_CACHE_CACHING_PROVIDER)) {
        String className = System.getProperty(JAVAX_CACHE_CACHING_PROVIDER);
        …
    }
    ----

    Wow, it reads all properties! Plus the next line makes it a little redundant, no?
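    For comparison, here’s a sketch of a least-privilege-friendly lookup (the constant name and value are assumptions for illustration, not the actual JCache code): reading a single property only requires a fine-grained PropertyPermission, while System.getProperties() requires access to all of them.

    ```java
    // Sketch of a least-privilege-friendly lookup; constant name and value are assumed.
    public class SinglePropertyLookup {

        static final String JAVAX_CACHE_CACHING_PROVIDER = "javax.cache.CachingProvider";

        static String cachingProviderClassName() {
            // System.getProperty only needs ("javax.cache.CachingProvider", "read"),
            // whereas System.getProperties() needs read/write access on "*"
            return System.getProperty(JAVAX_CACHE_CACHING_PROVIDER);
        }

        public static void main(String[] args) {
            System.setProperty(JAVAX_CACHE_CACHING_PROVIDER, "com.example.DummyProvider");
            System.out.println(cachingProviderClassName());
        }
    }
    ```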

    As for environment variables, the Spring team seems to want to shield developers from case-related configuration issues, so it checks every possible case combination, hence the long list of different variants.

    === Variables and subdirectories

    At one point, Spring’s embedded Tomcat attempts - and fails - to create a subfolder inside the temporary folder (``).

    ----
    java.lang.SecurityException: Unable to create temporary file
    	at…) ~[na:1.8.0_92]
    	at…) ~[na:1.8.0_92]
    	at org.springframework.boot.context.embedded.AbstractEmbeddedServletContainerFactory.createTempDir(…)
    	at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainerFactory.getEmbeddedServletContainer(…)
    	at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.createEmbeddedServletContainer(…)
    	at org.springframework.boot.context.embedded.EmbeddedWebApplicationContext.onRefresh(…)
    	... 16 common frames omitted
    ----

    One could get away with that by hard-coding the path, but that would just be a major portability issue. Fortunately, permissions are able to use system properties.

    The second issue is the subfolder: there’s no way of knowing its name in advance, hence it’s not possible to configure it beforehand. However, file permissions accept either any direct child or any descendant in the hierarchy; the former is denoted with an asterisk, the latter with a dash. The final configuration looks like this:


    ----
    permission "${}/-", "read,write,delete";
    ----
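    The difference between the two wildcard forms can be double-checked with FilePermission.implies (a self-contained sketch; the paths are made up):

    ```java

    public class WildcardDemo {
        public static void main(String[] args) {
            // "*" matches direct children only, "-" matches the whole subtree
            FilePermission direct = new FilePermission("/tmp/work/*", "read");
            FilePermission recursive = new FilePermission("/tmp/work/-", "read");

            System.out.println(direct.implies(
                    new FilePermission("/tmp/work/file.txt", "read")));           // true
            System.out.println(direct.implies(
                    new FilePermission("/tmp/work/sub/file.txt", "read")));       // false
            System.out.println(recursive.implies(
                    new FilePermission("/tmp/work/sub/file.txt", "read")));       // true
        }
    }
    ```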

    === CGLIB issues

    CGLIB is used heavily in the Spring framework to extend classes at runtime. By default, the name of a generated class:

    [quote] […] is composed of a prefix based on the name of the superclass, a fixed string incorporating the CGLIB class responsible for generation, and a hashcode derived from the parameters used to create the object.

    Consequently, one is faced with the following exception:

    ---- access denied ("" …)
    	at…) ~[na:1.8.0_92]
    	at…) [na:1.8.0_92]
    	at java.lang.SecurityManager.checkPermission(…) ~[na:1.8.0_92]
    	at java.lang.SecurityManager.checkRead(…) ~[na:1.8.0_92]
    	at…) ~[na:1.8.0_92]
    	at org.apache.catalina.webresources.DirResourceSet.getResource(…)
    	at org.apache.catalina.webresources.StandardRoot.getResourceInternal(…)
    	at org.apache.catalina.webresources.Cache.getResource(…) ~[tomcat-embed-core-8.5.6.jar!/:8.5.6]
    	at org.apache.catalina.webresources.StandardRoot.getResource(…)
    	at org.apache.catalina.webresources.StandardRoot.getClassLoaderResource(…)
    	at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(…)
    	at org.apache.catalina.loader.WebappClassLoaderBase$…)
    	at org.apache.catalina.loader.WebappClassLoaderBase$…)
    	at Method) [na:1.8.0_92]
    	at org.apache.catalina.loader.WebappClassLoaderBase.findClass(…)
    	at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedWebappClassLoader.findClassIgnoringNotFound(…)
    	at org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedWebappClassLoader.loadClass(…)
    	at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(…)
    	at java.lang.Class.forName0(Native Method) [na:1.8.0_92]
    	at java.lang.Class.forName(…) [na:1.8.0_92]
    ----

    It looks like quite an easy file-permission fix, but it isn’t: for whatever reason, the hashcode used by CGLIB to extend MultipartAutoConfiguration changes at every compilation. Hence, a more lenient generic permission is required:

    ----
    permission "src/main/webapp/WEB-INF/classes/org/springframework/boot/autoconfigure/web/*", "read";
    ----

    === Launching is not the end

    Unfortunately, having successfully launched the application doesn’t mean it stops there. Browsing the home page yields a new bunch of security exceptions.

    For example, Tomcat needs to bind to port 8080, but this is a potentially insecure action:

    ---- access denied ("" "localhost:8080" "listen,resolve")
    ----

    The permission to fix it is pretty straightforward:

    ----
    permission "localhost:8080", "listen,resolve";
    ----

    However, actually browsing the app brings a new exception:

    ---- access denied ("" "[0:0:0:0:0:0:0:1]:56733" "accept,resolve")
    ----

    That wouldn’t be bad if the port number didn’t change with every launch. A few attempts reveal that it seems to start from around 55400. Good thing that the socket permission allows for a port range:

    ----
    permission "[0:0:0:0:0:0:0:1]:55400-", "accept,resolve";
    ----
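    The range syntax can be verified with SocketPermission.implies (standalone sketch; localhost is used here instead of the IPv6 literal for simplicity):

    ```java

    public class PortRangeDemo {
        public static void main(String[] args) {
            // "55400-" means any port from 55400 upwards
            SocketPermission range = new SocketPermission("localhost:55400-", "accept,resolve");
            System.out.println(range.implies(
                    new SocketPermission("localhost:56733", "accept,resolve"))); // true
            System.out.println(range.implies(
                    new SocketPermission("localhost:8080", "accept,resolve")));  // false
        }
    }
    ```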

    == Lessons learned

    Though it was very fulfilling to have created the policy file, the true value lies in the lessons learned.

    • The crafting of a custom policy file for a specific application is quite trivial, but very time-consuming. I didn’t completely finish, and still spent around one day on a small-sized application. Time might be a valid reason why policy files are never in use.
    • For large applications, I believe it’s not only possible but desirable to automate the crafting process: run the app, read the exception, create the associated permission, and update the policy file accordingly.
    • Patterns are recognizable in the policy file: sets of permissions are dedicated to a specific library, such as Spring Boot’s actuator. If each framework/library provided the minimal policy file that allows it to work correctly, crafting a policy file for an app would just mean aggregating the files of all its libraries.
    • Randomness (such as random port number) and bad coding practices (such as JCache’s) require more coarse-grained permissions. On one hand, it speeds up the crafting process; on the other hand, it increases the potential attack surface.

    In all cases, running the JVM in sandbox mode should not be optional in security-aware environments.

    To go further:

    •[Policy file syntax^]
    •[Permissions in the JDK^]
    •[Security documentation^]
    •[End result policy file] (nearly finished)
    Categories: Java Tags: JVM, security, Spring Boot, policy
  • Feeding Spring Boot metrics to Elasticsearch

    Elasticsearch logo

    :imagesdir: /assets/resources/feeding-spring-boot-metrics-to-elasticsearch/

    This week’s post aims to describe how to send JMX metrics taken from the JVM to an Elasticsearch instance.

    == Business app requirements

    The business app(s) has some minor requirements.

    The easiest use-case is to start from a Spring Boot application. In order for metrics to be available, just add the Actuator dependency to it:


    ----
        org.springframework.boot
        spring-boot-starter-actuator
    ----

    Note that when inheriting from spring-boot-starter-parent, setting the version is not necessary: it’s taken from the parent POM.

    To send data to JMX, configure a brand-new @Bean in the context:


    ----
    @Bean
    @ExportMetricWriter
    MetricWriter metricWriter(MBeanExporter exporter) {
        return new JmxMetricWriter(exporter);
    }
    ----

    == To-be architectural design

    There are several options to put JMX data into Elasticsearch.

    === Possible options

    . The most straightforward way is to use Logstash with the[JMX plugin^]
    . Alternatively, one can hack one’s own micro-service architecture:

    • Let the application send metrics to JMX - there’s the Spring Boot actuator for that, and the overhead is pretty limited
    • Have a feature expose JMX data on an HTTP endpoint using[Jolokia^]
    • Have a dedicated app poll the endpoint and send data to Elasticsearch

    This way, every component has its own responsibility, there’s not much performance overhead and the metric-handling part can fail while the main app is still available.

    . An alternative would be to directly poll the JMX data from the JVM

    === Unfortunate setback

    Any architect worth his salt (read lazy) should always consider the out-of-the-box option. The Logstash JMX plugin looks promising. After installing the plugin, the jmx input can be configured into the Logstash configuration file:


    ----
    input {
      jmx {
        path => "/var/logstash/jmxconf"
        polling_frequency => 5
        type => "jmx"
      }
    }

    output {
      stdout { codec => rubydebug }
    }
    ----

    The plugin is designed to read JVM parameters (such as host and port), as well as the metrics to handle from JSON configuration files. In the above example, they will be watched in the /var/logstash/jmxconf folder. Moreover, they can be added, removed and updated on the fly.

    Here’s an example of such configuration file:


    ----
    {
      "host" : "localhost",
      "port" : 1616,
      "alias" : "petclinic",
      "queries" : [{
        "object_name" : "org.springframework.metrics:name=*,type=*,value=*",
        "object_alias" : "${type}.${name}.${value}"
      }]
    }
    ----

    An MBean’s ObjectName can be determined from inside JConsole:

    image::jconsole.png[JConsole screenshot,739,314,align="center"]

    The plugin allows wildcards in the metric’s name and the usage of captured values in the alias. Also, by default, all attributes will be read (they can be restricted if necessary).

    Note: when starting the business app, it’s highly recommended to set the JMX port through the system property.
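    For reference, the standard remote-JMX system properties look like this (a typical local, unauthenticated setup - harden it for anything beyond a workstation); the port must match the one declared in the Logstash JSON file:

    ```
    java \ \ \ \
        -jar target/spring-petclinic-1.4.2.jar
    ```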

    Unfortunately, at the time of this writing, running the above configuration fails with messages of this kind:

    ----
    [WARN][logstash.inputs.jmx] Failed retrieving metrics for attribute Value on object blah blah blah
    [WARN][logstash.inputs.jmx] undefined method `event' for …
    ----

    For reference purposes, the GitHub issue can be found[here^].

    == The do-it yourself alternative

    Considering it’s easier to poll HTTP endpoints than JMX - and that implementations already exist - let’s go for the micro-service option above. Libraries will include:

    • Spring Boot for the business app
    • With the Actuator starter to provide metrics
    • Configured with the JMX exporter for sending data
    • Also with the dependency to expose JMX beans on an HTTP endpoint
    • Another Spring Boot app for the “poller”
    • Configured with a scheduled service to regularly poll the endpoint and send the data to Elasticsearch

    image::component-diagram.png[Draft architecture component diagram,535,283,align="center"]

    === Additional business app requirement

    To expose the JMX data over HTTP, simply add the Jolokia dependency to the business app:


    ----
        org.jolokia
        jolokia-core
    ----

    From this point on, one can query for any JMX metric via the HTTP endpoint exposed by Jolokia - by default, the full URL looks like /jolokia/read/<JMX_ObjectName>.
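    For instance, assuming the default mount point and the standard platform Memory MBean (the values below are made up), a Jolokia read exchange looks like:

    ```
    GET /jolokia/read/java.lang:type=Memory/HeapMemoryUsage

    {
      "request": { "mbean": "java.lang:type=Memory", "attribute": "HeapMemoryUsage", "type": "read" },
      "value": { "init": 268435456, "committed": 319815680, "max": 3817865216, "used": 104857600 },
      "status": 200
    }
    ```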

    === Custom-made broker

    The broker app responsibilities include:

    • reading JMX metrics from the business app through the HTTP endpoint at regular intervals
    • sending them to Elasticsearch for indexing

    My initial move was to use Spring Data, but it seems the current release is not compatible with the latest Elasticsearch version 5, as I got the following exception:

    ----
    java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
    ----

    Besides, Spring Data is based on entities, which implies deserializing from HTTP and serializing back again to Elasticsearch: that has a negative impact on performance for no real added value.

    The code itself is quite straightforward:


    ----
    @SpringBootApplication                                                  <1>
    @EnableScheduling                                                       <2>
    open class JolokiaElasticApplication {

        @Autowired lateinit var client: JestClient                          <6>

        @Bean open fun template() = RestTemplate()                          <4>

        @Scheduled(fixedRate = 5000)                                        <3>
        open fun transfer() {
            val result = template().getForObject(                           <5>
                "http://localhost:8080/manage/jolokia/read/org.springframework.metrics:name=status,type=counter,value=beans",
            val index = Index.Builder(result)
                .index("metrics")
                .type("metric")
                .id(UUID.randomUUID().toString())
                .build()
            client.execute(index)
        }
    }

    fun main(args: Array<String>) {, *args)
    }
    ----

    <1> Of course, it’s a Spring Boot application.
    <2> To poll at regular intervals, it must be annotated with @EnableScheduling.
    <3> And have the polling method annotated with @Scheduled, parameterized with the interval in milliseconds.
    <4> In a Spring Boot application, calling HTTP endpoints is achieved through the RestTemplate. Once created - it’s a singleton - it can be (re)used throughout the application.
    <5> The call result is deserialized into a String.
    <6> The client to use is[Jest^]. Jest offers a dedicated indexing API: it just requires the JSON string to be sent, as well as the index name, the object name and its id. With the Spring Boot Elastic starter on the classpath, a JestClient instance is automatically registered in the bean factory. Just autowire it in the configuration to use it.

    At this point, launching the Spring Boot application will poll the business app at regular intervals for the specified metrics and send them to Elasticsearch. It’s of course quite crude - everything is hard-coded - but it gets the job done.

    == Conclusion

    Despite the failing plugin, we managed to get the JMX data from the business application to Elasticsearch by using a dedicated Spring Boot app.