• Migrating a Spring Boot application to Java 9 - Compatibility

    Java 9 Logo

    With the coming of Java 9, there is a lot of buzz about how to migrate applications to use the module system. Unfortunately, most of the articles written focus on simple Hello World applications. Or worse, regarding Spring applications, the sample app relies on legacy practices - such as XML configuration. This post aims to correct that by providing a step-by-step migration guide for a non-trivial, modern Spring Boot application. The sample app chosen for that is the Spring PetClinic.

    There are basically two steps to adopting Java 9: first become compatible, then use the fully-fledged module system. This post covers the former; a future post will consider the latter.

    Bumping the Java version

    Once JDK 9 is available on the target machine, the first move is to bump the java.version from 8 to 9 in the POM:

    <properties>
        <!-- Generic properties -->
        <java.version>9</java.version>
    </properties>

    Now, let’s mvn clean compile.

    Cobertura’s failure

    The first error along the way is the following:

    [ERROR] Failed to execute goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean (default) on project spring-petclinic:
     Execution default of goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean failed:
      Plugin org.codehaus.mojo:cobertura-maven-plugin:2.7 or one of its dependencies could not be resolved:
      Could not find artifact com.sun:tools:jar:0 at
       specified path /Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/../lib/tools.jar -> [Help 1]
    Cobertura is a free Java code coverage reporting tool.
    — https://github.com/cobertura/cobertura

    It requires access to tools.jar, which is part of JDK 8 (and earlier). One of the changes in Java 9 is the removal of that library, so the plugin is not compatible. This is already logged as an issue. Given that the last commit on the Cobertura repository is one year old, just comment out the Cobertura Maven plugin - and think about replacing Cobertura with JaCoCo instead.

    Wro4J’s failure

    The next error is the following:

    [ERROR] Failed to execute goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run (default) on project spring-petclinic:
     Execution default of goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run failed:
      An API incompatibility was encountered while executing ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run:
       java.lang.ExceptionInInitializerError: null
    wro4j is a free and Open Source Java project which will help you to easily improve your web application page loading time. It can help you to keep your static resources (js & css) well organized, merge & minify them at run-time (using a simple filter) or build-time (using maven plugin) and has a dozen of features you may find useful when dealing with web resources.
    — https://github.com/wro4j/wro4j

    This is referenced as a GitHub issue. Changes have been merged, but the issue is still open, as Java 9 compatibility should be part of the 2.0 release.

    Let’s comment out Wro4J for the moment.

    Compilation failure

    Compiling the project at this point yields the following compile-time errors:

    /Users/i303869/projects/private/spring-petclinic/src/main/java/org/springframework/samples/petclinic/vet/Vet.java
    Error:(30, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    /Users/i303869/projects/private/spring-petclinic/src/main/java/org/springframework/samples/petclinic/vet/Vets.java
    Error:(21, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    Error:(22, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)

    That means code on the classpath cannot access this module by default. It needs to be manually added with the --add-modules option of Java 9’s javac. Within Maven, it can be set by using the maven-compiler-plugin:

    <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.7.0</version>
        <configuration>
            <compilerArgs>
                <arg>--add-modules</arg>
                <arg>java.xml.bind</arg>
            </compilerArgs>
        </configuration>
    </plugin>

    Now the project can compile.

    Test failure

    The next step sees unit tests fail with mvn test.

    The cause is the same, but it’s a bit harder to find. It requires checking the Surefire reports. Some contain exceptions with the following line:

    Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

    Again, test code cannot access the module. This time, however, the maven-surefire-plugin needs to be configured:

    <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.20.1</version>
        <configuration>
            <argLine>--add-modules java.xml.bind</argLine>
        </configuration>
    </plugin>

    This makes the tests work.

    Packaging failure

    If one thinks this is the end of the road, think again. The packaging phase also fails with a rather cryptic error:

    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar (default-jar) on project spring-petclinic:
     Execution default-jar of goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar failed:
      An API incompatibility was encountered while executing org.apache.maven.plugins:maven-jar-plugin:2.6:jar:
       java.lang.ExceptionInInitializerError: null
    ...
    Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    	at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.<clinit>(AbstractZipArchiver.java:116)

    This one is even harder to track down: it requires a Google search to stumble upon the solution. The plexus-archiver is to blame. Simply bumping the maven-jar-plugin to the latest version - 3.0.2 at the time of this writing - makes use of a Java 9 compatible version of the archiver and solves the issue:

    <plugin>
        <artifactId>maven-jar-plugin</artifactId>
        <version>3.0.2</version>
    </plugin>

    Spring Boot plugin failure

    At this point, the project can finally be compiled, tested and packaged. The next step is to run the app through the Spring Boot Maven plugin, i.e. mvn spring-boot:run. But it fails again:

    [INFO] --- spring-boot-maven-plugin:1.5.1.RELEASE:run (default-cli) @ spring-petclinic ---
    [INFO] Attaching agents: []
    Exception in thread "main" java.lang.ClassCastException:
     java.base/jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to java.base/java.net.URLClassLoader
    	at o.s.b.devtools.restart.DefaultRestartInitializer.getUrls(DefaultRestartInitializer.java:93)
    	at o.s.b.devtools.restart.DefaultRestartInitializer.getInitialUrls(DefaultRestartInitializer.java:56)
    	at o.s.b.devtools.restart.Restarter.<init>(Restarter.java:140)
    	at o.s.b.devtools.restart.Restarter.initialize(Restarter.java:546)
    	at o.s.b.devtools.restart.RestartApplicationListener.onApplicationStartingEvent(RestartApplicationListener.java:67)
    	at o.s.b.devtools.restart.RestartApplicationListener.onApplicationEvent(RestartApplicationListener.java:45)
    	at o.s.c.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:167)
    	at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
    	at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:122)
    	at o.s.b.context.event.EventPublishingRunListener.starting(EventPublishingRunListener.java:68)
    	at o.s.b.SpringApplicationRunListeners.starting(SpringApplicationRunListeners.java:48)
    	at o.s.b.SpringApplication.run(SpringApplication.java:303)
    	at o.s.b.SpringApplication.run(SpringApplication.java:1162)
    	at o.s.b.SpringApplication.run(SpringApplication.java:1151)
    	at org.springframework.samples.petclinic.PetClinicApplication.main(PetClinicApplication.java:32)

    This is a documented issue that Spring Boot Dev Tools v1.5 is not compatible with Java 9.

    Fortunately, this bug will be fixed in Spring Boot 2.0.0.M5. Unfortunately, that specific version is not yet available at the time of this writing. So for now, let’s remove the Devtools dependency and try to run again. It fails again, but this time the exception is a familiar one:

    Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

    Let’s add the required argument to the spring-boot-maven-plugin:

    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
            <jvmArguments>--add-modules java.xml.bind</jvmArguments>
        </configuration>
        ...
    </plugin>

    The app can finally be launched and is accessible!

    Conclusion

    Running a non-trivial legacy application on JDK 9 requires some effort. Worse, some important features had to be left by the wayside: code coverage and web performance enhancements. On the other side, the only meager benefit is the improvement in String memory consumption. In the next blog post, we will try to improve the situation to actually make use of modules in the app.

    Categories: Java Tags: Java 9, Spring Boot, modules, Jigsaw
  • Truly immutable builds

    Baltic amber pieces with insects

    It sometimes happens that, after a few years, an app is stable enough to go into hibernation mode. Though it’s used and useful, there are no changes to it, and it happily lives its life. Then, after a while, someone decides to add some new features again. Apart from simple things such as locating the sources, one of the most important things is being able to build the app. Though it may seem trivial, there are some things to think about. Here is some advice on how to make apps that can be built forever.

    I’ll use Maven as an example, but the advice below applies to any build tool.

    Immutable plugin versions

    For dependencies, Maven requires the version to be set. For plugins, Maven allows the version to be omitted. In that case, it will fetch the latest one.

    Though always using the latest version might be seen as a benefit, it can break existing behavior.

    Rule 1

    Always explicitly set plugin versions. This includes all plugins that are used during the build, even those that are not configured, e.g. maven-surefire-plugin.

    Check in the build tool

    The second problem that may arise is the build tool itself. Which Maven version was used to build the app? Building the app with another version might not work. Or worse, it might build the app in a slightly different way, with unexpected side-effects.

    Hence, the build tool must be saved along with the sources of the app. In the Maven ecosystem, this can be done using the Maven wrapper.

    Rule 2
    1. Get the wrapper
    2. Check it in along with regular sources
    3. Use it for each and every build

    Check in JVM options

    The last step concerns JVM options that are tweaked using the MAVEN_OPTS environment variable. It can be used to set the maximum amount of memory for the build, e.g. -Xmx, or to pass system properties to the build, e.g. -Dmy.property=3. Instead of using the MAVEN_OPTS environment variable, such parameters should be set in a .mvn/jvm.config file, and checked in along with the app sources. Note this is available since Maven 3.3.1.
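
    For example, a .mvn/jvm.config reusing the flags above could contain a single line (the heap size here is just an illustrative value):

    -Xmx2048m -Dmy.property=3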

    Rule 3

    Check in JVM build options via the .mvn/jvm.config file along with regular sources

    Check in the CI build file

    The build file relevant to the continuous integration server should be checked in as well. For some - Travis CI, GitLab, etc. - it’s a pretty standard practice. For others - Jenkins - it’s a brand new feature.

    Rule 4

    Check in the CI server specific build files along with regular sources.

    Nice to have

    The above steps try to ensure that as many things as possible are immutable. While the version of the Java code and the version of the generated bytecode can be set (source and target configuration parameters of the maven-compiler-plugin), one thing that cannot be set is the JDK itself.

    In order to keep it immutable over the whole application life, it’s advised to specify the exact JDK. How to do that depends a lot on the continuous integration server. For example, Travis CI allows it natively in the build file, while Jenkins might require the usage of a Docker image.

    Conclusion

    Making sure you are able to build your app in the future is not a glamorous task. However, it can make a huge difference down the road. By following the above rules, the chances that you’ll be able to build an app after it has been hibernating for a long time will dramatically increase.

    Categories: Java Tags: maven, build, Docker, maintenance
  • Bypassing Javascript checks

    Police riot control training

    Nowadays, when a webapp offers a registration page, it usually duplicates the password field (or sometimes even the email field).

    By having you type the password twice, it wants to ensure that you didn’t make any mistake, so that you don’t have to reset the password the next time you try to log in. It makes sense if you actually type the password. Me, I’m using a password manager. That means I copy-paste the password I got from the password manager twice. So far, so good.

    Now, some webapps assume people type the password. Or worse, the people behind those apps don’t know about password managers. They enforce typing the password in the second password field by preventing the paste command in that field. That means that password manager users like me have to type more than 20 random characters by hand. That’s not acceptable!

    Fortunately, there’s a neat solution if you are willing to do some hacking. With any modern browser, locate the guilty password input field and:

    • Get its id, e.g. myId
    • Get the name of the function associated with its onpaste attribute, e.g. dontPaste (listed under Event Listeners in Google Chrome’s developer tools)
    • Run the following in the JavaScript console:
    document.getElementById('myId').removeEventListener('paste', dontPaste);
    

    In some cases, however, the function is anonymous, and the previous code cannot be executed. Fine. Then, run:

    document.getElementById('myId').onpaste = null;
    

    Conclusions

    There are some conclusions to this rather short post.

    The first one is that it’s really cool to be a developer, because it allows you to avoid a lot of hassle.

    The second one is to never ever implement this kind of check. If users don’t remember their password, provide a way to reset it - you’ll need this feature anyway. Duplicating password fields makes it harder to use password managers, and hence decreases security. In this day and age, that’s not only a waste of time, it’s a serious mistake.

    Finally, it’s your “moral” duty as a developer to push back against such stupid requirements.

    PS: The complete title of this post should have been, “Bypassing stupid Javascript check”, but it would not have been great in Google search results…

    Categories: Development Tags: javascript, hack
  • Lambdas and Clean Code

    Buddhist monk washing clothes in the river

    As software developers, we behave like children. When we see shiny new things, we just have to play with them. That’s normal, accepted, and in general, even beneficial to our job… up to a point.

    When Java started to provide annotations with version 5, there was a huge move toward using them. Anywhere. Everywhere. Even when it was not a good idea. But it was new, hence it had to be good. Of course, when something is abused, there’s a strong movement against it, so that even when the usage of annotations makes sense, some developers might be strongly against it. There’s even a site about that (warning, trolling inside).

    Unfortunately, we didn’t collectively learn from overusing annotations. With a lot of companies having migrated to Java 8, one starts to notice a lot of code making use of lambdas like this one:

    List<Person> persons = ...;
    persons.stream().filter(p -> {
        if (p.getGender() == Gender.MALE) {
            return true;
        }
        LocalDate now = LocalDate.now();
        Period age = Period.between(p.getBirthDate(), now);
        if (age.getYears() >= 18) {
            return true;
        }
        return false;
    }).map(p -> p.getFirstName() + " " + p.getLastName())
      .collect(Collectors.toList());
    

    This is just a stupid sample, but it gives a good feel for the code I sometimes have to read. It’s in general longer and even more convoluted or, to be politically correct, it has room for improvement - really a lot of room.

    The first move would be to apply correct naming, as well as to move the logic to where it belongs.

    public class Person {
    
        // ...
    
        public boolean isMale() {
            return getGender() == Gender.MALE;
        }
    
        public boolean isAdult(LocalDate when) {
            Period age = Period.between(birthDate, when);
            return age.getYears() >= 18;
        }
    }
    

    This small refactoring already improves the readability of the lambda:

    persons.stream().filter(p -> {
        if (p.isMale()) {
            return true;
        }
        LocalDate now = LocalDate.now();
        if (p.isAdult(now)) {
            return true;
        }
        return false;
    }).map(p -> p.getFirstName() + " " + p.getLastName())
            .collect(Collectors.toList());
    

    But it shouldn’t stop there. There’s an interesting bias regarding lambdas: that they have to be anonymous. Nearly all examples on the web show anonymous lambdas. But nothing could be further from the truth!

    Let’s name our lambdas, and check the results:

    // Implementation details
    Predicate<Person> isMaleOrAdult = p -> {
        if (p.isMale()) {
            return true;
        }
        LocalDate now = LocalDate.now();
        if (p.isAdult(now)) {
            return true;
        }
        return false;
    };
    Function<Person, String> concatenateFirstAndLastName = p -> p.getFirstName() + " " + p.getLastName();
    
    // Core
    persons.stream().filter(isMaleOrAdult).map(concatenateFirstAndLastName).collect(Collectors.toList());
    

    Nothing mind-blowing. Yet, notice that the stream itself (the last line) has become more readable, no longer cluttered with implementation details. Those details are still there for developers to read - but only if necessary.
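
    Method references can push this one step further and remove the lambda syntax from the pipeline entirely. The following is just a sketch of mine, not part of the original sample - it assumes the isMaleOrAdult() and getFullName() helpers are added to Person first:

    public class Person {

        // ...

        public boolean isMaleOrAdult() {
            return isMale() || isAdult(LocalDate.now());
        }

        public String getFullName() {
            return getFirstName() + " " + getLastName();
        }
    }

    // The core pipeline now reads almost like plain English
    List<String> names = persons.stream()
                                .filter(Person::isMaleOrAdult)
                                .map(Person::getFullName)
                                .collect(Collectors.toList());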

    Conclusion

    Tools are tools. Lambdas are one of them in a Java developer’s toolbelt. But concepts are forever.

    Categories: Java Tags: lambda, clean code, java 8
  • Synthetic

    Plastic straw tubes

    There are a bunch of languages running on the JVM, from Java itself to Clojure and JRuby. All of them have different syntaxes, but it’s awesome that they all compile to the same bytecode. The JVM unites them all. Of course, it’s biased toward Java, but even in Java, there is some magic happening in the bytecode.

    The most well-known trick comes from the following code:

    public class Foo {
        static class Bar {
            private Bar() {}
        }
    
        public static void main(String... args) {
            new Bar();
        }
    }

    Can you guess how many constructors the Bar class has?

    Two. Yes, you read that right. For the JVM, the Bar class declares 2 constructors. Run the following code if you don’t believe it:

    Class<Foo.Bar> clazz = Foo.Bar.class;
    Constructor<?>[] constructors = clazz.getDeclaredConstructors();
    System.out.println(constructors.length);
    Arrays.stream(constructors).forEach(constructor -> {
        System.out.println("Constructor: " + constructor);
    });

    The output is the following:

    Constructor: private Foo$Bar()
    Constructor: Foo$Bar(Foo$1)

    The reason is pretty well documented. The bytecode knows about access modifiers, but not about nested classes. In order for the Foo class to be able to create new Bar instances, the Java compiler generates an additional constructor with a default package visibility.

    This can be confirmed with the javap tool.

    javap -v out/production/synthetic/Foo\$Bar.class

    This outputs the following:

    [...]
    {
      Foo$Bar(Foo$1);
        descriptor: (LFoo$1;)V
        flags: ACC_SYNTHETIC
        Code:
          stack=1, locals=2, args_size=2
             0: aload_0
             1: invokespecial #1                  // Method "<init>":()V
             4: return
          LineNumberTable:
            line 2: 0
          LocalVariableTable:
            Start  Length  Slot  Name   Signature
                0       5     0  this   LFoo$Bar;
                0       5     1    x0   LFoo$1;
    }
    [...]

    Notice the ACC_SYNTHETIC flag. Going to the JVM specifications yields the following information:

    The ACC_SYNTHETIC flag indicates that this method was generated by a compiler and does not appear in source code, unless it is one of the methods named in §4.7.8.
    — The class File Format
    https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-4.html#jvms-4.6

    Theoretically, it should be possible to call this generated constructor - notwithstanding the fact that it’s not possible to provide an instance of Foo$1, but let’s put that aside. However, the IDE doesn’t seem able to discover this second, argument-taking constructor. I didn’t find any explicit reference in the Java Language Specification, but synthetic classes and members cannot be accessed directly, only through reflection.
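
    The reflection API does expose the flag, though. Here is a small sketch of mine (not from the original snippet) that prints the synthetic status of each constructor, using Constructor.isSynthetic():

    import java.lang.reflect.Constructor;
    import java.util.Arrays;

    public class SyntheticCheck {

        public static void main(String... args) {
            Constructor<?>[] constructors = Foo.Bar.class.getDeclaredConstructors();
            // Expected: the private no-arg constructor is not synthetic,
            // while the generated Foo$Bar(Foo$1) constructor is
            Arrays.stream(constructors)
                  .forEach(c -> System.out.println(c + " -> synthetic: " + c.isSynthetic()));
        }
    }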

    At this point, one could wonder why all the fuss about the synthetic flag. It was introduced in Java to solve the issue of nested class access. But other JVM languages also use it to implement their specification. For example, Kotlin uses synthetic members for companion objects:

    class Baz() {
        companion object {
            val BAZ = "baz"
        }
    }

    Executing javap on the .class file returns the following output (abridged for readability):

    {
      public static final Baz$Companion Companion;
        descriptor: LBaz$Companion;
        flags: ACC_PUBLIC, ACC_STATIC, ACC_FINAL
    
      public Baz();
      [...]
    
      public static final java.lang.String access$getBAZ$cp();
        descriptor: ()Ljava/lang/String;
        flags: ACC_PUBLIC, ACC_STATIC, ACC_FINAL, ACC_SYNTHETIC
        Code:
          stack=1, locals=0, args_size=0
             0: getstatic     #22                 // Field BAZ:Ljava/lang/String;
             3: areturn
          LineNumberTable:
            line 1: 0
        RuntimeInvisibleAnnotations:
          0: #15()
    }
    [...]

    Notice the access$getBAZ$cp() static method? That’s the synthetic accessor the companion object uses under the hood; from Java, the value is retrieved through the companion object:

    public class FromJava {
    
        public static void main(String... args) {
            Baz.Companion.getBAZ();
        }
    }

    Conclusion

    While knowledge of the synthetic flag is not required in the day-to-day work of a JVM developer, it can be helpful to understand some of the results returned by the reflection API.

    Categories: Java Tags: JVM, bytecode, javap, Kotlin
  • Flavors of Spring application context configuration

    Spring framework logo

    Every now and then, there’s an angry post or comment bitching about how the Spring framework is full of XML, how terrible and verbose it is, and how the author would never use it because of that. Of course, that is complete crap. First, when Spring was created, XML was pretty hot. J2EE deployment descriptors (yes, that was the name at the time) were XML-based.

    Anyway, it’s 2017 folks, and there are multiple ways to skin a cat. This article aims at listing the different ways a Spring application context can be configured so as to enlighten the aforementioned crowd - and stop the trolling around Spring and XML.

    XML

    XML was the first way to configure the Spring application context. Basically, one creates an XML file with a dedicated namespace. It’s very straightforward:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans 
               http://www.springframework.org/schema/beans/spring-beans.xsd">
        <bean id="foo" class="ch.frankel.blog.Foo">
            <constructor-arg value="Hello world!" />
        </bean>
        <bean id="bar" class="ch.frankel.blog.Bar">
            <constructor-arg ref="foo" />
        </bean>
    </beans>
    

    The next step is to create the application context, using one of the dedicated classes:

    ApplicationContext ctx = new ClassPathXmlApplicationContext("ch/frankel/blog/context.xml");
    ApplicationContext ctx = new FileSystemXmlApplicationContext("/opt/app/context.xml");
    ApplicationContext ctx = new GenericXmlApplicationContext("classpath:ch/frankel/blog/context.xml");
    

    XML’s declarative nature enforces simplicity at the cost of extra verbosity. It’s orthogonal to the code - it’s completely independent. Before the coming of JavaConfig, I still favored XML over self-annotated classes.

    Self-annotated classes

    As with every new feature/technology, when Java 5 introduced annotations, there was a rush to use them. In essence, a self-annotated class will be auto-magically registered into the application context.

    To achieve that, Spring provides the @Component annotation. However, to improve semantics, there are also dedicated annotations to differentiate between the 3 standard layers of the layered architecture principle:

    • @Controller
    • @Service
    • @Repository

    This is also quite straightforward:

    @Component
    public class Foo {
    
        public Foo(@Value("Hello world!") String value) { }
    }
    
    @Component
    public class Bar {
    
        @Autowired
        public Bar(Foo foo) { }
    }
    

    To scan for self-annotated classes, a dedicated application context is necessary:

    ApplicationContext ctx = new AnnotationConfigApplicationContext("ch.frankel.blog");
    

    Self-annotated classes are quite easy to use, but there are some downsides:

    • A self-annotated class becomes dependent on the Spring framework. For a framework based on dependency injection, that’s quite a problem.
    • Usage of self-annotations blurs the boundary between the class and the bean. As a consequence, the class cannot be registered multiple times, under different names and scopes into the context.
    • Self-annotated classes require autowiring, which has downsides on its own.

    Java configuration

    Given the above problems with self-annotated classes, the Spring framework introduced a new way to configure the context: JavaConfig. In essence, JavaConfig configuration classes replace XML files, but with compile-time safety instead of XML-schema runtime validation. This is based on two annotations: @Configuration for classes and @Bean for methods.

    The equivalent of the above XML is the following snippet:

    @Configuration
    public class JavaConfiguration {
    
        @Bean
        public Foo foo() {
            return new Foo("Hello world!");
        }
    
        @Bean
        public Bar bar() {
            return new Bar(foo());
        }
    }
    

    JavaConfig classes can be scanned like self-annotated classes:

    ApplicationContext ctx = new AnnotationConfigApplicationContext("ch.frankel.blog");
    

    JavaConfig is the way to configure a Spring application: it’s orthogonal to the code, and brings some degree of compile-time validation.
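
    It also lifts a limitation mentioned above for self-annotated classes: since beans are declared in the configuration class, the same class can be registered several times, under different names and scopes. A minimal sketch of what that could look like:

    @Configuration
    public class MultipleFooConfiguration {

        @Bean
        public Foo defaultFoo() {
            return new Foo("Hello world!");
        }

        @Bean
        @Scope("prototype")
        public Foo prototypeFoo() {
            return new Foo("Hello again!");
        }
    }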

    Groovy DSL

    Spring 4 added a way to configure the context via a Groovy Domain-Specific Language. The configuration takes place in a Groovy file, with the beans element as its root.

    beans {
        foo Foo, 'Hello world!'
        bar Bar, foo
    }
    

    There’s an associated application context creator class:

    ApplicationContext ctx = new GenericGroovyApplicationContext("ch/frankel/blog/context.groovy");
    

    I’m not a Groovy developer, so I never used that option. But if you are, it makes a lot of sense.

    Kotlin DSL

    Groovy has been unceremoniously kicked out of the Pivotal portfolio some time ago. There is no correlation, but Kotlin has since found its way in. It’s no wonder that the upcoming release of Spring 5 provides a Kotlin DSL.

    package ch.frankel.blog
    
    fun beans() = beans {
        bean { Foo("Hello world!") }
        bean { Bar(ref()) }
    }
    

    Note that while bean declaration is explicit, wiring is implicit, as with JavaConfig @Bean methods that declare their dependencies as parameters.
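
    For reference, the JavaConfig counterpart of that implicit wiring is a @Bean method taking its dependency as a parameter, which the container resolves. A small sketch reusing the classes from above:

    @Configuration
    public class ParameterWiringConfiguration {

        @Bean
        public Foo foo() {
            return new Foo("Hello world!");
        }

        @Bean
        public Bar bar(Foo foo) {
            // The Foo argument is injected by the container, no explicit foo() call needed
            return new Bar(foo);
        }
    }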

    In contrast to the configuration flavors mentioned above, the Kotlin DSL needs an existing context to register beans in:

    import ch.frankel.blog.beans
    
    fun register(ctx: GenericApplicationContext) {
        beans().invoke(ctx)
    }
    

    I’ve only used the Kotlin DSL to play a bit with it for a demo, so I cannot say much about its pros and cons.

    Conclusion

    So far, the JavaConfig alternative is my favorite: it’s orthogonal to the code and provides some degree of compile-time validation. As a Kotlin enthusiast, I’m also quite eager to try the Kotlin DSL in large projects to experience its pros and cons first-hand.

  • On exceptions

    When Java came out some decades ago, it was pretty innovative at the time. In particular, its exception handling mechanism was a great improvement over what C/C++ offered. For example, when reading from a file, a lot of things can go wrong: the file can be absent, it can be read-only, etc.

    The associated Java-like pseudo-code would be akin to:

    File file = new File("/path");
    
    if (!file.exists) {
        System.out.println("File doesn't exist");
    } else if (!file.canRead()) {
        System.out.println("File cannot be read");
    } else {
        // Finally read the file
        // Depending on the language
        // This could span several lines
    }
    

    The idea behind separate try and catch blocks was to separate business code from exception-handling code.

    try {
        File file = new File("/path");
        // Finally read the file
        // Depending on the language
        // This could span several lines
    } catch (FileNotFoundException e) {
        System.out.println("File doesn't exist");
    } catch (FileNotReadableException e) {
        System.out.println("File cannot be read");
    }
    

    Of course, the above code is useless standalone. It probably makes up the body of a dedicated method for reading files.

    public String readFile(String path) {
        if (!file.exists) {
            return null;
        } else if (!file.canRead()) {
            return null;
        } else {
            // Finally read the file
            // Depending on the language
            // This could span several lines
            return content;
        }
    }
    

    One of the problems with the above branches is that they return null. Hence:

    1. Calling code needs to check for null values every time
    2. There’s no way to know whether the file was not found or whether it was not readable.

    Using a more functional approach fixes the first issue, and hence allows methods to be composed:

    public Optional<String> readFile(String path) {
        if (!file.exists) {
            return Optional.empty();
        } else if (!file.canRead()) {
            return Optional.empty();
        } else {
            // Finally read the file
            // Depending on the language
            // This could span several lines
            return Optional.of(content);
        }
    }
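
    Because the return type is now an Optional, calling code can chain operations instead of null-checking. A small usage sketch:

    // No null check anywhere: an absent value simply short-circuits the chain
    int length = readFile("/path")
            .map(String::trim)
            .map(String::length)
            .orElse(0);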
    

    Sadly, it changes nothing about the second problem. Followers of a purely functional approach would probably discard the previous snippet in favor of something like that:

    public Either<String, Failure> readFile(String path) {
        if (!file.exists) {
            return Either.right(new FileNotFoundFailure(path));
        } else if (!file.canRead()) {
            return Either.right(new FileNotReadableFailure(path));
        } else {
            // Finally read the file
            // Depending on the language
            // This could span several lines
            return Either.left(content);
        }
    }
    

    And presto, there’s a nice improvement over the previous code. It’s now more meaningful, as it tells exactly why the call failed (if it did), thanks to the right part of the return value.

    Unfortunately, one issue remains, and not a small one. What about the calling code? It needs to handle the failure. Or, more probably, let its own calling code handle it, and so on, and so forth, up to the topmost layer. For me, that makes it impossible to consider the exception-free functional approach an improvement.

    For example, this is what happens in Go:

        items, err := todo.ReadItems(file)
        if err != nil {
            fmt.Printf("%v\n", err)
        }
    

    This is pretty fine if the code ends here. But otherwise, err has to be passed to the calling code, and all the way up, as described above. Of course, there’s the panic keyword, but it seems it’s not the preferred way to handle exceptions.

    The strangest part is that this is exactly what people complain about with Java’s checked exceptions: they have to be handled at the exact location where they appear, or the method signature has to be changed accordingly.

    For this reason, I’m all in favor of unchecked exceptions. Their only downside is that they break purely functional programming - throwing an exception is considered a side-effect. Unless you’re working with a purely functional approach, there is no incentive to avoid unchecked exceptions.
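
    As a sketch of what that looks like in practice (my code, not from the original post), the file-reading method can wrap the checked IOException into an unchecked one and let it bubble up:

    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;

    public class FileContent {

        public static String readFile(String path) {
            try {
                return new String(Files.readAllBytes(Paths.get(path)), StandardCharsets.UTF_8);
            } catch (IOException e) {
                // Wrap the checked exception: callers may handle it wherever it makes sense, or not at all
                throw new UncheckedIOException(e);
            }
        }
    }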

    Moreover, languages and frameworks may provide hooks to handle exceptions at the topmost level. For example, on the JVM, they include:

    • In the JDK, Thread.setDefaultUncaughtExceptionHandler()
    • In Vaadin, VaadinSession.setErrorHandler()
    • In Spring MVC, @ExceptionHandler
    • etc.
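
    As an illustration of the first hook, a minimal sketch of a default uncaught exception handler on the JDK:

    public class TopLevelHandling {

        public static void main(String... args) {
            // Invoked for any exception that bubbles up to the top of a thread
            Thread.setDefaultUncaughtExceptionHandler((thread, exception) ->
                    System.err.println("Uncaught in " + thread.getName() + ": " + exception));

            throw new IllegalStateException("Let it bubble up");
        }
    }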

    This way, you can let your exceptions bubble up to the place they can be handled in the way they should. Embrace (unchecked) exceptions!

    Categories: Development Tags: software design
  • You're not a compiler!

    Passed exam

    At conferences, it’s common to get gifts at booths. In general, you need to pass some sort of challenge first.

    An innocent puzzle

    Some of those challenges involve answering a code puzzle, like the following:

    What would be the output of running the following snippet:

    public class Sum {
    
        public static void main(String[] args) {
            System.out.println(0123 + 3210);
        }
    }
    1. 3293
    2. 3333
    3. The code doesn’t compile
    4. The code throws an IllegalArgumentException
    5. None of the above

    My usual reaction is either to leave the booth immediately, or if I’ve some time, to write the code down on my laptop to get the result (or search on Google). You could say I’m lazy but I prefer to think I’m an expert at optimizing how I spend my time.

    Simple but useless

    There is a single advantage to this approach: anybody can grade the answers - if the correct results are provided. And that’s all.

    Otherwise, this kind of puzzle brings absolutely nothing to the table. This wouldn’t be an issue if it stayed on conference booths, as a "fun" game during breaks. Unfortunately, the practice has become pretty ubiquitous, used to filter out candidates during recruiting or to deliver certifications.

    In both cases, skills need to be assessed, and the easy solution is a multiple-choice question test. Of course, if one has access to neither an IDE nor Google, then it becomes a real challenge. But in the case of the above sample, what’s being checked? The knowledge of a very specific corner case - a usage I would disallow on my projects anyway, for this exact reason: it’s a corner case! And I want my code to be as free of corner cases as possible.
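
    For the record, the trap in the snippet above is the leading zero, which turns the literal into an octal number - something trivially verified by running the code rather than memorizing it:

    public class Sum {

        public static void main(String[] args) {
            System.out.println(0123);        // 83: the leading zero makes this an octal literal
            System.out.println(0123 + 3210); // 3293, i.e. answer 1
        }
    }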

    Another sample

    Here is another kind of puzzle you can stumble upon:

    In the class javax.servlet.http.HttpServlet, what’s the correct signature of the service() method?:

    1. public void service(ServletRequest, ServletResponse)
    2. public void service(HttpServletRequest, HttpServletResponse)
    3. public int service(ServletRequest, ServletResponse)
    4. public int service(HttpServletRequest, HttpServletResponse)
    5. protected void service(ServletRequest, ServletResponse)
    6. protected void service(HttpServletRequest, HttpServletResponse)
    7. protected int service(ServletRequest, ServletResponse)
    8. protected int service(HttpServletRequest, HttpServletResponse)

    This kind of question asserts nothing but the capacity to remember something that is given by the most basic of IDEs or the JavaDoc.

    In fact, the complete title of this post should have been "You’re not a compiler, nor a runtime, nor the documentation".

    The core issue

    There is not a single proof of correlation between correctly answering any of the above questions and real-world proficiency in the skills they are meant to assess. Is there an alternative? Yes. Don’t simply check shallow knowledge; check what you actually want to assess, under the same conditions as real work, including IDEs, documentation, Google, etc.

    • You want to check if candidates are able to read code? Make them read your own code - or similar code if your code is secret.
    • You want to check if candidates are able to write code? Make them write code e.g. fix an existing or already-fixed bug.

    The same can be done in the context of certifications.

    Of course, the downside is that it takes time. Time to prepare the material. Time to analyze what the candidate produced. Time to debrief the candidate.

    Is this feasible? Unfortunately for those who would cast it aside as a utopia, it already exists. The best example is the Java EE Enterprise Architect certification. Though the first step is a multiple-choice question test, the other steps include a project-like assignment and an associated essay.

    Another example is much more personal. I work as a part-time lecturer in public higher education organizations. In some courses, I evaluate students by giving them an assignment: develop a very small application. The assignment takes the form of specifications, as a business analyst would write them.

    Conclusion

    For the sake of our industry, let’s stop pretending puzzles are worth anything beyond having some fun with colleagues. You should be more concerned about the consequences of a bad hire than about the time spent assessing the skills of a potential hire. There are alternatives - use them. Or face the consequences.

    Categories: Miscellaneous Tags: hiring, recruiting, certification
  • Rise and fall of JVM languages

    Rising trend on a barchart

    Every now and then, there’s a post predicting the death of the Java language. The funny thing is that none of them commits to a date. But to be honest, they are all probably true. This is the fate of every language: to disappear into oblivion - or more precisely, to be used less and less for new projects. The question is what will replace them.

    Last week saw another such article on InfoQ. At least, this one told about a possible replacement, Kotlin. It got me thinking about the state of the JVM languages, and trends. Note that trends have nothing to do with the technical merits and flaws of each language.

    I started developing in Java in late 2001. At that time, Java was really cool. Every young developer wanted to work on so-called new technologies: either .NET or Java, as older developers were stuck on Cobol. I had studied C and C++ in school, and memory management in Java was so much easier. I was happy with Java… but not everyone was.

    Groovy came into existence in 2003. I don’t remember when I first learned about it; I just ignored it, as I had no need for a scripting language then. In the context of developing enterprise-grade applications with a long lifespan and a team of many developers, static typing was a huge advantage over dynamic typing. Having to create tests to check the type system was a net loss. The only time I had to create scripts was as a WebSphere administrator: the choice was between Python and Tcl.

    Scala came one year later, in 2004. I don’t remember when and how I heard about it, but it was much later. In opposition to Groovy, I decided to give it a try. The main reason was my long-standing interest in writing "better" code - read: more readable and maintainable. Scala being statically typed, it was closer to what I was looking for. I followed the Coursera course Functional Programming Principles in Scala. It had three main consequences:

    • It questioned my way to write Java code. For example, why did I automatically generate getters and setters when designing a class?
    • I decided Scala made it too easy to write code unreadable for most developers - including myself
    • I started looking for other alternative languages

    After Groovy and Scala came the second generation (third if you count Java as the first) of JVM languages, including Ceylon, Kotlin and Xtend.

    After a casual glance at them, I became convinced they had not much traction, and were not worth investing my time.

    Some years ago, I decided to teach myself basic Android development, to be able to understand the context mobile developers work in. Oh boy! After years of developing Java EE and Spring applications, that was a surprise - and not a pleasant one. It was like being sent back a decade into the past. The Android API is so low-level… not to even mention testing the app locally. After a quick search around, I found that Kotlin was mentioned in a lot of places, and finally decided to give it a try. I fell in love immediately: with Kotlin, I could improve the existing crap API into something better, even elegant, thanks to extension functions. I dug more into the language, and started using Kotlin for server-side projects as well. Then the Spring framework announced its integration of Kotlin. And at Google I/O, Google announced its support of Kotlin on Android.

    This post is about my own personal experience and the opinion I formed from it. If you prefer the comfort of reading only posts you agree with, feel free to stop reading at this point.

    Apart from my own experience, what is the current state of those languages? I ran a quick search on Google Trends.

    Overview of JVM languages on Google Trends

    There are a couple of interesting things to note:

    • Google has recognized search terms, i.e. "Programming Language", for Scala, Groovy and Kotlin, but not for Ceylon and Xtend. For Ceylon, I can only assume it’s because Ceylon is a popular location. For Xtend, I’m afraid there are just not enough Google searches.
    • Scala is by far the most popular, followed by Groovy and Kotlin. I have no real clue about the scale.
    • The Kotlin peak in May is correlated with Google’s support announcement at Google I/O.
    • Most searches for Scala and Kotlin originate from China, while Groovy is much more balanced regarding locations.
    • Scala searches strongly correlate with the term "Spark", Kotlin searches with the term "Android".

    Digging a bit further may uncover interesting facts:

    • Xtend is not dead, because it has never been alive. I’ve never read any post about it, and never listened to a conference talk about it either.
    • In 2017, Red Hat gave Ceylon to the Eclipse Foundation, creating Eclipse Ceylon. A private actor giving away software to a foundation might be interpreted in different ways. In this case and despite all reassuring talks around the move, this is not a good sign for the future of Ceylon.
    • In 2015, Pivotal stopped sponsoring Groovy, and it moved to the Apache Foundation. While I believe Groovy has a wide enough support base, and a unique niche on the JVM - scripting - it’s also not a good sign. This correlates with the commit frequency of the core Groovy committers: their number of commits has drastically decreased - to the point of stopping entirely for some.
    • Interestingly enough, both Scala and Kotlin recently invaded other spaces, transpiling to JavaScript and compiling to native.
    • In Java, JEP 286 is a proposal to enhance the language with type inference, a feature already provided by Scala and Kotlin. It’s however limited to only local variables.
    • Interestingly enough, there are efforts to improve Scala compilation time by keeping only a subset of the language. Which then raises the question, why keep Scala if you ditch its powerful features (such as macros)?

    I’m not great at forecasting, but here are my 2 cents:

    1. Groovy has its own niche - scripting, which leaves Java, Scala and Kotlin vying for the pure application development space on the server-side JVM.
    2. Scala also carved its own space. Scala developers generally consider the language superior to Java (or Kotlin) and won’t migrate to another one. However, due to Spring and Google announcements, Kotlin may replace Scala as the language developers go to when they are dissatisfied with Java.
    3. Kotlin has won the Android battle. Scala ignored this area in the past, and won’t invest in it in the future given Kotlin is so far ahead in the game.
    4. Kotlin’s rise on the mobile was not an intended move, but rather a nice and unexpected surprise. But JetBrains used it as a way forward as soon as they noticed the trend.
    5. Kotlin interoperability with Java is the killer feature that may convince managers to migrate legacy projects to or start new projects with Kotlin. Just as non-breaking backward compatibility was for Java.

    I’d be very much interested in your opinion, dear readers, even (especially?) if you disagree with the above. Just please be courteous and provide facts when you can to prove your points.

    Categories: Miscellaneous Tags: trend analysis, Kotlin, Xtend, Scala, Groovy, Ceylon
  • Automating generation of Asciidoctor output

    Asciidoctor logo

    I’ve been asked to design new courses for the next semester at my local university. Historically, I have created course slides with PowerPoint and lab instructions with Word. However, there are other possible alternatives for slides and documents.

    I also recently wrote some lab documents with Asciidoctor, versioned with Git. The build process generates HTML pages and those are hosted on GitHub pages. The build is triggered at every push.

    This solution has several advantages for me:

    • I’m in full control of the media. Asciidoctor files are kept on my hard drive, and any other Git repository I want, public or private, in the cloud or self-hosted.
    • The Asciidoctor format is really great for writing documentation.
    • With one source document, there can be multiple output formats (e.g. HTML, PDF, etc.).
    • Hosted on Github/GitLab Pages, HTML generation can be automated for every push on the remote.

    Starting point

    Content should be released under the Creative Commons license. That means there’s no requirement for a private repository. In addition, a public repository will allow students to make pull requests. Both GitHub and GitLab provide free page hosting. Since developers are in general more familiar with GitHub than with GitLab, the former will be the platform of choice.

    Content is made from courses and labs:

    • Courses should be displayed as slides
    • Labs as standard HTML documents

    Setup

    Setup differs slightly for courses and labs.

    Courses

    To display slides, let’s use Reveal.js. Fortunately, Asciidoctor Reveal.js makes it possible to generate slides from Asciidoctor sources. With Ruby, the setup is pretty straightforward, thanks to the documentation:

    1. Install bundler:
      gem install bundler
    2. Create a Gemfile with the following content:
      source 'https://rubygems.org'
      
      gem 'asciidoctor-revealjs'
    3. Configure bundler:
      bundle config --local github.https true
      bundle --path=.bundle/gems --binstubs=.bundle/.bin
    4. Finally, generate HTML:
      bundle exec asciidoctor-revealjs source.adoc

    Actually, the generation command is slightly different in my case:

    bundle exec asciidoctor-revealjs -D output/cours\ (1)
                                     -a revealjs_history=true\ (2)
                                     -a revealjs_theme=white\ (3)
                                     -a revealjs_slideNumber=true\ (4)
                                     -a linkcss\ (5)
                                     -a customcss=../style.css\ (6)
                                     -a revealjsdir=https://cdnjs.cloudflare.com/ajax/libs/reveal.js/3.5.0\ (7)
                                     cours/*.adoc (8)
    1 Set the HTML output directory
    2 Explicitly push the URL into the browser history at each navigation step
    3 Use the white theme (by default, it’s black)
    4 Display the slide number - it’s just mandatory for reference purpose
    5 Create an external CSS file instead of embedding the styles into each document - makes sense if there is more than one document
    6 Override some styles
    7 Reference the JavaScript framework to use - could be a local location
    8 Generate all documents belonging to a folder - instead of just one specific document

    Labs

    To generate standard HTML documents from Asciidoctor is even easier:

    bundle exec asciidoctor -D output/tp tp/*.adoc

    Automated remote generation

    As a software developer, it’s our sacred duty to automate as much as possible. In this context, it means generating HTML pages from Asciidoctor sources on each push to the remote repository.

    While Github doesn’t provide any build tool, it integrates greatly with Travis CI. I’ve already written about the process to publish HTML on Github Pages. The only differences in the build file come from:

    1. The project structure
    2. The above setup

    Automated local generation

    So far, so good. Everything works fine: every push to the remote generates the site. The only drawback is that in order to preview the site, one has either to push… or to type the whole command line every time. The bash history helps somewhat, until some other command is required.

    As a user, I want to preview the HTML automatically in order to keep me from writing the same command-line over and over.

    — Me

    This one is a bit harder, because it’s not related to Asciidoctor. A full-blown solution with LiveReload and everything would probably be overkill (read: I’m too lazy). But I’ll be happy enough with a watch over the local file system, and the triggering of a command when changes are detected. As my laptop runs on OSX, here’s a solution that "works on my machine". It is based on the launchd process and the plist format.

    This format is XML-based, but "weakly-typed" and based on the order of elements. For example, key value pairs are defined as such:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>key1</key>
      <string>value1</string>
      <key>key2</key>
      <string>value2</string>
    </dict>
    </plist>

    A couple of things I had to find out on my own - I had no prior experience with launchd:

    1. A .plist file should be named as per the key named Label.
    2. It should be located in the ~/Library/LaunchAgents/ folder.
    3. In this specific case, the most important part is to understand how to watch the filesystem. It’s achieved with the WatchPaths key associated with the array of paths to watch.
    4. The second most important part is the command to execute, ProgramArguments. The syntax is quite convoluted: every argument on the command line (as separated by spaces) should be an element in an array.
      It seems the $PATH is not initialized with environment variables of my own user, so the full path to the executable should be used.
    5. As debugging is mandatory - at least with the first few runs, feel free to use StandardErrorPath and StandardOutPath to respectively write standard err and out in files.

    The final .plist looks something like that:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
      <key>Label</key>
      <string>ch.frankel.generatejaveecours.plist</string>
      <key>WorkingDirectory</key>
      <string>/Users/frankel/projects/course/javaee</string>
      <key>Program</key>
      <string>/usr/local/bin/bundle</string>
      <key>ProgramArguments</key>
      <array>
        <string>/usr/local/bin/bundle</string>
        <string>exec</string>
        <string>asciidoctor-revealjs</string>
        <string>-a</string>
        <string>revealjs_history=true</string>
        <string>-a</string>
        <string>revealjs_theme=white</string>
        <string>-a</string>
        <string>revealjs_slideNumber=true</string>
        <string>-a</string>
        <string>linkcss</string>
        <string>-a</string>
        <string>customcss=../style.css</string>
        <string>-a</string>
        <string>revealjsdir=https://cdnjs.cloudflare.com/ajax/libs/reveal.js/3.5.0</string>
        <string>cours/*.adoc</string>
      </array>
      <key>WatchPaths</key>
      <array>
        <string>/Users/frankel/projects/course/javaee/cours</string>
      </array>
      <key>StandardErrorPath</key>
      <string>/tmp/asciidocgenerate.err</string>
      <key>StandardOutPath</key>
      <string>/tmp/asciidocgenerate.out</string>
    </dict>
    </plist>

    To finally launch the daemon, use the launchctl command:

    launchctl load ~/Library/LaunchAgents/ch.frankel.generatejaveecours.plist

    Conclusion

    While automating HTML page generation on every push is quite straightforward, previewing HTML before the push requires manual action. However, with a bit of research, it’s possible to scan for changes on the file system and automate generation on the laptop as well. Be proud to be lazy!

    Categories: Development Tags: build automation, AsciiDoctor