• On developer shortage

    A laptop on a table with nobody sitting in front

    Who hasn’t heard that there’s a developer shortage? If you’re in the software industry, somebody repeats it every now and then - especially on the recruiting side. To handle it, companies come up with creative solutions - some sponsor education initiatives, or even create their own. However, I don’t think that’s the right answer.

    Of course, there’s no doubt that there’s a shortage of good developers. Yet, that’s true for every industry: there’s a lack of good plumbers, good carpenters, good car mechanics, etc. So why is there such pathos around it in IT? I’ve come up with several reasons that boil down to a single point. Let me elaborate.

    Unrelated technologies/skills

    Have you ever written a job description that looks like a Christmas wish list? I mean, a long list of mostly unrelated technologies. Like front-end and back-end, or sysadmin skills combined with any kind of development skills.

    And while I’m at it, let me tell you there’s no such thing as a full-stack developer. At least, not in the sense you think. I have designed systems from front to back, and I’ve fixed bugs on the front-end, but I’m not on par with a regular experienced front-end developer.

    Yet, every day I learn new things - which I try to share here. Do you believe I can learn them in my free time, but not as a new hire?


    University degrees

    Does your job description require a university degree? A degree in computer science? A bachelor’s or a master’s?

    If that’s the case, it means you believe those university degrees somehow prepare people for the job. Or worse, you don’t really believe it, but at least it will protect you if the hire turns out not to be the right fit.

    Do universities teach object-oriented programming? Or mutation testing? Or testing at all? How would full-time teachers know about modern build pipelines? About Docker? They do know a lot about sorting algorithms, though - which you probably won’t need to implement yourself in the industry.


    Certifications

    Do you require certifications for the job? Of course, it might make sense to require a Java certification for a Java job. But do you know how people get certified? Do you know what it really entails?

    Well, certifications are based on multiple-choice questions, mostly testing whether you know the API by heart. In real life, I have an IDE - and Google - if I forget it.


    Age

    In most developed countries, selecting candidates by age is not allowed. However, there are subtle ways to filter by age. Asking for a number of years of experience is one of them: 5 years of experience probably means around 28 years old, counting a master’s degree.

    Thing is, I’ve worked in the same team as an Android developer who had 5 years of experience. Only he was 21, as he had started developing on Android at age 16. Would you hire him, since he fulfills the requirements, or is he too young for your taste?

    Too young, too old, not the right gender, not the right ethnicity… There are a lot of biases that might stand between you and potentially capable hires.

    Work onsite

    Do you require your developers to work onsite? 100% of the time? Why?

    I’ve worked with remote colleagues. I work with remote colleagues now. Some companies are 100% remote. Are they making bad software? If you believe people need to meet each other once in a while, does that mean they need to commute to the office every day?

    For sure, there are differences between working while co-located and not. But there are a lot of tips and tricks available to make it work.

    Creating a team is about building trust. Trust doesn’t appear automatically by co-locating people.

    Dress code

    Is there a specific dress code in your company? I’m referring to a dress code that is not compatible with the average developer’s dress code, e.g. a suit and tie.

    Do you think developers work better when they dress a certain way? Or is your reason that developers are part of the company’s image and customers might see them (hint: they never do)? Or do you think that, in order to cooperate with business people, they need to dress the same way? Then think of a car mechanic who would service your car in a suit and tie.

    Hiring process itself

    What’s your hiring process? How many steps completely unrelated to the job do I need to pass in order to get to the real interviews? Do I need to answer questions such as "Where do you see yourself in 5 years?" or "What are your 3 best qualities?"

    I’m not saying this is a rule, but I’ve never seen an HR team that knew about the specifics of the developer job. Having a hiring process that doesn’t take them into account is plain crazy, especially since HR is but the first step.

    HR should be a help, not a hindrance.


    Salary

    This one is pretty obvious, but do you pay developers at market level? Do you include the salary range in your job description?

    When browsing job ads, asking about compensation is an extra effort. Most developers won’t take the time and will just discard your job description. And if they do ask and the offer doesn’t match, it’s time lost for both parties.

    There are only 2 possible reasons not to include your compensation policy. Either you’re afraid of telling current employees - and that means something is already broken. Or you’re afraid of telling competitors - and let’s be blunt, they probably already know, because people talk about their compensation.


    Consideration

    Last but not least, how do you treat developers inside your company? Are they a true part of the value chain? Are they only accountable, or also decision makers? Do you threaten to outsource their jobs to India every day to keep them "under pressure"?

    Developers are in touch with each other - through the same university, the same technology user group, whatever… They know about your company and its culture, its pros and its cons.


    Do any of the above points ring a bell? There’s no developer shortage; there are only barriers you set yourself between your company and a potential workforce. Break the barriers.

    Categories: Miscellaneous Tags: hiring
  • Is Object-Oriented Programming compatible with an enterprise context?

    A pair of glasses on a background of screens displaying code

    This week, during a workshop related to a Java course I give at a higher education school, I noticed the code produced by the students was mostly - OK, entirely - procedural. In fact, though the Java language touts itself as an Object-Oriented language, it’s not uncommon to find such code written by professional developers in enterprises. For example, the JavaBean specification is in direct contradiction with one of OOP’s main principles, encapsulation.

    Another example is the widespread controller, service and DAO architecture found equally in Java EE and Spring applications. In that context, entities are generally anemic, while all business logic sits in the service layer. While this is not bad per se, this design separates state from behaviour and is the opposite of true OOP.

    Both Java EE and the Spring framework enforce this layered design. For example, in Spring, there’s one annotation for each layer: @Controller, @Service and @Repository. In the Java EE world, only @EJB instances - the service layer - can be made transactional.

    This post aims to reconcile the OOP paradigm with the layered architecture. I’ll be using the Spring framework to highlight my point because I’m more familiar with it, but I believe the same approach could be applied to pure Java EE apps.

    A simple use-case

    Let’s have a simple use-case: from an IBAN number, find the associated account with the relevant balance. Within a standard design, this could look like this:

    class ClassicAccountController(private val service: AccountService) {
        fun getAccount(@PathVariable("iban") iban: String) = service.findAccount(iban)
    }
    class AccountService(private val repository: ClassicAccountRepository) {
        fun findAccount(iban: String) = repository.findOne(iban)
    }
    interface ClassicAccountRepository : CrudRepository<ClassicAccount, String>
    @Entity
    @Table(name = "ACCOUNT")
    class ClassicAccount(@Id var iban: String = "", var balance: BigDecimal = BigDecimal.ZERO)

    There are a couple of issues there:

    1. The JPA specification mandates a no-arg constructor. Hence, it’s possible to create ClassicAccount instances with an empty IBAN.
    2. There’s no validation of the IBAN. A full round-trip to the database is required to check whether an IBAN is valid.
    Yes, there’s no currency. It’s a simple example, remember?

    Being compliant

    In order to comply with JPA’s no-arg constructor constraint - and because we use Kotlin - it’s possible to generate a synthetic constructor. That means the constructor is accessible through reflection, but not by calling it directly.

    If you use Java, tough luck, I don’t know about any option to fix that.

    Adding validation

    In a layer architecture, the service layer is the obvious place to put the business logic, including validation:

    class AccountService(private val repository: ClassicAccountRepository) {
        fun findAccount(iban: String): ClassicAccount? {
            checkIban(iban)
            return repository.findOne(iban)
        }
        fun checkIban(iban: String) {
            if (iban.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
        }
    }

    In order to be more OOP-compliant, we must decide whether we should allow invalid IBAN numbers or not. It’s easier to forbid them altogether.

    @Entity
    @Table(name = "ACCOUNT")
    class OopAccount(@Id var iban: String, var balance: BigDecimal = BigDecimal.ZERO) {
        init {
            if (iban.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
        }
    }

    However, this means that we must first create an OopAccount instance to validate the IBAN - with a balance of 0, even if the actual balance is not 0. As with the empty IBAN, the code does not match the model. Even worse, to use the repository, we must reach into the OopAccount’s inner state.


    A more OOP-friendly design

    Improving the code requires a major rework of the class model, separating the IBAN from the account, so that the former can be validated and can access the latter. The Iban class serves both as the entry point and as the primary key of the account.

    @Entity
    @Table(name = "ACCOUNT")
    class OopAccount(@EmbeddedId var iban: Iban, var balance: BigDecimal)

    @Embeddable
    class Iban(@Column(name = "iban") val number: String,
               @Transient private val repository: OopAccountRepository) : Serializable {
        init {
            if (number.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
        }
        val account
            get() = repository.findOne(this)
    }
    Notice the returned JSON structure will be different from the one returned above. If that’s an issue, it’s quite easy to customize Jackson to obtain the desired result.

    With this new design, the controller requires a bit of change:

    class OopAccountController(private val repository: OopAccountRepository) {
        fun getAccount(@PathVariable("iban") number: String): OopAccount {
            val iban = Iban(number, repository)
            return iban.account
        }
    }

    The only disadvantage of this approach is that the repository needs to be injected into the controller, then be explicitly passed to the entity’s constructor.

    The final touch

    It would be great if the repository could automatically be injected into the entity when it’s created. Well, Spring makes it possible - though it’s not a very well-known feature - through Aspect-Oriented Programming. It requires the following steps:

    Add AOP capabilities to the application

    Adding the AOP dependency is quite straightforward: just add the relevant starter dependency to the POM:
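    As a sketch - assuming the standard Spring Boot AOP starter, whose version is managed by the Boot parent POM - the dependency could look like this:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
</dependency>
```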


    Then, the application must be configured to make use of it:

    @SpringBootApplication
    @EnableSpringConfigured // assumption: the spring-aspects annotation that enables @Configurable support
    class OopspringApplication
    Update the entity
    1. The entity must first be marked as a target for injection. Dependency injection will be done through autowiring.
    2. Then, the repository must be moved from a constructor argument to a field.
    3. Finally, the database-fetching logic can be moved into the entity:
      @Configurable(autowire = Autowire.BY_TYPE)
      class Iban(@Column(name = "iban") val number: String) : Serializable {
          private lateinit var repository: OopAccountRepository
          init {
              if (number.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
          }
          val account
              get() = repository.findOne(this)
      }
    Remember that field-injection is evil.
    Aspect weaving

    There are two ways to weave the aspect into the bytecode: either compile-time weaving or load-time weaving. I chose the latter, as it’s much easier to configure. It’s achieved through a standard Java agent.

    1. First, it needs to be added as a runtime dependency in the POM:
    2. Then, the Spring Boot plugin must be configured with the agent:
    3. Finally, the application must be configured accordingly:
      @SpringBootApplication
      @EnableSpringConfigured
      @EnableLoadTimeWeaving // assumption: enables the LTW infrastructure for the agent
      class OopspringApplication
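    For steps 1 and 2, here’s a sketch of what the POM changes might look like. The agent parameter of the Spring Boot Maven plugin and the spring-instrument path are assumptions to verify against the plugin version in use:

```xml
<!-- 1. The load-time weaving agent, needed at runtime only -->
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-instrument</artifactId>
    <scope>runtime</scope>
</dependency>

<!-- 2. Start the application with the agent attached -->
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <agent>${settings.localRepository}/org/springframework/spring-instrument/${spring.version}/spring-instrument-${spring.version}.jar</agent>
    </configuration>
</plugin>
```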

    And then?

    Of course, this sample leaves out an important piece of the design: how to update the balance of an account? The layered approach would use a setter for that, but that’s not OOP. Thinking about it, the balance of an account changes because there’s a transfer from another account. This could be modeled as:

    fun OopAccount.transfer(source: OopAccount, amount: BigDecimal) { ... }

    Experienced developers should see some transaction-management requirements sneaking in. I leave the implementation to motivated readers. The next step would be to cache the values, because hitting the database for each read and write would kill performance.


    There are a couple of points I’d like to make.

    First, the answer to the title question is a resounding 'yes'. The result is true OOP code, while still using a so-called enterprise-grade framework - namely, Spring.

    However, migrating to this OOP-compatible design came with some overhead. Not only did we rely on field injection, we had to bring in AOP with load-time weaving. The former is a hindrance during unit testing; the latter is a technology you definitely don’t want every team to use, as it makes apps more complex. And that’s only for a trivial example.

    Finally, this approach has a huge drawback: most developers are not familiar with it. Whatever its advantages, they first must be "conditioned" to have this mindset. And that might be a reason to continue using the traditional layered architecture.

    I’ve searched Google for scientific studies proving OOP is better in terms of readability and maintainability: I found none. I would be very grateful for pointers.
    Categories: JavaEE Tags: Object-Oriented Programming, OOP, Spring
  • The multiple usages of git rebase --onto

    Git Orange Logo

    I’m not a Git expert, and I regularly learn things in Git that change my view of the tool. When I was shown git rebase -i, I stopped over-thinking about my commits. When I discovered git reflog, I became more confident in rebasing. But I think one of the most important commands I was taught is git rebase --onto.

    IMHO, the documentation has room for improvement regarding the result of the option. Taking the image of a tree, it basically uproots a part of the tree to replant it somewhere else.

    Let’s have an example with the following tree:

    o---o---o---o master
          \---o---o branch1
                    \---o branch2

    Suppose we want to transplant branch2 from branch1 to master:

    o---o---o---o master
         \       \
          \       \---o' branch2
            \---o---o branch1

    This is a great use-case! On branch branch2, the command would be git rebase --onto master branch1. That roughly translates to: move everything on branch2 after branch1 to the tip of master. I try to remember the syntax by thinking of the first argument as the new base, and the second as the old one.
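    To make this concrete, here’s a disposable demo reproducing the transplant in a scratch repository (all file and commit names are made up for the example):

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"

# Recreate a simplified version of the tree from the diagrams above
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

echo a > file.txt && git add file.txt && git commit -qm "root commit"
MASTER=$(git symbolic-ref --short HEAD)  # 'master' or 'main', depending on local config

git checkout -qb branch1
echo b > file.txt && git commit -qam "first commit on branch1"
echo c > file.txt && git commit -qam "second commit on branch1"

git checkout -qb branch2
echo d > other.txt && git add other.txt && git commit -qm "only commit on branch2"

# Transplant branch2 from branch1 onto the main branch
git rebase -q --onto "$MASTER" branch1 branch2

# branch2 now contains the root commit plus its own commit, but none of branch1's
git log --oneline
```

    After the rebase, branch2’s history is the root commit followed by its own replayed commit; branch1’s two commits are gone from it.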

    So, what use-cases are there for moving parts of the tree?

    Delete commits

    While my first reflex when I want to delete commits is to git rebase -i, it’s not always the most convenient. It requires the following steps:

    1. Locating the first commit to be removed
    2. Effectively run the git rebase -i command
    3. In the editor, delete the line of every commit that needs to be removed
    4. Quit the editor

    If the commits to be removed are adjacent, it’s easier to rebase --onto, because you only need the new and the old commits and can do the "deletion" in a single command.

    Here’s an example:

    o---A---X---Y---Z master

    To remove the last 3 commits X, Y and Z, you just need:

    git rebase --onto A Z

    Long-lived remote branches

    While it’s in general a bad idea to have long-lived branches, it’s sometimes required.

    Suppose you need to migrate part of your application to a new framework, library, whatever. With small applications, a small task force can be dedicated to that. The main development team leaves for the weekend with instructions to commit everything before going home on Friday. When they come back on Monday, everything has been migrated.

    Sadly, life is not always so easy, and applications can be too big for such an ideal scenario. In that case, the task force works on a dedicated migration branch for more than a weekend, in parallel with the main team. But they need to keep up to date with the main branch, while still keeping their own work.

    Hence, every now and then, they rebase the migration branch onto the tip of master:

    git rebase --onto master old-root-of-migration

    This is different from merging, because you keep the history linear.

    Local branches

    Sometimes, I want to keep my changes locally, for a number of reasons. For example, I might tinker with additional (or harsher) rules for the code quality tool. In that case, I want to take time to evaluate whether this is relevant for the whole team or not.

    As above, this is achieved by regularly rebasing my local tinker branch onto master.

    As above, it lets me keep my history linear and change the commits relevant to tinkering if need be. Merging would prevent me from doing this.


    git rebase --onto has multiple use-cases. The most important ones relate to handling long-lived branches, whether local or remote.

    As always, it’s just one more tool in your toolbelt, so that not everything looks like a nail.

    As every command that changes the Git history, git rebase --onto should only change local commits. You’ve been warned!
    Categories: Development Tags: Git
  • Alternative navigator in Vaadin

    Vaadin Logo

    In Vaadin, to change the components displayed on the screen, there are a few options. The most straightforward way is to use the setContent() method on the UI. The most widespread way is to use the navigator.

    Views managed by the navigator automatically get a distinct URI, which can be used to be able to bookmark the views and their states and to go back and forward in the browser history.
    — Vaadin documentation

    This is the main asset for apps managing catalogs, such as e-commerce shops, or item management apps. Every item gets assigned a specific URL, through fragment identifiers.

    However, the one-view-per-URL trick is also a double-edged sword. It lets users bypass server-side navigation and directly type the URL of the view they want to display in the browser. Also, the View interface requires implementing the enter() method. Yet, this method is generally empty and thus just boilerplate.

    Let’s see how we can overcome those issues.

    A solution to the empty method

    With Java 8, it’s possible for an interface to define a default implementation. Vaadin takes advantage of that by providing a default empty implementation:

    public interface View extends Serializable {
        public default void enter(ViewChangeEvent event) {
        }
    }
    Even better, all methods of the View interface are default. That lets you implement the interface without writing any boilerplate code and rely on the default behaviour.

    A solution to the view per URI

    The default implementation

    The navigator API consists of the following classes:

    Navigator API class diagram

    As can be seen, handling the view through the URI occurs in the UriFragmentManager class. This implementation relies on the fragment identifier in the URL, but it’s only the default one. The topmost abstraction is NavigationStateManager, and it places no constraint on where to store the state.

    An alternative

    To prevent users from jumping to a view by typing a specific URL, one should store the state in a place inaccessible to them. For example, it’s feasible to store the state in cookies. A simple implementation could look like this:

    private const val STATE_KEY = "state"

    class CookieManager : NavigationStateManager {
        private var navigator: Navigator? = null

        override fun setNavigator(navigator: Navigator) {
            this.navigator = navigator
        }

        override fun getState(): String {
            val cookies = VaadinService.getCurrentRequest().cookies?.asList()
            val value = cookies?.find { it.name == STATE_KEY }?.value
            return when (value) {
                null -> ""
                else -> value
            }
        }

        override fun setState(state: String) {
            val cookie = Cookie(STATE_KEY, state)
            VaadinService.getCurrentResponse().addCookie(cookie)
        }
    }
    Creating a navigator using the previous state manager is very straightforward:

    class NavigatorUI : ViewDisplay, UI() {
        override fun init(vaadinRequest: VaadinRequest) {
            navigator = Navigator(this, CookieManager(), this)
        }
        override fun showView(view: View) {
            content = view as Component
        }
    }

    At this point, it’s possible to place components which change the view on the UI and spy upon the request and response. Here are the HTTP request and response of an AJAX call made by Vaadin to change the view:

    POST /UIDL/?v-uiId=0 HTTP/1.1
    Connection: keep-alive
    Content-Length: 297
    User-Agent: ...
    Content-Type: application/json; charset=UTF-8
    Accept: */*
    DNT: 1
    Accept-Encoding: gzip, deflate, br
    Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
    Cookie: state=Another; JSESSIONID=B584B759D78A0CDBA43496EF4AEB3F25
    HTTP/1.1 200
    Set-Cookie: state=Third
    Cache-Control: no-cache
    Content-Type: application/json;charset=UTF-8
    Content-Length: 850
    Date: Fri, 27 Oct 2017 09:19:42 GMT


    Before Vaadin 8, I personally didn’t use the Navigator API because of the requirement to implement the enter() method on the view. Java 8 provides default methods, and Vaadin 8 provides the default implementations. There’s now no reason not to use it.

    If the default fragment identifier way of navigating doesn’t suit your own use-case, it’s also quite easy to rely on another way to store the current view.

    More generally, in its latest versions, Vaadin makes it easier to plug in your own implementation if the default behaviour doesn’t suit you.

    Categories: JavaEE Tags: Vaadin, Navigator
  • Migrating a Spring Boot application to Java 9 - Modules

    Java 9 Logo

    Last week, I tried to make a Spring Boot app - the famous Pet Clinic - Java 9 compatible. It was not easy. I had to let go of a lot of features along the way. And all in all, the only benefit I got was improved String memory management.

    This week, I want to continue the migration by fully embracing the Java 9 module system.

    Configuring module meta-data

    Module information in Java 9 is implemented through a module-info.java file. The first step is to create such a file at the root of the source directory, with the module name:

    module org.springframework.samples.petclinic {
    }

    The rest of the journey can be heaven or hell. I’m fortunate to benefit from an IntelliJ IDEA license. The IDE tells you exactly which class it cannot read, and a wizard lets you add it to the module file. In the end, it looks like this:

    module org.springframework.samples.petclinic {
        requires java.xml.bind;
        requires javax.transaction.api;
        requires validation.api;
        requires hibernate.jpa;
        requires hibernate.validator;
        requires spring.beans;
        requires spring.core;
        requires spring.context;
        requires spring.tx;
        requires spring.web;
        requires spring.webmvc;
        requires spring.data.commons;
        requires spring.data.jpa;
        requires spring.boot;
        requires spring.boot.autoconfigure;
        requires cache.api;
    }

    Note that module configuration in the maven-compiler-plugin and maven-surefire-plugin can be removed.

    Configuration, the un-assisted way

    If you happen to be in a less than ideal environment, the process is the following:

    1. Run mvn clean test
    2. Analyze the error in the log to get the offending package
    3. Locate the JAR of said package
    4. If the JAR is a module, add the module name to the list of required modules
    5. If not, compute the automatic module name and add it to the list of required modules

    For example:

    [ERROR] ~/spring-petclinic/src/main/java/org/springframework/samples/petclinic/system/CacheConfig.java:[21,16]
      package javax.cache is not visible
    [ERROR]   (package javax.cache is declared in the unnamed module, but module javax.cache does not read it)

    javax.cache is located in cache-api-1.0.0.jar. It’s not a module, since there’s no module-info in the JAR. The automatic module name is cache.api. Write it as a requires directive in the module descriptor. Rinse and repeat.
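    As a rough sketch, the derivation can be approximated like this (the authoritative rules live in the java.lang.module.ModuleFinder Javadoc, and a JAR can also override the name with an Automatic-Module-Name manifest entry):

```shell
# Approximate the automatic-module-name rule: drop ".jar", cut the version
# suffix (first "-" followed by a digit), then map non-alphanumerics to dots.
auto_module_name() {
    name=$(basename "$1" .jar)
    name=$(printf '%s' "$name" | sed -E 's/-[0-9].*$//')
    printf '%s\n' "$name" | sed -E 's/[^A-Za-z0-9]+/./g'
}

auto_module_name cache-api-1.0.0.jar                  # prints cache.api
auto_module_name spring-data-jpa-1.11.7.RELEASE.jar   # prints spring.data.jpa
```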

    ASM failure

    Since the first part of this post, I’ve been made aware that Spring Boot 1.5 won’t be made Java 9-compatible. So let’s bump to Spring Boot 2.

    Bumping Spring Boot to 2.0.0.M5 requires some changes in the module dependencies:

    • hibernate.validator to org.hibernate.validator
    • validation.api to java.validation

    And just when you think it might work:

    Caused by: java.lang.RuntimeException
    	at org.springframework.asm.ClassVisitor.visitModule(ClassVisitor.java:148)

    This issue has already been documented. At this point, explicitly declaring the main class resolves the issue. The Spring Boot plugin also needs the java.xml.bind module at runtime:

            <jvmArguments>--add-modules java.xml.bind</jvmArguments>

    Javassist failure

    The app is now ready to be tested with mvn clean spring-boot:run. Unfortunately, a new exception comes our way:

    2017-10-16 17:20:22.552  INFO 45661 --- [           main] utoConfigurationReportLoggingInitializer :
    Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
    2017-10-16 17:20:22.561 ERROR 45661 --- [           main] o.s.boot.SpringApplication               :
     Application startup failed
      Error creating bean with name 'entityManagerFactory' defined in class path resource
        Invocation of init method failed; nested exception is org.hibernate.boot.archive.spi.ArchiveException:
         Could not build ClassFile

    A quick search points to an incompatibility between Java 9 and Javassist: Javassist is the culprit here. The dependency is required by Spring Data JPA, transitively via Hibernate. To fix it, exclude the transitive dependency and explicitly add the latest version:
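    A sketch of the POM fix - the Javassist version shown is an assumption; pick the latest Java 9-compatible release:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.javassist</groupId>
            <artifactId>javassist</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.javassist</groupId>
    <artifactId>javassist</artifactId>
    <version>3.22.0-GA</version>
</dependency>
```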


    Fortunately, versions are compatible - at least for our usage.

    It works!

    We did it! If you arrived at this point, you deserve a pat on the shoulder, a beer, or whatever you think you deserve.

    Icing on the cake, the Dev Tools dependency can be re-added.


    Migrating to Java 9 requires dealing with Jigsaw, whether you like it or not. At the very least, it means a painful trial-and-error, search-for-the-next-fix process, and removing important steps from the build process. While it’s interesting for library/framework developers to add an additional layer of access control to their internal APIs, it’s much less so for application developers. At this stage, it’s not worth moving to Java 9.

    I hope to conduct this experiment again in some months and to notice an improvement in the situation.

    Categories: Java Tags: Java 9, Spring Boot, modules, Jigsaw
  • Migrating a Spring Boot application to Java 9 - Compatibility

    Java 9 Logo

    With the coming of Java 9, there is a lot of buzz about how to migrate applications to the module system. Unfortunately, most of the articles written focus on simple Hello World applications. Or worse, regarding Spring applications, the sample app uses legacy practices - like XML, for example. This post aims to correct that by providing a step-by-step migration guide for a non-trivial, modern Spring Boot application. The sample app chosen is the Spring Pet Clinic.

    There are basically 2 steps to move to Java 9: first become compatible, then use the fully-fledged module system. This post covers the former; a future post will consider the latter.

    Bumping the Java version

    Once JDK 9 is available on the target machine, the first move is to bump the java.version from 8 to 9 in the POM:

        <!-- Generic properties -->

    Now, let’s mvn clean compile.

    Cobertura’s failure

    The first error along the way is the following:

    [ERROR] Failed to execute goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean (default) on project spring-petclinic:
     Execution default of goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean failed:
      Plugin org.codehaus.mojo:cobertura-maven-plugin:2.7 or one of its dependencies could not be resolved:
      Could not find artifact com.sun:tools:jar:0 at
       specified path /Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/../lib/tools.jar -> [Help 1]
    Cobertura is a free Java code coverage reporting tool.
    — https://github.com/cobertura/cobertura

    It requires access to the tools.jar that is part of JDK 8 (and earlier). One of the changes in Java 9 is the removal of that library; hence, the plugin is not compatible. This has already been logged as an issue. Given the last commit on the Cobertura repository is a year old, just comment out the Cobertura Maven plugin - and think about replacing Cobertura with JaCoCo instead.

    Wro4J’s failure

    The next error is the following:

    [ERROR] Failed to execute goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run (default) on project spring-petclinic:
     Execution default of goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run failed:
      An API incompatibility was encountered while executing ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run:
       java.lang.ExceptionInInitializerError: null
    wro4j is a free and Open Source Java project which will help you to easily improve your web application page loading time. It can help you to keep your static resources (js & css) well organized, merge & minify them at run-time (using a simple filter) or build-time (using maven plugin) and has a dozen of features you may find useful when dealing with web resources.
    — https://github.com/wro4j/wro4j

    This is referenced as a GitHub issue. Changes have been merged, but the issue is still open, as Java 9 compatibility should be part of the 2.0 release.

    Let’s comment out Wro4J for the moment.

    Compilation failure

    Compiling the project at this point yields the following compile-time errors:

    Error:(30, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    Error:(21, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    Error:(22, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)

    That means code on the classpath cannot access this module by default. It needs to be added manually with the --add-modules option of Java 9’s javac. Within Maven, this can be set using the maven-compiler-plugin:
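    A configuration along these lines passes the option to javac (a sketch only - the plugin version and surrounding POM structure are assumed, not taken from the project):

    ```xml
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
            <compilerArgs>
                <arg>--add-modules</arg>
                <arg>java.xml.bind</arg>
            </compilerArgs>
        </configuration>
    </plugin>
    ```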


    Now the project can compile.

    Test failure

    The next step sees the unit tests fail when running mvn test.

    The cause is the same, but it’s a bit harder to find. It requires checking the Surefire reports. Some contain exceptions with the following line:

    Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

    Again, test code cannot access the module. This time, however, the maven-surefire-plugin needs to be configured:

            <argLine>--add-modules java.xml.bind</argLine>

    This makes the tests work.

    Packaging failure

    If one thinks this is the end of the road, think again. The packaging phase also fails with a rather cryptic error:

    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar (default-jar) on project spring-petclinic:
     Execution default-jar of goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar failed:
      An API incompatibility was encountered while executing org.apache.maven.plugins:maven-jar-plugin:2.6:jar:
       java.lang.ExceptionInInitializerError: null
    Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    	at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.<clinit>(AbstractZipArchiver.java:116)

    This one is even harder to find: it requires a Google search to stumble upon the solution. The plexus-archiver is to blame. Simply bumping the maven-jar-plugin to the latest version - 3.2 at the time of this writing - pulls in a Java 9 compatible version of the archiver and solves the issue:
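    As a sketch, using the version cited above:

    ```xml
    <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-jar-plugin</artifactId>
        <version>3.2</version>
    </plugin>
    ```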


    Spring Boot plugin failure

    At this point, the project can finally be compiled, tested and packaged. The next step is to run the app through the Spring Boot Maven plugin, i.e. mvn spring-boot:run. But it fails again…​:

    [INFO] --- spring-boot-maven-plugin:1.5.1.RELEASE:run (default-cli) @ spring-petclinic ---
    [INFO] Attaching agents: []
    Exception in thread "main" java.lang.ClassCastException:
     java.base/jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to java.base/java.net.URLClassLoader
    	at o.s.b.devtools.restart.DefaultRestartInitializer.getUrls(DefaultRestartInitializer.java:93)
    	at o.s.b.devtools.restart.DefaultRestartInitializer.getInitialUrls(DefaultRestartInitializer.java:56)
    	at o.s.b.devtools.restart.Restarter.<init>(Restarter.java:140)
    	at o.s.b.devtools.restart.Restarter.initialize(Restarter.java:546)
    	at o.s.b.devtools.restart.RestartApplicationListener.onApplicationStartingEvent(RestartApplicationListener.java:67)
    	at o.s.b.devtools.restart.RestartApplicationListener.onApplicationEvent(RestartApplicationListener.java:45)
    	at o.s.c.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:167)
    	at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
    	at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:122)
    	at o.s.b.context.event.EventPublishingRunListener.starting(EventPublishingRunListener.java:68)
    	at o.s.b.SpringApplicationRunListeners.starting(SpringApplicationRunListeners.java:48)
    	at o.s.b.SpringApplication.run(SpringApplication.java:303)
    	at o.s.b.SpringApplication.run(SpringApplication.java:1162)
    	at o.s.b.SpringApplication.run(SpringApplication.java:1151)
    	at org.springframework.samples.petclinic.PetClinicApplication.main(PetClinicApplication.java:32)

    This is a documented issue: Spring Boot Dev Tools 1.5 is not compatible with Java 9.

    Fortunately, this bug is fixed in Spring Boot 2.0.0.M5. Unfortunately, this specific version is not yet available at the time of this writing. So for now, let’s remove the dev-tools and try to run again. It fails again, but this time the exception is a familiar one:

    Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

    Let’s add the required argument to the spring-boot-maven-plugin:

            <jvmArguments>--add-modules java.xml.bind</jvmArguments>

    The app can finally be launched and is accessible!


    Running a non-trivial legacy application on JDK 9 requires some effort. Worse, some important features had to be left by the wayside: code coverage and web performance enhancements. On the other side, the only meager benefit is the String memory space improvement. In the next blog post, we will try to improve the situation and actually make use of modules in the app.

    Categories: Java Tags: Java 9, Spring Boot, modules, Jigsaw
  • Truly immutable builds

    Baltic amber pieces with insects

    It sometimes happens that after a few years, an app is stable enough to go into hibernation mode. Though it’s used and useful, there are no changes to it and it happily lives its life. Then, after a while, someone decides to add new features again. Apart from simple things such as locating the sources, one of the most important requirements is being able to build the app. Though it may seem trivial, there are some things to think about. Here is some advice on how to make apps that can be built forever.

    I’ll use Maven as an example, but the advice below applies to any build tool.

    Immutable plugin versions

    For dependencies, Maven requires the version to be set. For plugins, Maven allows the version to be omitted; in that case, it fetches the latest one.

    Though always using the latest version might be seen as a benefit, it can break existing behavior.

    Rule 1

    Always explicitly set plugin versions. This includes all plugins used during the build, even those that are not explicitly configured, e.g. maven-surefire-plugin.
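    For instance, pinning the Surefire version in the POM - even with no other configuration - might look like this (the version number is purely illustrative):

    ```xml
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.20.1</version>
            </plugin>
        </plugins>
    </build>
    ```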

    Check in the build tool

    The second problem that may arise is the build tool itself. Which Maven version was used to build the app? Building the app with another version might not work - or worse, might build the app in a slightly different way, with unexpected side-effects.

    Hence, the build tool must be saved along with the sources of the app. In the Maven ecosystem, this can be done using the Maven wrapper.

    Rule 2
    1. Get the wrapper
    2. Check it in along with regular sources
    3. Use it for each and every build

    Check in JVM options

    The last step concerns JVM options, which are usually tweaked via the MAVEN_OPTS environment variable. It can be used to set the maximum amount of memory for the build, e.g. -Xmx, or to pass system properties to it, e.g. -Dmy.property=3. Instead of using the MAVEN_OPTS environment variable, such parameters should be set in a .mvn/jvm.config file and checked in along with the app sources. Note this is available since Maven 3.3.1.
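    As an illustration, a hypothetical .mvn/jvm.config could contain (values made up for the example):

    ```
    -Xmx2048m -Dmy.property=3
    ```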

    Rule 3

    Check in JVM build options via the .mvn/jvm.config file, along with the regular sources

    Check in the CI build file

    The build file relevant to the continuous integration server should be checked in as well. For some servers - Travis CI, GitLab, etc. - it’s pretty standard practice. For others - Jenkins - it’s a brand new feature.

    Rule 4

    Check in the CI server-specific build files along with the regular sources.

    Nice to have

    The above steps try to ensure that as many things as possible are immutable. While the version of the Java source code and the version of the generated bytecode can be set (via the source and target configuration parameters of the maven-compiler-plugin), one thing that cannot be set is the JDK itself.

    In order to keep it immutable throughout the whole application life, it’s advised to specify the exact JDK. How to do so depends a lot on the continuous integration server. For example, Travis CI allows it natively in the build file, while Jenkins might require using a Docker image.
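    For example, a .travis.yml build file might pin the JDK like this (a sketch only - the exact JDK label depends on what Travis CI offers):

    ```yaml
    language: java
    jdk:
      - oraclejdk9
    ```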


    Making sure you’ll be able to build your app in the future is not a glamorous task. However, it can make a huge difference down the line. By following the above rules, the chances you’ll be able to build apps after they have been hibernating for a long time will dramatically increase.

    Categories: Java Tags: maven, build, Docker, maintenance
  • Bypassing Javascript checks

    Police riot control training

    Nowadays, when a webapp offers a registration page, it usually duplicates the password field (or sometimes even the email field).

    By having you type the password twice, it tries to ensure that you didn’t make any mistake - and that you won’t have to reset the password the next time you try to log in. It makes sense if you actually type the password. Me, I use a password manager. That means I copy-paste the password I get from the password manager twice. So far, so good.

    Now, some webapps assume people type the password. Or worse, the people behind those apps don’t know about password managers. They enforce typing the password into the second password field by preventing the paste command in that field. That means password manager users like me have to type more than 20 random characters by hand. That’s not acceptable!

    Fortunately, there’s a neat solution if you’re willing to do some hacking. In any modern browser, locate the guilty password input field and:

    • Get its id, e.g. myId
    • Get the name of the function associated with its onpaste attribute, e.g. dontPaste
      Event listeners in Google Chrome
    • Run the following in the JavaScript console:
    document.getElementById('myId').removeEventListener('paste', dontPaste);

    In some cases, however, the function is anonymous, and the previous code cannot be executed. Fine. Then, run:

    document.getElementById('myId').onpaste = null;


    There are some conclusions to this rather short post.

    The first one is that it’s really cool to be a developer, because it allows you to avoid a lot of hassle.

    The second one is to never ever implement this kind of check. If users don’t remember their password, provide a way to reset it - you’ll need that feature anyway. Duplicating password fields makes it harder to use password managers; hence, it decreases security. In this day and age, that’s not only a waste of time, it’s a serious mistake.

    Finally, it’s your "moral" duty as a developer to push back against such stupid requirements.

    PS: The complete title of this post should have been, "Bypassing stupid Javascript check", but it would not have been great in Google search results…​

    Categories: Development Tags: javascript, hack
  • Lambdas and Clean Code

    Buddhist monk washing clothes in the river

    As software developers, we behave like children. When we see shiny new things, we just have to play with them. That’s normal, accepted, and in general, even beneficial to our job…​ up to a point.

    When Java started to provide annotations with version 5, there was a huge move toward using them. Anywhere. Everywhere. Even when it was not a good idea to. But it was new, hence it had to be good. Of course, when something is abused, a strong countermovement appears. So even when the usage of annotations makes sense, some developers might be strongly against it. There’s even a site about that (warning, trolling inside).

    Unfortunately, we didn’t collectively learn from overusing annotations. With a lot of companies having migrated to Java 8, one starts to notice a lot of code making use of lambdas like this one:

    List<Person> persons = ...;
    persons.stream().filter(p -> {
        if (p.getGender() == Gender.MALE) {
            return true;
        }
        LocalDate now = LocalDate.now();
        Duration age = Duration.between(p.getBirthDate(), now);
        Duration adult = Duration.of(18, ChronoUnit.YEARS);
        if (age.compareTo(adult) > 0) {
            return true;
        }
        return false;
    }).map(p -> p.getFirstName() + " " + p.getLastName())

    This is just a stupid sample, but it gives a good feel for the code I sometimes have to read. It’s generally longer and even more convoluted; or, to be politically correct, it has room for improvement - really, a lot of room.

    The first move would be to apply correct naming, as well as to move the logic to where it belongs:

    public class Person {
        // ...
        public boolean isMale() {
            return getGender() == Gender.MALE;
        }
        public boolean isAdult(LocalDate when) {
            Duration age = Duration.between(birthDate, when);
            Duration adult = Duration.of(18, ChronoUnit.YEARS);
            return age.compareTo(adult) > 0;
        }
    }

    This small refactoring already improves the readability of the lambda:

    persons.stream().filter(p -> {
        if (p.isMale()) {
            return true;
        }
        LocalDate now = LocalDate.now();
        if (p.isAdult(now)) {
            return true;
        }
        return false;
    }).map(p -> p.getFirstName() + " " + p.getLastName())

    But it shouldn’t stop there. There’s an interesting bias regarding lambdas: they supposedly have to be anonymous. Nearly all examples on the Web show anonymous lambdas. But nothing could be further from the truth!

    Let’s name our lambdas, and check the results:

    // Implementation details
    Predicate<Person> isMaleOrAdult = p -> {
        if (p.isMale()) {
            return true;
        }
        LocalDate now = LocalDate.now();
        if (p.isAdult(now)) {
            return true;
        }
        return false;
    };
    Function<Person, String> concatenateFirstAndLastName = p -> p.getFirstName() + " " + p.getLastName();
    // Core
    persons.stream().filter(isMaleOrAdult).map(concatenateFirstAndLastName)

    Nothing mind-blowing. Yet, notice that the stream itself (the last line) has become more readable, no longer hidden behind implementation details. Developers can still read those details, but only if necessary.
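    To make the idea concrete, here’s a self-contained sketch. The class and the data are made up for illustration, and the domain methods are simplified, but the structure - named lambdas as implementation details, a one-line pipeline as the core - is the same:

    ```java
    import java.util.Arrays;
    import java.util.List;
    import java.util.function.Function;
    import java.util.function.Predicate;
    import java.util.stream.Collectors;

    public class NamedLambdas {

        // Simplified stand-in for the Person class used above
        public static class Person {
            private final String firstName;
            private final String lastName;
            private final boolean male;
            private final int age;

            public Person(String firstName, String lastName, boolean male, int age) {
                this.firstName = firstName;
                this.lastName = lastName;
                this.male = male;
                this.age = age;
            }

            public boolean isMale() { return male; }
            public boolean isAdult() { return age >= 18; }
            public String getFirstName() { return firstName; }
            public String getLastName() { return lastName; }
        }

        // Implementation details: lambdas, named instead of inlined
        static final Predicate<Person> IS_MALE_OR_ADULT = p -> p.isMale() || p.isAdult();
        static final Function<Person, String> FULL_NAME =
                p -> p.getFirstName() + " " + p.getLastName();

        // Core: the pipeline now reads almost like a sentence
        public static List<String> maleOrAdultNames(List<Person> persons) {
            return persons.stream()
                    .filter(IS_MALE_OR_ADULT)
                    .map(FULL_NAME)
                    .collect(Collectors.toList());
        }

        public static void main(String... args) {
            List<Person> persons = Arrays.asList(
                    new Person("Jane", "Doe", false, 25),
                    new Person("John", "Smith", true, 12),
                    new Person("Judy", "Poe", false, 12));
            System.out.println(maleOrAdultNames(persons)); // prints [Jane Doe, John Smith]
        }
    }
    ```

    Notice that naming the lambdas costs nothing at runtime: a named Predicate or Function is exactly the same object as an anonymous one, only easier to read, reuse, and test.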


    Tools are tools. Lambdas are one of them in a Java developer’s tool belt. But concepts are forever.

    Categories: Java Tags: lambda, clean code, java 8
  • Synthetic

    Plastic straw tubes

    There are a bunch of languages running on the JVM, from Java, of course, to Clojure and JRuby. All of them have different syntaxes, but it’s awesome that they all compile to the same bytecode. The JVM unites them all. Of course, it’s biased toward Java, but even in Java, some magic happens in the bytecode.

    The most well-known trick comes from the following code:

    public class Foo {
        static class Bar {
            private Bar() {}
        }
        public static void main(String... args) {
            new Bar();
        }
    }

    Can you guess how many constructors the Bar class has?

    Two. Yes, you read that right. For the JVM, the Bar class declares 2 constructors. Run the following code if you don’t believe it:

    Class<Foo.Bar> clazz = Foo.Bar.class;
    Constructor<?>[] constructors = clazz.getDeclaredConstructors();
    Arrays.stream(constructors).forEach(constructor -> {
        System.out.println("Constructor: " + constructor);
    });

    The output is the following:

    Constructor: private Foo$Bar()
    Constructor: Foo$Bar(Foo$1)

    The reason is pretty well documented: the bytecode knows about access modifiers, but not about nested classes. In order for the Foo class to be able to create new Bar instances, the Java compiler generates an additional constructor with default package visibility.

    This can be confirmed with the javap tool.

    javap -v out/production/synthetic/Foo\$Bar.class

    This outputs the following:

        descriptor: (LFoo$1;)V
        flags: ACC_SYNTHETIC
          stack=1, locals=2, args_size=2
             0: aload_0
             1: invokespecial #1                  // Method "<init>":()V
             4: return
            line 2: 0
            Start  Length  Slot  Name   Signature
                0       5     0  this   LFoo$Bar;
                0       5     1    x0   LFoo$1;

    Notice the ACC_SYNTHETIC flag. Going to the JVM specifications yields the following information:

    The ACC_SYNTHETIC flag indicates that this method was generated by a compiler and does not appear in source code, unless it is one of the methods named in §4.7.8.
    — The class File Format

    Theoretically, it should be possible to call this generated constructor - notwithstanding the fact that it’s not possible to provide an instance of Foo$1, but let’s put that aside. Yet the IDE doesn’t seem to be able to discover this second constructor, the one taking an argument. I didn’t find any reference in the Java Language Specification, but it appears that synthetic classes and members cannot be accessed directly, only through reflection.
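    Reflection exposes the flag through Member.isSynthetic(). Here’s a small self-contained sketch; note that the exact constructor count depends on the compiler: on JDK 11 and later, nest-based access control (JEP 181) removes the need for the synthetic constructor, so older compilers print two constructors where newer ones print one:

    ```java
    import java.lang.reflect.Constructor;
    import java.util.Arrays;

    public class SyntheticCheck {

        static class Bar {
            private Bar() {}
        }

        // Counts the declared constructors the compiler flagged ACC_SYNTHETIC
        public static long syntheticConstructorCount(Class<?> clazz) {
            return Arrays.stream(clazz.getDeclaredConstructors())
                    .filter(Constructor::isSynthetic)
                    .count();
        }

        public static void main(String... args) {
            new Bar(); // outer-to-nested access: pre-Java 11 compilers add a synthetic constructor for this
            Arrays.stream(Bar.class.getDeclaredConstructors())
                    .forEach(c -> System.out.println(c + " synthetic: " + c.isSynthetic()));
        }
    }
    ```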

    At this point, one could wonder why all the fuss about the synthetic flag. It was introduced in Java to solve the issue of nested class access. But other JVM languages use it to implement their own specifications. For example, Kotlin uses synthetic to access the companion object:

    class Baz() {
        companion object {
            val BAZ = "baz"
        }
    }

    Executing javap on the .class file returns the following output (abridged for readability):

      public static final Baz$Companion Companion;
        descriptor: LBaz$Companion;
      public Baz();
      public static final java.lang.String access$getBAZ$cp();
        descriptor: ()Ljava/lang/String;
          stack=1, locals=0, args_size=0
             0: getstatic     #22                 // Field BAZ:Ljava/lang/String;
             3: areturn
            line 1: 0
          0: #15()

    Notice the access$getBAZ$cp() static method? That’s the name of the method that should be called from Java:

    public class FromJava {
        public static void main(String... args) {


    While knowledge of the synthetic flag is not required in the day-to-day work of a JVM developer, it can help in understanding some of the results returned by the Reflection API.

    Categories: Java Tags: JVM, bytecode, javap, Kotlin