• Using Kotlin on the SAP Hybris platform

    SAP Hybris logo

    Since I discovered Kotlin, I use it in all my personal projects. I’ve become quite fond of the language, and with good reason. However, there’s no integration with the Hybris platform yet - though there is for Groovy and Scala. This post aims at achieving just that: being able to use Kotlin on Hybris projects.

    Generate a new extension

    The first step on the journey is to create a new extension:

    ant extgen
    1. Choose the yempty package
    2. Choose an adequate name, e.g. "kotlinfun"
    3. Choose a root package, e.g. ch.frankel.blog.kotlin
    Don’t forget to add this new extension to the localextensions.xml file.
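
    For reference, registering the extension is a one-line addition to config/localextensions.xml - the surrounding structure already exists in your file:

    ```xml
    <hybrisconfig xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <extensions>
            <!-- ...existing extensions... -->
            <extension name="kotlinfun"/>
        </extensions>
    </hybrisconfig>
    ```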

    Add libraries

    Kotlin requires libraries, both to compile and to run. They include:

    • kotlin-ant.jar
    • kotlin-compiler.jar
    • kotlin-preloader.jar
    • kotlin-reflect.jar
    • kotlin-script-runtime.jar

    Though it would be nice to use external-dependencies.xml, some JARs are not available as Maven dependencies (e.g. kotlin-ant.jar). Thus, it’s required to manually download the relevant Kotlin compiler distribution and unpack it.

    Create folders

    To help with the build, let’s create dedicated folders for sources and tests, respectively kotlinsrc and kotlintestsrc. Configure the module or the project accordingly - depending on whether your IDE of choice is IntelliJ IDEA or Eclipse.

    On the Hybris platform, both sources and tests will compile in the same folder.

    Create simple Kotlin files in kotlinsrc and kotlintestsrc.
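
    Anything trivial will do as a smoke test - e.g. this hypothetical file under kotlinsrc (name and content are purely illustrative):

    ```kotlin
    // kotlinsrc/ch/frankel/blog/kotlin/Hello.kt
    package ch.frankel.blog.kotlin

    // A trivial top-level function, just to verify the build compiles Kotlin sources
    fun hello() = "Hello from Kotlin on Hybris"
    ```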

    Configure Kotlin in IDE

    While at it, configure Kotlin for the IDE. IntelliJ IDEA will pop up an alert box proposing to do so as soon as a Kotlin file is created anyway. I have no clue about Eclipse…

    The fun part

    Now that compiling works inside the IDE, it should also be part of the standard command-line build. Fortunately, the Hybris build pipeline offers hooks; those are configured inside the buildcallbacks.xml file.

    A naive first draft would look like:

    <project name="kotlinfun_buildcallbacks">
        <property name="kotlin.lib.dir" location="${ext.kotlinfun.path}/lib"/>
        <taskdef resource="org/jetbrains/kotlin/ant/antlib.xml">
            <classpath>
                <pathelement location="${kotlin.lib.dir}/kotlin-ant.jar"/>
                <pathelement location="${kotlin.lib.dir}/kotlin-compiler.jar"/>
                <pathelement location="${kotlin.lib.dir}/kotlin-preloader.jar"/>
                <pathelement location="${kotlin.lib.dir}/kotlin-reflect.jar"/>
                <pathelement location="${kotlin.lib.dir}/kotlin-script-runtime.jar"/>
            </classpath>
        </taskdef>
        <macrodef name="kotlin_compile">
            <attribute name="srcdir"/>
            <attribute name="destdir"/>
            <attribute name="extname"/>
            <sequential>
                <echo message="compile kotlin sources for @{extname} using srcdir: @{srcdir}"/>
                <mkdir dir="@{destdir}"/>
                <kotlinc src="@{srcdir}" output="@{destdir}">
                    <classpath>
                        <pathelement location="${kotlin.lib.dir}/kotlin-stdlib.jar"/>
                        <fileset dir="${ext.@{extname}.path}" erroronmissingdir="false">
                            <include name="${ext.@{extname}.additional.src.dir}/**"/>
                            <include name="${ext.@{extname}.additional.testsrc.dir}/**"/>
                        </fileset>
                        <pathelement path="${build.classpath}"/>
                        <pathelement path="${platformhome}/bootstrap/bin/models.jar"/>
                        <fileset dir="${bundled.tomcat.home}">
                            <include name="lib/jsp-api.jar"/>
                            <include name="lib/servlet-api.jar"/>
                            <include name="lib/el-api.jar"/>
                            <include name="lib/wrapper*.jar"/>
                        </fileset>
                        <pathelement path="${ext.@{extname}.classpath}"/>
                    </classpath>
                </kotlinc>
            </sequential>
        </macrodef>
        <macrodef name="kotlinfun_compile_core">
            <attribute name="extname"/>
            <sequential>
                <if>
                    <isset property="ext.@{extname}.coremodule.available"/>
                    <then>
                        <if>
                            <istrue value="${ext.@{extname}.extension.coremodule.sourceavailable}"/>
                            <then>
                                <kotlin_compile srcdir="${ext.@{extname}.path}/kotlinsrc"
                                                destdir="${ext.@{extname}.path}/classes"
                                                extname="@{extname}"/>
                                <kotlin_compile srcdir="${ext.@{extname}.path}/kotlintestsrc"
                                                destdir="${ext.@{extname}.path}/classes"
                                                extname="@{extname}"/>
                            </then>
                        </if>
                    </then>
                </if>
            </sequential>
        </macrodef>
        <macrodef name="kotlinfun_after_compile_core">
            <sequential>
                <kotlinfun_compile_core extname="kotlinfun"/>
            </sequential>
        </macrodef>
    </project>

    The above is configured for a simple core module, not a web one. It can be added afterwards though.

    Unfortunately, this doesn’t work. Launching the build with ant build will produce the following output:

    /hybris/bin/platform/resources/ant/compiling.xml:530: java.lang.NoSuchMethodError:

    Looking more closely at kotlin-compiler.jar, one notices it embeds Guava, which also happens to be on the Hybris classpath: hence a version conflict. Fortunately, there’s a smarter version of the compiler available on repo1 - kotlin-compiler-embeddable.jar - that shades Guava, i.e. embeds it under a different package name, thus resolving the conflict. Just update the callback file with the new JAR name, and be done.

    Now, the build produces a different output:

    /hybris/bin/custom/kotlinfun/buildcallbacks.xml:34: java.lang.IllegalStateException:
     File is not found in the directory of Kotlin Ant task: kotlin-compiler.jar

    For reasons unknown, the Ant task checks the existence of dependent libraries using their file names! Check the source code for proof. There’s an evil workaround: rename the JAR to the expected name, and update the build file back to the initial version…

    At this point, it’s possible to build the platform with Kotlin files compiled.


    Though it works, there are a couple of possible improvements:

    • Change the lib folder to a dedicated folder, e.g. compile, so as not to ship it in the distributed archive
    • Add the configuration to also compile a web module
    • Refactor the macros into a common extension, so that different Kotlin extensions can use it.

    In the end, nothing prevents you from using Kotlin on SAP Hybris right now!

    Categories: Development Tags: Kotlin, hybris
  • Make your life easier with Kotlin stdlib

    A bunch of books on shelves

    IMHO, Kotlin is not about big killer features - although extension methods and properties could certainly be categorized as such - but about a bunch of small improvements with deep impact. Most of them are not built into the language, but are functions offered as part of the Kotlin standard library. In this post, I’d like to go through a limited set of them, and describe how they can be used to improve the code.


    It’s quite common to have //TODO comments in a codebase. For most of us developers, it might even be a reflex: when you’re in the flow, don’t stop because of a lack of specification, but write down a reminder to get back to it later - whatever later means. Even IDEs happily generate code with such comments.

    fun computeCustomerNumber(customer: Customer): String {
        // TODO Not specified as of November 27th
        return ""
    }

    Yet, no problem occurs when the actual code is run. Nothing happens - nothing to really remind us that this piece should be implemented. Of course, some code analysis tool might uncover it, but it might already be too late. And it requires the tool to actually be run.

    Kotlin provides the TODO() function, which throws a NotImplementedError when it’s called. This way, even running a simple unit test will forcibly point you to the fact that there’s something to be done there.

    fun computeCustomerNumber(customer: Customer): String {
        TODO("Not specified as of November 27th")
    }
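
    A quick sketch of the difference in behaviour - unlike a comment, TODO() throws as soon as the code path is exercised (the function below is illustrative):

    ```kotlin
    // Illustrative stand-in for a not-yet-specified computation
    fun computeSomething(): String = TODO("Not specified as of November 27th")

    fun main() {
        try {
            computeSomething()
        } catch (e: NotImplementedError) {
            // The exception makes the missing implementation impossible to overlook
            println("Caught: ${e.message}")
        }
    }
    ```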


    Depending on an API’s specifics, constructing some objects might be rather tedious and involve a lot of details. To hide those details, the current consensus is generally to extract a dedicated method. Here’s a snippet that creates a combo-box component for the Vaadin web framework:

    fun createCountriesCombo(): ComboBox<String> {
        val countries = ComboBox<String>("Countries")
        countries.setItems("Switzerland", "France", "Germany", "Austria")
        countries.isEmptySelectionAllowed = false
        countries.placeholder = "Choose your country"
        countries.addValueChangeListener {
            val country = countries.value
            // react to the selection here
        }
        return countries
    }

    Even then, depending on the number of properties to set, it’s easy to get lost in the details. apply() is a simple function, defined as:

    fun <T> T.apply(block: T.() -> Unit): T { block(); return this }

    It means this function can be called on any type T, and its only argument is a lambda with receiver that returns nothing. As for any lambda with receiver, this inside the lambda refers to the object the function is called on. This lets us refactor the above snippet into the following:

    fun createCountriesCombo(): ComboBox<String> {
        val countries = ComboBox<String>("Country").apply {
            setItems("Switzerland", "France", "Germany", "Austria")
            isEmptySelectionAllowed = false
            placeholder = "Choose your country"
            addValueChangeListener {
                // react to the selection here
            }
        }
        return countries
    }

    Even better, the snippet can now be easily refactored to take advantage of an expression body:

    fun createCountriesCombo() = ComboBox<String>("Country").apply {
        setItems("Switzerland", "France", "Germany", "Austria")
        isEmptySelectionAllowed = false
        placeholder = "Choose your country"
        addValueChangeListener {
            // react to the selection here
        }
    }

    Icing on the cake, with any IDE worth its salt, folding may be used to display the overview, and unfolding to reveal the details.

    IDE-unfolded apply function
    IDE-folded apply function


    Before Java 7, the closing of a connection had to be explicitly done in a finally block:

    Connection conn = getConnection();
    try {
        // Do stuff with the connection
    } finally {
        if (conn != null) {

    Java 7 added the try-with-resources syntax. For example, the previous snippet can be refactored into this one:

    try (Connection conn = getConnection()) {
        // Do stuff with the connection
    }

    The try-with-resources syntax greatly simplifies the code. However, it’s part of the language syntax and thus carries a lot of implicitness:

    • The type in the resource statement must implement AutoCloseable.
    • There can be multiple resources opened in the same statement. In that case, they are closed in the reversed order they were opened.
    • If an exception is thrown both in the try block and during the closing of the resource(s), the latter is suppressed and attached to the main exception.

    The Kotlin counterpart is handled through the use function, whose signature is:

    fun <T : Closeable, R> T.use(block: (T) -> R): R

    No black magic involved. It’s a simple function, and its source code is available online.
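
    A minimal sketch of use in action - the reader is closed when the block exits, normally or exceptionally (the file names are illustrative):

    ```kotlin
    import java.io.File

    // Reads the first line of a file; use() closes the reader in all cases
    fun firstLineOf(file: File): String? =
        file.bufferedReader().use { reader ->
            reader.readLine()
        }

    fun main() {
        val file = File.createTempFile("use-demo", ".txt").apply { writeText("first\nsecond") }
        println(firstLineOf(file))
    }
    ```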


    Those are only a sample of what’s available. In general, the syntax of a language can be learned in 21 days (or probably even less). However, it takes a lot more time to master the API.

    Kotlin is less about the syntax and more about the API.

    Categories: Development Tags: Kotlin, stdlib
  • Making the most out of conferences

    A conference room full of people

    This week was my last conference of 2017, Voxxed Day Thessaloniki organized by my good friend Patroklos. I started going to conferences some years ago.

    Going to a conference is an investment, whether in time or in money - or in both. You should make sure to get the most out of that investment. In this post, I’d like to write down what I do to achieve that.

    Plan ahead

    The first thing to do is to get the list of available talks ahead of time. Some conferences require attendees to register for talks beforehand. That’s especially true for workshops, which have a limited number of seats. You’d be very sorry to miss a talk you wanted to attend only because you didn’t register.

    Also, some conferences have talks in different buildings. Even at conferences, logistics play a role.

    Even if that’s not the case, planning ahead will get you a clear picture, in case you’re interested in more than one talk at the same time.

    Workshops over talks

    Some conferences offer workshops in addition to talks. IMHO, such workshops have way more value than regular talks. While your mind can wander away during a talk, workshops get you involved by design, because you need to do something. In the end, you’ll be more focused, and what you learned will stick longer.

    Avoid long talks

    When I was a student, I listened to 2-hour lessons. Even then, it was hard to stay focused for such a long time. Some conferences propose 3-hour talks. I don’t think anyone can stay focused during the whole talk.

    While it might be possible to attend one such talk over the conference, more than one is not worth it.

    Take notes

    A good way to keep your focus is to take notes. An even better way is to share your notes, whatever the medium. It can be your personal blog, your company blog, a company meeting, anything. By setting an objective, you’ll pay more attention during the talk.

    Put your phone away

    Let me repeat it: put. your. phone. away. Or shut it off. Or leave it in your bag. It’s too easy to be distracted by notifications, especially if you have a smart watch on top of it. Hey, an email! Hey, a Twitter follower! Hey, a connection request on LinkedIn. I’m guilty of this myself.

    During talks, phones are only useful to:

    • take pictures of slides
    • tweet a quote from the speaker

      In the first case, speakers (or conferences) nearly always make their slides available afterwards. In the second, write the quote down and tweet later.

    Don’t be shy

    The talk is finished. It’s Q&A time. A lot of questions are in your head. But you dare not ask them, for whatever reason. Or you waved your hand, but were not chosen and the time is up. Or there was no Q&A.

    In all cases, ask your question. If time is up, go to the speaker after the session. If he leaves the room, chase him. Most speakers will be very happy to engage in a discussion on the subject they talked about. And then, you’ll keep what you learned longer because of that personal engagement.

    Walk away

    Yes, people don’t want to talk about that, but sometimes, the talk is not great. Perhaps the content is not what you expected. Perhaps the speaker is not that great. Or whatever.

    In that case, you shouldn’t be afraid to just walk away, and use that time in a more productive way. Or just take a break - see below.

    Voting with your feet is important. But it’s even more important to give a detailed feedback. Conference organizers and speakers want to improve, so this information is core to them.

    Try something new

    After the first or second talk about <insert hype subject here> (e.g. Docker, microservices, React, etc.), what you learn is pretty marginal. If you’re an expert of <insert subject here>, why would you go to a talk about it? Granted, you could want to deepen your understanding, but in general, conference talks don’t dive that deep because they need to attract many people.

    On the contrary, attending a talk on a subject which is completely foreign to you could open new options. For example, I’m a Vaadin fan and still went to an AngularJS talk some years ago. It was great, and I never regretted it. Now, I have one more option in my toolbelt when designing an architecture.

    Speaker over content

    Let’s face it. Some speakers are better than others. They can make a good and entertaining talk out of poor content. When the content is good, the talk is just awesome. You remember it. You want to tell your colleagues about it. You’re even likely to pester them until they watch the recording. It got you inspired!

    While you should give a chance to new talents, it’s sometimes a good idea not to think too much about the content and just go for the show.

    Meet new people

    Yes, conferences are about improving skills and discovering new things. But those are by far not the only possibilities.

    Networking is as important as technical content, if not more. At conferences, you could talk to someone who could offer you your dream job, or introduce you to a new idea. Or you could mingle with a group, and have a nice time debating an idea - or simply chit-chatting. Work is not everything; relaxing is also required at some point. Network if possible.

    More importantly, if you already have a network, try to enlarge your horizon. Meet new people. Conferences are a fantastic opportunity to meet experts in a field completely unrelated to yours. Listen to them talk about it in a passionate way.

    Don’t overdo it

    At my first few conferences, I wanted to squeeze every last piece of information out of them. I went to every talk I could, took notes, wrote blog posts. And ended up quite tired.

    A conference is not a sprint but a marathon. Stuffing a lot of knowledge quickly into your head doesn’t help retain it.

    Pauses are important. Network (see above). Go to booths, play games, sit and think about what you learned, rest and sleep if needed - whatever works for you. That’s also useful time, albeit of a different kind.

    Categories: Miscellaneous Tags: Conference
  • On developer shortage

    A laptop on a table with nobody sitting in front

    Who is not aware that there’s a developer shortage? I mean, if you’re in the software industry, you hear about it every now and then - especially if you’re on the recruiting side. To handle it, companies come up with creative solutions - some sponsor education initiatives, or even create their own. However, I don’t think that’s the right answer.

    Of course, there’s no doubt that there’s a shortage of good developers. Yet, that’s true in every industry: there’s a lack of good plumbers, of good carpenters, of good car mechanics, etc. So, why is there such pathos around it in IT? I’ve come up with several reasons, which boil down to a single point. Let me develop.

    Unrelated technologies/skills

    Have you ever written a job description that looks like a Christmas wish list? I mean, a lot of technologies, mainly unrelated. Like front-end and back-end, or sysadmin skills with any kind of development skills.

    And since I’m at it, I need to tell you there’s no such thing as a full-stack developer. At least, not in the sense you think. I have designed systems from front to back, and I’ve fixed bugs on the front-end, but I’m not on par with a regular experienced front-end developer.

    Yet, every day, I learn new things - that I try to share here. Do you believe I can learn them in my free time, but not as a new hire?


    University degrees

    Does your job description require a university degree? A degree in computer science? A bachelor’s or a master’s?

    If that’s the case, it means you believe those university degrees somehow prepare people to do the job. Or worse, you don’t really believe it, but at least, it will protect you if the hire is not the right fit in the end.

    Do they teach object-oriented programming in universities? Or mutation testing? Or testing at all? How would people who are full-time teachers know about modern build pipelines? About Docker? They do know a lot about sorting algorithms, though - which you probably don’t need to implement yourself in the industry.


    Certifications

    Do you require certifications for the job? Of course, it might make sense to require a Java certification for a Java job. But do you know how people get certified? Do you know what it really entails?

    Well, certifications are based on multiple-choice questions, mostly about knowing the API by heart. In real life, I have an IDE - and Google if I forget.


    Age

    In most developed countries, selecting by age is not allowed. However, there are subtle ways to filter by age. Asking for a number of years of experience is one of them: 5 years of experience probably means ~28 years old, counting a master’s degree.

    Thing is, I’ve worked in the same team as an Android developer who had 5 years of experience. Only he was 21, as he had started developing on Android at age 16. Would you hire him, since he fulfills the requirements, or is he too young for your taste?

    Too young, too old, not the right gender, not the right ethnicity… There are a lot of biases that might sit between you and potential capable hires.

    Work onsite

    Do you require your developers to work onsite? 100% of the time? Why?

    I’ve worked with remote colleagues. I work now with remote colleagues. Some companies are 100% remote. Are they making bad software? If you believe people need to meet each other once in a while, does that mean they need to commute to work every day?

    For sure, there are differences between co-located and remote work. But there are a lot of tips and tricks available to make it work.

    Creating a team is about building trust. Trust doesn’t appear automatically by co-locating people.

    Dress code

    Is there a specific dress code in your company? I’m referring to a dress code that is not compatible with the average developer’s, e.g. suit and tie.

    Do you think developers work better when they dress a certain way? Or is your reason that developers are part of the image of the company, and customers might see them (hint: they never do)? Or do you think that in order to cooperate with business people, they need to dress the same way? Then, think about a car mechanic who would handle your car dressed in a suit and tie.

    Hiring process itself

    What’s your hiring process? How many steps completely unrelated to the job do I need to pass in order to get to the real interviews? Do I need to answer questions such as "Where do you see yourself in 5 years?" or "What are your 3 best qualities?"

    I’m not saying this is a rule, but I’ve never seen an HR team that knew about the specifics of the developer job. Having a hiring process that doesn’t take them into account is plain crazy, especially since HR is but the first step.

    HR should be a help, not a hindrance.


    Compensation

    This one is pretty obvious, but do you pay developers at market level? Do you include the salary range in your job description?

    When browsing job ads, it’s an extra effort to ask for the compensation. Most developers won’t take the time and will just discard your job description. And if they do ask and the offer doesn’t match, it’s time lost for both parties.

    There are only 2 possible reasons you don’t include the compensation policy. Either you’re afraid of telling current employees, and that means something is already broken. Or you’re afraid of telling competitors, and let’s be blunt, they probably already know - because people talk about their compensation.


    Consideration

    Last but not least, how do you consider developers inside your company? Are they a true part of the value chain? Are they merely held accountable, or are they also decision makers? Do you threaten to outsource their jobs to India every day to keep them "under pressure"?

    Developers are in touch with one another - through the same university, the same technology user group, whatever… They know about your company and its culture, about its pros and cons.


    Does any of the above points ring a bell? There’s no developer shortage; there are only barriers you set yourself between your company and a potential workforce. Break the barriers.

    Categories: Miscellaneous Tags: hiring
  • Is Object-Oriented Programming compatible with an enterprise context?

    A pair of glasses on a background of screens displaying code

    This week, during a workshop related to a Java course I give at a higher education school, I noticed the code produced by the students was mostly - ok, entirely - procedural. In fact, though the Java language touts itself as an Object-Oriented language, it’s not uncommon to find such code written by professional developers in enterprises. For example, the JavaBean specification is in direct contradiction with one of OOP’s main principles, encapsulation.

    Another example is the widespread controller-service-DAO architecture, found equally in Java EE and Spring applications. In that context, entities are in general anemic, while all business logic is located in the service layer. While this is not bad per se, this design separates state from behaviour, and sits at the opposite of true OOP.

    Both Java EE and the Spring framework enforce this layered design. For example, in Spring, there’s one annotation for each such layer: @Controller, @Service and @Repository. In the Java EE world, only @EJB instances - the service layer - can be made transactional.

    This post aims to try to reconcile both the OOP paradigm and the layered architecture. I’ll be using the Spring framework to highlight my point because I’m more familiar with it, but I believe the same approach could be used for pure Java EE apps.

    A simple use-case

    Let’s have a simple use-case: from an IBAN number, find the associated account with the relevant balance. Within a standard design, this could look like that:

    class ClassicAccountController(private val service: AccountService) {
        fun getAccount(@PathVariable("iban") iban: String) = service.findAccount(iban)
    }

    class AccountService(private val repository: ClassicAccountRepository) {
        fun findAccount(iban: String) = repository.findOne(iban)
    }

    interface ClassicAccountRepository : CrudRepository<ClassicAccount, String>

    @Entity
    @Table(name = "ACCOUNT")
    class ClassicAccount(@Id var iban: String = "", var balance: BigDecimal = BigDecimal.ZERO)

    There are a couple of issues there:

    1. The JPA specification mandates a no-arg constructor. Hence, it’s possible to create ClassicAccount instances with an empty IBAN.
    2. There’s no validation of the IBAN. A full round-trip to the database is required to check whether an IBAN is valid.
    3. Yes, there’s no currency. It’s a simple example, remember?

    Being compliant

    In order to comply with the no-arg constructor JPA constraint - and because we use Kotlin - it’s possible to generate a synthetic constructor with the no-arg compiler plugin. That means the constructor is accessible through reflection, but not by calling it directly.

    If you use Java, tough luck, I don’t know about any option to fix that.

    Adding validation

    In a layered architecture, the service layer is the obvious place to put the business logic, including validation:

    class AccountService(private val repository: ClassicAccountRepository) {
        fun findAccount(iban: String): Account? {
            checkIban(iban)
            return repository.findOne(iban)
        }
        fun checkIban(iban: String) {
            if (iban.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
        }
    }

    In order to be more OOP-compliant, we must decide whether invalid IBAN numbers should be allowed or not. It’s easier to forbid them altogether.

    @Entity
    @Table(name = "ACCOUNT")
    class OopAccount(@Id var iban: String, var balance: BigDecimal = BigDecimal.ZERO) {
        init {
            if (iban.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
        }
    }

    However, this means we must first create the OopAccount instance to validate the IBAN - with a balance of 0, even if the actual balance is not 0. Again, as with the empty IBAN, the code does not match the model. Even worse, to use the repository we must access the OopAccount inner state:


    A more OOP-friendly design

    Improving the state of the code requires a major rework of the class model, separating the IBAN from the account, so that the former can be validated and can access the latter. The Iban class serves both as the entry point and as the PK of the account.

    @Entity
    @Table(name = "ACCOUNT")
    class OopAccount(@EmbeddedId var iban: Iban, var balance: BigDecimal)

    @Embeddable
    class Iban(@Column(name = "iban") val number: String,
               @Transient private val repository: OopAccountRepository) : Serializable {
        init {
            if (number.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
        }
        val account
            get() = repository.findOne(this)
    }
    Notice the returned JSON structure will be different from the one returned above. If that’s an issue, it’s quite easy to customize Jackson to obtain the desired result.

    With this new design, the controller requires a bit of change:

    class OopAccountController(private val repository: OopAccountRepository) {
        fun getAccount(@PathVariable("iban") number: String): OopAccount {
            val iban = Iban(number, repository)
            return iban.account
        }
    }

    The only disadvantage of this approach is that the repository needs to be injected into the controller, then be explicitly passed to the entity’s constructor.

    The final touch

    It would be great if the repository could automatically be injected into the entity when it’s created. Well, Spring makes it possible through Aspect-Oriented Programming - though this is not a very well-known feature. It requires the following steps:

    Add AOP capabilities to the application

    Adding the AOP dependency is quite straightforward: it requires just adding the relevant starter dependency to the POM.
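
    Assuming Spring Boot, the starter in question would be spring-boot-starter-aop:

    ```xml
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
    ```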


    Then, the application must be configured to make use of it:

    @SpringBootApplication
    @EnableSpringConfigured
    class OopspringApplication
    Update the entity
    1. The entity must first be set as a target for injection. Dependency injection will be done through autowiring.
    2. Then, the repository must be moved from a constructor argument to a field.
    3. Finally, the database fetching logic can be moved into the entity:
      @Configurable(autowire = Autowire.BY_TYPE)
      class Iban(@Column(name = "iban") val number: String) : Serializable {
          private lateinit var repository: OopAccountRepository
          init {
              if (number.isBlank()) throw IllegalArgumentException("IBAN cannot be blank")
          }
          val account
              get() = repository.findOne(this)
      }
    Remember that field-injection is evil.
    Aspect weaving

    There are two ways to weave the aspect into the bytecode: either compile-time weaving or load-time weaving. I chose the latter, as it’s much easier to configure. It’s achieved through a standard Java agent.

    1. First, it needs to be added as a runtime dependency in the POM:
    2. Then, the Spring Boot plugin must be configured with the agent:
    3. Finally, the application must be configured accordingly:
      class OopspringApplication
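
    A sketch of the first two steps, assuming the AspectJ weaver as the agent (coordinates, version and the spring-boot-maven-plugin agent parameter as of Boot 1.x are illustrative; adapt them to your build):

    ```xml
    <!-- 1. runtime dependency -->
    <dependency>
        <groupId>org.aspectj</groupId>
        <artifactId>aspectjweaver</artifactId>
        <scope>runtime</scope>
    </dependency>

    <!-- 2. Spring Boot plugin configured with the agent -->
    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <configuration>
            <agent>${settings.localRepository}/org/aspectj/aspectjweaver/1.8.13/aspectjweaver-1.8.13.jar</agent>
        </configuration>
    </plugin>
    ```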

    And then?

    Of course, this sample leaves out an important piece of the design: how to update the balance of an account? The layered approach would have a setter for that, but that’s not OOP. Thinking about it, the balance of an account changes because there’s a transfer from another account. This could be modeled as:

    fun OopAccount.transfer(source: OopAccount, amount: BigDecimal) { ... }

    Experienced developers should see some transaction management requirements sneaking in. I leave the implementation to motivated readers. The next step would be to cache the values, because accessing the database for each read and write would kill performance.


    There are a couple of points I’d like to make.

    First, the answer to the title question is a resounding 'yes'. The result is true OOP code while still using a so-called enterprise-grade framework - namely, Spring.

    However, migrating to this OOP-compatible design comes with some overhead. Not only did we rely on field injection, we had to bring in AOP with load-time weaving. The first is a hindrance during unit testing; the second is a technology you definitely don’t want in every team, as it makes apps more complex. And that’s only for a trivial example.

    Finally, this approach has a huge drawback: most developers are not familiar with it. Whatever its advantages, they first must be "conditioned" to have this mindset. And that might be a reason to continue using the traditional layered architecture.

    I’ve searched Google for scientific studies proving OOP is better in terms of readability and maintainability: I found none. I would be very grateful for pointers.
    Categories: JavaEE Tags: Object-Oriented ProgrammingOOPSpring
  • The multiple usages of git rebase --onto

    Git Orange Logo

    I’m no Git expert, and I regularly learn things in Git that change my view of the tool. When I was shown git rebase -i, I stopped over-thinking about my commits. When I discovered git reflog, I became more confident in rebasing. But I think one of the most important commands I was taught is git rebase --onto.

    IMHO, the documentation has room for improvement regarding the result of this option. Taking the image of a tree, it basically uproots a part of the tree to replant it somewhere else.

    Let’s have an example with the following tree:

    o---o---o---o master
          \---o---o branch1
                    \---o branch2

    Suppose we want to transplant branch2 from branch1 to master:

    o---o---o---o master
         \       \
          \       \---o' branch2
            \---o---o branch1

    This is a great use-case! On branch branch2, the command would be git rebase --onto master branch1. That approximately translates to: move everything on branch2 after branch1 to the tip of master. I try to remember the syntax by thinking of the first argument as the new base commit and the second as the old one.
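    The transplant above can be sketched in a throwaway repository (the commit messages m1…/b1… are arbitrary labels for illustration, and the initial branch name is read from Git rather than assumed to be master):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"
trunk=$(git symbolic-ref --short HEAD)   # 'master' in the diagrams above

# helper: create a file and commit it
commit() { echo "$1" > "$1"; git add "$1"; git commit -qm "$1"; }

commit m1; commit m2                     # o---o          (trunk)
git checkout -qb branch1
commit b1; commit b2                     #   \---o---o    branch1
git checkout -qb branch2
commit b3                                #         \---o  branch2
git checkout -q "$trunk"
commit m3                                # the trunk moves ahead in the meantime

# uproot branch2 from branch1 and replant it on the tip of the trunk
git rebase --onto "$trunk" branch1 branch2

git log --oneline branch2   # b3' now sits directly on top of m3
```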

    So, what use-cases are there to move parts of the tree?

    Delete commits

    While my first reflex when I want to delete commits is to reach for git rebase -i, it’s not always the most convenient option. It requires the following steps:

    1. Locating the first commit to be removed
    2. Actually running the git rebase -i command
    3. In the editor, deleting the line of every commit that needs to be removed
    4. Quitting the editor

    If the commits to be removed are adjacent, it’s easier to use rebase --onto, because you only need the new and the old commits and can do the "deletion" in one line.

    Here’s an example:

    o---A---X---Y---Z master

    To remove the last 3 commits X, Y and Z, you just need:

    git rebase --onto A Z
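    As a sketch in a throwaway repository (tags A and Z stand in for the commit hashes you would normally copy from git log):

```shell
set -e
cd "$(mktemp -d)"
git init -q
git config user.email "you@example.com"
git config user.name "You"

# build the history o---A---X---Y---Z
for c in o A X Y Z; do echo "$c" > "$c"; git add "$c"; git commit -qm "$c"; done
git tag Z          # tip of the branch
git tag A HEAD~3   # last commit to keep

# replay everything after Z (i.e. nothing) onto A: X, Y and Z disappear
git rebase --onto A Z

git log --oneline   # only o and A remain
```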

    Long-lived remote branches

    While it’s in general a bad idea to have long-lived branches, it’s sometimes required.

    Suppose you need to migrate part of your application to a new framework, library, or whatever. With small applications, a small task-force can be dedicated to that. The main development team leaves for the week-end with instructions to commit everything before going home on Friday. When they come back on Monday, everything has been migrated.

    Sadly, life is not always so easy and applications can be too big for such an ideal scenario. In that case, the task force works on a dedicated migration branch for more than a week-end, in parallel with the main team. But they need to keep up-to-date with the main branch, and still keep their work.

    Hence, every now and then, they rebase the migration branch on top of master:

    git rebase --onto master old-root-of-migration

    This is different from merging, because you keep the history linear.

    Local branches

    Sometimes, I want to keep my changes locally, for a number of reasons. For example, I might tinker with additional (or harsher) rules for the code quality tool. In that case, I want to take time to evaluate whether this is relevant for the whole team or not.

    As above, this is achieved by regularly rebasing my local tinker branch onto master.

    This lets me keep my history linear, and amend the commits relevant to the tinkering if need be. Merging would prevent me from doing that.


    git rebase --onto can serve different use-cases. The most important ones relate to the handling of long-lived branches, whether local or remote.

    As always, it’s just one more tool in your toolbelt, so that not everything looks like a nail.

    As with every command that changes the Git history, git rebase --onto should only change local commits. You’ve been warned!
    Categories: Development Tags: Git
  • Alternative navigator in Vaadin

    Vaadin Logo

    In Vaadin, to change the components displayed on the screen, there are a few options. The most straightforward way is to use the setContent() method on the UI. The most widespread way is to use the navigator.

    Views managed by the navigator automatically get a distinct URI, which can be used to be able to bookmark the views and their states and to go back and forward in the browser history.
    — Vaadin documentation

    This is the main asset for apps managing catalogs, such as e-commerce shops, or item management apps. Every item gets assigned a specific URL, through fragment identifiers.

    However, the view-per-URL trick is also a double-edged sword. It lets users bypass the server-side navigation and directly type the URL of the view they want to display in the browser. Also, the View interface requires implementing the enter() method. Yet, this method is generally empty, and thus is just boilerplate.

    Let’s see how we can overcome those issues.

    A solution to the empty method

    With Java 8, it’s possible for an interface to define a default implementation. Vaadin takes advantage of that by providing a default empty method implementation:

    public interface View extends Serializable {
        public default void enter(ViewChangeEvent event) {
        }
    }
    Even better, all methods of the View interface are default. That lets you implement the interface without writing any boiler-plate code and rely on default behaviour.

    A solution to the view per URI

    The default implementation

    The navigator API consists of the following classes:

    Navigator API class diagram

    As can be seen, the handling of the view by the URI occurs in the UriFragmentManager class. This implementation relies on the fragment identifier in the URL. But this is only the default implementation: the topmost abstraction is NavigationStateManager, and it puts no constraint on where to store the state.

    An alternative

    To prevent users from jumping to a view by typing a specific URL, one should store the state in a place inaccessible to them. For example, it’s feasible to store the state in cookies. A simple implementation could look like this:

    private const val STATE_KEY = "state"

    class CookieManager : NavigationStateManager {

        private var navigator: Navigator? = null

        override fun setNavigator(navigator: Navigator) {
            this.navigator = navigator
        }

        override fun getState(): String {
            val cookies = VaadinService.getCurrentRequest().cookies?.asList()
            val value = cookies?.find { it.name == STATE_KEY }?.value
            return when (value) {
                null -> ""
                else -> value
            }
        }

        override fun setState(state: String) {
            val cookie = Cookie(STATE_KEY, state)
            VaadinService.getCurrentResponse().addCookie(cookie)
        }
    }

    Creating a navigator using the previous state manager is very straightforward:

    class NavigatorUI : UI(), ViewDisplay {

        override fun init(vaadinRequest: VaadinRequest) {
            navigator = Navigator(this, CookieManager(), this)
        }

        override fun showView(view: View) {
            content = view as Component
        }
    }

    At this point, it’s possible to place components which change the view on the UI and spy upon the request and response. Here are the HTTP request and response of an AJAX call made by Vaadin to change the view:

    POST /UIDL/?v-uiId=0 HTTP/1.1
    Connection: keep-alive
    Content-Length: 297
    User-Agent: ...
    Content-Type: application/json; charset=UTF-8
    Accept: */*
    DNT: 1
    Accept-Encoding: gzip, deflate, br
    Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4
    Cookie: state=Another; JSESSIONID=B584B759D78A0CDBA43496EF4AEB3F25

    HTTP/1.1 200
    Set-Cookie: state=Third
    Cache-Control: no-cache
    Content-Type: application/json;charset=UTF-8
    Content-Length: 850
    Date: Fri, 27 Oct 2017 09:19:42 GMT


    Before Vaadin 8, I personally didn’t use the Navigator API because of the requirement to implement the enter() method on the view. Java 8 provides default methods, and Vaadin 8 provides the implementation. There’s now no reason not to use it.

    If the default fragment identifier way of navigating doesn’t suit your own use-case, it’s also quite easy to rely on another way to store the current view.

    More generally, in its latest versions, Vaadin makes it easier to plug in your own implementation if the default behaviour doesn’t suit you.

    Categories: JavaEE Tags: VaadinNavigator
  • Migrating a Spring Boot application to Java 9 - Modules

    Java 9 Logo

    Last week, I tried to make a Spring Boot app - the famous Pet Clinic - Java 9 compatible. It was not easy. I had to let go of a lot of features along the way. And all in all, the only benefit I got was the improved String memory management.

    This week, I want to continue the migration by fully embracing the Java 9 module system.

    Configuring module meta-data

    Module information in Java 9 is implemented through a module-info.java file. The first step is to create such a file at the root of the source directory, with the module name:

    module org.springframework.samples.petclinic {
    }

    The rest of the journey can be heaven or hell. I’m fortunate enough to benefit from an IntelliJ IDEA license. The IDE tells exactly which class it cannot read, and a wizard lets you add the relevant module to the module file. In the end, it looks like this:

    module org.springframework.samples.petclinic {
        requires java.xml.bind;
        requires javax.transaction.api;
        requires validation.api;
        requires hibernate.jpa;
        requires hibernate.validator;
        requires spring.beans;
        requires spring.core;
        requires spring.context;
        requires spring.tx;
        requires spring.web;
        requires spring.webmvc;
        requires spring.data.commons;
        requires spring.data.jpa;
        requires spring.boot;
        requires spring.boot.autoconfigure;
        requires cache.api;
    }

    Note that module configuration in the maven-compiler-plugin and maven-surefire-plugin can be removed.

    Configuration the un-assisted way

    If you happen to be in a less than ideal environment, the process is the following:

    1. Run mvn clean test
    2. Analyze the error in the log to get the offending package
    3. Locate the JAR of said package
    4. If the JAR is a module, add the module name to the list of required modules
    5. If not, compute the automatic module name, and add it to the list of required modules

    For example:

    [ERROR] ~/spring-petclinic/src/main/java/org/springframework/samples/petclinic/system/CacheConfig.java:[21,16]
      package javax.cache is not visible
    [ERROR]   (package javax.cache is declared in the unnamed module, but module javax.cache does not read it)

    javax.cache is located in cache-api-1.0.0.jar. It’s not a module, since there’s no module-info in the JAR. The automatic module name is cache.api. Write it as a requires directive in the module file. Rinse and repeat.
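    Since JDK 9, jar --describe-module --file=<jar> prints the (automatic) module name authoritatively. The derivation rule - strip the extension and the trailing version, then replace the remaining non-alphanumeric characters with dots - can also be sketched in one line (simplified: it ignores a possible Automatic-Module-Name manifest entry):

```shell
# derive the automatic module name from a JAR file name
jar=cache-api-1.0.0.jar
echo "$jar" | sed -E 's/\.jar$//; s/-[0-9].*$//; s/[^A-Za-z0-9]+/./g'   # prints cache.api
```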

    ASM failure

    Since the first part of this post, I’ve been made aware that Spring Boot 1.5 won’t be made Java 9-compatible. Let’s bump to the 2.0 milestone, then.

    Bumping Spring Boot to 2.0.0.M5 requires some changes in the module dependencies:

    • hibernate.validator to org.hibernate.validator
    • validation.api to java.validation

    And just when you think it might work:

    Caused by: java.lang.RuntimeException
    	at org.springframework.asm.ClassVisitor.visitModule(ClassVisitor.java:148)

    This issue has already been documented. At this point, explicitly declaring the main class in the Spring Boot Maven plugin resolves the issue:

            <configuration>
                <mainClass>org.springframework.samples.petclinic.PetClinicApplication</mainClass>
                <jvmArguments>--add-modules java.xml.bind</jvmArguments>
            </configuration>

    Javassist failure

    The app is now ready to be tested with mvn clean spring-boot:run. Unfortunately, a new exception comes our way:

    2017-10-16 17:20:22.552  INFO 45661 --- [           main] utoConfigurationReportLoggingInitializer :
    Error starting ApplicationContext. To display the auto-configuration report re-run your application with 'debug' enabled.
    2017-10-16 17:20:22.561 ERROR 45661 --- [           main] o.s.boot.SpringApplication               :
     Application startup failed
      Error creating bean with name 'entityManagerFactory' defined in class path resource
        Invocation of init method failed; nested exception is org.hibernate.boot.archive.spi.ArchiveException:
         Could not build ClassFile

    A quick search points to an incompatibility between Java 9 and Javassist: Javassist is the culprit here. The dependency is required by Spring Data JPA, transitively via Hibernate. To fix it, exclude the dependency, and add the latest version:
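    The POM change might look like this (a sketch: 3.22.0-GA was the current Javassist version at the time, and the exact dependency to exclude it from may differ):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.javassist</groupId>
            <artifactId>javassist</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.javassist</groupId>
    <artifactId>javassist</artifactId>
    <version>3.22.0-GA</version>
</dependency>
```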


    Fortunately, versions are compatible - at least for our usage.

    It works!

    We did it! If you arrived at this point, you deserve a pat on the back, a beer, or whatever you think appropriate.

    Icing on the cake, the Dev Tools dependency can be re-added.


    Migrating to Java 9 requires using Jigsaw, whether you like it or not. At the very least, it means a painful trial-and-error, search-for-the-next-fix process, and removing important steps from the build process. While it’s interesting for library/framework developers to add an additional layer of access control to their internal APIs, it’s much less so for application developers. At this stage, it’s not worth moving to Java 9.

    I hope to conduct this experiment again in some months and to notice an improvement in the situation.

    Categories: Java Tags: Java 9Spring BootmodulesJigsaw
  • Migrating a Spring Boot application to Java 9 - Compatibility

    Java 9 Logo

    With the coming of Java 9, there is a lot of buzz on how to migrate applications to use the module system. Unfortunately, most of the articles written focus on simple Hello world applications. Or worse, regarding Spring applications, the sample app uses legacy practices - like XML, for example. This post aims to correct that by providing a step-by-step migration guide for a non-trivial modern Spring Boot application. The sample app chosen for this is the Spring Pet Clinic.

    There are basically 2 steps to using Java 9: first be compatible, then use the fully-fledged module system. This post aims at the former; a future post will consider the latter.

    Bumping the Java version

    Once JDK 9 is available on the target machine, the first move is to bump the java.version from 8 to 9 in the POM:

        <!-- Generic properties -->
        <java.version>9</java.version>

    Now, let’s mvn clean compile.

    Cobertura’s failure

    The first error along the way is the following:

    [ERROR] Failed to execute goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean (default) on project spring-petclinic:
     Execution default of goal org.codehaus.mojo:cobertura-maven-plugin:2.7:clean failed:
      Plugin org.codehaus.mojo:cobertura-maven-plugin:2.7 or one of its dependencies could not be resolved:
      Could not find artifact com.sun:tools:jar:0 at
       specified path /Library/Java/JavaVirtualMachines/jdk-9.jdk/Contents/Home/../lib/tools.jar -> [Help 1]
    Cobertura is a free Java code coverage reporting tool.
    — https://github.com/cobertura/cobertura

    It requires access to tools.jar, which is part of JDK 8 (and earlier). One of the changes in Java 9 is the removal of that library; hence, Cobertura is not compatible. This is already logged as an issue. Given the last commit on the Cobertura repository is one year old, just comment out the Cobertura Maven plugin. And think about replacing Cobertura with JaCoCo instead.

    Wro4J’s failure

    The next error is the following:

    [ERROR] Failed to execute goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run (default) on project spring-petclinic:
     Execution default of goal ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run failed:
      An API incompatibility was encountered while executing ro.isdc.wro4j:wro4j-maven-plugin:1.8.0:run:
       java.lang.ExceptionInInitializerError: null
    wro4j is a free and Open Source Java project which will help you to easily improve your web application page loading time. It can help you to keep your static resources (js & css) well organized, merge & minify them at run-time (using a simple filter) or build-time (using maven plugin) and has a dozen of features you may find useful when dealing with web resources.
    — https://github.com/wro4j/wro4j

    This is referenced as a GitHub issue. Changes have been merged, but the issue is still open, as Java 9 compatibility should be part of the 2.0 release.

    Let’s comment out Wro4J for the moment.

    Compilation failure

    Compiling the project at this point yields the following compile-time errors:

    Error:(30, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    Error:(21, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)
    Error:(22, 22) java: package javax.xml.bind.annotation is not visible
      (package javax.xml.bind.annotation is declared in module java.xml.bind, which is not in the module graph)

    That means code on the classpath cannot access this module by default. The module needs to be manually added with the --add-modules option of Java 9’s javac. Within Maven, it can be set using the maven-compiler-plugin:
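    A configuration along these lines should work (a sketch: the plugin version and any pre-existing configuration are omitted):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <configuration>
        <compilerArgs>
            <arg>--add-modules</arg>
            <arg>java.xml.bind</arg>
        </compilerArgs>
    </configuration>
</plugin>
```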


    Now the project can compile.

    Test failure

    The next step sees unit tests fail with mvn test.

    The cause is the same, but it’s a bit harder to find: it requires checking the Surefire reports. Some contain exceptions with the following line:

    Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

    Again, test code cannot access the module. This time, however, the maven-surefire-plugin needs to be configured:

            <argLine>--add-modules java.xml.bind</argLine>

    This makes the tests work.

    Packaging failure

    If one thinks this is the end of the road, think again. The packaging phase also fails with a rather cryptic error:

    [ERROR] Failed to execute goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar (default-jar) on project spring-petclinic:
     Execution default-jar of goal org.apache.maven.plugins:maven-jar-plugin:2.6:jar failed:
      An API incompatibility was encountered while executing org.apache.maven.plugins:maven-jar-plugin:2.6:jar:
       java.lang.ExceptionInInitializerError: null
    Caused by: java.lang.ArrayIndexOutOfBoundsException: 1
    	at org.codehaus.plexus.archiver.zip.AbstractZipArchiver.<clinit>(AbstractZipArchiver.java:116)

    This one is even harder to figure out: it requires a Google search to stumble upon the solution. The plexus-archiver is to blame. Simply bumping the maven-jar-plugin to the latest version - 3.2 at the time of this writing - will make use of a Java 9-compatible version of the archiver and will solve the issue:


    Spring Boot plugin failure

    At this point, the project can finally be compiled, tested and packaged. The next step is to run the app through the Spring Boot Maven plugin, i.e. mvn spring-boot:run. But it fails again…​:

    [INFO] --- spring-boot-maven-plugin:1.5.1.RELEASE:run (default-cli) @ spring-petclinic ---
    [INFO] Attaching agents: []
    Exception in thread "main" java.lang.ClassCastException:
     java.base/jdk.internal.loader.ClassLoaders$AppClassLoader cannot be cast to java.base/java.net.URLClassLoader
    	at o.s.b.devtools.restart.DefaultRestartInitializer.getUrls(DefaultRestartInitializer.java:93)
    	at o.s.b.devtools.restart.DefaultRestartInitializer.getInitialUrls(DefaultRestartInitializer.java:56)
    	at o.s.b.devtools.restart.Restarter.<init>(Restarter.java:140)
    	at o.s.b.devtools.restart.Restarter.initialize(Restarter.java:546)
    	at o.s.b.devtools.restart.RestartApplicationListener.onApplicationStartingEvent(RestartApplicationListener.java:67)
    	at o.s.b.devtools.restart.RestartApplicationListener.onApplicationEvent(RestartApplicationListener.java:45)
    	at o.s.c.event.SimpleApplicationEventMulticaster.invokeListener(SimpleApplicationEventMulticaster.java:167)
    	at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:139)
    	at o.s.c.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:122)
    	at o.s.b.context.event.EventPublishingRunListener.starting(EventPublishingRunListener.java:68)
    	at o.s.b.SpringApplicationRunListeners.starting(SpringApplicationRunListeners.java:48)
    	at o.s.b.SpringApplication.run(SpringApplication.java:303)
    	at o.s.b.SpringApplication.run(SpringApplication.java:1162)
    	at o.s.b.SpringApplication.run(SpringApplication.java:1151)
    	at org.springframework.samples.petclinic.PetClinicApplication.main(PetClinicApplication.java:32)

    This is a documented issue that Spring Boot Dev Tools v1.5 is not compatible with Java 9.

    Fortunately, this bug is fixed in Spring Boot 2.0.0.M5. Unfortunately, this specific version is not yet available at the time of this writing. So for now, let’s remove the dev-tools and try to run again. It fails again, but this time the exception is a familiar one:

    Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException

    Let’s add the required argument to the spring-boot-maven-plugin:

            <jvmArguments>--add-modules java.xml.bind</jvmArguments>

    The app can finally be launched and is accessible!


    Running a non-trivial legacy application on JDK 9 requires some effort. Worse, some important features had to be left by the wayside: code coverage and web performance enhancements. On the other side, the only meager benefit is the String memory space improvement. In the next blog post, we will try to improve the situation and actually make use of modules in the app.

    Categories: Java Tags: Java 9Spring BootmodulesJigsaw
  • Truly immutable builds

    Baltic amber pieces with insects

    It sometimes happens that after a few years, an app is stable enough to go into hibernating mode: though it’s used and useful, there are no changes to it, and it happily lives its life. Then, after a while, someone decides to add new features again. Apart from simple things such as locating the sources, one of the most important points is being able to build the app. Though it may seem trivial, there are some things to think about. Here is some advice on how to make apps that can be built forever.

    I’ll use Maven as an example, but the advice below applies to any build tool.

    Immutable plugins version

    For dependencies, Maven requires the version to be set. For plugins, Maven allows omitting it; in that case, it fetches the latest version.

    Though it might be seen as a benefit to always use the latest version, it can break existing behavior.

    Rule 1

    Always explicitly set plugin versions. This includes all plugins that are used during the build, even if they are not explicitly configured, e.g. the maven-surefire-plugin.
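    For example, pinning the Surefire version in pluginManagement (the version number is illustrative; use whatever is current and tested):

```xml
<build>
    <pluginManagement>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.20.1</version>
            </plugin>
        </plugins>
    </pluginManagement>
</build>
```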

    Check in the build tool

    The second problem that may arise is the build tool itself. Which Maven version was used to build the app? Building the app with another version might not work. Or worse, it might build the app in a slightly different way, with unexpected side-effects.

    Hence, the build tool must be saved along the sources of the app. In the Maven ecosystem, this can be done using the Maven wrapper.

    Rule 2
    1. Get the wrapper
    2. Check it in along with regular sources
    3. Use it for each and every build

    Check in JVM options

    The last step occurs when JVM options are tweaked using the MAVEN_OPTS environment variable. It can be used to set the maximum amount of memory for the build, e.g. -Xmx, or to pass system properties to the build, e.g. -Dmy.property=3. Instead of using the MAVEN_OPTS environment variable, such parameters should be set in a .mvn/jvm.config file, and checked in along with the app sources. Note this is available since Maven 3.3.1.

    Rule 3

    Check in JVM build options via the .mvn/jvm.config file along with regular sources
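    A sketch of what creating such a file looks like (the options are examples):

```shell
set -e
cd "$(mktemp -d)"        # stand-in for the project root
mkdir -p .mvn
printf '%s\n' '-Xmx2048m -Dmy.property=3' > .mvn/jvm.config
cat .mvn/jvm.config      # Maven >= 3.3.1 picks this file up automatically
```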

    Check in the CI build file

    The build file relevant to the continuous integration server should be checked in as well. For some - Travis CI, GitLab, etc. - it’s a pretty standard practice. For others - Jenkins - it’s a brand new feature.

    Rule 4

    Check in the CI server specific build files along with regular sources.

    Nice to have

    The above steps try to ensure that as many things as possible are immutable. While the version of the Java code and the version of the generated bytecode can be set (source and target configuration parameters of the maven-compiler-plugin), one thing that cannot be set is the JDK itself.

    In order to keep it immutable along the whole application life, it’s advised to specify the exact JDK. In turn, this depends a lot on the exact continuous integration server. For example, Travis CI allows it natively in the build file, while Jenkins might require the usage of a Docker image.


    Making sure you’re able to build your app in the future is not a glamorous task. However, it can make a huge difference down the line. By following the above rules, the chances you’ll be able to build apps after they’ve been hibernating for a long time will dramatically increase.

    Categories: Java Tags: mavenbuildDockermaintenance