• Performance cost of reflection

    In Java, it’s widely acknowledged that reflection - i.e. usage of the java.lang.reflect API - comes at a high cost in terms of performance. Older Java versions had a huge performance overhead, while newer versions seem to bring it into an acceptable range. But what does “acceptable” really mean?

    This is the question I asked myself when commenting on a performance review that advised replacing reflection-based code with standard code. As many of our decisions are based not on facts but on beliefs, I decided to run some tests to get metrics in Java 8.

    Testing protocol

    In order to get realistic metrics through a sound protocol, I used the excellent JMH testing framework. JMH has several advantages, among them:

    • An existing Maven artifact is readily available
    • Methods to benchmark have only to be annotated with @Benchmark
    • It handles the warming up of the JVM
    • It also handles writing the results to the console

    Here’s a JMH snippet:

    @Benchmark
    public void executePerformanceTest() {
    	// Code goes here
    }

    JMH will take care of executing the above executePerformanceTest() method and of measuring the time taken.

    The code

    To highlight the cost of reflection, let’s check the difference between the time needed to access attributes with reflection and to call simple getters without.

    // With reflection
    Field firstName = clazz.getDeclaredField("firstName");
    Field lastName = clazz.getDeclaredField("lastName");
    Field birthDate = clazz.getDeclaredField("birthDate");
    AccessibleObject.setAccessible(new AccessibleObject[] { firstName, lastName, birthDate }, true);

    // Without reflection
    String firstName = person.getFirstName();
    String lastName = person.getLastName();
    Date birthDate = person.getBirthDate();
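    To make the comparison concrete, here's a self-contained sketch (the Person class and its field names are assumptions mirroring the benchmarked data structure) showing that both access paths read the same data:

```java
import java.lang.reflect.Field;

public class ReflectionDemo {

    // Hypothetical data structure mirroring the benchmarked one
    public static class Person {
        private final String firstName;
        public Person(String firstName) { this.firstName = firstName; }
        public String getFirstName() { return firstName; }
    }

    public static void main(String[] args) throws Exception {
        Person person = new Person("Jane");

        // With reflection: look the field up once, then read it
        Field firstName = Person.class.getDeclaredField("firstName");
        firstName.setAccessible(true);
        String viaReflection = (String) firstName.get(person);

        // Without reflection: a plain getter call
        String viaGetter = person.getFirstName();

        System.out.println(viaReflection.equals(viaGetter)); // prints true
    }
}
```

    The benchmark measures the throughput difference between those two read paths; the lookup and setAccessible() calls are done once, outside the measured loop.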

    Checking possible optimizations

    I was wondering if immutable data structures were compiled into optimized bytecode that could decrease the performance overhead of reflection.

    Thus I created the same basic data structure in two different ways:

    • One mutable with a no-args constructor and setters
    • One immutable with final attributes and constructor initialization
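    As a sketch, the two variants could look like this (class and field names are my own, not necessarily those of the benchmark project):

```java
public class DataStructures {

    // Mutable variant: no-args constructor plus setters
    public static class MutablePerson {
        private String firstName;
        public MutablePerson() { }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getFirstName() { return firstName; }
    }

    // Immutable variant: final attribute initialized in the constructor
    public static class ImmutablePerson {
        private final String firstName;
        public ImmutablePerson(String firstName) { this.firstName = firstName; }
        public String getFirstName() { return firstName; }
    }

    public static void main(String[] args) {
        MutablePerson m = new MutablePerson();
        m.setFirstName("Jane");
        ImmutablePerson i = new ImmutablePerson("Jane");
        System.out.println(m.getFirstName().equals(i.getFirstName())); // prints true
    }
}
```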


    Running the tests on my machine yields the following results:

    # Run complete. Total time: 00:26:55
    Benchmark                                    Mode  Cnt         Score        Error  Units
    BenchmarkRun.runImmutableWithReflection     thrpt  200   2492673.501 ±  37994.941  ops/s
    BenchmarkRun.runImmutableWithoutReflection  thrpt  200  26499946.587 ± 242499.198  ops/s
    BenchmarkRun.runMutableWithReflection       thrpt  200   2505239.277 ±  27697.028  ops/s
    BenchmarkRun.runMutableWithoutReflection    thrpt  200  26635097.050 ± 150798.911  ops/s

    For table-minded readers:

                  Non-reflection (ops/s)          Reflection (ops/s)
      Mutable     26,635,097.050 ± 150,798.911    2,505,239.277 ± 27,697.028
      Immutable   26,499,946.587 ± 242,499.198    2,492,673.501 ± 37,994.941

    And for bar-chart-inclined readers (note the scale is linear):

    [Bar chart: test results]


    Whichever way the results are displayed, accessing a field with reflection takes about 10 times longer than without… on my machine. My guess is that the ratio can be extrapolated to other machines, even if the absolute numbers differ.

    There’s a corollary conclusion: whatever you think holds true at a given time, you should always do some fact-checking to ensure a solid foundation for your decisions.

    If you want to run the tests yourself, the project is available online in Maven format.

    Categories: Java Tags: performance, software engineering
  • You don't talk about refactoring club

    The first rule of Fight Club is: You do not talk about Fight Club. The second rule of Fight Club is: You do not talk about Fight Club.

    — Tyler Durden

    I guess the same could be said about refactoring. That first requires defining what I mean by refactoring in the context of this post:

    Refactoring is any action on the codebase that improves quality.

    Which in turn requires defining what quality is. Everyone I’ve talked with agrees on this: it’s quite hard to do. Let’s settle for now on the following tentative explanation:

    Quality is a property of the codebase (including but not limited to architecture, design, etc.) whose absence stalls further meaningful changes to the codebase. At the limits:

    • 100% quality means changing the codebase to develop a new feature would require the smallest possible time;
    • 0% quality means the time to do it would be infinite.

    Given this definition, refactoring includes:

    • Improving the design of classes
    • Adding unit tests
    • Removing useless code
    • Following accepted good practices
    • Anything that improves readability
    • etc.

    Now back to the subject of this post. Should we ask the customer/manager if a refactoring is necessary? Should we put a refactoring sprint in the backlog? I’ve witnessed first hand many cases where it was asked. As expected, in nearly all cases, the decision was not to perform the refactoring. Taking ages to implement some feature? No design change. Not enough test harness? No tests added. Why? Because the customer/manager has no clue what refactoring and quality means.

    Let’s use a simple analogy: when I take my car to the mechanic, do I get to choose whether he’ll check that the repairs have been correctly executed? Not at all. Checks are part of the overall package I get when I choose a professional mechanic. If the choice were possible, some people would probably opt out of the checks - to pay less. So far, so good. But if trouble then happened, and probability is in favor of that, the mechanic would be in deep trouble, because he’s the professional and didn’t do his job well.

    Developers would also get into trouble if they delivered applications with no tests or a messy codebase; their customer or manager wouldn’t - especially not their manager (depending on the kind of manager, if you catch my drift). So I wonder why developers let people who don’t know about code make such important decisions.

    As a professional developer, you and no one else are responsible for the quality of the application you deliver. Your name is in the source code and the commit history, not your manager’s. Stop searching for excuses not to refactor: don’t ask, do it. Refactoring is part of the software development package, period.

    That doesn’t mean that you have to hide the fact that you’re refactoring, only that it’s up to you to decide whether the code is good enough or needs to be improved.

    Categories: Development Tags: refactoring, software engineering
  • Scala vs Kotlin: Operator overloading

    Last week, I started my comparison of Scala and Kotlin with the Pimp my library pattern. In this second part of the series, I’d like to address operator overloading.


    Before diving into the nitty-gritty details, let’s first describe what it’s all about.

    In every language where there are functions (or methods), a limited set of characters is allowed to define the name of said functions. Some languages are more lenient toward allowed characters: naming a function \O/ might be perfectly valid.

    Some others are much stricter about it. It’s interesting to note that Java eschewed the ability to use symbols in method names besides $ and _ - probably in response to previous abuses in older languages. It definitely stands on the stricter end of the spectrum, and the Java compiler won’t compile the previous \O/ function.

    The name operator overloading is thus slightly misleading, even if widespread. IMHO, it’s semantically more correct to talk about operator characters in function names.


    Scala stands on the far end of the leniency spectrum, and allows characters such as + and £ to be used to name functions, alone or in combination. Note I couldn’t find any official documentation regarding accepted characters (but some helpful discussion is available here).

    This enables libraries to offer operator-like functions to be part of their API. One example is the foldLeft function belonging to the TraversableOnce type, which is also made available as the /: function.

    This allows great flexibility, especially in defining DSLs - for example, for mathematics: functions can be named π or ∑. On the flip side, this flexibility might be subject to abuse, as \O/, ^_^ or even |-O are perfectly valid function names. Anyone for an emoticon-based API?

    def ∑(i: Int*) = i.sum
    val s = ∑(1, 2, 3, 5) // = 11


    Kotlin stands in the middle of the leniency scale, as only a limited set of operators can be defined.

    Each such operator has a corresponding standard function signature. To define a specific operator on a type, the associated function should be implemented and prepended with the operator keyword. For example, the + operator is associated with the plus() method. The following shows how to define this operator for an arbitrary new type and how to use it:

    class Complex(val i: Int, val j: Int) {
        operator fun plus(c: Complex) = Complex(this.i + c.i, this.j + c.j)
    }

    val c = Complex(1, 0) + Complex(0, 1) // = Complex(1, 1)
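    For comparison, Java offers no operator overloading at all: the Kotlin expression above is simply sugar for a call to plus(), which is all a Java version of the same type could expose (this Java sketch is mine, not part of the original snippet):

```java
public class Complex {
    final int i, j;

    Complex(int i, int j) { this.i = i; this.j = j; }

    // In Java, the "operator" can only be an explicitly named method
    Complex plus(Complex c) { return new Complex(i + c.i, j + c.j); }

    public static void main(String[] args) {
        // The Java equivalent of Complex(1, 0) + Complex(0, 1)
        Complex c = new Complex(1, 0).plus(new Complex(0, 1));
        System.out.println(c.i + "," + c.j); // prints 1,1
    }
}
```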


    Scala’s flexibility allows for an almost unlimited set of operator-looking functions. This makes it suited to designing DSLs with a near one-to-one mapping between domain names and function names. But it also relies on implicitness: every operator has to be known to every member of the team, present and future.

    Kotlin takes a much more secure path, as it allows defining only a limited set of operators. However, those operators are so ubiquitous that even beginning software developers know them and their meaning (and even more so experienced ones).

    Categories: Development Tags: scala, kotlin
  • Scala vs Kotlin: Pimp my library

    I was introduced to the world of immutable data structures through the Scala programming language - writing that I was introduced to the FP world would sound too presumptuous. Although I wouldn’t recommend its usage in my day-to-day projects, I’m still grateful for what I learned: my Java code is definitely not the same now, because Scala made me aware of some failings in both the language and my coding practices.

    On the other hand, I recently became much interested in Kotlin, another language that tries to bridge the Object-Oriented and Functional worlds. In this series of articles, I’d like to compare some features of Scala and Kotlin and how each achieves them.

    In this article, I’ll be tackling how both offer a way to improve the usage of existing Java libraries.


    Let’s start with Scala, as it coined the term Pimp My Library 10 years ago.

    Scala’s approach is based on conversion. Consider a base type lacking the desired behavior. For example, Java’s double primitive type - mapped to Scala’s scala.Double type - is pretty limited.

    The first step is to create a new type with said behavior. Therefore, Scala provides a RichDouble type to add some methods e.g. isWhole().

    The second step is to provide an implicit function that converts from the base type to the improved type. Such a function must obey the following rules:

    • Have a single parameter of the base type
    • Return the improved type
    • Be tagged implicit

    Here’s how the Scala library declares the Double to RichDouble conversion function:

    private[scala] abstract class LowPriorityImplicits {
        implicit def doubleWrapper(x: Double) = new runtime.RichDouble(x)
    }

    An alternative is to create an implicit class, which among other requirements must have a constructor with a single parameter of base type.

    The final step is to bring the conversion into scope. For conversion functions, this means importing the function in the file where the conversion will be used. Note that in this particular case, the conversion function is part of the automatic imports, so there’s no need to explicitly declare it.

    At this point, if a function is not defined for a type, the compiler will look for an imported conversion function that transforms this type into a type providing the function. If one is found, the compiler inserts a call to it.

    val x = 45d
    val isWhole = x.isWhole // Double has no isWhole() function
    // But there's a conversion function in scope which transforms Double to RichDouble,
    // and RichDouble has an isWhole() function,
    // so the compiler generates the equivalent of:
    // val isWhole = doubleWrapper(x).isWhole


    One of the main reasons I’m cautious about using Scala is indeed the implicit part: it makes it much harder to reason about the code - just like AOP. Homeopathic usage of AOP is a life saver, widespread usage is counter-productive.

    Kotlin eschews implicitness: instead of conversions, it provides extension methods (and properties).

    Let’s analyze how to add additional behavior to the java.lang.Double type.

    The first step is to provide an extension function: it’s a normal function, but grafted to an existing type. To add the same isWhole() function as above, the syntax is the following:

    fun Double.isWhole() = this == Math.floor(this) && !java.lang.Double.isInfinite(this)

    As for Scala, the second step is to bring this function into scope. As in Scala, it’s achieved through an import. If the previous function has been defined in any file of the ch.frankel.blog package:

    import ch.frankel.blog.isWhole

    val x = 45.0
    val isWhole = x.isWhole // Double has no isWhole() function
    // But there's an extension function in scope for isWhole(),
    // so the compiler generates the equivalent of:
    // val isWhole = x == Math.floor(x) && !java.lang.Double.isInfinite(x)

    Note that extension methods are resolved statically.

    Extensions do not actually modify classes they extend. By defining an extension, you do not insert new members into a class, but merely make new functions callable with the dot-notation on instances of this class.

    We would like to emphasize that extension functions are dispatched statically, i.e. they are not virtual by receiver type. This means that the extension function being called is determined by the type of the expression on which the function is invoked, not by the type of the result of evaluating that expression at runtime.
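    One way to picture this static dispatch: a Kotlin extension function compiles down to a static method taking the receiver as its first parameter. Seen from Java, the isWhole() extension is roughly equivalent to the following (the class and parameter names are assumptions, not the actual generated names):

```java
// A rough Java view of what the Kotlin extension
// fun Double.isWhole() = this == Math.floor(this) && !java.lang.Double.isInfinite(this)
// compiles to: a static method whose first parameter is the receiver.
public class ExtensionAsStatic {

    public static boolean isWhole(double receiver) {
        return receiver == Math.floor(receiver) && !Double.isInfinite(receiver);
    }

    public static void main(String[] args) {
        System.out.println(isWhole(45.0)); // prints true
        System.out.println(isWhole(45.5)); // prints false
    }
}
```

    Because the call site is resolved to such a static method at compile time, there is no virtual dispatch by receiver type.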


    Obviously, Scala has one more indirection level - the conversion. I let anyone decide whether this is a good or a bad thing. For me, it makes it harder to reason about the code.

    The other difference is the packaging of the additional functions. While in Scala they are all attached to the enriched type and can be imported as a whole, in Kotlin they have to be imported one by one.

    Categories: Development Tags: scala, kotlin
  • Fixing floating-point arithmetics with Kotlin

    This week saw me finally taking the time to analyze our code base with Sonar. In particular, I was made aware of plenty of issues regarding floating-point arithmetic.

    Fun with Java’s floating-point arithmetic

    Those of you who learned Java in an academic context probably remember something fishy around FP arithmetic. If you never used it since, you probably forgot about it. Here’s a very quick example of how interesting it turns out to be:

    double a = 5.8d;
    double b = 5.6d;
    double sub = a - b;
    assertThat(sub).isEqualTo(0.2d);

    Contrary to common sense, this snippet throws an AssertionError: sub is not equal to 0.2 but to 0.20000000000000018.
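    A minimal standalone version of this pitfall:

```java
public class FloatingPointPitfall {
    public static void main(String[] args) {
        double a = 5.8d;
        double b = 5.6d;
        double sub = a - b;
        // Neither 5.8 nor 5.6 is exactly representable in binary,
        // so the subtraction carries the representation error along
        System.out.println(sub);         // prints 0.20000000000000018
        System.out.println(sub == 0.2d); // prints false
    }
}
```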

    BigDecimal as a crutch

    Of course, no language worthy of the name could let that stand. BigDecimal is Java’s answer:

    The BigDecimal class provides operations for arithmetic, scale manipulation, rounding, comparison, hashing, and format conversion.

    Let’s update the above snippet with BigDecimal:

    BigDecimal a = new BigDecimal(5.8d);
    BigDecimal b = new BigDecimal(5.6d);
    BigDecimal sub = a.subtract(b);
    assertThat(sub).isEqualTo(new BigDecimal(0.2d));

    And run the test again… Oooops, it still fails:

    Expecting:
     <...>
    to be equal to:
     <...>
    but was not.

    Using constructors changes nothing; one has to use the static valueOf() method instead:

    BigDecimal a = BigDecimal.valueOf(5.8d);
    BigDecimal b = BigDecimal.valueOf(5.6d);
    BigDecimal sub = a.subtract(b);
    assertThat(sub).isEqualTo(BigDecimal.valueOf(0.2d));

    Finally it works, but at the cost of a lot of ceremony…
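    The reason lies in how the two factory paths behave: the BigDecimal(double) constructor captures the exact binary value of the double, while valueOf() goes through the double's canonical String representation (it is documented as equivalent to new BigDecimal(Double.toString(val))):

```java
import java.math.BigDecimal;

public class BigDecimalCreation {
    public static void main(String[] args) {
        // The constructor keeps the full binary expansion of the double,
        // while valueOf(0.2d) is equivalent to new BigDecimal("0.2")
        System.out.println(BigDecimal.valueOf(0.2d));                               // prints 0.2
        System.out.println(new BigDecimal(0.2d).equals(BigDecimal.valueOf(0.2d))); // prints false
    }
}
```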

    Kotlin to the rescue

    Porting the code to Kotlin only marginally improves readability:

    val a = BigDecimal.valueOf(5.8)
    val b = BigDecimal.valueOf(5.6)
    val sub = a.subtract(b)

    Note that in Kotlin, floating-point numbers are doubles by default.

    In order to make the API more fluent and thus the code more readable, two valuable Kotlin features can be applied.

    The first one is extension methods (I’ve already shown their use in a former post to improve logging with SLF4J). Let’s use them here to easily create BigDecimal objects from Double:

    fun Double.toBigDecimal(): BigDecimal = BigDecimal.valueOf(this)
    val a = 5.8.toBigDecimal() // Now a is a BigDecimal

    The second feature - coupled with extension methods - is operator overloading. Kotlin sits between Java, where operator overloading is impossible, and Scala, where every operator can be overloaded (I’m wondering why there isn’t an emoticon library already): only a limited set of operators can be overloaded, including the arithmetic ones - +, -, * and /.

    They can be overloaded quite easily, as shown here:

    operator fun BigDecimal.plus(a: BigDecimal) = this.add(a)
    operator fun BigDecimal.minus(a: BigDecimal) = this.subtract(a)
    operator fun BigDecimal.times(a: BigDecimal) = this.multiply(a)
    operator fun BigDecimal.div(a: BigDecimal) = this.divide(a)
    val sub = a - b

    Note this is already taken care of in Kotlin’s stdlib.

    The original snippet can now be written like this:

    val a = 5.8.toBigDecimal()
    val b = 5.6.toBigDecimal()
    assertThat(a - b).isEqualTo(0.2.toBigDecimal())

    The assertion line can probably be improved further. Possible solutions include AssertJ custom assertions or… extension methods again, in order for the isEqualTo() method to accept Double parameters.


    Any complex API or library can be made easier to read by using Kotlin extension methods. What are you waiting for?

    Categories: Technical Tags: kotlin, arithmetic
  • Comparing Lombok and Kotlin

    I’ve known about Lombok for a long time, and I even wrote on how to create a new (at the time) @Delegate annotation. Despite this, and even though I think it’s a great library, I’ve never used it in my projects. The reason is mostly that I consider setting up the Lombok agent across various IDEs and build tools too complex for my own taste in standard development teams.

    Comes Kotlin, which has support for IDEs and build tools right out of the box, plus seamless Java interoperability. So I was wondering whether Lombok is still relevant. In this article, I’ll check whether Kotlin offers the same features as Lombok and how. My goal is not to downplay Lombok’s features in any way, but to perform some fact-checking so you can decide what’s the right tool for you.

    The following assumes readers are familiar with Lombok’s features.

    For each feature, the Lombok snippet is followed by its Kotlin equivalent.

    val
    Lombok:
        val e = new ArrayList<String>();
    Kotlin:
        val e = ArrayList<String>()

    @NonNull
    Lombok:
        public void f(@NonNull String b) { ... }
    Kotlin:
        fun f(b: String) { ... }
    Kotlin types are either nullable i.e. String? or not i.e. String. Such types do not inherit from one another.

    @Cleanup
    Lombok:
        @Cleanup InputStream in = new FileInputStream("foo.txt");
    Kotlin:
        FileInputStream("foo.txt").use {
          // Do stuff here using 'it'
          // to reference the stream
        }
    • use() is part of Kotlin’s stdlib
    • Referencing the stream is not even necessary

    @NoArgsConstructor, @RequiredArgsConstructor and @AllArgsConstructor
    No equivalent, as class and constructor declarations are merged.

    Getter only
    Lombok:
        public class Foo {
            private String bar;
        }
    Kotlin:
        class Foo {
          var bar: String? = null
            private set
        }
    • Generates a private setter instead of no setter
    • Not idiomatic Kotlin (see below)

    Getter with constructor initialization
    Lombok:
        public class Foo {
            private String bar;
            public Foo(String bar) {
                this.bar = bar;
            }
        }
    Kotlin:
        class Foo(val bar: String)
    Takes advantage of merging class and constructor declarations, with slightly different semantics.

    Setter only
    Lombok:
        public class Foo {
            private String bar;
        }
    Kotlin:
    No equivalent, but it’s pretty unusual to just have a setter (it doesn’t favor immutability).

    Getter and setter
    Lombok:
        public class Foo {
            private String bar;
            public Foo(String bar) {
                this.bar = bar;
            }
        }
    Kotlin:
        class Foo(var bar: String)

    @ToString
    No direct equivalent, but included in data classes, see below.

    @Data
    Lombok:
        public class DataExample {
            private String name;
            private int age;
        }
    Kotlin:
        data class DataExample(
          var name: String,
          var age: Int)

    @Value
    Lombok:
        public class ValueExample {
          String name;
          int age;
        }
    Kotlin:
        data class ValueExample(
          val name: String,
          val age: Int)
    Notice the difference between val and var with the above snippet.

    @Getter(lazy = true)
    Lombok:
        public class GetterLazyExample {
          @Getter(lazy = true) private final Double cached = expensive();
        }
    Kotlin:
        class GetterLazyExample {
          val cached by lazy { expensive() }
        }

    @Log
    Lombok:
        public class LogExample {
          public static void main(String... args) {
            log.error("Something's wrong here");
          }
        }
    Kotlin:
    No direct equivalent, but possible in many different ways.

    @SneakyThrows
    Lombok:
        public void run() {
          throw new Throwable();
        }
    Kotlin:
        fun run() {
          throw Throwable()
        }
    There’s no such thing as checked exceptions in Kotlin. Using Java checked exceptions requires no specific handling.

    @Builder
    Lombok:
        public class BuilderExample {
          private String name;
          private int age;
        }
        BuilderExample b = new BuilderExampleBuilder()
          .name("Foo")
          .age(42)
          .build();
    Kotlin:
        class BuilderExample(
          val name: String,
          val age: Int)
        val b = BuilderExample(
          name = "Foo",
          age = 42)
    No direct equivalent, use named arguments instead.

    @Synchronized
    Lombok:
        public class SynchronizedExample {
          private final Object readLock = new Object();
          @Synchronized("readLock")
          public void foo() { ... }
        }
    It’s possible to omit referencing an object in the annotation: Lombok will create one under the hood (and use it).
    Kotlin:
        class SynchronizedExample {
          private val readLock = Object()
          fun foo() {
            synchronized(readLock) { ... }
          }
        }
    synchronized() is part of Kotlin’s stdlib.

    Experimental features

    @ExtensionMethod
    Lombok:
        class Extensions {
          public static String extends(String in) {
            // do something with in and
            return it;
          }
        }
        @ExtensionMethod({String.class, Extensions.class})
        public class Foo {
          public String bar() {
            return "bar".toTitleCase();
          }
        }
    Kotlin:
        fun String.extends(): String {
          // do something with this and
          return it
        }
        class Foo {
          fun bar() = "bar".toTitleCase()
        }

    @Wither
    No direct equivalent, available through data classes:
        class Foo(val bar: String = "bar")

    @Delegate
    Available in many different flavors.

    @UtilityClass
    Lombok:
        public class UtilityClassExample {
          private final int CONSTANT = 5;
          public int addSomething(int in) {
            return in + CONSTANT;
          }
        }
    Kotlin:
        val CONSTANT = 5
        fun addSomething(i: Int) = i + CONSTANT

    @Helper
    Lombok:
        public class HelperExample {
          int someMethod(int arg1) {
            int localVar = 5;
            @Helper class Helpers {
              int helperMethod(int arg) {
                return arg + localVar;
              }
            }
            return helperMethod(10);
          }
        }
    Kotlin:
        class HelperExample {
          fun someMethod(arg1: Int): Int {
            val localVar = 5
            fun helperMethod(arg: Int): Int {
              return arg + localVar
            }
            return helperMethod(10)
          }
        }

    In conclusion:

    • Lombok offers some more features
    • Lombok offers some more configuration options, but most don’t make sense taken separately (e.g. data classes)
    • Lombok and Kotlin have slightly different semantics when a feature is common, so double-check if you’re a Lombok user willing to migrate

    Of course, this post completely left out Kotlin’s original features. This is a subject left for future articles.

    Categories: Java Tags: lombok, kotlin
  • Faster Mutation Testing

    As an ardent promoter of Mutation Testing, I sometimes get comments that it’s too slow to be of real use. This is always very funny, as the same applies to Integration Testing or GUI testing. Yet the argument is only used against Mutation Testing, even though it costs nothing to set up, as opposed to the former. That will be the subject of another post. In this one, I will provide proposals on how to speed up mutation testing - or more precisely PIT, the Java Mutation Testing reference.
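    For readers new to the concept, here's a hand-written sketch of what PIT automates: it mutates the code under test (e.g. turning >= into >) and checks whether at least one test fails, "killing" the mutant. The example and names are mine:

```java
public class MutationSketch {

    // Production code under test
    static boolean isAdult(int age) { return age >= 18; }

    // A mutant PIT could generate: >= replaced by >
    static boolean isAdultMutant(int age) { return age > 18; }

    public static void main(String[] args) {
        // A test exercising the boundary value kills the mutant:
        // it passes on the original but would fail on the mutant.
        System.out.println(isAdult(18));       // prints true
        System.out.println(isAdultMutant(18)); // prints false
    }
}
```

    A surviving mutant means no test noticed the change - a gap in the test suite that line coverage alone would not reveal.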

    Setting the bar

    A project reference for this article is required. Let’s use the codec submodule of Netty 4.1.

    At first, let’s compile the project to measure only PIT-related time. The following should be launched at the root of the Maven project hierarchy:

    mvn -pl codec clean test-compile

    The -pl option restricts the command to the specified sub-projects - the codec sub-project in this case. Now just run the tests:

    mvn -pl codec surefire:test
    Results :
    Tests run: 340, Failures: 0, Errors: 0, Skipped: 0
    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 8.954 s
    [INFO] Finished at: 2016-06-13T21:41:10+02:00
    [INFO] Final Memory: 12M/309M
    [INFO] ------------------------------------------------------------------------

    For what it’s worth, 9 seconds will be the baseline. This is not a really precise measurement, but it’s good enough for the scope of this article.

    Now let’s run PIT:

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     org.pitest:pitest-maven:mutationCoverage

    Note: PIT flags the above class as failing, even though Surefire has no problem with it.

    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 14:14 min
    [INFO] Finished at: 2016-06-13T22:02:48+02:00
    [INFO] Final Memory: 14M/329M

    A whopping 14 minutes! Let’s try to reduce the time taken.

    Speed vs. relevancy trade-off

    There is no such thing as a free lunch.

    PIT offers a lot of tweaks to improve testing speed. However, most of them imply a huge drop in relevancy: faster means less feedback on code quality. As complete feedback is the ultimate goal of Mutation Testing, that’s IMHO meaningless.

    Those configuration options are:

    • Use a limited set of mutators
    • Limit scope of target classes
    • Limit number of tests
    • Limit dependency distance

    All benefits those flags bring are negated by less information.

    Running on multiple cores

    By default, PIT uses a single core, even though most personal computers - not to mention servers - have many more available.

    The easiest way to run faster is to use more cores. If it takes X minutes to run PIT on a single core, and if Y additional cores are 100% available, then it should only take X / (Y + 1) minutes to run on the first and additional cores.

    The number of cores is governed by the threads system property when launching Maven:

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     -Dthreads=4 org.pitest:pitest-maven:mutationCoverage

    This yields the following results - short of the theoretical 4x speedup, which probably means those additional cores were already performing some tasks during the run.

    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 07:30 min
    [INFO] Finished at: 2016-06-13T22:25:38+02:00
    [INFO] Final Memory: 15M/318M

    Still, a 50% time reduction is good enough for such a small configuration effort.

    Incremental analysis

    The rationale behind incremental analysis is that unchanged tests running against unchanged classes will produce the same outcome as the last run.

    Hence, PIT creates a hash for every test and production class. If they are unchanged, the tests won’t run and PIT will reuse the previous result. This just requires configuring where to store those hashes.

    Let’s run PIT with incremental analysis. The historyInputFile system property is the file from which PIT will read hashes, historyOutputFile the file to which it will write them. Obviously, they should point to the same file - anyone care to enlighten me as to why they can be different? Anyway:

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     -DhistoryInputFile=~/.fastermutationtesting -DhistoryOutputFile=~/.fastermutationtesting \
     -Dthreads=4 org.pitest:pitest-maven:mutationCoverage

    That produces:

    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 07:05 min
    [INFO] Finished at: 2016-06-13T23:04:59+02:00
    [INFO] Final Memory: 15M/325M

    That’s about the time of the previous run… It didn’t run any faster! What happened? Well, that was the first run, so the hashes had not been created yet. If PIT is run a second time with the exact same command line, the output is now:

    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 17.806 s
    [INFO] Finished at: 2016-06-13T23:12:43+02:00
    [INFO] Final Memory: 20M/575M

    That’s way better! Just the time to start up the engine, create the hashes and compare them.

    Using SCM

    An alternative to the above incremental analysis is to delegate change checks to the VCS. However, that requires the scm section to be adequately configured in the POM and the usage of the scmMutationCoverage goal in place of the mutationCoverage one.

    Note this is the reason why Maven should be launched at the root of the project hierarchy, to benefit from the SCM configuration in the root POM.

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     -Dthreads=4 org.pitest:pitest-maven:scmMutationCoverage

    The output is the following:

    [INFO] No locally modified files found - nothing to mutation test
    [INFO] ------------------------------------------------------------------------
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 7.587 s
    [INFO] Finished at: 2016-06-14T22:11:29+02:00
    [INFO] Final Memory: 14M/211M

    This is even faster than incremental analysis, probably since there’s no hash comparison involved.

    Changing files without committing will correctly send the related mutants to be tested. Hence, this goal should only be used by developers on their local repository to check their changes before committing.


    Now that an authoritative figure has positively written about Mutation Testing, more and more people will be willing to use it. In that case, the right configuration might make a difference between wide adoption and complete rejection.

    Categories: Java Tags: mutation testing, quality, performance
  • Smart logging in Java 8 and Kotlin

    Logging is not a sexy subject, but it’s important nonetheless. In the Java world, logging frameworks range from Log4J to SLF4J via Commons Logging and JDK logging (let’s exclude Log4J 2 for the time being). Though different in architecture and features, all of their APIs look the same. The logger has a method for each log level, e.g.:

    • debug(String message)
    • info(String message)
    • error(String message)
    • etc.

    Levels are organized into a hierarchy. Once the framework is configured at a certain level, only messages logged with the same or a higher priority will be written.

    So far, so good. The problem comes when messages contain variables so that they must be concatenated.

    LOGGER.debug("Customer " + customer.getId() + " has just ordered " + item.getName() + " (" + item.getId() + ")");

    String concatenation has a definite performance cost in Java, and whatever the configured log level, it will occur.

    For this reason, modern logging frameworks such as SLF4J provide improved signatures that accept a message format typed as String and variables as a vararg of Objects. In that case, concatenation occurs only when the logger effectively writes.

    LOGGER.debug("Customer {} has just ordered {} ({})", customer.getId(), item.getName(), item.getId());

    Sometimes, however, variables are not readily available but have to be computed explicitly solely for the purpose of logging.

    LOGGER.debug("Customer {} has just ordered {}", customer.getId(), order.expensiveComputation());

    SLF4J doesn’t help there, as the method is evaluated even if the logger later decides not to write because the framework is configured with a higher priority. In that case, it’s therefore advised to wrap the logger method call inside a relevant priority check.

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Customer {} has just ordered {}", customer.getId(), order.expensiveComputation());
    }

    That has to be done for every expensive method call, so it requires strong coding discipline and reviews to make sure the wrapping occurs when it’s relevant - but only then. Besides, it decreases readability and thus maintainability. To achieve the same result automatically without those cons, one could use AOP, at the cost of extra complexity.

    Comes Java 8 and the Supplier<T> functional interface: with a Supplier<String>, building the message can be deferred, so that a method can be created like so:

    public void debug(Supplier<String> s) {
        if (LOGGER.isDebugEnabled()) {
            LOGGER.debug(s.get());
        }
    }

    In that case, the get() method is called only when the wrapping condition evaluates to true.

    Using this method is as simple as that:

    debug(() -> "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation());
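A minimal, framework-free sketch shows why this matters: with a Supplier, the expensive computation runs only when the level is enabled. The names (debugEnabled, expensiveComputation, the counter) are illustrative stand-ins, not SLF4J API:

```java
import java.util.function.Supplier;

public class LazyLoggingDemo {
    static boolean debugEnabled = false;
    static int computations = 0; // counts how often the expensive call actually runs

    static String expensiveComputation() {
        computations++;
        return "result";
    }

    // Eager: the argument is built before the call, whatever the level.
    static void debugEager(String message) {
        if (debugEnabled) System.out.println(message);
    }

    // Lazy: the Supplier is invoked only when the level is enabled.
    static void debugLazy(Supplier<String> s) {
        if (debugEnabled) System.out.println(s.get());
    }

    public static void main(String[] args) {
        debugEager("order " + expensiveComputation()); // computed even though nothing is written
        debugLazy(() -> "order " + expensiveComputation()); // never computed: the level is disabled
        System.out.println(computations); // 1: only the eager call paid the price
    }
}
```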

    Great! But where to put this improved debug() method?

    • In the class itself: and duplicate for every class. Really?
    • In a utility class: one would have to pass the LOGGER as the first parameter. Did you say cumbersome?
    • In the logger: one can create a wrapper and keep a standard logger as a delegate but the factory is final (at least in SLF4J) and has a lot of private methods.
    • In an aspect: back to square one…

    That’s the step where things start not being so nice in the Java realm.

    What about Kotlin? It comes with extension functions (and properties). This will probably be the subject of a future post, since you should adopt Kotlin if only for this feature. Suffice it to say that Kotlin can make it look like one can add state and behavior to already defined types.

    So debug() can be defined in an aptly named file:

    fun Logger.debug(s: () -> String) {
        if (isDebugEnabled) debug(s.invoke())
    }

    And calling it really feels like calling a standard logger, only a lambda is passed:

    LOGGER.debug { "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation() }

    As a finishing touch, let’s make the function inline. By doing that, the compiler will replace every call site with the function body in the bytecode. We keep the readable syntax and avoid the overhead of instantiating the lambda at runtime:

    inline fun Logger.debug(s: () -> String) {
        if (isDebugEnabled) debug(s.invoke())
    }

    Note that it can still be called in Java, albeit with a not-so-nice syntax:

    Slf4KUtilsKt.debug(LOGGER, () -> "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation());

    Also note that Log4J 2 already implements this feature out-of-the-box.

    At this point, it’s simple enough to copy-paste the above snippet when required. But developers are lazy by nature, so I created a full-fledged wrapper around the SLF4J methods. The source code is available on GitHub and the binary artifact on Bintray, so that a single dependency is enough to use it.

    Categories: Development Tags: kotlin, logging, performance
  • Encapsulation: I don't think it means what you think it means

    My post about immutability provoked some stir and received plenty of comments, from the daunting to the interesting, both on reddit and here.

    Comment types

    They can be more or less divided into those categories:

    • Let’s not consider anything and don’t budge an inch - with no valid argument besides “it’s terrible”
    • One thread wondered about the point of code review, to catch bugs or to share knowledge
    • Rational counter-arguments that I’ll be happy to debate in a future post
    • “It breaks encapsulation!” This one is the point I’d like to address in this post.

    Encapsulation, really?

    I’ve already written about encapsulation but if the wood is very hard, I guess it’s natural to hammer down the nail several times.

    Younger, when I learned OOP, I was told about its characteristics:

    1. Inheritance
    2. Polymorphism
    3. Encapsulation

    This is the definition found on Wikipedia:

    Encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
    • A language mechanism for restricting direct access to some of the object's components.
    • A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
    -- Wikipedia

    In short, encapsulating means no direct access to an object’s state but only through its methods. In Java, that directly translated to the JavaBeans conventions with private properties and public accessors - getters and setters. That is the current sad state that we are plagued with and that many refer to when talking about encapsulation.

    For this kind of pattern is no encapsulation at all! Don’t believe me? Check this snippet:

    public class Person {
        private Date birthdate = new Date();
        public Date getBirthdate() {
            return birthdate;
        }
    }
    Given that there’s no setter, it shouldn’t be possible to change the date inside a Person instance. But it is:

    Person person = new Person();
    Date date = person.getBirthdate();
    date.setTime(0);

    Ouch! State was not so well-encapsulated after all…

    It all boils down to one tiny little difference: we want to give access to the value of the birthdate but we happily return the reference to the birthdate field which holds the value. Let’s change that to separate the value itself from the reference:

    public class Person {
        private Date birthdate = new Date();
        public Date getBirthdate() {
            return new Date(birthdate.getTime());
        }
    }

    By creating a new Date instance that shares nothing with the original reference, real encapsulation has been achieved. Now getBirthdate() is safe to call.

    Note that types that are immutable by nature - in Java, primitives, String, and classes designed to be immutable - are completely safe to share. Thus, it’s perfectly acceptable to make fields of those types public (and final) and forget about getters.
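For example, with an immutable type such as String, exposing a public final field leaks nothing, since callers can only rebind their own variable. A sketch with a hypothetical Person variant:

```java
public class ImmutableFieldDemo {
    static class Person {
        // String is immutable, so exposing the field directly is safe.
        public final String firstName;
        Person(String firstName) {
            this.firstName = firstName;
        }
    }

    public static void main(String[] args) {
        Person person = new Person("Ada");
        String name = person.firstName;
        name = name + " Lovelace"; // rebinds the local variable only
        System.out.println(person.firstName); // still "Ada": no state escaped
    }
}
```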

    Note that injecting references e.g. in the constructor entails the exact same problem and should be treated in the same way.

    public class Person {
        private Date birthdate;
        public Person(Date birthdate) {
            this.birthdate = new Date(birthdate.getTime());
        }
        public Date getBirthdate() {
            return new Date(birthdate.getTime());
        }
    }

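Copying on the way in and on the way out can be verified directly: tampering with either the injected Date or the returned one leaves the instance untouched. A runnable check of the pattern (the nested Person mirrors the class above):

```java
import java.util.Date;

public class DefensiveCopyDemo {
    static class Person {
        private final Date birthdate;
        Person(Date birthdate) {
            this.birthdate = new Date(birthdate.getTime()); // copy on the way in
        }
        public Date getBirthdate() {
            return new Date(birthdate.getTime()); // copy on the way out
        }
    }

    public static void main(String[] args) {
        Date injected = new Date(1000L);
        Person person = new Person(injected);
        injected.setTime(0L); // caller mutates its own reference: no effect
        person.getBirthdate().setTime(0L); // mutating the returned copy: no effect either
        System.out.println(person.getBirthdate().getTime()); // 1000: state never leaked
    }
}
```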
    The problem is that most people who religiously invoke encapsulation blissfully share their field references to the outside world.


    There are a couple of conclusions here:

    • If you have mutable fields, simple getters such as those generated by IDEs provide no encapsulation.
    • True encapsulation can only be achieved with mutable fields if copies of the fields are returned, and not the fields themselves.
    • Once you have immutable fields, exposing them through a getter or making them public final amounts to exactly the same thing.

    Note: kudos to you if you understand the above meme reference.

    Categories: Development Tags: design, object oriented programming
  • Rolling dice in Kotlin

    A little more than 2 years ago, I wrote a post on how you could create a Die rolling API in Scala. As I’m more and more interested in Kotlin, let’s do that in Kotlin.

    At the root of the hierarchy lies the Rollable interface:

    interface Rollable<T> {
        fun roll(): T
    }

    The base class is the Die:

    open class Die(val sides: Int) : Rollable<Int> {
        private val random = SecureRandom()
        override fun roll() = random.nextInt(sides)
    }

    Now let’s create some objects:

    object d2: Die(2)
    object d3: Die(3)
    object d4: Die(4)
    object d6: Die(6)
    object d10: Die(10)
    object d12: Die(12)
    object d20: Die(20)

    Finally, in order to make code using Die instances testable, let’s change the class to inject the Random instead:

    open class Die(val sides: Int, private val random: Random = SecureRandom()) : Rollable<Int> {
        override fun roll() = random.nextInt(sides)
    }

    Note that the random property is private, so that only the class itself can use it - there won’t even be a getter.
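The testability benefit can be illustrated with the same pattern in Java: injecting a seeded Random makes rolls reproducible. The Die below mirrors the Kotlin class for illustration; it is not the author's actual test code:

```java
import java.util.Random;

public class DieDemo {
    static class Die {
        private final int sides;
        private final Random random;
        // The Random is injected, so tests can pass a seeded (deterministic) one.
        Die(int sides, Random random) {
            this.sides = sides;
            this.random = random;
        }
        int roll() {
            return random.nextInt(sides); // 0 to sides - 1, like the Kotlin version
        }
    }

    public static void main(String[] args) {
        // Two dice sharing the same seed produce the same sequence of rolls.
        Die a = new Die(6, new Random(42L));
        Die b = new Die(6, new Random(42L));
        System.out.println(a.roll() == b.roll() && a.roll() == b.roll()); // true
    }
}
```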

    The coolest thing is that I hacked the above code together in 15 minutes on the plane. I love Kotlin :-)

    Categories: Development Tags: api, kotlin