• Scala vs Kotlin: Operator overloading

    Last week, I started my comparison of Scala and Kotlin with the Pimp my library pattern. In the second part of this series, I'd like to address operator overloading.

    Overview

    Before diving into the nitty-gritty details, let's first try to explain what it's all about.

    In every language that has functions (or methods), a limited set of characters is allowed in the names of said functions. Some languages are more lenient toward allowed characters: naming a function \O/ might be perfectly valid.

    Others are much stricter about it. It's interesting to note that Java eschewed the ability to use symbols in function names besides $ - probably in response to previous abuses in older languages. It definitely stands on the less lenient part of the spectrum, and the Java compiler won't compile the previous \O/ function.

    The name operator overloading is thus slightly misleading, even if widespread. IMHO, it’s semantically more correct to talk about operator characters in function names.

    Scala

    Scala stands on the far end of the leniency spectrum, and allows characters such as + and £ to be used to name functions, alone or in combination. Note that I couldn't find any official documentation regarding accepted characters (but some helpful discussion is available here).

    This enables libraries to offer operator-like functions as part of their API. One example is the foldLeft function of the TraversableOnce type, which is also made available as the /: function.

    This allows great flexibility, especially in defining DSLs. For example, in mathematics, functions can be named π or ∑. On the flip side, this flexibility might be subject to abuse, as \O/, ^_^ or even |-O are perfectly valid function names. Anyone for an emoticon-based API?

    def ∑(i: Int*) = i.sum
    
    val s = ∑(1, 2, 3, 5) // = 11
    

    Kotlin

    Kotlin stands in the middle of the leniency scale, as it allows defining only a limited set of operators.

    Each such operator has a corresponding standard function signature. To define a specific operator on a type, the associated function should be implemented and prefixed with the operator keyword. For example, the + operator is associated with the plus() method. The following shows how to define this operator for an arbitrary new type and how to use it:

    class Complex(val i: Int, val j: Int) {
        operator fun plus(c: Complex) = Complex(this.i + c.i, this.j + c.j)
    }
    
    val c = Complex(1, 0) + Complex(0, 1) // = Complex(1, 1)
    
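    Other operators follow the same convention: for instance, the binary - operator maps to minus() and the unary one to unaryMinus(). A minimal sketch extending the Complex type above:

    class Complex(val i: Int, val j: Int) {
        operator fun plus(c: Complex) = Complex(this.i + c.i, this.j + c.j)
        operator fun minus(c: Complex) = Complex(this.i - c.i, this.j - c.j)
        operator fun unaryMinus() = Complex(-i, -j)
    }

    val d = Complex(1, 0) - Complex(0, 1) // = Complex(1, -1)
    val e = -Complex(1, 1)                // = Complex(-1, -1)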

    Conclusion

    Scala's flexibility allows for an almost unlimited set of operator-looking functions. This makes it suited to designing DSLs with a near one-to-one mapping between domain names and function names. But it also relies on implicitness: every operator has to be known to every member of the team, present and future.

    Kotlin takes a much safer path, as it allows defining only a limited set of operators. However, those operators are so ubiquitous that even beginning software developers know them and their meaning (and even more so experienced ones).

    Categories: Technical Tags: scala, kotlin
  • Scala vs Kotlin: Pimp my library

    I've been introduced to the world of immutable data structures through the Scala programming language - to write that I've been introduced to the FP world would sound too presumptuous. Although I wouldn't recommend its usage in my day-to-day projects, I'm still grateful to it for what I learned: my Java code is definitely not the same anymore, because Scala made me aware of some failings in both the language and my coding practices.

    On the other hand, I've recently become much more interested in Kotlin, another language that tries to bridge the Object-Oriented and Functional worlds. In this series of articles, I'd like to compare some features of Scala and Kotlin and how each language achieves them.

    In this article, I’ll be tackling how both offer a way to improve the usage of existing Java libraries.

    Scala

    Let’s start with Scala, as it coined the term Pimp My Library 10 years ago.

    Scala's approach is based on conversion. Consider a base type lacking the desired behavior. For example, Java's double primitive type - mapped to Scala's scala.Double type - is pretty limited.

    The first step is to create a new type with said behavior. For this, Scala provides a RichDouble type that adds some methods, e.g. isWhole().

    The second step is to provide an implicit function that converts from the base type to the improved type. The signature of such a function must obey the following rules:

    • Have a single parameter of the base type
    • Return the improved type
    • Be tagged implicit

    Here’s how the Scala library declares the Double to RichDouble conversion function:

    private[scala] abstract class LowPriorityImplicits {
        ...
        implicit def doubleWrapper(x: Double) = new runtime.RichDouble(x)
        ...
    }
    

    An alternative is to create an implicit class, which among other requirements must have a constructor with a single parameter of the base type.

    The final step is to bring the conversion into scope. For conversion functions, this means importing the function in the file where the conversion will be used. Note that in this particular case, the conversion function is part of the automatic imports, so there's no need to explicitly declare it.

    At this point, if a function is not defined for a type, the compiler will look for an imported conversion function that transforms this type into a type that does provide the function. If one is found, the call is wrapped in an invocation of the conversion function.

    val x = 45d
    val isWhole = x.isWhole // Double has no isWhole function
    
    // But there's a conversion function in scope which transforms Double to RichDouble,
    // and RichDouble has an isWhole function. The compiler effectively generates:
    val isWhole2 = doubleWrapper(x).isWhole
    

    Kotlin

    One of the main reasons I'm cautious about using Scala is indeed the implicit part: it makes it much harder to reason about the code - just like AOP. Homeopathic usage of AOP is a life-saver; widespread usage is counter-productive.

    Kotlin eschews implicitness: instead of conversions, it provides extension methods (and properties).

    Let’s analyze how to add additional behavior to the java.lang.Double type.

    The first step is to provide an extension function: it's a normal function, but grafted onto an existing type. To add the same isWhole() function as above, the syntax is the following:

    fun Double.isWhole() = this == Math.floor(this) && !java.lang.Double.isInfinite(this)
    

    As in Scala, the second step is to bring this function into scope; it's likewise achieved through an import. Assuming the previous function has been defined in a file of the ch.frankel.blog package:

    import ch.frankel.blog.isWhole
    
    val x = 45.0
    val isWhole = x.isWhole() // Double has no isWhole() function
    
    // But there's an extension function in scope for isWhole(),
    // which the compiler resolves statically to:
    val isWhole2 = x == Math.floor(x) && !java.lang.Double.isInfinite(x)
    

    Note that extension methods are resolved statically. As the Kotlin documentation puts it:

    Extensions do not actually modify classes they extend. By defining an extension, you do not insert new members into a class, but merely make new functions callable with the dot-notation on instances of this class.

    We would like to emphasize that extension functions are dispatched statically, i.e. they are not virtual by receiver type. This means that the extension function being called is determined by the type of the expression on which the function is invoked, not by the type of the result of evaluating that expression at runtime.
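
    A minimal sketch of this static dispatch, using hypothetical Shape and Circle types:

    open class Shape
    class Circle : Shape()

    fun Shape.name() = "Shape"
    fun Circle.name() = "Circle"

    fun printName(s: Shape) = println(s.name())

    printName(Circle()) // Prints "Shape": resolution uses the declared type of s, not the runtime type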

    Conclusion

    Obviously, Scala has one more level of indirection - the conversion. I'll let everyone decide whether this is a good or a bad thing. For me, it makes it harder to reason about the code.

    The other difference is the packaging of the additional functions. While in Scala those are all attached to the enriched type and can be imported as a whole, in Kotlin they have to be imported one by one.

    Categories: Technical Tags: scala, kotlin
  • Fixing floating-point arithmetics with Kotlin

    This week saw me finally taking time to analyze our code base with Sonar. In particular, I was made aware of plenty of issues regarding floating-point arithmetics.

    Fun with Java’s floating-point arithmetics

    Those of you who learned Java in an academic context probably remember something fishy around FP arithmetics. Then, if you never used it, you probably forgot about it. Here's a very quick example of how interesting it turns out to be:

    double a = 5.8d;
    double b = 5.6d;
    double sub = a - b;
    assertThat(sub).isEqualTo(0.2d);
    

    Contrary to common sense, this snippet throws an AssertionError: sub is not equal to 0.2 but to 0.20000000000000018.

    BigDecimal as a crutch

    Of course, no language worthy of the name could let that stand. BigDecimal is Java’s answer:

    The BigDecimal class provides operations for arithmetic, scale manipulation, rounding, comparison, hashing, and format conversion.

    Let’s update the above snippet with BigDecimal:

    BigDecimal a = new BigDecimal(5.8d);
    BigDecimal b = new BigDecimal(5.6d);
    BigDecimal sub = a.subtract(b);
    assertThat(sub).isEqualTo(new BigDecimal(0.2d));
    

    And run the test again… Oooops, it still fails:

    java.lang.AssertionError: 
    Expecting:
     <0.20000000000000017763568394002504646778106689453125>
    to be equal to:
     <0.200000000000000011102230246251565404236316680908203125>
    but was not.
    

    Using the constructor changes nothing: it keeps the exact binary representation of the double. One has to use the static valueOf() method instead, which is based on the double's canonical string representation.

    BigDecimal a = BigDecimal.valueOf(5.8d);
    BigDecimal b = BigDecimal.valueOf(5.6d);
    BigDecimal sub = a.subtract(b);
    assertThat(sub).isEqualTo(BigDecimal.valueOf(0.2d));
    

    Finally it works, but at the cost of a lot of ceremony…

    Kotlin to the rescue

    Porting the code to Kotlin only marginally improves readability:

    val a = BigDecimal.valueOf(5.8)
    val b = BigDecimal.valueOf(5.6)
    val sub = a.subtract(b)
    assertThat(sub).isEqualTo(BigDecimal.valueOf(0.2))
    

    Note that in Kotlin, floating-point numbers are doubles by default.

    In order to make the API more fluent and thus the code more readable, two valuable Kotlin features can be applied.

    The first one is extension methods (I've already shown their use in a former post to improve logging with SLF4J). Let's use them here to easily create BigDecimal objects from Double:

    fun Double.toBigDecimal(): BigDecimal = BigDecimal.valueOf(this)
    
    val a = 5.8.toBigDecimal() // Now a is a BigDecimal
    

    The second feature, coupled with extension methods, is operator overloading. Kotlin sits between Java, where operator overloading is impossible, and Scala, where every operator can be overloaded (I'm wondering why there isn't an emoticon library already): only some operators can be overloaded, including the arithmetic ones - +, -, * and /.

    They can be overloaded quite easily, as shown here:

    operator fun BigDecimal.plus(a: BigDecimal) = this.add(a)
    operator fun BigDecimal.minus(a: BigDecimal) = this.subtract(a)
    operator fun BigDecimal.times(a: BigDecimal) = this.multiply(a)
    operator fun BigDecimal.div(a: BigDecimal) = this.divide(a)
    
    val sub = a - b
    

    Note this is already taken care of in Kotlin's stdlib.

    The original snippet can now be written like this:

    val a = 5.8.toBigDecimal()
    val b = 5.6.toBigDecimal()
    assertThat(a - b).isEqualTo(0.2.toBigDecimal())
    

    The assertion line can probably be improved further. Possible solutions include AssertJ custom assertions or… extension methods again, in order to accept Double parameters directly.
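
    Here's a minimal sketch of that idea. Note the helper uses a different name - an assumption on my part, since AssertJ's own isEqualTo(Object) member would always take precedence over an extension with the same name:

    import java.math.BigDecimal
    import org.assertj.core.api.AbstractBigDecimalAssert

    // Hypothetical helper comparing the actual BigDecimal to a Double expected value
    fun AbstractBigDecimalAssert<*>.isEqualToDouble(expected: Double) =
        isEqualTo(BigDecimal.valueOf(expected))

    assertThat(a - b).isEqualToDouble(0.2)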

    Conclusion

    Any complex API or library can be made easier to read by using Kotlin extension methods. What are you waiting for?

    Categories: Technical Tags: kotlin, arithmetics
  • Comparing Lombok and Kotlin

    I've known about Lombok for a long time, and I even wrote about how to create a new (at the time) @Delegate annotation. Despite this, and even though I think it's a great library, I've never used it in my projects. The reason is mostly that I consider setting up the Lombok agent across various IDEs and build tools too complex for my own taste in standard development teams.

    Comes Kotlin, which has support for IDEs and build tools right out of the box, plus seamless Java interoperability. So I was wondering whether Lombok would still be relevant. In this article, I'll check whether Kotlin offers the same features as Lombok, and how. My goal is not to downplay Lombok's features in any way, but to perform some fact-checking to let you decide what's the right tool for you.

    The following assumes readers are familiar with Lombok’s features.

    Each entry below lists the Lombok feature, a Java snippet using it, and the Kotlin equivalent, with notes where relevant.
    val
    val e = new ArrayList<String>();
    val e = ArrayList<String>()
    @NonNull
    public void f(@NonNull String b) {
        ...
    }
    fun f(b: String) {
      ...
    }
    Kotlin types can either be nullable, i.e. String?, or not, i.e. String. Such types do not inherit from one another.
    @Cleanup
    @Cleanup
    InputStream in =
        new FileInputStream("foo.txt")
    FileInputStream("foo.txt").use {
      // Do stuff here using 'it'
      // to reference the stream
    }
    • use() is part of Kotlin stdlib
    • Referencing the stream is not even necessary
    @NoArgsConstructor, @RequiredArgsConstructor and @AllArgsConstructor No equivalent, as class and constructor declarations are merged
    @Getter
    public class Foo {
        @Getter
        private String bar;
    }
    class Foo() {
      var bar:String? = null
        private set
    }
    • Generates a private setter instead of no setter
    • Not idiomatic Kotlin (see below)
    public class Foo {
        @Getter
        private String bar;
        public Foo(String bar) {
            this.bar = bar;
        }
    }
    class Foo(val bar:String)
    Takes advantage of merging class and constructor declaration with slightly different semantics
    @Setter
    public class Foo {
        @Setter
        private String bar;
    }
    No equivalent but pretty unusual to just have a setter (does not favor immutability)
    public class Foo {
        @Setter
        private String bar;
        public Foo(String bar) {
            this.bar = bar;
        }
    }
    class Foo(var bar: String)
    @ToString No direct equivalent but included in data classes, see below
    @EqualsAndHashCode
    @Data
    @Data
    public class DataExample {
        private String name;
        private int age;
    }
    data class DataExample(
      var name: String,
      var age: Int)
    @Value
    @Value
    public class ValueExample {
      String name;
      int age;
    }
    data class ValueExample(
      val name: String,
      val age: Int)
    Notice the difference between val and var with the above snippet
    lazy=true
    public class GetterLazyExample {
      @Getter(lazy=true)
      private Double cached = expensive();
    }
    class GetterLazyExample {
      val cached by lazy {
        expensive()
      }
    }
    @Log
    @Log
    public class LogExample {
      public static void main(String... args) {
        log.error("Something's wrong here");
      }
    }
    No direct equivalent, but possible in many different ways
    @SneakyThrows
    @SneakyThrows
    public void run() {
     throw new Throwable();
    }
    fun run() {
      throw Throwable()
    }
    There's no such thing as checked exceptions in Kotlin. Using Java checked exceptions requires no specific handling.
    @Builder
    @Builder
    public class BuilderExample {
      private String name;
      private int age;
    }
    
    BuilderExample b = new BuilderExampleBuilder()
      .name("Foo")
      .age(42)
      .build();
    class BuilderExample(
      val name: String,
      val age: Int)
    
    val b = BuilderExample(
      name = "Foo",
      age = 42
    )
    No direct equivalent, use named arguments instead.
    @Synchronized
    public class SynchronizedExample {
      private final Object readLock = new Object();
      @Synchronized("readLock")
      public void foo() {
        System.out.println("bar");
      }
    }
    It's possible to omit referencing an object in the annotation: Lombok will then create one under the hood (and use it).
    class SynchronizedExample {
      private val readLock = Object()
      fun foo() {
        synchronized(readLock) {
          println("bar")
        }
      }
    }
    synchronized() is part of Kotlin stdlib
    Experimental features
    Again, each entry lists the Lombok feature, a Java snippet using it, and the Kotlin equivalent.
    @ExtensionMethod
    class Extensions {
      public static String toTitleCase(String in) {
        // do something with in and
        return in;
      }
    }
    
    @ExtensionMethod(
      {String.class, Extensions.class}
    )
    public class Foo {
      public String bar() {
        return "bar".toTitleCase();
      }
    }
    fun String.toTitleCase(): String {
      // do something with this and
      return this
    }
    
    class Foo {
      fun bar() = "bar".toTitleCase()
    }
    @Wither No direct equivalent, but data classes provide a copy() method (see the sketch after this table)
    @FieldDefaults
    class Foo(val bar: String = "bar")
    @Delegate Available in many different flavors
    @UtilityClass
    @UtilityClass
    public class UtilityClassExample {
      private final int CONSTANT = 5;
      public int addSomething(int in) {
        return in + CONSTANT;
      }
    }
    val CONSTANT = 5
    fun addSomething(i: Int) = i + CONSTANT
    @Helper
    public class HelperExample {
      int someMethod(int arg1) {
        int localVar = 5;
        @Helper class Helpers {
          int helperMethod(int arg) {
            return arg + localVar;
          }
        }
        return helperMethod(10);
      }
    }
    class HelperExample {
      fun someMethod(arg1: Int): Int {
        val localVar = 5
        fun helperMethod(arg: Int): Int {
          return arg + localVar
        }
        return helperMethod(10)
      }
    }
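
    As referenced in the @Wither row above, a minimal sketch of how a data class's generated copy() method covers the same use-case:

    data class WitherExample(val name: String, val age: Int)

    val w = WitherExample("Foo", 42)
    val older = w.copy(age = 43) // New instance with one property changed, the original is untouched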

    In conclusion:

    • Lombok offers some more features
    • Lombok offers some more configuration options, but most don’t make sense taken separately (e.g. data classes)
    • Lombok and Kotlin have slightly different semantics when a feature is common to both, so double-check if you're a Lombok user willing to migrate

    Of course, this post completely left out Kotlin's original features. That's a subject for future articles.

    Categories: Java Tags: lombok, kotlin
  • Faster Mutation Testing

    As an ardent promoter of Mutation Testing, I sometimes get comments that it's too slow to be of real use. This is always very funny, as the same argument applies to Integration Testing or GUI testing. Yet it's only used against Mutation Testing, even though it costs nothing to set up, as opposed to the former. This will be the subject of another post. In this one, I will provide proposals on how to speed up mutation testing - or more precisely PIT, the Java Mutation Testing reference.

    Setting the bar

    A project reference for this article is required. Let’s use the codec submodule of Netty 4.1.

    First, let's compile the project so as to measure only PIT-related time afterwards. The following should be launched at the root of the Maven project hierarchy:

    mvn -pl codec clean test-compile
    

    The -pl option lets the command be applied to the specified sub-projects only - the codec sub-project in this case. Now just run the tests:

    mvn -pl codec surefire:test
    
    ...
    Results :
    
    Tests run: 340, Failures: 0, Errors: 0, Skipped: 0
    
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 8.954 s
    [INFO] Finished at: 2016-06-13T21:41:10+02:00
    [INFO] Final Memory: 12M/309M
    [INFO] ------------------------------------------------------------------------
    

    For what it's worth, 9 seconds will be the baseline. This is not a really precise measurement, but it's good enough for the scope of this article.

    Now let’s run PIT:

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     org.pitest:pitest-maven:mutationCoverage
    

    Note: PIT flags the above class as failing, even though Surefire has no problem with it

    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 14:14 min
    [INFO] Finished at: 2016-06-13T22:02:48+02:00
    [INFO] Final Memory: 14M/329M
    

    A whopping 14 minutes! Let's try to reduce the time taken.

    Speed vs. relevancy trade-off

    There is no such thing as a free lunch.

    PIT offers a lot of tweaks to improve testing speed. However, most of them imply a huge drop in relevancy: faster means feedback on code quality that is less than complete. As complete feedback is the ultimate goal of Mutation Testing, that's IMHO meaningless.

    Those configuration options are:

    • Use a limited set of mutators
    • Limit scope of target classes
    • Limit number of tests
    • Limit dependency distance

    All the benefits those flags bring are negated by the loss of information.

    Running on multiple cores

    By default, PIT uses a single core, even though most personal computers - not to mention servers - have many more available.

    The easiest way to run faster is to use more cores. If it takes X minutes to run PIT on a single core, and if Y additional cores are 100% available, then it should only take X / (Y + 1) minutes to run on the first and additional cores.

    The number of threads is governed by the threads Java system property when launching Maven.

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     -Dthreads=4 org.pitest:pitest-maven:mutationCoverage
    

    This yields the following results. The time is only halved instead of divided by 4, which probably means that those additional cores were already performing some tasks during the run.

    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 07:30 min
    [INFO] Finished at: 2016-06-13T22:25:38+02:00
    [INFO] Final Memory: 15M/318M
    

    Still, a 50% cut is good enough for such a small configuration effort.

    Incremental analysis

    The rationale behind incremental analysis is that unchanged tests running against unchanged classes will produce the same outcome as the last run.

    Hence, PIT will create a hash for every test and production class. If the hashes are unchanged, the mutations won't be run again and PIT will reuse the previous result. This just requires configuring where to store those hashes.

    Let's run PIT with incremental analysis. The historyInputFile system property is the file from which PIT will read hashes, historyOutputFile the file to which it will write them. Obviously, they should point to the same file (anyone care to enlighten me as to why they can be different?). Anyway:

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     -DhistoryInputFile=~/.fastermutationtesting -DhistoryOutputFile=~/.fastermutationtesting \
     -Dthreads=4 org.pitest:pitest-maven:mutationCoverage
    

    That produces:

    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 07:05 min
    [INFO] Finished at: 2016-06-13T23:04:59+02:00
    [INFO] Final Memory: 15M/325M
    

    That's about the time of the previous run… It didn't run any faster! What could have happened? Well, that was the first run, so the hashes had not been created yet. If PIT is run a second time with the exact same command line, the output is now:

    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 17.806 s
    [INFO] Finished at: 2016-06-13T23:12:43+02:00
    [INFO] Final Memory: 20M/575M
    

    That's way better! Just the time to start up the engine, create the hashes and compare them.

    Using SCM

    An alternative to the above incremental analysis is to delegate change checks to the VCS. However, that requires the scm section to be adequately configured in the POM, and the usage of the scmMutationCoverage goal in place of the mutationCoverage one.

    Note this is the reason why Maven should be launched at the root of the project hierarchy, to benefit from the SCM configuration in the root POM.
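
    For reference, a minimal sketch of such an scm section - the URLs are illustrative, not necessarily Netty's actual configuration:

    <scm>
      <connection>scm:git:git://github.com/netty/netty.git</connection>
      <developerConnection>scm:git:ssh://git@github.com/netty/netty.git</developerConnection>
      <url>https://github.com/netty/netty</url>
    </scm>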

    mvn -pl codec -DexcludedClasses=io.netty.handler.codec.compression.LzfDecoderTest \
     -Dthreads=4 org.pitest:pitest-maven:scmMutationCoverage
    

    The output is the following:

    [INFO] No locally modified files found - nothing to mutation test
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESS
    [INFO] ------------------------------------------------------------------------
    [INFO] Total time: 7.587 s
    [INFO] Finished at: 2016-06-14T22:11:29+02:00
    [INFO] Final Memory: 14M/211M
    

    This is even faster than incremental analysis, probably because there's no hash comparison involved.

    Changing files without committing will correctly send the related mutants to be tested. Hence, this goal should be used only by developers on their local repository, to check their changes before committing.

    Conclusion

    Now that an authoritative figure has positively written about Mutation Testing, more and more people will be willing to use it. In that case, the right configuration might make a difference between wide adoption and complete rejection.

    Categories: Java Tags: mutation testing, quality, performance
  • Smart logging in Java 8 and Kotlin

    Logging is not a sexy subject, but it's important nonetheless. In the Java world, logging frameworks range from Log4J to SLF4J via Commons Logging and JDK logging (let's exclude Log4J 2 for the time being). Though different in architecture and features, all of their APIs look the same. The logger has a method for each log level, e.g.:

    • debug(String message)
    • info(String message)
    • error(String message)
    • etc.

    Levels are organized into a hierarchy. Once the framework is configured at a certain level, only messages logged at the same or a higher priority will be written. For example, with the level set to INFO, debug messages are discarded while error messages are written.

    So far, so good. The problem comes when messages contain variables so that they must be concatenated.

    LOGGER.debug("Customer " + customer.getId() + " has just ordered " + item.getName() + " (" + item.getId() + ")");
    

    String concatenation has a definite performance cost in Java, and whatever the configured log level, it will occur.

    For this reason, modern logging frameworks such as SLF4J provide improved signatures accepting a message format typed as String and variables as a vararg of Objects. In that case, concatenation occurs only when the logger effectively writes.

    LOGGER.debug("Customer {} has just ordered {} ({})", customer.getId(), item.getName(), item.getId());
    

    Sometimes, however, variables are not readily available but have to be computed explicitly solely for the purpose of logging.

    LOGGER.debug("Customer {} has just ordered {}", customer.getId(), order.expensiveComputation());
    

    SLF4J doesn't help there, as the method is evaluated even if the logger later decides not to write because the framework is configured with a higher priority. In that case, it's therefore advised to wrap the logger call inside a relevant priority check.

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Customer {} has just ordered {}", customer.getId(), order.expensiveComputation());
    }
    

    That has to be done for every expensive method call, so it requires strong coding discipline and reviews to make sure the wrapping occurs when it's relevant - but only then. Besides, it decreases readability and thus maintainability. To achieve the same result automatically without those cons, one could use AOP, at the cost of extra complexity.

    Comes Java 8 and the Supplier<T> interface: a Supplier<String> produces a String on demand, so that a method can be created like so:

    public void debug(Supplier<String> s) {
       if (LOGGER.isDebugEnabled()) {
            LOGGER.debug(s.get());
       }
    }
    

    In that case, the get() method is called only when the wrapping condition evaluates to true.

    Using this method is as simple as that:

    debug(() -> "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation());
    

    Great! But where to put this improved debug() method?

    • In the class itself: and duplicate it in every class. Really?
    • In a utility class: one should add the LOGGER as the first parameter. Did you say cumbersome?
    • In the logger: one can create a wrapper and keep a standard logger as a delegate but the factory is final (at least in SLF4J) and has a lot of private methods.
    • In an aspect: back to square one…

    That's the step where things stop being so nice in the Java realm.

    What about Kotlin? It comes with extension functions (and properties). This will probably be the subject of a future post, since you should adopt Kotlin if only for this feature. Suffice to say that Kotlin makes it look like one can add state and behavior to already defined types.

    So debug() can be defined in an aptly named file:

    fun Logger.debug(s: () -> String) {
        if (isDebugEnabled) debug(s.invoke())
    }
    

    And calling it really feels like calling a standard logger; only a lambda is passed:

    LOGGER.debug { "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation() }
    

    As a finishing touch, let's make the function inlined. By doing that, the compiler will effectively replace every call site with the method body in the bytecode. We keep the readable syntax and avoid the overhead of creating a lambda object at runtime:

    inline fun Logger.debug(s: () -> String) {
        if (isDebugEnabled) debug(s.invoke())
    }
    

    Note that it can still be called in Java, albeit with a not-so-nice syntax:

    Slf4KUtilsKt.debug(LOGGER, () -> "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation());
    

    Also note that Log4J 2 already implements this feature out-of-the-box.

    At this point, it's simple enough to copy-paste the above snippet when required. But developers are lazy by nature, so I created a full-fledged wrapper around SLF4J methods. The source code is available on GitHub and the binary artifact on Bintray, so you only need the following dependency to use it:

    <dependency>
      <groupId>ch.frankel.log4k</groupId>
      <artifactId>slf4k-api</artifactId>
      <version>1.0.0</version>
    </dependency>
    
    Categories: Development Tags: kotlin, logging, performance
  • Encapsulation: I don't think it means what you think it means

    My post about immutability provoked some stir and received plenty of comments, from the daunting to the interesting, both on reddit and here.

    Comment types

    They can be more or less divided into those categories:

    • Let's not consider anything and not budge an inch - with no valid argument besides "it's terrible"
    • One thread wondered about the point of code review, to catch bugs or to share knowledge
    • Rational counter-arguments that I’ll be happy to debate in a future post
    • “It breaks encapsulation!” This one is the point I’d like to address in this post.

    Encapsulation, really?

    I’ve already written about encapsulation but if the wood is very hard, I guess it’s natural to hammer down the nail several times.

    When I was younger and learned OOP, I was told about its characteristics:

    1. Inheritance
    2. Polymorphism
    3. Encapsulation

    This is the definition found on Wikipedia:

    Encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
    • A language mechanism for restricting direct access to some of the object's components.
    • A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
    -- Wikipedia

    In short, encapsulation means no direct access to an object's state, but only through its methods. In Java, that directly translated into the JavaBeans conventions, with private properties and public accessors - getters and setters. That is the current sad state we are plagued with, and what many refer to when talking about encapsulation.

    But this kind of pattern provides no encapsulation at all! Don't believe me? Check this snippet:

    public class Person {
    
        private Date birthdate = new Date();
    
        public Date getBirthdate() {
            return birthdate;
        }
    }
    

    Given that there’s no setter, it shouldn’t be possible to change the date inside a Person instance. But it is:

    Person person = new Person();
    Date date = person.getBirthdate();
    date.setTime(0L);
    

    Ouch! State was not so well-encapsulated after all…

    It all boils down to one tiny little difference: we want to give access to the value of the birthdate but we happily return the reference to the birthdate field which holds the value. Let’s change that to separate the value itself from the reference:

    public class Person {
    
        private Date birthdate = new Date();
    
        public Date getBirthdate() {
            return new Date(birthdate.getTime());
        }
    }
    

    By creating a new Date instance that shares nothing with the original reference, real encapsulation has been achieved. Now getBirthdate() is safe to call.

    Note that classes that are immutable by nature - in Java, primitives, String and those developed like so - are completely safe to share. Thus, it's perfectly acceptable to make fields of those types public and forget about getters.

    Note that injecting references, e.g. in the constructor, entails the exact same problem and should be treated in the same way.

    public class Person {
    
        private Date birthdate;
    
        public Person(Date birthdate) {
            this.birthdate = new Date(birthdate.getTime());
        }
    
        public Date getBirthdate() {
            return new Date(birthdate.getTime());
        }
    }
    

    The problem is that most people who religiously invoke encapsulation blissfully share their field references to the outside world.

    Conclusions

    There are a couple of conclusions here:

    • If you have mutable fields, simple getters such as those generated by IDEs provide no encapsulation.
    • True encapsulation can only be achieved with mutable fields if copies of the fields are returned, and not the fields themselves.
    • Once you have immutable fields, accessing them through a getter or making the field public and final is exactly the same.

    Note: kudos to you if you understand the above meme reference.

    Categories: Development Tags: design, object oriented programming
  • Rolling dice in Kotlin

    A little more than 2 years ago, I wrote a post on how to create a die-rolling API in Scala. As I'm more and more interested in Kotlin, let's do the same in Kotlin.

    At the root of the hierarchy lies the Rollable interface:

    interface Rollable<T> {
        fun roll(): T
    }
    

    The base class is the Die:

    open class Die(val sides: Int): Rollable<Int> {
    
        private val random = SecureRandom()
    
        // nextInt(sides) yields a value between 0 and sides - 1, so shift it by 1
        override fun roll() = random.nextInt(sides) + 1
    }
    

    Now let’s create some objects:

    object d2: Die(2)
    object d3: Die(3)
    object d4: Die(4)
    object d6: Die(6)
    object d10: Die(10)
    object d12: Die(12)
    object d20: Die(20)
    
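    A hypothetical usage, rolling a d20 and adding a modifier:

    val attack = d20.roll() + 2 // A value between 3 and 22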

    Finally, in order to make code using Die instances testable, let’s change the class to inject the Random instead:

    open class Die(val sides: Int, private val random: Random = SecureRandom()): Rollable<Int> {
        override fun roll() = random.nextInt(sides) + 1
    }
    

    Note that the random property is private, so that only the class itself can use it - there won’t even be a getter.
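
    A minimal sketch of what this buys us in tests, assuming a seeded java.util.Random for deterministic rolls:

    import java.util.Random

    val loadedDie = Die(6, Random(42L))
    val roll = loadedDie.roll() // Same sequence on every run for a given seed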

    The coolest thing about that is that I hacked the above code in 15 minutes on the plane. I love Kotlin :-)

    Categories: Development Tags: api, kotlin
  • Software labels translation is not so easy

    Some developers have hardly ever touched software labels translation; some do it on a day-to-day basis. It sure helps to work in a country with more than one language, official or de facto.

    Even in the first case, it's considered good practice to externalize labels in properties files. As for the second case, the languages involved are in general related.

    In Java, the whole label translation mechanism is handled through a hierarchy of properties files. At the top of the hierarchy lies the root file, at the second level language-specific files, and finally at the bottom country-specific files (let's forget about lower levels, since I haven't seen them used in 15 years). The translation for a message string is searched for along a specific locale, starting from the most specific level - country - up to the root. If the translation is found at any level, the resolution mechanism stops there and the label is returned.
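
    A minimal sketch of this resolution, assuming bundle files named messages*.properties on the classpath:

    import java.util.Locale
    import java.util.ResourceBundle

    fun label(key: String, locale: Locale): String =
        ResourceBundle.getBundle("messages", locale).getString(key)

    // Tries messages_fr_CH.properties, then messages_fr.properties,
    // then the root messages.properties
    val text = label("result.found.none", Locale("fr", "CH"))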

    As a simple example, let's take a common use-case: displaying the number of items on a list. I'd probably need 3 labels:

    • No item found
    • One item found
    • Multiple items found

    This is probably the resulting messages.properties file:

    result.found.none=No item found
    result.found.one=One item found
    result.found.multiple={0} items found
    

    Things get interesting when the customer wants to translate the software, after the initial release, into an unrelated language. Let's not go as far as pictograph-based languages such as Mandarin Chinese, or RTL languages such as Arabic, but use Russian, a language I'm trying to learn (emphasis on try).

    Russian is a language that has cases (https://en.wikipedia.org/wiki/Case), like Latin.

    Case is a grammatical category whose value reflects the grammatical function performed by a noun or pronoun in a phrase, clause, or sentence. In some languages, nouns, pronouns, and their modifiers take different inflected forms depending on what case they are in. […]

    Commonly encountered cases include nominative, accusative, dative, and genitive. A role that one of these languages marks by case will often be marked in English using a preposition.
    -- Wikipedia

    So, what's the fuss about? Just translate the file and be done with it! Well, Russian is an interesting language when it comes to counting. With one, you'd use the singular; from 2 to 4, you'd use the plural and the nominative case; but starting from 5, you'd use the plural and the genitive case - indicating quantity.

    Now keys will look like the following (the messages are not very important by themselves):

    • result.found.none
    • result.found.one
    • result.found.twotofour
    • result.found.five

    Is it OK to translate now? Not quite. Russian is derived from Old Slavic, and Old Slavic had three grammatical numbers: singular, plural and dual. Russian has only singular and plural, but a remnant of the dual survives for the feminine gender of "two": in that case, you'd use две instead of два.

    This requires the following keys:

    • result.found.none
    • result.found.one
    • result.found.two.feminine
    • result.found.two.notfeminine
    • result.found.threetofour
    • result.found.five

    And this covers only the things I know. I'm afraid there might be more rules that I don't know about.

    There are a couple of lessons to learn here:

    1. Translations are not straightforward, especially when the target is a language with different roots.
    2. i18n is much larger and harder than just translations. Think about dates: should month or day come first? And l10n is even larger and harder than i18n.
    3. The cost of translation is not null, and will probably be higher than expected. Estimates are hard, and wrong most of the time.
    4. Never ever assume anything. Implicit is bad in software projects…

    Categories: Java Tags: i18n
  • Immutable data structures in Java

    Before being software developers, we are people - and thus creatures of habits. It’s hard for someone to change one’s own habits, it’s harder for someone to change someone else’s habits - and for some of us, it’s even harder.

    This week, during a code review, I stumbled upon this kind of structure:

    public class MyStructure {
    
        private String myProp1;
        private String myProp2;
        // A bunch of other String properties
    
        public MyStructure(String myProp1, String myProp2 /* All other properties here */) {
            this.myProp1 = myProp1;
            this.myProp2 = myProp2;
            // All other properties set there
        }
    
        public String getMyProp1() { ... }
        public String getMyProp2() { ... }
        // All other getters
    
        public void setMyProp1(String myProp1) { ... }
        public void setMyProp2(String myProp2) { ... }
        // All other setters
    }
    
    

    Note: it looks like a JavaBean, but it's not, because there's no no-argument constructor.

    Looking at the code, I see that the setters are never used in our code, making this a nice use-case for an immutable data structure - and saving a good number of lines of code:

    public class MyStructure {
    
        private final String myProp1;
        private final String myProp2;
        // A bunch of other String properties
    
        public MyStructure(String myProp1, String myProp2 /* All other properties here */) {
            this.myProp1 = myProp1;
            this.myProp2 = myProp2;
            // All other properties set there
        }
    
        public String getMyProp1() { ... }
        public String getMyProp2() { ... }
        // All other getters
    }
    
    

    At this point, one realizes that Strings are themselves immutable, which leads to the second proposal, again saving more lines of code:

    public class MyStructure {
    
        public final String myProp1;
        public final String myProp2;
        // A bunch of other String properties
    
        public MyStructure(String myProp1, String myProp2 /* All other properties here */) {
            this.myProp1 = myProp1;
            this.myProp2 = myProp2;
            // All other properties set there
        }
    }
    
    

    Given that the attributes are final and that Java Strings are immutable, the class is still safe against unwanted changes. Note that this works only because Strings are immutable by definition in Java. With a Date property, it wouldn't work, as Dates are mutable.

    The same can be done with stateless services whose embedded services need to be accessed from child classes. There's no need to have a getter:

    public class MyService {
    
        // Can be accessed from children classes
        protected final EmbeddedService anotherService;
    
        public MyService(EmbeddedService anotherService) {
            this.anotherService = anotherService;
        }
    }
    
    

    Note this approach is 100% compatible with Dependency Injection, either Spring or CDI.

    Now, you cannot imagine the amount of back-and-forth comments this simple review caused. Why? Because even if it makes sense from a coding point of view, it's completely different from what we usually do.

    In that case, laziness and IDEs don't serve us well. The latter make it too easy to create accessors. I'm pretty sure that if we had to code getters and setters by hand, the above proposals would find more favor.

    This post could easily have been titled “Don’t let habits get the best of you”. The lesson here is to regularly challenge how you code, even for simple easy stuff. There might be better alternatives after all.

    Categories: Java Tags: good practice