Posts Tagged ‘Kotlin’
  • A SonarQube plugin for Kotlin - Creating the plugin proper

    SonarQube Continuous Inspection logo

This is the 3rd post in a series about creating a SonarQube plugin for the Kotlin language:

    • The first post was about creating the parsing code itself.
    • The 2nd post detailed how to use the parsing code to check for two rules.

    In this final post, we will be creating the plugin proper using the code of the 2 previous posts.

    The Sonar model

The Sonar model is based on the following abstractions:

    • Plugin: the entry-point for plugins to inject extensions into SonarQube. A plugin points to the other abstraction instances to make the SonarQube platform load them.
    • Language: pretty self-explanatory. Represents a language - Java, C#, Kotlin, etc.
    • ProfileDefinition: defines a profile which is automatically registered during SonarQube startup. A profile is a mutable set of fully-configured rules. While not strictly necessary, having a Sonar profile pre-registered allows users to analyze their code without further configuration. Every language plugin offered by Sonar has at least one profile attached.
    • RulesDefinition: defines some coding rules of the same repository - an immutable set of rule definitions. While a rule definition defines available parameters, default severity, etc., the rule (from the profile) sets the exact values for parameters, a specific severity, etc. In short, the rule implements the rule definition.
    • Sensor: the entry-point where the magic happens. A sensor is invoked once for each module of a project, starting from leaf modules. The sensor can parse a flat file, connect to a web server, etc. Sensors are used to add measures and issues at file level.

    Starting to code the plugin

    Every abstraction above needs a concrete subclass. Note that the API classes themselves are all fairly decoupled. It’s the role of the Plugin child class to bind them together.

class KotlinPlugin : Plugin {
    override fun define(context: Context) {
        // binds the language, quality profile, rules definition and sensor extensions together
    }
}

Most of the code is boilerplate, except for the ANTLR-related parts.

    Wiring the ANTLR parsing code

On one hand, the parsing code is based on generated listeners. On the other hand, the sensor is the entry-point into SonarQube's analysis. A bridge is needed between the two.

    In the first article, we used an existing grammar for Kotlin to generate parsing code. SonarQube provides its own lexer/parser generating tool (SonarSource Language Recognizer). A sizeable part of the plugin API is based on it. Describing the grammar is no small feat for any real-life language, so I preferred to design my own adapter code instead.

• AbstractKotlinParserListener: a subclass of the generated ANTLR KotlinParserBaseListener. It has an attribute to store violations, and a method to add such a violation.
    • Violation: contains only the line number, as the rest of the required information will be stored in a KotlinCheck instance.
    • KotlinCheck: an abstract class that wraps an AbstractKotlinParserListener and defines what constitutes a violation. It handles the ANTLR boilerplate code itself.
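As a rough sketch, the bridge can be reduced to the following (the exact members beyond the violation attribute and the add method are assumptions; in the real plugin AbstractKotlinParserListener extends the generated KotlinParserBaseListener, which is omitted here to keep the snippet self-contained):

```kotlin
// Hypothetical sketch of the bridge classes described above
data class Violation(val lineNumber: Int)

abstract class AbstractKotlinParserListener {
    // attribute storing the violations found while walking the parse tree
    val violations = mutableListOf<Violation>()

    // method to add such a violation
    fun addViolation(violation: Violation) {
        violations += violation
    }
}
```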

    This can be represented as the following:

    The sensor proper

    The general pseudo-code should look something akin to:

    FOR EACH source file
        FOR EACH rule
            Check for violation of the rule
            FOR EACH violation
                Call the SonarQube REST API to create a violation in the datastore

    This translates as:

class KotlinSensor(private val fs: FileSystem) : Sensor {

        val sources: Iterable<InputFile>
            get() = fs.inputFiles(MAIN)

        override fun execute(context: SensorContext) {
            sources.forEach { inputFile: InputFile ->
                KotlinChecks.checks.forEach { check ->
                    val violations = check.violations(inputFile.file())
                    violations.forEach { (lineNumber) ->
                        with(context.newIssue().forRule(check.ruleKey())) {
                            val location = newLocation().apply {
                                on(inputFile)                        // the file carrying the issue
                                at(inputFile.selectLine(lineNumber)) // the offending line
                                message(check.message)               // assumption: the check exposes its message
                            }
                            at(location).save()
                        }
                    }
                }
            }
        }
    }

    Finally, the run

    Let’s create a dummy Maven project with 2 classes, Test1 and Test2 in one Test.kt file, with the same code as last week. Running mvn sonar:sonar yields the following output:

    Et voilà, our first SonarQube plugin for Kotlin, checking for our custom-developed violations.

Of course, it has (a lot of) room for improvement:

    • Rules need to be activated through the GUI - I couldn’t find how to do it programmatically
• Adding new rules requires updating the plugin: rules contributed by 3rd-party plugins are not added automatically, as is the case for standard SonarQube plugins.
    • So far, code located outside of classes seems not to be parsed.
    • The walk through the parse tree is executed for every check. An obvious performance gain would be to walk only once and do every check from there.
• A lot of the above improvements could be achieved by replacing ANTLR’s grammar with Sonar’s internal SSLR
    • No tests…

    That still makes the project a nice starting point for a full-fledged Kotlin plugin. Pull requests are welcome!

Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin, ANTLR
  • A SonarQube plugin for Kotlin - Analyzing with ANTLR

    SonarQube Continuous Inspection logo

    Last week, we used ANTLR to generate a library to be able to analyze Kotlin code. It’s time to use the generated API to check for specific patterns.

    API overview

    Let’s start by having a look at the generated API:

    • KotlinLexer: Executes lexical analysis.
    • KotlinParser: Wraps classes representing all Kotlin tokens, and handles parsing errors.
    • KotlinParserVisitor: Contract for implementing the Visitor pattern on Kotlin code. KotlinParserBaseVisitor is its empty implementation, to ease the creation of subclasses.
    • KotlinParserListener: Contract for callback-related code when visiting Kotlin code, with KotlinParserBaseListener its empty implementation.

Class diagrams are not the best starting point for writing code. The following snippet is a very crude analysis implementation. I’ll be using Kotlin, but any JVM language interoperable with Java could be used:

{% highlight kotlin linenos %}
val stream = CharStreams.fromString("fun main(args : Array<String>) {}")
val lexer = KotlinLexer(stream)
val tokens = CommonTokenStream(lexer)
val parser = KotlinParser(tokens)
val context = parser.kotlinFile()
ParseTreeWalker().apply {
    walk(object : KotlinParserBaseListener() {
        override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
            println(ctx.SimpleName().text)
        }
    }, context)
}
{% endhighlight %}

    Here’s the explanation:

1. Create a CharStream to feed the lexer on the next line. The CharStreams class offers plenty of static fromXXX() methods, each accepting a different type (String, InputStream, etc.)
    2. Instantiate the lexer, with the stream
    3. Instantiate a token stream over the lexer. The class provides streaming capabilities over the lexer.
    4. Instantiate the parser, with the token stream
    5. Define the entry point into the code. In that case, it’s a Kotlin file - and probably will be for the plugin.
    6. Create the overall walker that will visit each node in turn
    7. Start the visiting process by calling walk and passing the desired behavior as an object
    8. Override the desired function. Here, it will be invoked every time a function node is entered
    9. Do whatever is desired e.g. print the function name

Obviously, lines 1 to 7 are just boilerplate to wire all components together. The behavior that needs to be implemented should replace lines 8 and 9.

    First simple check

In Kotlin, if a function returns Unit - i.e. nothing - then explicitly declaring its return type is optional. Checking that there’s no such explicit return type makes a great first rule. The following snippets, both valid Kotlin code, are equivalent - one with an explicit return type and the other without:

fun hello1(): Unit { /* ... */ }
    fun hello2() { /* ... */ }

    Let’s use grun to graphically display the parse tree (grun was explained in the previous post). It yields the following:

    As can be seen, the snippet with an explicit return type has a type branch under functionDeclaration. This is confirmed by the snippet from the KotlinParser ANTLR grammar file:

functionDeclaration
        : modifiers 'fun' typeParameters?
            (type '.' | annotations)?
            typeParameters? valueParameters (':' type)?

    The rule should check that if such a return type exists, then it shouldn’t be Unit. Let’s update the above code with the desired effect:

{% highlight kotlin linenos %}
ParseTreeWalker().apply {
    walk(object : KotlinParserBaseListener() {
        override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
            if (ctx.type().isNotEmpty()) {
                val typeContext = ctx.type(0)
                with(typeContext.typeDescriptor().userType().simpleUserType()) {
                    val typeName = this[0].SimpleName()
                    if (typeName.symbol.text == "Unit") {
                        println("Found Unit as explicit return type " +
                                "in function ${ctx.SimpleName()} at line ${typeName.symbol.line}")
                    }
                }
            }
        }
    }, context)
}
{% endhighlight %}

    Here’s the explanation:

    • Line 4: Check there’s an explicit return type, whatever it is
    • Line 5: Strangely enough, the grammar allows for a multi-valued return type. Just take the first one.
    • Line 6: Follow the parse tree up to the final type name - refer to the above parse tree screenshot for a graphical representation of the path.
    • Line 8: Check that the return type is Unit
    • Line 9: Prints a message in the console. In the next step, we will call the SonarQube API there.

    Running the above code correctly yields the following output:

    Found Unit as explicit return type in function hello1 at line 1

    A more advanced check

    In Kotlin, the following snippets are all equivalent:

fun hello1(name: String): String {
        return "Hello $name"
    }

    fun hello2(name: String): String = "Hello $name"

    fun hello3(name: String) = "Hello $name"

Note that in the last case, the return type can be inferred by the compiler and omitted by the developer. That would make a good check: in the case of an expression body, the return type should be omitted. The same technique as above can be used:

    1. Display the parse tree from the snippet using grun:

    2. Check for differences. Obviously:
      • Functions that do not have an explicit return type miss a type node in the functionDeclaration tree, as above
      • Functions with an expression body have a functionBody whose first child is = and whose second child is an expression
3. Refer to the initial grammar, to make sure all cases are covered:
        functionBody
            : block
            | '=' expression
4. Code!

    {% highlight kotlin linenos %}
    ParseTreeWalker().apply {
        walk(object : KotlinParserBaseListener() {
            override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
                val bodyChildren = ctx.functionBody().children
                if (bodyChildren.size > 1 && bodyChildren[0] is TerminalNode
                        && bodyChildren[0].text == "=" && ctx.type().isNotEmpty()) {
                    val firstChild = bodyChildren[0] as TerminalNode
                    println("Found explicit return type for expression body " +
                            "in function ${ctx.SimpleName()} at line ${firstChild.symbol.line}")
                }
            }
        }, context)
    }
    {% endhighlight %}

    The code is pretty self-explanatory and yields the following:

    Found explicit return type for expression body in function hello2 at line 5
Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin
  • A SonarQube plugin for Kotlin - Paving the way

    SonarQube Continuous Inspection logo

Since I started my journey into Kotlin, I wanted to use the same libraries and tools I use in Java. For libraries - Spring Boot, Mockito, etc. - it’s straightforward, as Kotlin is 100% interoperable with Java. For tools, well, it depends. For example, Jenkins works flawlessly, while SonarQube lacks a dedicated plugin. The SonarSource team has limited resources: Kotlin, though on the rise - and even more so since Google I/O '17 - is not in their pipeline. This post series is about creating such a plugin, and this first post is about parsing Kotlin code.


    In the realm of code parsing, ANTLR is a clear leader in the JVM world.

    ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It’s widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build and walk parse trees.

    Designing the grammar

ANTLR is able to generate parsing code for any language thanks to a dedicated grammar file. However, creating such a grammar from scratch is no trivial task for any real-life language. Fortunately, thanks to the power of the community, a grammar for Kotlin already exists on GitHub.

    With this existing grammar, ANTLR is able to generate Java parsing code to be used by the SonarQube plugin. The steps are the following:

• Clone the GitHub repository
      git clone
    • By default, classes will be generated under the root package, which is discouraged. To change that default:
      • Create a src/main/antlr4/<fully>/<qualified>/<package> folder such as src/main/antlr4/ch/frankel/sonarqube/kotlin/api
      • Move the g4 files there
      • In the POM, remove the sourceDirectory and includes bits from the antlr4-maven-plugin configuration to use the default
    • Build and install the JAR in the local Maven repo
      cd grammars-v4/kotlin
      mvn install

    This should generate a KotlinLexer and a KotlinParser class, as well as several related classes in target/classes. As Maven goes, it also packages them in a JAR named kotlin-1.0-SNAPSHOT.jar in the target folder - and in the local Maven repo as well.

    Testing the parsing code

    To test the parsing code, one can use the grun command. It’s an alias for the following:

java -Xmx500M -cp "<path/to/antlr/complete/>.jar:$CLASSPATH" org.antlr.v4.gui.TestRig

    Create the alias manually or install the antlr package via Homebrew on OSX.
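For reference, a manual definition could look like the following (the jar path and version are assumptions; adapt them to your installation):

```shell
# in e.g. ~/.bash_profile - the path/version of the ANTLR jar are assumptions
export CLASSPATH=".:/usr/local/lib/antlr-4.7-complete.jar:$CLASSPATH"
alias grun='java -Xmx500M -cp "/usr/local/lib/antlr-4.7-complete.jar:$CLASSPATH" org.antlr.v4.gui.TestRig'
```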

With grun, Kotlin code can be parsed and then displayed in different ways, textual and graphical. The following expects an input in the console:

    cd target/classes
    grun Kotlin kotlinFile -tree

    After having typed valid Kotlin code, it yields its parse tree in text. By replacing the -tree option with the -gui option, it displays the tree graphically instead. For example, the following tree comes from this snippet:

fun main(args : Array<String>) {
        val firstName : String = "Adam"
        val name : String? = firstName
    }

    In order for the JAR to be used later in the SonarQube plugin, it has been deployed on Bintray. In the next post, we will be doing proper code analysis to check for violations.

Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin, ANTLR
  • Kotlin for front-end developers

    Kotlin logo

Published on April 9, 2017.

I assume that readers are already familiar with the Kotlin language. As advertised on its site:

"Statically typed programming language for the JVM, Android and the browser. 100% interoperable with Java™" - headline from the Kotlin website

Kotlin first penetrated the Android ecosystem and has seen huge adoption there. There’s also a growing trend on the JVM, via Spring Boot. Since its latest 1.1 release, Kotlin also offers a production-grade Kotlin-to-JavaScript transpiler. This post is dedicated to the latter.

IMHO, the biggest issue regarding the process of transpiling Kotlin to JavaScript is that the documentation is only aimed at server-side build tools such as Maven and Gradle - or worse, the IntelliJ IDEA IDE. Those work very well for backend-driven projects or prototypes, but are no incentive to front-end developers whose bread and butter are npm, Grunt/Gulp, yarn, etc.

Let’s fix that by creating a simple Google Maps page using Kotlin.

    == Managing the build file


    This section assumes:

    • The Grunt command-line has already been installed, via npm install -g grunt-cli
    • The kotlin package has been installed and the kotlin compiler is on the path. I personally use Homebrew - brew install kotlin

    The build file looks like the following:

[source,javascript]
    .Gruntfile.js
    ----
    module.exports = function (grunt) {
        grunt.initConfig({
            clean: 'dist/**',
            exec: {
                cmd: 'kotlinc-js src -output dist/script.js' // <1>
            },
            copy: { // <2>
                files: {
                    expand: true,
                    flatten: true,
                    src: ['src/**/*.html', 'src/**/*.css'],
                    dest: 'dist'
                }
            }
        });
        grunt.registerTask('default', ['exec', 'copy']);
    };
    ----

<1> Transpiles all Kotlin files found in the src folder to a single JavaScript file
    <2> Copies CSS and HTML files to the dist folder
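Assuming the grunt-exec, grunt-contrib-clean and grunt-contrib-copy plugins are declared in package.json (an assumption, as that file isn’t shown), a build then boils down to:

```shell
npm install   # fetches the Grunt plugins declared in package.json
grunt         # runs the default task: transpile with kotlinc-js, then copy assets to dist/
```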

    == Bridging Google Maps API in Kotlin

    The following snippet creates a Map object using Google Maps API which will be displayed on a specific div element:


[source,javascript]
    ----
    var element = document.getElementById('map');
    new google.maps.Map(element, {
        center: { lat: 46.2050836, lng: 6.1090691 },
        zoom: 8
    });
    ----

Like in TypeScript, there must be a thin Kotlin adapter to bridge the original JavaScript API. Unlike in TypeScript, there’s no existing repository of such adapters. The following is a naive first draft:


[source,kotlin]
    ----
    external class Map(element: Element?, options: Json?)
    ----

    NOTE: The external keyword is used to tell the transpiler the function body is implemented in another file - or library in that case.

The first issue comes from a name collision: Map is an existing Kotlin class and is imported by default. Fortunately, the @JsName annotation allows translating the name at transpile time.

[source,kotlin]
    .gmaps.kt
    ----
    @JsName("Map")
    external class GoogleMap(element: Element?, options: Json?)
    ----

    The second issue occurs because the original API is in a namespace: the object is not Map, but google.maps.Map. The previous annotation doesn’t allow for dots, but a combination of other annotations can do the trick:

[source,kotlin]
    ./google/maps/gmaps.kt
    ----
    @JsModule("google")
    @JsQualifier("maps")
    @JsName("Map")
    external class GoogleMap(element: Element?, options: Json?)
    ----

    This still doesn’t work - it doesn’t even compile, as @JsQualifier cannot be applied to a class. The final working code is:

[source,kotlin]
    ./google/maps/gmaps.kt
    ----
    @file:JsModule("google")
    @file:JsQualifier("maps")
    @file:JsNonModule

    package google.maps

    @JsName("Map")
    external class GoogleMap(element: Element?, options: Json?)
    ----

    == Calling Google Maps

Calling the above code is quite straightforward, but note that the second parameter of the constructor is of type Json. That is quite different from the strongly-typed code that was the point of using Kotlin in the first place. To address that, let’s create real types:


[source,kotlin]
    ----
    internal class MapOptions(val center: LatitudeLongitude, val zoom: Byte)
    internal class LatitudeLongitude(val latitude: Double, val longitude: Double)
    ----

And with Kotlin’s extension functions - and the out-of-the-box json() function - let’s make them able to serialize themselves to JSON:


[source,kotlin]
    ----
    internal fun LatitudeLongitude.toJson() = json("lat" to latitude, "lng" to longitude)
    internal fun MapOptions.toJson() = json("center" to center.toJson(), "zoom" to zoom)
    ----

    This makes it possible to write the following:


[source,kotlin]
    ----
    fun initMap() {
        val div = document.getElementById("map")
        val latLng = LatitudeLongitude(latitude = -34.397, longitude = 150.644)
        val options = MapOptions(center = latLng, zoom = 8)
        GoogleMap(element = div, options = options.toJson())
    }
    ----

    == Refinements

We could stop at this point, with the feeling of having achieved something. But Kotlin allows much more.

The low-hanging fruit would be to move the JSON serialization into the Map constructor:


[source,kotlin]
    ----
    internal class KotlinGoogleMap(element: Element?, options: MapOptions) : GoogleMap(element, options.toJson())

    KotlinGoogleMap(element = div, options = options)
    ----

    == Further refinements

The “domain” is well suited to a DSL. Let’s update the “API”:


[source,kotlin]
    ----
    external open class GoogleMap(element: Element?) {
        fun setOptions(options: Json)
    }

    internal class MapOptions {
        lateinit var center: LatitudeLongitude
        var zoom: Byte = 1
        fun center(init: LatitudeLongitude.() -> Unit) {
            center = LatitudeLongitude().apply(init)
        }
        fun toJson() = json("center" to center.toJson(), "zoom" to zoom)
    }

    internal class LatitudeLongitude() {
        var latitude: Double = 0.0
        var longitude: Double = 0.0
        fun toJson() = json("lat" to latitude, "lng" to longitude)
    }

    internal class KotlinGoogleMap(element: Element?) : GoogleMap(element) {
        fun options(init: MapOptions.() -> Unit) {
            val options = MapOptions().apply(init)
            setOptions(options = options.toJson())
        }
    }

    internal fun kotlinGoogleMap(element: Element?, init: KotlinGoogleMap.() -> Unit) =
        KotlinGoogleMap(element).apply(init)
    ----

    The client code now can be written as:


[source,kotlin]
    ----
    fun initMap() {
        val div = document.getElementById("map")
        kotlinGoogleMap(div) {
            options {
                zoom = 6
                center {
                    latitude = 46.2050836
                    longitude = 6.1090691
                }
            }
        }
    }
    ----

    == Conclusion

Though the documentation is rather terse, it’s possible to use only the JavaScript ecosystem to transpile Kotlin code to JavaScript. Granted, bridging existing libraries is a chore, but it’s a one-time effort that will fade as the community starts sharing its adapters. On the other hand, the same features that make Kotlin a great language server-side - e.g. writing a DSL - also benefit the front-end.

Categories: Development Tags: JavaScript, Kotlin, front-end
  • Coping with stringly-typed

    UPDATED on March 13, 2017: Add Builder pattern section

Most developers have strong opinions regarding whether a language should be strongly-typed or weakly-typed, whatever notions they put behind those terms. Some also actively practice stringly-typed programming - mostly without even being aware of it. It happens when most attributes and parameters of a codebase are String. In this post, I will make use of the following simple snippet as an example:

public class Person {

        private final String title;
        private final String givenName;
        private final String familyName;
        private final String email;

        public Person(String title, String givenName, String familyName, String email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
    }

    The original sin

    The problem with that code is that it’s hard to remember which parameter represents what and in which order they should be passed to the constructor.

    Person person = new Person("", "John", "Doe", "Sir");

    In the previous call, the email and the title parameter values were switched. Ooops.

    This is even worse if more than one constructor is available, offering optional parameters:

public Person(String givenName, String familyName, String email) {
        this(null, givenName, familyName, email);
    }

    Person another = new Person("Sir", "John", "Doe");

    In that case, title was the optional parameter, not email. My bad.

    Solving the problem the OOP way

    Object-Oriented Programming and its advocates have a strong aversion to stringly-typed code for good reasons. Since everything in the world has a specific type, so must it be in the system.

    Let’s rewrite the previous code à la OOP:

public class Title {
        private final String value;
        public Title(String value) {
            this.value = value;
        }
    }

    public class GivenName {
        private final String value;
        public GivenName(String value) {
            this.value = value;
        }
    }

    public class FamilyName {
        private final String value;
        public FamilyName(String value) {
            this.value = value;
        }
    }

    public class Email {
        private final String value;
        public Email(String value) {
            this.value = value;
        }
    }

    public class Person {

        private final Title title;
        private final GivenName givenName;
        private final FamilyName familyName;
        private final Email email;

        public Person(Title title, GivenName givenName, FamilyName familyName, Email email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
    }

    Person person = new Person(new Title(null), new GivenName("John"), new FamilyName("Doe"), new Email(""));

This approach drastically limits the possibility of mistakes. The drawback is a large increase in verbosity - which might itself lead to other bugs.

    Pattern to the rescue

    A common way to tackle this issue in Java is to use the Builder pattern. Let’s introduce a new builder class and rework the code:

public class Person {

        private String title;
        private String givenName;
        private String familyName;
        private String email;

        private Person() {}

        private void setTitle(String title) {
            this.title = title;
        }

        private void setGivenName(String givenName) {
            this.givenName = givenName;
        }

        private void setFamilyName(String familyName) {
            this.familyName = familyName;
        }

        private void setEmail(String email) {
            this.email = email;
        }

        public static class Builder {

            private Person person;

            public Builder() {
                person = new Person();
            }

            public Builder title(String title) {
                person.setTitle(title);
                return this;
            }

            public Builder givenName(String givenName) {
                person.setGivenName(givenName);
                return this;
            }

            public Builder familyName(String familyName) {
                person.setFamilyName(familyName);
                return this;
            }

            public Builder email(String email) {
                person.setEmail(email);
                return this;
            }

            public Person build() {
                return person;
            }
        }
    }

    Note that in addition to the new builder class, the constructor of the Person class has been set to private. Using the Java language features, this allows only the Builder to create new Person instances. The same is used for the different setters.

    Using this pattern is quite straightforward:

Person person = new Builder()
        .title("Sir")
        .givenName("John")
        .familyName("Doe")
        .build();

The builder pattern shifts the verbosity from the calling part to the design part. Not a bad trade-off.

    Languages to the rescue

    Verbosity is unfortunately the mark of Java. Some other languages (Kotlin, Scala, etc.) would be much more friendly to this approach, not only for class declarations, but also for object creation.

    Let’s port class declarations to Kotlin:

    class Title(val value: String?)
    class GivenName(val value: String)
    class FamilyName(val value: String)
    class Email(val value: String)
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)

    This is much better, thanks to Kotlin! And now object creation:

    val person = Person(Title(null), GivenName("John"), FamilyName("Doe"), Email(""))

    For this, verbosity is only marginally decreased compared to Java.

    Named parameters to the rescue

OOP fanatics may stop reading here, for their way is not the only one to cope with stringly-typed code.

One alternative is named parameters, which are incidentally also found in Kotlin. Let’s get back to the original stringly-typed code, port it to Kotlin, and use named parameters:

    class Person(val title: String?, val givenName: String, val familyName: String, val email: String)
    val person = Person(title = null, givenName = "John", familyName = "Doe", email = "")
    val another = Person(email = "", title = "Sir", givenName = "John", familyName = "Doe")

    A benefit of named parameters besides coping with stringly-typed code is that they are order-agnostic when invoking the constructor. Plus, they also play nice with default values:

    class Person(val title: String? = null, val givenName: String, val familyName: String, val email: String? = null)
    val person = Person(givenName = "John", familyName = "Doe")
    val another = Person(title = "Sir", givenName = "John", familyName = "Doe")

    Type aliases to the rescue

    While looking at Kotlin, let’s describe a feature released with 1.1 that might help.

    A type alias is as its name implies a name for an existing type; the type can be a simple type, a collection, a lambda - whatever exists within the type system.

    Let’s create some type aliases in the stringly-typed world:

    typealias Title = String
typealias GivenName = String
    typealias FamilyName = String
    typealias Email = String
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    val person = Person(null, "John", "Doe", "")

The declaration seems more typed. Unfortunately, object creation doesn’t bring any improvement.

Note the main problem of type aliases: they are just that - aliases. No new type is created, so if 2 aliases point to the same type, all three are interchangeable with one another.
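A small example (with a hypothetical helper function, not from the post) makes the danger concrete: since no new type is created, nothing prevents passing a GivenName where a FamilyName is expected:

```kotlin
typealias GivenName = String
typealias FamilyName = String

// hypothetical helper expecting a family name
fun formal(familyName: FamilyName) = "Dear Mr. $familyName"

fun main() {
    val given: GivenName = "John"
    // Compiles without complaint: GivenName, FamilyName and String are all the same type
    println(formal(given))
}
```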

    Libraries to the rescue

    For the rest of this post, let’s go back to the Java language.

    Twisting the logic a bit, parameters can be validated at runtime instead of compile-time with the help of specific libraries. In particular, the Bean validation library does the job:

public Person(@Title String title, @GivenName String givenName, @FamilyName String familyName, @Email String email) {
        this.title = title;
        this.givenName = givenName;
        this.familyName = familyName;
        this.email = email;
    }

    Admittedly, it’s not the best solution… but it works.

    Tooling to the rescue

I have already written about tooling, and that it’s as important as (if not more than) the language itself.

    Tools fill gaps in languages, while being non-intrusive. The downside is that everyone has to use it (or find a tool with the same feature).

    For example, when I started my career, coding guidelines mandated developers to order methods by alphabetical order in the class file. Nowadays, that would be senseless, as every IDE worth its salt can display the methods of a class in order.

Likewise, named parameters can be a feature of the IDE for languages that lack them. In particular, the latest versions of IntelliJ IDEA emulate named parameters for the Java language, for types that are deemed too generic. The following shows the Person class inside the IDE:


While proper OOP design is the historical way to cope with stringly-typed code, it’s quite verbose and unwieldy in Java. This post described alternatives, each with its specific pros and cons. They need to be evaluated in one’s own context to decide which is the best fit.

  • HTTP headers forwarding in microservices

    Spring Cloud logo

Microservices are not a trend anymore. Like it or not, they are here to stay. Yet, there’s a huge gap between embracing the microservice architecture and implementing it right. As a reminder, one might first want to check the many fallacies of distributed computing. Among all the requirements necessary to overcome them is the ability to follow one HTTP request across the microservices involved in a specific business scenario - for monitoring and debugging purposes.

One possible implementation of it is a dedicated HTTP header with an immutable value passed along every microservice involved in the call chain. This week, I developed this sort of monitoring on a Spring microservice and would like to share how to achieve it.

    Headers forwarding out-of-the-box

In the Spring ecosystem, Spring Cloud Sleuth is the library dedicated to that:

    Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud, borrowing heavily from Dapper, Zipkin and HTrace. For most users Sleuth should be invisible, and all your interactions with external systems should be instrumented automatically. You can capture data simply in logs, or by sending it to a remote collector service.

    Within Spring Boot projects, adding the Spring Cloud Sleuth library to the classpath will automatically add 2 HTTP headers to all calls:

    X-B3-TraceId
    Shared by all HTTP calls of a single transaction i.e. the wished-for transaction identifier
    X-B3-SpanId
    Identifies the work of a single microservice during a transaction

    Spring Cloud Sleuth offers some customisation capabilities, such as alternative header names, at the cost of some extra code.

    Diverging from out-of-the-box features

    Those features are quite handy when starting from a clean-slate architecture. Unfortunately, the project I’m working on has a different context:

    • The transaction ID is not created by the first microservice in the call chain - it’s created by a mandatory façade proxy
    • The transaction ID is not numeric - and Sleuth handles only numeric values
    • Another header is required. Its objective is to group all requests related to one business scenario across different call chains
    • A third header is necessary. It’s to be incremented by each new microservice in the call chain

    A solution architect’s first move would be to look among API management products, such as Apigee (recently bought by Google), for one that offers features matching those requirements. Unfortunately, the current context doesn’t allow for that.

    Coding the requirements

    In the end, I coded the following using the Spring framework:

    1. Read and store headers from the initial request
    2. Write them in new microservice requests
    3. Read and store headers from the microservice response
    4. Write them in the final response to the initiator, not forgetting to increment the call counter
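    As a toy model of those four steps, stripped of all Spring machinery, a plain map can stand in for the HTTP headers (the function name is illustrative; the real implementation follows below):

    ```kotlin
    // Read the incoming headers, forward them, and increment the hop counter
    // before writing the final response. A missing hop defaults to 0.
    fun forwardAndIncrement(incoming: Map<String, String>): Map<String, String> {
        // Steps 1 & 3: read and store headers (collapsed in this toy version)
        val stored = incoming.toMutableMap()
        // Step 4: increment the call counter; request and session IDs pass through unchanged
        stored["hop"] = ((stored["hop"]?.toInt() ?: 0) + 1).toString()
        return stored
    }

    fun main() {
        val incoming = mapOf("hop" to "1", "request-id" to "42", "session-id" to "abc")
        println(forwardAndIncrement(incoming))  // hop becomes "2", IDs unchanged
    }
    ```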

    UML modeling of the header flow

    Headers holder

    The first step is to create the entity responsible to hold all necessary headers. It’s unimaginatively called HeadersHolder. Blame me all you want, I couldn’t find a more descriptive name.

    private const val HOP_KEY = "hop"
    private const val REQUEST_ID_KEY = "request-id"
    private const val SESSION_ID_KEY = "session-id"
    data class HeadersHolder (var hop: Int?,
                              var requestId: String?,
                              var sessionId: String?)

    The interesting part is deciding which scope is the most relevant for instances of this class. Obviously, there must be several instances, so the singleton scope is not suitable. Also, since data must be stored across several requests, the prototype scope won’t do either. In the end, the only possible way to manage the instance is through a ThreadLocal.
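    As a minimal sketch of what that means (names illustrative), a hand-rolled ThreadLocal gives each thread its own holder instance:

    ```kotlin
    // Simplified holder with a single field, for illustration only
    data class ManualHolder(var requestId: String? = null)

    // Each thread lazily gets, and keeps, its own ManualHolder instance
    val manualHolder: ThreadLocal<ManualHolder> = ThreadLocal.withInitial { ManualHolder() }

    fun main() {
        manualHolder.get().requestId = "42"
        println(manualHolder.get().requestId)                     // this thread sees its own value
        Thread { println(manualHolder.get().requestId) }.start()  // another thread sees null
    }
    ```

    Rather than manage this by hand, the code below delegates the same mechanism to Spring.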

    Though it’s possible to manage the ThreadLocal directly, let’s leverage Spring’s features instead, since it makes registering new scopes easy. There’s already an out-of-the-box scope backed by a ThreadLocal; one just needs to register it in the context. This directly translates into the following code:

    internal const val THREAD_SCOPE = "thread"

    @Scope(THREAD_SCOPE)
    annotation class ThreadScope

    @Configuration
    open class WebConfigurer {
        @Bean @ThreadScope
        open fun headersHolder() = HeadersHolder()
        @Bean open fun customScopeConfigurer() = CustomScopeConfigurer().apply {
            addScope(THREAD_SCOPE, SimpleThreadScope())
        }
    }

    Headers in the server part

    Let’s implement requirements 1 & 4 above: read headers from the request and write them to the response. Also, headers need to be reset after the request-response cycle to prepare for the next.

    This also requires the holder class to be updated to be more OOP-friendly:

    data class HeadersHolder private constructor(private var hop: Int?,
                                                 private var requestId: String?,
                                                 private var sessionId: String?) {

        constructor() : this(null, null, null)

        fun readFrom(request: HttpServletRequest) {
            this.hop = request.getIntHeader(HOP_KEY)
            this.requestId = request.getHeader(REQUEST_ID_KEY)
            this.sessionId = request.getHeader(SESSION_ID_KEY)
        }

        fun writeTo(response: HttpServletResponse) {
            hop?.let { response.addIntHeader(HOP_KEY, it) }
            response.addHeader(REQUEST_ID_KEY, requestId)
            response.addHeader(SESSION_ID_KEY, sessionId)
        }

        fun clear() {
            hop = null
            requestId = null
            sessionId = null
        }
    }

    To keep controllers free from any header management concern, related code should be located in a filter or a similar component. In the Spring MVC ecosystem, this translates into an interceptor.

    abstract class HeadersServerInterceptor : HandlerInterceptorAdapter() {
        abstract val headersHolder: HeadersHolder
        override fun preHandle(request: HttpServletRequest,
                               response: HttpServletResponse, handler: Any): Boolean {
            headersHolder.readFrom(request)
            return true
        }
        override fun afterCompletion(request: HttpServletRequest, response: HttpServletResponse,
                                     handler: Any, ex: Exception?) {
            with (headersHolder) {
                writeTo(response)
                clear()
            }
        }
    }

    @Configuration open class WebConfigurer : WebMvcConfigurerAdapter() {
        override fun addInterceptors(registry: InterceptorRegistry) {
            registry.addInterceptor(object : HeadersServerInterceptor() {
                override val headersHolder: HeadersHolder
                    get() = headersHolder()
            })
        }
    }

    Note the invocation of the clear() method to reset the headers holder for the next request.

    The most important bit is the abstract headersHolder property. As its scope - thread - is narrower than the interceptor’s, it cannot be injected directly, lest it be injected only once during Spring’s context startup. Hence, Spring provides lookup method injection. The above code is its direct translation in Kotlin.
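    The reason the getter-based override works can be shown without any Spring at all (names illustrative): a property initialized with = is captured once at construction time, while a get() override is re-evaluated on every access.

    ```kotlin
    // Stand-in for a source of scoped instances
    class Provider {
        var current = "first"
    }

    fun main() {
        val provider = Provider()
        val component = object {
            val capturedOnce = provider.current    // resolved once, at construction
            val lookedUp get() = provider.current  // resolved on each access
        }
        provider.current = "second"
        println(component.capturedOnce)  // first
        println(component.lookedUp)      // second
    }
    ```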

    Headers in client calls

    The previous code assumes the current microservice sits at the end of the call chain: it reads request headers and writes them back in the response (not forgetting to increment the ‘hop’ counter). However, monitoring is relevant only for call chains with more than a single link. How is it possible to pass headers to the next microservice (and get them back) - requirements 2 & 3 above?

    Spring provides a handy abstraction to handle the client part - ClientHttpRequestInterceptor, which can be registered on a REST template. Regarding the scope mismatch, the same injection trick as for the handler interceptor above is used.

    abstract class HeadersClientInterceptor : ClientHttpRequestInterceptor {
        abstract val headersHolder: HeadersHolder
        override fun intercept(request: HttpRequest,
                               body: ByteArray, execution: ClientHttpRequestExecution): ClientHttpResponse {
            with(headersHolder) {
                writeTo(request.headers)
                return execution.execute(request, body).apply {
                    readFrom(this.headers)
                }
            }
        }
    }

    @Configuration open class WebConfigurer : WebMvcConfigurerAdapter() {
        @Bean open fun headersClientInterceptor() = object : HeadersClientInterceptor() {
            override val headersHolder: HeadersHolder
                get() = headersHolder()
        }
        @Bean open fun oAuth2RestTemplate() = OAuth2RestTemplate(clientCredentialsResourceDetails()).apply {
            interceptors = listOf(headersClientInterceptor())
        }
    }

    With this code, every REST call using the oAuth2RestTemplate() will have headers managed automatically by the interceptor.

    The HeadersHolder just needs a quick update:

    data class HeadersHolder private constructor(private var hop: Int?,
                                                 private var requestId: String?,
                                                 private var sessionId: String?) {

        fun readFrom(headers: org.springframework.http.HttpHeaders) {
            headers[HOP_KEY]?.let {
                it.getOrNull(0)?.let { this.hop = it.toInt() }
            }
            headers[REQUEST_ID_KEY]?.let { this.requestId = it.getOrNull(0) }
            headers[SESSION_ID_KEY]?.let { this.sessionId = it.getOrNull(0) }
        }

        fun writeTo(headers: org.springframework.http.HttpHeaders) {
            hop?.let { headers.add(HOP_KEY, it.toString()) }
            headers.add(REQUEST_ID_KEY, requestId)
            headers.add(SESSION_ID_KEY, sessionId)
        }
    }


    Spring Cloud offers many components that can be used out-of-the-box when developing microservices. When requirements start to diverge from what it provides, the flexibility of the underlying Spring Framework can be leveraged to code those requirements.