Archive

Posts Tagged ‘Kotlin’
  • Synthetic

    Plastic straw tubes


    There are a bunch of languages running on the JVM, from of course https://www.java.com/[Java^], to https://clojure.org/[Clojure^] and http://jruby.org/[JRuby^]. They all have different syntaxes, but it's awesome that they all compile to the same bytecode: the JVM unites them all. Of course, the platform is biased toward Java, but even in Java, there is some magic happening in the bytecode.

    The most well-known trick comes from the following code:

    [source,java]
    ----
    public class Foo {

        static class Bar {
            private Bar() {}
        }

        public static void main(String... args) {
            new Bar();
        }
    }
    ----
    

    Can you guess how many constructors the Bar class has?

    Two. Yes, you read that right. As far as the JVM is concerned, the Bar class declares two constructors. Run the following code if you don't believe it:

    [source,java]
    ----
    Class<?> clazz = Foo.Bar.class;
    Constructor<?>[] constructors = clazz.getDeclaredConstructors();
    System.out.println(constructors.length);
    Arrays.stream(constructors).forEach(constructor -> {
        System.out.println("Constructor: " + constructor);
    });
    ----

    The output is the following:


    ----
    2
    Constructor: private Foo$Bar()
    Constructor: Foo$Bar(Foo$1)
    ----

    The reason is https://stackoverflow.com/questions/2883181/why-is-an-anonymous-inner-class-containing-nothing-generated-from-this-code[pretty well documented^]. The bytecode knows about access modifiers, but not about nested classes. In order for the Foo class to be able to create new Bar instances, the Java compiler generates an additional constructor with a default package visibility.

    This can be confirmed with the javap tool.

    [source,bash]
    ----
    javap -v out/production/synthetic/Foo$Bar.class
    ----

    This outputs the following:


    ----
    [...]
    {
      Foo$Bar(Foo$1);
        descriptor: (LFoo$1;)V
        flags: ACC_SYNTHETIC
        Code:
          stack=1, locals=2, args_size=2
             0: aload_0
             1: invokespecial #1    // Method "<init>":()V
             4: return
          LineNumberTable:
            line 2: 0
          LocalVariableTable:
            Start  Length  Slot  Name   Signature
                0       5     0  this   LFoo$Bar;
                0       5     1    x0   LFoo$1;
    }
    [...]
    ----

    Notice the ACC_SYNTHETIC flag. Going to the JVM specifications yields the following information:

    [quote, The class File Format, https://docs.oracle.com/javase/specs/jvms/se8/html/jvms-4.html#jvms-4.6]
    ____
    The ACC_SYNTHETIC flag indicates that this method was generated by a compiler and does not appear in source code, unless it is one of the methods named in §4.7.8.
    ____

    [NOTE]
    ====
    Theoretically, it should be possible to call this generated constructor - never mind that it's not possible to provide an instance of Foo$1, let's put that aside. Yet the IDE doesn't seem able to discover this second constructor. I didn't find any reference to it in the https://docs.oracle.com/javase/specs/jls/se8/html/index.html[Java Language Specification^], but synthetic classes and members cannot be accessed directly, only through reflection.
    ====
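    The reflection API surfaces the flag through Member.isSynthetic(). As a self-contained illustration, the sketch below uses another classic compiler-generated synthetic member - the bridge method emitted for a generic override - since it does not depend on a particular compiler's nested-class handling; the class and member names here are made up for the demo:

```java
import java.lang.reflect.Method;
import java.util.Arrays;

public class BridgeDemo {

    interface Box<T> {
        T get();
    }

    // Implementing Box<String> forces the compiler to emit a synthetic
    // bridge method "Object get()" next to the declared "String get()".
    static class StringBox implements Box<String> {
        @Override
        public String get() {
            return "hello";
        }
    }

    public static void main(String... args) {
        for (Method m : StringBox.class.getDeclaredMethods()) {
            System.out.println(m + " -> synthetic: " + m.isSynthetic());
        }
        long syntheticCount = Arrays.stream(StringBox.class.getDeclaredMethods())
                .filter(Method::isSynthetic)
                .count();
        System.out.println(syntheticCount); // prints 1
    }
}
```

    Running it lists two declared get() methods, only one of which - the bridge - carries the synthetic flag.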

    At this point, one could wonder why all the fuss about the synthetic flag. It was introduced in Java to resolve the issue of nested classes access. But other JVM languages use it to implement their specification. For example, Kotlin uses synthetic to access the companion object:

    [source,kotlin]
    ----
    class Baz() {
        companion object {
            val BAZ = "baz"
        }
    }
    ----

    Executing javap on the .class file returns the following output (abridged for readability):


    ----
    {
      public static final Baz$Companion Companion;
        descriptor: LBaz$Companion;
        flags: ACC_PUBLIC, ACC_STATIC, ACC_FINAL

      public Baz();
        [...]

      public static final java.lang.String access$getBAZ$cp();
        descriptor: ()Ljava/lang/String;
        flags: ACC_PUBLIC, ACC_STATIC, ACC_FINAL, ACC_SYNTHETIC
        Code:
          stack=1, locals=0, args_size=0
             0: getstatic     #22    // Field BAZ:Ljava/lang/String;
             3: areturn
          LineNumberTable:
            line 1: 0
        RuntimeInvisibleAnnotations:
          0: #15()
    }
    [...]
    ----

    Notice the access$getBAZ$cp() static method? That's the synthetic accessor generated for the companion object's property. From Java, the value is read through the companion:

    [source,java]
    ----
    public class FromJava {

        public static void main(String... args) {
            Baz.Companion.getBAZ();
        }
    }
    ----
    

    Conclusion

    While knowledge of the synthetic flag is not required in the day-to-day work of a JVM developer, it can be helpful to understand some of the results returned by the reflection API.

    Categories: Java Tags: JVM, bytecode, javap, Kotlin
  • Flavors of Spring application context configuration

    Spring framework logo

    Every now and then, there's an angry post or comment bitching about how the Spring framework is full of XML, how terrible and verbose it is, and how the author would never use it because of that. Of course, that is complete crap. First, when Spring was created, XML was pretty hot, and J2EE deployment descriptors (yes, that was the name at the time) were XML-based.

    Anyway, it’s 2017 folks, and there are multiple ways to skin a cat. This article aims at listing the different ways a Spring application context can be configured so as to enlighten the aforementioned crowd - and stop the trolling around Spring and XML.

    XML

    XML was the first way to configure the Spring application context. Basically, one creates an XML file with a dedicated namespace. It's very straightforward:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans 
               http://www.springframework.org/schema/beans/spring-beans.xsd">
        <bean id="foo" class="ch.frankel.blog.Foo">
            <constructor-arg value="Hello world!" />
        </bean>
        <bean id="bar" class="ch.frankel.blog.Bar">
            <constructor-arg ref="foo" />
        </bean>
    </beans>
    

    The next step is to create the application context, using a dedicated class:

    ApplicationContext ctx = new ClassPathXmlApplicationContext("ch/frankel/blog/context.xml");
    ApplicationContext ctx = new FileSystemXmlApplicationContext("/opt/app/context.xml");
    ApplicationContext ctx = new GenericXmlApplicationContext("classpath:ch/frankel/blog/context.xml");
    

    XML’s declarative nature enforces simplicity at the cost of extra verbosity. It’s orthogonal to the code - it’s completely independent. Before the coming of JavaConfig, I still favored XML over self-annotated classes.

    Self-annotated classes

    As with every new feature/technology, when Java 5 introduced annotations, there was a rush to use them. In essence, a self-annotated class is auto-magically registered into the application context.

    To achieve that, Spring provides the @Component annotation. However, to improve semantics, there are also dedicated annotations to differentiate between the 3 standard layers of the layered architecture principle:

    • @Controller
    • @Service
    • @Repository

    This is also quite straightforward:

    @Component
    public class Foo {
    
        public Foo(@Value("Hello world!") String value) { }
    }
    
    @Component
    public class Bar {
    
        @Autowired
        public Bar(Foo foo) { }
    }
    

    To scan for self-annotated classes, a dedicated application context is necessary:

    ApplicationContext ctx = new AnnotationConfigApplicationContext("ch.frankel.blog");
    

    Self-annotated classes are quite easy to use, but there are some downsides:

    • A self-annotated class becomes dependent on the Spring framework. For a framework based on dependency injection, that’s quite a problem.
    • Usage of self-annotations blurs the boundary between the class and the bean. As a consequence, the class cannot be registered multiple times, under different names and scopes into the context.
    • Self-annotated classes require autowiring, which has downsides on its own.

    Java configuration

    Given the above problems with self-annotated classes, the Spring framework introduced a new way to configure the context: JavaConfig. In essence, JavaConfig configuration classes replace XML files, trading XML-schema runtime validation for compile-time safety. This is based on two annotations: @Configuration for classes and @Bean for methods.

    The equivalent of the above XML is the following snippet:

    @Configuration
    public class JavaConfiguration {
    
        @Bean
        public Foo foo() {
            return new Foo("Hello world!");
        }
    
        @Bean
        public Bar bar() {
            return new Bar(foo());
        }
    }
    

    JavaConfig classes can be scanned like self-annotated classes:

    ApplicationContext ctx = new AnnotationConfigApplicationContext("ch.frankel.blog");
    

    JavaConfig is the way to configure Spring applications: it's orthogonal to the code, and brings some degree of compile-time validation.

    Groovy DSL

    Spring 4 added a way to configure the context via a Groovy Domain-Specific Language. The configuration takes place in a Groovy file, with the beans element as its root.

    beans {
        foo Foo, 'Hello world!'
        bar Bar, foo
    }
    

    There’s an associated application context creator class:

    ApplicationContext ctx = new GenericGroovyApplicationContext("ch/frankel/blog/context.groovy");
    

    I’m not a Groovy developer, so I never used that option. But if you are, it makes a lot of sense.

    Kotlin DSL

    Groovy was unceremoniously kicked out of the Pivotal portfolio some time ago. There is no correlation whatsoever, but Kotlin has since found its way in. It's no wonder that the upcoming release of Spring 5 provides a Kotlin DSL.

    package ch.frankel.blog
    
    fun beans() = beans {
        bean { Foo("Hello world!") }
        bean { Bar(ref()) }
    }
    

    Note that while bean declaration is explicit, wiring is implicit, as in JavaConfig @Bean methods with dependencies.

    In opposition to configuration flavors mentioned above, the Kotlin DSL needs an existing context to register beans in:

    import ch.frankel.blog.beans
    
    fun register(ctx: GenericApplicationContext) {
        beans().invoke(ctx)
    }
    

    I haven't used the Kotlin DSL except to play a bit with it for a demo, so I can't say much for sure about its pros and cons.

    Conclusion

    So far, the JavaConfig alternative is my favorite: it’s orthogonal to the code and provides some degree of compile-time validation. As a Kotlin enthusiast, I’m also quite eager to try the Kotlin DSL in large projects to experience its pros and cons first-hand.

  • Rise and fall of JVM languages

    Rising trend on a barchart


    Every now and then, there's a post predicting the death of the Java language. The funny thing is that none of them commits to a date. But to be honest, they are all probably true. This is the fate of every language: to disappear into oblivion - or more precisely, to be used less and less for new projects. The question is, what will replace them?

    Last week saw another such article https://www.infoq.com/news/2017/08/Java-Still-One-Tiobe[on InfoQ^]. At least, this one told about a possible replacement, Kotlin. It got me thinking about the state of the JVM languages, and trends. Note that trends have nothing to do with the technical merits and flaws of each language.

    I started developing in Java, late 2001. At that time, Java was really cool. Every young developer wanted to work on so-called new technologies: either .Net or Java, as older developers were stuck on Cobol. I had studied C and C++ in school, and memory management in Java was so much easier. I was happy with Java… but not everyone was.

    http://groovy-lang.org/[Groovy^] came into existence in 2003. I don't remember when I first learned about it; I just ignored it, as I had no need for a scripting language then. In the context of developing enterprise-grade applications with a long lifespan and a team of many developers, static typing was a huge advantage over dynamic typing. Having to create tests just to check the type system was a net loss. The only time I had to create scripts was as a WebSphere administrator: the choice was between Python and Tcl.

    https://www.scala-lang.org/[Scala^] was conceived one year later, in 2004. I don't remember when and how I heard about it, but it was much later. In opposition to Groovy, though, I decided to give it a try. The main reason was my long-standing interest in writing "better" code - read: more readable and maintainable. Scala being statically typed, it was closer to what I was looking for. I followed the Coursera course https://www.coursera.org/learn/progfun1[Principles of Functional Programming in Scala^]. It had three main consequences:

    • It questioned the way I write Java code. For example, why did I automatically generate getters and setters when designing a class?
    • I decided Scala made it too easy to write code unreadable for most developers - including myself.
    • I started looking for other alternative languages.

    After Groovy and Scala came the second generation (3^rd^ if you count Java as the first) of JVM languages, including:

    • https://kotlinlang.org/[JetBrains Kotlin^]
    • https://ceylon-lang.org/[Red Hat Ceylon^]
    • and https://eclipse.org/xtend/[Eclipse Xtend^]

    After a casual glance at them, I became convinced they had not much traction, and were not worth investing my time.

    Some years ago, I decided to teach myself basic Android to be able to understand the development context of mobile developers. Oh boy! After years of developing Java EE and Spring applications, that was a surprise - and not a pleasant one. It was like being sent backwards a decade in the past. The Android API is so low level… not to even mention testing the app locally. After a quick search around, I found that Kotlin was mentioned in a lot of places, and finally decided to give it a try. I fell in love immediately: with Kotlin, I could improve the existing crap API into something better, even elegant, thanks to extension functions. I dug more into the language, and started using Kotlin for server-side projects as well. Then, the Spring framework announced its integration of Kotlin. And at Google I/O, Google announced its support of Kotlin on Android.

    [CAUTION]
    ====
    This post is about my own personal experience and the opinion formed from it. If you prefer the comfort of reading only posts you agree with, feel free to stop reading at this point.
    ====

    Apart from my own experience, what is the current state of those languages? I ran a https://trends.google.com/trends/explore?q=%2Fm%2F0_lcrx4,%2Fm%2F02js86,%2Fm%2F091hdj,Ceylon,XTend[quick search on Google Trends^].

    image::jvm-lang-google-trends.png[Overview of JVM languages on Google Trends,796,511,align="center"]

    There are a couple of interesting things to note:

    • Google has recognized dedicated search terms, i.e. "Programming Language", for Scala, Groovy and Kotlin, but not for Ceylon and Xtend. For Ceylon, I can only assume it's because Ceylon is a popular location. For Xtend, I'm afraid there are just not enough Google searches.
    • Scala is by far the most popular, followed by Groovy and Kotlin. I have no real clue about the scale.
    • The Kotlin peak in May is correlated with Google’s support announcement at Google I/O.
    • Most searches for Scala and Kotlin originate from China, Groovy is much more balanced regarding locations.
    • Scala searches strongly correlate with the term “Spark”, Kotlin searches with the term “Android”.

    Digging a bit further may uncover interesting facts:

    • Xtend is not dead, because it has never been alive. I have never read any post about it, nor listened to a conference talk about it.
    • In 2017, Red Hat gave Ceylon to the Eclipse Foundation, creating https://projects.eclipse.org/proposals/eclipse-ceylon[Eclipse Ceylon]. A private actor giving away software to a foundation might be interpreted in different ways. In this case and despite all reassuring talks around the move, this is not a good sign for the future of Ceylon.
    • In 2015, Pivotal stopped sponsoring Groovy and it moved to the Apache Foundation. While I believe Groovy has a wide enough support base, and a unique niche on the JVM - scripting - it's also not a good sign. This correlates with the https://github.com/apache/groovy/graphs/contributors[commit frequency^] of the core Groovy committers: their number of commits has drastically decreased - to the point of stopping entirely for some.
    • Interestingly enough, both Scala and Kotlin recently invaded other spaces, transpiling to JavaScript and compiling to native.
    • In Java, http://openjdk.java.net/jeps/286[JEP 286^] is a proposal to enhance the language with type inference, a feature already provided by Scala and Kotlin. It’s however limited to only local variables.
    • Interestingly enough, there are efforts to https://github.com/twitter/reasonable-scala[improve Scala compilation time^] by keeping only a subset of the language. Which then raises the question, why keep Scala if you ditch its powerful features (such as macros)?
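    As an aside on the JEP 286 bullet above, a short sketch illustrates the restriction. Note this is forward-looking relative to the rest of the post: the proposal eventually shipped as var in Java 10, and, as stated, inference is limited to local variables:

```java
public class VarDemo {

    public static void main(String... args) {
        // Type inference applies to initialized local variables only;
        // fields, method parameters and return types still need explicit types.
        var message = "Hello";          // inferred as String
        var length = message.length();  // inferred as int
        System.out.println(message + "/" + length); // prints Hello/5
    }
}
```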

    I’m not great at forecast, but here are my 2 cents:

    . Groovy has its own niche - scripting - which leaves Java, Scala and Kotlin vying for the pure application development space on the server-side JVM.
    . Scala also carved out its own space. Scala developers generally consider the language superior to Java (or Kotlin) and won't migrate to another one. However, due to the Spring and Google announcements, Kotlin may replace Scala as the language developers go to when they are dissatisfied with Java.
    . Kotlin has won the Android battle. Scala ignored this area in the past, and won't invest in it in the future given Kotlin is so far ahead in the game.
    . Kotlin's rise on mobile was not an intended move, but rather a nice and unexpected surprise. JetBrains used it as a way forward as soon as they noticed the trend.
    . Kotlin's interoperability with Java is the killer feature that may convince managers to migrate legacy projects to Kotlin, or to start new projects with it - just as non-breaking backward compatibility was for Java.

    I’d be very much interested in your opinion, dear readers, even (especially?) if you disagree with the above. Just please be courteous and provide facts when you can to prove your points.

    Categories: Miscellaneous Tags: trend analysis, Kotlin, Xtend, Scala, Groovy, Ceylon
  • A SonarQube plugin for Kotlin - Creating the plugin proper

    SonarQube Continuous Inspection logo

    This is the 3rd post in a series about creating a SonarQube plugin for the Kotlin language:

    • The first post was about creating the parsing code itself.
    • The 2nd post detailed how to use the parsing code to check for two rules.

    In this final post, we will be creating the plugin proper using the code of the 2 previous posts.

    The Sonar model

    The Sonar model is based on the following abstractions:

    Plugin
    Entry-point for plugins to inject extensions into SonarQube
    A plugin points to the other abstraction instances to make the SonarQube platform load them
    AbstractLanguage
    Pretty self-explanatory. Represents a language - Java, C#, Kotlin, etc.
    ProfileDefinition
    Define a profile which is automatically registered during sonar startup
    A profile is a mutable set of fully-configured rules. While not strictly necessary, having a Sonar profile pre-registered allows users to analyze their code without further configuration. Every language plugin offered by Sonar has at least one profile attached.
    RulesDefinition
    Defines some coding rules of the same repository
    Defines an immutable set of rule definitions in a repository. While a rule definition defines available parameters, default severity, etc., the rule (from the profile) sets the exact parameter values, a specific severity, etc. In short, the rule implements the rule definition.
    Sensor
    A sensor is invoked once for each module of a project, starting from leaf modules. The sensor can parse a flat file, connect to a web server... Sensors are used to add measure and issues at file level.
    The sensor is the entry-point where the magic happens.

    Starting to code the plugin

    Every abstraction above needs a concrete subclass. Note that the API classes themselves are all fairly decoupled. It’s the role of the Plugin child class to bind them together.

    class KotlinPlugin : Plugin {
    
        override fun define(context: Context) {
            context.addExtensions(
                    Kotlin::class.java,
                    KotlinProfile::class.java,
                    KotlinSensor::class.java,
                    KotlinRulesDefinition::class.java)
        }
    }
    

    Most of the code is boilerplate, except for the ANTLR-related part.

    Wiring the ANTLR parsing code

    On one hand, the parsing code is based on generated listeners. On the other hand, the sensor is the entry-point to the SonarQube parsing. There's a need for a bridge between the two.

    In the first article, we used an existing grammar for Kotlin to generate parsing code. SonarQube provides its own lexer/parser generating tool (SonarSource Language Recognizer). A sizeable part of the plugin API is based on it. Describing the grammar is no small feat for any real-life language, so I preferred to design my own adapter code instead.

    AbstractKotlinParserListener
    Subclass of the generated ANTLR KotlinParserBaseListener. It has an attribute to store violations, and a method to add such a violation.
    Violation
    The violation only contains the line number, as the rest of the required information will be stored into a KotlinCheck instance.
    KotlinCheck
    Abstract class that wraps an AbstractKotlinParserListener. Defines what constitutes a violation. It handles the ANTLR boilerplate code itself.

    This can be represented as the following:

    The sensor proper

    The general pseudo-code looks something like this:

    FOR EACH source file
        FOR EACH rule
            Check for violation of the rule
            FOR EACH violation
                Call the SonarQube REST API to create a violation in the datastore
    

    This translates as:

    class KotlinSensor(private val fs: FileSystem) : Sensor {
    
        val sources: Iterable<InputFile>
            get() = fs.inputFiles(MAIN)
    
        override fun execute(context: SensorContext) {
            sources.forEach { inputFile: InputFile ->
                KotlinChecks.checks.forEach { check ->
                    val violations = check.violations(inputFile.file())
                    violations.forEach { (lineNumber) ->
                        with(context.newIssue().forRule(check.ruleKey())) {
                            val location = newLocation().apply {
                                on(inputFile)
                                message(check.message)
                                at(inputFile.selectLine(lineNumber))
                            }
                            at(location).save()
                        }
                    }
                }
            }
        }
    }
    

    Finally, the run

    Let’s create a dummy Maven project with 2 classes, Test1 and Test2 in one Test.kt file, with the same code as last week. Running mvn sonar:sonar yields the following output:

    Et voilà, our first SonarQube plugin for Kotlin, checking for our custom-developed violations.

    Of course, it has (a lot of) room for improvements:

    • Rules need to be activated through the GUI - I couldn't find how to do it programmatically.
    • Adding new rules requires an update to the plugin. Rules in 3rd-party plugins are not added automatically, as is the case for standard SonarQube plugins.
    • So far, code located outside of classes seems not to be parsed.
    • The walk through the parse tree is executed for every check. An obvious performance gain would be to walk only once and run every check from there.
    • A lot of the above improvements could be achieved by replacing the ANTLR grammar with Sonar's internal SSLR.
    • No tests…

    That still makes the project a nice starting point for a full-fledged Kotlin plugin. Pull requests are welcome!

    Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin, ANTLR
  • A SonarQube plugin for Kotlin - Analyzing with ANTLR

    SonarQube Continuous Inspection logo

    Last week, we used ANTLR to generate a library to be able to analyze Kotlin code. It’s time to use the generated API to check for specific patterns.

    API overview

    Let’s start by having a look at the generated API:

    • KotlinLexer: Executes lexical analysis.
    • KotlinParser: Wraps classes representing all Kotlin tokens, and handles parsing errors.
    • KotlinParserVisitor: Contract for implementing the Visitor pattern on Kotlin code. KotlinParserBaseVisitor is its empty implementation, to ease the creation of subclasses.
    • KotlinParserListener: Contract for callback-related code when visiting Kotlin code, with KotlinParserBaseListener its empty implementation.

    Class diagrams are not the easiest starting point for writing code. The following snippet is a very crude analysis implementation. I'll be using Kotlin, but any JVM language interoperable with Java could be used:

    {% highlight java linenos %}
    val stream = CharStreams.fromString("fun main(args : Array<String>) {}")
    val lexer = KotlinLexer(stream)
    val tokens = CommonTokenStream(lexer)
    val parser = KotlinParser(tokens)
    val context = parser.kotlinFile()
    ParseTreeWalker().apply {
        walk(object : KotlinParserBaseListener() {
            override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
                println(ctx.SimpleName().text)
            }
        }, context)
    }
    {% endhighlight %}

    Here’s the explanation:

    1. Create a CharStream to feed the lexer on the next line. The CharStreams class offers plenty of static fromXXX() methods, each accepting a different type (String, InputStream, etc.)
    2. Instantiate the lexer, with the stream
    3. Instantiate a token stream over the lexer. The class provides streaming capabilities over the lexer.
    4. Instantiate the parser, with the token stream
    5. Define the entry point into the code. In that case, it’s a Kotlin file - and probably will be for the plugin.
    6. Create the overall walker that will visit each node in turn
    7. Start the visiting process by calling walk and passing the desired behavior as an object
    8. Override the desired function. Here, it will be invoked every time a function node is entered
    9. Do whatever is desired e.g. print the function name

    Obviously, lines 1 to 7 are just boilerplate to wire all the components together. The behavior that needs to be implemented should replace lines 8 and 9.

    First simple check

    In Kotlin, if a function returns Unit - i.e. nothing - then explicitly declaring its return type is optional. Checking that there's no such explicit return type would make a great rule. The following snippets, both valid Kotlin code, are equivalent - one with an explicit return type and the other without:

    fun hello1(): Unit {
        println("Hello")
    }
    
    fun hello2() {
        println("Hello")
    }
    

    Let’s use grun to graphically display the parse tree (grun was explained in the previous post). It yields the following:

    As can be seen, the snippet with an explicit return type has a type branch under functionDeclaration. This is confirmed by the snippet from the KotlinParser ANTLR grammar file:

    functionDeclaration
      : modifiers 'fun' typeParameters?
          (type '.' | annotations)?
          SimpleName
          typeParameters? valueParameters (':' type)?
          typeConstraints
          functionBody?
          SEMI*
      ;
    

    The rule should check that if such a return type exists, then it shouldn’t be Unit. Let’s update the above code with the desired effect:

    {% highlight java linenos %}
    ParseTreeWalker().apply {
        walk(object : KotlinParserBaseListener() {
            override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
                if (ctx.type().isNotEmpty()) {
                    val typeContext = ctx.type(0)
                    with(typeContext.typeDescriptor().userType().simpleUserType()) {
                        val typeName = this[0].SimpleName()
                        if (typeName.symbol.text == "Unit") {
                            println("Found Unit as explicit return type " +
                                    "in function ${ctx.SimpleName()} at line ${typeName.symbol.line}")
                        }
                    }
                }
            }
        }, context)
    }
    {% endhighlight %}

    Here’s the explanation:

    • Line 4: Check there’s an explicit return type, whatever it is
    • Line 5: Strangely enough, the grammar allows for a multi-valued return type. Just take the first one.
    • Line 6: Follow the parse tree up to the final type name - refer to the above parse tree screenshot for a graphical representation of the path.
    • Line 8: Check that the return type is Unit
    • Line 9: Prints a message in the console. In the next step, we will call the SonarQube API there.

    Running the above code correctly yields the following output:

    Found Unit as explicit return type in function hello1 at line 1
    

    A more advanced check

    In Kotlin, the following snippets are all equivalent:

    fun hello1(name: String): String {
        return "Hello $name"
    }
    
    fun hello2(name: String): String = "Hello $name"
    
    fun hello3(name: String) = "Hello $name"
    

    Note that in the last case, the return type can be inferred by the compiler and omitted by the developer. That would make a good check: in the case of an expression body, the return type should be omitted. The same technique as above can be used:

    1. Display the parse tree from the snippet using grun:

    2. Check for differences. Obviously:
      • Functions that do not have an explicit return type miss a type node in the functionDeclaration tree, as above
      • Functions with an expression body have a functionBody whose first child is = and whose second child is an expression
    3. Refer to the initial grammar, to make sure all cases are covered.
      functionBody
        : block
        | '=' expression
        ;
      
    4. Code!

    {% highlight java linenos %}
    ParseTreeWalker().apply {
        walk(object : KotlinParserBaseListener() {
            override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
                val bodyChildren = ctx.functionBody().children
                if (bodyChildren.size > 1
                        && bodyChildren[0] is TerminalNode && bodyChildren[0].text == "="
                        && ctx.type().isNotEmpty()) {
                    val firstChild = bodyChildren[0] as TerminalNode
                    println("Found explicit return type for expression body " +
                            "in function ${ctx.SimpleName()} at line ${firstChild.symbol.line}")
                }
            }
        }, context)
    }
    {% endhighlight %}

    The code is pretty self-explanatory and yields the following:

    Found explicit return type for expression body in function hello2 at line 5
    
    Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin
  • A SonarQube plugin for Kotlin - Paving the way

    SonarQube Continuous Inspection logo

    Since I started my journey into Kotlin, I have wanted to use the same libraries and tools I use in Java. For libraries - Spring Boot, Mockito, etc. - it's straightforward, as Kotlin is 100% interoperable with Java. For tools, well, it depends. For example, Jenkins works flawlessly, while SonarQube lacks a dedicated plugin. The SonarSource team has limited resources: Kotlin, though on the rise - and even more so since Google I/O 17 - is not in their pipeline. This post series is about creating such a plugin, and this first post is about parsing Kotlin code.

    ANTLR

    In the realm of code parsing, ANTLR is a clear leader in the JVM world.

    ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It’s widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build and walk parse trees.

    Designing the grammar

ANTLR is able to generate parsing code for any language thanks to a dedicated grammar file. However, creating such a grammar from scratch for a real-world language is far from trivial. Fortunately, thanks to the power of the community, a grammar for Kotlin already exists on GitHub.

    With this existing grammar, ANTLR is able to generate Java parsing code to be used by the SonarQube plugin. The steps are the following:

• Clone the GitHub repository
      git clone [email protected]:antlr/grammars-v4.git
      
    • By default, classes will be generated under the root package, which is discouraged. To change that default:
      • Create a src/main/antlr4/<fully>/<qualified>/<package> folder such as src/main/antlr4/ch/frankel/sonarqube/kotlin/api
      • Move the g4 files there
      • In the POM, remove the sourceDirectory and includes bits from the antlr4-maven-plugin configuration to use the default
    • Build and install the JAR in the local Maven repo
      cd grammars-v4/kotlin
      mvn install
      

    This should generate a KotlinLexer and a KotlinParser class, as well as several related classes in target/classes. As Maven goes, it also packages them in a JAR named kotlin-1.0-SNAPSHOT.jar in the target folder - and in the local Maven repo as well.

    Testing the parsing code

    To test the parsing code, one can use the grun command. It’s an alias for the following:

java -Xmx500M -cp "<path/to/antlr/complete/>.jar:$CLASSPATH" org.antlr.v4.gui.TestRig
    

Create the alias manually, or install the antlr package via Homebrew on macOS.

With grun, Kotlin code can be parsed, then displayed in different ways, textual and graphical. The following expects input on the console:

    cd target/classes
    grun Kotlin kotlinFile -tree
    

After you type valid Kotlin code, it yields the parse tree as text. Replacing the -tree option with the -gui option displays the tree graphically instead. For example, the following tree comes from this snippet:

    fun main(args : Array<String>) { 
        val firstName : String = "Adam"
        val name : String? = firstName 
        print("$name") 
    }
    

    In order for the JAR to be used later in the SonarQube plugin, it has been deployed on Bintray. In the next post, we will be doing proper code analysis to check for violations.

Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin, ANTLR
  • Kotlin for front-end developers

    Kotlin logo

:revdate: 2017-04-09 16:00:00 +0200

I assume that readers are already familiar with the https://kotlinlang.org/[Kotlin^] language. As advertised on the site:

[quote,Headline from the Kotlin website]
____
Statically typed programming language for the JVM, Android and the browser. 100% interoperable with Java™
____

Kotlin first penetrated the Android ecosystem and has seen huge adoption there. There's also a growing trend on the JVM, via Spring Boot. Since its 1.1 release, Kotlin also offers a production-grade Kotlin-to-JavaScript transpiler. This post is dedicated to the latter.

IMHO, the biggest issue with the process of transpiling Kotlin to JavaScript is that the https://kotlinlang.org/docs/tutorials/javascript/kotlin-to-javascript/kotlin-to-javascript.html[documentation^] is aimed only at server-side build tools such as Maven and Gradle - or worse, the IntelliJ IDEA IDE. Those work very well for backend-driven projects or prototypes, but are no incentive to front-end developers whose bread and butter are npm, Grunt/Gulp, yarn, etc.

Let's fix that by creating a simple Google Maps page using Kotlin.

    == Managing the build file

    [NOTE]

    This section assumes:

    • The Grunt command-line has already been installed, via npm install -g grunt-cli
    • The kotlin package has been installed and the kotlin compiler is on the path. I personally use Homebrew - brew install kotlin

    The build file looks like the following:

[source,javascript]
.Gruntfile.js
----
module.exports = function (grunt) {

    grunt.initConfig({
        clean: 'dist/**',
        exec: {
            cmd: 'kotlinc-js src -output dist/script.js' // <1>
        },
        copy: { // <2>
            files: {
                expand: true,
                flatten: true,
                src: ['src/**/*.html', 'src/**/*.css'],
                dest: 'dist'
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-clean');
    grunt.loadNpmTasks('grunt-contrib-copy');
    grunt.loadNpmTasks('grunt-exec');

    grunt.registerTask('default', ['exec', 'copy']);
};
----
    

<1> Transpiles all Kotlin files found in the src folder to a single JavaScript file
<2> Copies CSS and HTML files to the dist folder

    == Bridging Google Maps API in Kotlin

    The following snippet creates a Map object using Google Maps API which will be displayed on a specific div element:

[source,javascript]
----
var element = document.getElementById('map');
new google.maps.Map(element, {
    center: { lat: 46.2050836, lng: 6.1090691 },
    zoom: 8
});
----

    Like in TypeScript, there must be a thin Kotlin adapter to bridge the original JavaScript API. Unlike in TypeScript, there’s no https://github.com/DefinitelyTyped/DefinitelyTyped[existing repository^] of such adapters. The following is a naive first draft:

[source,kotlin]
----
external class Map(element: Element?, options: Json?)
----

    NOTE: The external keyword is used to tell the transpiler the function body is implemented in another file - or library in that case.

The first issue comes from a name collision: Map is an existing Kotlin class and is imported by default. Fortunately, the @JsName annotation allows translating the name at transpile time.

[source,kotlin]
.gmaps.kt
----
@JsName("Map")
external class GoogleMap(element: Element?, options: Json?)
----

    The second issue occurs because the original API is in a namespace: the object is not Map, but google.maps.Map. The previous annotation doesn’t allow for dots, but a combination of other annotations can do the trick:

[source,kotlin]
./google/maps/gmaps.kt
----
@JsModule("google")
@JsQualifier("maps")
@JsName("Map")
external class GoogleMap(element: Element?, options: Json?)
----

    This still doesn’t work - it doesn’t even compile, as @JsQualifier cannot be applied to a class. The final working code is:

[source,kotlin]
./google/maps/gmaps.kt
----
@file:JsModule("google")
@file:JsQualifier("maps")
@file:JsNonModule

package google.maps

@JsName("Map")
external class GoogleMap(element: Element?, options: Json?)
----

    == Calling Google Maps

Calling the above code is quite straightforward, but note that the second parameter of the constructor is of type Json. That is a far cry from the strongly-typed code that was the whole point of using Kotlin. To address that, let's create real types:

[source,kotlin]
----
internal class MapOptions(val center: LatitudeLongitude, val zoom: Byte)
internal class LatitudeLongitude(val latitude: Double, val longitude: Double)
----

And with Kotlin's extension functions - and the out-of-the-box json() function - let's make them able to serialize themselves to JSON:

[source,kotlin]
----
internal fun LatitudeLongitude.toJson() = json("lat" to latitude, "lng" to longitude)
internal fun MapOptions.toJson() = json("center" to center.toJson(), "zoom" to zoom)
----

    This makes it possible to write the following:

[source,kotlin]
----
fun initMap() {
    val div = document.getElementById("map")
    val latLng = LatitudeLongitude(latitude = -34.397, longitude = 150.644)
    val options = MapOptions(center = latLng, zoom = 8)
    GoogleMap(element = div, options = options.toJson())
}
----

    == Refinements

We could stop at this point, with the feeling of having achieved something. But Kotlin allows much more.

The low-hanging fruit is to move the JSON serialization into the Map constructor:

[source,kotlin]
----
internal class KotlinGoogleMap(element: Element?, options: MapOptions) : GoogleMap(element, options.toJson())

KotlinGoogleMap(element = div, options = options)
----

    == Further refinements

    The “domain” is quite suited to be written using a DSL. Let’s update the “API”:

[source,kotlin]
----
external open class GoogleMap(element: Element?) {
    fun setOptions(options: Json)
}

internal class MapOptions {
    lateinit var center: LatitudeLongitude
    var zoom: Byte = 1
    fun center(init: LatitudeLongitude.() -> Unit) {
        center = LatitudeLongitude().apply(init)
    }
    fun toJson() = json("center" to center.toJson(), "zoom" to zoom)
}

internal class LatitudeLongitude() {
    var latitude: Double = 0.0
    var longitude: Double = 0.0
    fun toJson() = json("lat" to latitude, "lng" to longitude)
}

internal class KotlinGoogleMap(element: Element?) : GoogleMap(element) {
    fun options(init: MapOptions.() -> Unit) {
        val options = MapOptions().apply(init)
        setOptions(options = options.toJson())
    }
}

internal fun kotlinGoogleMap(element: Element?, init: KotlinGoogleMap.() -> Unit) = KotlinGoogleMap(element).apply(init)
----

The client code can now be written as:

[source,kotlin]
----
fun initMap() {
    val div = document.getElementById("map")
    kotlinGoogleMap(div) {
        options {
            zoom = 6
            center {
                latitude = 46.2050836
                longitude = 6.1090691
            }
        }
    }
}
----

    == Conclusion

Though the documentation is rather terse, it's possible to transpile Kotlin code to JavaScript using only the JavaScript ecosystem. Granted, bridging existing libraries is a chore, but it's a one-time effort that will shrink as the community starts sharing its work. On the other hand, the same features that make Kotlin a great language on the server side - e.g. writing a DSL - also benefit the front-end.

Categories: Development Tags: JavaScript, Kotlin, front-end
  • Coping with stringly-typed

    UPDATED on March 13, 2017: Add Builder pattern section

Most developers have strong opinions regarding whether a language should be strongly-typed or weakly-typed, whatever notions they put behind those terms. Some also actively practice stringly-typed programming - mostly without even being aware of it. It happens when most attributes and parameters of a codebase are String. In this post, I will use the following simple snippet as an example:

    public class Person {
    
        private final String title;
        private final String givenName;
        private final String familyName;
        private final String email;
      
        public Person(String title, String givenName, String familyName, String email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
        ...
    }
    

    The original sin

    The problem with that code is that it’s hard to remember which parameter represents what and in which order they should be passed to the constructor.

    Person person = new Person("[email protected]", "John", "Doe", "Sir");
    

    In the previous call, the email and the title parameter values were switched. Ooops.

    This is even worse if more than one constructor is available, offering optional parameters:

    public Person(String givenName, String familyName, String email) {
        this(null, givenName, familyName, email);
    }
    
    Person another = new Person("Sir", "John", "Doe");
    

    In that case, title was the optional parameter, not email. My bad.

    Solving the problem the OOP way

    Object-Oriented Programming and its advocates have a strong aversion to stringly-typed code for good reasons. Since everything in the world has a specific type, so must it be in the system.

    Let’s rewrite the previous code à la OOP:

    public class Title {
        private final String value;
        public Title(String value) {
        	this.value = value;
        }
    }
    
    public class GivenName {
        private final String value;
public GivenName(String value) {
        	this.value = value;
        }
    }
    
    public class FamilyName {
        private final String value;
public FamilyName(String value) {
        	this.value = value;
        }
    }
    
    public class Email {
        private final String value;
        public Email(String value) {
        	this.value = value;
        }
    }
    
    public class Person {
    
        private final Title title;
        private final GivenName givenName;
        private final FamilyName familyName;
        private final Email email;
      
        public Person(Title title, GivenName givenName, FamilyName familyName, Email email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
        ...
    }
    
    
Person person = new Person(new Title(null), new GivenName("John"), new FamilyName("Doe"), new Email("[email protected]"));
    

    That way drastically limits the possibility of mistakes. The drawback is a large increase in verbosity - which might lead to other bugs.

    Pattern to the rescue

    A common way to tackle this issue in Java is to use the Builder pattern. Let’s introduce a new builder class and rework the code:

    public class Person {
    
        private String title;
        private String givenName;
        private String familyName;
        private String email;
    
        private Person() {}
    
        private void setTitle(String title) {
            this.title = title;
        }
    
        private void setGivenName(String givenName) {
            this.givenName = givenName;
        }
    
        private void setFamilyName(String familyName) {
            this.familyName = familyName;
        }
    
        private void setEmail(String email) {
            this.email = email;
        }
    
        public static class Builder {
    
            private Person person;
    
            public Builder() {
                person = new Person();
            }
    
            public Builder title(String title) {
                person.setTitle(title);
                return this;
            }
    
            public Builder givenName(String givenName) {
                person.setGivenName(givenName);
                return this;
            }
    
            public Builder familyName(String familyName) {
                person.setFamilyName(familyName);
                return this;
            }
    
            public Builder email(String email) {
                person.setEmail(email);
                return this;
            }
    
            public Person build() {
                return person;
            }
        }
    }
    

    Note that in addition to the new builder class, the constructor of the Person class has been set to private. Using the Java language features, this allows only the Builder to create new Person instances. The same is used for the different setters.

    Using this pattern is quite straightforward:

Person person = new Person.Builder()
                   .title("Sir")
                   .givenName("John")
                   .familyName("Doe")
                   .email("[email protected]")
                   .build();
    

The builder pattern shifts the verbosity from the calling part to the design part. Not a bad trade-off.

    Languages to the rescue

    Verbosity is unfortunately the mark of Java. Some other languages (Kotlin, Scala, etc.) would be much more friendly to this approach, not only for class declarations, but also for object creation.

    Let’s port class declarations to Kotlin:

    class Title(val value: String?)
    class GivenName(val value: String)
    class FamilyName(val value: String)
    class Email(val value: String)
    
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    

    This is much better, thanks to Kotlin! And now object creation:

    val person = Person(Title(null), GivenName("John"), FamilyName("Doe"), Email("[email protected]"))
    

    For this, verbosity is only marginally decreased compared to Java.

    Named parameters to the rescue

    OOP fanatics may stop reading there, for their way is not the only one to cope with stringly-typed.

    One alternative is about named parameters, and is incidentally also found in Kotlin. Let’s get back to the original stringly-typed code, port it to Kotlin and use named parameters:

    class Person(val title: String?, val givenName: String, val familyName: String, val email: String)
    
    val person = Person(title = null, givenName = "John", familyName = "Doe", email = "[email protected]")
    
    val another = Person(email = "[email protected]", title = "Sir", givenName = "John", familyName = "Doe")
    

    A benefit of named parameters besides coping with stringly-typed code is that they are order-agnostic when invoking the constructor. Plus, they also play nice with default values:

    class Person(val title: String? = null, val givenName: String, val familyName: String, val email: String? = null)
    
    val person = Person(givenName = "John", familyName = "Doe")
    val another = Person(title = "Sir", givenName = "John", familyName = "Doe")
    

    Type aliases to the rescue

    While looking at Kotlin, let’s describe a feature released with 1.1 that might help.

A type alias is, as its name implies, a name for an existing type; the type can be a simple type, a collection, a lambda - whatever exists within the type system.
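As a quick self-contained illustration (the alias names here are mine, not part of the Person example), an alias can just as well name a collection or a lambda:

```kotlin
// A type alias can name any existing type, including collections and lambdas
typealias Names = List<String>
typealias Validator = (String) -> Boolean

val names: Names = listOf("John", "Jane")
val notBlank: Validator = { it.isNotBlank() }

fun main() {
    println(names.all(notBlank)) // prints "true"
}
```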

    Let’s create some type aliases in the stringly-typed world:

    typealias Title = String
typealias GivenName = String
    typealias FamilyName = String
    typealias Email = String
    
class Person(val title: Title?, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    
    val person = Person(null, "John", "Doe", "[email protected]")
    

The declaration seems more typed. Unfortunately, object creation doesn't bring any improvement.

Note the main problem of type aliases: they are just that - aliases. No new type is created, so if 2 aliases point to the same type, all 3 are interchangeable with one another.
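The following self-contained sketch (hypothetical names) shows the pitfall in action: the compiler happily accepts a FamilyName where a GivenName is expected.

```kotlin
typealias GivenName = String
typealias FamilyName = String

fun greet(givenName: GivenName) = "Hello $givenName"

fun main() {
    val family: FamilyName = "Doe"
    // No compile error: both aliases are just String under the hood
    println(greet(family)) // prints "Hello Doe"
}
```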

    Libraries to the rescue

    For the rest of this post, let’s go back to the Java language.

    Twisting the logic a bit, parameters can be validated at runtime instead of compile-time with the help of specific libraries. In particular, the Bean validation library does the job:

    public Person(@Title String title, @GivenName String givenName, @FamilyName String familyName, @Email String email) {
        this.title = title;
        this.givenName = givenName;
        this.familyName = familyName;
        this.email = email;
    }
    

    Admittedly, it’s not the best solution… but it works.

    Tooling to the rescue

    I have already written about tooling and that it’s as important (if not more) as the language itself.

    Tools fill gaps in languages, while being non-intrusive. The downside is that everyone has to use it (or find a tool with the same feature).

    For example, when I started my career, coding guidelines mandated developers to order methods by alphabetical order in the class file. Nowadays, that would be senseless, as every IDE worth its salt can display the methods of a class in order.

Likewise, named parameters can be a feature of the IDE, for languages that lack them. In particular, the latest versions of IntelliJ IDEA emulate named parameters for the Java language, for types that are deemed too generic. The following shows the Person class inside the IDE:

    Conclusion

While proper OOP design is the historical way to cope with stringly-typed code, it is also quite verbose and unwieldy in Java. This post described alternatives, each with specific pros and cons. They need to be evaluated in one's own specific context to decide which one is the best fit.

  • HTTP headers forwarding in microservices

    Spring Cloud logo

Microservices are not a trend anymore. Like it or not, they are here to stay. Yet, there's a huge gap between embracing the microservice architecture and implementing it right. As a reminder, one might first want to check the many fallacies of distributed computing. Among all the requirements necessary to overcome them is the ability to follow one HTTP request along the microservices involved in a specific business scenario - for monitoring and debugging purposes.

One possible implementation is a dedicated HTTP header with an immutable value passed along every microservice involved in the call chain. This week, I developed this sort of monitoring on a Spring microservice and would like to share how to achieve it.

    Headers forwarding out-of-the-box

In the Spring ecosystem, Spring Cloud Sleuth is the library dedicated to that:

    Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud, borrowing heavily from Dapper, Zipkin and HTrace. For most users Sleuth should be invisible, and all your interactions with external systems should be instrumented automatically. You can capture data simply in logs, or by sending it to a remote collector service.

    Within Spring Boot projects, adding the Spring Cloud Sleuth library to the classpath will automatically add 2 HTTP headers to all calls:

    X-B3-Traceid
    Shared by all HTTP calls of a single transaction i.e. the wished-for transaction identifier
    X-B3-Spanid
    Identifies the work of a single microservice during a transaction

    Spring Cloud Sleuth offers some customisation capabilities, such as alternative header names, at the cost of some extra code.

    Diverging from out-of-the-box features

Those features are quite handy when starting from a clean-slate architecture. Unfortunately, the project I'm working on has a different context:

    • The transaction ID is not created by the first microservice in the call chain - a mandatory façade proxy does
    • The transaction ID is not numeric - and Sleuth handles only numeric values
    • Another header is required. Its objective is to group all requests related to one business scenario across different call chains
    • A third header is necessary. It’s to be incremented by each new microservice in the call chain

    A solution architect’s first move would be to check among API management products, such as Apigee (recently bought by Google) and search which one offers the feature matching those requirements. Unfortunately, the current context doesn’t allow for that.

    Coding the requirements

    In the end, I ended up coding the following using the Spring framework:

    1. Read and store headers from the initial request
    2. Write them in new microservice requests
    3. Read and store headers from the microservice response
    4. Write them in the final response to the initiator, not forgetting to increment the call counter

    UML modeling of the header flow

    Headers holder

    The first step is to create the entity responsible to hold all necessary headers. It’s unimaginatively called HeadersHolder. Blame me all you want, I couldn’t find a more descriptive name.

    private const val HOP_KEY = "hop"
    private const val REQUEST_ID_KEY = "request-id"
    private const val SESSION_ID_KEY = "session-id"
    
    data class HeadersHolder (var hop: Int?,
                              var requestId: String?,
                              var sessionId: String?)
    

The interesting part is deciding which scope is the most relevant for instances of this class. Obviously, there must be several instances, so singleton is not suitable. Also, since data must be stored across several requests, prototype won't do either. In the end, the only possible way to manage the instance is through a ThreadLocal.
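For reference, managing the instance by hand would look like the following minimal sketch (plain Kotlin, no Spring - the currentHeaders() helper is hypothetical, and the holder gets default values for brevity):

```kotlin
// One HeadersHolder instance per thread, created lazily on first access
data class HeadersHolder(var hop: Int? = null,
                         var requestId: String? = null,
                         var sessionId: String? = null)

private val holderThreadLocal: ThreadLocal<HeadersHolder> =
    ThreadLocal.withInitial { HeadersHolder() }

// Returns the holder bound to the calling thread
fun currentHeaders(): HeadersHolder = holderThreadLocal.get()

fun main() {
    currentHeaders().requestId = "42"
    println(currentHeaders().requestId) // prints "42" - but only on this thread
}
```

Spring's thread scope wraps exactly this kind of mechanism, with the benefit that the holder can be injected like any other bean.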

Though it's possible to manage the ThreadLocal by hand, let's leverage Spring's features instead, since the framework makes it easy to add new scopes. There's already an out-of-the-box scope implementation for ThreadLocal; one just needs to register it in the context. This directly translates into the following code:

    internal const val THREAD_SCOPE = "thread"
    
    @Scope(THREAD_SCOPE)
    annotation class ThreadScope
    
    @Configuration
    open class WebConfigurer {
    
        @Bean @ThreadScope
        open fun headersHolder() = HeadersHolder()
    
        @Bean open fun customScopeConfigurer() = CustomScopeConfigurer().apply {
            addScope(THREAD_SCOPE, SimpleThreadScope())
        }
    }
    

    Headers in the server part

Let's implement requirements 1 & 4 above: read headers from the request and write them to the response. Headers also need to be reset after each request-response cycle, to prepare for the next one.

This mandates updating the holder class to be more OOP-friendly:

    data class HeadersHolder private constructor (private var hop: Int?,
                                                  private var requestId: String?,
                                                  private var sessionId: String?) {
        constructor() : this(null, null, null)
    
        fun readFrom(request: HttpServletRequest) {
            this.hop = request.getIntHeader(HOP_KEY)
            this.requestId = request.getHeader(REQUEST_ID_KEY)
            this.sessionId = request.getHeader(SESSION_ID_KEY)
        }
    
        fun writeTo(response: HttpServletResponse) {
            hop?.let { response.addIntHeader(HOP_KEY, hop as Int) }
            response.addHeader(REQUEST_ID_KEY, requestId)
            response.addHeader(SESSION_ID_KEY, sessionId)
        }
    
        fun clear() {
            hop = null
            requestId = null
            sessionId = null
        }
    }
    

To keep controllers free from any header-management concern, the related code should live in a filter or a similar component. In the Spring MVC ecosystem, this translates into an interceptor.

    abstract class HeadersServerInterceptor : HandlerInterceptorAdapter() {
    
        abstract val headersHolder: HeadersHolder
    
        override fun preHandle(request: HttpServletRequest,
                               response: HttpServletResponse, handler: Any): Boolean {
            headersHolder.readFrom(request)
            return true
        }
    
        override fun afterCompletion(request: HttpServletRequest, response: HttpServletResponse,
                                     handler: Any, ex: Exception?) {
            with (headersHolder) {
                writeTo(response)
                clear()
            }
        }
    }
    
    @Configuration open class WebConfigurer : WebMvcConfigurerAdapter() {
    
        override fun addInterceptors(registry: InterceptorRegistry) {
            registry.addInterceptor(object : HeadersServerInterceptor() {
                override val headersHolder: HeadersHolder
                    get() = headersHolder()
            })
        }
    }
    

    Note the invocation of the clear() method to reset the headers holder for the next request.

The most important bit is the abstract headersHolder property. As its scope - thread - is narrower than the interceptor's, it cannot be injected directly, lest it be injected only once, during Spring's context startup. Hence, Spring provides lookup method injection. The above code is its direct translation in Kotlin.

    Headers in client calls

The previous code assumes the current microservice sits at the end of the call chain: it reads request headers and writes them back in the response (not forgetting to increment the 'hop' counter). However, monitoring is relevant only for a call chain with more than a single link. How can headers be passed to the next microservice (and read back from its response) - requirements 2 & 3 above?

Spring provides a handy abstraction to handle the client part - ClientHttpRequestInterceptor - which can be registered on a REST template. Regarding the scope mismatch, the same injection trick as for the server interceptor above is used.

    abstract class HeadersClientInterceptor : ClientHttpRequestInterceptor {
    
        abstract val headersHolder: HeadersHolder
    
        override fun intercept(request: HttpRequest, 
                               body: ByteArray, execution: ClientHttpRequestExecution): ClientHttpResponse {
            with(headersHolder) {
                writeTo(request.headers)
                return execution.execute(request, body).apply {
                    readFrom(this.headers)
                }
            }
        }
    }
    
    @Configuration
    open class WebConfigurer : WebMvcConfigurerAdapter() {
    
        @Bean open fun headersClientInterceptor() = object : HeadersClientInterceptor() {
            override val headersHolder: HeadersHolder
                get() = headersHolder()
        }
    
        @Bean open fun oAuth2RestTemplate() = OAuth2RestTemplate(clientCredentialsResourceDetails()).apply {
            interceptors = listOf(headersClientInterceptor())
        }
    }
    

    With this code, every REST call using the oAuth2RestTemplate() will have headers managed automatically by the interceptor.

    The HeadersHolder just needs a quick update:

    data class HeadersHolder private constructor (private var hop: Int?,
                                                  private var requestId: String?,
                                                  private var sessionId: String?) {
    
        fun readFrom(headers: org.springframework.http.HttpHeaders) {
            headers[HOP_KEY]?.let {
                it.getOrNull(0)?.let { this.hop = it.toInt() }
            }
            headers[REQUEST_ID_KEY]?.let { this.requestId = it.getOrNull(0) }
            headers[SESSION_ID_KEY]?.let { this.sessionId = it.getOrNull(0) }
        }
    
        fun writeTo(headers: org.springframework.http.HttpHeaders) {
            hop?.let { headers.add(HOP_KEY, hop.toString()) }
            headers.add(REQUEST_ID_KEY, requestId)
            headers.add(SESSION_ID_KEY, sessionId)
        }
    }
    

    Conclusion

    Spring Cloud offers many components that can be used out-of-the-box when developing microservices. When requirements start to diverge from what it provides, the flexibility of the underlying Spring Framework can be leveraged to code those requirements.