Posts Tagged ‘plugin’
  • A SonarQube plugin for Kotlin - Creating the plugin proper

    SonarQube Continuous Inspection logo

    This is the 3rd post in a series about creating a SonarQube plugin for the Kotlin language:

    • The first post was about creating the parsing code itself.
    • The 2nd post detailed how to use the parsing code to check for two rules.

    In this final post, we will be creating the plugin proper using the code of the 2 previous posts.

    The Sonar model

    The Sonar model is based on the following abstractions:

    • Plugin: the entry-point for plugins to inject extensions into SonarQube. A plugin points to the other abstraction instances to make the SonarQube platform load them.

    • Language: pretty self-explanatory. Represents a language - Java, C#, Kotlin, etc.

    • ProfileDefinition: defines a profile which is automatically registered during Sonar startup. A profile is a mutable set of fully-configured rules. While not strictly necessary, having a Sonar profile pre-registered allows users to analyze their code without further configuration. Every language plugin offered by Sonar has at least one profile attached.

    • RulesDefinition: defines some coding rules of the same repository - an immutable set of rule definitions. While a rule definition defines available parameters, default severity, etc., the rule (from the profile) defines the exact value for parameters, a specific severity, etc. In short, the rule implements the rule definition.

    • Sensor: invoked once for each module of a project, starting from leaf modules. A sensor can parse a flat file, connect to a web server…​ Sensors are used to add measures and issues at file level. The sensor is the entry-point where the magic happens.
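
    The definition/rule split above can be made concrete with a small self-contained sketch. The class and property names below are illustrative only, not part of the SonarQube API:

    ```kotlin
    // Illustrative model: the definition carries the defaults,
    // the rule activated in a profile carries the configured values
    data class RuleDefinition(val key: String, val defaultSeverity: String)

    data class ActiveRule(val definition: RuleDefinition, val severity: String)

    fun main() {
        val definition = RuleDefinition("kotlin:no-explicit-unit", "MINOR")
        // the profile activates the rule and may override the default severity
        val rule = ActiveRule(definition, severity = "MAJOR")
        println("${rule.definition.key} -> ${rule.severity}")
    }
    ```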

    sonarqube api class diagram

    Starting to code the plugin

    Every abstraction above needs a concrete subclass. Note that the API classes themselves are all fairly decoupled. It’s the role of the Plugin child class to bind them together.

    class KotlinPlugin : Plugin {
        override fun define(context: Context) {
            // bind the concrete extension classes together (names assumed from this plugin)
            context.addExtensions(KotlinLanguage::class.java, KotlinProfile::class.java, KotlinRulesDefinition::class.java, KotlinSensor::class.java)
        }
    }

    Most of the code is boilerplate, except for the ANTLR-related parts.

    Wiring the ANTLR parsing code

    On one hand, the parsing code is based on generated listeners. On the other hand, the sensor is the entry-point to the SonarQube parsing. There’s a need for a bridge between the 2.

    In the first article, we used an existing grammar for Kotlin to generate parsing code. SonarQube provides its own lexer/parser generating tool (SonarSource Language Recognizer). A sizeable part of the plugin API is based on it. Describing the grammar is no small feat for any real-life language, so I preferred to design my own adapter code instead.


    • AbstractKotlinParserListener: a subclass of the generated ANTLR KotlinParserBaseListener. It has an attribute to store violations, and a method to add such a violation.

    • Violation: contains only the line number, as the rest of the required information will be stored in a KotlinCheck instance.

    • KotlinCheck: an abstract class that wraps an AbstractKotlinParserListener and defines what constitutes a violation. It handles the ANTLR boilerplate code itself.
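
    Stripped of the actual ANTLR types, the trio described above can be modeled in a simplified, self-contained form (the real listener extends the generated KotlinParserBaseListener; only the violation plumbing is sketched here):

    ```kotlin
    // Simplified stand-ins for the classes described above
    data class Violation(val lineNumber: Int)

    open class AbstractKotlinParserListener {
        val violations = mutableListOf<Violation>()
        fun addViolation(violation: Violation) {
            violations.add(violation)
        }
    }

    abstract class KotlinCheck(private val listener: AbstractKotlinParserListener) {
        abstract val message: String
        // the real class also runs the ANTLR walk before reporting
        fun violations(): List<Violation> = listener.violations
    }
    ```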

    This can be represented as the following:

    kotlin plugin class diagram

    The sensor proper

    The general pseudo-code should look something akin to:

    FOR EACH source file
        FOR EACH rule
            Check for violation of the rule
            FOR EACH violation
                Call the SonarQube REST API to create a violation in the datastore

    This translates as:

    class KotlinSensor(private val fs: FileSystem) : Sensor {
        val sources: Iterable<InputFile>
            get() = fs.inputFiles(MAIN)
        override fun execute(context: SensorContext) {
            sources.forEach { inputFile: InputFile ->
                KotlinChecks.checks.forEach { check ->
                    val violations = check.violations(inputFile.file())
                    violations.forEach { (lineNumber) ->
                        with(context.newIssue().forRule(check.ruleKey())) {
                            val location = newLocation().apply {
                                on(inputFile)
                                at(inputFile.selectLine(lineNumber))
                                message(check.message) // message text held by the check (assumed)
                            }
                            at(location).save()
                        }
                    }
                }
            }
        }
    }

    Finally, the run

    Let’s create a dummy Maven project with 2 classes, Test1 and Test2 in one Test.kt file, with the same code as last week. Running mvn sonar:sonar yields the following output:

    SonarQube screenshot

    Et voilà, our first SonarQube plugin for Kotlin, checking for our custom-developed violations.

    Of course, it has (a lot of) room for improvement:

    • Rules need to be activated through the GUI - I couldn’t find how to do it programmatically
    • Adding new rules requires updating the plugin. Rules from third-party plugins are not picked up automatically, as can be the case for standard SonarQube plugins.
    • So far, code located outside of classes seems not to be parsed.
    • The walk through the parse tree is executed for every check. An obvious performance gain would be to walk only once and do every check from there.
    • A lot of the above improvements could be achieved by replacing the ANTLR grammar with Sonar’s internal SSLR
    • No tests…​
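
    The single-walk improvement can be sketched independently of ANTLR; Node and Check below are simplified stand-ins for the parse-tree and check types:

    ```kotlin
    class Node(val name: String, val children: List<Node> = emptyList())

    fun interface Check {
        fun onNode(node: Node)
    }

    // walk the tree once, dispatching every node to every registered check,
    // instead of re-walking the whole tree once per check
    fun walkOnce(root: Node, checks: List<Check>) {
        checks.forEach { it.onNode(root) }
        root.children.forEach { child -> walkOnce(child, checks) }
    }

    fun main() {
        val tree = Node("kotlinFile", listOf(Node("function1"), Node("function2")))
        walkOnce(tree, listOf(Check { println(it.name) }))
    }
    ```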

    That still makes the project a nice starting point for a full-fledged Kotlin plugin. Pull requests are welcome!

    Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin, ANTLR
  • A SonarQube plugin for Kotlin - Analyzing with ANTLR

    SonarQube Continuous Inspection logo

    Last week, we used ANTLR to generate a library to be able to analyze Kotlin code. It’s time to use the generated API to check for specific patterns.

    API overview

    Let’s start by having a look at the generated API:

    • KotlinLexer: Executes lexical analysis.
    • KotlinParser: Wraps classes representing all Kotlin tokens, and handles parsing errors.
    • KotlinParserVisitor: Contract for implementing the Visitor pattern on Kotlin code. KotlinParserBaseVisitor is its empty implementation, to ease the creation of subclasses.
    • KotlinParserListener: Contract for callback-related code when visiting Kotlin code, with KotlinParserBaseListener its empty implementation.
    parser lexer class diagram

    Class diagrams are not the greatest tool to ease the writing of code. The following snippet is a very crude analysis implementation. I’ll be using Kotlin, but any JVM language interoperable with Java could be used:

    val stream = CharStreams.fromString("fun main(args : Array<String>) {}")                      (1)
    val lexer = KotlinLexer(stream)                                                               (2)
    val tokens = CommonTokenStream(lexer)                                                         (3)
    val parser = KotlinParser(tokens)                                                             (4)
    val context = parser.kotlinFile()                                                             (5)
    ParseTreeWalker().apply {                                                                     (6)
        walk(object : KotlinParserBaseListener() {                                                (7)
            override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) { (8)
                println(ctx.SimpleName().text)                                                    (9)
            }
        }, context)
    }

    Here’s the explanation:

    1 Create a CharStream to feed the lexer on the next line. The CharStreams class offers plenty of static fromXXX() methods, each accepting a different type (String, InputStream, etc.)
    2 Instantiate the lexer, with the stream
    3 Instantiate a token stream over the lexer. The class provides streaming capabilities over the lexer.
    4 Instantiate the parser, with the token stream
    5 Define the entry point into the code. In that case, it’s a Kotlin file - and probably will be for the plugin.
    6 Create the overall walker that will visit each node in turn
    7 Start the visiting process by calling walk and passing the desired behavior as an object
    8 Override the desired function. Here, it will be invoked every time a function node is entered
    9 Do whatever is desired e.g. print the function name

    Obviously, lines 1 to 7 are just boilerplate to wire all components together. The behavior that needs to be implemented replaces lines 8 and 9.
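
    The walker/listener mechanism itself can be modeled without ANTLR, to make the division of labor clearer; the types here are simplified stand-ins for the generated ones:

    ```kotlin
    // Minimal model of ANTLR's walk: the walker traverses the tree and fires
    // enterXXX() callbacks; subclasses override only the callbacks they care about
    open class BaseListener {
        open fun enterFunctionDeclaration(name: String) {}
    }

    class TreeNode(val functionName: String? = null, val children: List<TreeNode> = emptyList())

    fun walk(listener: BaseListener, node: TreeNode) {
        node.functionName?.let { listener.enterFunctionDeclaration(it) }
        node.children.forEach { walk(listener, it) }
    }

    fun main() {
        val tree = TreeNode(children = listOf(TreeNode(functionName = "main")))
        walk(object : BaseListener() {
            override fun enterFunctionDeclaration(name: String) = println(name)
        }, tree)
    }
    ```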

    First simple check

    In Kotlin, if a function returns Unit - nothing - then explicitly declaring its return type is optional. It would make a great rule to check that there’s no such explicit return type. The following snippets, both valid Kotlin code, are equivalent - one with an explicit return type and the other without:

    fun hello1(): Unit { }

    fun hello2() { }

    Let’s use grun to graphically display the parse tree (grun was explained in the previous post). It yields the following:

    Parse tree returns Unit

    As can be seen, the snippet with an explicit return type has a type branch under functionDeclaration. This is confirmed by the snippet from the KotlinParser ANTLR grammar file:

    functionDeclaration
      : modifiers 'fun' typeParameters?
          (type '.' | annotations)?
          typeParameters? valueParameters (':' type)?

    The rule should check that if such a return type exists, then it shouldn’t be Unit. Let’s update the above code with the desired effect:

    ParseTreeWalker().apply {
        walk(object : KotlinParserBaseListener() {
            override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
                if (ctx.type().isNotEmpty()) {                                       (1)
                    val typeContext = ctx.type(0)                                    (2)
                    with(typeContext.typeDescriptor().userType().simpleUserType()) { (3)
                        val typeName = this[0].SimpleName()
                        if (typeName.symbol.text == "Unit") {                        (4)
                            println("Found Unit as explicit return type " +          (5)
                                "in function ${ctx.SimpleName()} at line ${typeName.symbol.line}")
                        }
                    }
                }
            }
        }, context)
    }

    Here’s the explanation:

    1 Check there’s an explicit return type, whatever it is
    2 Strangely enough, the grammar allows for a multi-valued return type. Just take the first one.
    3 Follow the parse tree up to the final type name - refer to the above parse tree screenshot for a graphical representation of the path.
    4 Check that the return type is Unit
    5 Print a message to the console. In the next step, we will call the SonarQube API there.

    Running the above code correctly yields the following output:

    Found Unit as explicit return type in function hello1 at line 1

    A more advanced check

    In Kotlin, the following snippets are all equivalent:

    fun hello1(name: String): String {
        return "Hello $name"
    }

    fun hello2(name: String): String = "Hello $name"

    fun hello3(name: String) = "Hello $name"

    Note that in the last case, the return type can be inferred by the compiler and omitted by the developer. That would make a good check: in the case of an expression body, the return type should be omitted. The same technique as above can be used:

    1. Display the parse tree from the snippet using grun:
    2. Check for differences. Obviously:
      • Functions that do not have an explicit return type miss a type node in the functionDeclaration tree, as above
      • Functions with an expression body have a functionBody whose first child is = and whose second child is an expression
    3. Refer to the initial grammar, to make sure all cases are covered.
      functionBody
        : block
        | '=' expression
        ;
    4. Code!
      ParseTreeWalker().apply {
          walk(object : KotlinParserBaseListener() {
              override fun enterFunctionDeclaration(ctx: KotlinParser.FunctionDeclarationContext) {
                  val bodyChildren = ctx.functionBody().children
                  if (bodyChildren.size > 1
                          && bodyChildren[0] is TerminalNode && bodyChildren[0].text == "="
                          && ctx.type().isNotEmpty()) {
                      val firstChild = bodyChildren[0] as TerminalNode
                      println("Found explicit return type for expression body " +
                              "in function ${ctx.SimpleName()} at line ${firstChild.symbol.line}")
                  }
              }
          }, context)
      }

    The code is pretty self-explanatory and yields the following:

    Found explicit return type for expression body in function hello2 at line 5
    Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin
  • A SonarQube plugin for Kotlin - Paving the way

    SonarQube Continuous Inspection logo

    Since I started my journey into Kotlin, I’ve wanted to use the same libraries and tools I use in Java. For libraries - Spring Boot, Mockito, etc. - it’s straightforward, as Kotlin is 100% interoperable with Java. For tools, well, it depends. For example, Jenkins works flawlessly, while SonarQube lacks a dedicated plugin. The SonarSource team has limited resources: Kotlin, though on the rise - and even more so since Google I/O 17 - is not in their pipeline. This post series is about creating such a plugin, and this first post is about parsing Kotlin code.


    In the realm of code parsing, ANTLR is a clear leader in the JVM world.

    ANTLR (ANother Tool for Language Recognition) is a powerful parser generator for reading, processing, executing, or translating structured text or binary files. It’s widely used to build languages, tools, and frameworks. From a grammar, ANTLR generates a parser that can build and walk parse trees.

    Designing the grammar

    ANTLR is able to generate parsing code for any language thanks to a dedicated grammar file. However, creating such a grammar from scratch for regular languages is not trivial. Fortunately, thanks to the power of the community, a grammar for Kotlin already exists on Github.

    With this existing grammar, ANTLR is able to generate Java parsing code to be used by the SonarQube plugin. The steps are the following:

    • Clone the GitHub repository
      git clone git@github.com:antlr/grammars-v4.git
    • By default, classes will be generated under the root package, which is discouraged. To change that default:
      • Create a src/main/antlr4/<fully>/<qualified>/<package> folder such as src/main/antlr4/ch/frankel/sonarqube/kotlin/api
      • Move the g4 files there
      • In the POM, remove the sourceDirectory and includes bits from the antlr4-maven-plugin configuration to use the default
    • Build and install the JAR in the local Maven repo
      cd grammars-v4/kotlin
      mvn install

    This should generate a KotlinLexer and a KotlinParser class, as well as several related classes in target/classes. As Maven goes, it also packages them in a JAR named kotlin-1.0-SNAPSHOT.jar in the target folder - and in the local Maven repo as well.

    Testing the parsing code

    To test the parsing code, one can use the grun command. It’s an alias for the following:

    java -Xmx500M -cp "<path/to/antlr/complete/>.jar:$CLASSPATH" org.antlr.v4.gui.TestRig

    Create the alias manually or install the antlr package via Homebrew on OSX.

    With grun, Kotlin code can be parsed and then displayed in different ways, textual and graphical. The following expects an input in the console:

    cd target/classes
    grun Kotlin kotlinFile -tree

    After having typed valid Kotlin code, it yields its parse tree in text. By replacing the -tree option with the -gui option, it displays the tree graphically instead. For example, the following tree comes from this snippet:

    fun main(args : Array<String>) {
        val firstName : String = "Adam"
        val name : String? = firstName
    }

    AST Inspector

    In order for the JAR to be used later in the SonarQube plugin, it has been deployed on Bintray. In the next post, we will be doing proper code analysis to check for violations.

    Categories: Technical Tags: code quality, SonarQube, Kotlin, plugin, ANTLR
  • Starting Logstash plugin development for Java developers

    Logstash old logo

    I recently became interested in Logstash, and after playing with it for a while, I decided to create my own custom plugin for learning purposes. I chose to pull data from Reddit because a) I use it often and b) there’s no existing plugin that offers that.

    The Elasticsearch site offers quite exhaustive documentation on creating one’s own Logstash plugin. Such an endeavour requires Ruby skills - not only the language syntax but also the ecosystem. Expectedly, the site assumes the reader is familiar with both. Unfortunately, that’s not my case. I’ve been developing in Java a lot, I’ve dabbled somewhat in Scala, I’m quite interested in Kotlin - in the end, I’m just a JVM developer (plus some JavaScript here and there). Long story short, I started from scratch in Ruby.

    At this stage, there are two possible approaches:

    1. Read documentation and tutorials about Ruby, Gems, Bundler, the whole nine yards, and come back in a few months (or more)
    2. Or learn on the spot by diving right into development

    Given that I don’t have months, and that whatever I learn along the way is good enough, I opted for the 2nd option. This post is a sum-up of the steps I went through, in the hope it might benefit others who find themselves in the same situation.

    The first step is not the hardest

    Though new Logstash plugins can be started from scratch, the documentation advises starting from a template. This is explained in the online procedure. The generation yields the following structure:

    $ tree logstash-input-reddit
    ├── Gemfile
    ├── LICENSE
    ├── Rakefile
    ├── lib
    │   └── logstash
    │       └── inputs
    │           └── reddit.rb
    ├── logstash-input-reddit.gemspec
    └── spec
        └── inputs
            └── reddit_spec.rb

    Not so obvious for a Ruby newbie: this structure is that of a Ruby gem. In general, dependencies are declared in the associated Gemfile:

    source 'https://rubygems.org'

    However, in this case, the gemspec directive adds one additional level of indirection: not only dependencies but also metadata are declared in the associated gemspec file. This is a feature of the Bundler utility gem.
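
    As an illustration of that indirection, a gemspec is itself plain Ruby. A minimal sketch (fields abridged; name and version as in this project, summary wording mine) could look like:

    ```ruby
    # A gemspec declares metadata and dependencies; the Gemfile's `gemspec`
    # directive pulls them in at `bundle install` time.
    spec = Gem::Specification.new do |s|
      s.name    = 'logstash-input-reddit'
      s.version = '0.1.0'
      s.summary = 'Logstash input plugin that pulls posts from Reddit'
      s.add_runtime_dependency 'logstash-core-plugin-api', '~> 2.0'
    end
    puts spec.name
    ```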

    To install dependencies, the bundler gem first needs to be installed. Aye, there’s the rub…​

    Ruby is the limit

    Trying to install the gem yields the following:

    gem install bundler
    Fetching: bundler-1.13.6.gem (100%)
    ERROR:  While executing gem ... (TypeError)
        no implicit conversion of nil into String

    The first realization - and it took a lot of time (browsing and reading) - is that there are different flavours of Ruby runtimes. Plain Ruby is not enough for Logstash plugin development: it requires a dedicated runtime that runs on the JVM, aka JRuby.

    The second realization is that while it’s easy to install multiple Ruby runtimes on a machine, it’s hard to have them coexist. While Homebrew makes the jruby package available, there seems to be only a single gem repository per system, and it reacts very poorly to being managed by different runtimes.

    After some more browsing, I found the solution: rbenv. It not only manages Ruby itself, but also all associated executables (gem, irb, rake, etc.) by isolating every runtime. This makes it possible to run my Jekyll site with the latest 2.2.3 Ruby runtime and build the plugin with JRuby on the same machine. rbenv is available via Homebrew.

    This is how it goes:

    Install rbenv

    brew install rbenv

    Configure the PATH

    echo 'eval "$(rbenv init -)"' >> ~/.bash_profile

    Source the bash profile script

    source ~/.bash_profile

    List all available runtimes

    rbenv install -l
    Available versions:

    Install the desired runtime

    rbenv install jruby-

    Configure the project to use the desired runtime

    cd logstash-input-reddit
    rbenv local jruby-

    Check it’s configured

    ruby --version

    Finally, bundler can be installed:

    gem install bundler
    Successfully installed bundler-1.13.6
    1 gem installed

    And from this point on, all required gems can be installed as well:

    bundle install
    Fetching gem metadata from https://rubygems.org/
    Fetching version metadata from https://rubygems.org/
    Fetching dependency metadata from https://rubygems.org/
    Resolving dependencies...
    Installing rake 12.0.0
    Installing public_suffix 2.0.4
    Installing rspec-wait 0.0.9
    Installing logstash-core-plugin-api 2.1.17
    Installing logstash-codec-plain 3.0.2
    Installing logstash-devutils 1.1.0
    Using logstash-input-reddit 0.1.0 from source at `.`
    Bundle complete! 2 Gemfile dependencies, 57 gems now installed.
    Use `bundle show [gemname]` to see where a bundled gem is installed.
    Post-install message from jar-dependencies:
    if you want to use the executable lock_jars then install ruby-maven gem before using lock_jars
       $ gem install ruby-maven -v '~> 3.3.11'
    or add it as a development dependency to your Gemfile
       gem 'ruby-maven', '~> 3.3.11'

    Plugin development proper

    With those requirements finally addressed, proper plugin development can start. Let’s skip finding the right API for making an HTTP request in Ruby, or addressing Bundler warnings when installing dependencies; the final code is quite terse:

    class LogStash::Inputs::Reddit < LogStash::Inputs::Base

      config_name 'reddit'
      default :codec, 'plain'

      config :subreddit, :validate => :string, :default => 'elastic'
      config :interval, :validate => :number, :default => 10

      def register
        @host = Socket.gethostname
        @http = Net::HTTP.new('www.reddit.com', 443)
        @get = Net::HTTP::Get.new("/r/#{@subreddit}/.json")
        @http.use_ssl = true
      end

      def run(queue)
        # we can abort the loop if stop? becomes true
        while !stop?
          response = @http.request(@get)
          json = JSON.parse(response.body)
          json['data']['children'].each do |child|
            event = LogStash::Event.new('message' => child, 'host' => @host)
            queue << event
          end
          Stud.stoppable_sleep(@interval) { stop? }
        end
      end
    end

    The plugin defines two configuration parameters: the subreddit to pull data from, and the interval between 2 calls (in seconds).

    The register method initializes the class attributes, while the run method loops over:

    • Making the HTTP call to Reddit
    • Parsing the response body as JSON
    • Making dedicated fragments from the JSON, one for each post. This is particularly important because we want to index each post separately.
    • Sending each fragment as a Logstash event for indexing
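
    The fragment-per-post step can be reproduced standalone. The payload below is made up, but mirrors the shape of Reddit’s listing JSON:

    ```ruby
    require 'json'

    # each element of data.children becomes its own Logstash event
    body = '{"data":{"children":[{"data":{"title":"first"}},{"data":{"title":"second"}}]}}'
    json = JSON.parse(body)
    json['data']['children'].each { |child| puts child['data']['title'] }
    ```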

    Of course, it’s very crude: there’s no error handling, it doesn’t save the timestamp of the last read post to prevent indexing duplicates, etc. In its current state, the plugin offers a lot of room for improvement, but at least it works from an MVP point of view.

    Building and installing

    As written above, the plugin is a Ruby gem. It can be built as any other gem:

    gem build logstash-input-reddit.gemspec

    This creates a binary file named logstash-input-reddit-0.1.0.gem - name and version both come from the gemspec. It can be installed using the standard Logstash plugin installation procedure:

    bin/logstash-plugin install logstash-input-reddit-0.1.0.gem

    Downstream processing

    One huge benefit of Logstash is the power of its processing pipeline. The plugin is designed to produce raw data, but the indexing should handle each field separately. Extracting fields from another field can be achieved with the mutate filter.

    Here’s one Logstash configuration snippet example, to fill some relevant fields (and to remove message):

      mutate {
        add_field => {
          "kind" => "%{message[kind]}"
          "subreddit" => "%{message[data][subreddit]}"
          "domain" => "%{message[data][domain]}"
          "selftext" => "%{message[data][selftext]}"
          "url" => "%{message[data][url]}"
          "title" => "%{message[data][title]}"
          "id" => "%{message[data][id]}"
          "author" => "%{message[data][author]}"
          "score" => "%{message[data][score]}"
          "created_utc" => "%{message[data][created_utc]}"
        }
        remove_field => [ "message" ]
      }

    Once the plugin has been built and installed, Logstash can be run with a config file that includes the previous snippet. It should yield something akin to the following - when used in conjunction with the rubydebug codec:

    {
           "selftext" => "",
               "kind" => "t3",
             "author" => "nfrankel",
              "title" => "Structuring data with Logstash",
          "subreddit" => "elastic",
                "url" => "",
               "tags" => [],
              "score" => "9",
         "@timestamp" => 2016-12-07T22:32:03.320Z,
             "domain" => "",
               "host" => "LSNM33795267A",
           "@version" => "1",
                 "id" => "5f66bk",
        "created_utc" => "1.473948927E9"
    }


    Starting from near-zero knowledge about the Ruby ecosystem, I’m quite happy with the result.

    The only thing I couldn’t achieve was adding third-party libraries (like rest-client): Logstash kept complaining about not being able to install the plugin because of a missing dependency. Falling back to standard HTTP calls solved the issue.

    Also, note that the default template produces some warnings on install, but they can be fixed quite easily:

    • The license should read Apache-2.0 instead of Apache License (2.0)
    • Dependency versions are open-ended ('>= 0') whereas they should be bounded, e.g. '~> 2'
    • Some meta-data is missing, like the homepage

    I hope this post will be useful to other Java developers wanting to develop their own Logstash plugin.

    Categories: Development Tags: Logstash, plugin, ruby
  • goes mobile!

    If you browse this site with your computer, chances are you didn’t notice anything. On the contrary, if you browse it with a mobile device (an iPhone, for example), it’s kind of like a new user experience!

    In fact, it took me barely a minute to adapt this site to mobile browsing. How? I just asked my friend Google, who found WPTouch, a WordPress plugin that behaves like a configurable theme, but only on mobile devices.

    WPTouch’s features are impressive (and free). Most notably, supported user-agents include Android, BlackBerry9500, BlackBerry9530, CUPCAKE, dream, iPhone, iPod, incognito, webOS, and webmate.

    If you own a WordPress blog and want to offer your users the best experience, I recommend using this plugin.

    Categories: Technical Tags: plugin, wordpress