Posts Tagged ‘logging’
  • Smart logging in Java 8 and Kotlin

    Logging is not a sexy subject but it’s important nonetheless. In the Java world, logging frameworks range from Log4J to SLF4J via Commons Logging and JDK logging (let’s exclude Log4J 2 for the time being). Though different in architecture and features, all of their APIs look the same. The logger has a method for each log level e.g.:

    • debug(String message)
    • info(String message)
    • error(String message)
    • etc.

    Levels are organized into a hierarchy. Once the framework is configured at a certain level, only messages logged with the same or a higher priority will be written.

    So far, so good. The problem comes when messages contain variables that must be concatenated into the final string.

    LOGGER.debug("Customer + " customer.getId() + " has just ordered " + item.getName() + " (" + item.getId() + ")");

    String concatenation has a definite performance cost in Java, and whatever the configured log level, it will occur.

    For this reason, modern logging frameworks such as SLF4J provide an improved signature that accepts a message format typed as String and the variables as a vararg of Objects. In that case, concatenation occurs only when the logger effectively writes.

    LOGGER.debug("Customer {} has just ordered {} ({})", customer.getId(), item.getName(), item.getId());

    Sometimes, however, variables are not readily available but have to be computed explicitly solely for the purpose of logging.

    LOGGER.debug("Customer {} has just ordered {}", customer.getId(), order.expensiveComputation());

    SLF4J doesn’t help there, as the method is evaluated even if the logger later decides not to write because the framework is configured with a higher priority. In that case, it’s therefore advised to wrap the logger call inside a check of the relevant level.

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Customer {} has just ordered {}", customer.getId(), order.expensiveComputation());
    }

    That has to be done for every expensive method call, so it requires strong coding discipline and reviews to make sure the wrapping occurs when it’s relevant - but only then. Besides, it decreases readability and thus maintainability. To achieve the same result automatically without those cons, one could use AOP at the cost of extra complexity.

    Comes Java 8 and the Supplier<T> interface: a Supplier<String> returns a String from its get() method, so that a method can be created like so:

    public void debug(Supplier<String> s) {
        if (LOGGER.isDebugEnabled()) LOGGER.debug(s.get());
    }

    In that case, the get() method is called only when the wrapping condition evaluates to true.

    Using this method is as simple as that:

    debug(() -> ("Customer + " customer.getId() + " has just ordered " + order.expensiveComputation());

    Great! But where to put this improved debug() method?

    • In the class itself: and duplicate for every class. Really?
    • In a utility class: one should add the LOGGER as the first parameter. Did you say cumbersome?
    • In the logger: one can create a wrapper and keep a standard logger as a delegate but the factory is final (at least in SLF4J) and has a lot of private methods.
    • In an aspect: back to square one…

    That’s the step where things start not being so nice in the Java realm.

    What about Kotlin? It comes with extension functions (and properties). This will probably be the subject of a future post, since you should adopt Kotlin if only for this feature. Suffice it to say that Kotlin can make it look like one can add state and behavior to already-defined types.

    So debug() can be defined in an aptly named file:

    fun Logger.debug(s: () -> String) {
        if (isDebugEnabled) debug(s.invoke())
    }

    And calling it really feels like calling a standard logger, except that a lambda is passed instead of a String:

    LOGGER.debug { "Customer + " customer.getId() + " has just ordered " + order.expensiveComputation() }

    As a finishing touch, let’s make the function inline. By doing that, the compiler will replace every call site with the function’s body in the bytecode. We keep the readable syntax and avoid the overhead of creating a lambda object at runtime:

    inline fun Logger.debug(s: () -> String) {
        if (isDebugEnabled) debug(s.invoke())
    }

    Note that it can still be called in Java, albeit with a not-so-nice syntax:

    Slf4KUtilsKt.debug(LOGGER, () -> "Customer " + customer.getId() + " has just ordered " + order.expensiveComputation());

    Also note that Log4J 2 already implements this feature out-of-the-box.
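
    For reference, a lazy call with Log4J 2 could look like the following sketch, assuming a Log4J 2 version recent enough to provide the Supplier-based overloads; the class name and the expensive method are made up for illustration:

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class Log4j2LazyLogging {

        private static final Logger LOGGER = LogManager.getLogger(Log4j2LazyLogging.class);

        // Stand-in for a computation that is only needed for logging purposes
        private static String expensiveComputation() {
            return "order details";
        }

        public static void main(String[] args) {
            // The lambda is evaluated only if the DEBUG level is enabled
            LOGGER.debug(() -> "Customer 42 has just ordered " + expensiveComputation());
        }
    }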

    At this point, it’s simple enough to copy and paste the above snippet when required. But developers are lazy by nature, so I created a full-fledged wrapper around the SLF4J methods. The source code is available on GitHub and the binary artifact on Bintray, so that you only need the following dependency to use it:

    Categories: Development Tags: kotlin, logging, performance
  • Log management in Spring Boot

    Logging is for sure not a glamorous subject, but it’s a critical one - at least for DevOps and Ops teams. While there is plenty of material on the Web describing how to change your ASCII banner, there is not much on how to efficiently manage the log output.

    By default, Spring Boot outputs logs to the console and doesn’t use any file at all.

    However, it’s possible to tell Spring Boot to log to an output file. At the simplest level, the path where Spring Boot will put the spring.log file can easily be configured in application.properties under the logging.path key.
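
    For instance, a minimal sketch of the corresponding application.properties entry (the path is just an example):

    # Spring Boot will write its spring.log file into this directory
    logging.path=/var/log/myapp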


    Note that another alternative is to use the logging.file key in order to not only set the file path but also the file name.
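
    Again as a sketch, with a made-up path and file name:

    # Sets the directory and the file name in one go
    logging.file=/var/log/myapp/myapp.log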


    While this works very well for development purposes, it’s not an acceptable process for the Ops team to unzip the final jar, update the file and repackage it - and to do this for each and every environment.

    Spring Boot allows overriding the value set in the packaged file (if any) on the command line, as a standard system property, when launching the jar:

    java -jar -Dlogging.path=/tmp myapp.jar

    Finally, it’s also possible to override this value when invoking the Spring Boot Maven plugin on the command line. However, directly using the system property doesn’t work, because the plugin spawns another JVM. One has to use the run.jvmArguments system property and pass it the wanted value:

    mvn spring-boot:run -Drun.jvmArguments="-Dlogging.path=/tmp"

    Note that this works for every available property!

    To go further:

    Categories: Java Tags: logging, spring boot
  • Configuring Maven to use SLF4J

    I mainly write articles for two reasons: to understand something that is new to me or to document something I regularly have to explain to others. This article definitely falls in the second category: in order to celebrate the new 1.0.0 version of Logback, I’ve decided to write down once and for all how to properly use SLF4J with Maven since it seems there’s no shortage of questions about it.


    The basis of SLF4J is to have two separate components, one API and one implementation. That means that your code should depend solely on the API, so that the implementation can be changed at your convenience. Remember that decoupling code from implementations by introducing an interface should be your first concern: at the library level, that’s the role of an API.

    This is easily done like this in your POM:

    <project ...>
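
    As a sketch, the relevant dependencies section inside that project element could look like the following (the version properties are placeholders to adapt to your build):

    <dependencies>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>ch.qos.logback</groupId>
            <artifactId>logback-classic</artifactId>
            <version>${logback.version}</version>
            <scope>runtime</scope>
        </dependency>
    </dependencies>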

    The runtime scope prevents us from using Logback’s implementation classes when coding, which achieves decoupling.
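
    With this in place, application code only ever touches the org.slf4j package. A minimal sketch (the class and method names are made up):

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class OrderService {

        // Only the SLF4J API is referenced; the Logback binding is resolved at runtime
        private static final Logger LOGGER = LoggerFactory.getLogger(OrderService.class);

        public void placeOrder(String customerId) {
            LOGGER.debug("Placing order for customer {}", customerId);
        }
    }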

    Note: if you’re using a parent POM for your project (because you have modules), it goes without saying the above dependencies should be under the dependencyManagement section.


    Since an application with only logging features is a bit limited, it’s highly likely we’ll need other dependencies. Of course, some of those dependencies may use other logging frameworks, such as Commons Logging or Log4J.

    For example, Spring uses Commons Logging: thus, logs produced by our application would use Logback while those produced by Spring would use Commons Logging. This strategy needs two different configuration files (as well as knowledge of both), which is not very intuitive. Moreover, it would be unwieldy to read a single logging flow spread across two different sources. Note that this could be mitigated by using logging targets other than files (which I haven’t seen yet).

    Fortunately, SLF4J provides bridging components that let us wire third-party API calls directly to SLF4J. For Spring, that means removing all dependencies on Commons Logging and replacing them with a single dependency on the Commons Logging to SLF4J bridge.

    <project ...>
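
    As a sketch, assuming the project depends on spring-context (versions are placeholders), the exclusion and the bridge could be declared like this:

    <dependencies>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
            <exclusions>
                <!-- Keep Commons Logging off the classpath -->
                <exclusion>
                    <groupId>commons-logging</groupId>
                    <artifactId>commons-logging</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <!-- The bridge takes over the Commons Logging API and reroutes it to SLF4J -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>jcl-over-slf4j</artifactId>
            <version>${slf4j.version}</version>
            <scope>runtime</scope>
        </dependency>
    </dependencies>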

    Like in the section above, you should preferably use the

    Note: although this knowledge is by no means necessary, know that since API and implementation are both packaged together in commons-logging.jar, the jcl-over-slf4j.jar bridge completely replaces the API classes provided by Commons Logging. That means you shouldn’t have both JARs in your classpath!


    Remember the Highlander rule: “there can be only one”. In this case, there can be only one implementation available on the classpath, since SLF4J binds to whatever implementation it finds there. If more than one is present, SLF4J will display a “Multiple bindings were found on the class path” warning and use the first one it finds on the classpath. Since the classpath is ordered by Maven, there’s little chance it will be the one we want. Thus, take care not to put slf4j-simple in the test scope along with Logback, or be prepared to face undetermined side effects.

    To go further:

    Categories: Java Tags: logging, slf4j
  • Thoughts on Java logging and SLF4J

    In this post, I will ramble on logging in Java, how it was done in the old days and what a library like SLF4J can bring.

    Logging is a basic need when you create software. Logging use-cases are:

    • debugging the software during development
    • diagnosing bugs in production
    • tracing access for security purposes
    • creating data for statistical use
    • etc.

    Whatever the use, logs should be detailed, configurable and reliable.


    Historically, Java logs were done with System.out.println(), System.err.println() or e.printStackTrace() statements. Debug logs were put on System.out and error logs on System.err. In production, both were redirected: the former to the null output, the latter to the desired error log file. They were useful enough, but they suffered from a big drawback: they were not very configurable. It was an all-or-nothing switch: either log or don’t. You could not focus detailed logs on a particular layer or package.

    Log4J came to the rescue. It was everything one could ever want from a logging framework. It invented many concepts still used in today’s logging frameworks (it was the first framework I used so please bear with me if a concept was not invented by it):

    • the concept of Logger so that each logger can be configured independently
    • the concept of Appender so that each appender can log wherever it wants (file, database, message, etc.)
    • the concept of Level so that one can configure the logging (or not) of each message separately

    After that, Sun felt the need for a logging feature inside the JDK but, instead of using Log4J directly, it created an API inspired by it. However, it was not as well finished as Log4J. If you want to use JDK 1.4 logging, chances are you’ll have to write your own Appenders - called Handlers - since the only handlers available out-of-the-box are Console and File.

    With both frameworks available, you needed to configure both of them, since whatever framework you used, there was surely at least one dependency of your project that used the other. Apache Commons Logging is an API bridge that connects itself to the supported logging frameworks. Libraries should use Commons Logging so that the real framework used is the choice of the project and is not imposed by the dependencies. This is not always the case, so Commons Logging does not solve the double configuration problem. Moreover, Commons Logging suffers from some class-loading problems, leading to NoClassDefFoundError errors.

    Finally, the lead programmer of Log4J quit the project for reasons I will not detail here. He created another logging framework, namely SLF4J, that should have been Log4J v2.

    Some strange facts

    The following facts are things that bother me with the previous frameworks. They are not drawbacks per se but are worth mentioning:

    • Log4J has Maven dependencies on JMS, Mail and JMX that are not optional, meaning they will appear on your classpath if you do not bother to exclude them
    • Likewise, Commons Logging has Maven dependencies on Avalon (another logging framework), Log4J, LogKit and Servlet API (!) that are not optional
    • A Swing log viewer is included in the Log4J jar, even if it is used in a headless environment, such as a batch or an application server
    • Log4J v1.3 main page redirects to v1.2 while Log4J v2.0 is experimental

    Which framework to use?

    Log4J would be the framework of choice (and is for most) but it is no longer developed. Version 1.2 is the reference, version 1.3 is abandoned and version 2.0 is still in its early stage.

    Commons Logging is a good choice for a library (as opposed to an application), but I suffered classloader issues with it once, and once is enough to veto it (in the end, I threw Commons Logging out and used Log4J directly).

    JDK 1.4 Logging is a standard and does not raise concurrent versions problems. But it lacks so many features that it cannot be used without redeveloping some of them, such as a database adapter. Too bad… but it does not answer the question: which framework to use?

    Recently, architects of my company decided for SLF4J. Why such a choice?


    SLF4J is not as widespread as Log4J because most architects (and developers alike) know Log4J well and either don’t know about SLF4J, or don’t care and stick to Log4J anyway. Moreover, for most projects Log4J fulfills all logging needs. Yet, interestingly enough, Hibernate uses SLF4J. SLF4J has some nice features that are not present in Log4J.

    Simple syntax

    Take the following Log4J example:

    LOGGER.debug("Hello " + name);

    Since String concatenation is frowned upon, many companies enforce the following syntax, so that the concatenation only takes place when the DEBUG level is enabled:

    if (LOGGER.isDebugEnabled()) {
        LOGGER.debug("Hello " + name);
    }

    It avoids String concatenation but it’s a bit heavy, isn’t it? In contrast, SLF4J offers the following simple syntax:

    LOGGER.debug("Hello {}", name);

    It’s like the first syntax, but without the concatenation cost or the heavy syntax burden.

    SLF4J API and implementation

    Moreover, SLF4J nicely decouples the API from the implementation, so that you can code against the API that works best for your development while using the back-end that best suits your production team. For example, you could enforce the use of the SLF4J API while letting production still reuse the implementation they’ve known for ages. SLF4J’s native logging implementation is known as Logback.

    SLF4J bridging

    SLF4J has a bridging feature so you can remove all log4j and commons-logging dependencies from your project’s dependencies and use only SLF4J.

    SLF4J offers a JAR for each logging framework: it mimics its API but reroutes the calls to the SLF4J API (which in turn uses the real framework). A word of warning: you could run into a cycle, so beware not to have a bridging library together with the matching implementation binding in your classpath. For example, if you use the Log4J bridge, each Log4J API call will be rerouted to SLF4J. But if the SLF4J-to-Log4J implementation binding is also present, the call will be routed back to Log4J, then to SLF4J again, and again.

    SLF4J API with Log4J implementation

    Taking all these facts into account, my advice is to use SLF4J API and Log4J implementation. This way, you still configure logging the old Log4J way but you have access to SLF4J’s simpler API. In order to do so, you’ll have to:

    • Add to classpath: slf4j-api.jar* - the main API, without which you cannot use SLF4J
    • Add to classpath: slf4j-log4j.jar* - the SLF4J Log4J implementation
    • Add to classpath: jul-to-slf4j.jar* - enables rerouting JDK 1.4 logging calls to SLF4J
    • Add to classpath: jcl-over-slf4j.jar* - reroutes Commons Logging calls to SLF4J
    • Remove from classpath: commons-logging.jar* - would conflict with the Commons Logging API classes in jcl-over-slf4j.jar
    • Call SLF4JBridgeHandler.install()** in the main application - redirects JDK 1.4 logging calls to SLF4J
    * The jar name will likely include the version
    ** A 20% overhead is advertised, so do this only if you need the single entry point and there are only a few calls
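
    For that last step, a minimal sketch of installing the bridge handler early in the application (the Main class is made up):

    import org.slf4j.bridge.SLF4JBridgeHandler;

    public class Main {

        public static void main(String[] args) {
            // From this point on, java.util.logging calls are routed to SLF4J
            SLF4JBridgeHandler.install();
            // ... bootstrap the rest of the application ...
        }
    }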

    If you run inside an application server, you’ll probably have to change its libraries and/or its configuration to reach this situation.

    To go further:

  • JMX use cases

    JMX is a Java standard shipped with the JDK since Java 5. Though it enables you to efficiently and dynamically manage your applications, JMX has seen very few production uses. In this article, I will show you the benefits of using such a technology in a couple of use cases.

    Manage your application’s configuration

    Even though each application has different needs regarding configuration (one needing an initial thread number attribute, the other a URL), every application needs to be more or less parameterized. In order to do this, countless generations of Java developers (am I overdoing it here?) have created two components:

    • the first one is a property file where one puts the name/value pairs
    • the other one is a Java class whose responsibilities are to load the properties and to provide access to their values. This class should be a singleton.

    This is good and fine for initialization, but what about runtime changes of those parameters? This is where JMX comes in. With JMX, you can expose those parameters with read/write authorizations. JDK 6 provides you with the JConsole application, which can connect to JMX-enabled applications. Let’s take a very simple example, with a configuration having only two properties: there will be one Configuration class and one interface named ConfigurationMBean, in order to follow the JMX standard for Standard MBeans. This interface will describe all methods available on the MBean instance:

    public interface ConfigurationMBean {
        public String getUrl();
        public int getNumberOfThread();
        public void setUrl(String url);
        public void setNumberOfThread(int numberOfThread);
    }

    Now, you’ll only have to register the singleton instance of this class in your MBean server, and you have exposed your application’s configuration to the outside world with JMX!
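
    As an illustration, registering the singleton on the platform MBean server could look like the following sketch (the object name is arbitrary):

    import java.lang.management.ManagementFactory;

    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class ConfigurationRegistrar {

        public static void register(Object configurationMBean) throws Exception {
            // The platform MBean server ships with the JDK
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            // Arbitrary object name; adapt the domain and keys to your application
            ObjectName name = new ObjectName("myapp:type=Configuration");
            server.registerMBean(configurationMBean, name);
        }
    }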

    Manage your application’s Springified configuration

    In Spring, every bean that is configured in the Spring configuration file can be registered with the JMX server. That’s it: no need to create an MBean-suffixed interface for each class you want to expose. Spring does it for you, using the more powerful DynamicMBean and ModelMBean classes under the hood.

    By default, Spring will expose all your public properties and methods. You can still control more precisely what will be exposed through the use of:

    • metadata (@@-like comments in the Javadoc, thus decoupling your code from the Spring API),
    • Spring Java 5 annotations (see the sketch after this list),
    • classical MBean interfaces referenced in the Spring definition file,
    • or even MethodNameBasedMBeanInfoAssembler, which describes the exposed methods in the Spring definition file.
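
    As a sketch of the annotation-based flavor (assuming annotation support is enabled on the exporter; the class itself is made up):

    import org.springframework.jmx.export.annotation.ManagedAttribute;
    import org.springframework.jmx.export.annotation.ManagedResource;

    @ManagedResource(objectName = "bean:type=configuration,name=generic")
    public class AnnotatedConfiguration {

        private String url;

        // Exposed as a read/write JMX attribute
        @ManagedAttribute
        public String getUrl() {
            return url;
        }

        @ManagedAttribute
        public void setUrl(String url) {
            this.url = url;
        }
    }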

    More importantly, Spring provides your MBeans with notification support. This means every MBean will implement NotificationBroadcaster and thus be able to send notifications to subscribers, for example when you change the value of a property or when you call a method.

    Following is a snippet for the previous Configuration, using Spring:

    <bean id="genericCfg" class="" />
    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="beans">
                <entry key="bean:type=configuration,name=generic" value-ref="genericCfg" />

    Spring uses the MBean’s id to register it under the MBean server.

    Change a logger’s level

    Logging has been a critical feature since the dawn of software. Now, let’s say you are faced with the following problem: your application keeps throwing exceptions, but what is traced is not enough for the developers to diagnose. Luckily, one of the developers did put some traces there, but at a very low logging level. Unluckily, in production it’s pretty certain you log only important events, mainly exceptions, not the debug information that could be so useful here.

    The first solution is to change the log level in the configuration file and then restart the application. Ouch, that’s a very crude way, one that won’t make many people happy (depending on the criticality and availability requirements of the application).

    Another answer is to use the abilities of the logging framework. For example, Log4J, the legacy logging framework, provides a way to configure the framework with a configuration file and to listen to changes made to this file in order to reflect them in the in-memory configuration (the static configureAndWatch() method found in both DOMConfigurator and PropertyConfigurator). This runs fine if you have an external file, but what about configuration files shipped with the archive? You can argue that Web archives are often deployed in exploded mode, but you cannot rely on it.

    JMX proves to be handy for such a case: if you had exposed your loggers’ levels, you could change them at runtime. Since JDK 1.4, Java has an API to log messages, and it offers JMX registration for free, so let’s use it. The only thing to do is create a logger for your class and, in a business method, use the logger to trace at FINE level. Now, using your JMX console, locate the MBean named java.util.logging:type=Logging.
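
    As an illustration, the class using the logger could be as simple as this sketch (the names are made up):

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderProcessor {

        // By convention, the logger name is the fully qualified class name
        private static final Logger LOGGER = Logger.getLogger(OrderProcessor.class.getName());

        public void placeOrder(String customerId) {
            // Only written if the logger level is FINE or lower
            LOGGER.log(Level.FINE, "Placing order for customer {0}", customerId);
            // ... business logic ...
        }
    }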

    • Attribute LoggerNames - the array of all configured loggers
    • Operation getLoggerLevel - gets the level of a logger (the root logger is referenced by an empty string)
    • Operation setLoggerLevel - sets the level of a logger to a specified value

    In order to activate the log, just set the level of the logger used by your class to the value you used in the code. In order to test this, I recommend creating a Spring bean from a Java class using the logger and exporting it to JMX with Spring (see above).

    Flush your cache

    For the data access layer, Hibernate is the most used framework at the time of this writing. Hibernate enables you to use caching: Hibernate’s first-level caching (session cache) is done within Hibernate itself, while Hibernate’s second-level caching (cross-session cache) is delegated to a third-party framework. The default framework is Ehcache: it is a very simple, yet efficient solution.

    Let’s say some of your application’s tables contain repository data, that is, data that won’t be changed by the application. This data is by definition eligible for second-level caching. Now picture this: your application should be highly available (24/7, 365 days a year) and the repositories just changed. How can you tell Ehcache to reload the tables in memory without restarting the application?

    Luckily, Ehcache provides you with a way to do it. The cache class implements an interface whose methods you can all call: one such method, aptly named removeAll(), empties the cache. Just call it and Hibernate, not finding any data in the cache, will reload it from the database.
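
    As a sketch, flushing a given cache could look like this (the cache name is made up and should match a region from your Ehcache configuration):

    import net.sf.ehcache.Cache;
    import net.sf.ehcache.CacheManager;

    public class CacheFlusher {

        public void flushRepositoryCache() {
            // Look the cache region up by the name used in the configuration
            Cache cache = CacheManager.getInstance().getCache("repositoryCache");
            if (cache != null) {
                // Hibernate will reload the data from the database on the next query
                cache.removeAll();
            }
        }
    }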

    You will perhaps object that you do not want all of your caches to be reinitialized: now you understand why you should separate your cache instances in the configuration (e.g. one for each table, or one for repository data and one for updatable data).

    Testing JMX

    It is legitimate to want to test JMX while developing, before bringing in the whole server infrastructure. JDK 6 provides you with the JConsole utility, which displays the following information on any application you can connect to (local or remote):

    • memory usage over time,
    • thread instances over time,
    • number of loaded classes over time,
    • summary of the VM (including classpath and properties),
    • all your exposed MBeans.

    JConsole MBean view

    This view lets you view your MBeans’ attributes and call their operations, so this is a very valuable tool (for free). Now try it with a legacy application of yours: notice how many MBeans are registered. Guess you didn’t expect that!

    In order to use this tool in development mode, do not forget to launch it with the following arguments (notice the -J):

    • -J-Djava.class.path="${JDK_HOME}/jdk1.6.0_10/lib/jconsole.jar;${JDK_HOME}/lib/tools.jar;${ADDITIONNAL_CLASSPATH}"

    The last argument can be omitted in most cases (though this is not the case for managing EhCache).

    You can find the sources for all the examples here.

    To go further: