Archive

Posts Tagged ‘configuration’
  • Flavors of Spring application context configuration

    Spring framework logo

    Every now and then, there’s an angry post or comment bitching about how the Spring framework is full of XML, how terrible and verbose it is, and how the author would never use it because of that. Of course, that is complete crap. First, when Spring was created, XML was pretty hot: J2EE deployment descriptors (yes, that was the name at the time) were XML-based.

    Anyway, it’s 2017, folks, and there are multiple ways to skin a cat. This article aims at listing the different ways a Spring application context can be configured, so as to enlighten the aforementioned crowd - and stop the trolling around Spring and XML.

    XML

    XML was the first way to configure the Spring application context. Basically, one creates an XML file with a dedicated namespace. It’s very straightforward:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans 
               http://www.springframework.org/schema/beans/spring-beans.xsd">
        <bean id="foo" class="ch.frankel.blog.Foo">
            <constructor-arg value="Hello world!" />
        </bean>
        <bean id="bar" class="ch.frankel.blog.Bar">
            <constructor-arg ref="foo" />
        </bean>
    </beans>
    

    The next step is to create the application context, using one of the dedicated classes:

    ApplicationContext ctx = new ClassPathXmlApplicationContext("ch/frankel/blog/context.xml");
    ApplicationContext ctx = new FileSystemXmlApplicationContext("/opt/app/context.xml");
    ApplicationContext ctx = new GenericXmlApplicationContext("classpath:ch/frankel/blog/context.xml");
    

    XML’s declarative nature enforces simplicity at the cost of extra verbosity. It’s orthogonal to the code - completely independent of it. Before the coming of JavaConfig, I still favored XML over self-annotated classes.

    Self-annotated classes

    As with every new feature/technology, when Java 5 introduced annotations, there was a rush to use them. In essence, a self-annotated class is auto-magically registered into the application context.

    To achieve that, Spring provides the @Component annotation. However, to improve semantics, there are also dedicated annotations to differentiate between the 3 standard layers of the layered architecture principle:

    • @Controller
    • @Service
    • @Repository

    This is also quite straightforward:

    @Component
    public class Foo {
    
        public Foo(@Value("Hello world!") String value) { }
    }
    
    @Component
    public class Bar {
    
        @Autowired
        public Bar(Foo foo) { }
    }
    

    To scan for self-annotated classes, a dedicated application context is necessary:

    ApplicationContext ctx = new AnnotationConfigApplicationContext("ch.frankel.blog");
    

    Self-annotated classes are quite easy to use, but there are some downsides:

    • A self-annotated class becomes dependent on the Spring framework. For a framework based on dependency injection, that’s quite a problem.
    • Usage of self-annotations blurs the boundary between the class and the bean. As a consequence, the class cannot be registered into the context multiple times, under different names and scopes.
    • Self-annotated classes require autowiring, which has downsides of its own.

    Java configuration

    Given the above problems regarding self-annotated classes, the Spring framework introduced a new way to configure the context: JavaConfig. In essence, JavaConfig configuration classes replace XML files, but with compile-time safety instead of XML-schema runtime validation. This is based on two annotations: @Configuration for classes, and @Bean for methods.

    The equivalent of the above XML is the following snippet:

    @Configuration
    public class JavaConfiguration {
    
        @Bean
        public Foo foo() {
            return new Foo("Hello world!");
        }
    
        @Bean
        public Bar bar() {
            return new Bar(foo());
        }
    }
    

    JavaConfig classes can be scanned like self-annotated classes:

    ApplicationContext ctx = new AnnotationConfigApplicationContext("ch.frankel.blog");
    

    JavaConfig is the way to configure Spring applications: it’s orthogonal to the code, and brings some degree of compile-time validation.
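    This also addresses the second downside of self-annotated classes listed above: since the class itself bears no annotation, nothing prevents registering it several times, under different names and scopes. A sketch, assuming an un-annotated Foo and method names of my choosing:

```java
@Configuration
public class MultipleFooConfiguration {

    // Two beans built from the same un-annotated class, under different names
    @Bean
    public Foo defaultFoo() {
        return new Foo("Hello world!");
    }

    // Same class again, this time with a different scope
    @Bean
    @Scope("prototype")
    public Foo prototypeFoo() {
        return new Foo("Hello prototype!");
    }
}
```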

    Groovy DSL

    Spring 4 added a way to configure the context via a Groovy Domain-Specific Language. The configuration takes place in a Groovy file, with the beans element as its root.

    beans {
        foo Foo, 'Hello world!'
        bar Bar, foo
    }
    

    There’s an associated application context creator class:

    ApplicationContext ctx = new GenericGroovyApplicationContext("ch/frankel/blog/context.groovy");
    

    I’m not a Groovy developer, so I never used that option. But if you are, it makes a lot of sense.

    Kotlin DSL

    Groovy was unceremoniously kicked out of the Pivotal portfolio some time ago. Though there is no correlation, Kotlin has since found its way in, so it’s no wonder that the upcoming release of Spring 5 provides a Kotlin DSL.

    package ch.frankel.blog
    
    fun beans() = beans {
        bean {
            Foo("Hello world!")
        }
        bean {
            Bar(ref())
        }
    }
    

    Note that while bean declaration is explicit, wiring is implicit - the ref() call resolves the dependency, as when one @Bean method calls another in JavaConfig.

    Unlike the configuration flavors mentioned above, the Kotlin DSL needs an existing context to register its beans in:

    import ch.frankel.blog.beans
    
    fun register(ctx: GenericApplicationContext) {
        beans().invoke(ctx)
    }
    

    I haven’t used the Kotlin DSL except to play a bit with it for a demo, so I can’t say much for sure about its pros and cons.

    Conclusion

    So far, the JavaConfig alternative is my favorite: it’s orthogonal to the code and provides some degree of compile-time validation. As a Kotlin enthusiast, I’m also quite eager to try the Kotlin DSL in large projects to experience its pros and cons first-hand.

  • Spring profiles or Maven profiles?

    Deploying on different environments requires configuration, e.g. database URL(s) must be set for each dedicated environment. In most - if not all - Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there’s no reason not to use the same configuration system, e.g. to use an embedded h2 database instead of the production one.
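    As a refresher, the Properties-based approach boils down to a few lines - a minimal sketch, where the database.properties file name and the jdbc.url key are hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class EnvironmentConfiguration {

    // Loads the JDBC URL from a (hypothetical) database.properties on the classpath,
    // falling back to an embedded h2 URL when the file or the key is missing
    public static String loadJdbcUrl() throws IOException {
        Properties props = new Properties();
        try (InputStream in = EnvironmentConfiguration.class.getResourceAsStream("/database.properties")) {
            if (in != null) {
                props.load(in);
            }
        }
        return props.getProperty("jdbc.url", "jdbc:h2:mem:dev");
    }
}
```

    Swapping environments then means swapping the file on the classpath - which is exactly where the build system comes into play.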

    Unfortunately, Java EE applications generally fall outside this usage, as the good practice on deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty - which implement only a fraction of the Java EE Web Profile - provide this nifty and useful feature.

    As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

    • For deployed environments, one that specifies the JNDI lookup location
    • For local development (and testing), one that configures a connection pool around a direct database connection

    A simple properties file cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration-testing purposes can be found in my book, Integration Testing from the Trenches.

    With the Maven build system, changing between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven’s standard resource directory with a dedicated one.

    <profiles>
        <profile>
          <id>dev</id>
          <build>
            <resources>
              <resource>
                <directory>profile/dev</directory>
                <includes>
                  <include>**/*</include>
                </includes>
              </resource>
            </resources>
          </build>
        </profile>
    </profiles>
    

    Activating one or more profiles is as easy as using the -P switch with their ids on the command line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

    mvn package -Pdev
    

    Now, let’s add a simple requirement: as I’m quite lazy, I want to exert the minimum effort possible to package the application along with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: create a production profile and configure it to be the default.

    <profile>
      <id>production</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      ...
    </profile>
    

    Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is deactivated. Previous experiences have taught me that because profiles are so easy to implement, they are used (and overused), so that the default one easily gets lost in the process. For example, in one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one has to a) know about the sneaky profile, b) know it will break the default profile, and c) explicitly set the not-default-anymore profile.

    Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

    Another drawback of this global approach is the tendency toward over-fragmentation of the configuration files. I prefer coarse-grained configuration files, each dedicated to a layer or a use-case. For example, I’d like to declare at least the datasource, the transaction manager and the entity manager in the same file, possibly along with the different repositories.

    Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I’m not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated, and the JNDI lookup bean when it’s not. This kind of configuration is implemented in the following snippet:

    @Configuration
    public class MyConfiguration {
    
        @Bean
        @Profile("dev")
        public DataSource dataSource() throws Exception {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/conditional");
            dataSource.setUsername("sa");
            return dataSource;
        }
    
        @Bean
        @ConditionalOnMissingBean(DataSource.class)
        public DataSource fakeDataSource() {
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
        }
    }
    

    In this context, profiles are just a way to activate specific beans; the real magic is achieved through the different @Conditional annotations.

    Note: it is advised to create a dedicated annotation to avoid String typos, to be more refactoring-friendly and to improve search capabilities on the code.

    @Retention(RUNTIME)
    @Target({TYPE, METHOD})
    @Profile("dev")
    public @interface Development {}
    

    Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries - those used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 MB jar file. There are two main counterarguments to this:

    • First, if you’re concerned about a couple of additional MB, then your main issue is probably not on the software side, but on the disk management side. Perhaps a virtualization layer such as VMware or Xen could help?
    • Then, if need be, you can still configure the build system to streamline the produced artifact.

    The second drawback of Spring profiles is that, along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, this was a no-go. Then, as usual, I thought more and more about it, and came to the following conclusion: there’s nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about this: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. It is considered very bad practice to do anything to the exploded archive in all cases. So what would be the reason not to package the development configuration along? The only reason I can think of is: to be clean, from a theoretical point of view. Me being a pragmatist, I think the advantages of using Spring profiles far outweigh this drawback.

    In my current project, I created a single configuration fragment with all beans that are dependent on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

    So on one hand, we’ve got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we’ve got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I’d suggest giving them a try: I did, and am so far happy with them.

    Categories: Java Tags: configuration, maven, spring
  • Avoid conditional logic in @Configuration

    Integration testing Spring applications mandates creating small dedicated configuration fragments and assembling them either during a normal run of the application or during tests. Even in the latter case, different fragments can be assembled in different tests.

    However, this practice doesn’t handle the use-case where I want to use the application in two different environments. As an example, I might want to use a JNDI datasource in deployed environments and a direct connection when developing on my local machine. Assembling different fragment combinations is not possible, as I want to run the application in both cases, not test it.

    My only requirement is that the default should use the JNDI datasource, while activating a flag - a profile - should switch to the direct connection. The Pavlovian reflex in this case would be to add a simple condition in the @Configuration class.

    @Configuration
    public class MyConfiguration {
    
        @Autowired
        private Environment env;
    
        @Bean
        public DataSource dataSource() throws Exception {
    
            if (env.acceptsProfiles("dev")) {
                org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
                dataSource.setDriverClassName("org.h2.Driver");
                dataSource.setUrl("jdbc:h2:file:~/conditional");
                dataSource.setUsername("sa");
                return dataSource;
            }
    
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional"); 
        }
    }
    

    Starting to use this kind of flow-control statement is the beginning of the end, as it will lead to adding more control-flow statements in the future, which will in turn lead to a tangled mess of spaghetti configuration, and ultimately to an unmaintainable application.

    Spring Boot offers a nice alternative to handle this use-case with different flavors of @ConditionalXXX annotations. Using them has the following advantages while doing the job: they are easy to use, readable and limited. While the latter point might seem a drawback, it’s their biggest asset IMHO (not unlike Maven plugins). Code is powerful, and with great power must come great responsibility - something that is hardly possible during the course of a project with deadlines and pressure from the higher-ups. That’s the main reason one of my colleagues advocates XML over JavaConfig: with XML, you’re sure there won’t be any abuse while the project runs its course.

    But let’s stop the philosophy and get back to @ConditionalXXX annotations. Basically, putting such an annotation on a @Bean method will invoke the method and register the bean in the factory only when a dedicated condition holds. There are many of them; here are some important ones:

    • Dependent on Java version, newer or older - @ConditionalOnJava
    • Dependent on a bean present in factory - @ConditionalOnBean, and its opposite, dependent on a bean name not present - @ConditionalOnMissingBean
    • Dependent on a class present on the classpath - @ConditionalOnClass, and its opposite @ConditionalOnMissingClass
    • Whether it's a web application or not - @ConditionalOnWebApplication and @ConditionalOnNotWebApplication
    • etc.

    Note that the whole list of existing conditions can be browsed in Spring Boot’s org.springframework.boot.autoconfigure.condition package.

    With this information, we can migrate the above snippet to a more robust implementation:

    @Configuration
    public class MyConfiguration {
    
        @Bean
        @Profile("dev")
        public DataSource dataSource() throws Exception {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/localisatordb");
            dataSource.setUsername("sa");
            return dataSource;
        }
    
        @Bean
        @ConditionalOnMissingBean(DataSource.class)
        public DataSource fakeDataSource() {
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
        }
    }
    

    The configuration is now neatly separated into two different methods: the first is called only when the dev profile is active, while the second is called when the first one is not - hence when the dev profile is not active.

    Finally, the best thing about this feature is that it is easily extensible, as it depends only on the @Conditional annotation and the Condition interface (which are part of Spring proper, not Spring Boot).
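    As an illustration of that extensibility, a custom condition boils down to implementing a single method - a sketch, where the app.fake-datasource property name is purely hypothetical:

```java
import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

// Matches when the (hypothetical) app.fake-datasource property is defined in the environment
public class OnFakeDataSourceCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return context.getEnvironment().containsProperty("app.fake-datasource");
    }
}
```

    Annotating a @Bean method with @Conditional(OnFakeDataSourceCondition.class) then ties the bean’s creation to that property.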

    Here’s a simple example in Maven/IntelliJ format for you to play with. Have fun!

    Categories: Java Tags: configuration, spring
  • Seamless installation: convention over configuration

    Today, I will not take on the role of the architect who knows how to deliver applications; instead, I will play the end-user part.

    In a previous post, I was tasked with putting a whole development infrastructure in place. A continuous integration server was indeed in order. I took a look at a few, but I was really dumbfounded when I tried Hudson. It was not the features that struck me at the time (although Hudson’s features did serve me well) but the sheer ease of installation.

    Let’s look at a traditional installation. The steps are the following:

    • Download the installer
    • Launch the installer
    • Accept the security warning (I'm on Windows, guess Nix users would probably sudo before)
    • Follow the wizard numerous steps (which probably includes accepting a license)

    By contrast, launching the Hudson test drive is a two-click process, the only thing needed being a local JVM:

    • Click the Java Web Start link
    • Accept the security warning

    Let’s not dive into the technical details of how it is done. I’m only interested in the results: with only two mouse clicks, Hudson launches its console and you can start working. From a user’s point of view, that’s real value! Now, I understand that such an installation is just for example purposes; yet, it is really nice to have a product ready to run in so few steps.

    Maven popularized convention over configuration so that build managers would not have to write the same tasks over and over for each of their projects. Learning its lessons from EJB2, Sun took the same path for EJB3: developers now really have less code to write. Build managers and developers are the end-users in these processes. As a product end-user, I would really like to install it from some common-sense default configuration. If needed, I should be able to override this convention (Hudson does not provide this overriding because the goal is to test the product quickly).

    As an architect, I think the installation domain is pretty uncharted. We are much focused on clean code, maintainability, design and such. Some of us even explore the interface and ergonomics of the product. All of these are fine and needed, but not enough IMHO. Think of the installation process too, and of Hudson’s example, so that we, as end-users, can benefit from seamless installation.

    Categories: Technical Tags: configuration, convention, installation
  • JMX use cases

    JMX is a Java standard shipped with the JDK since Java 5. Though it enables you to efficiently and dynamically manage your applications, JMX has seen very few production uses. In this article, I will show you the benefits of using such a technology in a couple of use cases.

    Manage your application’s configuration

    Even though each application has different configuration needs (one needing an initial thread count, another a URL), every application needs to be more or less parameterized. In order to do this, countless generations of Java developers (am I overdoing it here?) have created two components:

    • the first one is a properties file where one puts the name-value pairs
    • the other one is a Java class whose responsibilities are to load the properties into itself and to provide access to the values. This class should be a singleton.

    This is good and fine for initialization, but what about runtime changes to those parameters? This is where JMX comes in. With JMX, you can expose those parameters with read/write authorizations. JDK 6 provides you with the JConsole application, which can connect to JMX-enabled applications. Let’s take a very simple example, with a configuration having only two properties: there will be one Configuration class and one interface named ConfigurationMBean, in order to follow the JMX convention for Standard MBeans. This interface describes all methods available on the MBean instance:

    public interface ConfigurationMBean {
        public String getUrl();
        public int getNumberOfThread();
        public void setUrl(String url);
        public void setNumberOfThread(int numberOfThread);
    }
    

    Now, you only have to register the singleton instance of this class in your MBean server, and you have exposed your application’s configuration to the outside with JMX!
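    Registration itself is a one-liner against the platform MBean server. The following sketch nests the interface and passes it explicitly through StandardMBean so the snippet stays self-contained; with a top-level ConfigurationMBean interface as above, the naming convention alone suffices and the instance can be registered directly. The object name and default values are of my choosing:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.StandardMBean;

public class Configuration implements Configuration.ConfigurationMBean {

    // Nested copy of the management interface, so the snippet compiles standalone
    public interface ConfigurationMBean {
        String getUrl();
        int getNumberOfThread();
        void setUrl(String url);
        void setNumberOfThread(int numberOfThread);
    }

    private String url = "http://localhost:8080";
    private int numberOfThread = 10;

    public String getUrl() { return url; }
    public int getNumberOfThread() { return numberOfThread; }
    public void setUrl(String url) { this.url = url; }
    public void setNumberOfThread(int numberOfThread) { this.numberOfThread = numberOfThread; }

    // Registers the configuration on the platform MBean server under a chosen object name
    public static ObjectName register() throws Exception {
        ObjectName name = new ObjectName("ch.frankel.blog:type=Configuration");
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new StandardMBean(new Configuration(), ConfigurationMBean.class), name);
        return name;
    }
}
```

    Once register() has been called, the Url and NumberOfThread attributes show up in JConsole, readable and writable.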

    Manage your application’s Springified configuration

    In Spring, every bean that is configured in the Spring configuration file can be registered with the JMX server. That’s it: no need to create an MBean-suffixed interface for each class you want to expose. Spring does it for you, using the more powerful DynamicMBean and ModelMBean classes under the hood.

    By default, Spring will expose all your public properties and methods. You can still control more precisely what will be exposed through the use of:

    • metadata (@@-like comments in the javadocs, thus decoupling your code from the Spring API),
    • Spring Java 5 annotations,
    • classical MBean interfaces referenced in the Spring’s definition file,
    • or even using MethodNameBasedMBeanInfoAssembler which describes the interface in the Spring’s definition file.

    More importantly, Spring provides your MBeans with notification support. This means every MBean will implement NotificationBroadcaster and thus be able to send notifications to subscribers, for example when you change a property’s value or when you call a method.

    Following is a snippet for the previous Configuration, using Spring:

    <bean id="genericCfg" class="ch.frankel.blog.jmx.GenericConfiguration" />
    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="beans">
            <map>
                <entry key="bean:type=configuration,name=generic" value-ref="genericCfg" />
            </map>
        </property>
    </bean>
    

    Spring uses the MBean’s id to register it under the MBean server.

    Change a logger’s level

    Logging has been a critical feature since the dawn of software. Now, let’s say you are faced with the following problem: your application keeps throwing exceptions, but what is traced is not enough for the developers to diagnose the issue. Luckily, one of the developers did put some traces in, but at a very fine-grained level. Unluckily, in production, you most likely log only important events, mainly exceptions - not the debug information that could be so useful here.

    The first solution is to change the log level in the configuration file and then restart the application. Ouch, that’s a very crude way, one that won’t make many people happy (depending on the criticality and availability requirements of the application).

    Another answer is to use the abilities of the logging framework. For example, Log4J is the legacy logging framework. It provides a way to configure the framework with a configuration file and to listen to changes made to this file in order to reflect them in the in-memory configuration (the static configureAndWatch() method found in both DOMConfigurator and PropertyConfigurator). This runs fine if you have an external file, but what about configuration files shipped within the archive? You can argue that Web archives are often deployed in exploded mode, but you cannot rely on it.

    JMX proves handy in such a case: if you have exposed your loggers’ levels, you can change them at runtime. Since JDK 1.4, Java has shipped with an API to log messages. It offers JMX registration for free, so let’s use it. The only thing to do is create a logger for your class. In a business method, use the logger to trace at FINE level. Now, using your JMX console, locate the MBean named java.util.logging:type=Logging.

    The MBean exposes the following:

    • Attribute LoggerNames: the array of all configured loggers
    • Operation getLoggerLevel: gets the level of a logger (the root logger is referenced by an empty string)
    • Operation setLoggerLevel: sets the level of a logger to a specified value

    In order to activate the log, just set the level of the logger used by your class to the value you used in the code. In order to test this, I recommend creating a Spring bean from a Java class using the logger and exporting it to JMX with Spring (see above).
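    The same operations can also be exercised in-process, since the JDK exposes that Logging MBean through LogManager.getLoggingMXBean(). A sketch, where the ch.frankel.blog.Service logger name is hypothetical:

```java
import java.util.logging.Level;
import java.util.logging.LogManager;
import java.util.logging.Logger;
import java.util.logging.LoggingMXBean;

public class LoggingLevelChanger {

    // Hold a strong reference: java.util.logging only keeps weak references to loggers
    private static final Logger LOGGER = Logger.getLogger("ch.frankel.blog.Service");

    public static void activateDebug() {
        // The in-process counterpart of the java.util.logging:type=Logging MBean
        LoggingMXBean logging = LogManager.getLoggingMXBean();
        // Equivalent to calling the setLoggerLevel operation from JConsole
        logging.setLoggerLevel("ch.frankel.blog.Service", "FINE");
    }

    public static boolean isDebugActive() {
        return LOGGER.isLoggable(Level.FINE);
    }
}
```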

    Flush your cache

    For the data access layer, Hibernate is the most used framework at the time of this writing. Hibernate enables you to use caching: first-level caching (the session cache) is done within Hibernate itself, while second-level caching (the cross-session cache) is delegated to a 3rd-party framework. The default framework is Ehcache: a very simple, yet efficient solution.

    Let’s say some of your application’s tables contain reference data, that is, data that won’t be changed by the application. Such data is by definition eligible for second-level caching. Now picture this: your application should be highly available (365/24/7) and the reference data just changed. How can you tell Ehcache to reload the tables in memory without restarting the application?

    Luckily, Ehcache provides you with a way to do it. In fact, the net.sf.ehcache.management.Cache class implements the net.sf.ehcache.management.CacheMBean interface, so you can call all of the interface’s methods: one of them, aptly named removeAll(), empties your cache. Just call it, and Hibernate, not finding any data in the cache, will reload it from the database.

    You will perhaps object that you do not want your whole cache to be reinitialized: now you understand why you should separate your cache instances in the configuration (e.g. one for each table, or one for reference data and one for updatable data).

    Testing JMX

    It is legitimate to want to test JMX while developing, before bringing in the whole server infrastructure. JDK 6 provides you with the JConsole utility, which displays the following information on any application you can connect to (local or remote):

    • memory usage through time,
    • threads instances through time,
    • number of loaded classes through time,
    • summary of VM (including classpath and properties),
    • all your exposed MBeans.

    JConsole MBean view

    This view lets you inspect your MBeans’ attributes and call your MBeans’ operations, so this is a very valuable tool (for free). Now try it with a legacy application of yours and notice how many MBeans are registered. Guess you didn’t expect that!

    In order to use this tool in development mode, do not forget to launch it with the following arguments (notice the -J):

    • -J-Dcom.sun.management.jmxremote.ssl=false
    • -J-Dcom.sun.management.jmxremote.authenticate=false
    • -J-Djava.class.path="${JDK_HOME}/jdk1.6.0_10/lib/jconsole.jar;${JDK_HOME}/lib/tools.jar;${ADDITIONAL_CLASSPATH}"

    The last argument can be omitted in most cases (though not when managing Ehcache).

    You can find the sources for all the examples here.

    To go further: