Archive

Posts Tagged ‘maven’
  • Polyglot everywhere - part 1

    This is the era of polyglot! Proponents of this practice spread the word that you have to choose the language best adapted to the problem at hand. And with a single team dedicated to a microservice, this might make sense.

    My pragmatic side tells me it means that developers get to choose the language they develop with and don't care how it will be maintained when they leave… On the other hand, my shiny-loving side just wants to try - albeit in a more controlled environment, such as this blog!

    Introduction

    In this three-part series, I'll try to use polyglot on a project:

    • The first part is about the build system
    • The second part will be about the server side
    • The final part will be about the client-side

    My example will use a Vaadin project built with Maven and using a simple client-side extension. You can follow the project on Github.

    Polyglot Maven

    Though it may have been largely ignored, Maven can now speak many different languages since version 3.3.1, thanks to an improved extension mechanism. In the end, the system is quite simple:

    • Create a .mvn folder at the root of your project
    • Create an extensions.xml file inside it
    • Set the type of language you’d like to use:
    <?xml version="1.0" encoding="UTF-8"?>
    <extensions>
      <extension>
        <groupId>io.takari.polyglot</groupId>
        <artifactId>polyglot-yaml</artifactId>
        <version>0.1.8</version>
      </extension>
    </extensions>
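For reference, the three steps above boil down to a couple of shell commands run from the project root (the XML content is the exact descriptor shown above):

```shell
# Create the .mvn folder at the project root,
# then the extensions.xml file inside it
mkdir -p .mvn
cat > .mvn/extensions.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<extensions>
  <extension>
    <groupId>io.takari.polyglot</groupId>
    <artifactId>polyglot-yaml</artifactId>
    <version>0.1.8</version>
  </extension>
</extensions>
EOF
ls .mvn
# prints extensions.xml
```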
    

    Here, I set the build “language” as YAML.

    In the end, the translation from XML to YAML is very straightforward:

    modelVersion: 4.0.0
    groupId: ch.frankel.blog.polyglot
    artifactId: polyglot-example
    packaging: war
    version: 1.0.0-SNAPSHOT
    dependencies:
        - { groupId: com.vaadin, artifactId: vaadin-spring, version: 1.0.0.beta2 }
    build:
        plugins:
            - artifactId: maven-compiler-plugin
              version: 3.1
              configuration:
                source: 1.8
                target: 1.8
            - artifactId: maven-war-plugin
              version: 2.2
              configuration:
                failOnMissingWebXml: false
    
    

    The only problem I had was with the YAML syntax itself: just make sure to align each plugin's elements with its first declared element (e.g. align version with artifactId).

    Remember to check the POM on Github with each new part of the series!

    Categories: Development Tags: build, maven, polyglot
  • Better developer-to-developer collaboration with Bintray

    I recently got interested in Spring Social, and as part of my learning path, I tried to integrate their Github module, which is still in Incubator mode. Unfortunately, this module seems to have been left behind, and its dependency on the core module uses an old version of it. And since I use the latest version of this core, Maven resolves a single version to put in the WEB-INF/lib folder of the WAR package. Unfortunately, it doesn't work so well at runtime.

    The following diagram shows this situation:

    Dependencies original situation


    I could have excluded the old version from the transitive dependencies, but I’m lazy and Maven doesn’t make it easy (yet). Instead, I decided to just upgrade the Github module to the latest version and install it in my local repository. That proved to be quite easy as there was no incompatibility with the newest version of the core - I even created a pull request. This is the updated situation:

    Dependencies final situation

    Unfortunately, if I now decide to distribute this version of my application, nobody will be able to either build or run it, since only I have the "patched" (latest) version of the Github module available in my local repo. I could distribute the updated sources along with it, but that would mean you'd have to build and install it into your local repo before using my app.

    Bintray to the rescue! Bintray is a binary repository, able to host any kind of binaries: jars, wars, deb, anything. It is hosted online, and free for OpenSource projects, which nicely suits my use-case. This is how I uploaded my artifact on Bintray.

    Create an account
    Bintray makes it quite easy to create such an account, using one of the available authentication providers - Github, Twitter or Google+. Alternatively, one can create an old-style account, with a password.
    Create an artifact
    Once authenticated, an artifact needs to be created. Select your default Maven repository; it can be found at https://bintray.com/${username}/maven. Then, click on the big Add New Package button located on the right border. On the opening page, fill in the required information. The package can be named whatever you want; I chose to use the Maven artifact identifier: spring-social-github.
    Create a version
    Files can only be added to a version, so a version needs to be created first. On the package detail page, click on the New Version link (second column, first line). On the opening page, fill in the version name. Note that snapshots are not accepted, and this is only checked through the -SNAPSHOT suffix. I chose to use 1.0.0.BUILD.
    Upload files
    Once the version is created, files can finally be uploaded. In the top bar, click the Upload Files button. Drag and drop all desired files, of course the main JAR and the POM, but it can also include source and javadoc JARs. Notice the Target Repository Path field: it should be set to the logical path to the Maven artifact, including groupId, artifactId and version separated by slashes. For example, my use-case should resolve to org/springframework/social/spring-social-github/1.0.0.BUILD. Note that instead of filling this field, you can wait for the files to be uploaded as Bintray will detect this upload, analyze the POM and propose to set it automatically: if this fits - and it probably does, just accept the proposal.
    Publish
    Uploading files is not enough, as those files are temporary until publication. A big notice warns about it: just click on the Publish link located on the right border.
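The Target Repository Path mentioned above can be derived mechanically from the Maven coordinates: dots in the groupId become slashes, then artifactId and version are appended. A quick sketch with the coordinates from my use-case:

```shell
GROUP_ID=org.springframework.social
ARTIFACT_ID=spring-social-github
VERSION=1.0.0.BUILD

# groupId dots become directory separators
GROUP_PATH=$(echo "$GROUP_ID" | tr . /)
echo "$GROUP_PATH/$ARTIFACT_ID/$VERSION"
# prints org/springframework/social/spring-social-github/1.0.0.BUILD
```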

    At this point, you only need to add the Bintray repository to the POM.

    <repositories>
        <repository>
            <id>bintray</id>
            <url>http://dl.bintray.com/nfrankel/maven</url>
            <snapshots>
                <enabled>true</enabled>
            </snapshots>
        </repository>
    </repositories>
    
    Categories: Java Tags: bintray, maven
  • Spring profiles or Maven profiles?

    Deploying on different environments requires configuration, e.g. database URL(s) must be set for each dedicated environment. In most - if not all - Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there's no reason not to use the same configuration system, e.g. using an embedded h2 database instead of the production one.

    Unfortunately, Java EE applications generally fall outside this usage, as the good practice on deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty - which implement only a fraction of the Java EE Web Profile - provide this nifty and useful feature.

    As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

    • For deployed environment, one that specifies the JNDI location lookup
    • For local development (and test), one that configures a connection pool around a direct database connection

    A simple properties file cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration-testing purposes can be found in my book, Integration Testing from the Trenches.

    With the Maven build system, changing between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven's standard resource directory with a dedicated one.

    <profiles>
        <profile>
          <id>dev</id>
          <build>
            <resources>
              <resource>
                <directory>profile/dev</directory>
                <includes>
                  <include>**/*</include>
                </includes>
              </resource>
            </resources>
          </build>
        </profile>
    </profiles>
    

    Activating one or more profiles is as easy as using the -P switch with their id on the command-line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

    mvn package -Pdev

    Now, let's add a simple requirement: as I'm quite lazy, I want to exert the minimum effort possible to package the application along with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: create a production profile and configure it to be the default.

    <profile>
      <id>production</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      ...
    </profile>
    

    Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is de-activated. Previous experiences have taught me that because profiles are so easy to implement, they are used (and overused), so that the default one easily gets lost in the process. For example, in one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one has to a) know about the sneaky profile, b) know it will break the default profile, and c) explicitly set the not-default-anymore profile.

    Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

    Another drawback of this global approach is the tendency toward over-fragmentation of the configuration files. I prefer coarse-grained configuration files, each dedicated to a layer or a use-case. For example, I'd like to declare at least the datasource, the transaction manager and the entity manager in the same file, possibly along with the different repositories.

    Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I'm not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated, and the JNDI lookup bean when it is not. This kind of configuration is implemented in the following snippet:

    import javax.sql.DataSource;
    
    import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Profile;
    import org.springframework.jdbc.datasource.lookup.JndiDataSourceLookup;
    
    @Configuration
    public class MyConfiguration {
    
        @Bean
        @Profile("dev")
        public DataSource dataSource() throws Exception {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/conditional");
            dataSource.setUsername("sa");
            return dataSource;
        }
    
        @Bean
        @ConditionalOnMissingBean(DataSource.class)
        public DataSource fakeDataSource() {
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
        }
    }
    

    In this context, profiles are just a way to activate specific beans; the real magic is achieved through the different @Conditional annotations.

    Note: it is advised to create a dedicated annotation to avoid String typos, to be more refactoring-friendly and to improve search capabilities in the code.

    import static java.lang.annotation.ElementType.METHOD;
    import static java.lang.annotation.ElementType.TYPE;
    import static java.lang.annotation.RetentionPolicy.RUNTIME;
    
    import java.lang.annotation.Retention;
    import java.lang.annotation.Target;
    
    import org.springframework.context.annotation.Profile;
    
    @Retention(RUNTIME)
    @Target({TYPE, METHOD})
    @Profile("dev")
    public @interface Development {}
    

    Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries, those used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 Mb jar file. There are two main counterarguments to this:

    • First, if you're concerned about a couple of additional Mb, then your main issue is probably not on the software side, but on the disk-management side. Perhaps a virtualization layer such as VMWare or Xen could help?
    • Then, if need be, you can still configure the build system to streamline the produced artifact.

    The second drawback of Spring profiles is that along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, this was a no-go. Then, as usual, I thought more and more about it, and came to the following conclusion: there's nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about it: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. It is considered very bad practice to do anything to the exploded archive in any case. So what would be the reason not to package the development configuration along? The only reason I can think of is: to be clean, from a theoretical point of view. Me being a pragmatist, I think the advantages of using Spring profiles are far greater than this drawback.

    In my current project, I created a single configuration fragment with all beans that depend on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

    So on one hand, we've got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we've got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I'd suggest giving them a try: I did, and am so far happy with them.

    Categories: Java Tags: configuration, maven, spring
  • Easier Spring version management

    Earlier on, Spring migrated from a monolithic approach - the whole framework - to a modular one - bean, context, test, etc. - so that one could use only the required modules. This modularity came at a cost, however: in the Maven build configuration (or the Gradle one, for that matter), one had to specify the version of each module used.

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>4.0.5.RELEASE</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
                <version>4.0.5.RELEASE</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <version>4.0.5.RELEASE</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    Of course, professional Maven users would improve this POM with the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
       <properties>
            <spring.version>4.0.5.RELEASE</spring.version>
        </properties>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <version>${spring.version}</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    There's a more concise way to achieve the same through a BOM-typed POM (see the section on scope import), available since version 3.2.6.

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
        <dependencyManagement>
            <dependencies>
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-framework-bom</artifactId>
                    <type>pom</type>
                    <version>4.0.5.RELEASE</version>
                    <scope>import</scope>
                </dependency>
            </dependencies>
        </dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    Note that Spring's BOM only sets versions but not scopes; this has to be done in each user POM.

    Spring very recently released the Spring IO platform, which also includes a BOM. This BOM not only includes Spring dependencies but also other third-party libraries.

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
        <dependencyManagement>
            <dependencies>
                <dependency>
                    <groupId>io.spring.platform</groupId>
                    <artifactId>platform-bom</artifactId>
                    <type>pom</type>
                    <version>1.0.0.RELEASE</version>
                    <scope>import</scope>
                </dependency>
            </dependencies>
        </dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.testng</groupId>
                <artifactId>testng</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    There's a single problem with the Spring IO platform's BOM: there's no simple mapping from the BOM version to the declared dependency versions. For example, the BOM's 1.0.0.RELEASE maps to Spring 4.0.5.RELEASE.


    Categories: Java Tags: maven, spring
  • Scala on Android and stuff: lessons learned

    I've played role-playing games since I was eleven, and my posse and I still play once or twice a year. Recently, they decided to play Earthdawn again, a game we hadn't played in more than 15 years! That triggered my desire to create an application to roll all those strangely-shaped dice. And to combine the useful with the pleasant, I decided to use technologies I'm not really familiar with: the Scala language, the Android platform and the Gradle build system.

    The first step was to design a simple and generic die-rolling API in Scala, which was the subject of a former article of mine. The second step was to build upon this API to construct something more specific to Earthdawn and design the GUI. Here's the write-up of my musings during this development.

    Here's a general component overview:

    Component overview

    Reworking the basics

    After much internal debate, I finally changed the return type of the roll method from (Rollable[T], T) to simply T, following a relevant comment on reddit. I concluded that it's up to the caller to keep hold of the die itself and return it if needed. That's what I did in the Earthdawn domain layer.

    Scala specs

    Using Scala meant I also dived into the Specs2 framework. Specs2 offers a Behavior-Driven way of writing tests, as well as integration with JUnit through runners and with Mockito through traits. Test instances can be initialized separately in a dedicated class, isolated from all others.

    My biggest challenge was to configure the Maven Surefire plugin to execute Specs2 tests:

    <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>2.17</version>
        <configuration>
            <argLine>-Dspecs2.console</argLine>
            <includes>
                <include>**/*Spec.java</include>
            </includes>
        </configuration>
    </plugin>
    

    Scala + Android = Scaloid

    Though it may seem surprising at first, running Scala on Android is feasible enough. Normal Java is compiled to bytecode, then converted to Dalvik-compatible Dex files. Since Scala is also compiled to bytecode, the same process can be applied. Not only can Scala easily be ported to Android, frameworks to do so are available online: the one which seemed the most mature was Scaloid.

    Scaloid's most important feature is to eschew the traditional declarative XML-based layout in favor of a Scala-based Domain-Specific Language, with the help of implicits:

    val layout = new SLinearLayout {
      SButton("Click").<<.Weight(1.0f).>>
    }
    

    Scaloid also offers:

    • Lifecycle management
    • Implicit conversions
    • Trait-based hierarchy
    • Improved getters and setters
    • etc.

    If you want to do Scala on Android, this is the project you have to look at!

    Some more bitching about Gradle

    I'm not known to be a big fan of Gradle - to say the least. The biggest reason, however, is not that I think Gradle is bad, but that choosing a tool based on its hype level is the worst reason I can think of.

    I used Gradle for the API project, and I must admit it is more concise than Maven. For example, instead of adding the whole maven-scala-plugin XML snippet, it's enough to tell Gradle to use it with:

    apply plugin: 'scala'
    

    However the biggest advantage is that Gradle keeps the state of the project, so that unnecessary tasks are not executed. For example, if code didn’t change, compilation will not be executed.

    Now to the things that are - to put it in a politically correct way - less than optimal:

    • First, interestingly enough, Gradle does not output test results to the console. For a Maven user, this is somewhat unsettling. But even without this flaw, I'm afraid any potential user is interested in the test output. Yet, this can be remedied with some Groovy code:

      ```groovy
      test {
          onOutput { descriptor, event ->
              logger.lifecycle("Test: " + descriptor + ": " + event.message)
          }
      }
      ```

    • Then, as I wanted to install the resulting package into my local Maven repository, I had to add the Maven plugin: easy enough... This also required Maven coordinates - quite expected. But why am I allowed to install without executing the test phases??? This is not only disturbing, but I cannot accept any rationale that would allow installing without testing.
    • Neither of the previous points proved to be a show-stopper, however. For the Scala-Android project, you might think I just needed to apply both the scala and android plugins and be done with it. Well, this is wrong! It seems that despite Android being showcased as the use-case for Gradle, the scala and android plugins are not compatible. This was the end of my Gradle adventure, probably for the foreseeable future.

    Testing with ease and Genymotion

    The build process - i.e. transforming every class file into Dex, packaging them into an .apk and signing it - takes ages. It would be even worse if using the emulator from Google's Android SDK. But rejoice: Genymotion is an advanced Android emulator that is not only very fast but also easy as pie to use.

    Instead of doing an adb install, installing an apk on Genymotion can be achieved by just drag'n'dropping it onto the emulated device. Even better, it doesn't require uninstalling the old version first, and it launches the application directly. Easy as pie, I told you!

    Conclusion

    I now have a working Android application, complete with tests and a repeatable build. It's not much, but it gets the job done, and it taught me some new stuff I didn't know previously. I can only encourage you to do the same: pick a small application and develop it with languages/tools/platforms that you don't use in your day-to-day job. In the end, you will have learned stuff and have a working application. Doesn't it make you feel warm inside?


    Categories: Java Tags: android, genymotion, gradle, maven, scala
  • Stop the f... about Gradle

    Stop the f… about #Spring & #Hibernate migrating to #Gradle. Repeat after me: "my project do NOT have the same requirements" #Maven

    This was my week’s hate tweet, and I take full responsibility for every character in it. While that may seem like a troll, Twitter is not really the place to have a good-natured debate with factual arguments, so here is the follow up.

    Before going into full-blown rhetoric mode, let me first say that despite popular belief, I’m open to shiny new things. For example, despite being a Vaadin believer - which is a stateful server-side technology, I’m also interested in AngularJS - which is its exact opposite. I’m also in favor of TestNG over JUnit, and so on. I even went as far as going to a Gradle session at Devoxx France! So please, hear my arguments out, only then think them over.

    So far, I’ve heard only two arguments in favor of Gradle:

    1. It's flexible (implying Maven is not)
    2. Spring and Hibernate use it

    Both are facts; let's go over each of them in detail to see why they are not arguments.

    Gradle is flexible

    There’s no denying that Gradle is flexible: I mean, it’s Groovy with a build DSL. Let us go further: how is flexibility achieved? My take is that it comes from the following.

    • Providing very fine-grained - almost atomic - operations: compilation, copying, moving, packaging, etc.
    • Allowing to define lists of those operations - tasks: packaging a JAR would mean copying classes and resource files into a dedicated folder and zipping it
    • Enabling dependencies between those: packaging depends on compilation

    If you look at it closely, what I said can be applied to Gradle, of course, but also to Ant! Yes, both operate at the same level of granularity.

    Now the problem lies in that Gradle proponents present Gradle as flexible, as if flexibility were a desirable quality. Let me say it: flexibility is not a quality for a build tool; I would even say it is a big disadvantage. It's the same as having your broken arm put in a cast: that's a definite lack of flexibility (to say the least), but it's for your own good. The cast prevents you from moving in a painful way, just as inflexible build tools (such as Maven) make strange things definitely expensive.

    If you need to do something odd over and over because of your specific context, it's because of a recurring requirement. In Maven, you'd create a plugin to address it and be done with it. If it's a one-shot requirement, that's probably no requirement but a quirk. Rethink the way you do it; it's a smell something is definitely fishy.

    Spring and Hibernate both use Gradle

    That one really makes me laugh: because some frameworks chose a build tool, we should just use theirs, no questions asked? Did you even check why they migrated in the first place?

    I won’t even look at the Hibernate case, because it annoys me to no end to read arguments such as “I personally hate…” or “…define the build and directories the way that seemed to make sense to me”. That’s no good reason to change a build tool (but a real display of inflated ego in the latter case).

    For Spring, well… Groovy is just in their strategic path and has been for years. SpringSource Tool Suite fully supports Groovy, so I guess using Gradle is a way to spread Groovy love all over the world. A more technical reason I've heard is that Spring must be compatible with different Java versions, and Maven cannot address that. I'm too lazy to check for myself, but even if that's true, it has only a slight chance of applying to your current project.

    Gradlew is one cool feature

    The only feature I know of that I currently lack - and would definitely love to have - is to set the build engine version once and for all, to be able to run my build 10 years from now if I need to. It's amazing the number of software products that go into their first maintenance cycle in years and are at a loss to build from sources. Believe me, it happened to me (in a VB environment), but I had the fortune to have a genius at hand, something I unfortunately cannot count on.

    In Gradle parlance, this feature is achieved through something known as the Gradle Wrapper. Using it downloads the build engine itself, so you can put it into your version control. Too bad nobody ever raised this as an argument :-) though this is not enough to make me want to migrate.

    Note: during this writing, I just searched for a port of this feature to Maven and I found maven-wrapper. Any feedback?

    Wrap-up

    TL;DR:

    • Gradle is just Ant with Groovy instead of XML
    • Your context is (probably) different from those of frameworks which are using Gradle

    Both points lead to only one logical conclusion: there's no real reason to use Gradle. Stop the f… about it and go back to developing the next Google Search - there's much more value in that!

    Categories: Java Tags: ant, build, gradle, maven
  • Maven between different environments

    As a consultant, I find myself in different environments in need of different configurations. One such configuration concerns the Maven settings file. This file is very important, for it governs such things as servers, mirrors and proxies. When you have a laptop, switching from customer configuration to home configuration and vice versa every time you change place quickly becomes a bore. When you have to handle more than one customer, it escalates into a nightmarish and tangled configuration mess.

    In a former environment, colleagues handled Eclipse .ini file switching - a very similar concern - by having a dedicated .bat overwrite the reference file. I heard a colleague of mine does exactly the same for the Maven settings file. It does the job, but it is not portable, is more than slightly intrusive, and has something I cannot quite put my finger on that does not "fit".

    As IT people are, I'm lazy but idealistic, so I scratched my head to handle this problem in a way I would deem more elegant. I think I may have found one, through Maven's native CLI. If you run mvn --help, you'll get plenty of CLI options: go to the -s section.

    -s,--settings <arg>     Alternate path for the user
                                   settings file
    

    This means Maven lets you use settings files other than ~/.m2/settings.xml. So you can create settings-cust.xml, set all the configuration needed for this customer and run mvn -s ~/.m2/settings-cust.xml.
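    For reference, here is what such a customer-specific settings file might contain — a minimal sketch, where the proxy host and repository URL are made-up placeholders, not values from the article:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
      <!-- Corporate proxy (hypothetical values) -->
      <proxies>
        <proxy>
          <id>cust-proxy</id>
          <active>true</active>
          <protocol>http</protocol>
          <host>proxy.customer.example</host>
          <port>8080</port>
        </proxy>
      </proxies>
      <!-- Redirect all requests to the customer's repository manager -->
      <mirrors>
        <mirror>
          <id>cust-mirror</id>
          <mirrorOf>*</mirrorOf>
          <url>http://repo.customer.example/maven2</url>
        </mirror>
      </mirrors>
    </settings>
    ```

    Everything in ~/.m2/settings.xml can go in there: servers with credentials, profiles, and so on.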

    And since I’m really lazy, I just added the following snippet in my ~/.bash_profile to make my life even easier:

    alias mvncust='mvn -s ~/.m2/settings-cust.xml'
    

    Now, I just need to run mvncust to run Maven with all relevant configuration for this environment. And it is compatible with other options!

    The only drawback I found so far is that I have to explicitly set the settings file in Eclipse’s m2e, but that doesn’t bother me much since I have a dedicated Eclipse instance (for configuration and plugins) for each of my environments.

    Categories: Java Tags: maven
  • Re-use your test classes across different projects

    Sometimes, you need to reuse your test classes across different projects. Here are two use-cases that I know of:

    • Utility classes that create relevant domain objects used in different modules
    • Database test classes (and resources) that need to be run in the persistence project as well as the integration test project

    Since I’ve seen more than my share of misuses, this article aims to provide an elegant solution once and for all.

    Creating the test artifact

    First, we have to use Maven: I know not everyone is a Maven fanboy, but it gets the job done - and in our case, it does so easily. Then, we configure the JAR plugin to attach tests. This compiles the test classes, copies the test resources, and packages them in an attached test artifact.

    <project>
      <build>
        <plugins>
         <plugin>
           <groupId>org.apache.maven.plugins</groupId>
           <artifactId>maven-jar-plugin</artifactId>
           <version>2.2</version>
           <executions>
             <execution>
               <goals>
                 <goal>test-jar</goal>
               </goals>
             </execution>
           </executions>
         </plugin>
        </plugins>
      </build>
    </project>
    

    The test artifact is stored side-by-side with the main artifact once deployed in the repository. Note that the configured test-jar goal is bound by default to the package phase.

    Using the test artifact

    The newly-created test artifact can be expressed as a dependency of a project with the following snippet:

    <dependency>
      <groupId>ch.frankel.blog.foo</groupId>
      <artifactId>foo</artifactId>
      <version>1.0.0</version>
      <type>test-jar</type>
      <scope>test</scope>
    </dependency>
    

    The type has to be test-jar instead of simply jar in order for Maven to pick the attached artifact and not the main one. Also, note that although you could configure the dependency with a classifier instead of a type, the current documentation warns about possible bugs and favors the type configuration.
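    To make the first use-case concrete, here is what a shared test utility might look like - a hypothetical FooFixtures class (both names are illustrative, not from the article) living in src/test/java of the foo module, so that it ends up in the test-jar and becomes available to downstream projects:

    ```java
    // Hypothetical domain class; in reality this would live in src/main/java.
    class Foo {
        private final String name;
        private final int quantity;

        Foo(String name, int quantity) {
            this.name = name;
            this.quantity = quantity;
        }

        String getName() { return name; }
        int getQuantity() { return quantity; }
    }

    // Test fixture placed in src/test/java, hence packaged in the test-jar.
    class FooFixtures {

        // Creates a fully-populated domain object with sensible test defaults
        static Foo defaultFoo() {
            return new Foo("test-foo", 42);
        }
    }
    ```

    Any project declaring the test-jar dependency above can then simply call FooFixtures.defaultFoo() in its own tests instead of duplicating object-creation code.
    
    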

    To go further:

    Categories: Java Tags: maven, test
  • Empower your CSS in your Maven build

    People who know me also know I’m interested in GUIs: that means I sometimes have to get my hands dirty and dig deep into stylesheets (even though I have no graphical skill whatsoever). When that happens, I always have questions about how best to factorize styles. In this regard, the different CSS versions are lacking, because they were not meant to be managed by engineers. A recent trend is to generate CSS from a source file, which brings some interesting properties such as nesting, variables, mixins, inheritance and others. Two examples of this trend are LESS and SASS.

    A not-so-quick example can be found just below.

    Given that I’m an engineer, the only requirement I have regarding those technologies is that I can generate the final CSS during my build in an automated and reproducible way. After a quick search, I became convinced wro4j was the tool to use. Here are the reasons why, shown through a simple use-case.

    Maven plugin

    Wro4j includes a Maven plugin. In order to call it in the build, just add it to your POM:

    <plugin>
        <groupId>ro.isdc.wro4j</groupId>
        <artifactId>wro4j-maven-plugin</artifactId>
        <version>1.4.6</version>
        <executions>
            <execution>
                <goals>
                    <goal>run</goal>
                </goals>
                <phase>generate-resources</phase>
            </execution>
        </executions>
    </plugin>
    

    You’re allergic to Maven? No problem, a command-line tool is also provided, free of charge.

    Build time generation

    For obvious performance reasons, I favor build-time generation over runtime generation. But if you prefer the latter, there’s a Java EE filter ready just for that.

    Configurability

    Since wro4j’s original strategy is runtime generation, default configuration files are meant to be inside the archive. However, this can easily be tweaked by setting some tags in the POM:

    <configuration>
        <wroManagerFactory>ro.isdc.wro.maven.plugin.manager.factory.ConfigurableWroManagerFactory</wroManagerFactory>
        <wroFile>${basedir}/src/main/config/wro.xml</wroFile>
        <extraConfigFile>${basedir}/src/main/config/wro.properties</extraConfigFile>
    </configuration>
    

    The final blow

    The real reason to praise wro4j is that LESS generation is only a small part of its features: for wro4j, it’s only one processor among others. You only have to look at the long list of available processors (pre- and post-) to want to use them ASAP. For example, wro4j also wraps a JAWR processor (a bundling and minifying product I blogged about some time ago).

    Once you get there, however, a whole new universe opens before your eyes, since you get processors for:

    • Decreasing JavaScript files size by minifying them with Google, YUI, JAWR, etc.
    • Decreasing CSS files size by minifying them
    • Minimizing the number of requests by merging your JavaScript files into a single file
    • Minimizing the number of requests by merging your CSS files into a single file
    • Processing LESS CSS
    • Processing SASS CSS
    • Analyzing your JavaScript with LINT

    The list is virtually endless, you should really have a look. Besides, you can bridge your favorite tool by writing your own processor if need be.

    Quick how-to

    In order to set up wro4j real quick, here are some ready to use snippets:

    • The Maven POM, as seen above

    <plugin>
        <groupId>ro.isdc.wro4j</groupId>
        <artifactId>wro4j-maven-plugin</artifactId>
        <version>1.4.6</version>
        <configuration>
            <wroManagerFactory>ro.isdc.wro.maven.plugin.manager.factory.ConfigurableWroManagerFactory</wroManagerFactory>
            <wroFile>${basedir}/src/main/config/wro.xml</wroFile>
            <extraConfigFile>${basedir}/src/main/config/wro.properties</extraConfigFile>
            <targetGroups>all</targetGroups>
            <cssDestinationFolder>${project.build.directory}/${project.build.finalName}/style/</cssDestinationFolder>
            <jsDestinationFolder>${project.build.directory}/${project.build.finalName}/script/</jsDestinationFolder>
            <contextFolder>${basedir}/src/main/webapp/</contextFolder>
            <ignoreMissingResources>false</ignoreMissingResources>
        </configuration>
        <executions>
            <execution>
                <goals>
                    <goal>run</goal>
                </goals>
                <phase>generate-resources</phase>
            </execution>
        </executions>
    </plugin>
    

    Note the use of the ConfigurableWroManagerFactory. It makes adding and removing processors a breeze: just update the following file.

    • The wro.properties processors file, which lists the processing steps to execute on the source files. Here, we generate CSS from LESS, resolve imports to get a single file and minify both CSS (with JAWR) and JavaScript:
    preProcessors=lessCss,cssImport
    postProcessors=cssMinJawr,jsMin
    
    • The wro.xml configuration file, to tell wro4j how to merge CSS and JS files. In our case, styles.css will be merged into all.css and global.js into all.js.
    <?xml version="1.0" encoding="UTF-8"?>
    <groups xmlns="http://www.isdc.ro/wro">
      <group name="all">
        <css>/style/styles.css</css>
        <js>/script/global.js</js>
      </group>
    </groups>
    
    • Finally, a LESS example can be found just below.

    Conclusion

    Some optimization paths for webapps are readily available. Some provide alternative ways to define your CSS, some minify your CSS, some your JS, others merge those files together, etc. It would be a shame to ignore them, because they can be a real asset in heavy-load scenarios. wro4j provides an easy way to wrap all those operations into a reproducible build.

    LESS
    {% highlight less %}
    @default-font-family: Helvetica, Arial, sans-serif;
    @default-radius: 8px;
    @default-color: #5B83AD;
    @dark-color: @default-color - #222;
    @light-color: @default-color + #222;
    .bottom-left-rounded-corners (@radius: @default-radius) { .rounded-corners(0, 0, @radius, 0) }
    .bottom-right-rounded-corners (@radius: @default-radius) { .rounded-corners(0, 0, 0, @radius) }
    .bottom-rounded-corners (@radius: @default-radius) { .rounded-corners(0, 0, @radius, @radius) }
    .top-left-rounded-corners (@radius: @default-radius) { .rounded-corners(@radius, 0, 0, 0) }
    .top-right-rounded-corners (@radius: @default-radius) { .rounded-corners(0, @radius, 0, 0) }
    .top-rounded-corners (@radius: @default-radius) { .rounded-corners(@radius, @radius, 0, 0) }
    .rounded-corners(@top-left: @default-radius, @top-right: @default-radius, @bottom-left: @default-radius, @bottom-right: @default-radius) {
      -moz-border-radius: @top-left @top-right @bottom-right @bottom-left;
      -webkit-border-radius: @top-left @top-right @bottom-right @bottom-left;
      border-radius: @top-left @top-right @bottom-right @bottom-left;
    }
    .default-border { border: 1px solid @dark-color; }
    .no-bottom-border { border-bottom: 0; }
    .no-left-border { border-left: 0; }
    .no-right-border { border-right: 0; }
    body { font-family: @default-font-family; color: @default-color; }
    h1 { color: @dark-color; }
    a { color: @dark-color; text-decoration: none; font-weight: bold; &:hover { color: black; } }
    th { color: white; background-color: @light-color; }
    td, th { .default-border; .no-left-border; .no-bottom-border; padding: 5px; &:last-child { .no-right-border; } }
    tr:first-child { th:first-child { .top-left-rounded-corners; } th:last-child { .top-right-rounded-corners; } th:only-child { .top-rounded-corners; } }
    tr:last-child { td:first-child { .bottom-left-rounded-corners; } td:last-child { .bottom-right-rounded-corners; } td:only-child { .bottom-rounded-corners; } }
    thead tr:first-child th, thead tr:first-child th { border-top: 0; }
    table { .rounded-corners; .default-border; border-spacing: 0; margin: 5px; }
    {% endhighlight %}
    Generated CSS
    {% highlight css %}
    .default-border { border: 1px solid #39618b; }
    .no-bottom-border { border-bottom: 0; }
    .no-left-border { border-left: 0; }
    .no-right-border { border-right: 0; }
    body { font-family: Helvetica, Arial, sans-serif; color: #5b83ad; }
    h1 { color: #39618b; }
    a { color: #39618b; text-decoration: none; font-weight: bold; }
    a:hover { color: black; }
    th { color: white; background-color: #7da5cf; }
    td,th { border: 1px solid #39618b; border-left: 0; border-bottom: 0; padding: 5px; }
    td:last-child,th:last-child { border-right: 0; }
    tr:first-child th:first-child { -moz-border-radius: 8px 0 0 0; -webkit-border-radius: 8px 0 0 0; border-radius: 8px 0 0 0; }
    tr:first-child th:last-child { -moz-border-radius: 0 8px 0 0; -webkit-border-radius: 0 8px 0 0; border-radius: 0 8px 0 0; }
    tr:first-child th:only-child { -moz-border-radius: 8px 8px 0 0; -webkit-border-radius: 8px 8px 0 0; border-radius: 8px 8px 0 0; }
    tr:last-child td:first-child { -moz-border-radius: 0 0 0 8px; -webkit-border-radius: 0 0 0 8px; border-radius: 0 0 0 8px; }
    tr:last-child td:last-child { -moz-border-radius: 0 0 8px 0; -webkit-border-radius: 0 0 8px 0; border-radius: 0 0 8px 0; }
    tr:last-child td:only-child { -moz-border-radius: 0 0 8px 8px; -webkit-border-radius: 0 0 8px 8px; border-radius: 0 0 8px 8px; }
    thead tr:first-child th,thead tr:first-child th { border-top: 0; }
    table { -moz-border-radius: 8px 8px 8px 8px; -webkit-border-radius: 8px 8px 8px 8px; border-radius: 8px 8px 8px 8px; border: 1px solid #39618b; border-spacing: 0; margin: 5px; }
    {% endhighlight %}
    Categories: JavaEE Tags: css, maven
  • Why Eclipse WTP doesn't publish libraries when using m2e

    Lately, I noticed my libraries weren’t published to Tomcat when I used a Maven project in Eclipse, even though it was standard war packaging. Since I mostly use Vaadin, I didn’t care much: I manually published the single vaadin-x.y.z.jar to the deployed WEB-INF/lib and was done with it.

    Then, I realized it happened on two different instances of Eclipse, and for the writing of Develop Vaadin apps with Scala I used 3 different libraries, so I wanted to correct the problem. At first I blamed the JRebel plugin I had recently installed; then I investigated further and found out that I needed a special Maven connector that was present in neither of my Eclipse instances: the WTP connector. I had installed this kind of connector back when Maven integration was provided by Sonatype’s m2eclipse, but had forgotten about it a while ago. Now the integration is called m2e, and I have to go the whole nine yards again…

    The diagnosis can be hard, but the solution is very simple.

    Go to the Window -> Preferences menu and from there to Maven -> Discovery.

    Click on Open Catalog.

    Search for “wtp”, select Maven Integration for WTP and click Finish. Restart and be pleased. Note that most Maven plugin integrations in Eclipse can be resolved the same way.

    For a summary of my thoughts about the current state of m2e, please see here.

    Categories: JavaEE Tags: eclipse, maven, wtp
  • Apache Maven 3 Cookbook

    This review is about Apache Maven 3 Cookbook by Srirangan, published by Packt Publishing.

    Facts

    1. 9 chapters, 208 pages, $35.99
    2. This book covers Apache Maven 3

    Pros

    Each recipe is structured in 3 steps:

    1. "Getting ready"
    2. "How to do it"
    3. "See also" for references on associated recipes

    Cons

    1. The scope of the book is too large for only 200 pages. It spans from Java to Scala and Groovy through Android, GWT and Flex.
    2. Most product installation processes (Sonatype Nexus, Hudson) are documented with a few screenshots. It would have been better to either reference the product installation on the Web or to go into detail. In the current state of things, most readers are left wondering what to do with it.
    3. The recipe structure is well adapted for... recipes. When talking about general Maven principles like compiling a project, it feels convoluted and artificial.
    4. Writing a whole chapter about native Maven reporting when there's Sonar? Come on...

    Conclusion

    I was expecting much from this book, because I’m a daily Maven user and because Maven is regularly misused (see here and here for a start). I’m sorry to say I’m disappointed: not much is provided regarding how to resolve daily Maven problems. The concept is a good idea, but IMHO the result is a failure.

    Disclaimer: I was provided the book freely, courtesy of Packt Publishing

    Categories: Bookreview Tags: maven, review
  • Maven doesn't suck, your POM does

    Maven bashing is an all-time favorite: there are plenty of articles telling how Maven downloads the whole Internet, or how POMs are bloated and so on.

    While I agree that Maven could be perfected, I’m also aware that some (if not most) of its shortcomings are not intrinsic but caused by (very) bad configuration. Worse, even if Maven is used correctly in your projects, problems sometimes come from third-party dependencies! You don’t believe me? Well, two examples follow, both from standard libraries.

    Log4J

    Coming from the classic logging framework, this may come as a surprise, but the log4j POM is a mess. Just look at its dependencies section:

    <dependencies>
      <dependency>
        <groupId>javax.mail</groupId>
        <artifactId>mail</artifactId>
        <version>1.4</version>
      </dependency>
      <dependency>
        <groupId>javax.jms</groupId>
        <artifactId>jms</artifactId>
        <version>1.1</version>
      </dependency>
      ...
    </dependencies>
    

    Interestingly enough, Log4J depends on the Java Mail and JMS APIs! If you use your application in an application server, you may be in for a rude surprise, as conflicts may arise between your dependencies and the libraries available on the server.

    Moreover, while some users may need the mail or JMS appenders, this is not the case for all of us. As such, there’s clearly a lack of appropriate modularization in the design of the library. Luckily, the above POM is an excerpt of version 1.2.15. Version 1.2.16 uses the optional tag for those libraries (which breaks transitivity): it addresses our application server use-case, but is still a problem for those needing the dependencies, as they have to add them manually. See here for a thought or two on optional dependencies.

    If we need version 1.2.15, there are basically two solutions.

    • Clean but cumbersome: we have to manually exclude JMS and Mail from each POM using Log4J. Who said software development was fun?
    • Gruesome but effective: if we have a Maven enterprise repository, we correct the POM (i.e. we add optional tags) on the repository.
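    The first, clean option looks like this in practice - a sketch of a Log4J 1.2.15 dependency with the two offending transitive dependencies excluded (the coordinates are the standard ones; your POM may need further exclusions):

    ```xml
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.15</version>
      <exclusions>
        <!-- Exclude the APIs dragged in by the mail and JMS appenders -->
        <exclusion>
          <groupId>javax.mail</groupId>
          <artifactId>mail</artifactId>
        </exclusion>
        <exclusion>
          <groupId>javax.jms</groupId>
          <artifactId>jms</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
    ```

    The cumbersome part is that this snippet has to be repeated in every POM that pulls in Log4J, directly or transitively.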

    Jasper reports

    The Log4J example was straightforward: just the side-effects of bad modularization. Jasper’s POM has another twist, just look at a typical dependency:

    <dependency>
      <groupId>com.lowagie</groupId>
      <artifactId>itext</artifactId>
      <version>[1.02b,)</version>
      <scope>compile</scope>
    </dependency>
    

    The version part means the dependency’s version should be anywhere between 1.02b (included) and the latest available version. This obviously has two drawbacks:

    • From an architectural point of view, how can the POM provider guarantee there won't be an API break with the latest version?
    • From a Maven point of view, it means Maven will try to download the latest version. In order to do that, it will try to contact repo1... You're beginning to see the problem? If you're behind a corporate proxy that isolates you from the Internet, you're toast.

    The POM excerpt comes from version 2.0.4. Starting from version 2.0.5, Jasper’s designers used only single version dependencies.

    If you’re stuck with older versions, the only solution is to replace the POM in your enterprise repository with a manually crafted one that does not use unbounded version dependencies.
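    The fix itself is trivial: pin the range to a single version. A sketch of the corrected dependency, keeping the coordinates from the excerpt above:

    ```xml
    <dependency>
      <groupId>com.lowagie</groupId>
      <artifactId>itext</artifactId>
      <!-- A fixed version instead of the unbounded range [1.02b,) -->
      <version>1.02b</version>
      <scope>compile</scope>
    </dependency>
    ```

    With a fixed version, Maven has no reason to contact remote repositories once the artifact sits in your local or enterprise repository.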

    Conclusion

    Despite the constant noise on the Internet, Maven is a wonderful tool. Yet, even if I take the utmost care to design my POM, some external dependencies make my life harder. It would be better to stop losing time complaining about the tool and invest this time helping the developers of those dependencies to provide better POMs.

    Categories: Java Tags: maven
  • Free eBook: Apache Maven 3 Cookbook

    Dear readers,

    In order to celebrate the release of Apache Maven 3 Cookbook, Packt Publishing contacted me in order to hold a contest to grab a free copy of the eBook!

    To be frank, I haven’t a clue about organizing a contest, so the first three readers who send me a mail at nicolas at frankel dot ch with the subject “Apache Maven 3 Cookbook” will be sent the eBook for free. Don’t waste your time: on your mark, ready, go!

    And thanks Packt for these gifts.

    Update[10h20]: Sorry folks, all three eBooks have already been given. Winners will be contacted shortly by a Packt representative. Thanks for participating!

    Categories: Bookreview Tags: maven
  • Migrating from m2eclipse to m2e

    Since Indigo, the Maven Eclipse plugin formerly known as m2eclipse became part of the Eclipse release (at least in the pure Java release). The name of the plugin also changed from m2eclipse to m2e. This was not the sole change, however:

    • The number of tabs on the POM editor has shrunk drastically, and so have the features. This will probably be the subject of a later post, since I feel quite cheated by the upgrade.
    • The POM configuration has been more integrated with Eclipse build (which can cause unwanted side-effects as I described in my last article).

    More importantly, projects that began with m2eclipse can be built in Indigo but no contextual Maven menu is accessible on the project itself (though a contextual menu is available on the POM).

    In order to migrate flawlessly and get our contextual menu back, some actions are necessary. They are gruesome because they involve updating Eclipse configuration files by hand.

    Warning: at this point, you have the choice to stop reading. If you decide to continue and use the process described below, it’s at your own risk!

    The Maven plugin recognizes a project as a Maven one based on the .project Eclipse proprietary configuration file. To display it, go to the Project Explorer view, click on the scrolling menu at the top right and choose Customize View. You have to uncheck *.resources: alongside the .project file, you should see a .classpath file as well as a .settings folder.

    1. In the .project:
      • Replace org.maven.ide.eclipse.maven2Builder by org.eclipse.m2e.core.maven2Builder in the buildSpec section
      • Replace org.maven.ide.eclipse.maven2Nature by org.eclipse.m2e.core.maven2Nature in the natures section
    2. In the .classpath, replace org.maven.ide.eclipse.MAVEN2_CLASSPATH_CONTAINER by org.eclipse.m2e.MAVEN2_CLASSPATH_CONTAINER
    3. Finally, in the .settings folder, rename the org.maven.ide.eclipse.prefs file to org.eclipse.m2e.core.prefs. Contents should be left unchanged.
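    After the first two replacements, the relevant parts of the .project file should look something like this - a sketch; your actual file will contain other builders and natures as well:

    ```xml
    <projectDescription>
      <buildSpec>
        <buildCommand>
          <!-- formerly org.maven.ide.eclipse.maven2Builder -->
          <name>org.eclipse.m2e.core.maven2Builder</name>
          <arguments/>
        </buildCommand>
      </buildSpec>
      <natures>
        <!-- formerly org.maven.ide.eclipse.maven2Nature -->
        <nature>org.eclipse.m2e.core.maven2Nature</nature>
      </natures>
    </projectDescription>
    ```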

    Now, the contextual menu should appear and work accordingly.

    Remember, this is a big hack you should only use with the right parachute (at least a source control management system) since it will hurt you plenty if it fails. For me, it has always worked… yet.

    Categories: Java Tags: eclipse, m2eclipse, maven
  • Better Maven integration leads to unforeseen consequences (bugs)

    This week, I was faced with what seemed a near-insuperable problem. I was called by one of my devs: as soon as he upgraded his Eclipse (or more precisely, our own already-configured Eclipse), he couldn’t deploy to Tomcat through WTP anymore. Here are the steps I took to resolve the problem, and some more general thoughts about upgrading and tooling.

    The log displayed a ClassNotFoundException, one involving Spring. So, the first step was to look under the hood. Provided you used the default configuration - workspace metadata - it should be located under <WORKSPACE>/.metadata/.plugins/org.eclipse.wst.server.core/tmpN/wtpwebapps, where WORKSPACE is your workspace location and N is an incremental number beginning at 0 assigned by Eclipse to each virtual application server in WTP. There you should see the applications deployed in Eclipse. When I checked, I realized the application was deployed alright, but found no WEB-INF/lib folder.

    I had already experienced such deployment problems: I solved them by first forcing a publish (right-click on the server and choose Publish) and, if not successful, by cleaning (which removes all deployments and starts again from scratch). Well, this didn’t work.

    Also, sometimes classes are not deployed because they aren’t there in the first place: compilation doesn’t occur (because of bad code, or because Eclipse is being playful), which prevents generating .class files. This didn’t seem to be the problem but hell, it was a lead like any other. In order to force compilation, clean the project and recompile afterwards (or check that auto build is on). This didn’t work either…

    It has happened before that closing a project and reopening it resolved some code-unrelated compilation problems. Closing and restarting Eclipse may yield the same result. I did both, to no avail. At this point, I was disappointed because all previous steps should have resolved the problem… yet they hadn’t.

    Since the project used Maven (and we had m2eclipse installed), I also checked that the Maven Dependencies library was correctly referenced: it was. Just to be sure, I disabled and re-enabled dependency management. Guess what? Nothing changed.

    Desperate, I hopelessly browsed through the POM and the following caught my attention:

    <plugin>
     <artifactId>maven-war-plugin</artifactId>
     <configuration>
     <packagingExcludes>WEB-INF/lib/*.jar</packagingExcludes>
     </configuration>
    </plugin>
    

    This configuration creates skinny WARs, so that libraries are not packaged in the WAR but in the EAR, provided they are referenced there, of course (for the right way to create such WARs, see my post from last week). With a faint gleam of hope, I removed the configuration part and it worked: the libraries were finally deployed.

    Yes, dear readers, it seems newer versions of the m2eclipse plugin are able to parse portions of the POM that were not parsed before, and act accordingly. Although this is good news in general, it means we have to carefully check for potential side-effects before upgrading to those newer versions. Of course, it was stupid to upgrade in the first place, but sometimes an upgrade is mandatory for a newer project.

    However, the problem is much more widespread than m2eclipse. It’s very similar to the issues addressed in Andrew Spencer’s post, only more so: if we don’t upgrade, we may have to keep an Eclipse instance for each project, one that has to be shared between all team members. Right now, I don’t have any solution for this problem, apart from changing the code to keep in sync with the tool.

    PS : since we still wanted to have skinny WARs, I just moved the configuration part in a profile… which was what was configured in the archetype provided by the support team :-)
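    For completeness, moving the skinny-WAR configuration into a profile could look like the following - a sketch with a hypothetical profile id, so the default (IDE) build keeps the libraries while the packaging build excludes them:

    ```xml
    <profiles>
      <profile>
        <!-- Hypothetical id: activate with mvn -Pskinny package -->
        <id>skinny</id>
        <build>
          <plugins>
            <plugin>
              <artifactId>maven-war-plugin</artifactId>
              <configuration>
                <packagingExcludes>WEB-INF/lib/*.jar</packagingExcludes>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>
    ```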

    Categories: JavaEE Tags: m2eclipse, maven
  • Skinny WAR done right

    In a previous post, I wrote about how to create skinny WARs with Maven the DRY way. It turns out it was not the DRYest way to do it, as my colleague Olivier Chapiteau demonstrated to me this week (credit where credit is due).

    His solution is far more elegant; it is reproduced below for reference’s sake.

    <project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
        http://maven.apache.org/xsd/maven-4.0.0.xsd">
      ...
      <dependencies>
        <dependency>
          <groupId>ch.frankel.blog.ear-war</groupId>
          <artifactId>war-example</artifactId>
          <version>1.0.0</version>
          <type>war</type>
        </dependency>
        <dependency>
          <groupId>ch.frankel.blog.ear-war</groupId>
          <artifactId>war-example</artifactId>
          <type>pom</type> <!-- Here works the magic -->
          <version>1.0.0</version>
        </dependency>
      </dependencies>
    ...
    </project>
    

    That’s it! The beauty lies in using the WAR’s POM as a dependency as well as the WAR itself: it’s simple, DRY and effective.

    Categories: JavaEE Tags: dry, maven
  • Maven repositories in anger!

    Every build system worth its salt acknowledges the Maven dependencies repository. Even those vehemently opposed to the way Maven does things, like Gradle, still use repo1.

    Wait, repo1? If only repo1 were enough. But nowadays, every project publishes its artifacts in its own repository. For some providers, like SpringSource and JBoss, I think it may be for marketing reasons. Whatever the reasons, it only makes the job of the enterprise repository manager harder, since he has to reference all those repositories.

    I also stumbled this week upon another serious glitch regarding repositories. My client bought JBoss Enterprise Application Platform (JBoss EAP) 5.0. Now, suppose we want to compile our code against JBoss EAP’s libraries using Maven. Guess what? Those libraries are not available in any accessible repository: check this issue if you don’t believe me. Now, support proposes two solutions:

    • Either manually put EAP's libraries in our own repository, reconstructing each POM
    • Or use Maven's system path to point to a local (or network) JBoss EAP

    Is this really the level of support we should expect from a major player like Red Hat? Unfortunately, this is only one real-world example of things going awry with Maven repositories. I’m afraid there are more out there beyond my meager knowledge, and even more to come.

    Repo1 was meant to ease our lives. I really wish we could go back to this simple scheme: stop tackling problems brought by providers and start answering problems brought by the business.

    Categories: Java Tags: jboss, maven, repo1
  • Mixing Vaadin and Scala (with a touch of Maven)

    People who are familiar with my articles know that I’m interested in Vaadin (see the “Go further” section below) and also more recently in Scala since both can increase your productivity.

    Environment preparation

    Thus it was only natural that I tried to combine the best of both worlds, so to speak, as an experiment. As an added challenge, I also tried to add Maven to the mix. It wasn’t as successful as I had wished, but the conclusions are interesting (at least to me).

    I already showed how to create Scala projects in Eclipse in a previous post, so this part was really a no-brainer. However, layering Maven on top of that was trickier. The Scala library was of course added to the dependencies list, but I didn’t find how to make Maven compile Scala code so that each Eclipse save triggers a compilation (as is the case with Java). I found the maven-scala-plugin provided by Scala Tools, but I wasn’t able to use it to my satisfaction with m2eclipse. Forget the Maven plugin… Basically, what I did was create the Maven project, update the Eclipse configuration from Maven with m2eclipse, and finally add the Scala builder: not very clean and utterly brittle, since any update would overwrite the Eclipse files. I’m all ears if anyone knows the “right” way to do it!

    Development time

    Now to the heart of the matter: I just want a text field and a button that, when pressed, displays the content of the field. Simple enough? Not really. The first problem I encountered was creating an implementation of the button click listener in Scala. In Vaadin, the listener interface has a single method void buttonClick(Button.ClickEvent event). Notice the type of the event? It is an inner class of Button, and I wasn’t able to import it in Scala! Anyone who has the solution is welcome to step forward and share it.

    Faced with this limitation, I decided to encapsulate the listener and the event class in two standard Java classes, one in each file. In order to be decoupled, not to mention to ease my life, I created a parent POM project and two modules: one for the Java workaround classes, the other for the real application.

    The next obstacle is also Scala-related, and due to a lack of knowledge on my part. I’m a Java boy, so in order to pass a Class instance, I’m used to writing something like this:

    eventRouter.addListener(VisibleClickEvent.class, this, "displayMessage")
    

    Scala seems to frown upon it and refuses to compile the previous code. The message is “identifier expected but ‘class’ found”. The correct syntax is:

    eventRouter.addListener(classOf[VisibleClickEvent], this, "displayMessage")
    
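    As an aside, this addListener(Class, Object, String) style works by reflection: the router looks up the named method on the target, keyed by the event type. Here is a toy sketch of the mechanism, my own simplified code rather than Vaadin’s actual EventRouter:

    ```java
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    public class MiniEventRouter {
        private static class Registration {
            final Class<?> eventType; final Object target; final Method method;
            Registration(Class<?> t, Object o, Method m) { eventType = t; target = o; method = m; }
        }
        private final List<Registration> registrations = new ArrayList<>();

        // Same shape as the addListener(Class, Object, String) call above:
        // look up a public method taking the event type, remember it for dispatch
        public void addListener(Class<?> eventType, Object target, String methodName)
                throws NoSuchMethodException {
            Method m = target.getClass().getMethod(methodName, eventType);
            registrations.add(new Registration(eventType, target, m));
        }

        // Invoke every registered listener whose event type matches
        public void fire(Object event) throws Exception {
            for (Registration r : registrations) {
                if (r.eventType.isInstance(event)) {
                    r.method.invoke(r.target, event);
                }
            }
        }

        public static void main(String[] args) throws Exception {
            MiniEventRouter router = new MiniEventRouter();
            StringBuilder log = new StringBuilder();
            router.addListener(String.class, log, "append"); // StringBuilder.append(String)
            router.fire("displayMessage called");
            System.out.println(log); // prints "displayMessage called"
        }
    }
    ```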

    Moreover, while developing, I wrote a cast the Java way:

    getWindow().showNotification(button.getCaption() + " " + (String) field.getValue())
    

    My friend Scala loudly complained that “object String is not a value” and I corrected the code like so:

    getWindow().showNotification(button.getCaption() + " " + field.getValue().asInstanceOf[String])
    

    Astute readers will have noticed that concatenating strings renders this cast unnecessary, and I gladly removed it in the end.
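    Indeed, Java’s + operator stringifies any operand via String.valueOf, so the cast adds nothing. A minimal illustration:

    ```java
    public class ConcatNoCast {
        public static void main(String[] args) {
            Object value = Integer.valueOf(42);  // pretend this is field.getValue()
            // '+' calls String.valueOf on the right operand: no cast required
            String message = "Caption" + " " + value;
            System.out.println(message); // prints "Caption 42"
        }
    }
    ```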

    Conclusion

    In the end, it took more time than if I had done the example in Java.

    • For sure, some of the lost time is due to a lack of Scala knowledge on my part.
    • However, I'm not sure the number of code lines would have been lower in Java, due to the extra code needed to access inner classes.
    • In fact, I'm positive that the project would have been simpler with Java instead of Scala (one project instead of a parent and 2 modules).

    The question I ask myself is: if Scala cannot extend Java inner classes - assuming that is no mistake on my part - should APIs evolve with this constraint in mind? Are inner classes really necessary to achieve a clean design, or are they only a nice-to-have that is not much used?

    In any case, developers who want to code Vaadin applications in Scala should take extra care before diving in, and be prepared for lower productivity than in Java, because there are many inner classes in Vaadin component classes.

    You can find the sources for this article here in Maven/Eclipse format.


    Categories: JavaEE Tags: maven scala vaadin
  • Managing POM versions

    This article won’t be long but can be a lifesaver. If you use Maven, how many times did you need to manually update POM versions in an entire module hierarchy? For me, the answer is: “too many”.

    When your project grows to include many Maven modules, releasing a new version can be a nightmare. Sure, you have the maven-release-plugin. It does many things under the cover, but in some cases I have seen it fail. What do you do then? You manually change the POM versions in your module hierarchy:

    • the version of the module’s POM
    • the version of the parent’s POM

    It’s not only a boring task, it’s also an error-prone one. Imagine my surprise when I found the versions-maven-plugin and its little jewel of a command line:

    mvn versions:set -DnewVersion=1.0.1-SNAPSHOT
    

    And presto, the plugin does it all for you, entering each module and changing the relevant information:

    [INFO] [versions:set]
    [INFO] Searching for local aggregator root...
    [INFO] Local aggregation root: D:\workspace\Champion Utilities
    [INFO] Processing ch.frankel.champions.license:license
    [INFO]     Updating project ch.frankel.champions.license:license
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    [INFO]
    [INFO] Processing ch.frankel.champions.license:license-check
    [INFO]     Updating parent ch.frankel.champions.license:license
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    [INFO]     Updating project ch.frankel.champions.license:license-check
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    [INFO]
    [INFO] Processing ch.frankel.champions.license:license-common
    [INFO]     Updating parent ch.frankel.champions.license:license
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    [INFO]     Updating project ch.frankel.champions.license:license-common
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    [INFO]
    [INFO] Processing ch.frankel.champions.license:license-generation
    [INFO]     Updating parent ch.frankel.champions.license:license
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    [INFO]     Updating project ch.frankel.champions.license:license-generation
    [INFO]         from version 1.0.0 to 1.0.1-SNAPSHOT
    

    Give it a try, it’s really powerful yet easy!
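    To picture what the plugin automates, here is a toy Java sketch of the idea. Beware the hedge: the real versions-maven-plugin resolves the aggregator tree and edits the POM model properly, whereas this only does naive string replacement on in-memory POM snippets.

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class ToyVersionsSet {
        // Replaces the old version with the new one in each module's POM text,
        // covering both the project's own <version> and the <parent> version.
        static Map<String, String> set(Map<String, String> poms, String oldV, String newV) {
            Map<String, String> updated = new LinkedHashMap<>();
            for (Map.Entry<String, String> e : poms.entrySet()) {
                String before = e.getValue();
                String after = before.replace("<version>" + oldV + "</version>",
                                              "<version>" + newV + "</version>");
                if (!after.equals(before)) {
                    System.out.println("Updating " + e.getKey() + " from " + oldV + " to " + newV);
                }
                updated.put(e.getKey(), after);
            }
            return updated;
        }

        public static void main(String[] args) {
            Map<String, String> poms = new LinkedHashMap<>();
            poms.put("license", "<project><version>1.0.0</version></project>");
            poms.put("license-check",
                     "<project><parent><version>1.0.0</version></parent></project>");
            Map<String, String> result = set(poms, "1.0.0", "1.0.1-SNAPSHOT");
            System.out.println(result.get("license-check"));
        }
    }
    ```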

    Categories: Java Tags: maven
  • DRY and skinny war

    In this article, I will show you how the DRY principle can be applied when using the skinny war configuration of the maven-war-plugin.

    While packaging an EAR, it is sometimes suitable that all libraries of the different WARs be contained not in their respective WEB-INF/lib folders but at the EAR level, so they are usable by all WARs. Some organizations even enforce this rule, so that it is not merely desirable but mandatory.

    Using Maven, nothing could be simpler. The maven-war-plugin documents such a use case and calls it skinny war. Briefly, you have two actions to take:

    • you have to configure every WAR POM so that the artifact will not include any library like so:
    <project>
      ...
      <build>
    	<plugins>
    	  <plugin>
    		<artifactId>maven-war-plugin</artifactId>
    		<configuration>
    		  <packagingExcludes>WEB-INF/lib/*.jar</packagingExcludes>
    		  <archive>
    			<manifest>
    			  <addClasspath>true</addClasspath>
    			  <classpathPrefix>lib/</classpathPrefix>
    			</manifest>
    		  </archive>
    		</configuration>
    	  </plugin>
    	</plugins>
      </build>
    </project>
    
    • you have to add every dependency of all your WARs to the EAR

    The last action is the real nuisance, since you have to do it manually. In the course of the project, a desynchronization is sure to happen as you add/remove dependencies from the WAR(s) and forget to repeat the action on the EAR. The DRY principle should be applied here; the problem lies in how to realize it.

    There’s an easy solution to this problem: if a POM could regroup all my WAR dependencies, I would only have to draw a dependency from the WAR to it, and another from the EAR to it and it would fulfill my DRY requirement. Let’s do it!

    The pomlib itself

    Like I said before, the pomlib is just a project whose packaging is POM and that happens to have dependencies. To keep things simple, our only dependency will be Log4J 1.2.12 (so as not to have transitive dependencies).

    The POM will be:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>ch.frankel.blog.pomlib</groupId>
        <artifactId>pomlib-parent</artifactId>
        <version>1.0.0</version>
      </parent>
      <artifactId>pomlib-lib</artifactId>
      <packaging>pom</packaging>
      <dependencies>
        <dependency>
          <groupId>log4j</groupId>
          <artifactId>log4j</artifactId>
        </dependency>
      </dependencies>
    </project>
    

    Like for any other project module, I put the versions in the parent POM.

    The EAR and the WAR

    Both should now add a dependency to the pomlib. For brevity, only the EAR POM is displayed; the WAR POM can be found in the sources if need be.

    <project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>ch.frankel.blog.pomlib</groupId>
        <artifactId>pomlib-parent</artifactId>
        <version>1.0.0</version>
      </parent>
      <artifactId>pomlib-ear</artifactId>
      <packaging>ear</packaging>
      <dependencies>
        <dependency>
          <groupId>${project.groupId}</groupId>
          <artifactId>pomlib-lib</artifactId>
          <type>pom</type>
          <scope>import</scope>
        </dependency>
        <dependency>
          <groupId>${project.groupId}</groupId>
          <artifactId>pomlib-war</artifactId>
          <type>war</type>
        </dependency>
      </dependencies>
    </project>
    

    Likewise, versions are put in the parent POM. Notice the import scope on the pomlib dependency; it’s the only magic.

    Using mvn install from this point on will put the log4j dependency at the root of the generated EAR artifact and not in the WEB-INF/lib folder of the WAR.

    Conclusion

    Now that all dependencies are described in the pomlib, should you need to add/remove a dependency, it’s the only place you’ll need to modify. With just a little configuration, you’ve streamlined your delivery process and saved yourself a huge amount of stress in the future.

    By the way, if you use a simple Servlet container (like Tomcat or Jetty) as your development application server, you could even put the skinny war configuration in a profile. Maven will then produce “fat” WARs by default at development time. At delivery time, just activate the profile and there’s your EAR product, bundling skinny WARs.

    You can find the source of this article in Maven/Eclipse format here.

    Categories: Java Tags: dry maven
  • Maven The complete reference

    This review is about Sonatype’s Maven: The complete reference by Tim O’Brien, John Casey, Brian Fox, Jason Van Zyl, Eric Redmond and Larry Shatzer.

    Disclaimer: I learned Maven from Sonatype’s site 3 years ago and found it a great resource to learn from. Now that I have a little more experience with the tool, I tried to write this review in an objective manner.

    Facts

    1. 13 chapters, 267 pages, free (see below)
    2. This book is intended both for readers who want to learn Maven from scratch and for readers who need quick help on an obscure feature
    3. A whole chapter is dedicated to the Maven assembly plugin
    4. Another chapter is dedicated to Flexmojos, a Sonatype plugin to manage Flex projects

    Pros

    1. First of all, this book is 100% free to view and to download. This is rare enough to be stated!
    2. Complete reference books are sometimes a mere paraphrase of a product's documentation. This one is not. I do not claim to be a Maven expert, but I did learn things in here
    3. This book is up-to-date with Maven 2.2. For example, it explains password encryption (available since Maven 2.1.0) or how to configure plugins called from the command line differently using default-cli (since Maven 2.2.0)
    4. A very interesting point is a list of some (all?) JEE APIs released by the Geronimo project and referenced by groupId and artifactId. If you frown because the point is lost on you, just try using classes from activation.jar (javax.activation:activation): you'll never be able to let Maven download it for you, since it is not available in the first place for licensing reasons. Having an alternative from Geronimo is good; knowing what is available thanks to the book is better

    Cons

    To be frank, I found only one problem with Maven: The complete reference. Although a whole chapter is written on the Maven Assembly plugin, I understood nothing from it… The rest of the book is crystal clear; this chapter only obfuscated the few things I thought I knew about the plugin.

    Conclusion

    This book is top quality and free: what can I say? If you’re a beginner in Maven, you’ll find a real stable base to learn from. If you need to update your knowledge, you will find a wealth of information. If you’re a Maven guru, please contribute to the Assembly plugin’s chapter. I can only give a warm thank you for Sonatype’s effort for giving this quality book to the community.

    Categories: Bookreview Tags: book maven
  • Apache Maven 2 Effective implementation


    This review is about Packt’s Apache Maven 2 Effective Implementation by Maria Odea Ching and Brett Porter.

    Facts

    1. 12 chapters, 436 pages, $39.99
    2. This book is intended for people who already have a good experience of Maven. The 'About Maven' part is as small as it can get; it is the opposite of what a 'Maven for Dummies' would be, where you learn to type mvn something.
    3. A good portion of the book is about tools that are part of the Maven ecosystem: Continuum for the CI part and Archiva for the repository part.
    4. A chapter is dedicated to testing: which tests to run automatically, what frameworks to use and how to configure the whole lot.

    Pros

    1. The people who wrote the book really know Maven intimately, and it shows. I'm not a newbie myself and I learned some things that I have put to good use since then (or intend to in the near future).
    2. There's an interesting multi-module structure described that is designed for big projects. It shows Maven's structure can be quite adaptable and module design should be custom-tailored to each project's needs. A module for each layer / artifact is only the first step.
    3. The part about Maven plugins is very interesting. Since Maven adopts a plugin architecture, knowing which plugins can do what, why and how to use them is invaluable.
    4. So is the part about testing: a good idea is that some tests should not be run every time, but instead launched manually or attached to a specific module.

    Cons

    1. The tools used are Continuum and Archiva, but there's no justification for this choice. One could think that's because they're both Apache projects, but that's just not enough. Java.net's Hudson seems the most used CI server and Sonatype's Nexus is the reference for Maven repositories (although I have a soft spot for JFrog's Artifactory).
    2. What I regret most is the space devoted to reporting. My personal stance is that only very few organizations use these features, mainly Open Source ones. Since you now have products such as Sonar, describing in detail how to configure Maven reporting is a waste of time. Since the book is already oriented toward tools, why doesn't it just teach how to use Sonar (since it cites Sonar anyway)?

    Conclusion

    All in all, Apache Maven 2 Effective implementation is not a great book but rather a good book to have when you already worked with Maven so as to stand back a little and build your projects more effectively with Maven in the future.

    Categories: Bookreview Tags: effective maven
  • Top Eclipse plugins I wouldn't go without

    Using an IDE to develop is necessary today, but any IDE worth its salt can be enhanced with additional features. NetBeans, IntelliJ IDEA and Eclipse all have this kind of mechanism. In this article, I will mention the plugins I couldn’t develop without in Eclipse and advocate for each one.

    m2eclipse

    Maven has been my build tool of choice for about 2 years. It adds some very nice features compared to Ant, mainly dependency management, inheritance and variable filtering. Configuring the POM gets hard once you’ve reached a fairly high number of lines. The Sonatype m2eclipse plugin (formerly hosted by Codehaus) gives you a tab-oriented view of every aspect of the POM:

    • An Overview tab neatly grouped into: Artifact, Parent, Properties, Modules, Project, Organization, SCM, Issue Management and Continuous Integration,

      m2eclipse Overview tab

    • A Dependencies tab for managing (guess what) dependencies and dependency management. For each of the former, you can even exclude dependent artifacts. This tab is mostly initialized at the start of the project, since its information shouldn’t change during the lifecycle,
    • A Repositories tab to deal with repositories, plugin repositories, distribution, site and relocation (an often underused feature that enables you to change an artifact location without breaking builds, a.k.a. a level of indirection),
    • A Build tab for customizing Maven default folders (usually a very bad idea),
    • A Plugins tab to configure and/or execute Maven plugins. This is one of the most important tabs since it’s here that you will configure the maven-compiler-plugin to use Java 6, and such,
    • A Report tab to manage the reporting part,
    • A Profiles tab to cope with profiles,
    • A Team tab to fill out team-oriented data such as developers and contributors information,
    • The most useful and important tab (according to me) graphically displays the dependency tree. Even better, each scope is represented in a different way and you can filter out unwanted scopes.

      m2eclipse Dependencies tab

    • Last but not least, the last tab enables you to directly edit the underlying XML.

    Moreover, m2eclipse adds a new Maven build Run configuration that is the equivalent of the command line:

    m2eclipse Run Configuration

    With this, you can easily configure the -X option (Debug Output) or the -Dmaven.test.skip option (Skip Tests).

    More importantly, you can set up the plugin to resolve dependencies from within the workspace during Eclipse builds; that is, instead of using your repository classpath, Eclipse will use the project’s classpath (provided it is in the desired version). It removes the need to rebuild an artifact each time it is modified just so that another project compiles after the change. It merely duplicates the legacy Eclipse dependency management.

    I advise against using the Resolve Workspace artifacts option in the previous Run configuration, because it would reproduce this default behaviour. In Maven builds, I want to distance myself from the IDE and use only the tool’s features.

    TestNG plugin

    For those who don’t know TestNG, it is very similar to JUnit 4. It was the first to bring Java 5 annotations (even before JUnit), so I adopted the tool. Now as to why I keep it even though JUnit 4 uses annotations: it has one important feature JUnit has not. You can make a test method dependent on another, so as to develop test scenarios. I know this is not pure unit testing anymore; still, I like using some scenarios in my testing packages in order to catch build breaks as early as possible.

    FYI, Maven knows about TestNG and runs TestNG tests as readily as JUnit ones.

    The TestNG plugin for Eclipse does what the integrated JUnit plugin does, whether for configuration, runs or even test results.

    TestNG Plugin Run configuration

    Emma

    When developing, and if one writes tests, one should know about one’s test coverage. I used to use the Cobertura Maven plugin: I configured it in the POM and, every once in a while, I ran a simple mvn cobertura:cobertura. Unfortunately, it is not very convenient to do so. I searched for an Eclipse plugin offering the same feature; alas, there’s none. However, I found the EclEmma Eclipse plugin that brings the same functionality. It uses Emma (an Open Source code coverage tool) under the hood and, though I searched thoroughly, Emma has no Maven 2 plugin. Since I equally value IDE code coverage during development and Maven code coverage during nightly builds (on a Continuous Integration infrastructure), you’re basically stuck with 2 different products. So?

    EclEmma line highlighting

    EclEmma provides a 4th run button (in addition to Run, Debug and External Tools) that launches the desired run configuration (JUnit, TestNG or what have you) in enhanced mode, the latter providing the code coverage feature. In the above screenshot, you can see line 20 was not run during the test.

    Even better, the plugin provides an aggregation view for code coverage percentage. This view can be decomposed at the project, source path, package and class levels.

    EclEmma statistics

    Spring IDE

    Spring does not need to be introduced. Whether it will be ousted by JEE 5 dependency injection annotations remains to be seen. Plenty of projects still use Spring, and that’s a fact. Still, XML configuration is very awkward in Spring in a number of cases:

    • referencing a qualified class name: it is neither easy nor productive to type; the same is true for properties
    • understanding complex or big configurations files
    • referencing a Spring bean in a hundred or more lines of file
    • refactoring a class name or a property name without breaking the configurations files
    • being informed of whether a class is a Spring bean in a project and if so, where it is used

    Luckily, Spring IDE provides features that make such tasks easy as a breeze:

    • auto-completion on XML configuration files
    • graphic view of such files

      Spring IDE Graph View

    • an internal index so that refactoring takes the XML files into account (though I suspect there are some bugs hidden somewhere, for I regularly get errors)
    • an enhanced Project Explorer view to display where a bean is used

      Spring Project Explorer View

    This entire package guarantees much better productivity when using XML configuration in Spring than a plain old XML editor. Of course, you can still use annotation configuration, though I’m reluctant to do so (more in a later post).

    In conclusion, these 4 integration plugins mean I feel really more comfortable using their underlying tools. If I were in an environment where I couldn’t update my Eclipse however I chose, I would seriously reconsider using these tools at all (except Maven), or I would use annotation configuration for Spring. You can have exceptional products or frameworks, but they have to integrate seamlessly into your IDE(s) to really add value: think about the fate of EJB v2!