Archive

Posts Tagged ‘spring’
  • Starting with Cucumber for end-to-end testing

    This week, my team decided to create a smoke test harness around our web app to avoid the most stupid regressions. I was not in favor of that, because of my prior experience with the fragility of end-to-end testing. But since we don’t have enough testers on our team, that was the only sane thing to do. I stepped forward to develop that suite.

    A simple TestNG MVP

    At first, I wanted to make a simple working test harness, so I chose technologies I was familiar with:

    • Selenium to manage browser interactions.
    • TestNG for the testing framework.

    TestNG is a much better choice than JUnit (even compared to the latest version 5) for end-to-end testing, because it lets you order test methods.
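
    To illustrate, ordering in TestNG can be expressed through the dependsOnMethods attribute. The following is a minimal sketch (class and method names are made up):

    import org.testng.annotations.Test;

    public class CheckoutFlowIT {

        @Test
        public void display_home_page() {
            // Browser interactions for the first step would go here
        }

        // dependsOnMethods guarantees this method runs after the one above
        @Test(dependsOnMethods = "display_home_page")
        public void choose_first_category() {
            // Next step of the end-to-end flow
        }
    }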

    However, the problem with this approach is readability when failures happen. Stack traces are hardly understandable by team members who are not developers. After having developed something that worked, I thus wanted to add some degree of usability for all team members.

    Migrating to Cucumber

    Cucumber is a BDD tool available on the JVM. It can be integrated with Selenium to have a thin BDD layer on top of GUI testing.

    Project structure

    Cucumber is based on 3 main components:

    • A feature, as its name implies, is a high-level feature of the system. It contains different scenarios, smaller-grained behaviors that realize the feature. Each scenario is in turn made up of a combination of steps: Given, When and Then, which are well-known to BDD practitioners.
      In Cucumber, a feature is written in its own file, using the Gherkin syntax
    • The step definition is a Java file that implements in code the steps described in the feature
    • Last but not least, the test class is a JUnit (or TestNG) test class that binds the two former components

    For example, let’s analyze how to create a test harness for an e-commerce shop.

    The following is an excerpt of the feature that handles the checkout:

    Feature: Checkout
      A customer should be able to browse the shop,
      put an item in the cart,
      proceed to checkout,
      login
      and pay by credit card

    Scenario: Customer goes to the homepage and chooses a category
      Given the homepage is displayed
      Then the page should display 5 navigation categories
      And the page should display the search box
      When the first displayed category is chosen
      Then the customer is shown the category page

    # Other scenarios follow

    • On line 1 stands the name of the feature. The name should be short and descriptive.
    • On line 2 follows a longer text that describes the feature in detail. It’s only meant for documentation.
    • The title of the scenario is set on line 8.
    • Initialization is handled on line 9 with the Given keyword.
    • Lines 10 & 11 hold assertions, via the Then keyword. Note that And could be replaced by Then, but And feels more readable.
    • Line 12 is an interaction, using the When keyword.
    • And on line 13, we assert the state of the app again.

    The corresponding step definition to the previous feature could look like that (in Kotlin):

    class HomePageStepDef @Autowired constructor(page: HomePage) : En {

        init {
            Given("^the homepage is displayed$") { page.displayHomePage() }
            Then("^the page should display (\\d+) navigation categories$") { numberOfCategories: Int ->
                val displayedCategories = page.displayedCategories
                assertThat(displayedCategories).isNotNull().isNotEmpty().hasSize(numberOfCategories)
            }
            And("^the page should display the search box$") {
                val searchBox = page.searchBox
                assertThat(searchBox).isNotNull().isNotEmpty().hasSize(1)
            }
            When("^the first displayed category is chosen$") { page.chooseFirstDisplayedCategory() }
            Then("^the customer is shown the category page$") {
                assertThat(page.url).isNotNull().isNotEmpty().containsPattern("/c/")
            }
        }
    }
    

    Let’s set aside line 1 of the above snippet, including the @Autowired annotation, and focus on the rest.

    For each Given/When/Then line in the feature file, there’s a corresponding method with a matching regexp in the class file:

    • Cucumber matches the step defined in the feature with the method by using the first parameter - the regexp.
    • Parameters can be defined in the feature and used in the step. As an example, compare line 10 of the first snippet with line 5 of the second: the regexp captures the number of categories, so it can easily be changed in the feature without additional development.
    • The second method parameter is the lambda that will get executed by Cucumber.
    • Since the tests run on a Java 8 runtime, those methods are default methods implemented in the En interface. There’s one such interface for each available spoken language, so that step definitions can be implemented in your own language.
    • The class has no direct dependency on the Selenium API, it’s wrapped behind the Page Object pattern (see below).

    Finally, here’s the entry point test class:

    @RunWith(Cucumber::class)
    @CucumberOptions(
            features = arrayOf("classpath:feature/guest_checkout.feature"),
            glue = arrayOf("de.sansibar.glue"),
            format = arrayOf("pretty"))
    class GuestCheckoutIT
    

    As can be seen, it’s empty: it just provides the entry point and binds a feature to the step definitions package. At that point, running the test class in the IDE or through Maven will run the associated Cucumber feature.
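
    For reference, binding such *IT-suffixed classes to the Maven build is usually done through the Failsafe plugin. A minimal sketch, relying on the plugin’s default *IT naming convention:

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <!-- runs *IT classes during the integration-test phase -->
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>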

    Improving beyond the first draft

    So far, so good. But the existing code deserves to be improved.

    Coping with fragility

    I cheated a little for this one, as it was already implemented in the first TestNG MVP, but let’s pretend otherwise.

    If you’ve read the step definition class above, you might have noticed that there’s no Selenium dependency anywhere in the code. All of it has been hidden in a class that represents the page:

    class HomePage(driver: WebDriver, private val contextConfigurator: ContextConfigurator) : AbstractPage(driver) {

        val displayedCategories: List<WebElement> by lazy { driver.findElements(className("navigationBannerHome")) }
        val searchBox: List<WebElement> by lazy { driver.findElements(id("input_SearchBox")) }

        fun displayHomePage() {
            val url = contextConfigurator.url
            driver.get(url)
        }

        fun chooseFirstDisplayedCategory() {
            displayedCategories[0].click()
        }
    }
    

    This approach is known as the Page Object pattern.

    Mixing selectors and tests in the same class makes tests brittle, especially in the early stages of a project, when the GUI changes a lot. Isolating selectors in a dedicated class lets us buffer changes in that class only.

    There are a couple of good practices there - suggested by colleagues and from my personal experience:

    • Use id attributes on elements used for selection. This makes it less likely to break the test by changing the structure of the DOM.
    • Use coarse-grained methods mapped to a business case. For example, instead of having a whole bunch of selectTitle(), fillFirstName(), fillLastName(), submitRegistration(), etc. methods for each registration field, have a single register() method that inputs and submits the data (see the sketch below). Again, this isolates possible breaking changes in the page class.
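
    As an illustration, such a coarse-grained method could look like the following sketch (shown in Java, while the rest of this harness is Kotlin; the element ids and the RegistrationPage class are made up):

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class RegistrationPage {

        private final WebDriver driver;

        public RegistrationPage(WebDriver driver) {
            this.driver = driver;
        }

        // A single business-level method instead of one method per form field
        public void register(String title, String firstName, String lastName) {
            driver.findElement(By.id("select_Title")).sendKeys(title);
            driver.findElement(By.id("input_FirstName")).sendKeys(firstName);
            driver.findElement(By.id("input_LastName")).sendKeys(lastName);
            driver.findElement(By.id("button_Register")).click();
        }
    }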

    Improved design

    The Page needs to select components through the Selenium API, thus it needs a reference to a WebDriver. This is a problem when a single feature contains several scenarios as this reference needs to be shared among all scenarios. Possible solutions to this include:

    1. A single scenario per feature. Every scenario will have to define its own starting point. For our e-commerce checkout scenario, this defeats the purpose of the test itself.
    2. A single scenario containing the steps of all scenarios. In this case, all scenarios are merged into a very long one. That makes for a hard-to-read scenario and an even harder-to-read (and maintain) class.
    3. To be able to have multiple scenarios per feature while keeping methods in their relevant step definitions, one needs to share the same driver instance among all step definitions. This can be achieved by applying the Singleton pattern in a dedicated class (see the sketch after this list).
    4. The last alternative is to use… DI! Actually, Cucumber integrates quite nicely with some common DI frameworks, including Weld and Spring.
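
    For the record, the third alternative would amount to something like this sketch (shown in Java; not the option retained here):

    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    public final class SharedDriver {

        // The single driver instance shared by every step definition
        private static final WebDriver INSTANCE = new ChromeDriver();

        private SharedDriver() {}

        public static WebDriver get() {
            return INSTANCE;
        }
    }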

    The DI integration is great news, as it makes it possible to use the same libraries in the tests that we already use in development. Regular readers know me as a Spring proponent, so I naturally used it as the DI framework. Here are the dependencies required in the POM:

    <!-- Cucumber Spring integration -->
    <dependency>
      <groupId>info.cukes</groupId>
      <artifactId>cucumber-spring</artifactId>
      <version>${cucumber.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- Spring -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-core</artifactId>
      <version>${spring.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-beans</artifactId>
      <version>${spring.version}</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-context</artifactId>
      <version>${spring.version}</version>
      <scope>test</scope>
    </dependency>
    <!-- Spring test - mandatory but not available via transitivity -->
    <dependency>
      <groupId>org.springframework</groupId>
      <artifactId>spring-test</artifactId>
      <version>${spring.version}</version>
      <scope>test</scope>
    </dependency>
    

    At this point, it’s quite easy to create a standard Spring configuration file to generate the driver, as well as the page objects and their necessary dependencies:

    @Configuration
    open class AppConfiguration {
    
        @Bean open fun driver() = ChromeDriver()
        @Bean open fun contextConfigurator() = ContextConfigurator(properties)
        @Bean open fun homePage(contextConfigurator: ContextConfigurator) = HomePage(driver(), contextConfigurator)
        // Other page beans
    }
    

    In this configuration, the driver bean is a singleton managed by Spring, and the single instance can be shared among all page beans.

    Pages are also singletons, part of the Spring bean factory. That’s the reason for the @Autowired annotation in the step definition constructor. But why aren’t step definitions created in the Spring configuration class? Because they have to be created by the Cucumber framework itself using package scanning - yuck. Note that they don’t need to be annotated themselves - it’s part of Cucumber’s magic - but they’re still part of the Spring context and can be injected.

    Screen capture

    A common mistake made when testing is to think nothing will break, ever. I fell into this trap when I was more junior, and now I try to prepare for failure.

    In order to achieve that, I wanted to take a screenshot when a test fails, so that fixing the failure becomes easier. Cucumber provides a dedicated lifecycle - code run before and after each scenario through the respective @Before and @After annotations. Note those are not the same as JUnit’s, and Cucumber doesn’t parse JUnit’s own annotations.

    The easy way would be to create an @After-annotated method in each step definition class. Yet, that would just be code duplication. Cucumber also offers hooks, classes whose annotated methods are run around each scenario. The only constraint for hooks is to place them in the same package as the step definitions, so that they can be discovered and managed by Cucumber’s Spring package scanning, as for step definitions.

    private val LOGGER = LoggerFactory.getLogger(ScreenCaptureHook::class.java)
    private val DATE_FORMAT = "yyyy-MM-dd-HH-mm-ss"
    
    class ScreenCaptureHook @Autowired constructor(driver: WebDriver) {
    
        private val screenshotFolder = File(File("target", "e2e"), "screenshots")
        private val takesScreenshot = driver as TakesScreenshot
    
        @Before
        fun ensureScreenshotFolderExists() {
            if (!screenshotFolder.exists()) {
                val folderCreated = screenshotFolder.mkdirs()
                if (!folderCreated) {
                    LOGGER.warn("Could not create screenshot folder. Screen capture won't work")
                }
            }
        }
    
        @After
        fun takeScreenshot(scenario: Scenario) {
            if (scenario.isFailed) {
                LOGGER.warn(scenario.status)
                LOGGER.info("Test failed. Taking screenshot")
                val screenshot = takesScreenshot.getScreenshotAs(FILE)
                val format = SimpleDateFormat(DATE_FORMAT)
                val imageFile = File(screenshotFolder, format.format(Date()) + "-" + scenario.id + ".png")
                FileUtils.copyFile(screenshot, imageFile)
            }
        }
    }
    
    • Cucumber unfortunately doesn’t distinguish between a failed test and a skipped one. Thus, screenshots will be taken for the failed test as well as for the skipped tests run after it.
    • There’s no global hook in Cucumber. ensureScreenshotFolderExists() will thus be run before each scenario. This approach requires managing state, so that initialization happens only once.

    Conclusion

    The final result is a working end-to-end test harness that non-technical people are able to understand. Even better, using Cucumber over TestNG allowed us to improve the code with Dependency Injection. The setup of Cucumber over Selenium was not trivial, but quite achievable with a little effort.

    Despite all the above good parts, the harness fails… sometimes, for reasons unrelated to the code, as all end-to-end tests do. That’s the reason I’m extremely reluctant to put it in the build process: if that happened every so often, the team would lose confidence in the harness, and all would have been for nothing. For now, we’ll run it manually before promoting the application to an environment.

  • SpringOne2GX 2015

    This week, I had the privilege to talk at SpringOne2GX in Washington D.C., in not only one but two talks.

    Apart from preparing and rehearsing, I also used the occasion to attend some talks. Here follows a summary of the best.

    Spring Framework, the ultimate configuration battle

    The talk compared 3 ways to configure a Spring application: good old XML, JavaConfig and Groovy. There were a number of use cases, and each was implemented by one of the speakers in his dedicated flavor. The talk was very entertaining, as the 3 speakers seemed to really enjoy their show. I think JavaConfig was downplayed in some aspects, such as not using an anonymous inner class, to make Groovy shinier. However, I learned that XML and Groovy return bean definitions that are instantiated later, while JavaConfig returns the beans directly.

    Hands on Spring Security

    Nice talk from Spring Security’s lead himself, with plenty of demos highlighting different security problems. For me, the magical moment was when the browser displayed an image that triggered a JS script - thanks to Internet Explorer’s content sniffing. I think security is often undervalued, and that talk was a good reminder.

    Intro to Spring Boot for the web tier

    Spring Boot developers showed how to kick-start a web application, from an empty app to a complete, fully-configured one. A step-by-step demo, the way I like them, with each step the basis for the next one. Special mention to the error page; it’s worth a look.

    Apache Spark for Big Data Processing

    The talk was separated into two parts, with a speaker for each: the first presented raw Apache Spark, and it was quite possible to follow thanks to the many drawings and a demo; the second described Spring XD, but unfortunately, bullet-point slides were not enough given my lack of prior knowledge.

    Comparing Hot JavaScript Frameworks: AngularJS, Ember.js and React.js

    This had nothing to do with Spring, but since I don’t do JavaScript, I decided to attend to get an overview of the beasts. Plus, the speaker was Matt Raible, who has kind of specialized in comparing frameworks, and he’s a good speaker. The talk was entertaining, but the conclusion was the usual one: do as you want - which kind of defeats the purpose.

    Reactive Web Applications

    Last but for sure not least, this talk really blew my mind. The good folks at Spring are aware of the Reactive wave and are surfing it at full speed. In this session, the presenters demoed a web application that implemented the Reactive paradigm using different APIs (I remember RxJava and Netty, but I’m afraid there were more): the server served content in a non-blocking way, depending on the speed of the subscribing client! When this becomes generally available, web developers - myself included - will have to rethink the way we model client-server interactions. No more request-response, but infinite streams…

    Conclusion

    The 3 main ideas I came back with are Spring Boot, Cloud Foundry and Reactive. Each is either already in place and evolving at light speed, or soon to be implemented. Watch for them; they’re going to be the next big things (or already are).

    However, as always, talks come only second to social interactions: meeting conference pals and getting acquainted with new people and their ideas. It has been fun, folks; hope to see you next year!

    Categories: Event Tags: spring
  • Final release of Integration Testing from the Trenches

    Writing a book is a journey. At the beginning of the journey, you mostly know where you want to go, but have only a vague notion of the way to get there and the time it will take. I’ve finally released the paperback version of Integration Testing from the Trenches on Amazon, and that means this specific journey is at an end.

    The book starts with a very generic discussion about testing and continues by defining Integration Testing in comparison to Unit Testing. The next chapter compares the respective merits of JUnit and TestNG. It is followed by a complete description of how to make a design testable: what works for Unit Testing also works for Integration Testing. Testing in software relies on automation, so the specific usage of the Maven build tool - as well as Gradle - with regard to Integration Testing is described. Dependencies on external resources make integration tests more fragile, so faking those resources makes the tests more robust. Those resources include databases, the file system, SOAP and REST web services, etc. The most important dependency in any application is the container; the last chapters are dedicated to the Spring framework, including Spring MVC, and Java EE.

    In this journey, I also dared ask Josh Long of Spring fame and Aslak Knutsen, team lead of the Arquillian project, to write forewords to the book - and I’ve been delighted to have them both answer positively. Thank you, guys!

    I’ve also talked on the subject at some JUGs and European conferences: JavaDay Kiev, Joker, Agile Tour London, and JUG Lyon, and will again at JavaLand, DevIt, TopConf Romania and GeeCon. I hope that by doing so, Integration Testing will be used more effectively on projects and with bigger ROI.

    Should you want to go further, the book is available in multiple formats:
    1. A paperback version on Amazon for $49.99
    2. Electronic versions for Mac, Kindle and plain old PDF, with more open pricing: starting from $21.10, with a suggested price of $31.65. Note you can get it in all formats to read on all your devices.

    If you’re already a reader and you like it, please feel free to recommend it. If you don’t, I welcome your feedback in the comments section. Of course, if neither - I encourage you to get a book and see for yourself!

  • Improving the Vaadin 4 Spring project with a simpler MVP

    I’ve been using the Vaadin 4 Spring library on my current project, and this has been a very pleasant experience. However, in the middle of the project, a colleague of mine decided to “improve the testability”. The intention was laudable, though the project already tried to implement the MVP pattern (please check this article for more detailed information). Instead of correcting the mistakes here and there, he refactored the whole codebase using the provided MVP module… IMHO, this was a huge mistake. In this article, I’ll try to highlight the things that bug me in the existing implementation, and offer an alternative solution.

    The existing MVP implementation consists of a single class. Here it is, abridged for readability purposes:

    public abstract class Presenter<V extends View> {
    
        @Autowired
        private SpringViewProvider viewProvider;
    
        @Autowired
        private EventBus eventBus;
    
        @PostConstruct
        protected void init() {
            eventBus.subscribe(this);
        }
    
        public V getView() {
            V result = null;
            Class<?> clazz = getClass();
            if (clazz.isAnnotationPresent(VaadinPresenter.class)) {
                VaadinPresenter vp = clazz.getAnnotation(VaadinPresenter.class);
                result = (V) viewProvider.getView(vp.viewName());
            }
            return result;
        }
    
        // Other plumbing code
    }
    

    This class is quite opinionated and suffers from the following drawbacks:

    1. It relies on field auto-wiring, which makes it extremely hard to unit test Presenter classes. As proof, the provided test class is not a unit test, but an integration test.
    2. It relies solely on component scanning, which prevents explicit dependency injection.
    3. It enforces the implementation of the View interface, whether required or not. When not using the Navigator, it makes the implementation of an empty enterView() method mandatory.
    4. It takes responsibility for creating the View from the view provider.
    5. It couples the Presenter and the View through its @VaadinPresenter annotation, preventing a single Presenter from handling different View implementations.
    6. It requires explicitly calling the init() method of the Presenter, as the @PostConstruct annotation on a superclass is not called when the subclass has one of its own.

    I’ve developed an alternative class that tries to address the previous points - and is also simpler:

    public abstract class Presenter<T> {
    
        private final T view;
        private final EventBus eventBus;
    
        public Presenter(T view, EventBus eventBus) {
            Assert.notNull(view);
            Assert.notNull(eventBus);
            this.view = view;
            this.eventBus = eventBus;
            eventBus.subscribe(this);
        }
    
        // Other plumbing code
    }
    

    This class makes every subclass easily unit-testable, as the following snippets prove:

    public class FooView extends Label {}
    
    public class FooPresenter extends Presenter<FooView> {
    
        public FooPresenter(FooView view, EventBus eventBus) {
            super(view, eventBus);
        }
    
        @EventBusListenerMethod
        public void onNewCaption(String caption) {
            getView().setCaption(caption);
        }
    }
    
    public class PresenterTest {
    
        private FooPresenter presenter;
        private FooView fooView;
        private EventBus eventBus;
    
        @Before
        public void setUp() {
            fooView = new FooView();
            eventBus = mock(EventBus.class);
            presenter = new FooPresenter(fooView, eventBus);
        }
    
        @Test
        public void should_manage_underlying_view() {
            String message = "anymessagecangohere";
            presenter.onNewCaption(message);
            assertEquals(message, fooView.getCaption());
        }
    }
    

    The same Integration Test as for the initial class can also be handled, using explicit dependency injection:

    public class ExplicitPresenter extends Presenter<FooView> {
    
        public ExplicitPresenter(FooView view, EventBus eventBus) {
            super(view, eventBus);
        }
    
        @EventBusListenerMethod
        public void onNewCaption(String caption) {
            getView().setCaption(caption);
        }
    }
    
    @Configuration
    @EnableVaadin
    public class ExplicitConfig {
    
        @Autowired
        private EventBus eventBus;
    
        @Bean
        @UIScope
        public FooView fooView() {
            return new FooView();
        }
    
        @Bean
        @UIScope
        public ExplicitPresenter fooPresenter() {
            return new ExplicitPresenter(fooView(), eventBus);
        }
    }
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = ExplicitConfig.class)
    @VaadinAppConfiguration
    public class ExplicitPresenterIT {
    
        @Autowired
        private ExplicitPresenter explicitPresenter;
    
        @Autowired
        private EventBus eventBus;
    
        @Test
        public void should_listen_to_message() {
            String message = "message_from_explicit";
            eventBus.publish(this, message);
            assertEquals(message, explicitPresenter.getView().getCaption());
        }
    }
    

    Last but not least, this alternative also lets you use auto-wiring and component scanning if you feel like it! The only difference is that it enforces constructor auto-wiring instead of field auto-wiring (in my eyes, this counts as a plus, albeit a little more verbose):

    @UIScope
    @VaadinComponent
    public class FooView extends Label {}
    
    @UIScope
    @VaadinComponent
    public class AutowiredPresenter extends Presenter<FooView> {
    
        @Autowired
        public AutowiredPresenter(FooView view, EventBus eventBus) {
            super(view, eventBus);
        }
    
        @EventBusListenerMethod
        public void onNewCaption(String caption) {
            getView().setCaption(caption);
        }
    }
    
    @ComponentScan
    @EnableVaadin
    public class ScanConfig {}
    
    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(classes = ScanConfig.class)
    @VaadinAppConfiguration
    public class AutowiredPresenterIT {
    
        @Autowired
        private AutowiredPresenter autowiredPresenter;
    
        @Autowired
        private EventBus eventBus;
    
        @Test
        public void should_listen_to_message() {
            String message = "message_from_autowired";
            eventBus.publish(this, message);
            assertEquals(message, autowiredPresenter.getView().getCaption());
        }
    }
    

    The good news is that this module is now part of the vaadin4spring project on GitHub. If you need MVP for your Vaadin Spring application, you’re just a click away!

    Categories: JavaEE Tags: open source, spring, vaadin
  • Spring profiles or Maven profiles?

    Deploying on different environments requires configuration; e.g. database URL(s) must be set for each dedicated environment. In most - if not all - Java applications, this is achieved through a .properties file, loaded through the appropriately-named Properties class. During development, there’s no reason not to use the same configuration system, e.g. using an embedded h2 database instead of the production one.
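
    As a reminder, that traditional mechanism boils down to a few lines (a sketch; the file and property names are made up):

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class EnvironmentConfiguration {

        // Loads e.g. the database URL from a per-environment properties file
        public static Properties load() throws IOException {
            Properties properties = new Properties();
            try (InputStream stream =
                    EnvironmentConfiguration.class.getResourceAsStream("/application.properties")) {
                properties.load(stream);
            }
            return properties;
        }
    }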

    Unfortunately, Java EE applications generally fall outside this usage, as the good practice in deployed environments (i.e. all environments save the local developer machine) is to use a JNDI datasource instead of a local connection. Even Tomcat and Jetty - which implement only a fraction of the Java EE Web Profile - provide this nifty and useful feature.

    As an example, let’s take the Spring framework. In this case, two datasource configuration fragments have to be defined:

    • For deployed environment, one that specifies the JNDI location lookup
    • For local development (and test), one that configures a connection pool around a direct database connection

    Simple properties files cannot manage this kind of switch; one has to use the build system. Shameless self-promotion: a detailed explanation of this setup for integration testing purposes can be found in my book, Integration Testing from the Trenches.

    With the Maven build system, changing between configurations is achieved through so-called profiles at build time. Roughly, a Maven profile is a portion of a POM that can be enabled (or not). For example, the following profile snippet replaces Maven’s standard resource directory with a dedicated one.

    <profiles>
        <profile>
          <id>dev</id>
          <build>
            <resources>
              <resource>
                <directory>profile/dev</directory>
                <includes>
                  <include>**/*</include>
                </includes>
              </resource>
            </resources>
          </build>
        </profile>
    </profiles>
    

    Activating one or more profiles is as easy as using the -P switch with their ids on the command line when invoking Maven. The following command will activate the dev profile (provided it is set in the POM):

    mvn package -Pdev

    Now, let’s add a simple requirement: as I’m quite lazy, I want to exert the minimum effort possible to package the application along with its final production release configuration. This translates into making the production configuration, i.e. the JNDI fragment, the default one, and using the development fragment explicitly when necessary. Seasoned Maven users know how to implement that: configure the relevant profile to be active by default.

    <profile>
      <id>dev</id>
      <activation>
        <activeByDefault>true</activeByDefault>
      </activation>
      ...
    </profile>
    

    Icing on the cake, profiles can even be set in Maven settings.xml files. Seems too good to be true? Well, very seasoned Maven users know that as soon as a single profile is explicitly activated, the default profile is deactivated. Previous experiences have taught me that because profiles are so easy to implement, they are used (and overused), so the default one easily gets lost in the process. For example, at one such job, a profile was used on the Continuous Integration server to set some properties for the release in a dedicated settings file. In order to keep the right configuration, one had to a) know about the sneaky profile, b) know it would break the default profile, and c) explicitly set the not-default-anymore profile.

    Additional details about the dangers of Maven profiles for building artifacts can be found in this article.

    Another drawback of this global approach is the tendency toward over-fragmentation of the configuration files. I prefer coarse-grained configuration files, each dedicated to a layer or a use case. For example, I’d like to declare at least the datasource, the transaction manager and the entity manager in the same file, possibly along with the different repositories.

    Enter Spring profiles. As opposed to Maven profiles, Spring profiles are activated at runtime. I’m not sure whether this is a good or a bad thing, but the implementation makes real default configurations possible, with the help of @Conditional annotations (see my previous article for more details). That way, the wrapper-around-the-connection bean gets created when the dev profile is activated; when it’s not, the JNDI lookup bean is. This kind of configuration is implemented in the following snippet:

    @Configuration
    public class MyConfiguration {
    
        @Bean
        @Profile("dev")
        public DataSource dataSource() throws Exception {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/conditional");
            dataSource.setUsername("sa");
            return dataSource;
        }
    
        @Bean
        @ConditionalOnMissingBean(DataSource.class)
        public DataSource fakeDataSource() {
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
        }
    }
    

    In this context, profiles are just a way to activate specific beans; the real magic is achieved through the different @Conditional annotations.
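
    For completeness, activating the dev profile at runtime goes through the spring.profiles.active property, e.g. as a JVM system property (one mechanism among several; the artifact name is a placeholder):

    java -Dspring.profiles.active=dev -jar myapp.jar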

    Note: it is advised to create a dedicated annotation to avoid String typos, be more refactoring-friendly and improve search capabilities on the code:

    @Retention(RUNTIME)
    @Target({TYPE, METHOD})
    @Profile("dev")
    public @interface Development {}
    

    Now, this approach has some drawbacks as well. The most obvious problem is that the final archive will contain extra libraries - those used exclusively for development. This is readily apparent when one uses Spring Boot. One such extra library is the h2 database, a whopping 1.7 MB jar file. There are two main counterarguments to this:

    • First, if you're concerned about a couple of additional MB, then your main issue is probably not on the software side, but on the disk management side. Perhaps a virtualization layer such as VMware or Xen could help?
    • Then, if need be, you can still configure the build system to streamline the produced artifact (see the sketch below).
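
    Such streamlining could look like the following sketch, assuming a WAR packaging (the h2 jar pattern is an example):

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-war-plugin</artifactId>
      <configuration>
        <!-- keep development-only libraries out of the final artifact -->
        <packagingExcludes>WEB-INF/lib/h2-*.jar</packagingExcludes>
      </configuration>
    </plugin>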

    The second drawback of Spring profiles is that, along with extra libraries, the development configuration will be packaged into the final artifact as well. To be honest, when I first stumbled upon this approach, it was a no-go for me. Then, as usual, I thought more and more about it, and came to the following conclusion: there’s nothing wrong with that. Packaging the development configuration has no consequence whatsoever, whether it is set through XML or JavaConfig. Think about it: once an archive has been created, it is considered sealed, even when the application server explodes it for deployment purposes. It is considered very bad practice to touch the exploded archive in any case. So what would be the reason not to package the development configuration along? The only reason I can think of is to be clean, from a theoretical point of view. Being a pragmatist, I think the advantages of using Spring profiles far outweigh this drawback.

    In my current project, I created a single configuration fragment with all the beans that depend on the environment: the datasource and the Spring Security authentication provider. For the latter, the production configuration uses an internal LDAP, while the development bean provides an in-memory provider.

    So on one hand, we’ve got Maven profiles, which have definite issues but which we are familiar with; on the other hand, we’ve got Spring profiles, which are brand new and hurt our natural inclination, but get the job done. I’d suggest giving them a try: I did, and so far I’m happy with them.

    Categories: Java Tags: configuration, maven, spring
  • Optional dependencies in Spring

    I’m a regular Spring framework user and I think I know the framework pretty well, but it seems I’m always stumbling upon something useful I didn’t know about. At Devoxx, I learned that you can express optional dependencies using Java 8’s new Optional<T> type. Note that before Java 8, optional dependencies could be auto-wired using @Autowired(required = false), but then you had to check for null.

    How good is that? Well, I can think of a million use cases, but here are some off the top of my head:

    • Prevent usage of infrastructure dependencies, depending on the context. For example, in a development environment, one wouldn’t need to send metrics to a MetricRegistry
    • Provide defaults when required infrastructure dependencies are not provided, e.g. an h2 datasource
    • The same could be done in a testing environment.
    • etc.

    The implementation is very straightforward:

    @ContextConfiguration(classes = OptionalConfiguration.class)
    public class DependencyPresentTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private Optional<HelloService> myServiceOptional;
    
        @Test
        public void should_return_hello() {
            String sayHello = null;
            if (myServiceOptional.isPresent()) {
                sayHello = myServiceOptional.get().sayHello();
            }
    
            assertNotNull(sayHello);
            assertEquals(sayHello, "Hello!");
        }
    }
    

    At this point, not only does the code compile fine, but the context loads whether or not the dependency is present. If the OptionalConfiguration contains the HelloService bean, the above test succeeds; if it doesn’t, the Optional is empty and the test fails.
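
    For completeness, here’s a minimal sketch of what the configuration and service could look like (both names are taken from the test above; their content is assumed):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    public class HelloService {

        public String sayHello() {
            return "Hello!";
        }
    }

    @Configuration
    public class OptionalConfiguration {

        // Remove this bean declaration and the above test fails
        @Bean
        public HelloService helloService() {
            return new HelloService();
        }
    }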

    This pattern is very elegant, and I suggest you add it to your bag of available tools.

    Categories: Java Tags: java 8, spring
  • Avoid conditional logic in @Configuration

    Integration Testing Spring applications mandates creating small, dedicated configuration fragments and assembling them either during a normal run of the application or during tests. Even in the latter case, different fragments can be assembled in different tests.

    However, this practice doesn’t handle the use case where I want to use the application in two different environments. As an example, I might want to use a JNDI datasource in deployed environments and a direct connection when developing on my local machine. Assembling different fragment combinations is not possible, as I want to run the application in both cases, not test it.

    My only requirement is that the default should use the JNDI datasource, while activating a flag - a profile - should switch to the direct connection. The Pavlovian reflex in this case would be to add a simple condition to the @Configuration class.

    @Configuration
    public class MyConfiguration {
    
        @Autowired
        private Environment env;
    
        @Bean
        public DataSource dataSource() throws Exception {
    
            if (env.acceptsProfiles("dev")) {
                org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
                dataSource.setDriverClassName("org.h2.Driver");
                dataSource.setUrl("jdbc:h2:file:~/conditional");
                dataSource.setUsername("sa");
                return dataSource;
            }
    
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional"); 
        }
    }
    

    Starting to use this kind of flow-control statement is the beginning of the end: it will lead to adding more control-flow statements in the future, which will in turn lead to a tangled mess of spaghetti configuration, and ultimately to an unmaintainable application.

    Spring Boot offers a nice alternative to handle this use case with different flavors of @ConditionalXXX annotations. Using them has the following advantages while doing the job: they are easy to use, readable and limited. While the last point might seem a drawback, it’s the biggest asset IMHO (not unlike Maven plugins). Code is powerful, and with great power must come great responsibility - something that is hardly sustainable during the course of a project with deadlines and pressure from the higher-ups. That’s the main reason one of my colleagues advocates XML over JavaConfig: with XML, you’re sure there won’t be any abuse while the project runs its course.

    But let’s stop the philosophy and get back to @ConditionalXXX annotations. Basically, putting such an annotation on a @Bean method will invoke the method and register the bean in the factory based on a dedicated condition. There are many of them; here are some important ones:

    • Dependent on Java version, newer or older - @ConditionalOnJava
    • Dependent on a bean present in factory - @ConditionalOnBean, and its opposite, dependent on a bean name not present - @ConditionalOnMissingBean
    • Dependent on a class present on the classpath - @ConditionalOnClass, and its opposite @ConditionalOnMissingClass
    • Whether it's a web application or not - @ConditionalOnWebApplication and @ConditionalOnNotWebApplication
    • etc.

    Note that the whole list of existing conditions can be browsed in Spring Boot’s org.springframework.boot.autoconfigure.condition package.

    With this information, we can migrate the above snippet to a more robust implementation:

    @Configuration
    public class MyConfiguration {
    
        @Bean
        @Profile("dev")
        public DataSource dataSource() throws Exception {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/localisatordb");
            dataSource.setUsername("sa");
            return dataSource;
        }
    
        @Bean
        @ConditionalOnMissingBean(DataSource.class)
        public DataSource fakeDataSource() {
            JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
            return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
        }
    }
    

    The configuration is now neatly separated into two different methods: the first will be called only when the dev profile is active, the second only when the first is not called - hence when the dev profile is not active.

    Finally, the best thing about this feature is that it is easily extensible, as it depends only on the @Conditional annotation and the Condition interface (which are part of Spring proper, not Spring Boot).
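
    As an illustration of that extension point, implementing a custom condition boils down to a single method (a sketch; the ci.server property is made up):

    import org.springframework.context.annotation.Condition;
    import org.springframework.context.annotation.ConditionContext;
    import org.springframework.core.type.AnnotatedTypeMetadata;

    public class OnCiServerCondition implements Condition {

        // Matches when a ci.server property is present in the environment
        @Override
        public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
            return context.getEnvironment().containsProperty("ci.server");
        }
    }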

    Here’s a simple example in Maven/IntelliJ format for you to play with. Have fun!

    Categories: Java Tags: configuration, spring
  • Metrics, metrics everywhere

    With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

    1. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and on which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind), and metrics might also be required for different architectures. And having all metrics in the same backend means they can be correlated easily.
    2. Metrics, as any other NFR (e.g. logging and exception handling), should be designed and managed upfront, not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize NFRs were important. Trust me when I say it was gory - and it cost much more than if it had been designed in the early phases of the project.
    3. Metrics have an overhead. However, without metrics, it's not possible to increase performance. Just accept that and live with it.

    The inputs are the following: the application is Spring MVC-based, and metrics have to be aggregated in Graphite. We will start by using the excellent Metrics project: not only does it get the job done, but its documentation is of very high quality, and it’s available under the friendly open-source Apache v2.0 license.

    That said, let’s imagine a “standard” base architecture to manage those components.

    First, though Metrics offers a Graphite endpoint, using it directly would require configuration in each environment, which makes things harder, especially on developers’ workstations. To manage this, we’ll send metrics to JMX and introduce jmxtrans as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration when none is needed - and has no impact on performance.

    Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are - or sometimes because we didn’t browse the documentation. My point of view as a software engineer is that I’d rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already integrates with Metrics through the Actuator component. However, it only provides GaugeService - to send unique values - and CounterService - to increment/decrement values. This might be good enough for FRs, but not for NFRs, so we might want to tweak things a little.
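
    For reference, here’s roughly how those two Actuator services are used (a sketch; the CheckoutMetrics class and metric names are made up):

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.boot.actuate.metrics.CounterService;
    import org.springframework.boot.actuate.metrics.GaugeService;
    import org.springframework.stereotype.Service;

    @Service
    public class CheckoutMetrics {

        private final CounterService counterService;
        private final GaugeService gaugeService;

        @Autowired
        public CheckoutMetrics(CounterService counterService, GaugeService gaugeService) {
            this.counterService = counterService;
            this.gaugeService = gaugeService;
        }

        public void orderPlaced(double amount) {
            counterService.increment("checkout.orders");   // increment/decrement a counter
            gaugeService.submit("checkout.amount", amount); // send a unique value
        }
    }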

    The flow would be designed like this: Code > Spring Boot > Metrics > JMX > jmxtrans > Graphite

    The starting point is to create an aspect, as performance metric is a cross-cutting concern:

    @Aspect
    public class MetricAspect {
    
        private final MetricSender metricSender;
    
        @Autowired
        public MetricAspect(MetricSender metricSender) {
            this.metricSender = metricSender;
        }
    
        @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) ||execution(* ch.frankel.blog.metrics.dice..*(..))")
        public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
    
            StopWatch stopWatch = metricSender.getStartedStopWatch();
    
            try {
                return pjp.proceed();
            } finally {
                Class<?> clazz = pjp.getTarget().getClass();
                String methodName = pjp.getSignature().getName();
                metricSender.stopAndSend(stopWatch, clazz, methodName);
            }
        }
    }
    

    The only thing out of the ordinary is the usage of autowiring, as aspects don’t seem to be able to be the target of explicit wiring (yet?). Also notice that the aspect itself doesn’t interact with the Metrics API; it only delegates to a dedicated component:

    public class MetricSender {
    
        private final MetricRegistry registry;
    
        public MetricSender(MetricRegistry registry) {
            this.registry = registry;
    
        }
    
        private Histogram getOrAdd(String metricsName) {
            Map<String, Histogram> registeredHistograms = registry.getHistograms();
            Histogram registeredHistogram = registeredHistograms.get(metricsName);
            if (registeredHistogram == null) {
                Reservoir reservoir = new ExponentiallyDecayingReservoir();
                registeredHistogram = new Histogram(reservoir);
                registry.register(metricsName, registeredHistogram);
            }
            return registeredHistogram;
        }
    
        public StopWatch getStartedStopWatch() {
            StopWatch stopWatch = new StopWatch();
            stopWatch.start();
            return stopWatch;
        }
    
        private String computeMetricName(Class<?> clazz, String methodName) {
            return clazz.getName() + '.' + methodName;
        }
    
        public void stopAndSend(StopWatch stopWatch, Class<?> clazz, String methodName) {
            stopWatch.stop();
            String metricName = computeMetricName(clazz, methodName);
            getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
        }
    }
    

    The sender does several interesting things (while holding no mutable state):

    • It returns a new StopWatch for the aspect to pass back after method execution
    • It computes the metric name depending on the class and the method
    • It stops the StopWatch and sends the time to the MetricRegistry
    • Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our needs.

    The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method.

    @Configuration
    public class MetricsConfiguration {
    
        @Autowired
        private MetricRegistry metricRegistry;
    
        @Bean
        public MetricSender metricSender() {
            return new MetricSender(metricRegistry);
        }
    
        @PostConstruct
        public void connectRegistryToJmx() {
            JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
            reporter.start();
        }
    }
    

    JConsole should display the newly registered metrics. Icing on the cake, all default Spring Boot metrics are also available.

    Sources for this article are available in Maven “format”.

  • Using exceptions when designing an API

    Many know the tradeoff of using exceptions while designing an application:

    • On one hand, using try-catch blocks nicely segregates regular code from exception-handling code
    • On the other hand, using exceptions has a definite performance cost for the JVM

    Every time I’ve faced this quandary, I’ve ruled in favor of the former, because “premature optimization is evil”. However, this week proved to me that exception handling is a very serious decision when designing an API.

    I’ve been working on improving the performance of our application, and I’ve noticed many silent catches coming from the Spring framework (with the help of the excellent dynaTrace tool). The guilty lines come from the RequestContext.initContext() method:

    if (this.webApplicationContext.containsBean(REQUEST_DATA_VALUE_PROCESSOR_BEAN_NAME)) {
        this.requestDataValueProcessor = this.webApplicationContext.getBean(
        REQUEST_DATA_VALUE_PROCESSOR_BEAN_NAME, RequestDataValueProcessor.class);
    }
    

    Looking at the JavaDocs, it is clear that this method (and the lines above) is called each time the Spring framework handles a request. For web applications under heavy load, that means a lot! I provided a pass-through implementation of RequestDataValueProcessor and patched one node of the cluster. After running more tests, we noticed response times were on average 5% faster on the patched node compared to the unpatched one. This is not my point, however.

    Should an exception be thrown when the bean is not present in the context? I think not… as the above snippet confirms. Other situations, e.g. injecting dependencies, might call for an exception to be thrown, but in that case, it has to be the responsibility of the caller code to throw it or not, depending on the exact situation.

    There are plenty of viable alternatives to throwing exceptions:

    • Returning null: the intent of the code is not explicit without looking at the JavaDocs, making this the worst option on the list
    • Returning an Optional<T>: this makes the intent explicit compared to returning null. Of course, it requires Java 8
    • Returning Guava's Optional<T>: for those of us who are not fortunate enough to have Java 8
    • Returning one's own Optional<T>: if you don't use Guava and prefer to embed your own copy of the class instead of relying on an external library
    • Returning a Try: cook up something like Scala's Try, which wraps either (hence its old name - Either) the returned bean or an exception. In this case, however, the exception is not thrown but used like any other object - hence there will be no performance problem (see the sketch below)
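
    As an illustration of that last option, here’s a bare-bones sketch of such a wrapper (hypothetical; real implementations like Scala’s offer much more):

    // Wraps either a value or the exception that prevented its computation
    public final class Try<T> {

        private final T value;
        private final Exception failure;

        private Try(T value, Exception failure) {
            this.value = value;
            this.failure = failure;
        }

        public static <T> Try<T> success(T value) {
            return new Try<>(value, null);
        }

        public static <T> Try<T> failure(Exception failure) {
            // The exception is stored as a regular object, not thrown
            return new Try<>(null, failure);
        }

        public boolean isSuccess() {
            return failure == null;
        }

        public T get() {
            if (!isSuccess()) {
                throw new IllegalStateException("No value present", failure);
            }
            return value;
        }
    }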

    Conclusion: when designing an API, one should really keep exceptions for exceptional situations only.

    As for the current situation, Spring’s BeanFactory class lies at the center of a web of dependencies, and its multiple getBean() method implementations cannot easily be replaced with one of the above options without forfeiting backward compatibility. One solution, however, would be to provide additional getBeanSafe() methods (or a more relevant name) using one of the above options, and then replace usages of their original counterparts step by step inside the Spring framework.

    Categories: Java Tags: design, exception, spring
  • Spring configuration modularization for Integration Testing

    Object-Oriented Programming advocates modularization in order to build small and reusable components. There are, however, other reasons for it: in the case of the Spring framework, modularization enables Integration Testing - the ability to test the system, or parts of it, including its assembly configuration.

    Why is it so important to test the system assembled with the final configuration? Let’s take a simple example: the making of a car. Unit Testing the car would be akin to testing every nut and bolt of the car separately, while Integration Testing the car would be like driving it on a circuit. By testing only the car’s components separately, selling the assembled car is a huge risk, as nothing guarantees it will behave correctly in real-life conditions.

    Now that we have asserted that Integration Testing is necessary to guarantee an adequate level of internal quality, it’s time to enable Integration Testing with the Spring framework. Integration Testing is based on the notion of the System Under Test (SUT). Defining the SUT means defining the boundaries between what is tested and its dependencies. In nearly all cases, the test setup will require providing some kind of test double for each required dependency. Configuring those test doubles can only be achieved by modularizing the Spring configuration, so that they can replace dependent beans located outside the SUT.

    _Fig. 1 - Sample bean dependency diagram_

    Spring’s DI configuration comes in 3 different flavors: XML - the legacy way - autowiring, and the newest, JavaConfig. We’ll have a look at how modularization can be achieved for each flavor. Mixed DI modularization can be deduced from each separate entry.

    Autowiring

    Autowiring is an easy way to assemble Spring applications. It is achieved through the use of either @Autowired or @Inject. Let’s cover autowiring quickly: as injection is implicit, there’s no easy way to modularize configuration. Applications using autowiring will just have to migrate to another DI flavor to allow for Integration Testing.

    XML

    XML is the legacy way to inject dependencies, but is still in use. Consider the following monolithic XML configuration file:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
         xmlns:jee="http://www.springframework.org/schema/jee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans.xsd
            http://www.springframework.org/schema/jee
            http://www.springframework.org/schema/jee/spring-jee.xsd">
      <jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource" />
      <bean id="productRepository" class="ProductRepository">
        <constructor-arg ref="dataSource" />
      </bean>
      <bean id="customerRepository" class="CustomerRepository">
        <constructor-arg ref="dataSource" />
      </bean>
      <bean id="orderRepository" class="OrderRepository">
        <constructor-arg ref="dataSource" />
      </bean>
      <bean id="orderService" class="OrderService">
        <constructor-arg ref="productRepository" index="0" />
        <constructor-arg ref="customerRepository" index="1" />
        <constructor-arg ref="orderRepository" index="2" />
      </bean>
    </beans>
    

    At this point, Integration Testing orderService is not as easy as it should be. In particular, we need to:

    • Download the application server
    • Configure the server for the jdbc/MyDataSource data source
    • Deploy all classes to the server
    • Start the server
    • Stop the server after the test(s)

    Of course, all the previous tasks have to be automated! Though not impossible thanks to tools such as Arquillian, this is contrary to the KISS principle. Overcoming this problem, and making our life (as well as test maintenance) easier in the process, requires tooling and design. On the tooling part, we’ll be using a local database. Usually, such a database is of the in-memory kind, e.g. H2. On the design part, this requires separating our beans by creating two different configuration fragments: one solely dedicated to the datasource to be faked, and the other one for the beans constituting the SUT.

    Then, we’ll use a Maven classpath trick: Maven puts the test classpath in front of the main classpath when executing tests. This way, files found in the test classpath “override” similarly-named files in the main classpath. Let’s create two configuration file fragments:

    • The “real” JNDI datasource as in the monolithic configuration
    <beans...>
      <jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource" />
    </beans>
    
    • The Fake datasource
    <beans...>
      <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource">
          <property name="driverClassName" value="org.h2.Driver" />
          <property name="url" value="jdbc:h2:~/test" />
          <property name="username" value="sa" />
          <property name="maxActive" value="1" />
      </bean>
    </beans>
    

    Note we are using a Tomcat datasource object; this requires the org.apache.tomcat:tomcat-jdbc library on the test classpath. Also note the maxActive property: it sets the maximum number of connections to the database. It is advised to always set it to 1 in test scenarios, so that connection pool exhaustion bugs can be detected as early as possible.

    The final layout is the following:

    _Fig. 2 - Project structure for Spring XML configuration Integration Testing_

    1. JNDI datasource
    2. Other beans
    3. Fake datasource

    The final main-config.xml file looks like:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans...>
      <import resource="classpath:datasource-config.xml" />
      <!-- other beans go here -->
    </beans>
    

    Such a structure is the basis for enabling Integration Testing.
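
    To make this concrete, here’s a minimal sketch of what such an Integration Test could look like, assuming TestNG and the spring-test module, with the fake datasource-config.xml fragment sitting in the test classpath; the test class name is illustrative only.

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
    import org.testng.annotations.Test;

    @ContextConfiguration("classpath:main-config.xml")
    public class OrderServiceIntegrationTest extends AbstractTestNGSpringContextTests {

        // The SUT: wired with the fake datasource, since the test classpath
        // version of datasource-config.xml overrides the main one
        @Autowired
        private OrderService orderService;

        @Test
        public void should_wire_order_service_with_fake_datasource() {
            // Exercise orderService here; it talks to the in-memory H2 database
        }
    }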

    JavaConfig

    JavaConfig is the most recent way to configure Spring applications, bringing both compile-time safety (like autowiring) and explicit configuration (like XML).

    The above datasources fragments can be “translated” in Java as follows:

    • The “real” JNDI datasource as in the monolithic configuration
    @Configuration
    public class DataSourceConfig {
        @Bean
        public DataSource dataSource() throws Exception {
            Context ctx = new InitialContext();
            return (DataSource) ctx.lookup("jdbc/MyDataSource");
        }
    }
    
    • The Fake datasource
    @Configuration
    public class FakeDataSourceConfig {

        @Bean
        public DataSource dataSource() {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:~/test");
            dataSource.setUsername("sa");
            dataSource.setMaxActive(1);
            return dataSource;
        }
    }
    

    However, there are two problems that appear when using JavaConfig.

    1. It’s not possible to use the same classpath trick with an import as with XML previously, as Java forbids having 2 (or more) classes with the same qualified name loaded by the same classloader (which is the case with Maven). Therefore, JavaConfig configuration fragments shouldn’t explicitly import other fragments but should leave the fragment-assembly responsibility to their users (application or tests), so that names can be different, e.g.:
    @ContextConfiguration(classes = {MainConfig.class, FakeDataSourceConfig.class})
    public class SimpleDataSourceIntegrationTest extends AbstractTestNGSpringContextTests {
    
        @Test
        public void should_check_something_useful() {
            // Test goes there
        }
    }
    
    2. The main configuration fragment uses the datasource bean from the other configuration fragment. This mandates that the former have a reference to the latter, which is obtained by using the @Autowired annotation (one of the few relevant usages of it).
    @Configuration
    public class MainConfig {
        @Autowired
        private DataSource dataSource;
        // Other beans go there. They can use dataSource!
    }
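
    For completeness, here’s a hedged sketch of how MainConfig’s beans could use that autowired datasource, reusing the repository classes from the XML example (constructor signatures are assumed):

    @Configuration
    public class MainConfig {

        @Autowired
        private DataSource dataSource;

        @Bean
        public OrderRepository orderRepository() {
            // dataSource comes from whichever fragment (real or fake) was assembled
            return new OrderRepository(dataSource);
        }
    }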
    

    Summary

    In this article, I showed how Integration Testing against a fake data source can be achieved by modularizing the monolithic Spring configuration into different configuration fragments, either XML or JavaConfig.

    However, the realm of Integration Testing - with Spring or without - is vast. Should you want to go further, I’ll be holding a talk on Integration Testing at Agile Tour London on Oct. 24th and at Java Days Kiev on Oct. 17th-18th.

    This article is an abridged version of a part of the Spring chapter of Integration Testing from the Trenches. Have a look at it; there’s even a free sample chapter!

    Integration Testing from the Trenches
    Categories: Java Tags: integration testing, spring
  • Easier Spring version management

    Earlier on, Spring migrated from a monolithic approach - the whole framework in a single artifact - to a modular one - bean, context, test, etc. - so that one could decide to use only the required modules. This modularity came at a cost, however: in the Maven build configuration (or the Gradle one for that matter), one had to specify the version of each used module.

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>4.0.5.RELEASE</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
                <version>4.0.5.RELEASE</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <version>4.0.5.RELEASE</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    Of course, professional Maven users would improve this POM with the following:

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
       <properties>
            <spring.version>4.0.5.RELEASE</spring.version>
        </properties>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <version>${spring.version}</version>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    There’s a more concise way to achieve the same through a BOM-typed POM (see the Maven documentation section on the import scope), available since version 3.2.6 though.

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
        <dependencyManagement>
            <dependencies>
                <dependency>
                    <groupId>org.springframework</groupId>
                    <artifactId>spring-framework-bom</artifactId>
                    <type>pom</type>
                    <version>4.0.5.RELEASE</version>
                    <scope>import</scope>
                </dependency>
            </dependencies>
        </dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    Note that Spring’s BOM only sets versions but not scopes; the latter have to be set in each user POM.

    Spring very recently released the Spring IO platform, which also includes a BOM. This BOM not only includes Spring dependencies but also other third-party libraries.

    <?xml version="1.0" encoding="UTF-8"?>
    <project...>
        ...
        <dependencyManagement>
            <dependencies>
                <dependency>
                    <groupId>io.spring.platform</groupId>
                    <artifactId>platform-bom</artifactId>
                    <type>pom</type>
                    <version>1.0.0.RELEASE</version>
                    <scope>import</scope>
                </dependency>
            </dependencies>
        </dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-webmvc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-jdbc</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.testng</groupId>
                <artifactId>testng</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    </project>
    

    There’s one single problem with the Spring IO platform’s BOM: there’s no simple mapping from the BOM version to the declared dependency versions. For example, the BOM’s 1.0.0.RELEASE maps to Spring 4.0.5.RELEASE.

    Categories: Java Tags: maven, spring
  • The right bean at the right place

    Among the different customers I worked for, I noticed a widespread misunderstanding regarding the use of Spring contexts in Spring MVC.

    Basically, you have two kinds of contexts, in a parent-child relationship:

    • The main context is where service beans are hosted. By convention, it is spawned from the /WEB-INF/applicationContext.xml file, but this location can be changed through the contextConfigLocation context parameter. Alternatively, one can use the AbstractAnnotationConfigDispatcherServletInitializer; in this case, configuration classes should be part of the array returned by the getRootConfigClasses() method.
    • Web context(s) are where the Spring MVC dispatcher servlet configuration beans and controllers can be found. Each is spawned from <servlet-name>-servlet.xml or, if using the JavaConfig above, comes from the classes returned by the getServletConfigClasses() method (see the sketch after this list)
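
    For illustration, here’s a minimal web.xml sketch wiring both contexts in the XML flavor (servlet and file names are conventions, not requirements):

    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>/WEB-INF/applicationContext.xml</param-value><!-- main context -->
    </context-param>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
    <servlet>
        <!-- web context read from /WEB-INF/dispatcher-servlet.xml, child of the main context -->
        <servlet-name>dispatcher</servlet-name>
        <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>dispatcher</servlet-name>
        <url-pattern>/</url-pattern>
    </servlet-mapping>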

    As in every parent-child relationship, there’s a catch:

    Beans from the child contexts can access beans from the parent context, but not the opposite.

    That makes sense if you picture this: I want my controllers to be injected with services, but not the other way around (it would be a funny idea to inject controllers into services). Besides, you could have multiple Spring servlets, each with its own web context, all sharing the same main context as parent. When it goes beyond controllers and services, one should decide in which context a bean should go. For some, that’s pretty evident: view resolvers, message sources and such go into the web context; for others, one has to spend some time thinking about it.

    A good rule of thumb to decide in which context a bean should go is the following: if you had multiple servlets (even if you do not), what would you want to share and what not?

    Note: this way of thinking should not be tied to your application itself, as otherwise you’d probably end up sharing message sources in the main application context, which is a (really) bad idea.

    This modularization lets you put the right bean at the right place, promoting bean reusability.

    Categories: JavaEE Tags: bean, context, spring
  • Vaadin and Spring integration through JavaConfig

    When I wrote the first version of Learning Vaadin, I hinted at how to integrate Vaadin with the Spring framework (as well as CDI). I only described the overall approach by providing a crude servlet that queried the Spring context to get the Application instance.

    At the time of Learning Vaadin 7, I was eager to work on the add-ons the community provided in terms of Spring integration. Unfortunately, I was sorely disappointed: I found only a few, and those were lacking in one way or another. The only stuff worth mentioning was an article by Petter Holmström - a Vaadin team member (and voluntary fireman) - describing how one should go about achieving Vaadin & Spring integration. It was much more advanced than my own rant but still not a true ready-to-be-used library.

    So, when I learned that the Vaadin and Spring teams had joined forces to provide a true integration library between two frameworks I love, I was overjoyed. Even better, this project was developed by none other than Petter for Vaadin and Josh Long for Pivotal. However, the project was aimed at achieving DI through autowiring. Since JavaConfig makes for cleaner and more testable code, I filed an issue to allow that. Petter kindly worked on this, and in turn, I spent some time making it work.

    The result of my experimentation with Spring Boot Vaadin integration has been published on morevaadin.com, a blog exclusively dedicated to Vaadin.

    Categories: JavaEE Tags: javaconfig, spring, vaadin
  • Integrate Spring JavaConfig with legacy configuration

    The application I’m working on now uses Spring both by parsing XML Spring configuration files in pre-determined locations and by scanning for annotation-based autowiring. I’ve already stated my stance on autowiring previously; this article only concerns itself with how I could use Spring JavaConfig without migrating the whole existing codebase in a single big refactoring.

    This is easily achieved by scanning the package where the JavaConfig class is located in the legacy Spring XML configuration file:

    <ctx:component-scan base-package="ch.frankel.blog.spring.javaconfig.config" />
    

    However, we may need to inject beans created through the old methods into our new JavaConfig file. In order to achieve this, we need to autowire those beans into JavaConfig:

    @Configuration
    public class JavaConfig {
    
        @Autowired
        private FooDao fooDao;
    
        @Autowired
        private FooService fooService;
    
        @Autowired
        private BarService barService;
    
        @Bean
        public AnotherService anotherService() {
            return new AnotherService(fooDao);
        }
    
        @Bean
        public MyFacade myFacade() {
            return new MyFacade(fooService, barService, anotherService());
        }
    }
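
    For this to work, the legacy configuration must of course declare the autowired beans. A hypothetical XML fragment could look like the following - bean classes and ids are guesses matching the fields above:

    <bean id="fooDao" class="ch.frankel.blog.spring.javaconfig.FooDao" />
    <bean id="fooService" class="ch.frankel.blog.spring.javaconfig.FooService">
        <property name="fooDao" ref="fooDao" />
    </bean>
    <bean id="barService" class="ch.frankel.blog.spring.javaconfig.BarService" />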
    

    You can find complete sources for this article (including tests) in IDEA/Maven format.

    Thanks to Josh Long and Gildas Cuisinier for the pointers on how to achieve this.

    Categories: Java Tags: javaconfig, spring
  • My case against autowiring

    Autowiring is a particular kind of wiring, where injecting dependencies is not explicit but actually managed implicitly by the container. This article tries to provide some relevant info regarding disadvantages of using autowiring. Although Spring is taken as an example, the same reasoning can apply to JavaEE’s CDI.

    Autowiring basics

    Autowiring flavors

    Autowiring comes into different flavors:

    • Autowiring by type means Spring will look for available beans that match the type used in the setter. For example, for a bean having a setCar(Car car) method, Spring will look for a Car type (or one of its subtypes) in the bean context.
    • Autowiring by name means Spring’s search strategy is based on bean names. For the previous example, Spring will look for a bean named car regardless of its type. This both requires the bean to be named (no anonymous beans here) and implies that the developer is responsible for any type mismatch (if the car bean is of type Plane).

    Autowiring is also possible for constructor injection.

    Autowiring through XML configuration

    Let’s first dispel some misconceptions: autowiring does not imply annotations. In fact, autowiring has been available through XML for ages and can be enabled with the following syntax.

    <bean autowire="byType" class="some.Clazz" />
    

    This means Spring will try to find the right type to fill the dependency(ies) by using setter(s).

    Alternative autowire parameters include:

    • by name: instead of matching by type, Spring will use bean names
    • constructor: constructor injection restricted to by-type
    • autodetect: use constructor injection, but falls back to by-type if no adequate constructor is found

    Autowiring through annotations configuration

    Available annotations are the following:

    • org.springframework.beans.factory.annotation.@Autowired - wires by type (Spring 2.5+)
    • javax.annotation.@Resource - wires by name (JavaEE 5+)
    • javax.inject.@Inject - wires by type (JavaEE 6+)
    • javax.inject.@Qualifier - narrows type wiring (JavaEE 6+)

    Dependency Injection is all about decoupling your classes from one another to make them more testable. Autowiring through annotations strongly couples annotated classes to annotations libraries. Admittedly, this is worse for Spring’s @Autowired than for others as they are generally provided by application servers.

    Why is it so bad after all?

    Autowiring is all about simplifying dependency injection, and although it may seem seductive at first, it’s not maintainable in real-life projects.

    Explicit dependency injection is - guess what, explicitly defining a single bean to match the required dependency. Even better, if Java Configuration is used, it is validated at compile time.

    On the opposite side, autowiring is about expressing constraints on all available beans in the Spring context so that one and only one matches the required dependency. In effect, you delegate wiring to Spring, tasking it with finding the right bean for you. This means that by adding beans to your context, you run the risk of providing more than one match, if your previous constraints do not apply to the new beans. The larger the project, the larger the team (or even worse, the number of teams), the larger the context, the higher the risk.

    For example, imagine you crafted a really nice service with a single implementation that you need to inject somewhere. After a while, someone notices your service but creates an alternative implementation for some reason. By just declaring this new bean in the context, along with the former, they will break container initialization! Worse, it may be done in an entirely different part of the application, with seemingly no link to the former. At this point, good luck analyzing the reason for the bug.
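
    A hedged sketch of this failure mode, with hypothetical classes (Garage needs a Car via setter autowiring):

    <bean autowire="byType" class="ch.frankel.blog.autowiring.Garage" />
    <bean id="sedan" class="ch.frankel.blog.autowiring.Sedan" /><!-- implements Car -->
    <!-- added months later, elsewhere in the context: -->
    <bean id="coupe" class="ch.frankel.blog.autowiring.Coupe" /><!-- also implements Car: startup now fails on the ambiguity -->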

    Note that autowiring by name is even worse, since bean names have a probability of collision, even across different type hierarchies.

    Autowiring is bad in Spring, but it is even worse in CDI where every class in the classpath is a candidate for injection. By the way, any CDI guru reading this post care to explain why autowiring is the only way of DI? That would be really enlightening.

    Summary

    In this post, I tried to explain what autowiring is. It can be used all right, but now you should be aware of its cons. IMHO, you should only use it for prototyping or quick-and-dirty proofs of concept, everything that can be discarded after a single use. If really needed, prefer wiring by type over wiring by name, as at least matching doesn’t depend on a String.

    Categories: Java Tags: autowiring, CDI, spring
  • Spring method injection with Java Configuration

    Last week, I described how a Rich Model Object could be used with Spring using Spring’s method injection from an architectural point of view.

    What is missing, however, is how to use method injection with my new preferred configuration method, Java Config. My starting point is the following, using both autowiring (shudder) and method injection.

    public abstract class TopCaller {
    
        @Autowired
        private StuffService stuffService;
    
        public SomeBean newSomeBean() {
            return newSomeBeanBuilder().with(stuffService).build();
        }
    
        public abstract SomeBeanBuilder newSomeBeanBuilder();
    }
    

    Migrating to Java Config requires the following steps:

    1. Updating the caller structure to allow for constructor injection
    2. For method injection, provide an implementation for the abstract method in the configuration class
    3. That’s all…
    import org.springframework.beans.factory.config.BeanDefinition;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.context.annotation.Scope;
    
    @Configuration
    public class JavaConfig {
    
        @Bean
        public TopCaller topCaller() {
            return new TopCaller(stuffService()) {
                @Override
                public SomeBeanBuilder newSomeBeanBuilder() {
                    return someBeanBuilder();
                }
            };
        }
    
        @Bean
        public StuffService stuffService() {
            return new StuffService();
        }
    
        @Bean
        @Scope(BeanDefinition.SCOPE_PROTOTYPE)
        public SomeBeanBuilder someBeanBuilder() {
            return new SomeBeanBuilder();
        }
    }
    
    Categories: Java Tags: javaconfig, method injection, spring
  • Rich Domain Objects and Spring Dependency Injection are compatible

    I’m currently working in an environment where most developers are Object-Oriented fanatics. Given that we develop in Java, I think that is a good thing - save the fanatics part. In particular, I’ve run across a deeply-entrenched meme that states that modeling Rich Domain Objects and using Spring dependency injection at the same time is not possible. Not only is this completely false, it reveals a lack of knowledge of Spring features, one I’ll be trying to correct in this article.

    However, my main point is not about Spring but about whatever paradigm one holds most dear, be it Object-Oriented Programming, Functional Programming, Aspect-Oriented Programming or whatever. Those are only meant to give desired properties to software: unit-testable, readable, whatever… So one should always focus on those properties rather than on the way of getting them. Remember:

    When the wise man points at the moon, the idiot looks at the finger.

    Back to the problem at hand. The basic idea is to have some bean having reference on some service to do some stuff (notice the real-word notions behind this idea…) so you could call:

    someBean.doStuff()
    

    The following is exactly what not to do:

    public class TopCaller {
    
        @Autowired
        private StuffService stuffService;
        public SomeBean newSomeBean() {
            return new SomeBeanBuilder().with(stuffService).build();
        }
    }
    

    This design should be anathema to developers promoting unit-testing, as the new() deeply couples the TopCaller and SomeBeanBuilder classes. There’s no way to stub this dependency in a test; it prevents testing the newSomeBean() method in isolation.

    What you need is to inject SomeBeanBuilder prototypes into the TopCaller singleton. This is method injection, and it is possible within Spring with the help of lookup methods (I’ve already blogged about that some time ago and you should probably have a look at it).

    public abstract class TopCaller {
    
        @Autowired
        private StuffService stuffService;
    
        public SomeBean newSomeBean() {
            return newSomeBeanBuilder().with(stuffService).build();
        }
    
        public abstract SomeBeanBuilder newSomeBeanBuilder();
    }
    

    With the right configuration, Spring will take care of providing a different SomeBeanBuilder each time the newSomeBeanBuilder() method is called. With this new design, we changed the strong coupling between TopCaller and SomeBeanBuilder to a soft coupling, one that can be stubbed in tests and allows for unit-testing.
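
    That “right configuration”, in the XML flavor, could look like the following sketch (class names reused from above, packages assumed):

    <bean id="someBeanBuilder" class="ch.frankel.blog.SomeBeanBuilder" scope="prototype" />
    <bean id="topCaller" class="ch.frankel.blog.TopCaller">
        <!-- Spring generates a subclass whose newSomeBeanBuilder() returns a fresh prototype -->
        <lookup-method name="newSomeBeanBuilder" bean="someBeanBuilder" />
    </bean>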

    In the current design, it seems the only reason for SomeBeanBuilder to exist is to pass the StuffService from the TopCaller to SomeBean instances. There is no need to keep it with method injection.

    There are two different possible improvements:

    1. Given our "newfound" knowledge, inject:
      • method newSomeBean() instead of newSomeBeanBuilder() into TopCaller
      • StuffService directly into SomeBean
    2. Keep StuffService as a TopCaller attribute and pass it every time doStuff() is invoked

    I would favor the second option since I frown upon a Domain Object keeping references to singleton services, unless there are many parameters to pass at every call. As always, there’s not a single best choice, but only contextual choices.

    Also, I would use explicit dependency management instead of some automagic stuff, but that’s another debate.

    I hope this piece proved beyond any doubt that Spring does not prevent a Rich Domain Model, far from it. As a general rule, know about the tools you use and go their way instead of following some path just for the sake of it.

  • Spring 3.2 sweetness

    Even the most extreme Spring opponents have to admit it is all about making developers life easier. Version 3.2 of Spring MVC brings even more sweetness to the table.

    Sweetness #1: No web.xml

    The ability to run a webapp without any web deployment descriptor comes from Servlet 3.0.

    One option would be to annotate your servlet with the @WebServlet annotation to set its mapping and complementary data. But when you get your servlet for free, like Spring’s DispatcherServlet, you’d need to subclass your servlet for no other purpose than adding annotation(s).

    Alternatively, Servlet 3.0 offers a way both to programmatically register servlets in the container and to offer hooks at startup through the ServletContainerInitializer interface. The container will call the onStartup() method of all concrete implementations at webapp startup. The Spring framework leverages this feature to do so for WebApplicationInitializer instances.

    Spring MVC 3.2 provides such an implementation - AbstractDispatcherServletInitializer - to programmatically register the DispatcherServlet. This means that as soon as the spring-webmvc jar is in the WEB-INF/lib folder of the webapp and a concrete initializer subclass is present, you’ve got the DispatcherServlet up and ready.

    This replaces both the servlet and servlet mapping and the context listener declarations in the web.xml.

    Sweetness #2: Easy Java configuration integration

    Java configuration is the way to configure Spring injection explicitly in a typesafe way. I won’t go into the full demonstration of it, as I already wrote about that some time ago.

    Earlier Spring versions provided a way to use Java configuration classes instead of XML files in the web deployment descriptor. Spring 3.2 offers AbstractAnnotationConfigDispatcherServletInitializer, an AbstractContextLoaderInitializer subclass with hooks for Java configuration classes.

    Your own concrete subclass has to implement methods to define servlet mappings, as well as root and web Java configuration classes:

    public class SugarSpringWebInitializer extends AbstractAnnotationConfigDispatcherServletInitializer {
    
        @Override
        protected Class<?>[] getRootConfigClasses() {
            return new Class[] { JavaConfig.class };
        }
    
        @Override
        protected Class<?>[] getServletConfigClasses() {
            return new Class[] { WebConfig.class };
        }
    
        @Override
        protected String[] getServletMappings() {
            return new String[] { "/" };
        }
    }
    

    At this point, you just need to create those configuration classes.

    Sweetness #3: integration testing at the mapping level

    Your code should be unit-tested, that is tested in isolation to ensure that each method is bug-free, and integration-tested to ensure that collaboration between classes yield expected results.

    Before v. 3.2, the Spring test framework let you assemble your configuration classes / files easily enough. The problem lay in the way you would call entry-points - the controllers. You could call methods of those controllers, but not the mappings, leaving those untested.

    With v. 3.2, Spring Test brings a whole MVC testing framework whose entry points are mappings. This way, instead of testing method x() of controller C, you test a request to /z, letting Spring MVC handle it so we can check for expected results.

    The framework also provides expectations for view returning, forwarding, redirecting and model attribute setting, all with the help of a specific DSL:

    public class SayHelloControllerIT extends AbstractTestNGSpringContextTests {
    
        private MockMvc mockMvc;
    
        @BeforeMethod
        public void setUp() {
            mockMvc = webAppContextSetup((WebApplicationContext) applicationContext).build();
        }
    
        @Test(dataProvider = "pathParameterAndExpectedModelValue")
        public void accessingSayhelloWithSubpathShouldForwardToSayHelloJspWithModelFilled(String path, String value) throws Exception {
            mockMvc.perform(get(path)).andExpect(view().name("sayHello")).andExpect(model().attribute("name", value));
        }
    }
    

    The project for this article can be downloaded in Eclipse/Maven format.

    To go further:

    • The entire set of new features is available here
    Categories: JavaEE Tags: integration testing, spring, Spring MVC
  • Modularity in Spring configuration

    The following goes into more detail about what I already ranted about in one of my previous posts.

    In the legacy Spring applications I have to develop new features in, I regularly stumble upon a big hindrance that slows down my integration-testing effort. This hindrance - and I’ll go as far as naming it an anti-pattern - is putting every bean in the same configuration file (either in XML or in Java).

    The right approach is to define at least the following beans in a dedicated configuration component:

    • Datasource(s)
    • Mail server
    • External web services
    • Every application dependency that doesn't fall into one of the previous categories

    Depending on the particular context, we should provide alternate beans in test configuration fragments. Those alternatives can be either mocked beans or test doubles. In the first case, I would suggest using the Springockito framework, based upon Mockito; in the second case, tools depend on the specific resource: an in-memory database (such as HSQLDB, Derby or H2*) for the datasource and GreenMail for the mail server.

    It is up to the application and each test’s responsibility to assemble the different required configuration fragments to initialize the full-fledged Spring application context. Note that coupling fragments together is also not a good idea.

    • My personal preference goes to H2 these days
    Categories: Java Tags: spring
  • Devoxx France 2013 - Day 3

    Classpath isn't dead... yet by Alexis Hassler

    Classpath is dead! Mark Reinold

    What is the classpath anyway? In any code, there are basically two kinds of classes: those coming from the JRE, and those that do not (either because they are your own custom classes or because they come from 3rd-party libraries). The classpath can be set either with the -cp argument when launching a simple Java class or, for a JAR, through its embedded MANIFEST.MF.

    A classloader is a class itself. It can load resources and classes. Each class knows its classloader, that is, which classloader has loaded the class inside the JVM (e.g. sun.misc.Launcher$AppClassLoader). The JRE classes’ classloader is not a Java type, so for those the returned classloader is null. Classloaders are organized into a parent-child hierarchy, with delegation from bottom to top if a class is not found in the current classloader.

    The BootstrapClassLoader can be respectively replaced, appended or prepended to with -Xbootclasspath, -Xbootclasspath/a and -Xbootclasspath/p. This is a great way to override standard classes (such as String or Integer); it is also a major security hole. Use at your own risk! The endorsed dir is a way to override some APIs with a more recent version. This is the case with JAXB for example.

    ClassCastException usually comes from the same class being loaded by two different classloaders (or more…). This is because classes are not identified inside the JVM by the class only, but by the tuple {classloader, class}.

    Classloaders can be developed, and then set in the provided hierarchy. This is generally done in application servers or tools such as JRebel. In Tomcat, each webapp has a classloader, and there’s a parent unified classloader for Tomcat (that has the System classloader as its parent). Nothing prevents you from developing your own: for example, consider a MavenRepositoryClassLoader that loads JARs from your local Maven repository. You just have to extend URLClassLoader.
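
    A possible sketch of such a classloader (my own illustration, not from the talk; the repository layout is the standard Maven convention):

    import java.net.MalformedURLException;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class MavenRepositoryClassLoader extends URLClassLoader {

        public MavenRepositoryClassLoader(ClassLoader parent) {
            super(new URL[0], parent);
        }

        // Resolves groupId:artifactId:version to a JAR in ~/.m2/repository and adds it
        public void addArtifact(String groupId, String artifactId, String version) throws MalformedURLException {
            Path repository = Paths.get(System.getProperty("user.home"), ".m2", "repository");
            Path jar = repository.resolve(groupId.replace('.', '/'))
                                 .resolve(artifactId)
                                 .resolve(version)
                                 .resolve(artifactId + "-" + version + ".jar");
            addURL(jar.toUri().toURL());
        }
    }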

    JAR hell comes from dependency management, or more precisely the lack thereof. Since dependencies are tree-like at development time but completely flat at runtime, i.e. on the classpath, conflicts may occur if no care is taken to eliminate them beforehand.

    One of the problems is JAR visibility: you either have all classes available if the JAR is present, or none if it is not. The granularity is at the JAR level, whereas it would be better to have finer-grained visibility. Several solutions are available:

    • OSGi has an answer to these problems since 1999. With OSGi, JARs become bundles, with additional meta-data set in the JAR manifest. These meta-data describe visibility per package. For a pure dependency management point of view, OSGi comes with additional features (services and lifecycle) that seem overkill [I personally do not agree].
    • Project Jigsaw also provides this modularity (as well as JRE classes modularity) in the form of modules. Unfortunately, it has been delayed since Java 7, and will not be included in Java 8. Better forget it at the moment.
    • JBoss Module is a JBoss AS 7 subproject, inspired by Jigsaw and based on JBoss OSGi. It is already available and comes with much lower complexity than OSGi. Configuration is made through a module.xml description file. This system is included in JBoss AS 7. On the negative side, you can use Module either with JBoss or on its own, which prevents us from using it in Tomcat. An ongoing GitHub proof-of-concept achieves it though: it embeds the JAR module in the deployed webapp and overrides Tomcat's webapp classloader. Several problems still exist:
      • Artefacts are not modules
      • Lack of documentation

    Animate your HTML5 pages with CSS3, SVG, Canvas & WebGL by Martin Gorner

    Within the HTML5 specification alone, there are 4 ways to add fun animations to your pages.

    CSS 3
    CSS 3 transitions come through the transition property. They are triggered through user events. Animations are achieved through animation properties. Notice the plural: you define keyframes and the browser computes the intermediate ones. 2D transformations - the transform property - include rotate, scale, skew, translate and matrix. As an advice, timing can be overridden, but the default one is quite good. CSS 3 also provides 3D transformations. Those are the same as above, but with either X, Y or Z appended to the value name to specify the axis. The biggest flaw of CSS 3 is that it lacks drawing features.
    SVG + SMIL
    SVG not only provides vectorial drawing features but also out-of-the-box animation features. SVG is described in XML: SVG animations are much more powerful than CSS 3 ones but also more complex. You'd better use a tool to generate it, such as Inkscape. There are different ways to animate SVG, all through sub-tags: animate, animateMotion and animateTransform. Whereas CSS 3 timing is acceptable out-of-the-box, the default in SVG is linear (which is not pleasant to the eye). SVG offers timing configuration through the keySplines attribute of the previous tags. Both CSS 3 and SVG have a big limitation: animations are set in stone and cannot respond to external events, such as user inputs. When those are a requirement, the following two standards apply.
    Canvas + JavaScript
    From this point on, programmatic (as opposed to descriptive) configuration is available. Beware that JavaScript animations come at a cost: on mobile devices, they will drain the battery. As such, know about the methods that let the browser stop animations when the page is not displayed.
    WebGL + THREE.js
    WebGL lets you use an OpenGL-like API (read 3D), but it is very low-level. THREE.js comes with a full-blown high-level API. Better yet, you can import SketchUp mesh models into THREE.js. In all cases, do not forget to use the same optimization as in 2D canvas to stop animations when the canvas is not visible.

    Tip: in order not to care about vendor prefixes, prefix.js lets us preserve the original CSS and enhances it with prefixes at runtime. Otherwise, use LESS / SASS. Slides are readily available online with associated labs.

    [I remember using the same 3D techniques 15 years ago when I learnt raytracing. That’s awesome!]

    The Spring update by Josh Long

    [Talk is shown in code snippets, rendering full-blown notes mostly moot. It is dedicated to new features of the latest Spring platform versions]

    Version Feature
    3.1
    • JavaConfig equivalents of XML
    • Profiles
    • Cache abstraction, with CacheManager and Cache
    • Newer backend cache adapters (Hazelcast, memcached, GemFire, etc.) in addition to EhCache
    • Servlet 3.0 support
    • Spring framework code available on GitHub
    3.2
    • Gradle-based builds [Because of incompatible versions support. IMHO, this is one of the few use-case for using Gradle that I can agree with]
    • Async MVC processing through Callable (threads are managed by Spring), DeferredResult and AsyncTask<?>
    • Content negotiation strategies
    • MVC Test framework server
    4
    • Groovy-configuration support. Note that all available configuration ways (XML, JavaConfig, etc.) and their combinations have no impact at runtime
    • Java 8 closures support
    • JSR 310 (Date and Time API) support
    • Removal of the need to set @PathVariable's value, using the built-in JVM mechanism to get the parameter name
    • Various support for Java EE 7
    • Backward compatibility will still include Java 5
    • Annotation-based JMS endpoints
    • WebSocket aka "server push" support
    • Web resources caching

    Bean validation 1.1: we're not in Care Bears land anymore by Emmanuel Bernard

    All that is written here is not set in stone; it has to be approved first. Bean Validation comes bundled with Java EE 6+, but it can be used standalone.

    Before Bean Validation, validations were executed at each different layer (client, application layers, database). This led to duplications as well as inconsistencies. The Bean Validation motto is something along the lines of:

    Constrain once, run anywhere

    1.0 has been released with Java EE 6. It is fully integrated with other stacks including JPA, JSF (& GWT, Wicket, Tapestry) and CDI (& Spring).

    Declaring a constraint is as simple as adding a specific validation annotation. Validation can be cascaded, not only on the bean itself but on embedded beans. Also, a validation may span more than one property, to check that two different properties are consistent with one another. Validation can be run on the whole set of constraints, but also on defined subsets of it, called groups. Groups are created through interfaces.

    Many annotations come out-of-the-box, but you can also define your own. This is achieved by putting the @Constraint annotation on a custom annotation; it references the list of validators to use when validating. Those validators must implement the ConstraintValidator interface.
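
    A minimal sketch of a custom constraint, assuming the Bean Validation 1.0 API (annotation and validator names are invented for the example):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;
    import javax.validation.Constraint;
    import javax.validation.ConstraintValidator;
    import javax.validation.ConstraintValidatorContext;
    import javax.validation.Payload;

    @Target({ ElementType.FIELD, ElementType.METHOD })
    @Retention(RetentionPolicy.RUNTIME)
    @Constraint(validatedBy = FrenchZipCodeValidator.class)
    public @interface FrenchZipCode {
        String message() default "Invalid French zip code";
        Class<?>[] groups() default {};
        Class<? extends Payload>[] payload() default {};
    }

    class FrenchZipCodeValidator implements ConstraintValidator<FrenchZipCode, String> {

        @Override
        public void initialize(FrenchZipCode annotation) { }

        @Override
        public boolean isValid(String value, ConstraintValidatorContext context) {
            // null is considered valid; combine with @NotNull if needed
            return value == null || value.matches("\\d{5}");
        }
    }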

    1.1 will be included in Java EE 7. The most important thing to remember is that it is 100% open. Everything is available on GitHub, go fork it.

    Now, containers are in complete control of Bean Validation components creation, so that they are natively compatible with CDI. Also, other DI containers, such as Spring, may plug in their own SPI implementation.

    The greatest feature of 1.1 is that not only properties can be validated, but also method parameters and method return values. Constructors being specialized methods, it also applies to them. It is achieved internally with interceptors. However, this requires an interception stack - either CDI, Spring or any AOP - and comes with the associated limitations, such as proxies. This enables declarative Contract-Oriented Programming, with its pre- and post-conditions.

    Conclusion

    Devoxx France 2013 has been a huge success, thanks to the organization team. Devoxx is not only tech talks, it is also a time to meet new people, exchange ideas and see old friends.

    See you next year, or at Devoxx 2013!

    Thanks to my employer - hybris, who helped me attend this great event!

    Categories: Event Tags: devoxx, html5, jboss, spring
  • Consider replacing Spring XML configuration with JavaConfig

    Spring articles are becoming a trend on this blog, I should probably apply for a SpringSource position :-)

    Colleagues of mine sometimes curse me for my stubbornness in using XML configuration for Spring. Yes, it seems so 2000’s but XML has definite advantages:

    1. Configuration is centralized: it's not scattered among all the different components, so you can have a nice overview of beans and their wirings in a single place
    2. If you need to split your files, no problem, Spring lets you do that. It then reassembles them at runtime through internal <import> tags or external context files aggregation
    3. Only XML configuration allows for explicit wiring - as opposed to autowiring. Sometimes, the latter is a bit too magical for my own taste. Its apparent simplicity hides real complexity: not only do we need to switch between by-type and by-name autowiring, but more importantly, the strategy for choosing the relevant bean among all eligible ones escapes all but the most seasoned Spring developers. Profiles seem to make this easier, but they are relatively new and known to few
    4. Last but not least, XML is completely orthogonal to the Java file: there's no coupling between the 2, so that the class can be used in more than one context with different configurations

    The sole problem with XML is that you have to wait until runtime to discover typos in a bean or some other stupid boo-boo. On the other side, using the Spring IDE plugin (or the integrated Spring Tool Suite) can definitely help you there.

    An interesting alternative to both XML and direct annotations on bean classes is JavaConfig, a former separate project embedded into Spring itself since v3.0. It merges XML’s decoupling advantage with Java compile-time checks. JavaConfig can be seen as the XML file’s equivalent, only written in Java. The whole documentation is of course available online, but this article will just help you kickstart using JavaConfig. As an example, let us migrate the following XML file to JavaConfig:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.2.xsd">
        <bean id="button" class="javax.swing.JButton">
            <constructor-arg value="Hello World" />
        </bean>
        <bean id="anotherButton" class="javax.swing.JButton">
            <property name="icon" ref="icon" />
        </bean>
        <bean id="icon" class="javax.swing.ImageIcon">
            <constructor-arg>
                <bean class="java.net.URL">
                    <constructor-arg value="http://morevaadin.com/assets/images/learning_vaadin_cover.png" />
                </bean>
            </constructor-arg>
        </bean>
    </beans>
    

    The equivalent file is the following:

    import java.net.MalformedURLException;
    import java.net.URL;
    import javax.swing.Icon;
    import javax.swing.ImageIcon;
    import javax.swing.JButton;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    
    @Configuration
    public class MigratedConfiguration {
    
        @Bean
        public JButton button() {
            return new JButton("Hello World");
        }
    
        @Bean
        public JButton anotherButton(Icon icon) {
            return new JButton(icon);
        }
    
        @Bean
        public Icon icon() throws MalformedURLException {
            URL url = new URL("http://morevaadin.com/assets/images/learning_vaadin_cover.png");
            return new ImageIcon(url);
        }
    }
    

    Usage is simpler than simple: annotate the main class with @Configuration and individual producer methods with @Bean. The only drawback, IMHO, is that it uses autowiring (the icon parameter). Apart from that, it just works.
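
    For the record, that parameter autowiring could be avoided by calling the producer method directly; Spring proxies @Configuration classes, so the call below still returns the single container-managed icon (a sketch, same classes as above):

    @Bean
    public JButton anotherButton() throws MalformedURLException {
        // icon() is intercepted by Spring and returns the managed singleton
        return new JButton(icon());
    }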

    Note that in a Web environment, the web deployment descriptor should be updated with the following lines:

    <context-param>
        <param-name>contextClass</param-name>
        <param-value>org.springframework.web.context.support.AnnotationConfigWebApplicationContext</param-value>
    </context-param>
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>com.packtpub.learnvaadin.springintegration.SpringIntegrationConfiguration</param-value>
    </context-param>
    

    Sources for this article are available in Maven/Eclipse format here.

    To go further:

    • Java-based container configuration documentation
    • AnnotationConfigWebApplicationContext JavaDoc
    • @ContextConfiguration JavaDoc (to configure Spring Test to use JavaConfig)
    Categories: Java Tags: javaconfig, spring
  • Spring beans overwriting strategy

    I find myself working more and more with Spring these days, and what I find raises questions. This week, my thoughts turn toward bean overwriting, that is, registering more than one bean with the same name.

    In the case of a simple project, there’s no need for this; but when building a plugin architecture around a core, it may be a solution. Here are some facts I uncovered and verified regarding bean overwriting.

    Single bean id per file
    The id attribute in the Spring bean file is of type ID, meaning you can have only a single bean with a specific ID in a specific Spring beans definition file.
    Bean overwriting depends on context fragment loading order
    As opposed to classpath loading, where the first class takes priority over the others further down the classpath, it's the last bean of the same name that is finally used. That's why I called it overwriting. Reversing the fragment loading order proves that.
    Fragment assembling methods define an order
    Fragments can be assembled from <import> statements in the Spring beans definition file or through an external component (e.g. the Spring context listener in a web app or test classes). All define a deterministic order.
    As a side note, though I formerly used import statements in my projects (in part to take advantage of IDE support), experience taught me it can bite you in the back when reusing modules: I'm in favor of assembling through external components now.
    Names
    Spring lets you define names in addition to ids (which is a cheap way of using characters that are illegal in IDs). Those names also overwrite ids.
    Aliases
    Spring lets you define aliases of existing beans: those aliases also overwrite ids.
    Scope overwriting
    This one is really mean: by overwriting a bean, you also overwrite scope. So, if the original bean had a specified scope and you do not specify the same, tough luck: you just probably changed the application behavior.

    Not only are these facts perhaps not known by your development team, but the last one is the killer reason not to overwrite beans. It’s too easy to forget scoping the overwritten bean.

    In order to address plugins architecture, and given you do not want to walk the OSGi path, I would suggest what I consider a KISS (yet elegant) solution.

    Let us use simple Java properties in conjunction with PropertyPlaceholderConfigurer. The main Spring beans definition file should define placeholders for beans that can be overwritten and read two defined properties files: one wrapped inside the core JAR and the other on a predefined path (eventually set by a JVM property).

    Both property files have the same structure: fully-qualified interface names as keys and fully-qualified implementation names as values. This way, you define default implementations in the internal property file and let users overwrite them in the external file (if necessary).
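
    Here’s a hedged sketch of the idea - file names, paths and the service interface are invented for the example:

    <!-- core-defaults.properties (inside the core JAR) would contain e.g.:
         ch.frankel.blog.MyService=ch.frankel.blog.DefaultMyService -->
    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <list>
                <value>classpath:core-defaults.properties</value>
                <!-- optional external overrides; later files win -->
                <value>file:${config.path}/overrides.properties</value>
            </list>
        </property>
        <property name="ignoreResourceNotFound" value="true" />
    </bean>
    <!-- the implementation class is resolved from the property files -->
    <bean id="myService" class="${ch.frankel.blog.MyService}" />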

    As an added advantage, it shields users from Spring, so they are not tied to the framework.

    Sources for this article can be found in Maven/Eclipse format here.

    Categories: Java Tags: spring
  • The case for Spring inner beans

    When code reviewing or pair programming, I’m always amazed by the following discrepancy. On one hand, 99% of developers conscientiously apply encapsulation and limit accessibility and variable scope to the minimum possible. On the other hand, nobody cares one bit about Spring beans and such beans are always set at top-level, which makes them accessible from every place where you can get a handle on the Spring context.

    For example, this a typical Spring beans configuration file:

    <bean id="one" class="ch.frankel.blog.spring.One" />
    <bean id="two" class="ch.frankel.blog.spring.Two" />
    
    <bean id="three" class="ch.frankel.blog.spring.Three">
        <property name="one" ref="one" />
        <property name="two" ref="two" />
    </bean>
    
    <bean id="four" class="ch.frankel.blog.spring.Four" />
    
    <bean id="five" class="ch.frankel.blog.spring.Five">
        <property name="three" ref="three" />
        <property name="four" ref="four" />
    </bean>
    

    If beans one, two, three and four are only used by bean five, they shouldn’t be accessible from anywhere else and should be defined as inner beans.

    <bean id="five" class="ch.frankel.blog.spring.Five">
        <property name="three">
            <bean class="ch.frankel.blog.spring.Three">
                <property name="one">
                    <bean class="ch.frankel.blog.spring.One" />
                </property>
                <property name="two">
                    <bean class="ch.frankel.blog.spring.Two" />
                </property>
            </bean>
         </property>
         <property name="four">
             <bean class="ch.frankel.blog.spring.Four" />
         </property>
    </bean>
    

    From this point on, beans one, two, three and four cannot be accessed in any way outside of bean five; in effect, they are not visible.

    There are a couple of points I’d like to make:

    1. By using inner beans, those beans are implicitly made anonymous but also scoped prototype, which doesn't mean squat since they won't be reused anywhere else.
    2. With annotations-based configuration, this is something that is done under the covers when you return a new instance from the body of the producer method (see the sketch after this list)
    3. I acknowledge it renders the Spring beans definition file harder to read, but with the graphical representation feature brought by Spring IDE, this point is moot
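
    A hedged sketch of that annotations-based equivalent, assuming matching setters on the example classes:

    @Configuration
    public class FiveConfig {

        @Bean
        public Five five() {
            // One, Two, Three and Four are plain instances, not beans:
            // like inner beans, they are invisible outside of five
            Three three = new Three();
            three.setOne(new One());
            three.setTwo(new Two());
            Five five = new Five();
            five.setThree(three);
            five.setFour(new Four());
            return five;
        }
    }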

    In conclusion, I would like every developer to consider not only technologies, but also concepts. When you understand variable scoping in programming, you should not only apply it to code, but also wherever it is relevant.

    Categories: Java Tags: spring
  • Changing default Spring bean scope

    By default, Spring beans are scoped singleton, meaning there’s only one instance for the whole application context. For most applications, this is a sensible default; sometimes though, not so much. This may be the case when using a custom scope, which is the case on the product I’m currently working on. I’m not at liberty to discuss the details further: suffice to say that it is very painful to configure each and every needed bean with this custom scope.

    Since being lazy in a smart way is at the core of developer work, I decided to search for a way to ease my burden and found it in the BeanFactoryPostProcessor class. It only has a single method - postProcessBeanFactory(), but it gives access to the bean factory itself (which is at the root of the various application context classes).

    From this point on, the code is trivial even with no prior experience of the API:

    public class PrototypeScopedBeanFactoryPostProcessor implements BeanFactoryPostProcessor {
    
        @Override
        public void postProcessBeanFactory(ConfigurableListableBeanFactory factory) throws BeansException {
            for (String beanName : factory.getBeanDefinitionNames()) {
                BeanDefinition beanDef = factory.getBeanDefinition(beanName);
                String explicitScope = beanDef.getScope();
                if ("".equals(explicitScope)) {
                    beanDef.setScope("prototype");
                }
            }
        }
    }
    

    The final touch is to register the post-processor in the context. This is achieved by treating it as a simple anonymous bean:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <bean class="ch.frankel.blog.spring.scope.PrototypeScopedBeanFactoryPostProcessor" />
    </beans>
    

    Now, every bean whose scope is not explicitly set will be scoped prototype.

    Sources for this article can be found attached in Eclipse/Maven format.

    Categories: Java Tags: spring
  • DRY your Spring Beans configuration file

    It’s always when discussing with people that some things you (or they) hold as self-evident turn out to be closely-held secrets. That’s what happened this week when I tentatively showed a trick during a training session that started a debate.

    Let’s take an example, though the idea behind it can of course be applied to many more use-cases: imagine you developed many DAO classes inheriting from the same abstract DAO Spring provides you with (JPA, Hibernate, plain JDBC, you name it). All those classes need to be set either a datasource (or a JPA EntityManager, a Spring Session, etc.). At your first attempt, you would create the Spring beans definition file as such:

    <bean id="dataSource" ... /><!-- Don't care how we obtain dataSource -->
    <bean id="playerDao" class="ch.frankel.blog.app.persistence.dao.PlayerDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="matchDao" class="ch.frankel.blog.app.persistence.dao.MatchDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="stadiumDao" class="ch.frankel.blog.app.persistence.dao.StadiumDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="teamDao" class="ch.frankel.blog.app.persistence.dao.TeamDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="competitionDao" class="ch.frankel.blog.app.persistence.dao.CompetitionDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="betDao" class="ch.frankel.blog.app.persistence.dao.BetDao">
        <property name="dataSource" ref="dataSource" />
    </bean>
    

    Notice a pattern here? Not only is it completely opposed to the DRY principle, it is also a source of errors as well as a drag on future maintainability. Most importantly, I’m lazy and I do not like to type characters just for the fun of it.

    Spring to the rescue. Spring provides a way to make beans abstract. This is not to be confused with the abstract keyword of Java. Though Spring abstract beans are not instantiated, children of these abstract beans inherit the properties of their parent abstract bean. This implies you do need a common Java parent class (though it doesn’t need to be abstract). In essence, you can shorten your Spring beans definition file like so:

    <bean id="dataSource" ... /><!-- Don't care how we obtain dataSource -->
    <bean id="abstractDao" class="org.springframework.jdbc.core.support.JdbcDaoSupport" abstract="true">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="playerDao" class="ch.frankel.blog.app.persistence.dao.PlayerDao" parent="abstractDao" />
    <bean id="matchDao" class="ch.frankel.blog.app.persistence.dao.MatchDao" parent="abstractDao" />
    <bean id="stadiumDao" class="ch.frankel.blog.app.persistence.dao.StadiumDao" parent="abstractDao" />
    <bean id="teamDao" class="ch.frankel.blog.app.persistence.dao.TeamDao" parent="abstractDao" />
    <bean id="competitionDao" class="ch.frankel.blog.app.persistence.dao.CompetitionDao" parent="abstractDao" />
    <bean id="betDao" class="ch.frankel.blog.app.persistence.dao.BetDao" parent="abstractDao" />
    

    The instruction to inject the data source is configured only once, for the abstractDao. Yet, Spring will apply it to every DAO configured with it as its parent. DRY from the trenches…

    Note: if you use two different data sources, you’ll just have to define two abstract DAOs and set the correct one as the parent of your concrete DAOs.
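
    A minimal sketch of that note (the reporting datasource and ReportDao are hypothetical, for illustration only):

    <bean id="dataSource" ... />
    <bean id="reportingDataSource" ... />
    <bean id="abstractDao" class="org.springframework.jdbc.core.support.JdbcDaoSupport" abstract="true">
        <property name="dataSource" ref="dataSource" />
    </bean>
    <bean id="abstractReportingDao" class="org.springframework.jdbc.core.support.JdbcDaoSupport" abstract="true">
        <property name="dataSource" ref="reportingDataSource" />
    </bean>
    <bean id="playerDao" class="ch.frankel.blog.app.persistence.dao.PlayerDao" parent="abstractDao" />
    <bean id="reportDao" class="ch.frankel.blog.app.persistence.dao.ReportDao" parent="abstractReportingDao" />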

    Categories: Java Tags: dry, spring
  • Web Services: JAX-WS vs Spring

    In my endless search for the best way to develop applications, I’ve recently been interested in web services in general and contract-first in particular. Web services are coined contract-first when the WSDL is designed first and classes are generated from it. They are acknowledged to be the most interoperable of web services, since the WSDL is agnostic of the underlying technology.

    In the past, I’ve been using Axis2 and then CXF but now, JavaEE provides us with the power of JAX-WS (which is aimed at SOAP, JAX-RS being aimed at REST). There’s also a Spring Web Services sub-project.

    My first goal was to check how easy it was to inject through Spring in both technologies but during the course of my development, I came across other comparison areas.

    Overview

    With Spring Web Services, publishing a new service is a 3-step process:

    1. Add the Spring MessageDispatcherServlet in the web deployment descriptor.
    <web-app version="2.5" ...>
        <context-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:/applicationContext.xml</param-value>
        </context-param>
        <servlet>
            <servlet-name>spring-ws</servlet-name>
            <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
            <init-param>
                <param-name>transformWsdlLocations</param-name>
                <param-value>true</param-value>
            </init-param>
            <init-param>
                <param-name>contextConfigLocation</param-name>
                <param-value>classpath:/spring-ws-servlet.xml</param-value>
            </init-param>
        </servlet>
        <servlet-mapping>
            <servlet-name>spring-ws</servlet-name>
            <url-pattern>/spring/*</url-pattern>
        </servlet-mapping>
        <listener>
            <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
        </listener>
    </web-app>
    
    2. Create the web service class and annotate it with @Endpoint. Annotate the relevant service methods with @PayloadRoot, bind the method parameters with @RequestPayload and the return value with @ResponsePayload.
    @Endpoint
    public class FindPersonServiceEndpoint {
    
        private final PersonService personService;
    
        public FindPersonServiceEndpoint(PersonService personService) {
            this.personService = personService;
        }
    
        @PayloadRoot(localPart = "findPersonRequestType", namespace = "http://blog.frankel.ch/ws-inject")
        @ResponsePayload
        public FindPersonResponseType findPerson(@RequestPayload FindPersonRequestType parameters) {
            return new FindPersonDelegate().findPerson(personService, parameters);
        }
    }
    
    3. Configure the web service class as a Spring bean in the relevant Spring beans definition file.
    <beans ...>
        <sws:annotation-driven />
        <bean class="ch.frankel.blog.wsinject.impl.FindPersonServiceEndpoint">
            <constructor-arg>
                <ref bean="personService" />
            </constructor-arg>
        </bean>
    </beans>
    

    For JAX-WS (Metro implementation), the process is very similar:

    1. Add the WSServlet in the web deployment descriptor:
    <web-app version="2.5" ...>
        <servlet>
            <servlet-name>jaxws-servlet</servlet-name>
            <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
        </servlet>
        <servlet-mapping>
            <servlet-name>jaxws-servlet</servlet-name>
            <url-pattern>/jax/*</url-pattern>
        </servlet-mapping>
        <listener>
            <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>
        </listener>
    </web-app>
    
    2. Create the web service class:
    @WebService(endpointInterface = "ch.frankel.blog.wsinject.jaxws.FindPersonPortType")
    
    public class FindPersonSoapImpl extends SpringBeanAutowiringSupport implements FindPersonPortType {
    
        @Autowired
        private PersonService personService;
    
        @Override
        public FindPersonResponseType findPerson(FindPersonRequestType parameters) {
            return new FindPersonDelegate().findPerson(personService, parameters);
        }
    }
    
    3. Finally, we also have to configure the service, this time in the standard Metro configuration file (sun-jaxws.xml):
    <endpoints version="2.0" ...>
        <endpoint name="FindPerson"
            implementation="ch.frankel.blog.wsinject.impl.FindPersonSoapImpl"
            url-pattern="/jax/findPerson" />
    </endpoints>
    

    Creating a web service in Spring or JAX-WS requires the same number of steps of equal complexity.

    Code generation

    In both cases, we need to generate Java classes from the WSDL (Web Services Description Language) file. This is completely independent from the chosen technology.

    However, whereas JAX-WS uses all generated Java classes, Spring WS uses only those that map to WSDL types and elements: wiring the WS call to the correct endpoint is achieved by mapping the request and response types.

    The web service itself

    Creating the web service in JAX-WS is just a matter of implementing the port type interface, which contains the service method.

    In Spring WS, the service class has to be annotated with @Endpoint to be recognized as a service class.

    URL configuration

    In JAX-WS, the sun-jaxws.xml file syntax lets us configure very finely how each URL is mapped to a particular web service.

    In Spring WS, no such configuration is available.

    Since I’d rather have an overview of the different URLs, my preferences go toward JAX-WS.

    Spring dependency injection

    Injecting a bean in a Spring WS endpoint is very easy since the service is already a Spring bean, thanks to the <sws:annotation-driven /> part.

    On the contrary, injecting a Spring bean requires our JAX-WS implementation to inherit from SpringBeanAutowiringSupport, which prevents us from having our own class hierarchy. It also forbids us from using explicit XML wiring.

    It’s easier to get dependency injection with Spring WS (but that was expected).

    Exposing the WSDL

    Both JAX-WS and Spring WS are able to expose a WSDL. In order to do so, JAX-WS uses the generated classes and as such, the exposed WSDL is the same as the one we designed in the first place.

    On the contrary, Spring WS provides us with two options, both sketched below:

    • either use the static WSDL in which it replaces the domain and port in the <soap:address> section
    • or generate a dynamic WSDL from XSD files, in which case the designed one and the generated one aren’t the same
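
    As an illustration, here’s what both options could look like in the Spring WS configuration. This is only a sketch: the bean names, the WSDL location and the reuse of the person schema bean are assumptions.

    <!-- Option 1: serve the designed WSDL as-is (the bean id determines the exposed name, e.g. /findPerson.wsdl) -->
    <bean id="findPerson" class="org.springframework.ws.wsdl.wsdl11.SimpleWsdl11Definition">
        <property name="wsdl" value="classpath:/wsdl/findPerson.wsdl" />
    </bean>

    <!-- Option 2: generate the WSDL dynamically from the XSD -->
    <bean id="findPersonDynamic" class="org.springframework.ws.wsdl.wsdl11.DefaultWsdl11Definition">
        <property name="portTypeName" value="FindPersonPortType" />
        <property name="locationUri" value="/spring/" />
        <property name="schema" ref="person" />
    </bean>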

    Integration testing

    Spring WS has one feature that JAX-WS lacks: integration testing. A test class configured with Spring beans definition file(s) can be created to assert output messages against known inputs. Here’s an example of such a test, based on both Spring Test and TestNG:

    @ContextConfiguration(locations = { "classpath:/spring-ws-servlet.xml", "classpath:/applicationContext.xml" })
    public class FindPersonServiceEndpointIT extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private ApplicationContext applicationContext;
    
        private MockWebServiceClient client;
    
        @BeforeMethod
        protected void setUp() {
            client = MockWebServiceClient.createClient(applicationContext);
        }
    
        @Test
        public void findRequestPayloadShouldBeSameAsExpected() throws DatatypeConfigurationException {
            int id = 5;
            String request = "<a:findPersonRequestType xmlns:a='http://blog.frankel.ch/ws-inject'><id>"
                + id + "</id></a:findPersonRequestType>";
            GregorianCalendar calendar = new GregorianCalendar();
            XMLGregorianCalendar xmlCalendar = DatatypeFactory.newInstance().newXMLGregorianCalendar(calendar);
            xmlCalendar.setHour(FIELD_UNDEFINED);
            xmlCalendar.setMinute(FIELD_UNDEFINED);
            xmlCalendar.setSecond(FIELD_UNDEFINED);
            xmlCalendar.setMillisecond(FIELD_UNDEFINED);
            String expectedResponse = "<ns3:findPersonResponseType xmlns:ns3='http://blog.frankel.ch/ws-inject'><person><id>"
                + id
                + "</id><firstName>John</firstName><lastName>Doe</lastName><birthdate>"
                + xmlCalendar.toXMLFormat()
                + "</birthdate></person></ns3:findPersonResponseType>";
            Source requestPayload = new StringSource(request);
            Source expectedResponsePayload = new StringSource(expectedResponse);
            client.sendRequest(withPayload(requestPayload)).andExpect(payload(expectedResponsePayload));
        }
    }
    

    Note that I encountered problems regarding XML prefixes, since not only is the namespace checked, but also the prefix name itself (which is a terrible idea).

    Included and imported schema resolution

    On the one hand, JAX-WS inherently resolves included/imported schemas without a hitch.

    On the other hand, we need to add specific beans to the Spring context in order to do the same with Spring WS:

    <!-- Let us reference XML schemas -->
    <bean class="org.springframework.ws.transport.http.XsdSchemaHandlerAdapter" />
    <!-- Let us resolve person.xsd reference in the WSDL -->
    <bean id="person" class="org.springframework.xml.xsd.SimpleXsdSchema">
        <property name="xsd" value="classpath:/wsdl/person.xsd" />
    </bean>
    

    Miscellaneous

    JAX-WS provides an overview of all published services at the root of the JAX-WS servlet mapping:

    Address Information
    Service name: {http://impl.wsinject.blog.frankel.ch/}FindPersonSoapImplService
    Address: http://localhost:8080/wsinject/jax/findPerson
    Port name: {http://impl.wsinject.blog.frankel.ch/}FindPersonSoapImplPort
    WSDL: http://localhost:8080/wsinject/jax/findPerson?wsdl
    Implementation class: ch.frankel.blog.wsinject.impl.FindPersonSoapImpl

    Conclusion

    Considering contract-first web services, my limited experience has driven me to choose JAX-WS over Spring WS: the testing benefits do not outweigh the ease of use of the standard. I must admit I was a little surprised by these results, since most Spring components are easier to use and configure than their standard counterparts, but the results are here.

    You can find the sources for this article here.

    Note: the JAX-WS version used is 2.2 so the Maven POM is rather convoluted to override Java 6 native JAX-WS 2.1 classes.

    Categories: JavaEE Tags: jax-ws, spring, web services
  • Lessons learned from integrating jBPM 4 with Spring

    When I was tasked with integrating a process engine into one of my projects, I quickly decided in favor of Activiti. Activiti is the next version of jBPM 4, is compatible with BPMN 2.0, is well documented and has an out-of-the-box module to integrate with Spring. Unfortunately, in a cruel stroke of fate, I was overruled by my hierarchy (because of some petty reason I dare not write here) and I had to use jBPM. This article tries to list all the lessons I learned in this rather epic journey.

    Lesson 1: jBPM documentation is not enough

    Although jBPM 4 has plenty of available documentation, when you’re faced with the task of starting from scratch, it’s not enough, especially when compared to Activiti’s own documentation. For example, there’s no hint on how to bootstrap jBPM in a Spring context.

    Lesson 2: it's not because there's no documentation that it's not there

    jBPM 4 wraps all the components needed to bootstrap jBPM through Spring, even though it stays strangely silent on how to do so. Google is no great help here, since everybody seems to have inferred their own solution.

    For example, here’s how I found some people declared the jBPM configuration bean:

    <bean id="jbpmConfiguration" class="org.jbpm.pvm.internal.processengine.SpringHelper">
        <property name="jbpmCfg">
            <value>jbpm.cfg.xml</value>
        </property>
    </bean>
    

    At first, I was afraid to use a class from a package containing the dreaded “internal” word, but it seems to be the only way to do so…

    Lesson 3: Google is not really your friend here

    … Was it? In fact, I also found this configuration snippet:

    <bean id="jbpmConfiguration" class="org.jbpm.pvm.internal.cfg.SpringConfiguration">
        <constructor-arg value="jbpm.cfg.xml" />
    </bean>
    

    And I think if you search long enough, you’ll find many other ways to achieve engine configuration. That’s the problem when documentation is lacking: we developers are smart people, so we’ll find a way to make it work, no matter what.

    Lesson 4: don't forget to configure the logging framework

    This one is not jBPM-specific, but it’s important nonetheless. I lost much (really much) time because I foolishly ignored the message warning me about Log4J not finding its configuration file. After having created the damned thing, I could finally read an important piece of information I received when programmatically deploying a new jPDL file:

    WARNING: no objects were deployed! Check if you have configured a correct deployer in your jbpm.cfg.xml file for the type of deployment you want to do.

    Lesson 5: the truth is in the code
    
    This lesson is a direct consequence of the previous one: since I had already double-checked that my file had the jpdl.xml extension and that the jBPM deployer was correctly set in my configuration, I had to understand what the code really did. In the end, this meant getting the sources and debugging in my IDE to watch what happened under the covers. The culprit was the following line:
    
    repositoryService.createDeployment().addResourceFromInputStream("test", new FileInputStream(file));
    

    Since I used a stream on the file, and not the file itself, I had to supply a fictitious resource name (“test”). The latter was checked for a jpdl.xml file extension and, of course, failed miserably. This was however not readily apparent from the error message… The fix was the following:

    repositoryService.createDeployment().addResourceFromFile(file);
    

    Of course, the referenced file had the correct jpdl.xml extension and it worked like a charm.

    Lesson 6: don't reinvent the wheel

    Tightly coupled with lessons 2 and 3: I found many snippets fully describing the jbpm.cfg.xml configuration file. Albeit working (well, I hope so, since I didn’t test them), they’re overkill, error-prone and perhaps a sign of too much Google use (vs. brain use). For example, the jbpm4-spring-demo project published on Google Code provides a full-fledged configuration file. With a lengthy trial-and-error process, I managed to achieve success with a much shorter configuration file that reuses existing configuration snippets:

    <?xml version="1.0" encoding="UTF-8"?>
    <jbpm-configuration xmlns="http://jbpm.org/xsd/cfg">
        <import resource="jbpm.default.cfg.xml" /><!-- Default configuration -->
        <import resource="jbpm.tx.spring.cfg.xml" /><!-- Use Spring transaction management -->
        <import resource="jbpm.jpdl.cfg.xml" /><!-- Can deploy jPDL files -->
    </jbpm-configuration>
    

    Lesson 7: jBPM can access the Spring context

    jBPM offers the java activity to call arbitrary Java code: it can be an EJB, but we can also wire the Spring context to jBPM so as to make the former accessible from the latter. It’s easily done by modifying the previous configuration:

    <jbpm-configuration xmlns="http://jbpm.org/xsd/cfg">
        <import resource="jbpm.default.cfg.xml" />
        <import resource="jbpm.tx.spring.cfg.xml" />
        <import resource="jbpm.jpdl.cfg.xml" />
        <process-engine-context>
            <script-manager default-expression-language="juel"
                default-script-language="juel"
                read-contexts="execution, environment, process-engine, spring"
                write-context="">
                <script-language name="juel" factory="org.jbpm.pvm.internal.script.JuelScriptEngineFactory" />
            </script-manager>
        </process-engine-context>
    </jbpm-configuration>
    

    Note that I haven’t found an already existing snippet for this one, feedback welcome.

    Lesson 8: use the JTA transaction manager if needed

    You’ll probably end up having a business database along your jBPM database. In most companies, the DBAs will gently ask you to put them in different schemas. In this case, don’t forget to use a JTA transaction manager along with XA datasources to get two-phase commit. In your tests, using the same schema and a simple transaction manager based on the datasource will be enough.
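
    A minimal sketch of both setups, one per environment-specific configuration file (bean names are illustrative; the production variant assumes the application server exposes a JTA transaction manager):

    <!-- Production: delegate to the application server's JTA transaction manager -->
    <bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager" />

    <!-- Tests: a plain transaction manager on the single datasource is enough -->
    <bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource" />
    </bean>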

    For those of you that need yet another way to configure their jBPM/Spring integration, here are the sources I used, in Maven/Eclipse format. I hope my approach is more pragmatic than most. In any case, remember I’m a newbie with this product.

    Categories: Java Tags: jbpm, spring
  • Transaction management: EJB3 vs Spring

    Transaction management is a subject that is generally left to the tender care of a senior developer (or architect). Given the messages coming from some actors of the JavaEE community claiming that with newer versions of JavaEE you don’t need Spring anymore, I was interested in some fact-checking on how transaction management is handled in both technologies.

    Note: these messages were already sent a year and a half ago and prompted me to write this article.

    Transaction demarcation

    Note that although both technologies provide programmatic transaction demarcation (start transaction, then commit/rollback), we’ll focus on declarative demarcation, since it’s easier to use in real life.

    In EJB3, transactions are delimited by the @TransactionAttribute annotation. The annotation can be set on the class, in which case every method will have the transactional attribute, or per method. Annotation at the method level can override the annotation at the class level. Annotations are found automatically by the JavaEE-compliant application server.

    In Spring, the former annotation is replaced by the proprietary @Transactional annotation. The behaviour is exactly the same (annotation at method/class level and the possibility of overriding). Annotations are only found when the Spring beans definition file contains the http://www.springframework.org/schema/tx namespace as well as the following snippet:

    <tx:annotation-driven />
    

    Alternatively, both technologies provide an orthogonal way to set transaction demarcation: in EJB3, one can use the EJB JAR deployment descriptor (ejb-jar.xml) while in Spring, any Spring configuration file will do.

    Transactions propagation

    In EJB3, there are exactly 6 possible propagation values: MANDATORY, REQUIRED (default), REQUIRES_NEW, SUPPORTS, NOT_SUPPORTED and NEVER.

    Spring adds support for NESTED (see below).

    Nested transactions

    Nested transactions are transactions that are started, then committed or rolled back, during the execution of a root transaction. A nested transaction’s result is limited to its own scope (it has no effect on the umbrella transaction).

    Nested transactions are not allowed in EJB3; they are in Spring.
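
    As an illustration, requesting a nested transaction in Spring boils down to one more propagation value (a sketch; the method is hypothetical):

    @Transactional(propagation = Propagation.NESTED)
    public void updateStatistics() {
        // Runs within a savepoint of the surrounding transaction (on JDBC):
        // rolling it back doesn't roll back the umbrella transaction
    }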

    Read-only transaction

    Read-only transactions are best used with certain databases or ORM frameworks like Hibernate. In the latter case, Hibernate optimizes sessions so that they never flush (i.e. never push changes from the cache to the underlying database).

    I haven’t found a way (yet) to qualify a transaction as read-only in EJB3 (help welcome). In Spring, @Transactional has a readOnly parameter to implement read-only transactions:

    @Transactional(readOnly = true)
    

    Local method calls

    In local method calls, a bean’s method A() calls another method B() on the same instance. The expected behavior would be that the transaction attributes of method B() are taken into account. It’s not the case in either technology, but both offer some workaround.

    In EJB3, you have to inject an instance of the same class and call the method on this instance in order to use transaction attributes. For example:

    @Stateless
    public class MyServiceBean implements MyServiceLocal {
    
        @EJB
        private MyServiceLocal service;
    
        @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
        public void A() {
            service.B();
        }
    
        @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
        public void B() {
            ...
        }
    }
    

    In Spring, by default, transaction management is handled through pure Java proxies. If you want transaction management on local method calls, you’ll have to turn on AspectJ weaving. This is easily done in the Spring beans definition file (but beware of the side-effects, see the Spring documentation for more details):

    <tx:annotation-driven mode="aspectj" />
    

    Exception handling and rollback

    Exception handling is the area with the greatest differences between Spring and EJB3.

    In EJB3, by default, only runtime exceptions thrown from a method roll back a transaction delimited around this method. In order to mimic this behavior for checked exceptions, you have to annotate the exception class with @ApplicationException(rollback = true). Likewise, if you wish to discard this behavior for runtime exceptions, annotate your exception class with @ApplicationException(rollback = false).
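
    For instance, a sketch with a hypothetical checked exception class:

    @ApplicationException(rollback = true)
    public class MyException extends Exception {
        // Thrown from a transactional method, this checked exception
        // now rolls the transaction back, just like a runtime exception would
    }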

    This has the disadvantage of not being able to use the same exception class to roll back the transaction in one method and still commit despite the exception in another. In order to achieve this, you have to manage your transaction programmatically:

    @Stateless
    public class MyServiceBean implements MyServiceLocal {
    
        @Resource
        private SessionContext context;
    
        public void A() {
            try {
                ...
            } catch (MyException e) {
                context.setRollbackOnly();
            }
        }
    
    }
    

    In Spring, runtime exceptions also cause transaction rollback. In order to change this behavior, use the rollbackFor or noRollbackFor attributes of @Transactional:

    public class MyServiceImpl {

        @Transactional(rollbackFor = MyException.class)
        public void A() {
            ...
        }
    }
    

    Conclusion

    There’s no denying that JavaEE has made giant steps in the right direction with its 6th version. And yet, small details keep pointing me toward Spring. If you know only one of them - Spring or JavaEE 6 - I encourage you to try “the other” and see for yourself which one you’re more comfortable with.


    Categories: JavaEE Tags: ejb3, spring, transaction
  • Database unit testing with DBUnit, Spring and TestNG

    I really like Spring, so I tend to use its features to the fullest. However, in some dark corners of its philosophy, I tend to disagree with some of its assumptions. One such assumption is the way database testing should work. In this article, I will explain how to configure your projects to make Spring Test and DBUnit play nice together in a multi-developers environment.

    Context

    My basic need is to be able to test some complex queries: before integration tests, I have to validate that those queries get me the right results. These are not unit tests per se, but let’s assimilate them as such. In order to achieve this, I’ve been using for a while a framework named DBUnit. Although not maintained since late 2010, I haven’t found a replacement yet (be my guest for proposals).

    I also have some constraints:

    • I want to use TestNG for all my test classes, so that new developers won't have to think about which test framework to use
    • I want to be able to use Spring Test, so that I can inject my test dependencies directly into the test class
    • I want to be able to see for myself the state of the database at the end of any of my tests, so that if something goes wrong, I can execute my own queries to discover why
    • I want every developer to have their own isolated database instance/schema

    Considering the last point, our organization let us benefit from a single Oracle schema per developer for those “unit-tests”.

    Basic set up

    Spring provides the AbstractTestNGSpringContextTests class out-of-the-box. In turn, this means we can apply TestNG annotations as well as @Autowired on children classes. It also means we have access to the underlying applicationContext, but I prefer not to (and don’t need to in any case).

    The structure of such a test would look like this:

    @ContextConfiguration(locations = "classpath:persistence-beans.xml")
    public class MyDaoTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private MyDao myDao;
    
        @Test
        public void whenXYZThenTUV() {
            ...
        }
    }
    

    Readers familiar with Spring and TestNG shouldn’t be surprised here.

    Bringing in DBunit

    DbUnit is a JUnit extension targeted at database-driven projects that, among other things, puts your database into a known state between test runs. [...] DbUnit has the ability to export and import your database data to and from XML datasets. Since version 2.0, DbUnit can also work with very large datasets when used in streaming mode. DbUnit can also help you to verify that your database data match an expected set of values.

    DBUnit being a JUnit extension, it’s expected to extend the provided parent class org.dbunit.DBTestCase. In my context, I have to redefine some setup and teardown operations to fit into Spring’s inheritance hierarchy. Luckily, DBUnit developers thought about that and offer the relevant documentation.

    Among the different strategies available, my tastes tend toward the CLEAN_INSERT and NONE operations respectively on setup and teardown. This way, I can check the database state directly if my test fails. This updates my test class like so:

    @ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
    public class MyDaoTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private MyDao myDao;
    
        @Autowired
        private IDatabaseTester databaseTester;
    
        @BeforeMethod
        protected void setUp() throws Exception {
            // Get the XML and set it on the databaseTester
            // Optional: get the DTD and set it on the databaseTester
            databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
            databaseTester.setTearDownOperation(DatabaseOperation.NONE);
            databaseTester.onSetup();
        }
    
        @Test
        public void whenXYZThenTUV() {
            ...
        }
    }
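
    For reference, the two elided setup comments could translate to something like the following (a sketch assuming a flat XML dataset named dataset.xml at the root of the classpath):

    // Load the dataset and hand it over to DBUnit before each test method
    IDataSet dataSet = new FlatXmlDataSetBuilder().build(getClass().getResourceAsStream("/dataset.xml"));
    databaseTester.setDataSet(dataSet);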
    

    Per-user configuration with Spring

    Of course, we need to have a specific Spring configuration file to inject the databaseTester. As an example, here is one:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/beans
            http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
            <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
                <property name="location" value="${user.name}.database.properties" />
            </bean>
            <bean name="dataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource">
                 <property name="driverClass" value="oracle.jdbc.driver" />
                 <property name="username" value="${db.username}" />
                 <property name="password" value="${db.password}" />
                 <property name="url" value="jdbc:oracle:thin:@<server>:<port>/${db.schema}" />
            </bean>
            <bean name="databaseTester" class="org.dbunit.DataSourceDatabaseTester">
                <constructor-arg ref="dataSource" />
            </bean>
    </beans>
    

    However, there’s more to it than meets the eye. Notice the databaseTester has to be fed a datasource. Since a requirement is to have a database per developer, there are basically two options: either use an in-memory database, or use the same database as in production and provide one schema per developer. I tend toward the latter solution (when possible), since it decreases the differences between the testing environment and the production environment.

    Thus, in order for each developer to use their own schema, I use Spring’s ability to resolve Java system properties at runtime: each developer is identified by a different user.name. Then, I configure a PropertyPlaceholderConfigurer that looks for a ${user.name}.database.properties file, which will look like so:

    db.username=myusername1
    db.password=mypassword1
    db.schema=myschema1
    

    This lets me achieve my goal of each developer using their own Oracle schema. If you want to use this strategy, do not forget to provide a specific database.properties for the Continuous Integration server.

    Huh oh?

    Finally, the whole testing chain is configured down to the database tier. Yet, when the previous test is run, everything is fine (or not), but when checking the database, it looks untouched. Strangely enough, if you load some XML dataset and assert it during the test, it does behave accordingly: this bears all the symptoms of a transaction issue. In fact, when you look closely at Spring’s documentation, everything becomes clear: Spring’s vision is that the database should be left untouched by running tests, in complete contradiction to DBUnit’s. It’s achieved by simply rolling back all changes at the end of the test by default.

    In order to change this behavior, the only thing to do is annotate the test class with @TransactionConfiguration(defaultRollback=false). Note this doesn’t prevent us from specifying specific methods that shouldn’t affect the database state on a case-by-case basis with the @Rollback annotation.

    The test class becomes:

    @ContextConfiguration(locations = {"classpath:persistence-beans.xml", "classpath:test-beans.xml"})
    @TransactionConfiguration(defaultRollback=false)
    public class MyDaoTest extends AbstractTestNGSpringContextTests {
    
        @Autowired
        private MyDao myDao;
    
        @Autowired
        private IDatabaseTester databaseTester;
    
        @BeforeMethod
        protected void setUp() throws Exception {
            // Get the XML and set it on the databaseTester
            // Optional: get the DTD and set it on the databaseTester
            databaseTester.setSetUpOperation(DatabaseOperation.CLEAN_INSERT);
            databaseTester.setTearDownOperation(DatabaseOperation.NONE);
            databaseTester.onSetup();
        }
    
        @Test
        public void whenXYZThenTUV() {
            ...
        }
    }
    

    Conclusion

    Though Spring’s and DBUnit’s views on database testing are opposed, Spring’s configuration versatility lets us make it fit our needs (and benefit from DI). Of course, other improvements are possible: pushing common code up into a parent test class, etc.


    Categories: Java Tags: dbunit, spring, test, testng
  • EJB3 façade over Spring services

    As a consultant, you seldom get to voice your opinions regarding the technologies used by your customers, and it’s even more extraordinary when you’re heard. My current context belongs to the usual case: I’m stuck with Java 6 running JBoss 5.1 EAP with no chance of moving forward in the near future (and I consider myself happy, since a year and a half ago that was Java 5 with JOnAS 4). Sometimes others wonder if I’m working in a museum, but I see myself more as an archaeologist than a curator.

    Context

    I recently got my hands on an application that had to be migrated from a proprietary framework to more perennial technologies. The application consists of a web front-office and a Swing back-office. The key difficulty was to make the Swing part communicate with the server part, since both live in two different network zones, separated by a firewall (with some open ports for RMI and HTTP). Moreover, our Security team enforces that such communications have to be secured.

    The hard choice

    The following factors played a part in my architecture choice:

    • My skills, I've plenty more experience in Spring than in EJB3
    • My team skills, more oriented toward Swing
    • Reusing as much as possible the existing code or at least interfaces
    • Existing requirement toward web-services:
    • Web services security is implemented through the reverse proxy, and its reliability is not the best I've ever seen (to put it mildly)
    • Web services applications have to be located on dedicated infrastructure
    • Mature EJB culture
    • Available JAAS LoginModule for secure EJB calls and web-services from Swing

    Now, it basically boils down to exposing Spring web services over HTTP, or EJB3. In the first case, the cons include no experience of Spring remoting, performance (or even reliability) issues, and deployment on different servers, thus more complex packaging for the dev team. In the second case, they include a slightly higher complexity (yes, EJB3 is easier than EJB 2.1, but still), a higher ramp-up time, and me not being able to properly support my team when a difficulty is met.

    In the end, I decided to use Spring services to the fullest, but to put them behind an EJB3 façade. That may seem strange, but I think I get the best of both worlds: EJB3 skills are kept to a bare minimum (transactions will be managed by Spring), while the technology gets me directly through the reverse proxy. I’m open to suggestions and arguments toward this or that solution given the above factors, but the quicker the better :-)

    Design

    To ease the design to the maximum, each Spring service will have exactly one and only one EJB3 façade, which will delegate calls to the underlying service. Most IDEs will be able to take care of the boilerplate delegating code (hell, you can even use Project Lombok with @Delegate - I’m considering it).

    On the class level, the design is only standard EJB design, with the added Spring implementation.

    On the module level, this means we will need a somewhat convoluted packaging for the service layer:

    • A Business Interfaces module
    • A Spring Implementations module
    • An EJB3 module, including remote interfaces and session beans (thanks to the Maven EJB plugin, it will produce two different artifacts)

    How-to

    Finally, developing the EJB3 façade and injecting it with Spring beans is ridiculously simple. The magic lies in the JavaEE 5 Interceptors annotation on top of the session bean class, referencing the SpringBeanAutowiringInterceptor class, which kicks in Spring injection on every referenced dependency after instantiation (as well as after activation).

    The only dependency in our case is the delegate Spring bean, which has to be annotated with the usual @Autowired.

    import javax.ejb.Stateless;
    import javax.interceptor.Interceptors;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.ejb.interceptor.SpringBeanAutowiringInterceptor;
    import ch.frankel.blog.ejb.spring.service.client.RandomGeneratorService;
    
    @Stateless
    @Interceptors(SpringBeanAutowiringInterceptor.class)
    public class RandomGeneratorBean implements RandomGeneratorService {
    
        @Autowired
        private ch.frankel.blog.ejb.spring.service.RandomGeneratorService delegate;
    
        @Override
        public int generateNumber(int lowerLimit, int upperLimit) {
            return delegate.generateNumber(lowerLimit, upperLimit);
        }
    }
    

    In order for this to work, we have to use a specific Spring configuration file which references the Spring application context defined in our services module, as well as activates annotation configuration.

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:context="http://www.springframework.org/schema/context"
      xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.1.xsd">
      <context:annotation-config />
      <bean class="org.springframework.context.support.ClassPathXmlApplicationContext">
        <constructor-arg>
          <list>
            <value>classpath:ejb-service.xml</value>
          </list>
        </constructor-arg>
      </bean>
    </beans>
    

    Warning: it’s mandatory to name this file beanRefContext.xml and to make it available at the root of the EJB JAR (and of course to set the Spring service module as a dependency).

    Conclusion

    Sometimes, you have to make some interesting architectural choices that are pertinent only in your context. In these cases, it’s good to know somebody paved the road for you: this is the thing with EJB3 façade over Spring services.


    Categories: JavaEE Tags: ejb3, spring
  • Spring: profiles or not profiles?

    I’m a big user of Spring, and I must confess I didn’t follow all the latest additions in version 3.1. One such addition is the notion of profiles.

    Context

    Profiles are meant to address the case when you use the same Spring configuration across all your environments, but with tiny differences. The most frequently encountered use-case is the datasource. For example, during my integration tests, I’m using no application server, so my datasource comes from a simple org.apache.commons.dbcp.BasicDataSource configured with URL, driver class name, user and password, like so:

    <bean id="dataSource">
        <property name="driverClassName" value="${db.driver}" />
        <property name="url" value="${db.url}" />
        <property name="username" value="${db.username}" />
        <property name="password" value="${db.password}" />
    </bean>
    

    Note there are other alternatives to BasicDataSource, such as org.springframework.jdbc.datasource.SingleConnectionDataSource or com.mchange.v2.c3p0.ComboPooledDataSource.

    In an application server environment, I use another bean definition:

    <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/ds" />
    

    You probably noticed both previous bean definitions have the same name. Keep it in mind, it will play a role in the next section.

    My legacy solution

    In order to be able to switch from one bean to another in regard to the context, I use the following solution:

    • I separate the datasource bean definition in its own Spring configuration file (respectively spring-datasource-it.xml and spring-datasource.xml)
    • For production code, I create a main Spring configuration file that imports the latter, e.g. <import resource="classpath:spring-datasource.xml" />
    • For integration testing, I use a Spring Test class as parent (like AbstractTestNGSpringContextTests) and configure it with the @ContextConfiguration annotation. @ContextConfiguration has a locations attribute that can be set with the locations of all the Spring configuration fragments needed for my integration test to run.

    Besides, as a Maven user, I can neatly store the spring-datasource.xml file under src/main/resources and the spring-datasource-it.xml file under src/test/resources, or in separate modules. Given these, the final artifact only contains the relevant application server datasource bean in my Spring configuration: the basic connection pool safely stays in my test code.

    The profile solution

    Remember when bean identifiers had to be unique across your entire Spring configuration? This is the case no more. With profiles, each bean (or more precisely, each beans group) can carry extra profile information, so that bean identifiers only have to be unique within a profile. This means we can define two beans with the dataSource identifier and set a different profile on each: Spring won’t complain. If we call those profiles integration and production, and activate the right one when needed, this will have exactly the same result. The Spring configuration file will look like this:

    <beans xmlns="http://www.springframework.org/schema/beans"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xmlns:jee="http://www.springframework.org/schema/jee"
        xsi:schemaLocation="...">
        ...
        <beans profile="integration">
            <bean id="dataSource">
                <property name="driverClassName" value="${db.driver}" />
                <property name="url" value="${db.url}" />
                <property name="username" value="${db.username}" />
                <property name="password" value="${db.password}" />
            </bean>
        </beans>
        <beans profile="production">
            <jee:jndi-lookup id="dataSource" jndi-name="java:comp/env/jdbc/ds" />
        </beans>
    </beans>
    

    Now, the Spring configuration file contains both contexts.
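
    For completeness, the profiles still have to be activated somewhere. In an integration test this can be done programmatically (a sketch; the configuration file name is an assumption), while in production one would rather set the spring.profiles.active system property or a servlet context parameter:

    // Programmatic activation, e.g. when bootstrapping an integration test context
    GenericXmlApplicationContext context = new GenericXmlApplicationContext();
    context.getEnvironment().setActiveProfiles("integration");
    context.load("classpath:applicationContext.xml");
    context.refresh();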

    Conclusion

    I’m not entirely sure that such a use of profiles brings anything to the table (despite the datasource example being depicted everywhere on the web). Granted, we switched from a build-time configuration to a runtime configuration. But the only consequence I see is that the final artifact ships with information irrelevant to its environment: it’s bad practice at best, and at worst a security flaw.

    Other use-cases could include the need to run the application both on an application server and in the cloud. Even then, I would be in favor of distinct configuration files assembled at build time. I think I will stay with my legacy approach for the moment, at least until I’m proven wrong or until a case presents itself that can easily be solved by profiles without side-effects.

    Categories: Java Tags: spring
  • A Spring hard fact about transaction management

    In my Hibernate hard facts article series, I tackled some misconceptions about Hibernate: there are plenty of developers using Hibernate (myself included) that do not use it correctly, sometimes from a lack of knowledge. The same can be said about many complex products, but I was dumbfounded this week when I was faced with such a thing in the Spring framework. Surely, something as pragmatic as Spring couldn’t have shadowy areas in some corner of its API.

    About Spring's declarative transaction boundaries

    Well, I’ve found at least one, in regard to declarative transaction boundaries:

    Spring recommends that you only annotate concrete classes (and methods of concrete classes) with the @Transactional annotation, as opposed to annotating interfaces. You certainly can place the @Transactional annotation on an interface (or an interface method), but this works only as you would expect it to if you are using interface-based proxies. The fact that Java annotations are not inherited from interfaces means that if you are using class-based proxies (proxy-target-class="true") or the weaving-based aspect (mode="aspectj"), then the transaction settings are not recognized by the proxying and weaving infrastructure, and the object will not be wrapped in a transactional proxy, which would be decidedly bad.

    From Spring's documentation

    Guess what? Even though it would be nice to have transactional behavior as part of the contract, it’s sadly not the case, as it depends on your context configuration, as stated in the documentation! To be sure, I tried, and it’s (sadly) true.

    Consider the following contract and implementation:

    public interface Service {
    
      void forcedTransactionalMethod();
    
      @Transactional
      void questionableTransactionalMethod();
    }
    
     public class ImplementedService implements Service {
    
      private DummyDao dao;
    
      public void setDao(DummyDao dao) {
        this.dao = dao;
      }
    
      @Transactional
      public void forcedTransactionalMethod() {
        dao.getJdbcTemplate().update("INSERT INTO PERSON (NAME) VALUES ('ME')");
      }
    
      public void questionableTransactionalMethod() {
        dao.getJdbcTemplate().update("INSERT INTO PERSON (NAME) VALUES ('YOU')");
      }
    }
    

    Now, depending on whether we activate the CGLIB proxy, questionableTransactionalMethod behaves differently, committing in one case and not in the other.
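
    For reference, the toggle is a single attribute in the beans definition file; use one declaration or the other, of course:

    <!-- JDK interface-based proxies (the default): @Transactional on the interface is honored -->
    <tx:annotation-driven proxy-target-class="false" />

    <!-- CGLIB class-based proxies: annotations on interfaces are silently ignored -->
    <tx:annotation-driven proxy-target-class="true" />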

    Note: for Eclipse users - even without Spring IDE, this is shown as a help popup when CTRL-spacing for a new attribute (it doesn’t show when the attribute already exists in the XML though).

    Additional facts for proxy mode

    Spring’s documentation also mentions two other fine points that shouldn’t be lost on developers using proxies (as opposed to AspectJ weaving):

    • Only annotate public methods. Annotating methods with other visibilities will do nothing - truly nothing - as no errors will be raised to warn you something is wrong
    • Be wary of self-invocation. Since the transactional behavior is based on proxies, a method of the target object calling another of its own methods won't lead to transactional behavior, even though the latter method is marked as transactional

    Both those limitations can be removed by weaving the transactional behavior inside the bytecode instead of using proxies (but not the first limitation regarding annotations on interfaces).

    To go further:

    You can find the sources for this article in Eclipse/Maven format here (but you’ll have to configure a MySQL database instance).

    Categories: Java Tags: spring, transaction
  • CDI worse than Spring for autowiring?

    Let’s face it, there are two kinds of developers: those that favor Spring autowiring because it alleviates them from writing XML (even though you can do autowiring with XML) and those that see autowiring as something risky.

    I must admit I’m of the second brand. In fact, I’d rather face a rabid 800-pound gorilla than use autowiring. Sure, it does all the job for you, doesn’t it? Maybe, but it’s a helluva job and I’d rather dirty my hands than let some cheap bot do it for me. The root of the problem lies in the implicitness of autowiring. You declare two beans, say one needs a kind of the other, and off we go.

    It seems simple on paper, and it is if we leave it at that. Now, autowiring comes in two major flavors, illustrated after this list:

    • By name where matching is done between the property's name and the bean's name
    • By type where matching is done between the property's type and the bean's type
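
    As a sketch, in Spring XML (the bean classes are reused from an earlier DAO example, purely for illustration):

    <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource" />
    <!-- By name: the dataSource property matches the bean named "dataSource" -->
    <bean id="playerDao" class="ch.frankel.blog.app.persistence.dao.PlayerDao" autowire="byName" />
    <!-- By type: any bean of a matching type is a candidate; a second BasicDataSource in the context breaks this -->
    <bean id="matchDao" class="ch.frankel.blog.app.persistence.dao.MatchDao" autowire="byType" />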

    The former, although relatively benign, can lead to naming nightmares where developers have to tweak names to make beans autowire together. The latter is utter nonsense: you can break a working context just by adding a bean to it, only because it has the same class as another bean already in the context. Worse, autowiring errors can occur in a completely unrelated location, just because of the magic involved. And no, solutions don’t come from mixing autowiring and explicit wiring, mixing autowiring by name and by type, or even excluding beans from being autowire candidates; that just worsens the complexity, as developers have to constantly question what the behavior will be.

    Autowiring fans that are not convinced should read the Spring documentation itself for a list of limitations and disadvantages. This is not to say that autowiring in itself is bad, just that it has to be kept strictly in check. So far, I’ve allowed it only for small teams and only for tests (i.e. code that doesn’t ship to production).

    All in all, Spring autowiring has one redeeming quality: candidates are only chosen from the context, meaning instances outside the context cannot wreak havoc our nicely crafted application.

    CDI developers should have a hint of where I’m heading. Since in CDI every class on the classpath is a candidate for autowiring, adding a new JAR to the application’s classpath can disrupt CDI and prevent the application from launching. In this light, only autowiring by name should be used for CDI… and then, only by those courageous enough to take the risk :-)

    Categories: Java Tags: autowiring, CDI, spring
  • Property editors

    In Java, property editors are seldom used. Originally, they were meant for Swing applications to transform between model objects and their string representations on the GUI.

    Spring found another use for them: converting strings to typed values is typically what needs to be done when parsing a Spring beans definition file in order to create beans in the factory. Have you noticed that, as a Spring user, you can set a number on a property and it is taken as such?

    That’s because in every bean factory, some editors are registered by default. In fact, there are three kinds of editors:

    1. The first level consist of those default editors.
    2. If you need to go further, you can register editors provided by the Spring framework.
    3. Finally, you can also create your own editors.

    The list of built-in editors, pre-registered or not, is available in the Spring documentation.

    For example, the LocaleEditor is built-in and to use it, we only have to provide the correct locale string representation like so:

    <bean id="dummy" class="ch.frankel.blog.spring.propertyeditor.Dummy">
    	<property name="locale" value="fr_FR" />
    </bean>
    

    Interestingly enough, some other editors are not well-known, like the PatternEditor, although they are registered by default.

    In order to register other property editors, whether built-in or homemade, just configure the customEditors map property of the org.springframework.beans.factory.config.CustomEditorConfigurer bean. Note that the latter can safely remain anonymous.

    The following configuration snippet adds a date property editor so that dates can easily be configured in beans definitions files.

    <bean class="org.springframework.beans.factory.config.CustomEditorConfigurer">
      <property name="customEditors">
        <map>
          <entry key="java.util.Date">
            <bean class="org.springframework.beans.propertyeditors.CustomDateEditor">
              <constructor-arg index="0">
                <bean class="java.text.SimpleDateFormat">
                  <constructor-arg  value="dd.MM.yyyy" />
                </bean>
              </constructor-arg>
              <constructor-arg index="1" value="false" />
            </bean>
          </entry>
        </map>
      </property>
    </bean>
    

    Note that keys in the custom editors map are the wanted object type, meaning there can be only a single editor for each type - a good thing if there’s one.
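
    For the homemade kind, here’s a sketch of what such an editor could look like, relying on the java.beans.PropertyEditorSupport base class (the Money class and its "amount currency" text format are hypothetical):

    import java.beans.PropertyEditorSupport;
    import java.math.BigDecimal;

    public class MoneyEditor extends PropertyEditorSupport {

        // Parses a string such as "42.50 EUR" into a (hypothetical) Money instance
        @Override
        public void setAsText(String text) throws IllegalArgumentException {
            String[] parts = text.trim().split("\\s+");
            setValue(new Money(new BigDecimal(parts[0]), parts[1]));
        }
    }

    Registering it is then just one more entry in the customEditors map above, keyed by the Money class name.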

    Sources for this article can be found here in Eclipse/Maven format.

    Categories: Java Tags: property, spring
  • Playing with Spring Roo and Vaadin

    One year ago, under the gentle pressure of a colleague, I tried Spring Roo. I had mixed feelings about the experience: while wanting to increase productivity was of course a good idea, I had concerns regarding Roo’s intrusiveness. I left it at that, and closed the part of my memory related to it.

    Now, one year later, I learned that my favourite web framework, namely Vaadin, had set their sights on a Roo plugin. That was enough for me to get back on my feet and try again. This article tries to describe as honestly as possible the good and the bad of this experience, related to both Roo and the Vaadin plugin.

    Pure Roo

    First of all, I download the free STS 2.5, which includes Roo 1.1.1, and install it. No problem at this time. Then, I create a new Spring Roo project inside STS. Unluckily, STS does not create the default Maven directories (src/main/java, src/main/resources, src/test/java, src/test/resources): errors appear in my Problems tab. Trying to fix the project, I type the project command into Roo’s shell inside STS. Roo complains I should give it my top-level package, though I already did that in the wizard. OK, I am a nice guy and do as asked:

    project --topLevelPackage ch.frankel.blog.roo.vaadin

    This time, Roo successfully creates the aforementioned directories as well as a Spring configuration file and a log4j properties file.

    The next step is to create my persistence layer. The use of CTRL+SPACE is really nice and helps you a lot with command parameters. Combined with the hint command, it’s really easy to get comfortable with Roo.

    persistence setup --provider HIBERNATE --database DERBY

    Roo nicely updates my Maven POM and lets it download the right dependencies (JPA, Hibernate, Derby, etc.). I can always change a version if I need a specific one. It even adds the JBoss Maven repo so I can download Hibernate. I just need to update my database properties, driver and URL. Uh oh: when I open the file, I see Roo has strangely escaped colon characters with backslashes. I just replace the example escaped URL with an unescaped real one.

    Meanwhile, I notice an error in the log: “The POM for org.aspectj:aspectjtools:jar:1.6.11.M1 is missing, no dependency information available”. Roo updated my POM with version 1.6.11.M1. If I check on repo1, the latest version is 1.6.10. Replacing 1.6.11.M1 with 1.6.10 removes the error.

    Now would be a good time to create my first entity:

    entity --class ~.entity.Teacher --testAutomatically

    STS now complains that the target directory doesn’t exist, like it complained before for the source directories. In order to remedy that, I do as before and launch an instruction. In this case, I order Roo to launch the tests with the perform test command. In turn, Roo launches mvn test under the hood and Maven creates the target directory.

    Entities without fields are useless. Let’s add a first name and a name to our teacher.

    field string --fieldName firstName

    field string --fieldName name

    It would be cool to have courses attached.

    entity --class ~.entity.Course --testAutomatically

    field string --fieldName name

    focus --class Teacher

    field set --fieldName courses --type Course

    Notice that since Roo works in a contextual manner, I have to use the focus command so that I do not create the courses Set inside the Course class but inside the Teacher class.

    Vaadin plugin

    Until now, there was nothing about Vaadin. Here it comes: following the instructions from Vaadin’s wiki, I download the latest Vaadin Roo plugin snapshot and put it in my local Roo OSGi repository. In order for the new bundle to be seen by Roo, I have to restart it, which I don’t know how to do inside STS. Instead, I restart STS.

    There comes the magic command:

    vaadin setup --applicationPackage ~.web --useJpaContainer false

    This command:

    • adds Vaadin dependency
    • adds web directory with images
    • updates the POM to war packaging

    Yet, when I try to add my new web project to Tomcat, STS complains it doesn’t find candidate projects. The problem lies in Eclipse not being synchronized with Maven: as such, my project lacks the Web facet. The solution: right-click on the project, go to the Maven menu and click on the “Update project configuration” submenu. When done, I can add the project to Tomcat like any other web project (since it is one anyway).

    Starting Tomcat and going to http://localhost:8080/vaadin, I can see Vaadin handling my request. Only the views over my entities are missing, which the following command generates:

    vaadin generate all --package ~.web.ui --visuallyComposable true

    Conclusion

    Well, that was quick, if not perfect. I do miss some features in Roo.

    As for the Vaadin plugin, it doesn’t update the Eclipse files after changing the POM. Someone not familiar with m2eclipse’s inner workings could lose some time over this behaviour.

    On the other hand, I get a simple webapp in a matter of minutes, one that I can now evolve however I choose. CTRL+SPACE and hint are Roo’s killer features. Moreover, as the Vaadin add-on showed, you can add whatever functionality is lacking with your own plugin (or use one already available). What’s really important for me though, and without which I wouldn’t even consider using Roo, is that it is completely removable in 3 easy steps. As such, you can enjoy Roo’s productivity boost, not tell anyone and remove Roo just before passing the project to the maintenance team.

    Thanks to Joonas Lehtinen and Henri Sara for their work on the Vaadin Roo plugin and for sending me the wiki draft explaining the Vaadin part ahead of schedule.

    Here are the sources for this article in Maven/STS format.


    Categories: JavaEE Tags: roo, spring, vaadin
  • Why CDI won't replace Spring

    CDI is part of JavaEE 6 and that’s a great move forward. Now, there’s a standard telling vendors and developers how to do DI. It can be refined, but it’s here nonetheless. Norms and standards are IMHO a good thing in any industry. Yet, I don’t subscribe to some people’s view that this means the end of Spring. There are multiple DI frameworks around, and Spring is number one.

    Why is that? Because it was the first? It wasn’t (look at Avalon). My opinion is that it’s because Spring’s DI mechanism is only a part of its features. Sure, it’s great, but Guice and now CDI offer the same. What really sets Spring apart is its integration of the JavaEE APIs and of the best components available on the market, built on top of DI.

    A good example of this added value is JMS: if you ever tried to post to a queue before, you know what I mean. This is the kind of code you would need to write:

    Context context = new InitialContext();
    QueueConnectionFactory queueConnectionFactory = (QueueConnectionFactory) context.lookup("myQConnectionFactory");
    Queue queue = (Queue) context.lookup("myQueue");
    QueueConnection queueConnection = queueConnectionFactory.createQueueConnection();
    QueueSession queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
    QueueSender queueSender = queueSession.createSender(queue);
    TextMessage message = queueSession.createTextMessage();
    message.setText("Hello world!");
    queueSender.send(message);
    

    Now in Spring, this is configured like so:

    <jee:jndi-lookup id="queueConnectionFactory" jndi-name="myQConnectionFactory" />
    <jee:jndi-lookup id="q" jndi-name="myQueue" />
    <bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
      <property name="connectionFactory" ref="queueConnectionFactory" />
      <property name="defaultDestination" ref="q" />
    </bean>
    

    I don’t care that it’s shorter, even though it is (granted, it lacks the code that actually sends the text). I don’t care that it’s XML either. My real interest is that I code less: fewer errors, fewer bugs, less code to test. I don’t have to write the boilerplate code because Spring takes care of it. Sure, I have to configure, but it’s a breeze compared to the code.
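
    For completeness, here is a minimal sketch of the sending code that goes with this configuration; HelloSender is a made-up name, while JmsTemplate and MessageCreator are the actual Spring classes:

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.Session;
    import org.springframework.jms.core.JmsTemplate;
    import org.springframework.jms.core.MessageCreator;

    public class HelloSender {

      private JmsTemplate jmsTemplate;

      public void setJmsTemplate(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
      }

      public void sendHello() {
        // The template opens the connection and session, sends and cleans up
        jmsTemplate.send(new MessageCreator() {
          public Message createMessage(Session session) throws JMSException {
            return session.createTextMessage("Hello world!");
          }
        });
      }
    }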

    This is true for JMS, but equally for Hibernate, EclipseLink, iBatis, JPA, JDO, Quartz, Groovy, JRuby, JUnit, TestNG, JasperReports, Velocity, FreeMarker, JSF, whatever is best of breed (or at least on top) in its category. Moreover, if nothing exists to fit the bill, Spring provides it: look at Spring Batch for a very good example of a framework that handles the boring and repetitive code and lets you focus on your business code.

    Do Guice or other frameworks have such integration features included in them? No, they are pure, “untainted” DI frameworks. As such, they can easily be replaced by CDI, Spring not so much.

    Categories: JavaEE Tags: CDI, spring
  • Next book review: Spring Security 3

    My next book review will be on Spring Security 3 from Packt. I’ve known of Spring Security since the days it was named Acegi Security, but I haven’t had the chance to play with it. A book on the Spring Security model will let me dive into the subject and see whether it warrants further investigation on my part.

    The shipment is on its way, the rest is on my shoulders!

    Categories: Bookreview Tags: security, spring
  • Chicken and egg problem with Spring and Vaadin

    The more I dive into Vaadin, the more I love it: isolation from dirty plumbing, rich components, integration with portlets, Vaadin has it all.

    Anyway, the more you explore a technology, the bigger the chances you fall into the proverbial rabbit hole. I fell into one just yesterday and came up with a solution. The problem is the following: in Vaadin, application objects are tied to the session. Since I’m a Spring fanboy, it makes sense to use Spring to wire all my dependencies. As such, I scoped all application-related beans (application, windows, buttons, resources, …) to the session like so:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:context="http://www.springframework.org/schema/context"
      xmlns:aop="http://www.springframework.org/schema/aop"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop.xsd">
      <context:annotation-config />
      <bean id="application" scope="session">
      <aop:scoped-proxy />
    ...
    </beans>
    

    This works nicely, until you happen to use DI to wire further. By wiring further, I mean wiring windows into the application, buttons into windows and then resources (icons) into buttons. It is the last step that causes a circular dependency. Icon resources are implemented in Vaadin by com.vaadin.terminal.ClassResource. This class has two constructors, both of which need an application instance.

    The dependency cycle is thus: application -> window -> button -> resource -> application. Spring doesn’t like it and protests energetically by throwing an exception labeled Requested bean is currently in creation: Is there an unresolvable circular reference?

    I think that, in this case, the design of ClassResource is not suited. That’s why I designed a class aligned with Spring’s deferred instantiation: since the application is session-scoped, Spring uses a proxy that defers instantiation from application start (Spring’s normal behaviour) to session use. The point is to still implement the ApplicationResource interface, but to set the application only when needed:

    public class DeferredClassResource implements ApplicationResource {
    
      private static final long serialVersionUID = 1L;
    
      private int bufferSize = 0;
      private long cacheTime = DEFAULT_CACHETIME;
      private Class<?> associatedClass;
      private final String resourceName;
      private Application application;
    
      public DeferredClassResource(String resourceName) {
        if (resourceName == null) {
          throw new IllegalArgumentException("Resource name cannot be null");
        }
        this.resourceName = resourceName;
      }
    
      public DeferredClassResource(Class<?> associatedClass, String resourceName) {
        this(resourceName);
        if (associatedClass == null) {
          throw new IllegalArgumentException("Associated class cannot be null");
        }
        this.associatedClass = associatedClass;
      }
    
    ... // standard getters
    
      @Override
      public String getFilename() {
        int index = 0;
        int next = 0;
        while ((next = resourceName.indexOf('/', index)) > 0
          && next + 1 < resourceName.length()) {
          index = next + 1;
        }
        return resourceName.substring(index);
      }
    
      @Override
      public DownloadStream getStream() {
        final DownloadStream ds = new DownloadStream(associatedClass
          .getResourceAsStream(resourceName), getMIMEType(),
          getFilename());
        ds.setBufferSize(getBufferSize());
        ds.setCacheTime(cacheTime);
        return ds;
      }
    
      @Override
      public String getMIMEType() {
        return FileTypeResolver.getMIMEType(resourceName);
      }
    
      public void setApplication(Application application) {
        if (this.application == application) {
          return;
        }
        if (this.application != null) {
          throw new IllegalStateException("Application is already set for this resource");
        }
        this.application = application;
        associatedClass = application.getClass();
        application.addResource(this);
      }
    }
    

    DeferredClassResource is just a copy of ClassResource, but with adaptations that let the application be set later, not only in the constructors. My problem is solved: I just need to make the application aware of my resources so it can call setApplication(this) on each of them in a @PostConstruct annotated method.
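
    Below is a minimal sketch of what this wiring could look like; MyApplication and its icon property are made-up names for the example:

    import javax.annotation.PostConstruct;
    import org.springframework.beans.factory.annotation.Autowired;
    import com.vaadin.Application;

    public class MyApplication extends Application {

      private DeferredClassResource icon;

      @Autowired
      public void setIcon(DeferredClassResource icon) {
        this.icon = icon;
      }

      @PostConstruct
      public void bindResources() {
        // The cycle is closed here, once both beans exist
        icon.setApplication(this);
      }

      @Override
      public void init() {
        // build windows, buttons, etc.
      }
    }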

    Categories: JavaEE Tags: spring, vaadin
  • Vaadin Spring integration

    I lately became interested in Vaadin, another web framework, but one where everything is done on the server side: no need for developers to learn HTML, CSS or JavaScript. Since Vaadin addresses my concern that web applications are too expensive because they constantly require well-rounded developers, I dug a little deeper; it will probably be the subject of another post.

    Anyway, I became a little disappointed when I wanted to use my favourite Dependency Injection framework, namely Spring, in Vaadin. After a little Googling, I found the Vaadin wiki and, more precisely, the page about Vaadin Spring integration. It exposes two ways to integrate Spring with Vaadin.

    The first one uses the Helper “pattern”, a class with static methods that has access to the Spring application context. IMHO, such Helper classes should be forgotten now that we have DI, since they completely defeat its purpose. If you need to explicitly call a static Helper method to get the bean, where’s the Inversion of Control?

    The second solution uses Spring’s proprietary @Autowired annotation to perform DI. Since IoC is all about decoupling, I’m vehemently opposed to coupling my code to the Spring framework.

    Since neither option seemed viable to me, let me present the one I imagined: it is very simple and consists of subclassing Vaadin’s AbstractApplicationServlet and using it instead of the classical ApplicationServlet.

    public class SpringVaadinServlet extends AbstractApplicationServlet {
    
      /** Class serial version unique identifier. */
      private static final long serialVersionUID = 1L;
    
      private Class<? extends Application> clazz;
    
      @Override
      public void init(ServletConfig config) throws ServletException {
        super.init(config);
        WebApplicationContext wac = WebApplicationContextUtils.getRequiredWebApplicationContext(
          config.getServletContext());
        Application application = (Application) wac.getBean("application", Application.class);
        clazz = application.getClass();
      }
    
      /**
      * Gets the application from the Spring context.
      *
      * @return The Spring bean named 'application'
      */
      @Override
      protected Application getNewApplication(HttpServletRequest request)
        throws ServletException {
        WebApplicationContext wac = WebApplicationContextUtils.getRequiredWebApplicationContext(
          request.getSession().getServletContext());
        return (Application) wac.getBean("application", Application.class);
      }
    
      /**
      * @see com.vaadin.terminal.gwt.server.AbstractApplicationServlet#getApplicationClass()
      */
      @Override
      protected Class<? extends Application> getApplicationClass()
          throws ClassNotFoundException {
        return clazz;
      }
    }
    

    This solution is concise and elegant (if I may say so myself). Its only drawback is that it couples the servlet to Spring. But a coupling localized to a single class is of no consequence and perfectly acceptable. Moreover, this lets you use Spring’s autowiring mechanism, JSR 250 annotations or plain old explicit XML wiring.

    Categories: JavaEE Tags: spring, vaadin
  • Spring Persistence with Hibernate

    This review is about Spring Persistence with Hibernate by Ahmad Reza Seddighi from Packt Publishing.

    Facts

    1. 15 chapters, 441 pages, 38€99
    2. This book is intended for beginners but more experienced developers can learn a thing or two
    3. This book covers Hibernate and Spring in relation to persistence

    Pros

    1. The scope of this book is what makes it very interesting. Many books talk about Hibernate and many talk about Spring. Yet, I do not know of many which talk about the use of both in relation to persistence. Explaining Hibernate without describing the transactional side is pointless
    2. The book is well detailed, taking you by the hand from the basics up to a good level of knowledge on the subject
    3. It explains plain AOP, then Spring proxies before heading to the transactional stuff

    Cons

    1. The book is about Hibernate, but I would have liked to see a tighter integration with JPA. It is only described as another way to configure the mappings
    2. Nowadays, I think Hibernate XML configuration is becoming obsolete. The book views XML as the main way of configuration, annotations being secondary
    3. Some subjects are not documented: for some, that's not too important (like Hibernate custom SQL operations), for others, that's a real loss (like the @Transactional Spring annotation)

    Conclusion

    Despite some minor flaws, Spring Persistence with Hibernate lets you dive head first into the very complex subject of Hibernate. I think that Hibernate has a very low entry barrier, and you can become productive with it very quickly. On the downside, mistakes will cost you much more than with plain old JDBC. This book serves you Hibernate and Spring concepts on a platter, so you will make fewer mistakes.

    Categories: Bookreview Tags: hibernate, persistence, spring
  • Discover Spring authoring

    In this article, I will describe a useful but much underused feature of Spring, the definition of custom tags in the Spring beans definition files.

    Spring namespaces

    I will begin with a simple example taken from Spring’s documentation. Before version 2.0, only a single XML schema was available. So, in order to make a constant available as a bean, and thus inject it in other beans, you had to define the following:

    <bean id="java.sql.Connection.TRANSACTION_SERIALIZABLE"
      class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean" />
    

    Spring made it possible, but realize it’s only a trick to expose the constant as a bean. Since Spring 2.0 however, the framework lets you use the util namespace, so that the previous example becomes:

    <util:constant static-field="java.sql.Connection.TRANSACTION_SERIALIZABLE"/>
    

    In fact, there are many namespaces now available:

    Prefix  | Namespace                                     | Description
    beans   | http://www.springframework.org/schema/beans   | Original bean schema
    util    | http://www.springframework.org/schema/util    | Utilities: constants, property paths and collections
    jee     | http://www.springframework.org/schema/jee     | JNDI lookup
    lang    | http://www.springframework.org/schema/lang    | Use of other languages
    tx      | http://www.springframework.org/schema/tx      | Transactions
    aop     | http://www.springframework.org/schema/aop     | AOP
    context | http://www.springframework.org/schema/context | ApplicationContext manipulation

    Each of these is meant to reduce verbosity and increase readability, as the first example showed.

    Authoring

    What many still don’t know is that this feature is extensible: the Spring API provides you with the means to write your own namespace. Many framework providers should take advantage of this and provide their own namespaces, so that integrating their product with Spring becomes easier. Some already do: CXF with its many namespaces comes to mind, but there are probably others I don’t know of.

    Creating your own namespace is a 4-step process: 2 steps deal with XML validation, the other two with creating the bean itself. To illustrate the process, I will use a simple example: a schema for EhCache, Hibernate’s default caching engine.

    The underlying bean factory will be the existing EhCacheFactoryBean. As such, it won’t be as useful as a real feature but it will let us focus on the true authoring plumbing rather than EhCache implementation details.

    Creating the schema

    Creating the schema is about describing the XML syntax and, more importantly, its restrictions. I want my XML to look something like the following:

    <ehcache:cache id="myCache" eternal="true" cacheName="foo"
      maxElementsInMemory="5" maxElementsOnDisk="2" overflowToDisk="false"
      diskExpiryThreadIntervalSeconds="18" diskPersistent="true" timeToIdle="25" timeToLive="50"
      memoryStoreEvictionPolicy="FIFO">
      <ehcache:manager ref="someManagerRef" />
    </ehcache:cache>
    

    Since I won’t presume to teach anyone about XML, here’s the schema. Just notice the namespace declaration:

    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
      targetNamespace="http://blog.frankel.ch/spring/ehcache" xmlns="http://blog.frankel.ch/spring/ehcache"
      elementFormDefault="qualified">
      <xsd:complexType name="cacheType">
        <xsd:sequence maxOccurs="1" minOccurs="0">
          <xsd:element name="manager" type="managerType" />
        </xsd:sequence>
        <xsd:attribute name="id" type="xsd:string" />
        <xsd:attribute name="cacheName" type="xsd:string" />
        <xsd:attribute name="diskExpiryThreadIntervalSeconds" type="xsd:int" />
        <xsd:attribute name="diskPersistent" type="xsd:boolean" />
        <xsd:attribute name="eternal" type="xsd:boolean" />
        <xsd:attribute name="maxElementsInMemory" type="xsd:int" />
        <xsd:attribute name="maxElementsOnDisk" type="xsd:int" />
        <xsd:attribute name="overflowToDisk" type="xsd:boolean" />
        <xsd:attribute name="timeToLive" type="xsd:int" />
        <xsd:attribute name="timeToIdle" type="xsd:int" />
        <xsd:attribute name="memoryStoreEvictionPolicy" type="memoryStoreEvictionPolicyType" />
      </xsd:complexType>
      <xsd:simpleType name="memoryStoreEvictionPolicyType">
        <xsd:restriction base="xsd:string">
          <xsd:enumeration value="LRU" />
          <xsd:enumeration value="LFU" />
          <xsd:enumeration value="FIFO" />
        </xsd:restriction>
      </xsd:simpleType>
      <xsd:complexType name="managerType">
        <xsd:attribute name="ref" type="xsd:string" />
      </xsd:complexType>
      <xsd:element name="cache" type="cacheType" />
    </xsd:schema>
    

    And for those who, like me, prefer a graphic display:

    Mapping the schema

    The schema creation is only the first part: now we have to make Spring aware of it. Creating the file META-INF/spring.schemas and writing the following line in it is enough:

    http\://blog.frankel.ch/spring/schema/custom.xsd=ch/frankel/blog/spring/authoring/custom.xsd
    

    Just take care to insert the backslash, otherwise it won’t work. This maps the schema location declared in the XML to the real file that will be used from inside the jar!

    Before going further, and for the more curious, just notice that in spring-beans.jar (v3.0), there’s such a file. Here is its content:

    http\://www.springframework.org/schema/beans/spring-beans-2.0.xsd=org/springframework/beans/factory/xml/spring-beans-2.0.xsd
    http\://www.springframework.org/schema/beans/spring-beans-2.5.xsd=org/springframework/beans/factory/xml/spring-beans-2.5.xsd
    http\://www.springframework.org/schema/beans/spring-beans-3.0.xsd=org/springframework/beans/factory/xml/spring-beans-3.0.xsd
    http\://www.springframework.org/schema/beans/spring-beans.xsd=org/springframework/beans/factory/xml/spring-beans-3.0.xsd
    http\://www.springframework.org/schema/tool/spring-tool-2.0.xsd=org/springframework/beans/factory/xml/spring-tool-2.0.xsd
    http\://www.springframework.org/schema/tool/spring-tool-2.5.xsd=org/springframework/beans/factory/xml/spring-tool-2.5.xsd
    http\://www.springframework.org/schema/tool/spring-tool-3.0.xsd=org/springframework/beans/factory/xml/spring-tool-3.0.xsd
    http\://www.springframework.org/schema/tool/spring-tool.xsd=org/springframework/beans/factory/xml/spring-tool-3.0.xsd
    http\://www.springframework.org/schema/util/spring-util-2.0.xsd=org/springframework/beans/factory/xml/spring-util-2.0.xsd
    http\://www.springframework.org/schema/util/spring-util-2.5.xsd=org/springframework/beans/factory/xml/spring-util-2.5.xsd
    http\://www.springframework.org/schema/util/spring-util-3.0.xsd=org/springframework/beans/factory/xml/spring-util-3.0.xsd
    http\://www.springframework.org/schema/util/spring-util.xsd=org/springframework/beans/factory/xml/spring-util-3.0.xsd
    

    This calls for a few remarks:

    • Spring eats its own dogfood (that’s nice to know)
    • I didn’t look into the code, but I think this is why XML validation of Spring’s bean files never complains about not finding the schemas over the Internet (a real pain in production environments because of firewall security issues): the XSDs are looked up inside the jar
    • If you don’t specify the version of the Spring schema you use (2.0, 2.5, 3.0, etc.), Spring will automatically upgrade it for you with each major/minor version of the jar. If you want this behaviour, fine; if not, you’ll have to specify the version

    Creating the parser

    The previous steps are only meant to validate the XML, so that the eternal attribute takes a boolean value, for example. We still haven’t wired our namespace into the Spring factory: this is the goal of this step.

    The first thing to do is create a class that implements org.springframework.beans.factory.xml.BeanDefinitionParser. Looking at its hierarchy, org.springframework.beans.factory.xml.AbstractSimpleBeanDefinitionParser seems a good entry point since:

    • the XML is not overly complex
    • there will be a single bean definition

    Here’s the code:

    public class EhCacheBeanDefinitionParser extends AbstractSimpleBeanDefinitionParser {
    
      private static final List<String> PROP_TAG_NAMES;

      static {
        PROP_TAG_NAMES = new ArrayList<String>();
        PROP_TAG_NAMES.add("eternal");
        PROP_TAG_NAMES.add("cacheName");
        PROP_TAG_NAMES.add("maxElementsInMemory");
        PROP_TAG_NAMES.add("maxElementsOnDisk");
        PROP_TAG_NAMES.add("overflowToDisk");
        PROP_TAG_NAMES.add("diskExpiryThreadIntervalSeconds");
        PROP_TAG_NAMES.add("diskPersistent");
        PROP_TAG_NAMES.add("timeToLive");
        PROP_TAG_NAMES.add("timeToIdle");
      }
    
      @Override
      protected Class<?> getBeanClass(Element element) {
        return EhCacheFactoryBean.class;
      }
    
      @Override
      protected boolean shouldGenerateIdAsFallback() {
        return true;
      }
    
      @Override
      protected void doParse(Element element, ParserContext parserContext, BeanDefinitionBuilder builder) {
        for (String name : PROP_TAG_NAMES) {
          String value = element.getAttribute(name);
          if (StringUtils.hasText(value)) {
            builder.addPropertyValue(name, value);
          }
        }
        NodeList nodes = element.getElementsByTagNameNS("http://blog.frankel.ch/spring/ehcache", "manager");
        if (nodes.getLength() > 0) {
          builder.addPropertyReference("cacheManager",
            nodes.item(0).getAttributes().getNamedItem("ref").getNodeValue());
        }
        String msep = element.getAttribute("memoryStoreEvictionPolicy");
        if (StringUtils.hasText(msep)) {
          MemoryStoreEvictionPolicy policy = MemoryStoreEvictionPolicy.fromString(msep);
          builder.addPropertyValue("memoryStoreEvictionPolicy", policy);
        }
      }
    }
    

    That deserves some explanations. The static block fills the list of valid attribute names. The getBeanClass() method returns the class that will be used, either directly as a bean or as a factory. The shouldGenerateIdAsFallback() method tells Spring that, when no id is supplied in the XML, it should generate one. That makes it possible to create pseudo-anonymous beans (no bean is really anonymous in the Spring factory).

    The real magic happens in the doParse() method: it just adds every simple property it finds to the builder. There are two interesting properties though: cacheManager and memoryStoreEvictionPolicy.

    The former, should it exist, is a reference to another bean. Therefore, it should be added to the builder not as a value but as a reference. Of course, the code doesn’t check whether the developer declared the cache manager as an anonymous bean inside the ehcache element, but the schema validation already took care of that.

    The latter just uses the string value to get the real object behind it and adds it as a property to the builder. Likewise, since the value is enumerated in the schema, exceptions caused by bad syntax cannot happen.

    Registering the parser

    The last step is to register the parser in Spring. First, you just have to create a class that extends org.springframework.beans.factory.xml.NamespaceHandlerSupport and registers the parser under the XML tag name in its init() method:

    public class EhCacheNamespaceHandler extends NamespaceHandlerSupport {
      public void init() {
        registerBeanDefinitionParser("cache", new EhCacheBeanDefinitionParser());
      }
    }
    

    Should you have more parsers, just register them in the same method under each tag name.

    Second, just map the formerly created namespace to the newly created handler in a file META-INF/spring.handlers:

    http\://blog.frankel.ch/spring/ehcache=ch.frankel.blog.spring.authoring.ehcache.EhCacheNamespaceHandler
    

    Notice that you map the declared schema file to the real schema but the namespace to the handler.
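
    To close the loop, here is what a bean definition file using the new namespace could look like; notice how xsi:schemaLocation pairs the namespace with the schema location mapped in spring.schemas:

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:ehcache="http://blog.frankel.ch/spring/ehcache"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans.xsd
        http://blog.frankel.ch/spring/ehcache
        http://blog.frankel.ch/spring/schema/custom.xsd">
      <ehcache:cache id="myCache" cacheName="foo" eternal="true"
        memoryStoreEvictionPolicy="FIFO" />
    </beans>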

    Conclusion

    Now, when faced with overly verbose bean configurations, you have the option to use this nifty 4-step technique to simplify them. It is of course more oriented toward product providers, but can be used by projects too, provided the time taken to author a namespace is a real gain over normal bean definitions.

    You will find the sources for this article here, in Maven/Eclipse format.

    To go further:

    • Spring authoring: version 2.0 is enough since nothing changed much (at all?) with following versions
    • Spring's Javadoc relative to authoring
  • Spring can inject Servlets too!

    In this article, I will show you that Spring’s dependency injection mechanism is not restricted to Spring-managed beans: Spring can inject its beans into objects created with the new keyword, servlets instantiated by the servlet container, and pretty much anything you like. Spring’s classical mode is to act as an object factory: Spring creates everything in your application, using the provided constructors. Yet, some of the objects you use live outside Spring’s perimeter. Two simple examples:

    • servlets are instantiated by the servlet container. As such, they cannot be injected out-of-the-box
    • some business objects are not parameterized in Spring but rather created by your own code with the new keyword

    Both examples show that you can’t delegate every object instantiation to Spring.

    There was a time when I foolishly thought Spring was a closed container. Either your beans were managed by Spring, or they weren’t: if they were, you could inject them with other Spring-managed beans; if they weren’t, tough luck! Well, this is dead wrong. Spring can inject its beans into pretty much anything, provided you’re okay with using AOP.

    In order to do this, there are only 2 steps to take:

    Use AOP

    It is done in your Spring configuration file.

    <?xml version="1.0" encoding="UTF-8"?>
    <beans xmlns="http://www.springframework.org/schema/beans"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns:context="http://www.springframework.org/schema/context"
      xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
        http://www.springframework.org/schema/context
    	http://www.springframework.org/schema/context/spring-context-2.5.xsd">
      <!-- This does the magic -->
      <context:spring-configured />
      <!-- These are for classical annotation configuration -->
      <context:annotation-config />
      <context:component-scan base-package="ch.frankel.blog.spring.outcontainer" />
    </beans>
    

    You also need to configure an aspect engine to weave the compiled bytecode. In this case, it is AspectJ, the AOP engine used by Spring. Since I’m using Maven as my build tool of choice, this is easily done in my POM; Ant users will have to do it in their build.xml:

    <project xmlns="http://maven.apache.org/POM/4.0.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
        http://maven.apache.org/maven-v4_0_0.xsd">
    ...
      <properties>
        <spring-version>2.5.6.SEC01</spring-version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-aop</artifactId>
          <version>${spring-version}</version>
        </dependency>
        <dependency>
          <groupId>org.springframework</groupId>
          <artifactId>spring-aspects</artifactId>
          <version>${spring-version}</version>
        </dependency>
        <dependency>
          <groupId>aspectj</groupId>
          <artifactId>aspectjrt</artifactId>
          <version>1.5.4</version>
          <scope>test</scope>
        </dependency>
      </dependencies>
      <build>
        <plugins>
          <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <configuration>
              <complianceLevel>1.5</complianceLevel>
              <aspectLibraries>
                <aspectLibrary>
                  <groupId>org.springframework</groupId>
                  <artifactId>spring-aspects</artifactId>
                </aspectLibrary>
              </aspectLibraries>
            </configuration>
            <executions>
              <execution>
                <goals>
                  <goal>compile</goal>
                </goals>
              </execution>
            </executions>
          </plugin>
        </plugins>
      </build>
    </project>
    

    Configure which object creation to intercept

    This is done with the @org.springframework.beans.factory.annotation.Configurable annotation on your injectable object.

    @Configurable
    public class DomainObject {
    
      /** The object to be injected by Spring. */
      private Injectable injectable;
    
      public Injectable getInjectable() {
        return injectable;
      }
    
      @Autowired
      public void setInjectable(Injectable injectable) {
        this.injectable = injectable;
      }
    }
    

    Now, with only these few lines of configuration (no code!), I’m able to inject Spring-managed beans into my domain objects. I leave it to you to implement the same with regular servlets (which are much harder to demonstrate in a unit test).
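
    As an illustration, here is a TestNG sketch of such a check; the context file name is hypothetical and AbstractTestNGSpringContextTests comes from Spring’s test support:

    import static org.testng.Assert.assertNotNull;

    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.testng.AbstractTestNGSpringContextTests;
    import org.testng.annotations.Test;

    @ContextConfiguration(locations = "classpath:applicationContext.xml")
    public class DomainObjectTest extends AbstractTestNGSpringContextTests {

      @Test
      public void domainObjectIsInjectedAfterNew() {
        // Plain new, no factory and no context lookup involved
        DomainObject domain = new DomainObject();
        // The woven aspect injected the Spring bean right after construction
        assertNotNull(domain.getInjectable());
      }
    }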

    You can find the Maven project used for this article here. The packaged unit test shows the process described above.


  • Top Eclipse plugins I wouldn't go without

    Using an IDE for development is necessary today, but any IDE worth its salt can be enhanced with additional features. NetBeans, IntelliJ IDEA and Eclipse all have this kind of mechanism. In this article, I will mention the Eclipse plugins I couldn’t develop without and advocate for each of them.

    m2eclipse

    Maven has been my build tool of choice for about 2 years. It adds some very nice features compared to Ant, mainly dependency management, inheritance and variable filtering. Configuring the POM gets kind of hard once you’ve reached a fairly high number of lines. The Sonatype m2eclipse plugin (formerly hosted by Codehaus) gives you a tab-oriented view of every aspect of the POM:

    • An Overview tab neatly grouped into: Artifact, Parent, Properties, Modules, Project, Organization, SCM, Issue Management and Continuous Integration,

      m2eclipse Overview tab

    • A Dependencies tab for managing (guess what) dependencies and dependency management. For each of the former, you can even exclude dependent artifacts. This tab is mostly filled at the start of the project, since its information shouldn’t change during the lifecycle,
    • A Repositories tab to deal with repositories, plugin repositories, distribution, site and relocation (an often underused feature that enables you to change an artifact’s location without breaking builds, a.k.a. a level of indirection),
    • A Build tab for customizing Maven default folders (a usually very bad idea),
    • A Plugins tab to configure and/or execute Maven plugins. This is one of the most important tabs, since it’s here you will configure maven-compiler-plugin to use Java 6, and such,
    • A Report tab to manage the reporting section of the POM,
    • A Profiles tab to cope with profiles,
    • A Team tab to fill out team-oriented data such as developers and contributors information,
    • The most useful and important tab (according to me) graphically displays the dependency tree. Even better, each scope is represented in a different way and you can filter out unwanted scopes.

      m2eclipse Dependencies tab

    • Last but not least, the last tab enables you to directly edit the underlying XML.

    Moreover, m2eclipse adds a new Maven build Run configuration that is the equivalent of the command line:

    m2eclipse Run Configuration

    With this, you can easily configure the -X option (Debug Output) or the -Dmaven.test.skip option (Skip Tests).

    More importantly, you can set up the plugin to resolve dependencies from within the workspace during Eclipse builds; that is, instead of using your repository classpath, Eclipse will use the project’s classpath (provided it is in the desired version). It removes the need to rebuild an artifact each time it is modified just so a dependent project compiles after the change. It merely duplicates the legacy Eclipse project dependency management.

    I advise against using Resolve Workspace artifacts in the previous Run configuration, because it would trigger this same behaviour. In a Maven build, I want to distance myself from the IDE and use only the tool’s own features.

    TestNG plugin

    For those who don’t know TestNG: it is very similar to JUnit 4. It was the first to bring Java 5 annotations (even before JUnit), so I adopted the tool. As to why I keep it even though JUnit 4 also uses annotations: it has one important feature JUnit lacks. You can make a test method depend on another, so as to develop test scenarios. I know this is not pure unit testing anymore; still, I like using some scenarios in my test packages in order to detect build breaks as early as possible.

    FYI, Maven knows about TestNG and runs TestNG tests as readily as JUnit ones.

    The TestNG plugin for Eclipse does everything the integrated JUnit plugin does, whether regarding configuration, runs or test results.

    TestNG Plugin Run configuration

    Emma

    When developing with tests, one should know one’s code coverage. I used to use the Cobertura Maven plugin: I configured it in the POM and, every once in a while, ran a simple mvn cobertura:cobertura. Unfortunately, it is not very convenient. I searched for a Cobertura Eclipse plugin; alas, there’s none. However, I found the EclEmma Eclipse plugin, which brings the same functionality. It uses Emma (an open-source code coverage tool) under the hood and, though I searched thoroughly, Emma has no Maven 2 plugin. Since I value IDE code coverage during development and Maven code coverage during nightly builds (on a Continuous Integration infrastructure) equally, you’re basically stuck with 2 different products. So?

    EclEmma line highlighting

    EclEmma provides a 4th run button (in addition to Run, Debug and External Tools) that launches the desired run configuration (JUnit, TestNG or what have you) in enhanced mode, the latter providing the code coverage feature. In the above screenshot, you can see line 20 was not run during the test.

    Even better, the plugin provides an aggregation view of code coverage percentages. This view can be broken down to project, source path, package and class levels.

    EclEmma statistics

    Spring IDE

    Spring needs no introduction. Whether it will be ousted by JEE 5 dependency injection annotations remains to be seen; plenty of projects still use Spring, and that’s a fact. Still, Spring XML configuration is very awkward in a number of cases:

    • referencing a fully qualified class name: it is neither easy nor productive to type; the same is true for properties
    • understanding complex or big configurations files
    • referencing a Spring bean in a hundred or more lines of file
    • refactoring a class name or a property name without breaking the configuration files
    • being informed of whether a class is a Spring bean in a project and if so, where it is used

    Luckily, Spring IDE provides features that make such tasks a breeze:

    • auto-completion on XML configuration files
    • graphic view of such files

      Spring IDE Graph View

    • an internal index so that refactoring takes the XML files into account (though I suspect some bugs hide somewhere, for I regularly get errors)
    • an enhanced Project Explorer view to display where a bean is used

      Spring Project Explorer View

    This entire package guarantees much better productivity with Spring XML configuration than a plain old XML editor. Of course, you can still use annotation configuration, though I’m reluctant to do so (more in a later post).

    In conclusion, these 4 integration plugins make me feel much more comfortable using their underlying tools. If I were in an environment where I couldn’t update my Eclipse however I chose, I would seriously reconsider using these tools at all (except Maven), or I would use annotation configuration for Spring. Products or frameworks can be exceptional, but they have to integrate seamlessly into your IDE(s) to really add value: think about the fate of EJB v2!

  • JMX use cases

    JMX is a Java standard shipped with the JDK since Java 5. Though it enables you to efficiently and dynamically manage your applications, JMX has seen very few production uses. In this article, I will show the benefits of such a technology through a couple of use cases.

    Manage your application’s configuration

    Even though each application has different configuration needs (one needs an initial thread count, another a URL), every application needs to be more or less parameterized. To do this, countless generations of Java developers (am I overdoing it here?) have created two components:

    • the first one is a property file where one puts the name value pairs
    • the other one is a Java class whose responsibilities are to load the properties into itself and to provide access to their values. This class should be a singleton.

    This is all well and good for initialization, but what about runtime changes to those parameters? This is where JMX comes in: with JMX, you can expose those parameters with read/write authorizations. JDK 6 provides the JConsole application, which can connect to JMX-enabled applications. Let’s take a very simple example, with a configuration having only two properties: there will be one Configuration class and one interface named ConfigurationMBean, in order to follow the JMX naming convention for Standard MBeans. This interface describes all methods available on the MBean instance:

    public interface ConfigurationMBean {
        public String getUrl();
        public int getNumberOfThread();
        public void setUrl(String url);
        public void setNumberOfThread(int numberOfThread);
    }
    

    Now, you only have to register the singleton instance of this class in your MBean server, and you have exposed your application’s configuration to the outside world with JMX!
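
    As a sketch, and assuming an arbitrary ObjectName domain, the implementation and its registration could look like this:

    import java.lang.management.ManagementFactory;
    import javax.management.ObjectName;

    public class Configuration implements ConfigurationMBean {

      private static final Configuration INSTANCE = new Configuration();

      private String url;
      private int numberOfThread;

      private Configuration() {}

      public static Configuration getInstance() {
        return INSTANCE;
      }

      public String getUrl() { return url; }
      public void setUrl(String url) { this.url = url; }
      public int getNumberOfThread() { return numberOfThread; }
      public void setNumberOfThread(int numberOfThread) { this.numberOfThread = numberOfThread; }

      // Registers this instance in the platform MBean server
      public void register() throws Exception {
        ManagementFactory.getPlatformMBeanServer().registerMBean(
          this, new ObjectName("ch.frankel.blog.jmx:type=Configuration"));
      }
    }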

    Manage your application’s Springified configuration

    In Spring, every bean configured in the Spring configuration file can be registered with the JMX server. That’s it: no need to create an MBean-suffixed interface for each class you want to expose. Spring does it for you, using the more powerful DynamicMBean and ModelMBean classes under the hood.

    By default, Spring will expose all your public properties and methods. You can still control more precisely what will be exposed through the use of:

    • metadata (@@-like comments in the Javadocs, thus decoupling your code from the Spring API),
    • Spring Java 5 annotations,
    • classical MBean interfaces referenced in the Spring’s definition file,
    • or even using MethodNameBasedMBeanInfoAssembler which describes the interface in the Spring’s definition file.

    More importantly, Spring provides your MBeans with notification provider support. This means every MBean will implement NotificationBroadcaster and thus be able to send notifications to subscribers, for example when you change a value to a property or when you call a method.

    Following is a snippet for the previous Configuration, using Spring:

    <bean id="genericCfg" class="ch.frankel.blog.jmx.GenericConfiguration" />
    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter">
        <property name="beans">
            <map>
                <entry key="bean:type=configuration,name=generic" value-ref="genericCfg" />
            </map>
        </property>
    </bean>
    

    Spring uses the map entry’s key as the JMX ObjectName when registering the bean in the MBean server.

    Change a logger’s level

    Logging has been a critical functionality since the dawn of software. Now, let’s say you are faced with the following problem: your application keeps throwing exceptions, but what is traced is not enough for the developers to diagnose. Luckily, one of the developers did put some traces, but at a very fine-grained level. Unluckily, in production you most likely log only important events, mainly exceptions, not the debug information that would be so useful here.

    The first solution is to change the log level in the configuration file and then restart the application. Ouch, that’s a very crude way, one that won’t make many people happy (depending on the criticality and availability requirements of the application).

    Another answer is to use the abilities of the logging framework itself. Log4J, for example, the legacy logging framework, provides a way to configure itself with a configuration file and to listen for changes made to this file in order to reflect them in the in-memory configuration (the static configureAndWatch() method found in both DOMConfigurator and PropertyConfigurator). This works fine if you have an external file, but what about configuration files shipped inside the archive? You can argue that web archives are often deployed in exploded mode, but you cannot rely on it.

    JMX proves handy in such a case: if your loggers’ levels were exposed, you could change them at runtime. Since JDK 1.4, Java has an API to log messages, and it offers JMX registration for free, so let’s use it. The only thing to do is create a logger for your class and, in a business method, use it to trace at FINE level. Now, using your JMX console, locate the MBean named java.util.logging:type=Logging.

    Type      | Name           | Description
    Attribute | LoggerNames    | Array of all configured logger names
    Operation | getLoggerLevel | Gets the level of a logger (the root logger is referenced by an empty string)
    Operation | setLoggerLevel | Sets the level of a logger to a specified value

    To activate the log, just set the level of the logger used by your class to the value you used in the code. To test this, I recommend creating a Spring bean from a Java class that uses the logger and exporting it to JMX with Spring (see above).
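
    A sketch of such a class could look like the following; OrderService is a made-up name:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderService {

      // The logger name will show up in the LoggerNames attribute above
      private static final Logger LOGGER = Logger.getLogger(OrderService.class.getName());

      public void placeOrder(String id) {
        // Silent with the default INFO level, visible once the logger's
        // level is set to FINE through the setLoggerLevel operation
        LOGGER.log(Level.FINE, "Placing order {0}", id);
        // business code goes here
      }
    }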

    Flush your cache

    For the data access layer, Hibernate is the most used framework at the time of this writing, and it enables caching. First-level caching (the session cache) is done within Hibernate itself; second-level caching (the cross-session cache) is delegated to a 3rd-party framework. The default is Ehcache: a very simple, yet efficient solution.

    Let’s say some of your application’s tables contain repository data, that is, data that won’t be changed by the application. Such data is by definition eligible for second-level caching. Now picture this: your application should be highly available (365/24/7) and the repositories just changed. How can you tell Ehcache to reload the tables into memory without restarting the application?

    Luckily, Ehcache provides a way to do it. The net.sf.ehcache.management.Cache class implements the net.sf.ehcache.management.CacheMBean interface, so you can call all of its methods: one such method, aptly named removeAll(), empties your cache. Just call it and Hibernate, not finding any data in the cache, will reload it from the database.

    You will perhaps object that you do not want your whole cache to be reinitialized: now you understand why you should separate your cache instances in the configuration (e.g. one per table, or one for repository data and one for updatable data).
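
    As an illustration, here is a sketch of a remote flush; the service URL and the ObjectName are hypothetical and depend on how the MBeans were registered:

    import javax.management.JMX;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import net.sf.ehcache.management.CacheMBean;

    public class CacheFlusher {

      public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:9999/jmxrmi");
        JMXConnector connector = JMXConnectorFactory.connect(url);
        try {
          MBeanServerConnection connection = connector.getMBeanServerConnection();
          ObjectName name = new ObjectName("net.sf.ehcache:type=Cache,name=repositoryCache");
          CacheMBean cache = JMX.newMBeanProxy(connection, name, CacheMBean.class);
          // Hibernate, not finding any data in the cache, will reload it from the database
          cache.removeAll();
        } finally {
          connector.close();
        }
      }
    }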

    Testing JMX

    It is legitimate to want to test JMX while developing, before bringing in the whole server infrastructure. JDK 6 provides the JConsole utility, which displays the following information about any application you can connect to (local or remote):

    • memory usage through time,
    • threads instances through time,
    • number of loaded classes through time,
    • summary of VM (including classpath and properties),
    • all your exposed MBeans.

    JConsole MBean view

    This view lets you browse your MBeans’ attributes and call their operations, so it is a very valuable tool (for free). Now try it with a legacy application of yours and notice how many MBeans are already registered. I bet you didn’t expect that!

    In order to use this tool in development mode, do not forget to launch it with the following arguments (notice the -J):

    • -J-Dcom.sun.management.jmxremote.ssl=false
    • -J-Dcom.sun.management.jmxremote.authenticate=false
    • -J-Djava.class.path="${JDK_HOME}/jdk1.6.0_10/lib/jconsole.jar;${JDK_HOME}/lib/tools.jar;${ADDITIONNAL_CLASSPATH}"

    The last argument can be omitted in most cases (though this is not the case for managing EhCache).

    You can find the sources for all the examples here.


  • Mockito's spy() method and Spring

    Mockito is a mocking framework (see Two different mocking approaches) that is an offshoot of EasyMock. Whatever the mocking framework one uses, a common feature is the ability to mock interfaces, through the JDK Proxy class. This is all well and good, but one has to explicitly mock every method one wants to use in the course of the test.

    What if I want to mock an already existing implementation where some methods provide behaviours that suit me? Today, I ran across this case: I had a legacy helper class I wanted to reuse. This class used commons-httpclient to ease the process of calling a URL. It had property accessors, like any good old POJO, that I really needed even in test scope, and a method that made the real call, using the previously set properties (such as the URL). It implemented no interface, though. Here’s what it looked like:

    public class LegacyHelper {
    
        // Various attributes
        ...
    
        // Various accessors to get/set these properties
        ...
    
        // One big method that uses external resources (very bad for Unit Testing)
        public int callUrl() {
            ...
        }
    }
    

    Although Mockito lacks exhaustive documentation (a trait shared by many Google Code projects, much to my dismay), I happened to run across the Mockito.spy() method. This magic method creates a proxy (hence the name spy) on a real object. It delegates its method calls to the proxied object unless those methods are stubbed. It means I could rely on the getters/setters doing their work while neutralizing the legacy method that broke test isolation.

    public class MyTest {
    
        // This I don't want to test but my class uses it
        private LegacyHelper helper;
    
        @BeforeMethod
        public void setUp() {
            helper = Mockito.spy(new LegacyHelper());
            // doReturn() avoids invoking the real callUrl() during stubbing,
            // which Mockito.when(helper.callUrl()) would do on a spy
            Mockito.doReturn(0).when(helper).callUrl();
        }
    
        @Test
        public void testCall() {
            // Now I can use helper without it really calling anything
            helper.callUrl();
            // Do real testing here
           ...
        }
    }
    

    This is only the first step. What if I need to provide spied objects throughout the entire application? Spring certainly helps here with the FactoryBean interface. When Spring creates a new instance, either it calls the new operator or the getObject() method if the referenced class is of type FactoryBean. Our spy factory looks like this:

    public class SpyFactoryBean implements FactoryBean {
    
        // Real or spied object
        private Object real;
    
        public void setReal(Object object) {
            real = object;
        }
    
        public boolean isSingleton() {
            return false;
        }
    
        public Class getObjectType() {
            return real.getClass();
        }
    
        public Object getObject() {
            return Mockito.spy(real);
        }
    }
    

    To use it in a Spring context file:

    <?xml version="1.0" encoding="ISO-8859-1"?>
    <beans>
        <bean id="legacyHelper" class="LegacyHelper" />
        <bean id="mockHelper" class="SpyFactoryBean" dependency-check="objects">
            <property name="real" ref="legacyHelper" />
        </bean>
    </beans>
    

    Now you’ve got a factory of spies that you can reuse across your project’s tests or, even better, ship for use in all your enterprise’s projects. The only thing left is not to forget to stub the methods that may have side effects.

  • Age of Spring

    Since its first version, Spring has known success. How is it possible that such an unknown framework (at the time) has become so widespread that companies demand Spring knowledge from candidates?

    I think there are two main reasons for this. First, the use of Inversion of Control really helps with unit testing your classes, and since unit tests have become a hot topic, it is natural that a framework that promotes independence between classes should win. But there are other IoC frameworks available. Did you hear about Guice? Yes, with a G as in Google. Not really a small-potatoes player, is it? Yet, it is a previously unknown company, Interface21 (now renamed SpringSource), that made the reference product.

    The second reason, and to me the most important, is that Java & JEE development is complex and thus costly. So, when a framework like Spring provides ways to speed up such development through integration components, it is natural that it becomes the leader in its field.

    For example, the following snippet shows how to execute a simple query on a database:

    Context ctx = new InitialContext();
    DataSource ds = (DataSource) ctx.lookup("jdbc/javadb");
    Connection con = ds.getConnection();
    PreparedStatement pstmt = con.prepareStatement("SELECT * FROM CUSTOMERS WHERE NAME = ?");
    pstmt.setString(1, "KERRY");
    ResultSet rs = pstmt.executeQuery();
    while (rs.next()) {
        Customer customer = new Customer();
        customer.setName(rs.getString("NAME"));
        customer.setFirstName(rs.getString("FIRST_NAME"));
    }
    

    The same thing can be done in Spring like this:

    RowMapper mapper = new RowMapper() {
        public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
            Customer customer = new Customer();
            customer.setName(rs.getString("NAME"));
            customer.setFirstName(rs.getString("FIRST_NAME"));
            return customer ;
        }
    };
    jdbcTemplate.queryForObject("SELECT * FROM CUSTOMERS WHERE NAME = ?", mapper, new Object[] {"KERRY"});
    

    Compare the row-mapping loop of the first snippet with the RowMapper of the second: the mapping code looks the same (yet it is not, since the Spring version is more object-oriented). On the other hand, all the lookup, connection and statement plumbing of the classic snippet is summed up in a single line in the Spring snippet! The boilerplate code is provided by Spring and executed behind the scenes.

    In conclusion, I think this integration state of mind is Spring’s biggest asset. It would be nice to see such an approach in other APIs too.

    Categories: Java Tags: jdbc, spring