PowerMock, features and use-cases

October 7th, 2012

Even if you don’t like it, your job sometimes requires you to maintain legacy applications. It happened to me (fortunately rarely), and the first thing I look for before writing so much as a single character in a legacy codebase is a unit testing harness. Most of the time, I focus on the code coverage of the part of the application I need to change and try to improve it as much as possible, even when unit tests are completely lacking.

Real trouble happens when the design isn’t state-of-the-art. Unit testing requires mocking, which only works when dependency injection (as well as initialization methods) makes classes unit-testable. This isn’t the case in some legacy applications, where private and static methods, initialization code in constructors, and even static blocks abound.

For example, consider the following code:

public class ExampleUtils {

    public static void doSomethingUseful() { ... }
}

public class CallingCode {

    public void codeThatShouldBeTestedInIsolation() {

        ExampleUtils.doSomethingUseful();

        ...
    }
}

It’s impossible to properly unit test the codeThatShouldBeTestedInIsolation() method, since it depends on an unmockable static method of another class. Of course, there are proven techniques to overcome this obstacle. One such technique is to create a “proxy” class that wraps the call to the static method, and to inject this class into the calling class like so:

public class UsefulProxy {

    public void doSomethingUsefulByProxy() {

        ExampleUtils.doSomethingUseful();
    }
}

public class CallingCodeImproved {

    private UsefulProxy proxy;

    public void codeThatShouldBeTestedInIsolation() {

        proxy.doSomethingUsefulByProxy();

        ...
    }
}

Now I can inject a mock UsefulProxy and finally test my method in isolation. There are several drawbacks to ponder, though:

  • The workaround itself provides no tests, only a way to make tests possible: at this point, you haven’t produced any value.
  • You changed code before testing it and took the risk of breaking behavior! Granted, the example implies no complexity, but such is not always the case in real-life applications.
  • You made the code more testable, but only at the price of an additional layer of complexity.

For all these reasons, I would recommend this approach only as a last resort. Worse, some designs are completely closed to simple refactoring, such as the following example, which features a static initializer:

public class ClassWithStaticInitializer {

    static { ... }
}

As soon as the ClassWithStaticInitializer class is loaded by the class loader, the static block is executed, for good or ill (in the light of unit testing, probably the latter).

My mock framework of choice is Mockito. Its designers made sure features such as static method mocking weren’t available, and I thank them for that: if I cannot use Mockito, it’s a design smell. Unfortunately, as we’ve seen, tackling legacy code may require such features. That’s when PowerMock enters the stage (and only then – needing PowerMock in a standard development process is also a sure sign the design is fishy).

With PowerMock, you can leave the initial code untouched and still write tests, so as to begin changing the code with confidence. Here’s the test code for the first legacy snippet, using Mockito and TestNG:

@PrepareForTest(ExampleUtils.class)
public class CallingCodeTest {

    private CallingCode callingCode;

    @BeforeMethod
    protected void setUp() {

        mockStatic(ExampleUtils.class);

        callingCode = new CallingCode();
    }

    @ObjectFactory
    public IObjectFactory getObjectFactory() {

        return new PowerMockObjectFactory();
    }

    @Test
    public void callingMethodShouldntRaiseException() {

        callingCode.codeThatShouldBeTestedInIsolation();

        assertEquals(getInternalState(callingCode, "i"), 1);
    }
}

There isn’t much to do, namely:

  • Annotate test classes (or individual test methods) with @PrepareForTest, which references classes or whole packages. This tells PowerMock to allow byte-code manipulation of those classes; the actual mocking instruction happens in the next step.
  • Mock the desired methods with the available palette of mockXXX() methods, as shown in the sketch below.
  • Provide the object factory in a method that returns IObjectFactory and is annotated with @ObjectFactory.
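
For example, here’s a minimal sketch of what the setup and verification can look like with PowerMockito (assuming static imports of mockStatic() and verifyStatic()):

// Tell PowerMock to intercept all static calls on the class
mockStatic(ExampleUtils.class);

callingCode.codeThatShouldBeTestedInIsolation();

// Verify that the static method was actually invoked
verifyStatic();
ExampleUtils.doSomethingUseful();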

Also note that with the help of the Whitebox class, we can access a class’s internal state (i.e. private variables). Even though this is bad practice, the alternative – taking chances with legacy code without a test harness – is worse: remember, our goal is to lessen the chance of introducing new bugs.

The list of PowerMock features available for Mockito can be found here. Note that suppressing static blocks is not possible with TestNG right now.

You can find the sources for this article here in Maven format.

Categories: Java

Use local resources when validating XML

September 30th, 2012

Depending on your enterprise security policy, some – if not most – of your middleware servers have no access to the Internet. It’s even worse when your development infrastructure is isolated from the Internet (as in banks or security companies). In this case, validating your XML against schemas becomes a real nightmare.

Of course, you could set the XML schema location to a location on your hard drive. But what about your co-workers? They would need the schema in exactly the same filesystem location, and that still wouldn’t solve the problem in the production environment…

XML catalogs

XML catalogs are the nominal solution to this quandary. In effect, you plug in a resolver that knows how to map an online location to a location on your filesystem (or, more precisely, any location to another one). The resolver is configured through the xml.catalog.files system property, which may differ for each developer and in the production environment. A catalog file looks like this (the mapping below is a reconstructed example):

<?xml version="1.0"?>
<catalog xmlns="urn:oasis:names:tc:entity:xmlns:xml:catalog">
    <!-- Hypothetical mapping: an online schema location to a local file -->
    <system systemId="http://blog.frankel.ch/xml/schema/person.xsd" uri="person.xsd"/>
</catalog>

Now, when validation happens, instead of looking up the online schema URL location, it looks up the local schema.
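
As a sketch, the property can be set per environment, e.g. programmatically before parsing (the path below is hypothetical):

// Each developer/environment points to its own catalog location
System.setProperty("xml.catalog.files", "/path/to/local/catalog.xml");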

How-to

XML catalogs can be used with JAXP, whether SAX, DOM or StAX. The following details how to use them with SAX.

SAXParserFactory factory = SAXParserFactory.newInstance();

factory.setNamespaceAware(true);
factory.setValidating(true);

XMLReader reader = factory.newSAXParser().getXMLReader();

// Set XSD as the schema language
reader.setProperty("http://java.sun.com/xml/jaxp/properties/schemaLanguage", "http://www.w3.org/2001/XMLSchema");

// Use XML catalogs
reader.setEntityResolver(new CatalogResolver());

InputStream stream = getClass().getClassLoader().getResourceAsStream("person.xml");

reader.parse(new InputSource(stream));

Two lines are important:

  • The setEntityResolver() call tells the reader to use XML catalogs
  • Almost as important: we absolutely have to parse with the XMLReader obtained from the parser, not with the SAX parser itself, or validation will be done online!

Use cases

There are basically two use cases for XML catalogs:

  1. As seen above, the first use case is to validate XML files against cached local schema files
  2. In addition, XML catalogs can also be used as an alternative to LSResourceResolver (as seen previously in XML validation with imported/included schemas)

Important note

Beware: the CatalogResolver class shipped with the JDK lives in an internal com.sun package; the Apache XML resolver library does the same job. In fact, the internal class is just a repackaging of the Apache one.

If you use Maven, the dependency is the following:

<dependency>
    <groupId>xml-resolver</groupId>
    <artifactId>xml-resolver</artifactId>
    <version>1.2</version>
</dependency>

Beyond catalogs

Catalogs are a standard, filesystem-based way to map an online schema location to a local one.

However, this isn’t always the best strategy. For example, Spring provides its various schemas inside JARs, alongside classes, and validation is done against those schemas.

In order to do the same, instead of a CatalogResolver, set your own implementation of EntityResolver in the setEntityResolver() call above.
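
Here’s a minimal sketch of such a resolver, assuming the schemas are packaged at the root of the classpath (the class name is mine, not a library’s):

public class ClasspathEntityResolver implements EntityResolver {

    @Override
    public InputSource resolveEntity(String publicId, String systemId) throws SAXException, IOException {

        // Keep only the file name and look it up on the classpath
        String resource = systemId.substring(systemId.lastIndexOf('/') + 1);

        InputStream stream = getClass().getClassLoader().getResourceAsStream(resource);

        // Returning null falls back to the default (online) resolution
        return stream == null ? null : new InputSource(stream);
    }
}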

You can find the source for this article here (note that associated tests use a custom security policy file to prevent network access, and if you want to test directly inside your IDE, you should reuse the system properties used inside the POM).

Categories: Java

Why I enrolled in an online Scala course

September 23rd, 2012

When I heard that the Coursera online platform offered free Scala courses, I jumped at the opportunity.

Here are some reasons why:

  • Over the years, I’ve slowly become convinced that whatever language you program in professionally, learning new languages is an asset, as it changes the way you design your code.
    For example, the excellent LambdaJ library gave me an excellent overview of how functional programming can be leveraged to ease the manipulation of collections in Java.
  • Despite my pessimistic predictions about how far Scala can penetrate mainstream enterprises, I still see the language as a strong asset in small companies with high-level developers. I do not wish to be left out of this potential market.
  • The course is offered by Martin Odersky himself, meaning I get knowledge directly from the language’s creator. I guess one cannot hope for a better teacher than him.
  • Being a developer means you have to keep in touch with where the world is going, and the current trend is a revolution every day. Think about how nobody spoke about Big Data 3 or 4 years ago, or how at the time you developed your UI layer with Flex. It’s amazing how things are changing, faster and faster. You’d better keep the rhythm…
  • The course is free!
  • There’s the course, of course, but there’s also a weekly assignment, each with its own score. Those assignments fill the gap of most online courses, where you lose motivation with each passing day: here, the regular challenge ensures a longer commitment.
  • Given that some of my colleagues have also enrolled in the course, there’s some level of competition (friendly, of course). This is most elating and pushes each of us not only to search for a solution, but to spend time looking for the most elegant one.
  • The final reason is that I’m a geek. I love learning new concepts and new ways to do things. In this case, the concept is functional programming and the way is Scala. Other alternatives are available: Haskell, Lisp, Erlang or F#. For me, Scala has the advantage of running on the JVM.

In conclusion, I cannot recommend enough that you do likewise; there are so many reasons to choose from!

Note: I also just stumbled upon this Git kata, an area where I also have considerable progress to make.

Categories: Development

Web Services: JAX-WS vs Spring

September 15th, 2012

In my endless search for the best way to develop applications, I’ve recently been interested in web services in general and contract-first web services in particular. Web services are called contract-first when the WSDL is designed first and classes are generated from it. They are acknowledged to be the most interoperable kind of web services, since the WSDL is agnostic of the underlying technology.

In the past, I’ve been using Axis2 and then CXF but now, JavaEE provides us with the power of JAX-WS (which is aimed at SOAP, JAX-RS being aimed at REST). There’s also a Spring Web Services sub-project.

My first goal was to check how easy it was to inject through Spring in both technologies but during the course of my development, I came across other comparison areas.

Overview

With Spring Web Services, publishing a new service is a 3-step process:

  1. Add the Spring MessageDispatcherServlet in the web deployment descriptor.
    <context-param>
        <param-name>contextConfigLocation</param-name>
        <param-value>classpath:/applicationContext.xml</param-value>
    </context-param>
    <servlet>
        <servlet-name>spring-ws</servlet-name>
        <servlet-class>org.springframework.ws.transport.http.MessageDispatcherServlet</servlet-class>
        <init-param>
            <param-name>transformWsdlLocations</param-name>
            <param-value>true</param-value>
        </init-param>
        <init-param>
            <param-name>contextConfigLocation</param-name>
            <param-value>classpath:/spring-ws-servlet.xml</param-value>
        </init-param>
    </servlet>
    <servlet-mapping>
        <servlet-name>spring-ws</servlet-name>
        <url-pattern>/spring/*</url-pattern>
    </servlet-mapping>
    <listener>
        <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class>
    </listener>
  2. Create the web service class and annotate it with @Endpoint. Annotate the relevant service methods with @PayloadRoot. Bind the method parameters with @RequestPayload and the return type with @ResponsePayload.
    @Endpoint
    public class FindPersonServiceEndpoint {
    
        private final PersonService personService;
    
        public FindPersonServiceEndpoint(PersonService personService) {
    
            this.personService = personService;
        }
    
        @PayloadRoot(localPart = "findPersonRequestType", namespace = "http://blog.frankel.ch/ws-inject")
        @ResponsePayload
        public FindPersonResponseType findPerson(@RequestPayload FindPersonRequestType parameters) {
    
            return new FindPersonDelegate().findPerson(personService, parameters);
        }
    }
    
  3. Configure the web service class as a Spring bean in the relevant Spring beans definition file.
    <!-- Reconstructed declaration: the package name is an assumption -->
    <bean class="ch.frankel.blog.wsinject.FindPersonServiceEndpoint">
        <constructor-arg>
            <ref bean="personService"/>
        </constructor-arg>
    </bean>


For JAX-WS (Metro implementation), the process is very similar:

  1. Add the WSServlet in the web deployment descriptor:
    <servlet>
        <servlet-name>jaxws-servlet</servlet-name>
        <servlet-class>com.sun.xml.ws.transport.http.servlet.WSServlet</servlet-class>
    </servlet>
    <servlet-mapping>
        <servlet-name>jaxws-servlet</servlet-name>
        <url-pattern>/jax/*</url-pattern>
    </servlet-mapping>
    <listener>
        <listener-class>com.sun.xml.ws.transport.http.servlet.WSServletContextListener</listener-class>
    </listener>
  2. Create the web service class:
    @WebService(endpointInterface = "ch.frankel.blog.wsinject.jaxws.FindPersonPortType")
    public class FindPersonSoapImpl extends SpringBeanAutowiringSupport implements FindPersonPortType {
    
        @Autowired
        private PersonService personService;
    
        @Override
        public FindPersonResponseType findPerson(FindPersonRequestType parameters) {
    
            return new FindPersonDelegate().findPerson(personService, parameters);
        }
    }
  3. Finally, we also have to configure the service, this time in the standard Metro configuration file (sun-jaxws.xml):
    <!-- Reconstructed: the endpoint name is an assumption; class and URL come from the article -->
    <endpoints xmlns="http://java.sun.com/xml/ns/jax-ws/ri/runtime" version="2.0">
        <endpoint name="findPerson"
                  implementation="ch.frankel.blog.wsinject.impl.FindPersonSoapImpl"
                  url-pattern="/jax/findPerson"/>
    </endpoints>

Creating a web service in Spring or JAX-WS requires the same number of steps of equal complexity.

Code generation

In both cases, we need to generate Java classes from the Web Service Description Language (WSDL) file. This is completely independent of the chosen technology.
However, whereas JAX-WS uses all the generated Java classes, Spring WS only uses the ones that map to WSDL types and elements: wiring the WS call to the correct endpoint is achieved by mapping the request and response types.
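
As an illustration, here’s what a wsimport configuration sketch could look like on the JAX-WS side (the plugin version and WSDL path are assumptions, not the article’s POM):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>jaxws-maven-plugin</artifactId>
    <version>1.12</version>
    <executions>
        <execution>
            <goals>
                <goal>wsimport</goal>
            </goals>
            <configuration>
                <!-- Hypothetical location of the contract-first WSDL -->
                <wsdlDirectory>src/main/resources/wsdl</wsdlDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>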

The web service itself

Creating the web service in JAX-WS is just a matter of implementing the port type interface, which contains the service method.
In Spring WS, the service class has to be annotated with @Endpoint to be recognized as a service class.

URL configuration

In JAX-WS, the sun-jaxws.xml file syntax lets us configure very finely how each URL is mapped to a particular web service.
In Spring WS, no such configuration is available.
Since I’d rather have an overview of the different URLs, my preference goes to JAX-WS.

Spring dependency injection

Injecting a bean into a Spring WS endpoint is very easy, since the service is already a Spring bean declared in the beans definition file.
On the contrary, injecting a Spring bean requires our JAX-WS implementation to inherit from SpringBeanAutowiringSupport, which prevents us from having our own class hierarchy. Also, explicit XML wiring is then impossible.
It’s easier to get dependency injection with Spring WS (but that was expected).

Exposing the WSDL

Both JAX-WS and Spring WS are able to expose a WSDL. In order to do so, JAX-WS uses the generated classes: as such, the exposed WSDL is the same as the one we designed in the first place.
On the contrary, Spring WS provides us with 2 options:

  • either use the static WSDL, in which case it only replaces the domain and port in the soap:address section
  • or generate a dynamic WSDL from XSD files, in which case the designed WSDL and the generated one aren’t the same (see the sketch after this list)
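
For the second option, a dynamic WSDL can be sketched with a DefaultWsdl11Definition bean (the bean ids and values below are assumptions based on the article’s example):

<bean id="findPerson" class="org.springframework.ws.wsdl.wsdl11.DefaultWsdl11Definition">
    <property name="schemaCollection" ref="personSchema"/>
    <property name="portTypeName" value="FindPersonPortType"/>
    <property name="locationUri" value="/spring/"/>
    <property name="targetNamespace" value="http://blog.frankel.ch/ws-inject"/>
</bean>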

Integration testing

Spring WS has one feature that JAX-WS lacks: integration testing. A test class configured with the Spring bean definition file(s) can be created to assert output messages against known inputs. Here’s an example of such a test, based on both Spring test and TestNG:

@ContextConfiguration(locations = { "classpath:/spring-ws-servlet.xml", "classpath:/applicationContext.xml" })
public class FindPersonServiceEndpointIT extends AbstractTestNGSpringContextTests {

    @Autowired
    private ApplicationContext applicationContext;

    private MockWebServiceClient client;

    @BeforeMethod
    protected void setUp() {

        client = MockWebServiceClient.createClient(applicationContext);
    }

    @Test
    public void findRequestPayloadShouldBeSameAsExpected() throws DatatypeConfigurationException {

        int id = 5;

        // The payload markup below is a reconstruction: element names and
        // namespace follow the article's findPerson example
        String request = "<findPersonRequestType xmlns=\"http://blog.frankel.ch/ws-inject\">"
            + id + "</findPersonRequestType>";

        GregorianCalendar calendar = new GregorianCalendar();

        XMLGregorianCalendar xmlCalendar = DatatypeFactory.newInstance().newXMLGregorianCalendar(calendar);

        xmlCalendar.setHour(FIELD_UNDEFINED);
        xmlCalendar.setMinute(FIELD_UNDEFINED);
        xmlCalendar.setSecond(FIELD_UNDEFINED);
        xmlCalendar.setMillisecond(FIELD_UNDEFINED);

        String expectedResponse = "<findPersonResponseType xmlns=\"http://blog.frankel.ch/ws-inject\">"
            + id
            + "<firstName>John</firstName><lastName>Doe</lastName>"
            + xmlCalendar.toXMLFormat()
            + "</findPersonResponseType>";

        Source requestPayload = new StringSource(request);

        Source expectedResponsePayload = new StringSource(expectedResponse);

        client.sendRequest(withPayload(requestPayload)).andExpect(payload(expectedResponsePayload));
    }
}

Note that I encountered problems regarding XML prefixes, since not only is the namespace checked, but also the prefix name itself (which is a terrible idea).

Included and imported schema resolution

On one hand, JAX-WS inherently resolves included/imported schemas without a glitch.
On the other hand, we need to add a specific bean to the Spring context in order to do the same with Spring WS. The snippet below is a reconstruction; the bean id and XSD path are assumptions:

<bean id="personSchema" class="org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection">
    <property name="xsds">
        <list>
            <value>/person.xsd</value>
        </list>
    </property>
    <property name="inline" value="true"/>
</bean>

Miscellaneous

JAX-WS provides an overview of all published services at the root of the JAX-WS servlet mapping:

Adress Informations
Service name : {http://impl.wsinject.blog.frankel.ch/}FindPersonSoapImplService
Port name : {http://impl.wsinject.blog.frankel.ch/}FindPersonSoapImplPort
Adress : http://localhost:8080/wsinject/jax/findPerson
WSDL: http://localhost:8080/wsinject/jax/findPerson?wsdl
Implementation class : ch.frankel.blog.wsinject.impl.FindPersonSoapImpl

Conclusion

Considering contract-first web services, my little experience has driven me to choose JAX-WS over Spring WS: the testing benefits do not outweigh the ease of use of the standard. I must admit I was a little surprised by these results, since most Spring components are easier to use and configure than their standard counterparts, but the results are here.

You can find the sources for this article here.

Note: the JAX-WS version used is 2.2, so the Maven POM is rather convoluted in order to override the JAX-WS 2.1 classes native to Java 6.

Categories: JavaEE

Lessons learned from integrating jBPM 4 with Spring

September 9th, 2012

When I was tasked with integrating a process engine into one of my projects, I quickly decided in favor of Activiti. Activiti is the next version of jBPM 4, is compatible with BPMN 2.0, is well documented and has an out-of-the-box module to integrate with Spring. Unfortunately, in a cruel stroke of fate, I was overruled by my hierarchy (for some petty reason I dare not write here) and I had to use jBPM. This article tries to list all the lessons I learned in this rather epic journey.

Lesson 1: jBPM documentation is not enough

Although jBPM 4 has plenty of available documentation, when you’re faced with the task of starting from scratch, it’s not enough, especially when compared to Activiti’s own documentation. For example, there’s no hint on how to bootstrap jBPM in a Spring context.

Lesson 2: it’s not because there’s no documentation that the feature doesn’t exist

jBPM 4 wraps all the components needed to bootstrap jBPM through Spring, even though the documentation stays strangely silent on how to do so. Google is no help here, since everybody seems to have inferred their own solution. For the curious, here’s how I finally declared the jBPM configuration bean:

<!-- Reconstructed snippet: the bean id is an assumption -->
<bean id="jbpmConfiguration" class="org.jbpm.pvm.internal.cfg.SpringConfiguration">
    <constructor-arg value="jbpm.cfg.xml"/>
</bean>

At first, I was afraid to use a class from a package containing the dreaded “internal” word, but it seems to be the only way to do so…

Lesson 3: Google is not really your friend here

… Was it? In fact, I also found this configuration snippet:

<!-- Alternative snippet found online (reconstructed; the bean id is an assumption) -->
<bean id="springHelper" class="org.jbpm.pvm.internal.processengine.SpringHelper">
    <property name="jbpmCfg" value="jbpm.cfg.xml"/>
</bean>

And I think if you search long enough, you’ll find many other ways to achieve engine configuration. That’s the problem when documentation is lacking: we developers are smart people, so we’ll find a way to make it work, no matter what.

Lesson 4: don’t forget to configure the logging framework

This one is not jBPM-specific, but it’s important nonetheless. I lost much (really much) time because I foolishly ignored the message warning me that Log4J couldn’t find its configuration file. After having created the damned thing, I finally could read an important piece of information received when programmatically deploying a new jPDL file:

WARNING: no objects were deployed! Check if you have configured a correct deployer in your jbpm.cfg.xml file for the type of deployment you want to do.

Lesson 5: the truth is in the code

This lesson is a direct consequence of the previous one: since I had already double-checked that my file had the jpdl.xml extension and that the jBPM deployer was correctly set in my configuration, I had to understand what the code really did. In the end, this meant getting the sources and debugging in my IDE to watch what happened under the cover. The culprit was the following line:

repositoryService.createDeployment().addResourceFromInputStream("test", new FileInputStream(file));

Since I used a stream on the file, and not the file itself, I had to supply a fictitious resource name (“test”). The latter was checked for a jpdl.xml file extension and, of course, failed miserably. This was however not readily apparent from the error message… The fix was the following:

repositoryService.createDeployment().addResourceFromFile(file);

Of course, the referenced file had the correct jpdl.xml extension, and it worked like a charm.

Lesson 6: don’t reinvent the wheel

Tightly coupled with lessons 2 and 3: I found many snippets fully describing the jbpm.cfg.xml configuration file. Albeit working (well, I hope so, since I didn’t test them), it’s overkill, error-prone and perhaps a sign of too much Google use (vs. brain use). For example, the jbpm4-spring-demo project published on Google Code provides a full-fledged configuration file. With a lengthy trial-and-error process, I managed to achieve success with a much shorter configuration file that reuses existing configuration snippets:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Reconstructed: the imports reference the standard jBPM 4 configuration snippets -->
<jbpm-configuration>

    <import resource="jbpm.default.cfg.xml"/>
    <import resource="jbpm.tx.spring.cfg.xml"/>
    <import resource="jbpm.jpdl.cfg.xml"/>

</jbpm-configuration>

Lesson 7: jBPM can access the Spring context

jBPM offers the java activity to call arbitrary Java code: it can be an EJB, but we can also wire the Spring context to jBPM so as to make Spring beans accessible from jBPM. It’s easily done by modifying the former configuration:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Reconstructed: the key point is adding "spring" to the script manager's read contexts -->
<jbpm-configuration>

    <import resource="jbpm.default.cfg.xml"/>
    <import resource="jbpm.tx.spring.cfg.xml"/>
    <import resource="jbpm.jpdl.cfg.xml"/>

    <process-engine-context>
        <script-manager default-expression-language="juel"
                        default-script-language="juel"
                        read-contexts="execution, environment, process-engine, spring"
                        write-context="">
            <script-language name="juel" factory="org.jbpm.pvm.internal.script.JuelScriptEngineFactory"/>
        </script-manager>
    </process-engine-context>

</jbpm-configuration>

Note that I haven’t found an already existing snippet for this one, feedback welcome.

Lesson 8: use the JTA transaction manager if needed

You’ll probably end up having a business database alongside your jBPM database. In most companies, the DBAs will gently ask you to put them in different schemas. In this case, don’t forget to use a JTA transaction manager along with XA datasources to get 2-phase commit, as sketched below. In your tests, the same schema and a simple transaction manager based on the data source will be enough.
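
A minimal Spring configuration sketch of that switch (bean names are my choice, not taken from the article’s sources):

<!-- Production: JTA with XA datasources for 2-phase commit -->
<bean id="transactionManager" class="org.springframework.transaction.jta.JtaTransactionManager"/>

<!-- Tests: a single schema and a simple datasource-based manager are enough -->
<!--
<bean id="transactionManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>
-->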

For those of you who need yet another way to configure their jBPM/Spring integration, here are the sources I used, in Maven/Eclipse format. I hope my approach is a bit more pragmatic. In any case, remember I’m a newbie with this product.

Categories: Java

XML validation with imported/included schemas

September 2nd, 2012

Recently, I tried to help a teammate design a WSDL file. I gently drove him toward separating the interface itself (in the WSDL file) from the domain objects (in an XML Schema file). One thing leading to another, I also made him split the XSD into two separate files, one including the other, for design purposes. Alas, tests were already present, and they failed miserably after my refactoring, complaining about a type in the included file not being found. The situation was extremely unpleasant, not only because I looked a little foolish in front of one of my co-workers, but also because despite my best efforts, I couldn’t achieve validation.

I finally found the solution, and I hope to spread it as much as I can so other developers stop losing time on this issue. The root of the problem is that the Java XML validation API cannot resolve included XML schemas (nor imported ones), period. However, it allows registering a (crude) resolver that can provide the content of the included/imported XSD. So, the solution is to implement your own resolver and your own content holder (none is provided in JDK 6).

  1. Create an “input” implementation. This class is responsible for holding the content of the resolved schema.
    public class LSInputImpl implements LSInput {
    
        private Reader characterStream;
        private InputStream byteStream;
        private String stringData;
        private String systemId;
        private String publicId;
        private String baseURI;
        private String encoding;
        private boolean certifiedText;
    
        // Getters and setters here
    }
  2. Create a resolver implementation. This one is based on the premise that the included/imported schemas lie at the root of the classpath, and is relatively simple. More complex implementations can handle a variety of locations (filesystem, Internet, etc.).
    public class ClasspathResourceResolver implements LSResourceResolver {
    
        @Override
        public LSInput resolveResource(String type, String namespaceURI, String publicId, String systemId, String baseURI) {
    
            LSInputImpl input = new LSInputImpl();
    
            InputStream stream = getClass().getClassLoader().getResourceAsStream(systemId);
    
            input.setPublicId(publicId);
            input.setSystemId(systemId);
            input.setBaseURI(baseURI);
            input.setCharacterStream(new InputStreamReader(stream));
    
            return input;
        }
    }
  3. Finally, just set the resolver on the schema factory:
    SchemaFactory schemaFactory = SchemaFactory.newInstance(W3C_XML_SCHEMA_NS_URI);
    
    schemaFactory.setResourceResolver(new ClasspathResourceResolver());

These 3 steps will go a long way toward cleanly splitting your XML schemas.
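
To complete the picture, here’s a sketch of how the factory is then used to validate a document (the resource names are hypothetical):

Schema schema = schemaFactory.newSchema(
    new StreamSource(getClass().getClassLoader().getResourceAsStream("person.xsd")));

Validator validator = schema.newValidator();

// Throws a SAXException if person.xml doesn't conform to the (split) schema
validator.validate(new StreamSource(getClass().getClassLoader().getResourceAsStream("person.xml")));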

Categories: Java

Re-use your test classes across different projects

August 25th, 2012

Sometimes, you need to reuse your test classes across different projects. Here are two use cases that I know of:

  • Utility classes that create relevant domain objects used in different modules
  • Database test classes (and resources) that need to be run in the persistence project as well as in the integration test project

Since I’ve seen more than my share of misuses, this article aims to provide an elegant solution once and for all.

Creating the test artifact

First, we have to use Maven: I know that not everyone is a Maven fanboy, but it gets the job done – and in our case, it does so easily. Then, we configure the JAR plugin to attach tests. This compiles the test classes, copies the test resources, and packages them in an attached test artifact.

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-jar-plugin</artifactId>
      <version>2.2</version>
      <executions>
        <execution>
          <goals>
            <goal>test-jar</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

The test artifact is stored side-by-side with the main artifact once deployed in the repository. Note that the configured test-jar goal is bound by default to the package phase.

Using the test artifact

The newly-created test artifact can be declared as a dependency of a project with the following snippet:

<dependency>
  <groupId>ch.frankel.blog.foo</groupId>
  <artifactId>foo</artifactId>
  <version>1.0.0</version>
  <type>test-jar</type>
  <scope>test</scope>
</dependency>

The type has to be test-jar instead of simply jar in order for Maven to pick the attached artifact and not the main one. Also note that although you could configure the dependency with a classifier instead of a type, the current documentation warns about possible bugs and favors the type configuration.


Categories: Java

Spring Data, Spring Security and Envers integration

August 20th, 2012

Spring Data JPA, Spring Security and Envers are libraries that I personally enjoy working with (and I tend to think they are considered best-of-breed in their respective categories). Anyway, I wanted to implement what I considered a simple use case: entities have to be Envers-audited, and the revision has to contain the identity of the user that initiated the action. Although it seems simple, I had some challenges to overcome to achieve this. This article lists them and provides a possible solution.

Software architecture

I used Spring MVC as the web framework, configured to use Spring Security. In order to ease development, Spring Security was configured with a hard-coded login/password; it’s easy to connect it to a more adequate backend. In the same spirit, I used a datasource wrapping a driver manager connection to an H2 in-memory database; this is simply changed through Spring configuration.

A typical flow is handled by my Spring MVC Controller, and passed to the injected service, which manages the Spring Data JPA repository (the component that accesses the database).

Facts

Here are the facts that proved to be hindrances (read: obstacles to overcome) during this development:

  • Spring Security 3.1.1.RELEASE (latest) uses Spring 3.0.7.RELEASE (not latest)
  • Spring Data JPA uses (wait for it) JPA 2. Thus, you have to parameterize your Spring configuration with LocalContainerEntityManagerFactoryBean, then configure the entity manager factory bean with the implementation to use (Hibernate in our case). Some passed parameters are portable across different JPA implementations, others address Envers specifically and are thus completely non-portable. The snippet below is a reconstruction; bean ids and property values are assumptions:

    <bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource"/>
        <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                <property name="generateDdl" value="true"/>
                <property name="showSql" value="true"/>
            </bean>
        </property>
        <property name="jpaProperties">
            <props>
                <!-- Envers-specific, non-portable property -->
                <prop key="org.hibernate.envers.audit_table_suffix">_HISTORY</prop>
            </props>
        </property>
    </bean>
  • Spring Data JPA does provide some auditing capabilities in the form of the Auditable interface. Auditables have createdBy, createdDate, modifiedBy and modifiedDate properties. Unfortunately, the library doesn’t store previous states of entities: we have to use Envers, which provides this feature
  • After having fought like mad, I realized integrating Envers with Spring Data JPA was no small potatoes and stumbled upon the Spring Data Envers module, which does exactly that job. I found no available Maven artifacts in any repository, so I cloned the Git repo and built the then-current version (0.1.0.BUILD-SNAPSHOT). It provided me with the opportunity to make a very modest contribution to the project.

On the bright side, and despite all examples I googled, the latest versions of Hibernate not only are shipped with Envers, but there’s no configuration needed to register Envers listeners, thanks to a smart service provider use. You only need to provide the JAR on the classpath, and the JAR itself takes care of registration. This makes Envers integration much simpler.

How to

Beyond adding Spring Data Envers, little imagination is needed, as it’s basic Envers usage.

  • Create a revision entity that has the needed attribute (user):
    @RevisionEntity(AuditingRevisionListener.class)
    @Entity
    public class AuditedRevisionEntity extends DefaultRevisionEntity {
    
        private static final long serialVersionUID = 1L;
    
        private String user;
    
        public String getUser() {
    
            return user;
        }
    
        public void setUser(String user) {
    
            this.user = user;
        }
    }
  • Create the listener to get the identity from the Spring Security context and set it on the revision entity:
    public class AuditingRevisionListener implements RevisionListener {
    
        @Override
        public void newRevision(Object revisionEntity) {
    
            AuditedRevisionEntity auditedRevisionEntity = (AuditedRevisionEntity) revisionEntity;
    
            String userName = SecurityContextHolder.getContext().getAuthentication().getName();
    
            auditedRevisionEntity.setUser(userName);
        }
    }

Presto, you’re done…

Bad news

… Aren’t you? The aforementioned solution works perfectly when writing to the database. The rub comes when trying to get the information back, because the Spring Data API doesn’t allow it (yet?). So, there are basically three options (assuming you care about retrieval: this kind of information may not be for the user to see, and you can always connect to the database to query it when – if – you need the data):

  • Create code to retrieve the data. Obviously, you cannot use Spring Data JPA (your entity is missing the user field; it’s “created” at runtime). The best way IMHO would be to create a listener that gets data on read queries for the audited entity and returns an enhanced entity. Neither performance nor design considerations are in favor of this one.
  • Hack the API using reflection, which makes your code dependent on the internals of Spring Data. This is as bad an option as the previous one, but I did it for educational purposes:
    // Generic type parameters reconstructed: revision number type and audited entity
    Revisions<Integer, Stuff> revisions = stuffRepository.findRevisions(stuff.getId());

    List<AuditedRevisionEntity> auditedRevisionEntities = new ArrayList<AuditedRevisionEntity>();

    for (Revision<Integer, Stuff> revision : revisions.getContent()) {

        Field field = ReflectionUtils.findField(Revision.class, "metadata");

        // Oh, it's ugly!
        ReflectionUtils.makeAccessible(field);

        @SuppressWarnings("rawtypes")
        RevisionMetadata metadata = (RevisionMetadata) ReflectionUtils.getField(field, revision);

        AuditedRevisionEntity auditedRevisionEntity = (AuditedRevisionEntity) metadata.getDelegate();

        auditedRevisionEntities.add(auditedRevisionEntity);

        // Do what you want with auditedRevisionEntity...
    }
  • Last but not least, you can contribute to the code of Spring Data. Granted, you’ll need knowledge of the API and the internals, skills and time, but it’s worth it :-)

Conclusion

Integrating heterogeneous libraries is hardly ever a walkover. In the case of Spring Data JPA and Envers, it’s as easy as pie thanks to the Spring Data Envers library. However, if you need to make your audit data accessible, you’ll have to integrate further.

The sources for this article are here, in Eclipse/Maven format.


Method injection with Spring

July 29th, 2012

Spring core comes out-of-the-box with two scopes: singleton and prototype. Singletons implement the Singleton pattern, meaning there’s only a single instance at runtime (in a given Spring context). Spring instantiates them during context creation, caches them in the context, and serves them from the cache when needed. Prototypes are instantiated each time you access the context to get the bean.

Problems arise when you need to inject a prototype-scoped bean into a singleton-scoped bean. Since singletons are created (and injected) during context creation, that’s the only time the Spring context is accessed for them: prototype-scoped beans are thus injected only once, defeating their purpose.

In order to inject prototypes into singletons, alongside setter and constructor injection, Spring proposes another way of injection, called method injection. It works in the following way: since singletons are instantiated at context creation, Spring changes the way prototype-scoped beans are handled, from being injected to being created by an abstract method. The following snippet shows the unsuccessful way to achieve injection:

public class Singleton {

    private Prototype prototype;

    public Singleton(Prototype prototype) {

        this.prototype = prototype;
    }

    public void doSomething() {

        prototype.foo();
    }

    public void doSomethingElse() {

        prototype.bar();
    }
}

The next snippet displays the correct code:

public abstract class Singleton {

    protected abstract Prototype createPrototype();

    public void doSomething() {

        createPrototype().foo();
    }

    public void doSomethingElse() {

        createPrototype().bar();
    }
}

As you noticed, the code doesn’t specify the createPrototype() implementation. This responsibility is delegated to Spring, hence the following configuration:

<!-- Reconstructed configuration: class names follow the snippets above (packages are assumptions) -->
<bean id="prototype" class="ch.frankel.blog.methodinjection.Prototype" scope="prototype"/>

<bean id="singleton" class="ch.frankel.blog.methodinjection.Singleton">
    <lookup-method name="createPrototype" bean="prototype"/>
</bean>

Note that an alternative to method injection would be to explicitly access the Spring context to get the bean yourself, as sketched below. It’s a bad thing to do, since it completely defeats the Inversion of Control pattern, but it works (and is essentially the only option when a nasty bug happens on the server – see below).
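
For the record, here’s a sketch of that last-resort alternative (class names reuse the snippets above):

public class Singleton implements ApplicationContextAware {

    private ApplicationContext context;

    @Override
    public void setApplicationContext(ApplicationContext context) {

        this.context = context;
    }

    public void doSomething() {

        // Explicit lookup returns a fresh prototype instance on each call, but IoC is gone
        context.getBean(Prototype.class).foo();
    }
}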

However, method injection has several limitations:

  • Spring achieves this black magic by changing bytecode. Thus, you’ll need the CGLIB library on the classpath.
  • The feature is only available through XML configuration, not annotations (see this JIRA for more information)
  • Finally, some application servers have bugs related to CGLIB (such as this one)


Transaction management: EJB3 vs Spring

July 22nd, 2012

Transaction management is a subject generally left to the tender care of a senior developer (or architect). Given the messages coming from some actors of the JavaEE community claiming that with newer versions of JavaEE you don’t need Spring anymore, I was interested in fact-checking how transaction management is handled in both technologies.

Note: these messages were sent a year and a half ago already and prompted me to write this article.

Transaction demarcation

Note that although both technologies provide programmatic transaction demarcation (start transaction, then commit/rollback), we’ll focus on declarative demarcation, since it’s easier to use in real life.

In EJB3, transactions are demarcated with the @TransactionAttribute annotation. The annotation can be set on the class, in which case every method will have its transactional attribute, or per method; annotations at the method level override the one at the class level. Annotations are found automatically by JavaEE-compliant application servers.
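
A minimal sketch of both levels (names reuse the snippets further below):

@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRED) // class-level default
public class MyServiceBean implements MyServiceLocal {

    public void A() {

        // Runs with the class-level REQUIRED attribute
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) // method-level override
    public void B() {

        // Runs in its own, new transaction
    }
}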

In Spring, the former annotation is replaced by the proprietary @Transactional annotation. The behavior is exactly the same (annotation at method/class level and possibility of overriding). Annotations can only be found when the Spring bean definition file contains the http://www.springframework.org/schema/tx namespace as well as the following snippet:

<tx:annotation-driven/>

Alternatively, both technologies provide an orthogonal way to set transaction demarcation: in EJB3, one can use the EJB JAR deployment descriptor (ejb-jar.xml) while in Spring, any Spring configuration file will do.

Transactions propagation

In EJB3, there are exactly 6 possible propagation values: MANDATORY, REQUIRED (default), REQUIRES_NEW, SUPPORTS, NOT_SUPPORTED and NEVER.

Spring adds support for NESTED (see below).

Nested transactions

Nested transactions are transactions that are started and then committed/rolled back during the execution of a root transaction. A nested transaction’s results are limited to its own scope (it has no effect on the umbrella transaction).

Nested transactions are not allowed in EJB3; they are in Spring.
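
A sketch of requesting a nested transaction in Spring; note it requires a transaction manager that supports savepoints, such as a JDBC-based one:

@Transactional(propagation = Propagation.NESTED)
public void B() {

    // A rollback here only discards this nested scope, not the umbrella transaction
}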

Read-only transaction

Read-only transactions are best used with certain databases or ORM frameworks like Hibernate. In the latter case, Hibernate optimizes sessions so that they never flush (i.e. never push changes from the cache to the underlying database).

I haven’t found a way (yet) to qualify a transaction as read-only in EJB3 (help welcome). In Spring, @Transactional has a readOnly parameter to implement read-only transactions:

@Transactional(readOnly = true)

Local method calls

In local method calls, a bean’s method A() calls another method B() on the same instance. The expected behavior would be that the transaction attributes of method B() are taken into account; this is the case in neither technology, but both offer a workaround.

In EJB3, you have to inject an instance of the same class and call the method on this instance in order to use transaction attributes. For example:

@Stateless
public class MyServiceBean implements MyServiceLocal {

    @EJB
    private MyServiceLocal service;

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    public void A() {

        service.B();
    }

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void B() {

        ...
    }
}

In Spring, by default, transaction management is handled through pure Java proxies. If you want to have transaction management in local method calls, you’ll have to turn on AspectJ. This is easily done in the Spring beans definition file (but beware the side-effects; see the Spring documentation for more details):

<tx:annotation-driven mode="aspectj"/>

Exception handling and rollback

Exception handling is the area with the greatest differences between Spring and EJB3.

In EJB3, by default, only runtime exceptions thrown from a method roll back a transaction demarcated around this method. In order to mimic this behavior for checked exceptions, annotate the exception class with @ApplicationException(rollback = true). Likewise, to discard this behavior for runtime exceptions, annotate your exception class with @ApplicationException(rollback = false).

This has the disadvantage of not being able to use the same exception class to roll back the transaction in one method and still commit despite the exception in another. In order to achieve this, you have to manage your transaction programmatically:

@Stateless
public class MyServiceBean implements MyServiceLocal {

    @Resource
    private SessionContext context;

    public void A() {

        try {

            ...

        } catch (MyException e) {

            context.setRollbackOnly();
        }
    }
}

In Spring, runtime exceptions also cause transaction rollback. In order to change this behavior, use the rollbackFor or noRollbackFor attributes of @Transactional:

public class MyServiceImpl {

    @Transactional(rollbackFor = MyException.class)
    public void A() {

        ...
    }
}

Conclusion

There’s no denying that JavaEE has made giant steps in the right direction with its 6th version. And yet, small details keep pointing me toward Spring. If you only know one of the two – Spring or JavaEE 6 – I encourage you to try the other and see for yourself which one you’re more comfortable with.


Categories: JavaEE