Posts Tagged ‘persistence’
  • No more Hibernate? Really?

    I recently stumbled upon this punchy one-liner: “No More Hibernate!”.

    At first, I couldn’t believe what I read. Then, scrolling down, I noticed that the site was linked to jOOQ, a framework that advocates for SQL to have a first-class status in Java:

    SQL was never meant to be abstracted. To be confined in the narrow boundaries of heavy mappers, hiding the beauty and simplicity of relational data. SQL was never meant to be object-oriented. SQL was never meant to be anything other than… SQL!

    So the jOOQ people do not like Hibernate, EclipseLink and JPA, and prefer writing SQL, albeit in Java syntax. I couldn’t swallow this statement, and it prompted this article. Anyway, its final form is somewhat different from the flaming I originally intended.

    I don’t remember fondly the time when I had to write SQL by hand. Perhaps I lost my ability to understand SQL when I understood Object-Oriented Programming? Sure, I once or twice wrote monster PL/SQL scripts, but beyond that, I struggle to write queries with more than 2 joins. And it really bothers me to concatenate strings to create queries with optional parameters.

    When I discovered Hibernate 7 years ago, I could only think: “Wow, these guys understand exactly what is wrong with plain JDBC!”. Since that time, there have been MyBatis (formerly iBatis), more Hibernate versions, JPA v1, JPA v2, and so on. It isn’t as if SQL was really welcomed in the Java world… So, who is foolish enough to challenge this?

    After letting some background thread digest this “No more Hibernate” stuff, some facts came back to the surface. In particular, have you ever been in an interview with a candidate proudly displaying Hibernate skills on his CV? For me, it usually goes like this:

    • First question: “Could you describe the reason behind LazyInitializationException?” <blank stare>
    • Second question: “Ok, what’s the difference between eager and lazy loading?” <…>
    • When candidates answer both questions correctly - and that’s not so common - my third question is something along the lines of: “What’s your way of avoiding those exceptions?”

    At this point, given the fraction of matching candidates, I’ve reached the point of accepting nearly every answer, even some I’m strongly against, such as the OpenSessionInViewFilter.
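
    For reference, the eager/lazy distinction those questions probe is controlled by the fetch attribute on association mappings; a minimal sketch, where the entity and field names are hypothetical:

    ```java
    @Entity
    public class Employer {

        @Id
        @GeneratedValue
        private Long id;

        // LAZY (the default for collections): employees are fetched on first
        // access; touching them after the Session is closed is what triggers
        // LazyInitializationException
        @OneToMany(mappedBy = "employer", fetch = FetchType.LAZY)
        private Set<Employee> employees;

        // EAGER (the default for @ManyToOne): loaded along with the owning entity
        @ManyToOne(fetch = FetchType.EAGER)
        private Company company;
    }
    ```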

    Dare I also mention the time I had to review an application’s code and discovered that each and every attribute was eagerly loaded? Or when I was asked to investigate an OutOfMemoryError in staging, but only when logging was activated? Thanks to Dynatrace, I discovered that the Hibernate entities all inherited from an abstract class that overrode toString() to use introspection to log every attribute value. To be honest, I had shown this trick to the developer earlier for debugging purposes; I wouldn’t have thought he would log entities.

    And those are things I know and understand; I’m also aware of more of Hibernate’s pitfalls: please check this awesome presentation by Patrycja Wegrzynowicz to realize their overall scope.

    On one hand, Hibernate really rocked the JDBC world; on the other hand, it really set the developer skills bar higher. Even if “No more Hibernate” is really too strong for my own taste, I guess having a look at your team’s skills and knowledge before choosing your DAO technology stack can go a long way toward meeting your project deadlines and avoiding bad surprises in the later stages.

    Categories: Java Tags: hibernate, jooq, persistence
  • Easier JPA with Spring Data JPA

    Database access in Java went through some steps:

    • at first, pure JDBC
    • proprietary frameworks
    • standards such as EJB Entities and JDO
    • OpenSource frameworks such as Hibernate and EclipseLink (known as TopLink at the time)

    When JPA was finally released, it seemed my wishes had come true. At last, there was a standard coming from the trenches to access databases in Java.

    Unfortunately, JPA didn’t keep its promises when compared to Hibernate: for example, you don’t have Query by Example. Even worse, in its first version, JPA didn’t provide simple features like Criteria, so even simple queries had to be implemented through JPQL and thus achieved with String concatenation. IMHO, this completely defeated the purpose of ORM.
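
    To make the point concrete, here is a sketch of what dynamic queries with optional parameters looked like in JPA 1: the JPQL string is assembled by hand (entity and field names are hypothetical):

    ```java
    public class QueryBuilderDemo {

        // Builds a JPQL query string by concatenation, appending a clause
        // only when the corresponding optional parameter is present
        public static String buildEmployeeQuery(String firstName, Integer minSalary) {
            StringBuilder jpql = new StringBuilder("SELECT e FROM Employee e WHERE 1 = 1");
            if (firstName != null) {
                jpql.append(" AND e.firstName = :firstName");
            }
            if (minSalary != null) {
                jpql.append(" AND e.salary >= :minSalary");
            }
            return jpql.toString();
        }

        public static void main(String[] args) {
            // The resulting string would then be fed to em.createQuery(...)
            System.out.println(buildEmployeeQuery("John", null));
        }
    }
    ```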

    JPA2 to the rescue

    At last, JPA2 supplies something usable in a real-world application. And yet, I feel there’s still so much boilerplate code to write a simple CRUD DAO:

    public class JpaDao<E, PK extends Serializable> {

        private EntityManager em;

        private final Class<E> managedClass;

        protected JpaDao(Class<E> managedClass) {
            this.managedClass = managedClass;
        }

        public void persist(E entity) {
            em.persist(entity);
        }

        public void remove(E entity) {
            em.remove(entity);
        }

        public E findById(PK id) {
            return em.find(managedClass, id);
        }
    }

    Some would (and do) object that in such a use-case, there’s no need for a DAO: the EntityManager just needs to be injected into the service class and used directly. This may be a relevant point of view, but only as long as there are no queries: as soon as you go beyond simple CRUD, you need to separate data access from business logic.

    Boilerplate code in JPA2

    Two simple use-cases highlight the useless boilerplate code in JPA 2: @NamedQuery and simple criteria queries. In the first case, you have to get a handle on the named query through the entity manager, then set potential parameters, like so:

    Query query = em.createNamedQuery("Employee.findHighestPaidEmployee");
    Employee employee = (Employee) query.getSingleResult();

    In the second, you have to implement your own query with the CriteriaBuilder:

    CriteriaBuilder builder = em.getCriteriaBuilder();
    CriteriaQuery<Person> query = builder.createQuery(Person.class);
    Root<Person> fromPerson = query.from(Person.class);
    return em.createQuery(query.select(fromPerson)).getResultList();

    IMHO, these lines of code bring nothing to the table and just clutter our own code. Luckily, some time ago, I found project Hades, a product which was based on this very conclusion and wrote this simple code for you.

    Spring Data JPA

    Given the fate of many excellent OpenSource projects, Hades fared much better: it has been brought into the Spring ecosystem under the name Spring Data JPA. Out of the box, SDJ provides DAOs with advanced CRUD features. For example, the following interface can be used as-is:

    public interface EmployeeRepository extends JpaRepository<Employee, Long> {}

    Given some Spring magic, an implementation will be provided at runtime with the following methods:

    • void deleteAllInBatch()
    • void deleteInBatch(Iterable<Employee> entities)
    • void flush()
    • List<Employee> findAll()
    • List<Employee> findAll(Sort sort)
    • List<Employee> findAll(Iterable<Long> ids)
    • Page<Employee> findAll(Pageable pageable)
    • Employee findOne(Long id)
    • boolean exists(Long id)
    • long count()
    • <S extends Employee> S save(S entity)
    • <S extends Employee> List<S> save(Iterable<S> entities)
    • Employee saveAndFlush(Employee entity)
    • void delete(Long id)
    • void delete(Employee entity)
    • void delete(Iterable<? extends Employee> entities)
    • void deleteAll()

    Yes, SDJ provides you with a generic DAO, like so many frameworks around, but here, wiring into the underlying implementation is handled by the framework, free of charge. For those who don’t need all of these methods and prefer the strict minimum, you can also use the following strategy, where you declare only the methods you need from the list above (along with the @RepositoryDefinition annotation):

    @RepositoryDefinition(domainClass = Employee.class, idClass = Long.class)
    public interface EmployeeRepository {

        long count();

        Employee save(Employee employee);
    }

    It sure is nice, but the best is yet to come. Remember the two use-cases above we had to write on our own? The first is handled simply by declaring a method whose name matches the unqualified query name in the interface, like so:

    @RepositoryDefinition(domainClass = Employee.class, idClass = Long.class)
    public interface EmployeeRepository {

        Employee findHighestPaidEmployee();
    }

    The second use-case, finding all employees, is provided out-of-the-box by the JPA repository. But let’s pretend for a second that we have a WHERE clause, for example on the first name. SDJ is capable of handling simple queries based on the method name:

    @RepositoryDefinition(domainClass = Employee.class, idClass = Long.class)
    public interface EmployeeRepository {

        List<Employee> findByFirstname(String firstname);
    }

    We had to code only an interface and its methods: no implementation code nor metamodel generation was involved! Don’t worry: if you need to implement some complex queries, SDJ lets you wire in your own implementation.
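
    For the record, wiring in a custom implementation follows a naming convention: declare the extra methods in a separate interface, implement it in a class suffixed with Impl, and make the repository extend both. A sketch, where the interface names and the query inside are hypothetical:

    ```java
    public interface EmployeeRepositoryCustom {

        List<Employee> findWithComplexCriteria();
    }

    // Picked up automatically thanks to the "Impl" suffix
    public class EmployeeRepositoryImpl implements EmployeeRepositoryCustom {

        @PersistenceContext
        private EntityManager em;

        public List<Employee> findWithComplexCriteria() {
            // Full access to the EntityManager for JPQL or Criteria queries
            return em.createQuery("SELECT e FROM Employee e", Employee.class).getResultList();
        }
    }

    public interface EmployeeRepository extends JpaRepository<Employee, Long>, EmployeeRepositoryCustom {
    }
    ```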


    If you’re already a Spring user, Spring Data JPA is really (really!) a must. If you’re not, you’re welcome to test it to see its added value for yourself. IMHO, SDJ is one of the reasons Java EE has not killed Spring yet: it bridged the injection part, but boilerplate code is still around every corner.

    This article is not a how-to but a teaser to ease you into SDJ. You can find the sources for this article here, in Maven/Eclipse format.

    To go further:

    For those who aren’t into JPA yet, there’s Spring Data JDBC; for those who are well beyond that (think Big Data), there’s Spring Data Hadoop. Check out all the Spring Data projects!

    Categories: Java Tags: jpa, persistence, spring data
  • How to test code that uses Envers

    Envers is a Hibernate module that can be configured to automatically audit changes made to your entities. Each audited entity is thus associated with a list of revisions, each revision capturing the state of the entity when a change occurs. There is however an obstacle I came across while I was “unit testing” my DAO, and that’s what I want to share to prevent others from falling into the same pit.

    First, let’s review the few steps needed to use Envers:

    • Annotate your entity with the @Audited annotation:
    @Entity
    @Audited
    public class Person {

        // Properties
    }
    • Register the Envers AuditEventListener in your Hibernate SessionFactory through Spring:
    <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="packagesToScan" value="" />
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
            </props>
        </property>
        <property name="schemaUpdate" value="true" />
        <property name="eventListeners">
            <map>
                <entry key="post-insert" value-ref="auditListener" />
                <entry key="post-update" value-ref="auditListener" />
                <entry key="post-delete" value-ref="auditListener" />
                <entry key="pre-collection-update" value-ref="auditListener" />
                <entry key="pre-collection-remove" value-ref="auditListener" />
                <entry key="post-collection-recreate" value-ref="auditListener" />
            </map>
        </property>
    </bean>

    <bean id="auditListener" class="org.hibernate.envers.event.AuditEventListener" />
    • Configure the Hibernate transaction manager as your transaction manager. Note that auditing won’t be triggered if you use another transaction manager (DataSourceTransactionManager comes to mind):
    <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>
    • Now is the time to create your test class:
    @TransactionConfiguration(defaultRollback = false)
    public class PersonDaoImplTest extends AbstractTransactionalTestNGSpringContextTests {

        @Autowired
        private PersonDao personDao;

        @BeforeMethod
        protected void setUp() {
            // Populate database
        }

        @Test
        public void personShouldBeAudited() {
            Person person = personDao.get(1L);
            List<Person> history = personDao.getPersonHistory(1L);
            assertEquals(history.size(), 1);
        }
    }

    Strangely, when you execute the previous test class, the test method fails when checking that the list is not empty: it is empty, meaning there’s no revision associated with the entity. Moreover, nothing shows up in the log. However, the revision does show up in the audited table at the end of the test (provided you didn’t clear the table after its execution).

    Comes the dreaded question: why? Well, it seems Hibernate post-event listeners are only called when the transaction is committed. In our case, it fits: the transaction is committed by Spring after method completion, while our test tries to assert inside the method.

    In order for our test to pass, we have to manually manage a transaction inside our method, to commit the update to the database.

    @Test
    public void personShouldBeAuditedWhenUpdatedWithManualTransaction() {
        PlatformTransactionManager txMgr = applicationContext.getBean(PlatformTransactionManager.class);
        // A new transaction is required; the wrapping transaction is for Envers
        TransactionStatus status = txMgr.getTransaction(new DefaultTransactionDefinition(PROPAGATION_REQUIRES_NEW));
        Person person = personDao.get(1L);
        // Update the entity here, then commit so the Envers listeners fire
        txMgr.commit(status);
        List<Person> history = personDao.getPersonHistory(1L);
        assertEquals(history.size(), 1);
    }

    On one hand, the test passes and the log shows the SQL commands accordingly. On the other hand, the cost is the additional boilerplate code needed to make it pass.

    Of course, one could (should?) question the need to test the feature in the first place. Since it’s functionality brought by a library, the reasoning could be that if you don’t trust the library, you shouldn’t use it at all. In my case, it was the first time I used Envers, so there’s no denying I had to build trust between me and the library. Yet, even with trusted libraries, I do test specific cases: for example, when using Hibernate, I create test classes to verify that complex queries get me the right results. As such, auditing qualifies as a complex use-case whose misbehaviors I want to be aware of as soon as possible.

    You’ll find the sources for this article here, in Maven/Eclipse format.

    Categories: Java Tags: envers, hibernate, persistence
  • Hades, your next persistence angel?

    A year ago, a colleague of mine showed me a very interesting framework named Krank (later renamed Crank, because the former name means “sick” in German, which does not bode well for any framework). Crank’s goal was to ease development on top of Java Persistence API 1.0. Two of its features caught my attention at the time:

    • a generic DAO which implements CRUD operations out-of-the-box. This is a Grail of sorts: just Google "Generic DAO" and watch the results, as everyone seems to provide such a class. Whether each one is a success, I leave to the reader.
    • a binding mechanism between this generic DAO and named queries, releasing you from the burden of having to create the query object yourself

    Unfortunately, there’s been no activity on Crank since 2008 and I think it can safely be categorized as dead. However, and I don’t know if there’s a link, a new project has emerged that not only implements the same features but also adds even more innovative ones. This project, which I only recently discovered, is Hades, whose goal is to improve productivity on the persistence layer in general and with JPA v2 in particular. It now definitely stands at the top of my “Hot topics” list.

    In order to evaluate Hades, I’ve implemented some very simple unit tests: it just works, as is! Let’s have an overview of Hades features.

    Configuring Hades

    Hades configuration is based on Spring, whether you like it or not. Personally, I do, since it makes configuring Hades a breeze. To achieve this, Hades uses a little-known feature of Spring, namely authoring (for more info on the subject, see my previous Spring authoring article). Assume we already have a Spring beans configuration file and that the entity manager factory is already defined. Just add the Hades namespace to the header and reference the base package of your DAO classes:

      xsi:schemaLocation="... ...">
      <hades:dao-config base-package="" />

    Since Hades uses convention over configuration, it will:

    • transparently create a Spring bean for each of your DAO interfaces (which must inherit from GenericDao) in the configured package
    • reference it under the unqualified class name with the first letter lower-cased
    • inject it with the default entity manager factory and transaction manager, provided they are declared as "entityManagerFactory" and "transactionManager" respectively

    Generic DAO

    Hades’ generic DAO supports standard CRUD operations on single entities and whole tables, plus COUNT. Under the covers, it uses the injected entity manager factory to get a reference to an entity manager and uses the latter for these operations.

    Named query

    Using a JPA named query needs minimal boilerplate code (but still some) and does not provide a generic signature (so the result will need to be cast afterwards):

    Query query = em.createNamedQuery("findUsersByName").setParameter("name", name);
    List<User> result = (List<User>) query.getResultList();

    Hades, on the other hand, provides a binding mechanism between a named query and an interface’s method name:

    public interface UserDao extends GenericDao<User, Long> {

        List<User> findUsersByName(String name);
    }

    Now, you just have to inject the DAO into your service and use it as is.

    Simple criteria query from method name

    Most named queries you write manually are of the form SELECT * FROM MYTABLE WHERE A = 'a' AND B = 'b', or use other simple criteria. Hades can automatically generate and execute queries derived from your method names. For example, if your DAO’s method signature is List<User> findByLastNameOrAgeLessThan(String name, int age), Hades will create the associated query SELECT u FROM User u WHERE u.lastName = ?1 OR u.age < ?2 and bind the passed parameters.

    Although I was a bit wary of such a feature based on method names, I came to realize it ensures the semantics of the method are aligned with what it does. Moreover, there’s no code to write! Truly, I could easily fall in love with this feature…

    Enhancing your DAOs

    Crank’s generic DAO had a major drawback: if you wanted to add methods, you had to create a concrete DAO class, compose it with the generic one, then delegate all the standard CRUD operations to the generic DAO. You could then code the extra methods on your concrete DAO. At least, that’s the design I came up with when I had to do it. It was not very complex, since the delegation could be generated by your favorite IDE, but it was a bore and you ended up with a very long class full of delegating calls. Not what I call simple code.

    Hades was designed for this behaviour from the start. When you need to add methods to a specific DAO, all you have to do is:

    • create an interface with the extra methods
    • create a concrete class that implements this interface and code these methods. Since you use Spring, just reference it as a Spring bean to inject the entity manager factory
    • reuse the simple DAO interface and make it extend your specific interface as well as GenericDao (like before)
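
    Sketched out, the three steps above might look like this (all class and method names are hypothetical):

    ```java
    // 1. The interface with the extra methods
    public interface UserDaoCustom {

        List<User> findActiveUsers();
    }

    // 2. A concrete class implementing it, declared as a Spring bean
    //    so the entity manager factory gets injected
    public class UserDaoCustomImpl implements UserDaoCustom {

        private EntityManagerFactory emf;

        public void setEntityManagerFactory(EntityManagerFactory emf) {
            this.emf = emf;
        }

        public List<User> findActiveUsers() {
            EntityManager em = emf.createEntityManager();
            return em.createQuery("SELECT u FROM User u WHERE u.active = true").getResultList();
        }
    }

    // 3. The DAO interface extends both GenericDao and the custom interface
    public interface UserDao extends GenericDao<User, Long>, UserDaoCustom {
    }
    ```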

    Done: easy as pie and beautifully designed. What more could you ask for?


    Hades seems relatively new (the first ticket dating back to April 2009), yet it looks very promising. So far, the only “flaw” I have seen is transaction management: CRUD operations are transactional by default, although, IMHO, transactionality should be handled in the service layer. However, this is relatively minor compared to all the benefits it brings. Moreover, reluctance to use such a young project can be alleviated since it will join Spring Data in the near future, which I take as a very good sign of Hades’ simplicity and capabilities.

    As for me, I have taken only a casual glance at Hades so far. Has anyone used it in “real” projects yet? In which context? With which results? I would really be interested in your feedback.

    As usual, you can find the sources for this article here.


    Categories: JavaEE Tags: hades, persistence
  • Spring Persistence with Hibernate

    This review is about Spring Persistence with Hibernate by Ahmad Reza Seddighi from Packt Publishing.


    1. 15 chapters, 441 pages, €38.99
    2. This book is intended for beginners, but more experienced developers can learn a thing or two
    3. This book covers Hibernate and Spring in relation to persistence


    1. The scope of this book is what makes it very interesting. Many books talk about Hibernate and many talk about Spring, yet I do not know of many that cover both in relation to persistence. Explaining Hibernate without describing its transactional side is pointless
    2. The book is well detailed, taking you by the hand from the basics up to a good level of knowledge on the subject
    3. It explains plain AOP, then Spring proxies before heading to the transactional stuff


    1. The book is about Hibernate, but I would have liked to see a tighter integration with JPA. It is only described as another way to configure the mappings
    2. Nowadays, I think Hibernate XML configuration is becoming obsolete. The book views XML as the main way of configuration, annotations being secondary
    3. Some subjects are not documented: for some, that's not too important (like Hibernate custom SQL operations); for others, that's a real loss (like the @Transactional Spring annotation)


    Despite some minor flaws, Spring Persistence with Hibernate lets you dive head first into the very complex subject of Hibernate. I think Hibernate has a very low cost of entry, and you can become productive with it very quickly. On the downside, mistakes will cost you much more than with plain old JDBC. This book serves you Hibernate and Spring concepts on a platter, so you will make fewer mistakes.

    Categories: Bookreview Tags: hibernate, persistence, spring
  • Framework agnostic JPA

    With the new Java EE 5 standard came the EJB3 specifications. Historically, EJBs come in 3 different flavors: (i) Entity for persistence, (ii) Session for business logic and (iii) Message-Driven for listeners. Entity EJBs were the most time-consuming to develop in their 2.1 version. Apart from the inherent complexity of EJB (local and remote interfaces, homes), developing an EJB 2 is error-prone because of the mapping mechanism. All in all, EJB 2 development really needed a very specialized IDE and expert developers. That’s the main reason why EJB 2 didn’t achieve significant market share and had to compete with JDO, Hibernate and other third-party frameworks.

    Sun eventually realized this and came up with a much simpler solution in EJB 3. Using Java 5 annotations, developing Entity EJB 3 is a breeze. The part of the EJB 3 specifications that deals with entities is called the Java Persistence API (JPA). It is a specification in its own right and will get its own release in future specification versions. Developing JPA applications is a two-step process. First, you have to create your model classes. These classes will be the foundation of your persistence layer: it would be a good idea to use this layer throughout your entire organization, since they should closely map your database model. A simple JPA-enhanced class looks like this:

    import java.io.Serializable;
    import java.util.Date;

    import static javax.persistence.GenerationType.IDENTITY;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Book implements Serializable {

        /** Unique class's serial version identifier. */
        private static final long serialVersionUID = 7292903017883188330L;

        /** Book's unique identifier. */
        @Id
        @GeneratedValue(strategy = IDENTITY)
        private long id;

        /** ISBN. */
        private String isbn;

        /** Book's title. */
        private String title;

        /** Publication's date. */
        private Date published;

        public long getId() {
            return id;
        }

        public String getIsbn() {
            return isbn;
        }

        public Date getPublished() {
            return published;
        }

        public String getTitle() {
            return title;
        }

        public void setId(long aId) {
            id = aId;
        }

        public void setIsbn(String aIsbn) {
            isbn = aIsbn;
        }

        public void setPublished(Date aPublished) {
            published = aPublished;
        }

        public void setTitle(String aTitle) {
            title = aTitle;
        }
    }

    You will have noticed 3 annotations:

    • javax.persistence.Entity: this annotation alone enables your model class to be used by JPA,
    • javax.persistence.Id: identifies the primary key field,
    • javax.persistence.GeneratedValue: tells JPA how to generate the PK on insertion.

    Mapping your classes to the database is the most important part of JPA. It is out of the scope of this article; suffice it to say that the least you can do is use the 3 annotations above.

    If you’re using Eclipse and you don’t want to learn all the JPA annotations, I recommend using the JPA facet. Just right-click your project, select Properties and go to Project Facets. Then select Java Persistence. Apart from creating the needed configuration files (see below) and providing a nice utility view for them, it adds two more views for each of your model classes:

    • JPA Structure which should map your database structure,
    • JPA Details where you can add and change JPA annotations.

    JPA details in Eclipse

    Once mapping is done, either by hand or through the neat little wizard described above, how can you CRUD these entities? First, you can keep using your favorite JPA-compatible persistence framework. In my case, I use Hibernate, which happens to be the JPA reference implementation. The following code is called once, preferably in a utility class:

    AnnotationConfiguration configuration = new AnnotationConfiguration();
    sessionFactory = configuration.buildSessionFactory();

    Now you only have to get the session for every unit-of-work and use it in your DAO:

    Session session = sessionFactory.openSession();
    // Get every book in database
    List books = session.createCriteria(Book.class).list();

    Now we have model classes that are Hibernate-independent, but our DAOs are not. Since using the JPA API instead of the Hibernate API decouples our code from the underlying framework, at no significant performance cost, it becomes highly desirable to remove this dependency. You need:

    • a compile-time dependency to ejb3-persistence.jar,
    • a runtime dependency to hibernate-entitymanager.jar or its TopLink equivalent. Transitive dependencies are of course framework dependent.

    JPA configuration is done through 2 files:

    • META-INF/orm.xml for mapping information. In our case, this is already done with annotations.
    • META-INF/persistence.xml for meta information, e.g JNDI location of the datasource to use.
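
    As an illustration, a minimal persistence.xml might look like this (the unit name matches the code below; the datasource location is hypothetical):

    ```xml
    <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
        <persistence-unit name="myManager">
            <!-- JNDI location of the datasource to use -->
            <jta-data-source>java:comp/env/jdbc/myDataSource</jta-data-source>
        </persistence-unit>
    </persistence>
    ```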

    When done, the calling sequence is very similar to Hibernate’s:

    // This is done once per application
    // Notice its similarity with Hibernate's SessionFactory
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("myManager");
    // This is done once per unit of work
    // Notice its similarity with Hibernate's Session
    EntityManager em = emf.createEntityManager();
    // Get every book in database
    List books = em.createQuery("SELECT b FROM Book b").getResultList();

    Now, using the above code, switching from a Hibernate implementation to a TopLink implementation is transparent: just remove the Hibernate JARs and put the TopLink JARs on the runtime classpath.

    Categories: Java Tags: hibernate, jpa, persistence