Posts Tagged ‘hibernate’
  • No more Hibernate? Really?

    I recently stumbled upon this punchy one-liner: “No More Hibernate!”.

    At first, I couldn’t believe what I read. Then, scrolling down, I noticed that the site was linked to jOOQ, a framework that advocates for SQL to have a first-class status in Java:

    SQL was never meant to be abstracted. To be confined in the narrow boundaries of heavy mappers, hiding the beauty and simplicity of relational data. SQL was never meant to be object-oriented. SQL was never meant to be anything other than… SQL!

So the jOOQ people do not like Hibernate, EclipseLink and JPA, and prefer writing SQL, albeit with a Java syntax. I couldn’t swallow this statement, and it prompted this article. Anyway, its final form is somewhat different from the flaming I originally intended.

I don’t remember fondly the time when I had to write SQL by hand. Perhaps I lost my ability to understand SQL when I understood Object-Oriented Programming? Sure, I wrote a monster PL/SQL script once or twice, but beyond that, I struggle to write queries with more than 2 joins. And it really bothers me to concatenate strings to create queries with optional parameters.
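That string-concatenation chore can be sketched in plain Java. This is a deliberately crude illustration (the table and column names are invented for the example, echoing the `T_PERSON` naming convention used later in this blog); it is exactly the kind of code that Criteria-style APIs were designed to eliminate:

```java
import java.util.ArrayList;
import java.util.List;

// Hand-building a query with optional parameters: the pain point
// described above, before any ORM or query DSL enters the picture.
class DynamicQueryDemo {

    static String findPersons(String firstName, String lastName) {
        StringBuilder sql = new StringBuilder("SELECT * FROM T_PERSON");
        List<String> clauses = new ArrayList<>();
        if (firstName != null) {
            clauses.add("FIRST_NAME = ?");
        }
        if (lastName != null) {
            clauses.add("LAST_NAME = ?");
        }
        if (!clauses.isEmpty()) {
            sql.append(" WHERE ").append(String.join(" AND ", clauses));
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        System.out.println(findPersons(null, null));
        System.out.println(findPersons("John", null));
        System.out.println(findPersons("John", "Doe"));
    }
}
```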

When I discovered Hibernate 7 years ago, I could only think: “Wow, these guys understand exactly what is wrong with plain JDBC!”. Since that time, there have been myBatis (formerly iBatis), more Hibernate versions, JPA v1, JPA v2, and so on. It isn’t as if SQL was really welcomed in the Java world… So, who is foolish enough to challenge this?

After letting some background thread digest this “No more Hibernate” stuff, some facts came back to the surface. In particular, have you ever interviewed a candidate proudly displaying Hibernate skills on his CV? For me, it usually goes like this:

    • First question: “Could you describe the reason behind LazyInitializationException?” <blank stare>
    • Second question: “OK, what’s the difference between eager and lazy loading?” <…>
    • When candidates answer both questions correctly - and that’s not so common - my third question is something along the lines of: “What’s your way of avoiding those exceptions?”

    At this point, given the fraction of matching candidates, I’m to the point of accepting nearly every answer, even some I’m strongly against, such as the OpenSessionInViewFilter.

    Dare I also mention the time I had to review an application’s code and discovered that each and every attribute was eagerly loaded? Or when I was asked to investigate an OutOfMemoryError in staging, but only when logging was activated? Thanks to Dynatrace, I discovered that the Hibernate entities all inherited from an abstract class that overrode toString() to use introspection to log every attribute value. To be honest, I had shown this trick to the developer earlier for debugging purposes; I wouldn’t have thought he would use it to log entities.
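The trick in question can be sketched in a few lines of plain Java (the `Person` class and its fields are invented for the example). Harmless in a debugger; disastrous when every log statement forces every attribute - including lazily loaded ones - to be resolved:

```java
import java.lang.reflect.Field;

// A toString() built on introspection: it dumps every declared field.
// With Hibernate entities, "every field" can mean "every lazy association",
// which is how logging alone can blow up the heap.
class ReflectiveToStringDemo {

    static String describe(Object o) {
        StringBuilder sb = new StringBuilder(o.getClass().getSimpleName()).append("{");
        Field[] fields = o.getClass().getDeclaredFields();
        for (int i = 0; i < fields.length; i++) {
            fields[i].setAccessible(true);
            try {
                sb.append(fields[i].getName()).append("=").append(fields[i].get(o));
            } catch (IllegalAccessException e) {
                sb.append(fields[i].getName()).append("=?");
            }
            if (i < fields.length - 1) {
                sb.append(", ");
            }
        }
        return sb.append("}").toString();
    }

    static class Person { // invented stand-in for an entity
        String name = "John";
        int age = 42;
    }

    public static void main(String[] args) {
        System.out.println(describe(new Person()));
    }
}
```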

    And those are things I know and understand; I’m also aware of more of Hibernate’s pitfalls: please check this awesome presentation by Patrycja Wegrzynowicz to realize their overall scope.

    On one hand, Hibernate really rocked the JDBC world; on the other hand, it really raised the bar for developer skills. Even if “No more Hibernate” is really too strong for my own taste, I guess having a look at your team’s skills and knowledge before choosing your DAO technology stack can go a long way toward meeting your project deadlines and avoiding bad surprises in the later stages.

    Categories: Java Tags: hibernate, jooq, persistence
  • How to test code that uses Envers

    Envers is a Hibernate module that can be configured to automatically audit changes made to your entities. Each audited entity is thus associated with a list of revisions, each revision capturing the state of the entity when a change occurs. There is, however, an obstacle I came across while I was “unit testing” my DAO, and that’s what I want to share to prevent others from falling into the same pit.

    First, let’s have an overview of the couple of steps needed to use Envers:

    • Annotate your entity with the @Audited annotation:
    @Audited
    @Entity
    public class Person {

        // Properties
    }
    • Register the Envers AuditEventListener in your Hibernate SessionFactory through Spring:
    <bean id="sessionFactory" class="org.springframework.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
        <property name="dataSource" ref="dataSource" />
        <property name="packagesToScan" value="" />
        <property name="hibernateProperties">
            <props>
                <prop key="hibernate.dialect">org.hibernate.dialect.H2Dialect</prop>
            </props>
        </property>
        <property name="schemaUpdate" value="true" />
        <property name="eventListeners">
            <map>
                <entry key="post-insert" value-ref="auditListener" />
                <entry key="post-update" value-ref="auditListener" />
                <entry key="post-delete" value-ref="auditListener" />
                <entry key="pre-collection-update" value-ref="auditListener" />
                <entry key="pre-collection-remove" value-ref="auditListener" />
                <entry key="post-collection-recreate" value-ref="auditListener" />
            </map>
        </property>
    </bean>

    <bean id="auditListener" class="org.hibernate.envers.event.AuditEventListener" />
    • Configure the Hibernate transaction manager as your transaction manager. Note that auditing won’t be triggered if you use another transaction manager (DataSourceTransactionManager comes to mind):
    <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
        <property name="sessionFactory" ref="sessionFactory" />
    </bean>
    • Now is the time to create your test class:
    @TransactionConfiguration(defaultRollback = false)
    public class PersonDaoImplTest extends AbstractTransactionalTestNGSpringContextTests {

        @Autowired
        private PersonDao personDao;

        @BeforeMethod
        protected void setUp() {
            // Populate database
        }

        @Test
        public void personShouldBeAudited() {
            Person person = personDao.get(1L);
            List<Person> history = personDao.getPersonHistory(1L);
            assertEquals(history.size(), 1);
        }
    }

    Strangely, when you execute the previous test class, the test method fails when checking that the list is not empty: it is, meaning there’s no revision associated with the entity. Moreover, nothing shows up in the log. However, the revision shows up in the audited table at the end of the test (provided you didn’t clear the table after its execution).

    Comes the dreaded question: why? Well, it seems Hibernate post-event listeners are only called when the transaction is committed. In our case, that’s consistent: the transaction is committed by Spring after method completion, while our test tries to assert inside the method.

    In order for our test to pass, we have to manually manage a transaction inside our method, to commit the update to the database.

    public void personShouldBeAuditedWhenUpdatedWithManualTransaction() {
        PlatformTransactionManager txMgr = applicationContext.getBean(PlatformTransactionManager.class);
        // A new transaction is required; the wrapping transaction is for Envers
        TransactionStatus status = txMgr.getTransaction(new DefaultTransactionDefinition(PROPAGATION_REQUIRES_NEW));
        Person person = personDao.get(1L);
        // Commit manually so that the Envers post-event listeners are triggered
        txMgr.commit(status);
        List<Person> history = personDao.getPersonHistory(1L);
        assertEquals(history.size(), 1);
    }

    On one hand, the test passes and the log shows the SQL commands accordingly. On the other hand, the cost is the additional boilerplate code needed to make it pass.

    Of course, one could (should?) question the need to test the feature in the first place. Since it’s a functionality brought by a library, the reasoning behind could be that if you don’t trust the library, don’t use it at all. In my case, it was the first time I used Envers, so there’s no denying I had to build the trust between me and the library. Yet, even with trusted libraries, I do test specific cases: for example, when using Hibernate, I create test classes to verify that complex queries get me the right results. As such, auditing qualifies as a complex use-case whose misbehaviors I want to be aware of as soon as possible.

    You’ll find the sources for this article here, in Maven/Eclipse format.

    Categories: Java Tags: envers, hibernate, persistence
  • Hibernate hard facts – Part 7

    In the seventh article of this series, we’ll have a look at the difference between saveOrUpdate() and merge().


    Hibernate’s way of updating entities is through the update() method. We can get an object, detach it from the session, and then update it later with no problem. The following snippet shows how it works:

    Session session = factory.getCurrentSession();
    Customer customer = (Customer) session.get(Customer.class, 1L);
    // ... the first session is closed, customer is now detached ...
    session = factory.getCurrentSession();
    session.update(customer);

    This will issue a SQL UPDATE statement.

    However, update()’s Javadoc states that “If there is a persistent instance with the same identifier, an exception is thrown”.

    This means that loading the same customer just before the update (and after the transaction’s beginning) will fail miserably with a NonUniqueObjectException.

    Session session = factory.getCurrentSession();
    Customer customer = (Customer) session.get(Customer.class, 1L);
    // ... the first session is closed, customer is now detached ...
    session = factory.getCurrentSession();
    session.get(Customer.class, 1L);
    // Fails here with a NonUniqueObjectException
    session.update(customer);


    Merging is a feature brought by Hibernate 3, inspired by JPA. Whereas update() conflicts if the same entity is already managed by Hibernate, merge() copies the changes made to the detached entity onto the managed one, so no exception is thrown.

    Session session = factory.getCurrentSession();
    Customer customer = (Customer) session.get(Customer.class, 1L);
    // ... the first session is closed, customer is now detached ...
    session = factory.getCurrentSession();
    session.get(Customer.class, 1L);
    // Succeeds!
    Customer merged = (Customer) session.merge(customer);

    As opposed to update(), note that merge() doesn’t associate the detached instance with Hibernate’s session but instead returns a new managed instance. Further operations should be made on the returned instance.


    When in a web context, detached entities are everywhere. Most of the time, update() should be enough to do the trick. When it’s not, think about merge(), but keep in mind to use the returned instance.

    Finally, if the entity cannot be reconstructed - because some attributes are missing from the web page - the best strategy is to load the entity from the database and then modify its attributes as needed. In this case, there’s no need for update() nor merge(): the UPDATE statement will be executed during session flush.

    You can download sources for this article here in Maven/Eclipse format.

    Categories: Java Tags: hibernate
  • The OpenSessionInView antipattern

    With such a controversial title, I’m bound to be the target of heated comments. Being provocative is not my goal, however: I just want to initiate an educated debate between people interested in thinking about the problem.

    The origin of this post was a simple discussion between developers with different levels of experience with Hibernate. The talk was about eager and lazy loading, and the infamous LazyInitializationException. Since I imagine not everyone has the problem fresh in mind, let me first summarize it.

    Most of the time, your entities have relations to one another: a course has students, an invoice has a customer, etc. These relations can take you really far (a student follows courses, which are taught by teachers, who in turn teach other courses), and since most of the time you don’t need the whole object cluster, Hibernate lets you define relations as either eager or lazy. Eager relations are loaded along with the entity, while lazy relations are replaced by a proxy. These proxies do nothing until activated by calling the getter for the association. At that time, the proxy queries the database through its encapsulated Hibernate session.
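The proxy mechanics can be caricatured in plain Java, without Hibernate at all. Every class below is invented for the illustration; the point is only to show why the getter works while the “session” is open and blows up afterwards:

```java
import java.util.function.Supplier;

// A crude stand-in for a Hibernate lazy proxy: the association is only
// fetched when its getter is called, and fetching fails once the
// "session" is closed - the plain-Java analogue of a
// LazyInitializationException.
class LazyProxyDemo {

    static class Session {
        private boolean open = true;
        void close() { open = false; }
        String fetchCustomerName() {
            if (!open) {
                // Hibernate would throw LazyInitializationException here
                throw new IllegalStateException("Session is closed");
            }
            return "ACME Corp";
        }
    }

    static class Invoice {
        private final Supplier<String> customerName; // the "proxy"
        Invoice(Session session) {
            this.customerName = session::fetchCustomerName;
        }
        String getCustomerName() {
            return customerName.get(); // hits the "DB" on access
        }
    }

    public static void main(String[] args) {
        Session session = new Session();
        Invoice invoice = new Invoice(session);
        System.out.println(invoice.getCustomerName()); // works: session open

        session.close();
        try {
            invoice.getCustomerName(); // too late: session closed
        } catch (IllegalStateException e) {
            System.out.println("failed: " + e.getMessage());
        }
    }
}
```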

    If the latter is already closed, you’ll get the LazyInitializationException, since Hibernate tries to query the DB without a connection. However, there are 3 solutions to the exception.

    The simplest is, of course, to tag the relation as eager. Hibernate will never use a proxy, and thus you’ll never have this exception (from this relation at least). This is not a solution per se: if you do this with all relations, Hibernate will load all associations, and you’ll end up with a mighty big object cluster. This defeats Hibernate’s lazy strategy and clutters the JVM’s memory with unused objects. Associations that are used throughout the application are good candidates for eager tagging, but that’s the exception, not the norm.

    Another solution for this is the OpenSessionInView “pattern”. Before explaining it, let me lay out the whole thing. Good web applications are layered, meaning the persistence layer (the one nearest to the database) is responsible for interacting with the DB. Most DAOs will open the Hibernate session at the beginning of the method (or get it from the current thread, but that’s another matter), use it to CRUD the entity, and return control to the service layer. From this point on, there should be no more interaction with the DB.

    Now the “pattern” tells us that if a Hibernate proxy needs to access the DB in the presentation layer, well, no problem: we should open the session on demand. This approach has 3 big disadvantages IMHO:

    1. each lazy initialization will issue a query, meaning each entity will need N + 1 queries, where N is the number of lazy associations. If your screen presents tabular data, reading Hibernate's log is a big hint that you're not doing as you should
    2. this completely defeats layered architecture, since you sully your nails with DB access in the presentation layer. This is a conceptual con, so I could live with it, but there is a corollary
    3. last but not least, if an exception occurs while fetching the session, it will occur during the writing of the page: you cannot present a clean error page to the user, and the only thing you can do is write an error message in the body
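The first disadvantage can be made concrete with a plain-Java query counter (every name is invented; no real database is involved):

```java
import java.util.ArrayList;
import java.util.List;

// Toy illustration of the N + 1 problem: one query for the list,
// then one more query per lazily-loaded association.
class NPlusOneDemo {

    static int queryCount = 0;

    // Stand-in for "SELECT * FROM course" - one query.
    static List<String> loadCourses(int n) {
        queryCount++;
        List<String> courses = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            courses.add("course-" + i);
        }
        return courses;
    }

    // Stand-in for a lazy proxy resolving its association - one query each.
    static String loadTeacher(String course) {
        queryCount++;
        return "teacher-of-" + course;
    }

    public static void main(String[] args) {
        List<String> courses = loadCourses(10);
        for (String course : courses) {
            loadTeacher(course); // triggers one query per row
        }
        // 1 query for the list + 10 for the associations = 11
        System.out.println(queryCount);
    }
}
```

Rendering a table of 10 courses with their teachers thus costs 11 queries instead of the single join a query tool would use.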

    Another variant of this is to use Value Objects and let a mapping framework (such as Dozer) call your proxied getter methods in the service layer. This lets you correctly handle errors, but the number of queries stays the same.

    The best solution I’ve found so far to lazy initialization is the “join fetch” feature in Hibernate, available from both the Criteria API (setFetchMode(association, FetchMode.JOIN)) and HQL (JOIN FETCH). This means your query methods will be aimed at providing a full object cluster to your upper layers. This also means - and I must confess this is a disadvantage - that your presentation/service layers will leak into your persistence methods: for example, you will have a getCourseWithTeacher() method and a getCourseWithStudents() method, where in both cases some relations will be eagerly fetched (the teacher in the first case, the students in the second).

    Nevertheless, this approach lets you issue only one query per screen/service, and it lets you cleanly manage your exceptions.

    I see only one use case for OpenSessionInView: if you have a bunch of junior Hibernate developers, no time to properly explain the whole shebang to them, and no performance issues (perhaps because of a very simple entity model or very simple screens), then it is a good trade-off (but still not a pattern). As soon as you diverge from this configuration, it will lead you into a world of pain…

    I’m interested in having your inputs and arguments about advantages I may have missed about OpenSessionInView, so long as they have nothing to do with:

    • I've always done so successfully
    • I never had any problems
    • XXX advertise for OpenSessionInView in YYY book (yes, I know about Gavin King but it shouldn't stop anyone from thinking on his own). A variant of this is: the ZZZ framework has a class that does it (I know about Spring too)

    I’m also interested in other solutions for the LazyInitializationException, if you have creative insights about it.

    Categories: JavaEE Tags: Architecture, hibernate
  • Hibernate hard facts - Part 6

    In the sixth article of this series, I will show you how to use the fetch profile feature introduced in Hibernate 3.5.

    Lazy loading is a core feature of Hibernate: it saves memory space. The reasoning behind it is: if you don’t use an association, you don’t need the object, and thus Hibernate does not load it into memory. Instead, Hibernate fills the lazily loaded attribute with a proxy that issues the SQL request when the getter is called. By default, Hibernate makes all one-to-many and many-to-many associations lazy.

    This is all well and good, but in some use cases the associations are needed. Two things may happen:

    • either the session is still open and you make one more call to the database (which, if done all the time, will cause performance problems),
    • or the session is closed and you will get the infamous LazyInitializationException, a favorite of many fresh Hibernate developers.

    In order to mitigate these, Hibernate proposes a fetch strategy that works not at the mapping level, but at the request level. Thus, you can still have lazy-loading mappings but eager fetching in some cases. This strategy is available both in the Criteria API (criteria.setFetchMode()) and in HQL (JOIN FETCH).

    Before Hibernate 3.5, however, you were stuck with setting the fetch mode of an association on each request. If you had a bunch of requests that needed an eager association, flagging the association as “join” each time was not only a waste of time but also a source of potential errors.

    Hibernate 3.5 introduced the notion of fetch profiles: a fetch profile is a placeholder where you configure fetch mode for specific associations.

    Each fetch profile has a name and an array of fetch overrides. Fetch overrides do override the fetch mode declared in the mapping. In the following example, all calls to customer.getAccounts() will be eager when the EAGER-ACCOUNTS profile is activated.

    @FetchProfile(
      name = "EAGER-ACCOUNTS",
      fetchOverrides = @FetchProfile.FetchOverride(entity = Customer.class, association = "accounts", mode = FetchMode.JOIN))

    Then, enabling the profile for a session is a no-brainer: just call session.enableFetchProfile("EAGER-ACCOUNTS").


    Granted, it’s not much, but you can save a lot of time with this feature. This example is very simple, but you can have many fetch overrides under one profile (this may even be a good practice).

    Categories: Java Tags: hibernate
  • Debugging Hibernate generated SQL

    In this article, I will explain how to debug Hibernate’s generated SQL, so that unexpected query results can be traced faster to either a faulty dataset or a bug in the query.

    There’s no need to present Hibernate anymore. Yet, for those who have lived in a cave for the past few years, let’s say that Hibernate is one of the two main ORM frameworks (the other being TopLink) that dramatically ease database access in Java.

    One of Hibernate’s main goals is to lessen the amount of SQL you write, to the point that in many cases you won’t even write a single line. However, chances are that one day Hibernate’s fetching mechanism won’t get you the result you expected, and the problems will begin in earnest. From that point, and before further investigation, you should determine which is true:

    • either the initial dataset is wrong
    • or the generated query is
    • or both if you’re really unlucky

    Being able to quickly diagnose the real cause will save you much time. To do so, the biggest step is viewing the generated SQL: if you can execute it in the right query tool, you can then compare pure SQL results to Hibernate’s results and assert the true cause. There are two solutions for viewing the SQL.

    Show SQL

    The first solution is the simplest one. It is part of Hibernate’s configuration and is heavily documented. Just add the following line to your hibernate.cfg.xml file:

        <property name="hibernate.show_sql">true</property>

    The previous snippet will likely show something like this in the log:

    select this_.PER_N_ID as PER1_0_0_, this_.PER_D_BIRTH_DATE as PER2_0_0_,  
    this_.PER_T_FIRST_NAME as PER3_0_0_,  this_.PER_T_LAST_NAME as PER4_0_0_ from T_PERSON this_

    Not very readable but enough to copy/paste in your favourite query tool.
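As a side note, if the one-line output is too hard to read, Hibernate can pretty-print it with the companion hibernate.format_sql property:

```xml
<property name="hibernate.show_sql">true</property>
<property name="hibernate.format_sql">true</property>
```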

    The main drawback of this is that if the query has parameters, they will display as ? and won’t show their values, like in the following output:

    select this_.PER_N_ID as PER1_0_0_, this_.PER_D_BIRTH_DATE as PER2_0_0_,  
    this_.PER_T_FIRST_NAME as PER3_0_0_, this_.PER_T_LAST_NAME as PER4_0_0_  
    from T_PERSON this_ where (this_.PER_D_BIRTH_DATE=? and this_.PER_T_FIRST_NAME=? and this_.PER_T_LAST_NAME=?)

    If there are too many parameters, you’re in for a world of pain: replacing each parameter with its value by hand will take too much time.

    Yet, IMHO, this simple configuration should be enabled in all environments (save production), since it can easily be turned off.

    Proxy driver

    The second solution is more intrusive and involves a third party product but is way more powerful. It consists of putting a proxy driver between JDBC and the real driver so that all generated SQL will be logged. It is compatible with all ORM solutions that rely on the JDBC/driver architecture.
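The idea can be caricatured with the standard java.sql.Driver interface. The class below is a made-up, heavily simplified sketch - the real P6Spy internals differ, and a complete implementation would also wrap the returned Connection and its Statements in logging proxies:

```java
import java.sql.*;
import java.util.Properties;
import java.util.logging.Logger;

// Sketch of a proxy driver: it recognizes its own URL scheme,
// rewrites the URL for the real driver, and delegates to it.
// The "jdbc:spy:" prefix is invented for the example.
public class SpyDriver implements Driver {

    private static final String PREFIX = "jdbc:spy:";

    // Translate the proxy URL into the URL the real driver understands.
    static String realUrl(String url) {
        return "jdbc:" + url.substring(PREFIX.length());
    }

    @Override
    public boolean acceptsURL(String url) {
        return url != null && url.startsWith(PREFIX);
    }

    @Override
    public Connection connect(String url, Properties info) throws SQLException {
        if (!acceptsURL(url)) {
            return null; // JDBC contract: not our URL, let another driver handle it
        }
        // Here a full implementation would log and wrap; we just delegate.
        return DriverManager.getConnection(realUrl(url), info);
    }

    @Override
    public DriverPropertyInfo[] getPropertyInfo(String url, Properties info) {
        return new DriverPropertyInfo[0];
    }

    @Override public int getMajorVersion() { return 1; }
    @Override public int getMinorVersion() { return 0; }
    @Override public boolean jdbcCompliant() { return false; }
    @Override public Logger getParentLogger() { return Logger.getGlobal(); }

    public static void main(String[] args) {
        SpyDriver driver = new SpyDriver();
        System.out.println(driver.acceptsURL("jdbc:spy:h2:mem:test"));
        System.out.println(realUrl("jdbc:spy:h2:mem:test"));
    }
}
```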

    P6Spy is a driver that does just that. Despite its age (the last release dates from 2003), it is not obsolete and serves our purpose just fine. It consists of the proxy driver itself and a properties configuration file, both of which should be present on the classpath.

    In order to leverage P6Spy feature, the only thing you have to do is to tell Hibernate to use a specific driver:

        <!-- Only this line changes -->
        <property name="connection.driver_class">com.p6spy.engine.spy.P6SpyDriver</property>

    The bulk of the configuration lies in the properties file. Notice in particular the realdriver parameter, which tells P6Spy which real driver to redirect the calls to.
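In the P6Spy versions of that era, this file is named spy.properties. A minimal sketch might look like the following (the H2 driver as realdriver is an assumption for this example; check the sample file shipped with P6Spy for the full set of options, such as the log destination):

```properties
# spy.properties (excerpt)
# the real driver P6Spy delegates the actual work to
realdriver=org.h2.Driver
```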

    With just these, the above output becomes:

      select this_.PER_N_ID as PER1_0_0_, this_.PER_D_BIRTH_DATE as PER2_0_0_,
      this_.PER_T_FIRST_NAME as PER3_0_0_, this_.PER_T_LAST_NAME as PER4_0_0_ from T_PERSON this_ where (
      this_.PER_D_BIRTH_DATE=? and this_.PER_T_FIRST_NAME=? and this_.PER_T_LAST_NAME=?)|
      select this_.PER_N_ID as PER1_0_0_, this_.PER_D_BIRTH_DATE as PER2_0_0_,
      this_.PER_T_FIRST_NAME as PER3_0_0_, this_.PER_T_LAST_NAME as PER4_0_0_ from T_PERSON this_ where (
      this_.PER_D_BIRTH_DATE='2010-04-10' and this_.PER_T_FIRST_NAME='Johnny' and this_.PER_T_LAST_NAME='Be Good')

    Of course, the configuration can go further. For example, P6Spy knows how to redirect the logs to a file or to Log4J (it currently misses an SLF4J adapter, but anyone could code one easily).

    If you need to use P6Spy in an application server, the configuration should be done on the application server itself, at the datasource level. In that case, every single use of this datasource will be traced, be it from Hibernate, TopLink, iBatis or plain old JDBC.

    In Tomcat, for example, put the P6Spy configuration file in common/classes and update the datasource configuration to use the P6Spy driver.

    The source code for this article can be found here.

    To go further:

    • P6Spy official site
    • Log4jdbc, a Google Code contender that aims to offer the same features
    Categories: Java Tags: hibernate, p6spy, SQL
  • Spring Persistence with Hibernate

    This review is about Spring Persistence with Hibernate by Ahmad Reza Seddighi from Packt Publishing.


    Facts:

    1. 15 chapters, 441 pages, 38€99
    2. This book is intended for beginners but more experienced developers can learn a thing or two
    3. This book covers Hibernate and Spring in relation to persistence


    Pros:

    1. The scope of this book is what makes it very interesting. Many books talk about Hibernate and many talk about Spring. Yet, I do not know of many which talk about the use of both in relation to persistence. Explaining Hibernate without describing the transactional side is pointless
    2. The book is well detailed, taking you by the hand from the bottom to reach a good level of knowledge on the subject
    3. It explains plain AOP, then Spring proxies before heading to the transactional stuff


    Cons:

    1. The book is about Hibernate, but I would have liked to see a tighter integration with JPA. It is only described as another way to configure the mappings
    2. Nowadays, I think Hibernate XML configuration is becoming obsolete. The book views XML as the main way of configuration, annotations being secondary
    3. Some subjects are not documented: for some, that's not too important (like Hibernate custom SQL operations), for others, that's a real loss (like the @Transactional Spring annotation)


    Despite some minor flaws, Spring Persistence with Hibernate lets you dive head first into the very complex subject of Hibernate. I think that Hibernate has a very low entry ticket, and you can be productive with it very quickly. On the downside, mistakes will cost you much more than with plain old JDBC. This book serves you Hibernate and Spring concepts on a platter, so you will make fewer mistakes.

    Categories: Bookreview Tags: hibernate, persistence, spring
  • Hibernate hard facts – Part 5

    In the fifth article of this series, I will show you how to manage logical DELETE in Hibernate.

    Most of the time, requirements are not concerned about deletion management. In those cases, common sense and disk space plead for physical deletion of database records. This is done through the DELETE keyword in SQL. In turn, Hibernate uses it when calling the Session.delete() method on entities.

    Sometimes, though, for audit or legal purposes, requirements enforce logical deletion. Let’s take a products catalog as an example. Products regularly go in and out of the catalog. New orders shouldn’t be placed on outdated products. Yet, you can’t physically remove product records from the database since they could have been used on previous orders.

    Some strategies are available in order to implement this. Since I’m not a DBA, I know of only two. From the database side, both add a column, which represents the deletion status of the record:

    • either a boolean column, which represents the active or deleted status
    • or, for more detailed information, a timestamp column, which states when the record was deleted, NULL meaning the record is not deleted and thus active

    Managing logical deletion is a two-step process: you have to manage both selection, so that only active records are returned, and deletion, so that the status marker column is updated the right way.
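With the timestamp variant, the two steps translate to SQL along these lines (the T_PRODUCT table name is assumed for the example; the DELETION_DATE column matches the filter condition used below):

```sql
-- Selection: only active records
SELECT * FROM T_PRODUCT WHERE DELETION_DATE IS NULL;

-- "Deletion": mark the record instead of removing it
UPDATE T_PRODUCT SET DELETION_DATE = CURRENT_TIMESTAMP WHERE ID = ?;
```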


    A naive use of Hibernate would map this column to a class attribute. Then, selecting active records would mean a WHERE clause on the column value, and deleting a record would mean setting the attribute and calling the update() method. This approach has the merit of working. Yet, it fundamentally couples your code to your implementation: if you ever migrate your status column from boolean to timestamp, you’ll have to update your code everywhere it is used.

    The first thing you have to do to mitigate the effects of such a migration is to use a filter.

    Hibernate3 provides an innovative new approach to handling data with “visibility” rules. A Hibernate filter is a global, named, parameterized filter that can be enabled or disabled for a particular Hibernate session.

    Such filters can then be used throughout your code. Since the filtering criterion is thus coded in a single place, updating the database schema has only little impact on your code. Back to the product example, this is done like this:

    @Entity
    @FilterDef(name = "activeProducts")
    @Filter(name = "activeProducts", condition = "DELETION_DATE IS NULL")
    public class Product {

      @Id
      @Column(nullable = false)
      @GeneratedValue(strategy = AUTO)
      private Integer id;

    Note: in the attached source, I also map the DELETION_DATE on an attribute. This is not needed in most cases. In mine, however, it permits me to auto-create the schema with Hibernate.

    Now, once the filter is enabled with session.enableFilter("activeProducts"), queries will filter out logically deleted records.


    In order to remove the filter, either use Session.disableFilter() or use a new Session object (remember that factory.getCurrentSession() will probably return the same one, so factory.openSession() is in order).


    The previous step factorized the “select active-only records” feature, but logically deleting a product is still coupled to your implementation. Hibernate lets us decouple further: you can overload any CRUD operation on entities! Thus, deletion can be overloaded to issue an update of the right column, using Hibernate’s @SQLDelete annotation on the entity.


    Now, calling session.delete() on a Product entity will update the record instead of removing it, as the log confirms.


    With CRUD overloading, you could even suppress the ability to select inactive records altogether. I wouldn’t recommend this approach, however, since you then couldn’t select inactive records at all. IMHO, it’s better to stick with filters, since they can be enabled and disabled as needed.


    Hibernate lets you loosen the coupling between your code and the database, so that you can migrate from physical to logical deletion with very localized changes. To do so, Hibernate offers two complementary features: filters and CRUD overloading. These features should be part of any architect’s bag of tricks, since they can be lifesavers, as in the previous cases.

    You can find the sources of this article here in Maven/Eclipse format.

    Categories: Java Tags: hibernate
  • Hibernate hard facts - Part 4

    In the fourth article of this series, I will show the subtle differences between the get() and load() methods.

    Hibernate, like life, can be full of surprises. Today, I will share one with you: have you ever noticed that Hibernate provides you with 2 methods to load a persistent entity from the database tier? These two methods are get(Class, Serializable) and load(Class, Serializable) of the Session class, and their respective variations.

    Strangely enough, they both have the same signature. Stranger still, both of their API descriptions start the same:

    Return the persistent instance of the given entity class with the given identifier.

    Most developers use them indifferently. That’s a mistake: if the entity is not found, get() will return null, whereas load() will throw a Hibernate exception (an ObjectNotFoundException). This is well described in the API:

    Return the persistent instance of the given entity class with the given identifier, assuming that the instance exists. You should not use this method to determine if an instance exists (use get() instead). Use this only to retrieve an instance that you assume exists, where non-existence would be an actual error.

    Truth be told, the real difference lies elsewhere: the get() method returns an instance, whereas the load() method returns a proxy. Not convinced? Try the following code snippet:

    Session session = factory.getCurrentSession();
    Owner owner = (Owner) session.get(Owner.class, 1);
    // Test the class of the object
    assertSame(owner.getClass(), Owner.class);

    The test passes, asserting that the owner’s class is in fact Owner. Now, in another session, try the following:

    Session session = factory.getCurrentSession();
    Owner owner = (Owner) session.load(Owner.class, 1);
    // Test the class of the object
    assertNotSame(owner.getClass(), Owner.class);

    The test will pass too, asserting that the owner’s class is not Owner. If you spy on the object in the debugger, you’ll see a Javassist-proxied instance whose fields are not initialized! Notice that in both cases, you are able to safely cast the instance to Owner. Calling getters will also return the expected results.

    Why call the load() method then? Because it returns a proxy, load() won’t hit the database until a getter method is called.
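    The laziness of load() can be illustrated outside Hibernate with a plain JDK dynamic proxy. In this toy sketch (the Owner interface and all values are made up for the demo), nothing happens until a method is actually invoked on the proxy:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class LazyDemo {

    // Hypothetical entity interface standing in for a mapped class
    interface Owner {
        String getName();
    }

    public static void main(String[] args) {
        // The handler simulates the database hit that Hibernate defers
        InvocationHandler handler = (proxy, method, methodArgs) -> {
            System.out.println("hitting the database now");
            return "John Doe";
        };
        Owner owner = (Owner) Proxy.newProxyInstance(
                LazyDemo.class.getClassLoader(),
                new Class<?>[] { Owner.class },
                handler);
        // The proxy exists, but no method has been called yet
        System.out.println("proxy created, no database hit yet");
        // The simulated database hit happens only here
        System.out.println(owner.getName());
    }
}
```

    Hibernate’s Javassist proxies work on concrete classes rather than interfaces, but the deferred-work principle is the same.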

    Moreover, these features are also available in JPA through the EntityManager, with the find() and getReference() methods respectively.

    Yet, both behaviours are modified by Hibernate’s caching mechanism. Try the following code snippet:

    // Loads the reference
    session.load(Owner.class, 1);
    Owner owner = (Owner) session.get(Owner.class, 1);

    According to what was said before, owner’s real class should be the real McCoy. Dead wrong! Since load() was previously called, get() looks in the Session cache (the 1st level one) and returns a proxy!

    The behaviour is symmetrical with the following test, which will pass although it’s counter-intuitive:

    // Gets the object
    session.get(Owner.class, 1);
    // Loads the reference, but looks for it in the cache and loads
    // the real entity instead
    Owner owner = (Owner) session.load(Owner.class, 1);
    // Test the class of the object
    assertSame(owner.getClass(), Owner.class);

    Conclusion: Hibernate does a wonderful job at making ORM easier. Yet, it’s not an easy framework: be very wary of subtle behaviour differences.

    The sources for the entire hard facts series are available here in Eclipse/Maven format.

    Categories: Java Tags: get, hibernate, jpa, load, proxy
  • Hibernate hard facts part 3

    Hibernate Logo

    In the third article of this series, I will show how to tweak Hibernate so as to convert any database data type to and from any Java type, and thus decouple your database model from your object model.

    Custom type mapping

    Hibernate is a very powerful asset in any application needing to persist data. As an example, I was tasked this week with generating the Object-Oriented model for a legacy database. It seemed simple enough, at first glance. Then I discovered a big legacy design flaw: for historical reasons, dates were stored as numbers in the YYYYMMDD format. For example, 11th December 2009 was 20091211. I couldn’t, or rather wouldn’t, change the database, and yet I didn’t want to pollute my neat little OO model with Integer instead of java.util.Date.
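    Before wiring anything into Hibernate, the conversion itself is plain Java. A quick sketch of the round trip between the legacy integer format and java.util.Date (class and variable names are mine):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class LegacyDateDemo {
    public static void main(String[] args) throws ParseException {
        // A date stored the legacy way: 11th December 2009
        int legacy = 20091211;
        // Integer -> Date
        Date date = new SimpleDateFormat("yyyyMMdd").parse(String.valueOf(legacy));
        // Date -> Integer, closing the round trip
        int back = Integer.parseInt(new SimpleDateFormat("yyyyMMdd").format(date));
        System.out.println(back);
    }
}
```

    The custom type below is essentially this round trip, plugged into Hibernate’s read and write paths.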

    After browsing through the Hibernate documentation, I was confident it made this possible in a very simple way.

    Creating a custom type mapper

    The first step, which is also the biggest, consists of creating a custom type. This type is not a real “type” but a mapper that knows how to convert from the database type to the Java type and vice-versa. In order to do so, all you have to do is create a class that implements org.hibernate.usertype.UserType. Let’s have a look at each implemented method in detail.

    The following method gives away what class will be returned at the end of read process. Since I want a Date instead of an Integer, I naturally return the Date class.

    public Class returnedClass() {
      return Date.class;
    }

    The next method returns the types (from the Types constants) of the column(s) that will be read from. It is interesting to note that Hibernate lets you map more than one column, thus offering the same feature as the JPA @Embedded annotation. In my case, I read from a single numeric column, so I should return a single-element array filled with Types.INTEGER.

    public int[] sqlTypes() {
      return new int[] {Types.INTEGER};
    }

    This method tells Hibernate whether instances of the returned class are immutable (like Strings and the primitive wrapper types) or mutable (like most others). This is very important because if false is returned, the field using this custom type won’t be checked to determine whether an update should be performed. If the field itself is replaced, an update will of course occur in both cases (mutable or immutable). Though there is a big controversy in the Java API, Date is mutable, so the method should return true.

    public boolean isMutable() {
      return true;
    }

    I can’t guess how the following method is used but the API states:

    Return a deep copy of the persistent state, stopping at entities and at collections. It is not necessary to copy immutable objects, or null values, in which case it is safe to simply return the argument.

    Since we just said Date instances are mutable, we cannot just return the object; we have to return a clone instead. That’s made possible because Date’s clone() method is public.

    public Object deepCopy(Object value) throws HibernateException {
      return ((Date) value).clone();
    }

    The next two methods do the real work of respectively reading from and writing to the database. Notice how the API exposes a ResultSet object to read from and a PreparedStatement object to write to.

    public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
      throws HibernateException, SQLException {
      int value = rs.getInt(names[0]);
      // wasNull() is only meaningful after the column has been read
      if (rs.wasNull()) {
        return null;
      }
      try {
        return new SimpleDateFormat("yyyyMMdd").parse(String.valueOf(value));
      } catch (ParseException e) {
        throw new HibernateException(e);
      }
    }

    public void nullSafeSet(PreparedStatement statement, Object value, int index)
      throws HibernateException, SQLException {
      if (value == null) {
        statement.setNull(index, Types.INTEGER);
      } else {
        int intValue = Integer.parseInt(new SimpleDateFormat("yyyyMMdd").format((Date) value));
        statement.setInt(index, intValue);
      }
    }

    The next two methods are implementations of equals() and hashCode() from a persistence point of view.

    public int hashCode(Object x) throws HibernateException {
      return x == null ? 0 : x.hashCode();
    }

    public boolean equals(Object x, Object y) throws HibernateException {
      if (x == null) {
        return y == null;
      }
      return x.equals(y);
    }

    For equals(), since Date is mutable, we can’t just check for object identity: the same instance could have been changed in the meantime.
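    A short, standalone illustration of why identity checks are not enough for a mutable type like Date (the timestamp values are arbitrary):

```java
import java.util.Date;

public class DateEqualityDemo {
    public static void main(String[] args) {
        Date original = new Date(0L);
        Date copy = (Date) original.clone();
        // Distinct instances, yet equal by value
        System.out.println(original != copy && original.equals(copy));
        // Mutate the copy in place: value equality is broken
        copy.setTime(86400000L);
        System.out.println(original.equals(copy));
    }
}
```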

    The replace() method is used for merging purposes. It couldn’t be simpler.

    public Object replace(Object original, Object target, Object owner) throws HibernateException {
      Date copy = (Date) ((Date) original).clone();
      // Not reusable: the owning type and its setter are hardcoded
      ((Owner) owner).setDate(copy);
      return copy;
    }

    My implementation of the replace() method is not reusable: both the owning type and the name of the setter method must be known, which makes reusing the custom type a bit hard. If I wished to reuse it, the method’s body would need to use the java.lang.reflect package and make guesses about the method names used. Thus, the algorithm for creating a reusable user type would be along these lines:

    1. list all the methods that are setters and take the target class as an argument
    2. if no method matches, throw an error
    3. if a single method matches, call it with the target argument
    4. if more than one method matches, call the associated getter to check which one returns the original object
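    As a sketch of the first step above, here is how such setters could be listed with java.lang.reflect (the SetterFinder class and the sample Owner are made up for the demo):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class SetterFinder {

    // Lists all public setters of ownerClass taking exactly one argument of argType
    static List<Method> findSetters(Class<?> ownerClass, Class<?> argType) {
        List<Method> matches = new ArrayList<Method>();
        for (Method method : ownerClass.getMethods()) {
            if (method.getName().startsWith("set")
                    && method.getParameterTypes().length == 1
                    && method.getParameterTypes()[0] == argType) {
                matches.add(method);
            }
        }
        return matches;
    }

    // Sample owning class with a single Date setter
    public static class Owner {
        private Date date;

        public void setDate(Date date) {
            this.date = date;
        }
    }

    public static void main(String[] args) {
        List<Method> setters = findSetters(Owner.class, Date.class);
        System.out.println(setters.size());
        System.out.println(setters.get(0).getName());
    }
}
```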

    The next two methods are used in the caching process, respectively in serialization and deserialization. Since Date instances are serializable, it is easy to implement them.

    public Serializable disassemble(Object value) throws HibernateException {
      return (Date) ((Date) value).clone();
    }

    public Object assemble(Serializable cached, Object owner) throws HibernateException {
      return ((Date) cached).clone();
    }

    Declare the type on entity

    Once the custom UserType is implemented, you need to make it accessible to the entity.

    @TypeDef(name="dateInt", typeClass = DateIntegerType.class)
    public class Owner {

    Use the type

    The last step is to annotate the field with the type’s name.

    @TypeDef(name = "dateInt", typeClass = DateIntegerType.class)
    public class Owner {

      @Type(type = "dateInt")
      private Date date;

      public Date getDate() {
        return date;
      }

      public void setDate(Date date) {
        this.date = date;
      }
    }

    You can download the sources for this article here.


  • Shark! Shark! in the IT business

    Shark! Shark! screenshot

    Do you remember the classic Atari ST game where you, as a fish, ate other fish to grow bigger while avoiding being eaten by bigger fish? It seems the last 4 years have seen their share of fish eating and being eaten in the IT business.

    Oh, it began innocently enough. JBoss just hired the Hibernate team. It was in November 2005. Less than 6 months later, JBoss was bought by Red Hat. I thought: “Wow, now Red Hat can provide a whole stack from Operating System to Middleware!”

    January 2008: Oracle finally buys BEA Systems, the only serious commercial competitor of IBM in the application server market. It had already attempted the buyout in October 2007. I then thought: “Oracle can provide the last 2 tiers of any JEE application, Middleware and Database (please don’t raise the issue of Oracle Application Server). That’s interesting!”

    At about the same time, Sun had the same idea the other way around when it bought MySQL. I thought: “They have both invested in Open Source, that’s good strategy!”

    When SpringSource bought Hyperic, my sense of wonder began to dry up.

    Now that Oracle has bought Sun and VMware has bought SpringSource, I’m finally more concerned than ecstatic. Products that used to compete against one another are now in the same provider’s portfolio. No business has an interest in keeping redundant software in its catalog. And a lack of competition means no evolution, as Darwin theorized.

    I always complained about Microsoft’s choices being more marketing-oriented than technically sound. It could do that as long as it was the major player in its field. Java and Google put an end to that (ok, not entirely, this is open to discussion, but let me make my point). In turn, and although Java was meant to stay a loooong time in version 1.4, Sun made it evolve because of the progress made by the .Net framework. From my humble point of view, that’s a virtuous circle in IT Darwinism.

    One may rightfully think the circle is now broken, when one looks at the following points:

    • Oracle Application Server and WebLogic are now owned by Oracle. Whereas stopping the development of OC4J may not be a bad thing in itself, I can't be so sure about the whole stack surrounding both.
    • Even worse, why would Oracle need to put money in Glassfish development, now that it has WebLogic? I'm not an ardent Glassfish defender but it plays its role in the JEE ecosystem.
    • I can't even think about Oracle database and MySQL, now that Oracle distributes Oracle Lite for free. That goes without saying but it is for development purposes only and without an OpenSource license of course.
    • What about JDeveloper and NetBeans? I fear only very bad things since if NetBeans development is stopped, it will mean IBM will do as it pleases with Eclipse (yes, I know about the Eclipse Foundation, but it still smells).
    • And poor JRockit?
    • And so on, ad nauseam.

    The present concentration trend raises concerns, at least for me, because it may well end with only one single player left standing. Remember the Atari game I talked about at the beginning? It was aptly named “Shark! Shark!” because regardless of your size, you could always be eaten by the shark. Who will end up being the shark, I can only guess, but if this doesn’t stop soon, we are all about to get eaten!

  • Hibernate hard facts part 2

    Hibernate Logo

    Hibernate is a widely-used ORM framework. Many organizations use it across their projects in order to manage their data access tier. Yet, many developers working with Hibernate don’t grasp the full measure of its features. In this series of articles, I will try to dispel some common misunderstandings regarding Hibernate.

    Part 1: Updating a persistent object

    Directional associations

    An association is a relationship between two classes that denotes a connection between them. Navigability is the way by which an instance of one class can access an instance of the other. UML describes the following navigabilities:

    • unidirectional navigability, from class A to class B
    • bidirectional navigability, from class A to class B and from the latter to the former
    • unknown navigability (early in the design)

    When talking about persistence, associations are designed from (or to, depending on what comes first) foreign key constraints. If I want to express that a customer can have multiple orders, chances are the Order table has a foreign key constraint on the Customer table. On the other hand, there’s no such thing as directionality in the relational world. This means that the designed directionality is a feature brought to you by the object world and the mapping tool used.

    In the relational world, we speak of a relationship’s ownership to identify where the data used by the join resides. In the previous example, the Order table is the owner since it holds the foreign key constraint on the Customer table. This ownership has no relation to navigability whatsoever.

    In the case of a bidirectional association, one would think that creating the association from A to B would be enough. For example, in a one-to-many association, having a customer, if I create an order, I just have to add this order to my customer’s orders, don’t I? Unfortunately, this is not the case. If I forget to associate both ends, I end up catching very nasty exceptions. Interestingly enough, they differ depending on which association end you miss.

    It is thus a good practice to create an addOrder() method on the Customer class to manage such subtleties:

    /**
     * Adds an order to this customer's orders. Manages both ends
     * of the bidirectional association.
     *
     * @param order the order to add
     */
    public void addOrder(Order order) {
      orders.add(order);
      order.setCustomer(this);
    }

    Whether you use this tip or not, remember to document your choice in the Javadoc; if you don’t, client code will have to find out the hard way.

    Now, if one wants to remove an order from the database, one should normally call the session.delete() method. As above, this will throw an exception if one didn’t previously remove this order from the customer’s orders. This is slightly more of a hindrance since it cannot be factored into the domain class alone.

    // We got the session somehow and the snippet is running in a transaction
    Order order = (Order) session.load(Order.class, 1L);
    Customer customer = order.getCustomer();
    // Remove the order from both ends before deleting it
    customer.getOrders().remove(order);
    session.delete(order);

    We’ve previously seen a simple many-to-one association creation. This gets even worse in the case of a many-to-many association update, since updating means removing and creating.

    You can do it in a method, provided the getters return the real collection and not an unmodifiable view (which is otherwise a good practice).

    // In the Customer class
    public void removeOrder(Order order) {
      orders.remove(order);
      order.setCustomer(null);
    }

    // In the Order class
    public void removeCustomer(Customer customer) {
      customer.getOrders().remove(this);
      this.customer = null;
    }

    You will find the test cases here.

    Categories: Java Tags: association, bidirectional, hibernate
  • Hibernate hard facts part 1

    Hibernate Logo

    Hibernate is a widely-used ORM framework. Many organizations use it across their projects in order to manage their data access tier. Yet, many developers working with Hibernate don’t grasp the full measure of its features. In this series of articles, I will try to dispel some common misunderstandings regarding Hibernate.

    Updating a persistent object

    A common mistake when using Hibernate is to call the update() method on an already persistent object:

    Person person = (Person) session.load(Person.class, 1L);
    person.setLastName("New last name");
    // The following call is unnecessary: the object is already persistent
    session.update(person);

    This will get you the following Hibernate outputs :

    Hibernate: select person0_.id as id0_0_,
        person0_.birthDate as birthDate0_0_,
        person0_.FIRST_NAME as FIRST3_0_0_,
        person0_.LAST_NAME as LAST4_0_0_ from Person person0_ where person0_.id=?
    Hibernate: update Person set birthDate=?, FIRST_NAME=?, LAST_NAME=? where id=?

    Now remove the call to update(). You get exactly the same output! If you check the database, the result is the same: calling update() does nothing since the object is already persistent. In debug mode, you can even check that the update statement is issued when committing the transaction in both cases.

    You will find proof of this assertion here.

  • Framework agnostic JPA

    With the new JEE 5 standard came the EJB3 specifications. Historically, EJBs come in 3 different flavors: (i) Entity for persistence, (ii) Session for business logic and (iii) Message-Driven for listeners. Entity EJBs are the most time-consuming to develop in their 2.1 version. Apart from the inherent complexity of EJBs (local and remote interfaces, homes), developing an EJB 2 is error-prone because of the mapping mechanism. All in all, EJB 2 development really needs a very specialized IDE and expert developers. That’s the main reason why EJB 2 didn’t achieve a significant market share and had to compete with JDO, Hibernate and other third-party frameworks.

    Sun eventually realized this and came up with a much simpler solution in EJB 3. Using Java 5 annotations, developing an Entity EJB 3 is a breeze. The part of the EJB 3 specifications that deals with Entities is called the Java Persistence API (JPA). It is a specification in its own right and will eventually be published on its own. Developing JPA applications is a two-step process. First, you have to create your model classes. These classes will be the foundation of your persistence layer: it would be a good idea to use this layer throughout your entire organization, since these classes should closely map your database model. A simple JPA-enhanced class would look like this:

    import static javax.persistence.GenerationType.IDENTITY;

    import java.io.Serializable;
    import java.util.Date;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    @Entity
    public class Book implements Serializable {

        /** Unique class's serial version identifier. */
        private static final long serialVersionUID = 7292903017883188330L;

        /** Book's unique identifier. */
        @Id
        @GeneratedValue(strategy = IDENTITY)
        private long id;

        /** ISBN. */
        private String isbn;

        /** Book's title. */
        private String title;

        /** Publication's date. */
        private Date published;

        public long getId() {
            return id;
        }

        public String getIsbn() {
            return isbn;
        }

        public Date getPublished() {
            return published;
        }

        public String getTitle() {
            return title;
        }

        public void setId(long aId) {
            id = aId;
        }

        public void setIsbn(String aIsbn) {
            isbn = aIsbn;
        }

        public void setPublished(Date aPublished) {
            published = aPublished;
        }

        public void setTitle(String aTitle) {
            title = aTitle;
        }
    }

    You have noticed 3 annotations:

    • javax.persistence.Entity: this annotation alone enables your model's class to be used by JPA,
    • javax.persistence.Id: identifies the primary key column,
    • javax.persistence.GeneratedValue: tells JPA how to generate the primary key when inserting.

    Mapping your classes against the database is the most important part of JPA. It is out of the scope of this article; suffice it to say that the least you can do is use the 3 annotations above.

    If you’re using Eclipse and don’t want to learn all the JPA annotations, I recommend using the JPA facet. Just right-click your project, select Properties and go to Project Facets. Then select Java Persistence. Apart from creating the needed configuration files (see below) and providing a nice utility view for them, it adds two more views for each of your model classes:

    • JPA Structure which should map your database structure,
    • JPA Details where you can add and change JPA annotations.

    JPA details in Eclipse

    Once the mapping is done, either through brute force or the neat little wizard described just above, how can you CRUD these entities? First, you can continue using your favorite JPA-compatible persistence framework. In my case, I use Hibernate, which happens to be the JPA reference implementation. The following code is called once, preferably in a utility class:

    AnnotationConfiguration configuration = new AnnotationConfiguration();
    // configure() reads hibernate.cfg.xml from the classpath
    sessionFactory = configuration.configure().buildSessionFactory();

    Now you only have to get the session for every unit-of-work and use it in your DAO:

    Session session = sessionFactory.getCurrentSession();
    // Get every book in the database
    List books = session.createCriteria(Book.class).list();

    Now we have model classes that are Hibernate-independent, but our DAOs are not. Since using the JPA API instead of the Hibernate API decouples our code from the underlying framework, at no significant performance cost, it becomes highly desirable to remove this dependency. You need:

    • a compile-time dependency to ejb3-persistence.jar,
    • a runtime dependency to hibernate-entitymanager.jar or its TopLink equivalent. Transitive dependencies are of course framework dependent.

    JPA configuration is done through 2 files:

    • META-INF/orm.xml for mapping information. In our case, it is already done with annotations.
    • META-INF/persistence.xml for meta-information, e.g. the JNDI location of the datasource to use.
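    As an illustration, a minimal persistence.xml for the “myManager” unit might look like this sketch (the provider line is Hibernate-specific, and the datasource JNDI name is a hypothetical example):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <persistence-unit name="myManager">
    <!-- Only needed when several JPA providers are on the classpath -->
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
    <!-- Hypothetical JNDI name of the datasource -->
    <non-jta-data-source>jdbc/myDataSource</non-jta-data-source>
  </persistence-unit>
</persistence>
```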

    When done, the calling sequence is very similar to Hibernate’s:

    // This is done once per application
    // Notice its similarity with Hibernate's SessionFactory
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("myManager");
    // This is done once per unit of work
    // Notice its similarity with Hibernate's Session
    EntityManager em = emf.createEntityManager();
    // Get every book in the database
    List books = em.createQuery("SELECT b FROM Book b").getResultList();

    Now, using the above code, switching from a Hibernate implementation to a TopLink implementation is transparent. Just remove the Hibernate JARs and put the TopLink JARs on the runtime classpath.

    Categories: Java Tags: hibernate, jpa, persistence