Posts Tagged ‘database’
  • A use-case for Google Fusion Tables


    I’ve been traveling somewhat during the last few years, and I wanted to check on a map which countries I already visited. There’s one requirement: I want the whole area of countries I’ve visited to be highlighted. There are a couple of solutions for that:

The Elastic stack:: Data can be stored in Elasticsearch, and there’s a world map visualization available for the Kibana dashboard. This works, but:

    1. The visualization handles geo-location, but not highlighting an entire country.
    2. It requires some setup, and I’m a developer - read: I’m lazy.

    Dedicated app:: To be honest, I didn’t search, but I assume “there is an app for that”. However, by using a dedicated app you’re no longer the owner of your own data, and that doesn’t suit me.

    Google Maps:: Google Maps allows adding layers and dedicated data. This solution would be a great fit if it were possible to easily highlight countries. And there would still be a lot of JavaScript to write.

    Enter [Google Fusion Tables^].

    WARNING: Fusion Tables is an experimental service offered by Google.

    == Creating the table

    While it’s possible to input the data directly into Fusion Tables, it’s easier to do that in a simple Google Spreadsheet. The following is sample data:

    [width="80%",cols="5*",options="header",align="center"]
    |===
    h|Date h|Country h|Place h|Type h|Event

    |Voxxed Days
    |===

    Such a spreadsheet can easily be imported into Fusion Tables.

    1. Connect Fusion Tables on Google Drive. From the Drive homepage, go to menu:My Drive[More > Connect More Apps]. Search for “fusion”. Click btn:[Connect]. + image::connect-fusion.png[Connect Google Fusion Tables on Drive,737,675,align="center"] +
    2. Create a new Fusion Tables document. From the Drive homepage, click menu:New[More > Google Fusion Tables]. Then select btn:[Google Spreadsheet]. + image::import-spreadsheet.png[Import Google Spreadsheet into Fusion Tables,831,547,align="center"] +
    3. Select the desired spreadsheet and click btn:[Select]. + image::import-new-table.png[Import new table,786,401,align="center"] +
    4. Name the table accordingly and click btn:[Finish]. It yields something akin to the following: + image::fusion-table.png[Imported table,596,284,align="center"]

    Out of the box, there’s a Map of Country tab that displays each data row on a world map. Unfortunately, each row is a simple dot at the center of the country. It doesn’t fulfil the initial requirement of highlighting the entire country area.

    image::default-map.png[Default world map view,613,452,align="center"]

    Changing the Location field to “Place” instead of “Country” will place dots at the correct location instead of the country center, but still no highlighting.

    == Merging multiple Fusion Tables

    Fusion Tables supports geometries defined in the [Keyhole Markup Language^] format, which can fulfil the highlighting requirement. Yet that would mean manually defining the geometry of each country visited; it requires an effort I’m not prepared to make. Fortunately, it’s possible to “join” multiple tables - it’s called [merging^]. Merging creates a new table, with both source tables associated in it. Even better, if any of the initial tables’ data changes, the change is reflected in the merged table.

    Good news: there’s an existing publicly accessible [table^] defining all country geometries. Let’s merge the existing table with it via menu:File[Merge]. In the “Or paste a web address here” field, paste the URL of the world countries table above. Click btn:[Next]. The pop-up that opens requires defining the “join” columns of the two tables.

    image::merge-column.png[Merge column selection,478,470,align="center"]

    Click btn:[Next]. In the opening pop-up, tick the checkboxes of the columns to include in the merged table. Click btn:[Merge]. Wait for the merge to finish, then click btn:[View table].

    Now, on the world map tab, changing the Location field to “geometry” yields the expected result.

    image::highlighted-map.png[Highlighted world map view,616,457,align="center"]

    At this point, the requirement is fulfilled. A further refinement would be to access the data via its [REST API^].
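    As a sketch of what that access could look like, the following builds a query URL for the Fusion Tables REST API. The table id, API key, and column names are placeholders of my own, not values from this post:

    ```java
    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;

    // Builds a SQL-over-HTTP query URL for the Fusion Tables REST API.
    // tableId and apiKey are hypothetical placeholders.
    public final class FusionTablesQuery {

        private FusionTablesQuery() {}

        public static String queryUrl(String tableId, String apiKey)
                throws UnsupportedEncodingException {
            String sql = "SELECT Country, Place FROM " + tableId;
            return "https://www.googleapis.com/fusiontables/v2/query"
                + "?sql=" + URLEncoder.encode(sql, "UTF-8")
                + "&key=" + apiKey;
        }
    }
    ```

    The response comes back as JSON rows, ready to be fed to any client-side rendering.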

    == Conclusion

    Fusion Tables is a no-fluff, just-stuff cloud service that makes it easy to display data in various ways. With its ability to join other tables, it’s easy to reuse existing tabular data.

    Categories: Technical Tags: database, data visualization
  • Connection is a leaky abstraction

    As junior Java developers, we learn very early in our careers about the JDBC API. We learn it’s a very important abstraction because it allows changing the underlying database in a transparent manner. I’m afraid what appears to be a good idea is just over-engineering, because:

    1. I've never seen such a database migration happen in more than 10 years
    2. Most of the time, the SQL written is not database-independent

    Still, there’s no denying that JDBC is at the bottom of every database interaction in Java. However, I recently stumbled upon another trap hidden deep at the core of the java.sql.Connection interface. You’ve probably been told to close the Statement returned by the Connection, and also to close the ResultSet returned by the Statement. But perhaps you’ve also been told that closing the Connection will close all underlying objects - Statement and ResultSet alike?

    So, which one is true? Well, “it depends” and there’s the rub…

    • On the one hand, if the connection is obtained from the DriverManager, calling Connection.close() will close the physical connection to the database along with all underlying objects.
    • On the other hand, if the connection is obtained from a DataSource, calling Connection.close() will only return it to the pool, and you'll need to close statements yourself.

    In the latter case, if you don’t close those underlying statements, database cursors will stay open, the RDBMS limit will be reached at some point, and new statements won’t be executed. Conclusion: always close statement objects (as I already wrote about)! Note that the result set is closed when its statement is.
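    Before Java 7’s try-with-resources, the defensive pattern was a finally block, and a small helper keeps it readable. This is a sketch of my own - closeQuietly is not part of JDBC:

    ```java
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Helpers for the pre-Java 7 finally-block cleanup pattern.
    // Swallowing the SQLException at close time is deliberate: there's
    // nothing sensible left to do, and rethrowing would mask the original error.
    public final class JdbcUtils {

        private JdbcUtils() {}

        public static void closeQuietly(ResultSet rs) {
            if (rs != null) {
                try {
                    rs.close();
                } catch (SQLException ignored) {
                    // nothing to do at cleanup time
                }
            }
        }

        public static void closeQuietly(Statement statement) {
            if (statement != null) {
                try {
                    statement.close();
                } catch (SQLException ignored) {
                    // nothing to do at cleanup time
                }
            }
        }
    }
    ```

    A typical call site then ends with `finally { closeQuietly(rs); closeQuietly(ps); }`, closing the result set before its statement.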

    If you’re lucky enough to use Java 7 - and don’t use a data access framework - the code to use is the following:

    try (PreparedStatement ps = connection.prepareStatement("Put SQL here")) {
        try (ResultSet rs = ps.executeQuery()) {
            // Do something with the ResultSet
        }
    } catch (SQLException e) {
        // Handle exception
    }
    And if you want to make sure cursors will be closed even with faulty code, good old Tomcat provides the StatementFinalizer interceptor for that. Just configure it in the server.xml configuration file when you declare your Resource:

    <Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
     jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer" />

    Note: while you’re there, you can also check the ResetAbandonedTimer interceptor. It’s used in conjunction with the removeAbandonedTimeout attribute, which configures the time after which an abandoned connection is returned to the pool. If the attribute’s value is too low, connections still in use might be reclaimed; with the interceptor, each use of the connection resets the timer.
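    Putting the two together, a Resource declaration with both interceptors and an abandon timeout might look like this - the timeout value is an arbitrary example:

    ```xml
    <Resource name="jdbc/myDB" auth="Container" type="javax.sql.DataSource"
              removeAbandoned="true" removeAbandonedTimeout="60"
              jdbcInterceptors="org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer" />
    ```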

    Categories: Java Tags: bug, database
  • Hibernate hard facts part 3

    Hibernate Logo

    In this third article of the series, I will show how to tweak Hibernate to convert any database data type to and from any Java type, and thus decouple your database model from your object model.

    Custom type mapping

    Hibernate is a very powerful asset in any application needing to persist data. As an example, I was tasked this week with generating the object-oriented model for a legacy database. It seemed simple enough, at first glance. Then I discovered a big legacy design flaw: for historical reasons, dates were stored as numbers in the YYYYMMDD format. For example, 11 December 2009 was stored as 20091211. I couldn’t - or rather wouldn’t - change the database, and yet I didn’t want to pollute my neat little OO model with Integer fields instead of java.util.Date.

    After browsing through Hibernate documentation, I was confident it made this possible in a very simple way.
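    The conversion at the heart of the problem can be sketched in isolation; the class and method names below are mine, for illustration only:

    ```java
    import java.text.ParseException;
    import java.text.SimpleDateFormat;
    import java.util.Date;

    // Converts between the legacy YYYYMMDD integer format and java.util.Date.
    public final class DateIntConverter {

        private static final String PATTERN = "yyyyMMdd";

        private DateIntConverter() {}

        // e.g. 20091211 -> 11 December 2009
        public static Date toDate(int yyyymmdd) throws ParseException {
            return new SimpleDateFormat(PATTERN).parse(String.valueOf(yyyymmdd));
        }

        // e.g. 11 December 2009 -> 20091211
        public static int toInt(Date date) {
            return Integer.parseInt(new SimpleDateFormat(PATTERN).format(date));
        }
    }
    ```

    The custom type below is essentially these two conversions wired into Hibernate’s read and write paths.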

    Creating a custom type mapper

    The first step, which is also the biggest, consists in creating a custom type. This type is not a real “type” but a mapper that knows how to convert from the database type to the Java type and vice versa. To do so, all you have to do is create a class that implements org.hibernate.usertype.UserType. Let’s have a look at each implemented method in detail.

    The following method gives away what class will be returned at the end of the read process. Since I want a Date instead of an Integer, I naturally return the Date class.

    public Class returnedClass() {
      return Date.class;
    }

    The next method returns the types (from the Types constants) of the column(s) that will be read from. It is interesting to note that Hibernate lets you map more than one column, thus offering the same feature as the JPA @Embedded annotation. In my case, I read from a single numeric column, so I return a single-element array filled with Types.INTEGER.

    public int[] sqlTypes() {
      return new int[] {Types.INTEGER};
    }

    This method tells Hibernate whether instances of the returned class are immutable (like primitive wrappers and Strings) or mutable (like the rest). This is very important because if false is returned, the field using this custom type won’t be checked to decide whether an update should be performed. The check does happen, of course, if the field is replaced, in all cases (mutable or immutable). Though there is a big controversy about it in the Java API, Date is mutable, so the method should return true.

    public boolean isMutable() {
      return true;
    }

    I can’t guess how the following method is used but the API states:

    Return a deep copy of the persistent state, stopping at entities and at collections. It is not necessary to copy immutable objects, or null values, in which case it is safe to simply return the argument.

    Since we just said Date instances are mutable, we cannot simply return the object; we have to return a clone instead. That’s made possible because Date’s clone() method is public.

    public Object deepCopy(Object value) throws HibernateException {
      return value == null ? null : ((Date) value).clone();
    }

    The next two methods do the real work of respectively reading from and writing to the database. Notice how the API exposes a ResultSet object to read from and a PreparedStatement object to write to.

    public Object nullSafeGet(ResultSet rs, String[] names, Object owner)
        throws HibernateException, SQLException {
      Date result = null;
      int integer = rs.getInt(names[0]);
      // wasNull() is only meaningful after the column has been read
      if (!rs.wasNull()) {
        try {
          result = new SimpleDateFormat("yyyyMMdd").parse(String.valueOf(integer));
        } catch (ParseException e) {
          throw new HibernateException(e);
        }
      }
      return result;
    }

    public void nullSafeSet(PreparedStatement statement, Object value, int index)
        throws HibernateException, SQLException {
      if (value == null) {
        statement.setNull(index, Types.INTEGER);
      } else {
        int integer = Integer.parseInt(new SimpleDateFormat("yyyyMMdd").format((Date) value));
        statement.setInt(index, integer);
      }
    }

    The next two methods are implementations of equals() and hashCode() from a persistence point of view.

    public int hashCode(Object x) throws HibernateException {
      return x == null ? 0 : x.hashCode();
    }

    public boolean equals(Object x, Object y) throws HibernateException {
      if (x == null) {
        return y == null;
      }
      return x.equals(y);
    }

    For equals(), since Date is mutable, we can’t just check for reference equality: the same instance could have been modified in the meantime.

    The replace() method is used for merging purpose. It couldn’t be simpler.

    public Object replace(Object original, Object target, Object owner) throws HibernateException {
      Owner o = (Owner) owner;
      return ((Date) original).clone();
    }

    My implementation of the replace() method is not reusable: both the owning type and the name of the setter method must be known, which makes reusing the custom type hard. If I wished to reuse it, the method’s body would need to use the java.lang.reflect package and make guesses about the method names used. Thus, the algorithm for creating a reusable user type would be along the lines of:

    1. list all the methods that are setter and take the target class as an argument
      1. if no method matches, throw an error
      2. if a single method matches, call it with the target argument
      3. if more than one method matches, call the associated getter to check which one returns the original object
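    The first step of that algorithm can be sketched with reflection; the SetterFinder name is mine, not something from Hibernate:

    ```java
    import java.lang.reflect.Method;
    import java.util.ArrayList;
    import java.util.List;

    // Lists the public setters of a class that accept the target type:
    // step 1 of the reusable-user-type algorithm above.
    public final class SetterFinder {

        private SetterFinder() {}

        public static List<Method> settersFor(Class<?> owner, Class<?> targetType) {
            List<Method> matches = new ArrayList<Method>();
            for (Method method : owner.getMethods()) {
                if (method.getName().startsWith("set")
                        && method.getParameterTypes().length == 1
                        && method.getParameterTypes()[0].isAssignableFrom(targetType)) {
                    matches.add(method);
                }
            }
            return matches;
        }
    }
    ```

    Steps 2 and 3 then reduce to checking the size of the returned list and, if needed, probing the matching getters.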

    The next two methods are used in the caching process, for serialization and deserialization respectively. Since Date instances are serializable, they are easy to implement.

    public Serializable disassemble(Object value) throws HibernateException {
      return (Date) ((Date) value).clone();
    }

    public Object assemble(Serializable cached, Object owner) throws HibernateException {
      return ((Date) cached).clone();
    }

    Declare the type on entity

    Once the custom UserType is implemented, you need to make it accessible to the entity.

    @TypeDef(name="dateInt", typeClass = DateIntegerType.class)
    public class Owner {

    Use the type

    The last step is to annotate the field.

    @TypeDef(name = "dateInt", typeClass = DateIntegerType.class)
    public class Owner {

      private Date date;

      @Type(type = "dateInt")
      public Date getDate() {
        return date;
      }

      public void setDate(Date date) { = date;
      }
    }

    You can download the sources for this article here.

    To go further: