Dangers of implicitness

March 25th, 2012

This article is a reaction to the “Unlearn, Young Programmer” article I read the other day.

It would be better if you took some time to read it first but, for the lazy kind (of which I’m part), here’s a short summary. The author seems to be a seasoned software developer. He observes that when he tasks junior developers with something, the results are often not what he expected. To illustrate his point, he takes the example of having to hang a picture. I must admit the examples are very funny and resonate with me. However, at the end, the conclusion is that young developers have to ask questions, period.

Take responsibility

As an architect managing developers from my ivory tower (read: not having enough time to coach them), it happens that I’m not satisfied with the results. I don’t know if it’s a trait coming from age, culture, country or whatever, but the first question I ask myself is what I failed to provide my developer so he could achieve the results I wanted. This is not because I’m modest or insecure but because there’s a rule stating that you can only change yourself; the rest is up to the others. Thus, it’s not up to the developer to ask you questions (albeit that would be more comfortable for most of us), but up to you to give them as much relevant information pertaining to their job as possible. Putting the responsibility on developers is a flaw seen all too commonly in management; we should strive not to reproduce it ourselves.

The danger of implicitness

My second thought is about the gap between what is asked for and what is implied, because there lies the biggest problem. As an example taken from real life, six months ago I had a motorbike accident in a foreign country (with police and ambulance and the whole show, but I had no injuries although I was a bit shaken). My bike was badly damaged so it had to be towed away. The tow-truck driver gave me his card, and I assumed he had driven my bike to his garage, because that’s how it works in my country. Unfortunately, that was not the case: I only realized it when I received the notification that my bike had been impounded and was destined to be destroyed by decree if I didn’t do anything about it.

How many times do such things happen in life (with less dramatic consequences)? As soon as you’re confronted with a foreign culture/country that seems similar on the surface, there are many chances for implicitness to creep in. Implicitness is much worse than ignorance: when you don’t know, you won’t imply anything and most of us will be very careful about it. For software development, that means it may be better to have a Business Analyst who has no knowledge of the business domain. He/she will probably dig deep into every corner, compared to a BA who knows a comparable domain, because the latter will make assumptions. Given Murphy’s law, those will probably be false assumptions, thus leading to dire consequences later in the project.

Conclusion

While I can only agree with the very graphic examples of the referenced article, I strongly disagree with the conclusion. Senior software engineers should be as explicit as possible when giving tasks to more junior ones. While I have had the best work experiences with programmers who came up with questions, they’re not all like that, even though we have to be successful in every project.


I made the step forward into virtualization

March 18th, 2012

I first heard of virtualization 5 or 6 years ago and, at the time, I didn’t understand the real implications of its use, save that it was a tool toward an agile infrastructure. More recently, I heard that a virtual platform could be run on top of an undetermined number of physical platforms: if you need more power, you can add platforms; if a platform goes down, there’s no effect (except a decrease in performance). At that point, I realized all the power we could gain from virtualization, or more precisely, all the power operations could gain from it.

As for myself, I once tried VMware to be able to get my hands on a Linux distro, but I quickly lost interest since I had no real need. Meanwhile, I had some seemingly unrelated problems bothering me in parallel:

  • When writing a book (OK, I only wrote one, but it still applies), I installed all the needed software on my main system. Never mind the latest product versions: I installed the versions I had to, and had to play with scripts to juggle different environment variable configurations. Moreover, I had to describe the install process on my operating system (Windows), which may be suitable… or not.
  • The same could be said for Proof-Of-Concepts: I had to manage a whole new bunch of products and a new set of environment variables that had the potential to wreak havoc with my existing system.
  • When I have to do pre-sales, this is even more damaging, since the demo effect enters the scene.
  • Finally, despite having both a WordPress blog and a Drupal site, I couldn’t resign myself to installing PHP on my system, so I made all changes, whatever their criticality, directly on the production platform. I admit it’s bad, very bad indeed, but well, the heart has its reasons and all that.

Very recently, I found myself teaching basic Linux to students and we used virtualization to run an Ubuntu distro on top of Windows. That’s when it became clear to me: I only had to create a virtual machine for each of the above problems!

Since my top priority was to create a staging area for my blog and CMS, here are the steps I took to cover my needs:

  1. Download and install the latest VMware Player (free, but you need to register)
  2. Download the latest Ubuntu distribution in ISO format
  3. Create a new virtual machine with the ISO
  4. Run the new VM, welcome to Ubuntu!
  5. Install versions of the Apache web server, PHP, MySQL and the PHP MySQL driver that mirror those provided by my host. With Ubuntu, this is easy as pie with my friend the apt-get command.
  6. Configure my /etc/hosts file to resolve my sites’ URLs to 127.0.0.1
  7. Configure Apache with two virtual hosts, one to /var/www/blogfrankel, the other to /var/www/morevaadin
  8. Export the SQL from both sites thanks to the Plesk interface providing phpMyAdmin and import the dumps into my virtualized MySQL instance
  9. FTP files from both sites to my virtualized filesystem
  10. Optional but just to be sure: configure both sites to have a big “Staging” warning as the site name to avoid confusion

That was it. I fumbled a little here and there since I’m not an Ubuntu guru, but the process was fairly straightforward. The part I spent the most time on was configuring Apache to redirect requests to the right site (I’m not an Apache guru either). Now I can do and undo whatever I want with no effect on my production environment until I decide otherwise. As an added benefit, I get no security holes from having a PHP environment, since I shut the VM down when I don’t need it.

It’s a big step from where I come from but I see some improvements to be made:

  • Being able to directly push updates to the right site. At present, I have to redo actions manually (thus risking doing something wrong or forgetting something)
  • Being able to synchronize the VM with the production data (SQL and files), either periodically or on-demand so as to have the latest image on my VM

Any ideas on how to accomplish these would be welcome; even more welcome if they are simple to implement (KISS). Remember, I’m neither a sysadmin nor a devops person.


HTML5, still a non-event?

March 11th, 2012

A little less than 4 years ago, I wrote an article entitled “HTML 5: a non-event”. That was before the HTML5 umbrella move grouping HTML 5, CSS 3, WebSockets, etc., and before the HTML5 logo. At that time, there was a buzz around HTML 5 coming to save developers from pesky HTML 4 limitations. At the end of the article, I was doubtful of the real value of HTML 5 in the short term and predicted the final specification would come two years later (2010) at best.

Here we are, 4 years later, and the (sore) state of HTML 5 has not changed much.

What still applies

HTML 5 is still a Working Draft

HTML 5 and all its related technologies are technical standards held by the World Wide Web Consortium. The organization has structured its specification process into 4 logical and consecutive lifecycle steps:

  1. Working Draft (WD). Citing the W3C documentation regarding its specification processes:

    A Working Draft is a document that W3C has published for review by the community, including W3C Members, the public, and other technical organizations. Some, but not all, Working Drafts are meant to advance to Recommendation;[...]

  2. Candidate Recommendation (CR)
  3. Proposed Recommendation (PR)
  4. W3C Recommendation (REC)

If you read between the lines, a WD is basically a work in progress, meaning it can change drastically between versions. Guess what? The specification is still a WD after all those years and the last update date reads 25 May 2011. Granted, they have worked on it since my previous article, but nothing has been released as a specification.

Notes:

  • To be entirely honest, an editor’s draft version is much more recent (7 March 2012), but I couldn’t find anywhere the status of an editor’s draft with regard to the above document lifecycle.
  • When browsing through the specification itself, a big message warns about it being a work in progress.

Vendors are building on sand

This point is the logical conclusion of the previous one. Google, Adobe, Mozilla and the other vendors are implementing features that are by definition not set in stone: they’re basically investing time and resources on sand.

Granted, it’s highly unlikely that a major change will take place, but then why not advance the specification’s maturity to send a clear message to the community? The current state of things only increases the probability of a vendor walking its own path.

Heterogeneous features

You thought cross-browser compatibility was hard with HTML 4, CSS 2 and JavaScript? HTML5 brings the concept to a whole new level: for fun, check this site for a list of all features and then select the desired feature. Come back when you’re done weeping.

The good people at Modernizr thought (rather cleverly) that there was something to address here: how could we developers adapt to each and every browser and degrade gracefully? The library lets us test for a feature and see whether it’s available. Even more importantly, some libraries are available to backport unsupported features with the help of a magic JS script. The plumbing implementation is left to you. Good luck, JavaScript ninjas!

Heterogeneous implementations

Whereas heterogeneous features have been a part of HTML since its inception (the complexity in HTML5 comes from the sheer number of features), we used to rely on the same implementation of a feature throughout browsers. Now, a single example speaking louder than words, how can we developers ensure rounded corners are displayed in CSS3 today? The answer makes my heart sink:

.rounded {
  -moz-border-radius: 10px;    /* Firefox */
  -webkit-border-radius: 10px; /* WebKit, used in Safari and Chrome */
  border-radius: 10px;         /* True CSS3 */
}

Who would call that progress?

We’re definitely not in Fairyland

This article finds its origin in a high-level presentation on HTML5 where the speaker cluttered his talk with “it’s simple” and sold dreams to the audience. Please, we’re working in real life, not in conceptual PowerPoints. I’m no plumber, but I think I’m fairly capable of drawing a plumbing diagram. Woe to the poor plumber who has to realize it in reality, however.

Don’t take all the HTML5 examples on the web at face value: they’re prototypes (even if they work on your modern browser). If you’re curious, look at the sources: is it really that simple? Does it degrade gracefully?

Buzz, buzz, buzz and some more buzz

Technical people are generally poor marketers. In the case of HTML5, there’s a whole site dedicated to promoting it. If there were a solid specification behind this, I would be an ardent supporter of “the project that can communicate”. With all the previous points raised, I’m asking myself whether the time and money would have been better spent on foundations rather than decorum.

Miscellaneous, but severe

Ever heard of Slideshare migrating from Flash to HTML5? At the time of the announcement, I checked and found that fonts shown in large sizes were not anti-aliased. The presentations were unreadable: they were so ugly you couldn’t focus on the content. It seems they have corrected the problem by now, but some presentations still look slightly worse than their Flash predecessors.

During the writing of this article, I also found some other implementation failings regarding CSS property values. I’m sure there are plenty more for those looking for them.

And still…

Despite all these black marks, there are definitely some advances toward achieving HTML5.

Some vendors let go of their proprietary products to walk the HTML5 path: Adobe gave Flex to the Apache Foundation (which some, including me, understood as something akin to a murder in cold blood) and Microsoft redirected Silverlight’s underlying technology toward HTML5. More vendors coming to HTML5 means more momentum and more drive. For fairness’s sake, it also means more petty interests to take into account… clouds and their silver linings reversed.

Conclusion

IMHO, the root of all evil definitely is the maturity level of the specification. When the specification advances to the next maturity level, there will be browsers that follow it (in part or in full) and browsers that won’t. End-users and developers will then be able to choose which browsers to support, according to their respective strategies: given the zeitgeist, I’m relatively confident that standards-following browsers will emerge as winners.

Currently, HTML5 is a dream, surely a beautiful dream, but a dream nonetheless.


A Spring hard fact about transaction management

March 4th, 2012

In my Hibernate hard facts article series, I tackled some misconceptions about Hibernate: there are plenty of developers using Hibernate (myself included) who do not use it correctly, sometimes from a lack of knowledge. The same can be said about many complex products, but I was dumbfounded this week when I was faced with such a thing in the Spring framework. Surely, something as pragmatic as Spring couldn’t have shadowy areas in some corner of its API?

About Spring’s declarative transaction boundaries

Well, I’ve found at least one, with regard to declarative transaction boundaries:

Spring recommends that you only annotate concrete classes (and methods of concrete classes) with the @Transactional annotation, as opposed to annotating interfaces. You certainly can place the @Transactional annotation on an interface (or an interface method), but this works only as you would expect it to if you are using interface-based proxies. The fact that Java annotations are not inherited from interfaces means that if you are using class-based proxies (proxy-target-class="true") or the weaving-based aspect (mode="aspectj"), then the transaction settings are not recognized by the proxying and weaving infrastructure, and the object will not be wrapped in a transactional proxy, which would be decidedly bad.

From Spring’s documentation

Guess what? Even though it would be nice to have the transactional behavior be part of the contract, it’s sadly not the case: it depends on your context configuration, as stated in the documentation! To be sure, I tried it, and it’s (sadly) true.

Consider the following contract and implementation:

import org.springframework.transaction.annotation.Transactional;

public interface Service {

  void forcedTransactionalMethod();

  // Annotated on the interface: only honored with interface-based (JDK) proxies
  @Transactional
  void questionableTransactionalMethod();
}

public class ImplementedService implements Service {

  private DummyDao dao;

  public void setDao(DummyDao dao) {
    this.dao = dao;
  }

  // Annotated on the concrete class: always honored
  @Transactional
  public void forcedTransactionalMethod() {
    dao.getJdbcTemplate().update("INSERT INTO PERSON (NAME) VALUES ('ME')");
  }

  public void questionableTransactionalMethod() {
    dao.getJdbcTemplate().update("INSERT INTO PERSON (NAME) VALUES ('YOU')");
  }
}

Now, depending on whether we activate the CGLIB proxy (proxy-target-class="true"), questionableTransactionalMethod behaves differently, committing in one case and not in the other.
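For reference, here is what the toggle looks like with Spring 3.1’s Java configuration. This is a minimal sketch, not the article’s actual configuration: the bean wiring is hypothetical, it assumes DummyDao extends Spring’s JdbcDaoSupport and that a MySQL driver is on the classpath; with XML configuration, the equivalent switch is the proxy-target-class attribute on <tx:annotation-driven>.

import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
// true: CGLIB class-based proxies, annotations on the interface are ignored
// false (the default): JDK interface-based proxies, annotations on the interface are honored
@EnableTransactionManagement(proxyTargetClass = true)
public class AppConfig {

  @Bean
  public DataSource dataSource() {
    // Hypothetical local MySQL instance, as required by the article's sources
    return new DriverManagerDataSource("jdbc:mysql://localhost:3306/test", "user", "password");
  }

  @Bean
  public PlatformTransactionManager transactionManager() {
    return new DataSourceTransactionManager(dataSource());
  }

  @Bean
  public DummyDao dao() {
    DummyDao dao = new DummyDao();
    dao.setDataSource(dataSource());
    return dao;
  }

  @Bean
  public Service service() {
    ImplementedService service = new ImplementedService();
    service.setDao(dao());
    return service;
  }
}

With proxyTargetClass set to true, only forcedTransactionalMethod runs in a transaction; switch it back to false (JDK interface-based proxies) and both methods do.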

Note for Eclipse users: even without Spring IDE, this is shown as a help popup when Ctrl-spacing for a new attribute (it doesn’t show when the attribute already exists in the XML, though).

Additional facts for proxy mode

Spring’s documentation also mentions two other fine points that shouldn’t be lost on developers using proxies (as opposed to AspectJ weaving):

  • Only annotate public methods. Annotating methods with other visibilities will do nothing – truly nothing – as no error will be raised to warn you that something is wrong
  • Be wary of self-invocation. Since the transactional behavior is based on proxies, a method of the target object calling another of its methods won’t exhibit transactional behavior, even though the latter method is marked as transactional (see the sketch after this list)
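To make the self-invocation pitfall concrete, here is a minimal sketch with a hypothetical class: when outerMethod() is called through the Spring proxy, the nested call to innerMethod() is a plain this call and never reaches the proxy, so its transactional settings are simply ignored.

import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

public class SelfInvokingService {

  @Transactional
  public void outerMethod() {
    // Direct 'this' call: it bypasses the proxy, so the REQUIRES_NEW setting below has no effect
    innerMethod();
  }

  @Transactional(propagation = Propagation.REQUIRES_NEW)
  public void innerMethod() {
    // Expected to run in its own transaction, but doesn't when invoked from outerMethod()
  }
}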

Both those limitations can be removed by weaving the transactional behavior into the bytecode instead of using proxies (though not the earlier limitation regarding annotations on interfaces).

To go further:

You can find the sources for this article in Eclipse/Maven format here (but you’ll have to configure a MySQL database instance).


OSGi in action

February 26th, 2012

This review is about OSGi in Action by Richard Hall, Karl Pauls, Stuart McCulloch, and David Savage, published by Manning Publications.

Facts

  • 11 chapters, 548 pages, $49.99
  • Covers OSGi R4.2

Pros

  • Easy step-by-step learning approach
  • Covers many OSGi pitfalls
  • Use-case study included

Cons

  • Too much material ;-)

Conclusion

I tried once or twice to tackle OSGi on my own, without much success.

I purchased the book in order to remedy the situation and I wasn’t disappointed: it’s a rich mine of information on OSGi and is full of practical tricks to address real-life situations.

Unfortunately, despite the authors’ claims to the contrary, OSGi is a complex technology and I don’t think many organizations are mature enough to implement it. It’s a shame, but those who do will certainly benefit from this book.


Trust stores and Java versions

February 19th, 2012

My debugging contest of the week happened to take place on an IBM AIX system. The bug appeared when we upgraded from Java version 1.4 to version 6 (which I admit is a pretty big step). Suddenly, an old application stopped working and its log displayed a NoSuchAlgorithmException.

A bit of context: when Java applications have to connect to hosts over HTTPS (HTTP over SSL), they must trust the host – it’s the same as when you browse a site with HTTPS. If the site can provide an SSL certificate that proves its trustworthiness by tracing it back to a trusted authority (VeriSign and others), all is well. However, when browsing, you can always force the browser to trust a certificate that is not backed by a trusted authority. No such luxury is permitted when running an application: there’s no callback.

Therefore, you can add certificates to the JVM truststore, which is located under $JRE_HOME/lib/security. Alternatively, you can also pass a truststore with the -Djavax.net.ssl.trustStore=<path/to/store> Java launch parameter. Before this problem, I was foolish enough to think you could keep the same truststore between different Java versions without a glitch. This is not the case: after going back and forth a few times, we finally located the root problem.

It seems that between Java version 1.4 and 6, the good people at IBM decided to completely change their security providers. This means that when a certificate is stored by a Java 1.4 JVM, the Java 6 JVM has no chance of reading it. If you had told me that beforehand, I would have laughed in your face. Reality is weirder than fiction.
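A quick way to see what differs between two JVMs is to dump their registered security providers and the default keystore type. Here’s a minimal diagnostic sketch (hypothetical class name, standard JDK API only):

import java.security.KeyStore;
import java.security.Provider;
import java.security.Security;

public class SecurityProviderDump {

  public static void main(String[] args) {
    // Default keystore type (e.g. "jks"), as configured in the JRE's java.security file
    System.out.println("Default keystore type: " + KeyStore.getDefaultType());

    // The list of registered providers differs between vendors (and, apparently, between IBM JVM versions)
    for (Provider provider : Security.getProviders()) {
      System.out.println(provider.getName() + " " + provider.getVersion() + ": " + provider.getInfo());
    }
  }
}

Run it with both JVMs and compare the output: any mismatch in providers or keystore types is a good candidate for this kind of breakage.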

Conclusion: for Ops, it may be a good idea to consider always using the same security provider regardless of the operating system. Bouncy Castle is one such provider; others surely exist.

Note: Sun may be defunct, but their engineers kept the same security providers between Java 1.4 and 6.


Announcing More Vaadin

February 12th, 2012

During the writing of ‘Learning Vaadin‘, there were many themes I wanted to write about: component data, SQL container filtering, component alignment and expand ratio, separation of concerns between graphic designers and developers, to name only a few. Unfortunately, books are finite in space as well as in time, and I was forced to leave out some interesting areas of Vaadin that couldn’t fit in, much to my chagrin.

Given the success of ‘Learning Vaadin’, I’ve decided to create a site that is meant to bridge the gap between what I wanted and what I could do: More Vaadin. Expect to see articles related to Vaadin there! I’ve also decided to open the platform to other contributors, for those interested.

As a bonus, this is the first article (original article here).

Table generated buttons

Before diving into the heart of the subject, let’s start with a use-case. Imagine we have a CRUD application and our current screen lists the datastore entities in a table, one line per entity and one column per property.

Now, in order to implement the Delete functionality, we use the addGeneratedColumn method of the Table component to display a “Delete” button, like in the following screenshot:

The problem lies in the action that has to take place when the button is pressed. The behavior needs to be determined when the page is generated, so that the event is processed directly on the server side: in other words, we need a way to pass the entity identifier (or the container’s item, or even the container’s item id) to the button somehow.

The solution is very simple to implement, with just the use of the final keyword to let nested anonymous class methods access the enclosing method’s parameters:

table.addGeneratedColumn("", new ColumnGenerator() {

    @Override
    public Object generateCell(final Table source, final Object itemId, Object columnId) {

        Button button = new Button("Delete");

        button.addListener(new ClickListener() {

            @Override
            public void buttonClick(ClickEvent event) {

                source.getContainerDataSource().removeItem(itemId);
            }
        });

        return button;
    }
});

Remember to regularly check More Vaadin for other reindeer articles :-)


CDI worse than Spring for autowiring?

February 5th, 2012

Let’s face it, there are two kinds of developers: those who favor Spring autowiring because it alleviates the need to write XML (even though you can configure autowiring in XML), and those who see autowiring as something risky.

I must admit I’m of the second kind. In fact, I’d rather face a rabid 800-pound gorilla than use autowiring. Sure, it does the whole job for you, doesn’t it? Maybe, but it’s a helluva job and I’d rather dirty my hands than let some cheap bot do it for me. The root of the problem lies in the implicitness of autowiring. You declare two beans, say one needs a kind of the other, and off we go.

It seems simple on paper, and it is if we leave it at that. Now, autowiring comes in two major flavors:

  • By name where matching is done between the property’s name and the bean’s name
  • By type where matching is done between the property’s type and the bean’s type

The former, although relatively benign, can lead to naming nightmares where developers have to tweak names to make beans autowire together. The latter is utter nonsense: you can break a working context simply by adding a bean to it, only because it has the same class as another bean already in the context. Worse, autowiring errors can surface in a completely unrelated location, just because of the magic involved in autowiring. And no, solutions don’t come from mixing autowiring and explicit wiring, mixing autowiring by name and by type, or even excluding beans from being candidates for autowiring; that just worsens the complexity, as developers have to constantly question what the behavior will be.
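As an illustration, here is a minimal sketch (hypothetical configuration, using Spring’s own MailSender interface, not taken from any real project): the context below is fine with a single MailSender bean, but the day someone adds a second one for an unrelated feature, every field autowired by type on MailSender suddenly has two candidates and the context refuses to start.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.mail.MailSender;
import org.springframework.mail.javamail.JavaMailSenderImpl;

@Configuration
public class MailConfiguration {

  @Bean
  public MailSender notificationMailSender() {
    return new JavaMailSenderImpl();
  }

  // Added later for a completely unrelated feature: from now on, autowiring
  // MailSender by type is ambiguous and context startup fails
  @Bean
  public MailSender auditMailSender() {
    return new JavaMailSenderImpl();
  }
}

// In another file, possibly owned by another team:
public class ReportService {

  @Autowired
  private MailSender mailSender; // by type: two candidates, no way to choose
}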

Autowiring fans who are not convinced should read the Spring documentation itself for a list of limitations and disadvantages. This is not to say that autowiring in itself is bad, just that it has to be kept strictly in check. So far, I’ve allowed it only for small teams and only for tests (i.e. code that doesn’t ship to production).

All in all, Spring autowiring has one redeeming quality: candidates are only chosen from the context, meaning instances outside the context cannot wreak havoc on our nicely crafted application.

CDI developers should have a hint of where I’m heading. Since in CDI every class on the classpath is a candidate for injection, adding a new JAR to the application’s classpath can disrupt CDI and prevent the application from launching. In this light, only autowiring by name should be used for CDI… and then, only by those courageous enough to take the risk :-)
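For the record, here is a minimal sketch (hypothetical classes, assuming the archive contains a beans.xml so that they are discovered) of what wiring “by name” looks like in CDI: with two implementations of the same interface on the classpath, a plain @Inject is ambiguous at deployment time, whereas the @Named qualifier (or, better, a custom qualifier) makes the choice explicit.

import javax.inject.Inject;
import javax.inject.Named;

public interface PaymentProcessor {
  void pay();
}

// Both implementations are automatically discovered as CDI beans
@Named("cash")
class CashPaymentProcessor implements PaymentProcessor {
  public void pay() { /* ... */ }
}

@Named("credit")
class CreditPaymentProcessor implements PaymentProcessor {
  public void pay() { /* ... */ }
}

class CheckoutService {

  // Without the @Named qualifier, deployment fails with an ambiguous dependency error
  @Inject
  @Named("cash")
  private PaymentProcessor processor;
}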


Adapters in JAXB

January 29th, 2012

Java Architecture for XML Binding (aka JAXB) is part of many applications, since it provides a convenient API to marshall/unmarshall Java to/from XML.

As in so many areas, the devil is in the details, for instance when one has to unmarshall a JAXB-incompatible class. Such classes turn up in one’s code for a variety of reasons: design, legacy, third-party libraries… Suffice it to say that such a grain of sand can be a blocker for any project.

Luckily, the JAXB designers must have had this use-case in mind when they crafted their API, because an adapter mechanism is provided to bypass such problems. In this case, a JAXB-compatible class is made available and JAXB bridges between it and the incompatible one. This is easily done through the use of the XmlJavaTypeAdapter annotation: it accepts a class parameter which bridges between the JAXB-friendly and -unfriendly worlds; in other words, that class must implement the XmlAdapter<ValueType, BoundType> contract. The annotation can then be put on the field which is of the incompatible type.
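For instance, here is a minimal sketch with a hypothetical JAXB-incompatible Money class (exposing getAmount() and getCurrency() but no no-arg constructor); the adapter converts it to and from a JAXB-friendly String:

import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.adapters.XmlAdapter;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;

// Bridges between the XML-friendly String and the hypothetical JAXB-incompatible Money
public class MoneyAdapter extends XmlAdapter<String, Money> {

  @Override
  public Money unmarshal(String value) {
    String[] parts = value.split(" ");
    return new Money(parts[0], parts[1]);
  }

  @Override
  public String marshal(Money money) {
    return money.getAmount() + " " + money.getCurrency();
  }
}

// In another file:
@XmlRootElement
public class Invoice {

  // JAXB calls MoneyAdapter whenever it needs to (un)marshall this field
  @XmlJavaTypeAdapter(MoneyAdapter.class)
  private Money total;
}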

This is all well and good, and many resources on the web tell you how to do it. Alas, most examples stay on the safe side and completely forget to address the following situation:

  • My class to be marshalled (and back) has a field of type A
  • Type A has an association of type B, which in turn has an association of type C, etc.
  • All of these types are third-party
  • Types A through Y are JAXB-compatible; type Z is not

Now things get interesting… Creating the adapter for type A is out of the question, since I would have to recreate the structure of the whole tree, which would be a huge time sink.

The solution I’ve found so far is to create a package-info.java in the third-party package and annotate it with my adapter. It’s not clean, in that it may conflict with the real package-info (i.e. the one coming from the third-party library), but that was not the case for me (none was provided). By the way, I’m amazed at how few providers create this file in their libraries.
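Concretely, such a package-info.java looks like this (a sketch with hypothetical names, placed in a source folder mirroring the third-party package):

// src/main/java/com/thirdparty/model/package-info.java
@XmlJavaTypeAdapters({
    @XmlJavaTypeAdapter(type = IncompatibleZ.class, value = IncompatibleZAdapter.class)
})
package com.thirdparty.model;

import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapters;

References to the incompatible type from within that package then go through the adapter, leaving types A through Y untouched.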

Anyone having faced this problem and found a more elegant solution?


Should you change your design for testing purposes?

January 22nd, 2012

As Dependency Injection frameworks go, the current standard is CDI. When switching from Spring, one has to consider the following problem: how do you unit test your injected classes?

In Spring, DI is typically achieved through either constructor injection or setter injection. Both allow for simple unit testing by providing the dependencies and calling either the constructor or the desired setter. Now, how do you unit test the following code, which uses field injection:

public class MyMainClass {

  @Inject
  private MyDependencyClass dependency;

  // dependency is used somewhere else
  ...
}

Of course, there are some available options:

  1. you can provide a setter for the dependency field, just as you did in Spring
  2. you can use reflection to access the field
  3. you can use a testing framework that does the reflection for you (PrivateAccessor from JUnit Addons or PowerMock come to mind)
  4. you can increase the field visibility to package (i.e. default) and put the test case in the same package

Amazingly enough, when Googling through the web, the vast majority of code samples about unit testing field-injected classes demonstrate the increased-visibility approach. Check for yourself if you don’t believe me: they do not use private visibility (here, here and here, for example). Granted, it’s a rather rhetorical question, but what annoys me is that the choice is implicit, whereas IMHO it should be a deeply thought-out decision.

My point is, changing your design for testing purposes is like shooting yourself in the foot. Design shouldn’t be sold off cheaply. Otherwise, what will be the next step: changing your design for build purposes? Feels like a broken window to me. As long as it’s possible, I would rather stick to using a testing framework that enables private field access. It achieves exactly the same purpose, but at least it keeps my initial design intact.
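As an illustration of option 2 above (and essentially what those frameworks do under the hood), here is a minimal sketch using plain JDK reflection, keeping the field private; it assumes MyDependencyClass has a no-arg constructor, otherwise a mock would do just as well:

import java.lang.reflect.Field;

import org.junit.Before;
import org.junit.Test;

public class MyMainClassTest {

  private MyMainClass mainClass;

  @Before
  public void setUp() throws Exception {
    mainClass = new MyMainClass();

    // Inject the dependency into the private field without changing its visibility
    Field field = MyMainClass.class.getDeclaredField("dependency");
    field.setAccessible(true);
    field.set(mainClass, new MyDependencyClass());
  }

  @Test
  public void should_use_the_injected_dependency() {
    // exercise mainClass here; the dependency could just as well be a mock
  }
}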
