Playing with constructors

May 4th, 2014 1 comment

Immutability is a property I aim for when designing most of my classes. Achieving immutability requires:

  • A constructor initializing all attributes
  • No setter for those attributes
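For instance, an immutable class following these two rules could look like the following (the Temperature class is just an illustrative example):

```java
public final class Temperature {

    private final double celsius; // assigned once, in the constructor

    public Temperature(double celsius) {
        this.celsius = celsius;
    }

    // Accessor only - no setter, so the state cannot change after construction
    public double getCelsius() {
        return celsius;
    }
}
```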

However, this design prevents testing, or at least makes it more complex. In order to allow (or ease) testing, a public no-arg constructor is needed.

Other use-cases requiring usage of a no-arg constructor include:

  • De-serialization of serialized objects
  • Sub-classing without invoking a parent class constructor
  • etc.

There are a couple of solutions to this.

1. Writing a public no-arg constructor

The easiest way is to create a public no-arg constructor, then add a big bright Javadoc warning developers not to use it. As you can imagine, in this case easy doesn’t mean it enforces anything: you are basically relying on developers’ willingness to follow instructions (or even more on their ability to read them in the first place – a risky bet).

The biggest constraint, however, is that you need to be able to change the class code.

2. Writing a package-visible no-arg constructor

A common approach used for testing is to change the visibility of a class’s private methods to package-visible, so they can be tested by test classes located in the same package. The same approach can be used in our case: write a package-visible no-arg constructor.

This requires the test class to be in the same package as the class whose constructor has been created. As in case 1 above, you also need to change the class code.
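A sketch of this approach, where Account and its owner field are illustrative names; the no-arg constructor is only reachable from the same package:

```java
public class Account {

    private final String owner;

    public Account(String owner) {
        this.owner = owner;
    }

    // Package-visible no-arg constructor, reserved for tests located in the same package
    Account() {
        this.owner = null;
    }

    public String getOwner() {
        return owner;
    }
}
```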

3. Playing it Unsafe

The JDK is like a buried treasure: it contains many hidden and shiny features, and the sun.misc.Unsafe class is one of them. Of course, as both its name and package imply, its usage is strongly discouraged. Unsafe offers an allocateInstance(Class<?>) method to create new instances without calling any constructor or initializer whatsoever.

Note that Unsafe only has instance methods, and its only constructor is private… but it offers a private singleton attribute. Getting a reference to this attribute requires a bit of reflection black magic, as well as a lenient security manager (which is the case by default).

import java.lang.reflect.Field;

import sun.misc.Unsafe;

// theUnsafe is the private singleton attribute holding the single instance
Field field = Unsafe.class.getDeclaredField("theUnsafe");
field.setAccessible(true);
Unsafe unsafe = (Unsafe) field.get(null);

// No constructor nor initializer is invoked here
java.sql.Date date = (java.sql.Date) unsafe.allocateInstance(java.sql.Date.class);
System.out.println(date);

Major constraints of this approach include:

  • Relying on a class outside the public API
  • Using reflection to access a private field
  • Only available in Oracle’s HotSpot JVM
  • Setting a lenient enough security manager

4. Objenesis

Objenesis is a framework whose sole goal is to create new instances without invoking constructors. On Oracle’s HotSpot JVM, it offers an abstraction layer upon the Unsafe class. Objenesis also works on different JVMs, including OpenJDK, Oracle’s JRockit and Dalvik (i.e. Android), in many different versions, by using strategies adapted to each JVM/version pair.

The above code can be replaced with the following:

Objenesis objenesis = new ObjenesisStd();
ObjectInstantiator instantiator = objenesis.getInstantiatorOf(java.sql.Date.class);

// Again, no constructor is invoked
java.sql.Date date = (java.sql.Date) instantiator.newInstance();
System.out.println(date);

Running this code on Oracle’s HotSpot will still require a lenient security manager, as Objenesis uses the above Unsafe class under the hood. However, such requirements differ from JVM to JVM and are handled by Objenesis.

Conclusion

Though a very rare and specialized requirement, creating instances without constructor invocation might sometimes be necessary. In this case, the Objenesis framework offers a portable and abstract way to achieve this, at the cost of a single additional dependency.

Categories: Java Tags:

The Visitor design pattern

April 27th, 2014 5 comments

I guess many people know about the Visitor design pattern, described in the Gang of Four’s Design Patterns: Elements of Reusable Object-Oriented Software book. The pattern itself is not very complex (as many design patterns go):

Visitor UML class diagram

I’ve known about Visitor for ages, but I’ve never needed it… yet. Java handles polymorphism natively: the method call is based upon the runtime type of the target object, not on its compile-time type.

interface Animal {
    void eat();
}
public class Dog implements Animal {
    public void eat() {
        System.out.println("Gnaws bones");
    }
}

Animal a = new Dog();
a.eat(); // Prints "Gnaws bones"

However, this doesn’t work so well (i.e. at all) for parameter types:

public class Feeder {
    public void feed(Dog d) {
        d.eat();
    }
    public void feed(Cat c) {
        c.eat();
    }
}

Feeder feeder = new Feeder();
Object o = new Dog();
feeder.feed(o); // Cannot compile!

This problem is called double dispatch, as it requires selecting a method based on both the instance type and the parameter type, which Java doesn’t handle natively. In order to make the code compile, the following is required:

if (o instanceof Dog) {
    feeder.feed((Dog) o);
} else if (o instanceof Cat) {
    feeder.feed((Cat) o);
} else {
    throw new RuntimeException("Invalid type");
}

This gets even more complex with more overloaded methods available – and exponentially so with more parameters. In maintenance phase, adding more overloaded methods requires reading the whole if chain and updating it. Multiple parameters are implemented through nested ifs, which is even worse for maintainability. The Visitor pattern is an elegant way to achieve the same, with no ifs, at the expense of a single additional method on the Animal interface.

public interface Animal {
    void eat();
    void accept(Visitor v);
}

public interface Visitor {
    void visit(Cat c);
    void visit(Dog d);
}

public class Cat implements Animal {
    public void eat() { ... }
    public void accept(Visitor v) {
        v.visit(this);
    }
}

public class Dog implements Animal {
    public void eat() { ... }
    public void accept(Visitor v) {
        v.visit(this);
    }
}

public class FeederVisitor implements Visitor {
    public void visit(Cat c) {
        new Feeder().feed(c);
    }
    public void visit(Dog d) {
        new Feeder().feed(d);
    }
}
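Wired together, dispatch happens with no instanceof in sight. The condensed, runnable sketch below deviates from the code above in one respect: the visit methods return a String instead of feeding, for the sole purpose of making the dispatch observable.

```java
public class VisitorDemo {

    interface Visitor {
        String visit(Cat c);
        String visit(Dog d);
    }

    interface Animal {
        String accept(Visitor v);
    }

    static class Cat implements Animal {
        public String accept(Visitor v) {
            return v.visit(this); // 'this' is a Cat, so visit(Cat) is selected
        }
    }

    static class Dog implements Animal {
        public String accept(Visitor v) {
            return v.visit(this); // 'this' is a Dog, so visit(Dog) is selected
        }
    }

    static class FeederVisitor implements Visitor {
        public String visit(Cat c) {
            return "Feeding a cat";
        }
        public String visit(Dog d) {
            return "Feeding a dog";
        }
    }

    static String feed(Animal a) {
        // Double dispatch: the runtime type of 'a' picks accept(), which picks visit()
        return a.accept(new FeederVisitor());
    }

    public static void main(String[] args) {
        System.out.println(feed(new Dog())); // Feeding a dog
        System.out.println(feed(new Cat())); // Feeding a cat
    }
}
```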

Benefits:

  • No evaluation logic anywhere
  • The only coupling between Animal and FeederVisitor is the visit() method
  • As a corollary, when adding new Animal subtypes, the Feeder type is left untouched
  • When adding new Animal subtypes, the FeederVisitor type may implement an additional method to handle it
  • Other cross-cutting logic may follow the same pattern, e.g. a “train” visitor to teach animals new tricks

It might seem overkill to go to such lengths for such a simple example. However, my experience has taught me that simple designs like the above are fated to become more complex as time passes.

Categories: Java Tags:

Introduction to Mutation Testing

April 20th, 2014 2 comments

Last week, I took some days off to attend the 3rd edition of Devoxx France 2014. As with oysters, the biggest talks do not necessarily contain the prettiest pearls. During this year’s edition, my revelation came from a 15-minute talk by my friend Alexandre Victoor, who introduced me to the wonders of Mutation Testing. Since I’m currently writing about Integration Testing, I’m very much interested in the Testing flavors I don’t know about.

Experienced software developers know not to put too much faith in code coverage metrics. Reasons include:

  • Some asserts may have been forgotten (purposely or not)
  • Code with no value, such as getters and setters, may have been tested
  • And so on…

Mutation Testing tries to go beyond code coverage metrics to increase one’s faith in tests. Here is how this is achieved: small code changes called mutations are introduced into the tested code. If a test still succeeds despite a code change, something is definitely fishy, as the test is worth nothing. As an example is worth a thousand words, here is a snippet that needs to be tested:

public class DiscountEngine {

    public Double apply(Double discount, Double price) {

        return (1 - discount.doubleValue()) * price.doubleValue();
    }
}

The testing code would be akin to:

public class DiscountEngineTest {

    private DiscountEngine discounter;

    @BeforeMethod
    protected void setUp() {

        discounter = new DiscountEngine();
    }

    @Test
    public void should_apply_discount() {

        Double price = discounter.apply(new Double(0.5), new Double(10));

        assertEquals(price, new Double(5));
    }
}

Now, imagine the assertEquals() line was forgotten: DiscountEngineTest would still pass. In that case, however, wrong code updates in DiscountEngine would go undetected. That’s where mutation testing enters the arena: if DiscountEngine is mutated and DiscountEngineTest still passes, it means nothing is actually tested.

PIT is a Java tool offering mutation testing. To achieve this, PIT creates a number of altered classes called mutants from the initial, un-mutated source class. Those mutants are then run against the existing tests targeting the original class. If a test still passes, there’s a problem and the mutant is said to have survived; if not, everything is fine, as the mutant has been killed. For a given mutant, this goes on until it gets killed, or until all tests targeting the class have been executed while it is still surviving.
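To make the idea concrete, here is a hand-written illustration of what a mutant of DiscountEngine could look like: the math operator is flipped, and a proper assertion immediately detects (kills) it. Note that mutantApply is purely illustrative; PIT generates its mutants in bytecode, not in source code.

```java
public class MutationDemo {

    // Original production code, as in DiscountEngine
    static double apply(double discount, double price) {
        return (1 - discount) * price;
    }

    // Hand-written equivalent of a math-operator mutant: '-' flipped to '+'
    static double mutantApply(double discount, double price) {
        return (1 + discount) * price;
    }

    public static void main(String[] args) {
        // A test asserting the expected value kills the mutant:
        System.out.println(apply(0.5, 10));       // 5.0 - the assertion passes
        System.out.println(mutantApply(0.5, 10)); // 15.0 - the assertion fails, mutant killed
    }
}
```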

Mutation testing in general, and PIT in particular, has a big disadvantage: the higher the number of mutants for a class, the higher the confidence in the results, but also the longer the time required to execute the tests. Therefore, it is advised to run Mutation Testing only on nightly builds. However, this cost is nothing compared to having trust in your tests again…

Out-of-the-box, PIT offers:

  • Maven integration
  • Ant integration
  • Command-line
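As a rough sketch, the Maven integration boils down to declaring the PIT plugin in the POM (the version number is omitted on purpose, and the package filter below is illustrative):

```xml
<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <configuration>
    <targetClasses>
      <param>ch.frankel.blog.*</param>
    </targetClasses>
  </configuration>
</plugin>
```

Mutation coverage is then computed with mvn org.pitest:pitest-maven:mutationCoverage.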

Also, Alexandre has written a dedicated plugin for Sonar.

Source code for this article can be found in IntelliJ/Maven format there.

Categories: Java Tags: ,

Can we put an end to this ‘Estimate’ game of fools?

April 13th, 2014 4 comments

When I was a young software programmer, I had to develop features with estimates given by more senior programmers. If more time was required for the task, I had to explain the reasons – and I’d better be convincing about that. After some years, I became the one who had to provide feature estimates, but this did not mean it was easier: if the development team took more time to develop, I had to justify it to my management. Now, after even more years, I have to provide estimates for entire projects, not just fine-grained features.

But in essence, what do we need estimates for? For big enough companies, they are required by internal processes. Whatever the process, it goes somewhat along these lines: in order to start a project, one needs estimates in order to request a budget, which is then approved (or not) by management.

Guess what? It doesn’t work, it never did and I’m pretty sure it never will.

Note: some organizations are smart enough to realize this and couple estimates with a confidence factor. Too bad this factor has no place in an Excel sheet and is lost at some point during data aggregation :-(

Unfortunately, my experience is the following: estimates are nearly always undervalued! Most of the time, this has the following consequences (which are not mutually exclusive):

  1. In general, the first option is to cancel all planned vacations of team members. The second step is to make members work longer hours, soon followed by cutting into week-ends so they work 6 days out of 7. While effective in the very short term, it brings the team’s productivity down very soon afterwards. People need rest and spirit – and developers are people too!
  2. After pressuring the development team, it’s time to negotiate. In this phase, project management goes to the customer and tries to strike a deal to remove parts of the project scope (if it was ever defined…). However, even if the scope is reduced, it is generally not enough to finish on budget and on time.
  3. The last step is the most painful: going back to the powers that be and asking for more budget. It is painful for management, because it means acknowledging failure. At this point, forget the initial schedule.
  4. Meanwhile and afterwards, management will communicate that everything is fine and going according to plan. Human nature…

In some cases (e.g. a bid), it’s even worse, as the project manager will frequently (always?) complain that those estimates are too high and pressure you to lower them, despite the fact that lowering estimates never lowers the workload.

You could question why estimates are always wrong. Well, that is not the point of this post, but my favorite answer is that Civil Engineering is around 5,000 years old and civil engineers still rarely get their estimates right, while our profession is barely 50 years old and its technology and methodologies keep changing all the time.

I’m not a methodologist, not a Project Manager, not a Scrum Master… only a Software Architect. I don’t know if Agile will save the world; however, I have witnessed first-hand every upfront estimate attempt end in failure. I can only play this game of fools for so long, because we’re all doomed to lose by participating in it.

My recap of JavaLand

March 30th, 2014 No comments

This week, I have been attending the first edition of JavaLand in Brühl, organized by the Oracle User Groups of Germany, Austria and Switzerland. Here’s a quick recap of my experience there.

The first thing that deserves special mention is that the event took place in a theme park. Picture this: an empty theme park (or more likely a part of it), opened only for specially privileged geeks (like me).

You can imagine some people riding this stuff non-stop, like 8 times in a row – where in standard conditions you would wait about 1.5 hours to enjoy it. Pretty crazy, huh?

Now, the serious stuff: sessions I attended. Note that I could only choose from English sessions, as my German is quite poor.

Is It A Car? Is It A Computer? No, It’s a Raspberry Pi Java Carputer by Simon Ritter
A feedback session on how to wire a Raspberry Pi into one’s car and get car metrics – such as torque – nicely shown on a graphic display. Doing that requires some hands-on manipulation, as well as some hardware knowledge, but if you happen to have both, you can really create great stuff. Despite having neither, I learned that modern-day cars provide interfaces to read data from.
JavaFX Apps in the Real World
Six speakers for a 45-minute slot, of varying interest. The most stunning piece was a demo of a JavaFX application involving Sudoku grids: the app is able to parse a Sudoku grid presented to a connected webcam, solve it, and then display the result on the video image embedded in the app!
55 New Features in Java SE 8 by Simon Ritter
A nice refresher on all the new stuff brought by Java 8. Of course, lambdas were (again) much talked about, but also static and default methods for interfaces, as well as the new APIs (including Date & Time) and various enhancements. This was a very worthwhile talk in a limited amount of time.
JVM and application bottlenecks troubleshooting with simple tools by Daniel Witkowski
A description of the JVM, its memory model (Eden, Young Generation, Old Generation, …) and its available options. Despite the speaker’s warning and apparent good will, I couldn’t quite shake the feeling this was a sales pitch: the conclusion was that JVM tuning is hard (as if people didn’t know) and the simple tool referenced in the title is a commercial product.
Modular JavaScript by Sander Mak, Paul Bakker
Really interesting stuff about designing JavaScript applications with modularity in mind. Basically, a detailed typology of JavaScript modularity solutions for different locations (server, client, or both) was presented. In particular, the talk described Asynchronous Module Definition (AMD) and the RequireJS implementation. I definitely have to re-read the slides, once they are available.
Spring 4, Java EE 7 or Both? by Ivar Grimstad
This talk piqued my curiosity. After presenting Spring 4 and Java EE 7 in the context of web applications, the speaker showed leads on how to develop applications integrating both Spring 4 and Java EE 7. However, IMHO, the talk missed the real point: why would one do that? I would have expected this to be the main point of the talk…
Apache TomEE, JavaEE Web Profile and More on Tomcat by David Blevins
A presentation of the famed TomEE (pronounced Tommy) application server, as well as its backing company, Tomitribe. I learned nothing new, but it was a pleasure to see David Blevins on stage!
The Adventurous Developer’s Guide to Application Servers by Simon Maple & Oliver White
A really good show about Rebel Labs’ Great Java Application Server Debate, with interactive data from the audience fed back into the presentation. I had already read the report, but the presentation was really entertaining.
Testing the Enterprise Layers: The ABCs of Integration Testing by Andrew Lee Rubinger
A presentation built around the Continuous Enterprise Development book, written by the speaker and Aslak Knutsen. The talk presented the basics as well as the Arquillian tool. I really have to read the book!

Myself, I presented Cargo Culting and Memes in JavaLand (I’ll let people say what they thought about it in the comments).

All in all, I have to say that for a first edition, it was really interesting as well as packed with geekiness. See you there next year, and Jatumba!

Categories: Event Tags:

Vaadin and Spring integration through JavaConfig

March 9th, 2014 No comments

When I wrote the first version of Learning Vaadin, I hinted at how to integrate Vaadin with the Spring framework (as well as with CDI). I only described the overall approach, providing a crude servlet that queried the Spring context to get the Application instance.

At the time of Learning Vaadin 7, I was eager to look at the add-ons the community provided in terms of Spring integration. Unfortunately, I was sorely disappointed: I found only a few, and those were lacking in one way or another. The only thing worth mentioning was an article by Petter Holmström – a Vaadin team member (and volunteer fireman) – describing how one should go about achieving Vaadin & Spring integration. It was much more advanced than my own rant, but still not a true ready-to-use library.

So, when I learned that the Vaadin and Spring teams had joined forces to provide a true integration library between two frameworks I love, I was overjoyed. Even better, the project was developed by none other than Petter for Vaadin and Josh Long for Pivotal. However, the project was aimed at achieving DI through auto-wiring. Since JavaConfig makes for cleaner and more testable code, I filed an issue to allow that. Petter kindly worked on it, and in turn, I spent some time making it work.

The result of my experimentation with Spring Boot Vaadin integration has been published on morevaadin.com, a blog exclusively dedicated to Vaadin.

Categories: JavaEE Tags: , ,

Teaser for Cargo Culting and Memes in JavaLand

March 2nd, 2014 No comments

At the end of the month, I will be speaking at the JavaLand conference in a talk called “Cargo Culting and Memes in JavaLand”. Without revealing too much about the talk, here is a teaser article that I hope will make you book a place at JavaLand, if you have not done so yet.

Basically, Cargo Culting is a way to reproduce the outer aspects of something in order to gain its properties. It became famous just after WWII, when tribes in the Pacific began building makeshift radios and airstrips as a way to attract planes full of cargo – hence the name. In software, this is unfortunately more widespread than expected, given our scientific and rational background.

Take the Data Transfer Object, also known as Transfer Object, for example. This pattern was originally proposed as a way to transfer business data despite the limitations of Entity EJBs. Strangely, despite Entity Beans never having been extensively used in the past (to say the least), and being used even less (read – not at all) nowadays, DTOs are like a Pavlovian reaction for many a software developer setting up a new Java web project.

Cargo Culting is bad, but it is just applying a technique with no understanding of the context. It can be mitigated, and even removed completely, if Cultists are willing to listen to reason and improve their skills. Memes up the ante to a whole new level, as they are based on faith – and how can you convince a believer to renounce his faith? An exchange between a Software Engineer driven by logic and one following a Meme would go like this:

- Good code is self-documented, it doesn’t require comments!
- But obviously, this snippet requires comments…
- Then it’s bad code!!!

There’s no way to convince either party that there might be a kernel of truth in the opposing point of view.

If at this point, you want to know more, you know what to do: meet me at JavaLand for the whole talk!

Categories: Development Tags:

Chaining URL View resolvers in Spring MVC

February 16th, 2014 3 comments

The standard Java EE way to forward to internal resources goes something like this:

public class MyServlet extends HttpServlet {

  @Override
  public void doGet(HttpServletRequest req, HttpServletResponse resp)
      throws ServletException, IOException {

    req.getRequestDispatcher("/WEB-INF/page/my.jsp").forward(req, resp);
  }
}

Admittedly, there’s no decoupling between the servlet code and the view technology, not even with the JSP location.

Spring MVC introduces the notion of a ViewResolver. The controller just handles logical names; mapping between the logical name and the actual resource is handled by the ViewResolver. Even better, controllers are completely independent from resolvers: just registering the latter in the Spring context is enough.

Here’s a very basic controller; notice there’s no hint as to the final resource location.

@Controller
public class MyController {

  @RequestMapping("/logical")
  public String displayLogicalResource() {

    return "my";
  }
}

Even better, there’s nothing here as to the resource’s nature; it could be a JSP, an HTML page, a Tiles view, an Excel sheet, whatever. Each has a location strategy based on a dedicated ViewResolver. The most used resolver is the InternalResourceViewResolver; it is meant to forward to internal resources – most of the time, JSPs. It is initialized like this:

@Bean
public ViewResolver pageViewResolver() {

  InternalResourceViewResolver resolver = new InternalResourceViewResolver();

  resolver.setPrefix("/WEB-INF/page/");
  resolver.setSuffix(".jsp");

  return resolver;
}

Given this view resolver is available in the Spring context, the logical name "my" will be resolved to the "/WEB-INF/page/my.jsp" path. If the resource exists, fine; otherwise, Spring MVC will return a 404.

Now, what if I have different folders with JSPs? I expect to be able to configure two different view resolvers, one with a certain prefix, the other with a different one. I also expect them to be checked in a determined order, falling back from the first to the last. Spring MVC offers multiple resolvers with deterministic ordering, with a big caveat: it does not apply to InternalResourceViewResolver!

Quoting Spring MVC Javadoc:

When chaining ViewResolvers, an InternalResourceViewResolver always needs to be last, as it will attempt to resolve any view name, no matter whether the underlying resource actually exists.

This means I cannot configure two InternalResourceViewResolvers in my context – or more precisely, I can, but the first will terminate the lookup process. The reasoning behind this (as well as the actual code) is that the resolver gets a handle on the RequestDispatcher configured with the resource path. Only much later is the dispatcher forwarded to, only to find that the resource does not exist.

To me, this is not acceptable, as my use-case is commonplace. Furthermore, configuring only "/WEB-INF" as prefix and returning the rest of the path ("/page/my") is out of the question, as it ultimately defeats the purpose of decoupling the logical name from the resource location. Worst of all, I’ve seen controller code such as the following to cope with this limitation:

return getViews().get("my"); // The controller has a Map view property with "my" as key and the complete path as the "value"

I thought there must be a more Spring-ish way to achieve this, and I’ve come to what I think is an elegant solution, in the form of a ViewResolver that checks whether the resource exists.

public class ChainableUrlBasedViewResolver extends UrlBasedViewResolver {

  public ChainableUrlBasedViewResolver() {

      setViewClass(InternalResourceView.class);
  }

  @Override
  protected AbstractUrlBasedView buildView(String viewName) throws Exception {

    String url = getPrefix() + viewName + getSuffix();

    InputStream stream = getServletContext().getResourceAsStream(url);

    if (stream == null) {

      return new NonExistentView();
    }

    return super.buildView(viewName);
  }

  private static class NonExistentView extends AbstractUrlBasedView {

    @Override
    protected boolean isUrlRequired() {

        return false;
    }

    @Override
    public boolean checkResource(Locale locale) throws Exception {

      return false;
    }

    @Override
    protected void renderMergedOutputModel(Map<String, Object> model,
                                           HttpServletRequest request,
                                           HttpServletResponse response) throws Exception {

      // Purposely empty, it should never get called
    }
  }
}

My first attempt was to return null from the buildView() method. Unfortunately, some NPE was thrown later in the code. Therefore, the method returns a view that (a) tells the caller that the underlying resource does not exist and (b) does not require its URL to be checked (it also fails at some point if this is not set).

I’m pretty happy with this solution, as it enables me to configure my context like this:

@Configuration
@EnableWebMvc
@ComponentScan(basePackages = "ch.frankel.blog.spring.viewresolver.controller")
public class WebConfig {

  @Bean
  public ViewResolver pageViewResolver() {

    UrlBasedViewResolver resolver = new ChainableUrlBasedViewResolver();

    resolver.setPrefix("/WEB-INF/page/");
    resolver.setSuffix(".jsp");
    resolver.setOrder(0);

    return resolver;
  }

  @Bean
  public ViewResolver jspViewResolver() {

    InternalResourceViewResolver resolver = new InternalResourceViewResolver();

    resolver.setPrefix("/WEB-INF/jsp/");
    resolver.setSuffix(".jsp");
    resolver.setOrder(1);

    return resolver;
  }
}

Now I’m pretty well within the Spring philosophy: I’m completely decoupled, and I’m using Spring’s nominal resolver ordering. The only con is that one resource can shadow another by having the same logical name point to different resources through different view resolvers. As this is already the case with multiple view resolvers, I’m ready to accept the risk.

A showcase project can be found here in IntelliJ IDEA/Maven format.

Categories: JavaEE Tags:

Reusing front-end components in web applications

February 9th, 2014 2 comments

In the Java SE realm, GUI components are based on Java classes with the help of libraries such as AWT, Swing or the newer JavaFX. As such, they can be shared across projects, to be inherited and composed.

Things are entirely different in the Java EE world, as GUI components are completely heterogeneous in nature: they may include static HTML pages, JavaScript files, stylesheets, images, Java Server Pages or Java Server Faces. Solutions to share these resources must be tailored to each type.

  1. Since Servlet 3.0 (Java EE 6), static resources, such as HTML, JavaScript, CSS and images can be shared quite easily. Those resources need to be packaged into the META-INF/resources folder of a JAR. At this point, putting the JAR inside the WEB-INF/lib folder of a webapp will make any such resource available at the webapp’s context root.
    A disadvantage of this approach is that shared resources are also exposed publicly, including JSPs that are not meant to be.
  2. An alternative for sharing resources protected under WEB-INF, which is also available before Servlet 3.0, is to leverage the build tool. In this case, Maven offers a so-called overlay feature through the Maven WAR plugin. This requires adding the WAR containing the resources as a dependency, as well as some POM configuration.
    <project...>
      ...
      <build>
        <plugins>
          <plugin>
            <artifactId>maven-war-plugin</artifactId>
            <version>2.4</version>
            <configuration>
              <overlays>
                <overlay>
                  <groupId>ch.frankel.blog</groupId>
                  <artifactId>resources</artifactId>
                </overlay>
              </overlays>
            </configuration>
          </plugin>
        </plugins>
      </build>
      <dependencies>
        <dependency>
          <groupId>ch.frankel.blog</groupId>
          <artifactId>resources</artifactId>
          <version>1.0.0</version>
          <type>war</type>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </project>

    At this point, resources belonging to the dependent WAR artifact will be copied into the project at build-time. Note that resources existing in the project may be overwritten… on purpose or by accident. The biggest disadvantage of WAR overlays, however, is that resources have to be packaged in a WAR artifact, while the corresponding classes have to be in another JAR artifact.

  3. I haven’t much experience with Java Server Faces technology, but it seems sharing pages across different webapps requires the use of a ResourceResolver.
  4. Finally, some frameworks are entirely built toward sharing GUI resources. For example, with Vaadin, GUI components are based on Java classes, as in Java SE, thus making those components inheritable and composable. Furthermore, using images can be achieved in a few lines of code and is easy as pie:
    Image image = new Image("My image", new ClassResource("ch/frankel/blog/resources/image.png"));
    ui.setContent(image);

I think Java EE is sadly lacking regarding reuse of front-end resources. Of course, one can choose client-based frameworks to overcome this limitation, though they bring their own pros and cons. In all cases, ease of reuse should be an important criterion for choosing front-end technologies.

Categories: JavaEE Tags: ,

Doubly geeky stuff: AngularJS meets Marvel comics

February 2nd, 2014 4 comments

Let’s face it: despite us having very serious titles like Principal Consultant, Senior Software Architect or Team Leader, most of us are geeks through and through. Each shows it in a different way; some fiddle with machines, some like cosplay, me I like comic books.

When I learned that Marvel Comics provides a developer REST API, I couldn’t resist playing with it. I’m more of a back-end guy, and though I love Vaadin, using it to call REST services would be like proxying with no added value. Though I have no prior experience with AngularJS, it is a much more relevant option in this case. This article is by no means a how-to; on the contrary, it contains many questions, and some answers I’ve come across. In no particular order, those are the following:

Using webjars with Maven
Front-end is front-end, period. I used Maven with webjars to get the AngularJS and Bootstrap dependencies, but it doesn’t add anything. Worse, it adds an unnecessary build step. It would have been better to use a relevant tool like Bower. Lesson learned: use tools tailored to your language.
Using Tomcat inside IntelliJ IDEA
Likewise, using a back-end tool means no added value but added complexity. It only slows down the development process. A simple web server would have been good enough.
Knowing about $resource
After using $http for a few hours, I learned about $resource. It is much better and more usable than $http. However, it requires an optional AngularJS module, ngResource, which in turn needs an additional <script> include in the HTML page.
AngularJS and Bootstrap integration
I’ve used Bootstrap for styling, because I suck at writing style sheets myself. Fortunately, there’s a dedicated Bootstrap directive library for AngularJS, but I didn’t use it.
Fragment cache
I used ngView to create a Single-Page Application. However, AngularJS has some powerful caching features that prevent the rendered page from being updated. In order to bypass caching, I used Firefox’s private window.
Paging synchronization
The character list REST service has paging capabilities. Using $index in ngRepeat for numbering, together with $resource fetching, I get some de-synchronization between the renumbering, which occurs nearly instantly, and the resource fetching.
Variable columns
I have tried multiple combinations of the ngClass, ngSwitch and ngIf directives to have div columns span different Bootstrap units… to no avail. Any suggestions?

The project has been published on GitHub. Pull requests are welcome, as well as advice, so I can improve. In all cases, please provide the reasoning behind it; I’m an engineer, after all.

Categories: Development Tags: