Archive for the ‘Java’ Category

Avoid conditional logic in @Configuration

December 7th, 2014

Integration Testing Spring applications mandates creating small, dedicated configuration fragments and assembling them either during a normal run of the application or during tests. Even in the latter case, different fragments can be assembled in different tests.

However, this practice doesn’t handle the use-case where I want to use the application in two different environments. As an example, I might want to use a JNDI datasource in deployed environments and a direct connection when developing on my local machine. Assembling different fragment combinations is not possible, as I want to run the application in both cases, not test it.

My only requirement is that the default should use the JNDI datasource, while activating a flag – a profile – should switch to the direct connection. The Pavlovian reflex in this case would be to add a simple condition in the @Configuration class.

@Configuration
public class MyConfiguration {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() throws Exception {
        if (env.acceptsProfiles("dev")) {
            // Local development: direct connection to a file-based H2 database
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:file:~/localisatordb");
            dataSource.setUsername("sa");
            return dataSource;
        }
        // Default: look up the container-provided datasource through JNDI
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

Starting to use this kind of flow-control statement is the beginning of the end: it will lead to more control-flow statements in the future, which will in turn lead to a tangled mess of spaghetti configuration, and ultimately to an unmaintainable application.

Spring Boot offers a nice alternative to handle this use-case with its different flavors of @ConditionalXXX annotations. Using them has the following advantages while getting the job done: they are easy to use, readable and limited. While the latter point might seem a drawback, it’s their biggest asset IMHO (not unlike Maven plugins). Code is powerful, and with great power comes great responsibility – something that is hardly sustainable during the course of a project with deadlines and pressure from the higher-ups. That’s the main reason one of my colleagues advocates XML over JavaConfig: with XML, you’re sure there won’t be any abuse while the project runs its course.

But let’s put the philosophy aside and get back to the @ConditionalXXX annotations. Basically, putting such an annotation on a @Bean method will invoke the method and put the bean in the factory based on a dedicated condition. There are many of them; here are some important ones:

  • Dependent on Java version, newer or older – @ConditionalOnJava
  • Dependent on a bean present in factory – @ConditionalOnBean, and its opposite, dependent on a bean name not present – @ConditionalOnMissingBean
  • Dependent on a class present on the classpath – @ConditionalOnClass, and its opposite @ConditionalOnMissingClass
  • Whether it’s a web application or not – @ConditionalOnWebApplication and @ConditionalOnNotWebApplication
  • etc.

Note that the whole list of existing conditions can be browsed in Spring Boot’s org.springframework.boot.autoconfigure.condition package.

With this information, we can migrate the above snippet to a more robust implementation:

@Configuration
public class MyConfiguration {

    @Bean
    @Profile("dev")
    public DataSource dataSource() throws Exception {
        // Registered only when the dev profile is active
        org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:file:~/localisatordb");
        dataSource.setUsername("sa");
        return dataSource;
    }

    @Bean
    @ConditionalOnMissingBean(DataSource.class)
    public DataSource jndiDataSource() {
        // Fallback: registered only when no other DataSource bean exists
        JndiDataSourceLookup dataSourceLookup = new JndiDataSourceLookup();
        return dataSourceLookup.getDataSource("java:comp/env/jdbc/conditional");
    }
}

The configuration is now neatly separated into two different methods: the first is called only when the dev profile is active, while the second is called only when the first isn’t – hence when the dev profile is not active.

Finally, the best thing about this feature is that it is easily extensible, as it depends only on the @Conditional annotation and the Condition interface (which are part of Spring proper, not Spring Boot).
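
As an illustration, here’s a minimal sketch of a custom condition – the property name is made up for the example:

import org.springframework.context.annotation.Condition;
import org.springframework.context.annotation.ConditionContext;
import org.springframework.core.type.AnnotatedTypeMetadata;

// Hypothetical condition: matches when the 'datasource.local' property is set
public class OnLocalDataSourceCondition implements Condition {

    @Override
    public boolean matches(ConditionContext context, AnnotatedTypeMetadata metadata) {
        return context.getEnvironment().containsProperty("datasource.local");
    }
}

A @Bean method annotated with @Conditional(OnLocalDataSourceCondition.class) would then put the bean in the factory only when the property is set.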

Here’s a simple example in Maven/IntelliJ format for you to play with. Have fun!


Metrics, metrics everywhere

November 16th, 2014

With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

  1. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and on which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind), and metrics might also be required for different architectures. Besides, having all metrics in the same backend means they can be correlated easily.
  2. Metrics, as any other NFR (e.g. logging and exception handling), should be designed and managed upfront, not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize NFRs were important. Trust me when I say it was gory – and it cost much more than if it had been designed in the early phases of the project.
  3. Metrics have an overhead. However, without metrics, it’s not possible to increase performance. Just accept that and live with it.

The inputs are the following: the application is Spring MVC-based and metrics have to be aggregated in Graphite. We will start by using the excellent Metrics project: not only does it get the job done, its documentation is of very high quality and it’s available under the friendly OpenSource Apache v2.0 license.

That said, let’s imagine a “standard” base architecture to manage those components.

First, though Metrics offers a Graphite endpoint, using it directly requires configuration in each environment, which makes things harder, especially on developers’ workstations. To manage this, we’ll send metrics to JMX and introduce jmxtrans as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration where none is needed – and has no impact on performance.

Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are – or sometimes because we didn’t browse the documentation. My point of view as a software engineer is that I’d rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already integrates with Metrics through the Actuator component. However, it only provides a GaugeService – to send single values – and a CounterService – to increment/decrement values. This might be good enough for FRs but not for NFRs, so we might want to tweak things a little.

The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

The starting point is to create an aspect, as performance metric is a cross-cutting concern:

@Aspect
public class MetricAspect {

    private final MetricSender metricSender;

    @Autowired
    public MetricAspect(MetricSender metricSender) {
        this.metricSender = metricSender;
    }

    @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) ||execution(* ch.frankel.blog.metrics.dice..*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
        StopWatch stopWatch = metricSender.getStartedStopWatch();
        try {
            return pjp.proceed();
        } finally {
            Class clazz = pjp.getTarget().getClass();
            String methodName = pjp.getSignature().getName();
            metricSender.stopAndSend(stopWatch, clazz, methodName);
        }
    }
}

The only thing out of the ordinary is the usage of autowiring, as aspects don’t seem to be able to be the target of explicit wiring (yet?). Also notice the aspect itself doesn’t interact with the Metrics API; it only delegates to a dedicated component:

public class MetricSender {

    private final MetricRegistry registry;

    public MetricSender(MetricRegistry registry) {
        this.registry = registry;
    }

    private Histogram getOrAdd(String metricsName) {
        Map<String, Histogram> registeredHistograms = registry.getHistograms();
        Histogram registeredHistogram = registeredHistograms.get(metricsName);
        if (registeredHistogram == null) {
            Reservoir reservoir = new ExponentiallyDecayingReservoir();
            registeredHistogram = new Histogram(reservoir);
            registry.register(metricsName, registeredHistogram);
        }
        return registeredHistogram;
    }

    public StopWatch getStartedStopWatch() {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        return stopWatch;
    }

    private String computeMetricName(Class clazz, String methodName) {
        return clazz.getName() + '.' + methodName;
    }

    public void stopAndSend(StopWatch stopWatch, Class clazz, String methodName) {
        stopWatch.stop();
        String metricName = computeMetricName(clazz, methodName);
        getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
    }
}

The sender does several interesting things (while keeping no state):

  • It returns a new StopWatch for the aspect to pass back after method execution
  • It computes the metric name depending on the class and the method
  • It stops the StopWatch and sends the time to the MetricRegistry
  • Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our needs.

The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method.

@Configuration
public class MetricsConfiguration {

    @Autowired
    private MetricRegistry metricRegistry;

    @Bean
    public MetricSender metricSender() {
        return new MetricSender(metricRegistry);
    }

    @PostConstruct
    public void connectRegistryToJmx() {
        JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
        reporter.start();
    }
}
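
For completeness, the aspect itself also has to be registered as a bean, with proxying enabled, for the @Around advice to kick in; a minimal sketch (wiring details may differ in the article’s sample project):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;

@Configuration
@EnableAspectJAutoProxy
public class AspectConfiguration {

    // The @Aspect-annotated bean is picked up by the auto-proxy creator;
    // its MetricSender dependency is injected from the context
    @Bean
    public MetricAspect metricAspect(MetricSender metricSender) {
        return new MetricAspect(metricSender);
    }
}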

In JConsole, the new metrics should now be visible under the relevant MBeans. Icing on the cake, all default Spring Boot metrics are also available.

Sources for this article are available in Maven “format”.


On resources scarcity, application servers and micro-services

October 26th, 2014

While attending JavaZone recently, I went to a talk by Neal Ford. To be honest, the talk was not mind-blowing – many of the tools he showed were either outdated or not best-of-breed – but he stated a very important fact: application servers are meant to address resource scarcity by sharing resources, while those resources are no longer scarce in this day and age.

In fact, this completely matches my experience. Remember 10 years ago, when we had to order hardware 6 months in advance? At that time, all webapps were deployed on the same application server – which wasn’t always clustered. After some years, I noticed that application servers grew in number. Some even had to be clustered because we couldn’t afford for the offered service to go down. That was when people started to think about which application to deploy on which application server, because of their specific load profiles. In later years, a new behavior appeared: deploying a single webapp to a single application server, because it was deemed too critical to be potentially affected by other apps on the same server. And that sometimes led to the practice of doing so for every application, whether it was seen as critical or not.

Nowadays, hardware is available instantly, in any required quantity and for next to nothing: they call it the Cloud. Then why are we still using application servers? I think the people behind the Spring framework asked themselves the same question and came up with a radical answer: we don’t need them. Spring has always been about pragmatic development when JavaEE (called J2EE back then) was still a big bloated standard, full of barbaric acronyms and a real pain in the ass to develop with (remember EJB 2.0?) – no wonder so many developers bitch about Java. Spring valued simple JSP/Servlet containers over fully-compliant JavaEE servers, and now they have finally crossed the Rubicon, as no external application server is necessary anymore.

When I first heard about this, I was flabbergasted, but in the age of micro-services, I guess it makes pretty much sense. Imagine you just finished developing your application. Instead of creating a WAR, an EAR or whatever package you normally create, you just push to a Git repo. Then a hook pushes the code to the server, stops the existing application and starts it again. Wouldn’t that be not only fun but really Agile/DevOps/whatever-cool-concept-you-want? I think it would, and that’s exactly the kind of deployment Spring Boot allows. This is not the only feature of Spring Boot – it also provides real convention over configuration, useful Maven POMs, out-of-the-box metrics and health checks and much, much more – but embedding Tomcat is the most important one (IMHO).
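
To give an idea, the whole “no application server” model boils down to a single main class; a minimal sketch with the Spring Boot of that era (class name illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableAutoConfiguration
@ComponentScan
public class Application {

    public static void main(String[] args) {
        // Boots the Spring context and the embedded Tomcat in a plain JVM process
        SpringApplication.run(Application.class, args);
    }
}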

On the opposite side, big shops such as IBM, Oracle and even Red Hat still invest huge sums of money into developing their full-profile JavaEE-compliant application servers. The funny thing is that in order to be JavaEE-compliant, you have to implement “interesting” bits such as the Java Connector Architecture, something I’ve seen only once, early in my career, to connect to a CICS. Interestingly, the Web Profile defines a lightweight standard, leaving out JCA… but also JavaMail. Still, it is a step in the lightweight direction.

Now, only the future will tell what happens next and how, but I can see a trend forming there.


You shouldn’t follow rules… blindly

October 12th, 2014

Some resources on the Internet are written in a very imperative style – you must do things this way. And beware those who don’t follow the rule! They remind me of a French military joke (or more precisely a joke about the military) – though I guess other countries probably have their own version – regarding military rules. These rules are quite simple and can be summarized in two articles:

Art. 1: It’s mandatory to obey the orders of a superior.

Art. 2: When the superior is obviously wrong, refer to article 1.

What applies to the military domain, however, doesn’t apply to the software domain. I’ve been arguing for a long time that good practices have to be put in context, so that a specific practice may be right in one context but plain wrong in another. The reason is that in the latter case, disadvantages outweigh advantages. Of course, some practices have a wider scope than others. I mistakenly thought that some even had an all-encompassing scope, meaning they bring so many benefits that they apply in all contexts. I have been proven wrong this week, regarding 2 such practices I use:

  • Use JavaConfig over XML for Spring configuration
  • Use constructor injection over attribute injection for Dependency Injection

The use-case is the development of Spring CGLIB-based aspects (the codebase is legacy and interfaces may or may not exist) to collect memory metrics. I must admit this context is very specific, but that doesn’t change the fact that it’s still a context.

First things first: Spring aspects are not yet completely compatible with JavaConfig – and in any case, the Spring version is also legacy (3.x), so JavaConfig is out of the question. But what about annotations, at least? In this case, two annotations come into play: @Aspect for the class and @Around for the method to be used. The first is used in a very straightforward way, while the second needs to be passed the pointcut… as a String argument.

@Aspect
public class MetricsCollectorAspect {

    @Around("execution(...)") // This spans many many lines
    public Object collectMetrics(ProceedingJoinPoint pjp) throws Throwable {
        ...
    }
}

The corresponding XML is the following:

<bean id="metricsCollectorAspect" class="MetricsCollectorAspect"/>

<aop:config>
    <aop:aspect ref="metricsCollectorAspect">
        <aop:around method="collectMetrics" pointcut="execution(...)"/> <!-- This spans many many lines -->
    </aop:aspect>
</aop:config>

Benefits of using annotations over XML? None. Besides, the platform product we use doesn’t embed Spring configuration fragments, so it’s quite easy to update the configuration – pointcut included – and check the results in a deployed environment. XML: 1, annotations: 0.

Another fun one: I’ve been an ardent defender of constructor injection. It has several advantages, including highlighting dependencies, less boilerplate code and immutability. However, the 3.x version of Spring uses a version of CGLIB that cannot create proxies when there’s no no-args constructor on the proxied class. The paradox is that “good” design prevents proxying, while “bad” design – attribute injection with a no-args constructor – allows it. Sure, there are a couple of solutions: add interfaces to allow pure Spring proxies, add a no-args constructor on the classes to be proxied, or filter out those unproxyable classes – but none of them is without impact.
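
For the record, here’s a minimal sketch of the “bad” (but proxyable) shape; all names are illustrative:

public class MetricsCollector {

    @Autowired
    private MetricsRepository repository; // hypothetical dependency, field-injected

    // No explicit constructor: the compiler provides a no-args one,
    // which lets CGLIB generate a subclass proxy
}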

Moral: rules are meant to help you, not hinder you. If you cannot follow one for a good reason (like a prohibitive cost), just ignore it – and write down the reason in comments for your code’s future maintainers.


Throwing a NullPointerException… or not

September 28th, 2014

This week, I’ve relived an experience from a few years ago, but from the opposite seat.

As a software architect/team leader/technical lead (pick the term you’re most comfortable with), I was doing code reviews on a project we were working on and stumbled upon code that looked like this:

public void someMethod(Type parameter) {
    if (parameter == null) {
        throw new NullPointerException("Parameter Type cannot be null");
    }
    // Rest of the method
}

I was horribly shocked! Applicative code throwing a NullPointerException – that was a big coding mistake. So I gently pointed out to the developer that it was a bad idea and that I’d like him to throw an IllegalArgumentException instead, which is exactly the exception type matching the use-case. This was very clear in my head, until the dev pointed me to the NullPointerException’s Javadoc. For simplicity’s sake, here’s the juicy part:

Thrown when an application attempts to use null in a case where an object is required. These include:

  • Calling the instance method of a null object.
  • Accessing or modifying the field of a null object.
  • Taking the length of null as if it were an array.
  • Accessing or modifying the slots of null as if it were an array.
  • Throwing null as if it were a Throwable value.

Applications should throw instances of this class to indicate other illegal uses of the null object.

Read the last sentence again: “Applications should throw instances of this class to indicate other illegal uses of the null object”. It seems to be legal for an application to throw an NPE – even recommended by the Javadoc.

In my previous case, I read it again and again… and ruled that yes, it was OK for an application to throw an NPE. Back to today: a dev reviewed a pull request from another one, found out it threw an NPE and ruled it was bad practice. I took the side of the former dev and said it was OK.
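
For what it’s worth, the JDK sides with that reading: since Java 7, Objects.requireNonNull() exists precisely to throw an NPE on illegal null arguments. The check above could thus be shortened to:

import java.util.Objects;

public void someMethod(Type parameter) {
    // Throws a NullPointerException with the given message if parameter is null
    Objects.requireNonNull(parameter, "Parameter Type cannot be null");
    // Rest of the method
}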

What would be your ruling and more importantly, why?

Note: whatever the decision, this is not a big deal and probably not worth the time you spend bickering about it ;-)


Another valid Open/Closed principle

September 15th, 2014

I guess many of you readers are familiar with the Open/Closed principle. It states that:

Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification
- Meyer, Bertrand (1988). Object-Oriented Software Construction

This other Open/Closed principle is completely unrelated, but just as important. It states that:

When “something” is opened by one’s code, one MUST close it as well!
-Me (2014). My bathroom

Simple enough? Then why do I regularly stumble upon database connections not closed (i.e. not released to the pool), sockets not closed, input streams not closed, output streams not closed, etc.? (To be honest, I too am guilty of sometimes forgetting the latter.) When resources are not released just after their use, it may generate memory leaks. Other possible consequences are related to the resource’s nature: for example, when database connections are not released, sooner or later the pool will be exhausted so that no new connections to the database can be acquired – and basically your application will be unusable, requiring a reboot of the application server.

Still, nothing prevents us from taking care of closing the resource; there are just a few little things to address:

  1. As seen above, closing a resource is not an option; it must be ensured. Java makes it possible through the finally block (so that a related try block is necessary as well).
  2. Most close() (or release() or destroy() or what have you) method signatures throw a checked exception. If you really want my opinion, this is a design mistake, but we developers have to deal with it (and to be honest, this is not the first time someone made a decision whose consequences I had to handle – that’s known as Management 1.0).

Here’s a very simple example:

File file = new File("/path/to/file");
FileInputStream fis = new FileInputStream(file);
try {
    // Do something with file
} finally {
    fis.close();
}

Note the close() call in the finally block uses a method whose signature throws a checked exception, which must be dealt with accordingly. As in most cases exceptions thrown while closing are hardly recoverable (read: not at all), not throwing them further is more than a valid option. Instead of further cluttering the code, just use Apache Commons IO’s IOUtils.closeQuietly() (or its database counterpart, Apache Commons DbUtils’ DbUtils.closeQuietly()).

The finally block above can be replaced as follows:

finally {
    // closeQuietly() handles null and swallows any exception thrown by close()
    IOUtils.closeQuietly(fis);
}

(Yes, logging that a resource cannot be released is a good practice in all cases; since closeQuietly() swallows the exception, wrap close() in its own try-catch if you need such a trace.)

Finally (no pun intended), one might remember that Java 7 brings the AutoCloseable interface along with the try-with-resources syntax. This makes things even simpler when the close() method signature doesn’t throw a checked exception:

try (StringReader reader = new StringReader(string)) {
    // Do something with reader
}

(Note that StringWriter.close()’s signature throws an exception, even though it does nothing)
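
When the close() signature does throw, try-with-resources still applies – the checked exception simply has to be handled (or declared); a minimal sketch:

try (FileInputStream fis = new FileInputStream(file)) {
    // Do something with file
} catch (IOException e) {
    // Covers exceptions from the body as well as from the implicit close()
}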

In conclusion, remember that Continuous Integration platforms are your friends, and tools such as Sonar (or Checkstyle, PMD and FindBugs) are a good way to check for unreleased resources.


Using exceptions when designing an API

August 31st, 2014

Many know the tradeoff of using exceptions while designing an application:

  • On one hand, using try-catch blocks nicely segregates regular code from exception-handling code
  • On the other hand, using exceptions has a definite performance cost for the JVM

Every time I’ve faced this quandary, I’ve ruled in favor of the former, because “premature optimization is evil”. However, this week proved to me that exception handling is a very serious decision when designing an API.

I’ve been working on improving the performance of our application and I’ve noticed many silent catches coming from the Spring framework (with the help of the excellent dynaTrace tool). The guilty lines come from the RequestContext.initContext() method:

if (this.webApplicationContext.containsBean(REQUEST_DATA_VALUE_PROCESSOR_BEAN_NAME)) {
    this.requestDataValueProcessor = this.webApplicationContext.getBean(
            REQUEST_DATA_VALUE_PROCESSOR_BEAN_NAME, RequestDataValueProcessor.class);
}

Looking at the Javadoc, it is clear that this method (and the lines above) is called each time the Spring framework handles a request. For web applications under heavy load, that means quite a lot! I provided a pass-through implementation of the RequestDataValueProcessor and patched one node of the cluster. After running more tests, we noticed response times were on average 5% faster on the patched node compared to the un-patched one. This is not my point, however.

Should an exception be thrown when the bean is not present in the context? I think not… as the above snippet confirms. Other situations, e.g. injecting dependencies, might call for an exception to be thrown, but in this case it has to be the responsibility of the calling code to throw it or not, depending on the exact situation.

There are plenty of viable alternatives to throwing exceptions:

  • Returning null – the intent of the code is not explicit without looking at the Javadoc, making this the worst option on our list
  • Returning an Optional – this makes the intent explicit compared to returning null. Of course, this requires Java 8
  • Returning Guava’s Optional – for those of us who are not fortunate enough to have Java 8
  • Returning one’s own Optional – if you don’t use Guava and prefer to embed your own copy of the class instead of relying on an external library
  • Returning a Try – cook up something like Scala’s Try, which wraps either (hence its old name – Either) the returned bean or an exception. In this case, however, the exception is not thrown but used like any other object – hence there will be no performance problem

Conclusion: when designing an API, one should really keep using exceptions for exceptional situations only.

As for the current situation, Spring’s BeanFactory class lies at the center of a web of dependencies, and its multiple getBean() implementations cannot easily be replaced with one of the above options without forfeiting backward compatibility. One solution, however, would be to provide additional getBeanSafe() methods (or a better name) using one of the above options, and then replace usages of their original counterparts step by step inside the Spring framework.
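
As a sketch only – no such method exists in Spring – here’s what a getBeanSafe() based on Java 8’s Optional could look like, beanFactory being the underlying BeanFactory:

import java.util.Optional;

public <T> Optional<T> getBeanSafe(String name, Class<T> requiredType) {
    // Returns an empty Optional instead of throwing NoSuchBeanDefinitionException
    if (beanFactory.containsBean(name)) {
        return Optional.of(beanFactory.getBean(name, requiredType));
    }
    return Optional.empty();
}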


Sanitizing webapp outputs as an afterthought

August 10th, 2014

For sure, software security should be part of every developer’s requirements: they should be explained and detailed before development starts. Unfortunately, in real life this is not always the case. Alternatively, even when it is, developers make mistakes and/or have to make do with tight (read: impossible) schedules. In the absence of automated security-checking tools, sooner or later an issue will appear.

I’ve been thinking about a way to sanitize the output of a large-scale legacy Spring MVC application in a reliable way (i.e. without going through each page to fix issues). Basically, there are 4 ways output is displayed in an HTML page:

  1. Form taglib (e.g. <form:input path="…"/>) – outputs a bean attribute
  2. Spring taglib (e.g. <spring:message code="…"/>) – outputs a message from a properties file
  3. Java Standard Tag Library (e.g. <c:out value="…"/>) – outputs a value
  4. Expression Language (e.g. ${pageContext.request.requestURI}) – outputs a value

Spring taglibs

Spring taglibs are a breeze to work with. Basically, Spring offers multiple ways to sanitize the output, each scope parameter having a possibility to be overridden by a narrower one:

  1. Application-scoped, with the boolean defaultHtmlEscape context parameter:
    <context-param>
      <param-name>defaultHtmlEscape</param-name>
      <param-value>true</param-value>
    </context-param>
  2. Page-scoped (i.e. all forms on the page), with the <spring:htmlEscape> tag
  3. Tag-scoped, with the htmlEscape attribute of the individual tag

There’s only one catch: the <spring:message> tag can take not only a code (the key in the property file) but also arguments – however, those are not escaped:

Hello, {0} {1}

A possible sanitization technique consists of the following steps:

  1. Create a new SanitizeMessageTag:
    • Inherit from Spring’s MessageTag
    • Override the relevant resolveArguments(Object) method
    • Use the desired sanitization technique (Spring uses its own HtmlUtils.htmlEscape(String))
  2. Copy the existing Spring TagLib Descriptor and create a new one out of it
  3. Update it to bind the message tag to the newly created SanitizeMessageTag class
  4. Last but not least, override the configuration of the taglib in the web deployment descriptor:
    <jsp-config>
      <taglib>
        <taglib-uri>http://www.springframework.org/tags</taglib-uri>
        <taglib-location>/WEB-INF/tld/sanitized-spring-form.tld</taglib-location>
      </taglib>
    </jsp-config>

    By default, the JavaEE specification mandates that the container look for TLDs inside JARs located under the WEB-INF/lib directory. It is also possible to configure them in the web deployment descriptor. However, this configuration takes precedence over automatic scanning.

This way, existing JSPs using the Spring taglib will automatically benefit from the new tag, with no page-by-page update necessary.
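
Here’s a minimal sketch of step 1, assuming the resolveArguments() signature of Spring 3.x/4.x:

import javax.servlet.jsp.JspException;
import org.springframework.web.servlet.tags.MessageTag;
import org.springframework.web.util.HtmlUtils;

public class SanitizeMessageTag extends MessageTag {

    @Override
    protected Object[] resolveArguments(Object arguments) throws JspException {
        Object[] resolved = super.resolveArguments(arguments);
        if (resolved != null) {
            for (int i = 0; i < resolved.length; i++) {
                // Only string arguments need HTML-escaping
                if (resolved[i] instanceof String) {
                    resolved[i] = HtmlUtils.htmlEscape((String) resolved[i]);
                }
            }
        }
        return resolved;
    }
}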

JSTL

The <c:out> tag works the same way as the <spring:message> one, the only difference being there’s no global configuration parameter, only an escapeXml tag attribute which defaults to false.

The same technique as above can be used to default to true instead.

EL

The EL syntax enables output outside of any taglib, so the previous TLD override technique cannot be used to solve this.

Not known to many developers, EL snippets are governed by so-called EL resolvers. Standard application servers (including servlet containers like Tomcat) provide standard EL resolvers, but it is also possible to add others at runtime.

Note: though only a single EL resolver can be set in the JSP context, the resolver hierarchy implements the Composite pattern, so it’s not an issue.

Steps required to sanitize EL syntax by default are:

  1. Subclass the relevant EL resolvers – those are ScopedAttributeELResolver, ImplicitObjectELResolver and BeanELResolver, since they may return strings
  2. For each, override the getValue() method:
    • Call super.getValue()
    • Check the return value
    • If it is a string, sanitize the value before returning it, otherwise, leave it as it is
  3. Create a ServletContextListener to register these new EL resolvers:
    public class SanitizeELResolverListener implements ServletContextListener {

        public void contextInitialized(ServletContextEvent event) {
            ServletContext context = event.getServletContext();
            JspFactory jspFactory = JspFactory.getDefaultFactory();
            JspApplicationContext jspApplicationContext = jspFactory.getJspApplicationContext(context);
            ELResolver sber = new SanitizeBeanELResolver();
            jspApplicationContext.addELResolver(sber);
            // Register other EL resolvers
        }

        public void contextDestroyed(ServletContextEvent event) {
            // Nothing to clean up
        }
    }

Summary

Trying to sanitize the output of an application after it has been developed is not the best way to raise developers’ awareness of security. However, dire situations require dire solutions. When the application has already been developed, the above approaches – one for taglibs, one for EL – show how to achieve this in a way that doesn’t impact existing code, and get the job done.


Session Fixation and how to fix it

August 3rd, 2014

These last few weeks, I’ve been tasked with fixing a number of security holes in our software. Since I’m not a security expert, I’ve been extremely interested in this, and have learned quite a few things. Among them is the Session Fixation attack.

The context is an online Java application. One part is available through simple HTTP, where you can do simple browsing; when you enter credentials and successfully log in, you’re switched to HTTPS. This is a very common setup found online. For example, Amazon works this way: you browse the product catalog and put products in your basket over HTTP, but as soon as you log in to check out, you’re switched to HTTPS.

Now, the attack scenario is the following:

  1. Alice visits this online application and gets the value of the JSESSIONID cookie returned by the server
  2. Alice crafts a link to the application, including the previous JSESSIONID
  3. Alice sends the link to Bob
  4. Bob clicks on the link sent (rather stupidly, I’d say) or copies the link to his browser (the result is the same)
  5. In the same session, Bob enters his credentials to enter the secured part of the application. He’s now authenticated within the session referenced by the JSESSIONID sent by Alice
  6. Using the JSESSIONID sent in her own browser, Alice is able to operate the application with the same credentials as Bob

Wow! If the site is an online banking site, this is extremely serious, giving potential attackers access to your bank account. This issue is known as Session Fixation and is referenced by OWASP.

Though we can ask users not to click on links sent by email, that’s a demand for “aware” users, not everyone’s grandmother. We definitely need a more robust solution. The proposed remediation is quite easy to design: when the user switches from HTTP to HTTPS, he’s sent another JSESSIONID. Basically, his old session is destroyed, and a new one is created with all the attributes of the former.
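
Implementing this by hand boils down to migrating the session attributes into a fresh session; a minimal sketch with the Servlet API (method name illustrative), to be invoked at login time:

import java.util.Enumeration;
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpSession;

public static void migrateSession(HttpServletRequest request) {
    Map<String, Object> attributes = new HashMap<String, Object>();
    HttpSession oldSession = request.getSession(false);
    if (oldSession != null) {
        Enumeration<String> names = oldSession.getAttributeNames();
        while (names.hasMoreElements()) {
            String name = names.nextElement();
            attributes.put(name, oldSession.getAttribute(name));
        }
        oldSession.invalidate(); // destroys the session behind the old JSESSIONID
    }
    HttpSession newSession = request.getSession(true); // issues a fresh JSESSIONID
    for (Map.Entry<String, Object> entry : attributes.entrySet()) {
        newSession.setAttribute(entry.getKey(), entry.getValue());
    }
}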

However, if one is using Spring Security, this behavior is available out of the box through the SessionFixationProtectionStrategy class. Just plug it into the UsernamePasswordAuthenticationFilter:

<bean class="org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter">
  <property name="sessionAuthenticationStrategy">
    <bean class="org.springframework.security.web.authentication.session.SessionFixationProtectionStrategy"/>
  </property>
</bean>

Beware, most examples available on the Web only show usage of the strategy for session management!

Better yet, when using JavaConfig and the WebSecurityConfigurerAdapter, it is configured out of the box. Here’s an example of such a configuration, with the strategy already applied:

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    protected void configure(AuthenticationManagerBuilder amb) throws Exception {
        // Creates a simple principal in memory
        amb.inMemoryAuthentication().withUser("frankel").password("").roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
             // Requires user to have USER role
            .antMatchers("/hello").hasRole("USER")
            .and().requiresChannel()
            // Requires channel to be HTTPS
            .antMatchers("/hello").requiresSecure()
            // Don't forget a simple form login!
            .and().formLogin();
    }
}

Get the sample project – it is also a good template project for Spring MVC & Spring Security with JavaConfig – and check that the JSESSIONID cookie value changes.

Note: you’ll need an SSL-enabled servlet container.


Spring configuration modularization for Integration Testing

July 27th, 2014

Object-Oriented Programming advocates modularization in order to build small and reusable components. There are, however, other reasons for it. In the case of the Spring framework, modularization enables Integration Testing: the ability to test the system or parts of it, including assembly configuration.

Why is it so important to test the system assembled with the final configuration? Let’s take a simple example: the making of a car. Unit Testing the car would be akin to testing every nut and bolt separately, while Integration Testing it would be like driving the car on a circuit. By testing only the car’s components separately, selling the assembled car is a huge risk, as nothing guarantees it will behave correctly in real-life conditions.

Now that we have asserted that Integration Testing is necessary to guarantee an adequate level of internal quality, it’s time to enable Integration Testing with the Spring framework. Integration Testing is based on the notion of the System Under Test (SUT). Defining the SUT means defining the boundaries between what is tested and its dependencies. In nearly all cases, test setup will require providing some kind of test double for each required dependency. Configuring those test doubles can only be achieved by modularizing the Spring configuration, so that they can replace dependent beans located outside the SUT.

Fig. 1 – Sample bean dependency diagram

Spring’s DI configuration comes in 3 different flavors: XML – the legacy way, autowiring, and the newest, JavaConfig. We’ll have a look at how modularization can be achieved for each flavor. Mixed-DI modularization can be deduced from each separate entry.

Autowiring

Autowiring is an easy way to assemble Spring applications. It is achieved through the use of either @Autowired or @Inject. Let’s cover autowiring quickly: as injection is implicit, there’s no easy way to modularize the configuration. Applications using autowiring will just have to migrate to another DI flavor to allow for Integration Testing.

XML

XML is the legacy way to inject dependencies, but is still in use. Consider the following monolithic XML configuration file (bean names and classes are representative of Fig. 1):

<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:jee="http://www.springframework.org/schema/jee">

  <!-- Datasource fetched from the application server through JNDI -->
  <jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource"/>

  <bean id="productRepository" class="ProductRepository">
    <constructor-arg ref="dataSource"/>
  </bean>
  <bean id="customerRepository" class="CustomerRepository">
    <constructor-arg ref="dataSource"/>
  </bean>
  <bean id="orderRepository" class="OrderRepository">
    <constructor-arg ref="dataSource"/>
  </bean>
  <bean id="orderService" class="OrderService">
    <constructor-arg ref="productRepository"/>
    <constructor-arg ref="customerRepository"/>
    <constructor-arg ref="orderRepository"/>
  </bean>
</beans>

At this point, Integration Testing orderService is not as easy as it should be. In particular, we need to:

  • Download the application server
  • Configure the server for the jdbc/MyDataSource data source
  • Deploy all classes to the server
  • Start the server
  • Stop the server after the test(s)

Of course, all the previous tasks have to be automated! Though not impossible, thanks to tools such as Arquillian, it’s contrary to the KISS principle. Overcoming this problem – and making our life (as well as test maintenance) easier in the process – requires tooling and design. On the tooling side, we’ll be using a local database. Usually, such a database is of the in-memory kind, e.g. H2. On the design side, this requires separating our beans by creating two different configuration fragments: one solely dedicated to the datasource to be faked, and the other for the beans constituting the SUT.

Then, we’ll use a Maven classpath trick: Maven puts the test classpath in front of the main classpath when executing tests. This way, files found in the test classpath “override” similarly-named files in the main classpath. Let’s create the two configuration fragments:

  • The “real” JNDI datasource, as in the monolithic configuration:
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:jee="http://www.springframework.org/schema/jee">
      <jee:jndi-lookup id="dataSource" jndi-name="jdbc/MyDataSource"/>
    </beans>
  • The fake datasource:
    <beans xmlns="http://www.springframework.org/schema/beans">
      <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource">
          <property name="driverClassName" value="org.h2.Driver"/>
          <property name="url" value="jdbc:h2:~/test"/>
          <property name="username" value="sa"/>
          <property name="maxActive" value="1"/>
      </bean>
    </beans>

    Note we are using a Tomcat datasource object; this requires the org.apache.tomcat:tomcat-jdbc:jar library on the test classpath. Also note the maxActive property. It reflects the maximum number of connections to the database. It is advised to always set it to 1 in test scenarios, so that connection-pool exhaustion bugs can be caught as early as possible.

The final layout is the following:

Fig. 2 – Project structure for Spring XML configuration Integration Testing

  1. JNDI datasource
  2. Other beans
  3. Fake datasource

The final main-config.xml file looks like this (fragment file names are illustrative):

<beans xmlns="http://www.springframework.org/schema/beans">
  <import resource="classpath:datasource-config.xml"/>
  <import resource="classpath:sut-config.xml"/>
</beans>

Such a structure is the basis for enabling Integration Testing.

JavaConfig

JavaConfig is the most recent way to configure Spring applications, bringing both compile-time safety (as autowiring) and explicit configuration (as XML).

The above datasource fragments can be “translated” into Java as follows:

  • The “real” JNDI datasource as in the monolithic configuration
    @Configuration
    public class DataSourceConfig {
    
        @Bean
        public DataSource dataSource() throws Exception {
            Context ctx = new InitialContext();
            return (DataSource) ctx.lookup("jdbc/MyDataSource");
        }
    }
  • The fake datasource:
    @Configuration
    public class FakeDataSourceConfig {
    
        @Bean
        public DataSource dataSource() {
            org.apache.tomcat.jdbc.pool.DataSource dataSource = new org.apache.tomcat.jdbc.pool.DataSource();
            dataSource.setDriverClassName("org.h2.Driver");
            dataSource.setUrl("jdbc:h2:~/test");
            dataSource.setUsername("sa");
            dataSource.setMaxActive(1);
            return dataSource;
        }
    }

However, there are two problems that appear when using JavaConfig.

  1. It’s not possible to use the same classpath trick with an import as with XML previously, as Java forbids having 2 (or more) classes with the same qualified name loaded by the same classloader (which is the case with Maven). Therefore, JavaConfig configuration fragments shouldn’t explicitly import other fragments but should leave the fragment-assembly responsibility to their users (application or tests) so that names can be different, e.g.:
    @ContextConfiguration(classes = {MainConfig.class, FakeDataSourceConfig.class})
    public class SimpleDataSourceIntegrationTest extends AbstractTestNGSpringContextTests {
    
        @Test
        public void should_check_something_useful() {
            // Test goes there
        }
    }
  2. The main configuration fragment uses the datasource bean from the other configuration fragment. This mandates that the former have a reference to the latter. This is obtained by using the @Autowired annotation (one of the few relevant usages of it).
    @Configuration
    public class MainConfig {
    
        @Autowired
        private DataSource dataSource;
    
        // Other beans go there. They can use dataSource!
    }

Summary

In this article, I showed how Integration Testing against a fake datasource can be achieved by modularizing a monolithic Spring configuration into different configuration fragments, either in XML or in JavaConfig.

However, the realm of Integration Testing – with Spring or without – is vast. Should you want to go further, I’ll hold a talk on Integration Testing at Agile Tour London on Oct. 24th and at Java Days Kiev on Oct. 17th-18th.

This article is an abridged version of a part of the Spring chapter of Integration Testing from the Trenches. Have a look at it, there’s even a sample free chapter!

