Archive

Posts Tagged ‘good practice’
  • A use-case for Spring component scan

    :page-liquid:
    :experimental:

    Regular readers of this blog know I’m a big proponent of the Spring framework, but I’m quite opinionated in the way it should be used. For example, I favor explicit object instantiation and explicit component wiring over self-annotated classes, component scanning and autowiring.

    == Concepts

    Though those concepts are used by many Spring developers, my experience has taught me they are not always fully understood. Some explanation is in order.

    === Self-annotated classes

    Self-annotated classes are classes which define, via annotations on the classes themselves, how they will be instantiated by Spring. @Component, @Controller, @Service and @Repository are the usual such annotations found in most Spring projects.

    [source,kotlin]
    ----
    @Component
    class MySelfAnnotatedClass {}
    ----

    The main disadvantage of self-annotated classes is the hard coupling between the class and the bean. For example, it's not possible to instantiate 2 singleton beans of the same class, as is possible with explicit creation. Worse, it also couples the class to the Spring framework itself.
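
    As an illustration, here's a minimal Java sketch of what explicit creation allows and self-annotation cannot express: two distinct singleton beans of the same class (MyClass and the configuration class name are illustrative).

    [source,java]
    ----
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    class TwoBeansConfig {

        // Two distinct singleton beans of the same plain, non-annotated class,
        // registered under the bean names "first" and "second"
        @Bean
        public MyClass first() {
            return new MyClass();
        }

        @Bean
        public MyClass second() {
            return new MyClass();
        }
    }
    ----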

    Note that @Configuration classes are also considered to be self-annotated.

    === Component scanning

    Self-annotated classes need to be listed and registered in the context. This can be done explicitly:

    [source,kotlin]
    ----
    @Configuration
    class MyConfig {

        @Bean
        fun myClass() = MySelfAnnotatedClass()
    }
    ----
    

    However, the most widespread option is to let the Spring framework search for every self-annotated class on the project classpath and register them according to the annotations.

    [source,kotlin]
    ----
    @Configuration
    @ComponentScan
    class MyConfig
    ----

    === Autowiring

    Some beans require dependencies in order to be initialized. Wiring the dependency into the dependent bean can be either:

    • explicit: it's the developer's responsibility to tell which beans will fulfill the dependency (as sketched below).
    • implicit (or auto): the Spring framework is responsible for providing the dependency. In order to do that, a single bean must be eligible.
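
    As an illustration of the explicit flavor, here's a minimal Java sketch (names are illustrative): the dependent bean's factory method names the dependency directly.

    [source,java]
    ----
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    class ExplicitWiringConfig {

        @Bean
        public Bean2 bean2() {
            return new Bean2();
        }

        @Bean
        public Bean1 bean1() {
            // Explicit wiring: the developer states which bean fulfills the
            // dependency; Spring returns the singleton for the bean2() call
            return new Bean1(bean2());
        }
    }
    ----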

    Regarding the second option, please re-read an old link:{% post_url 2013-11-03-my-case-against-autowiring %}[post^] of mine for its related problems.

    Sometimes, however, there’s no avoiding autowiring. When bean Bean1 defined in configuration fragment Config1 depends on bean Bean2 defined in fragment Config2, the only possible injection option is autowiring.

    [source,kotlin]
    ----
    @Configuration
    class Config2 {

        @Bean
        fun bean2() = Bean2()
    }

    @Configuration
    class Config1 {

        @Bean @Autowired
        fun bean1(bean2: Bean2) = Bean1(bean2)
    }
    ----
    

    In the above snippet, autowiring is used without self-annotated classes.

    == Reinventing the wheel

    This week, I found myself re-implementing Spring Boot’s actuator in a legacy non-Spring Boot application.

    The architecture is dead simple: the HTTP endpoint returns a Java object (or a list of them) serialized through the Jackson library. Every endpoint might return a different object, and each can be serialized using a custom serializer.
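
    For illustration, such a custom serializer might look like the following Java sketch, assuming a hypothetical Foo entity with a name property:

    [source,java]
    ----
    import java.io.IOException;

    import com.fasterxml.jackson.core.JsonGenerator;
    import com.fasterxml.jackson.databind.SerializerProvider;
    import com.fasterxml.jackson.databind.ser.std.StdSerializer;

    public class FooSerializer extends StdSerializer<Foo> {

        public FooSerializer() {
            super(Foo.class);
        }

        @Override
        public void serialize(Foo value, JsonGenerator gen, SerializerProvider provider)
                throws IOException {
            // Write a custom JSON representation of Foo
            gen.writeStartObject();
            gen.writeStringField("name", value.getName());
            gen.writeEndObject();
        }
    }
    ----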

    I’ve organized the project in a package-per-endpoint way (as opposed to package-per-layer), and have already provided several endpoints. I’d like people to contribute other endpoints, and I want it to be as simple as possible. In particular, they should only:

    . Declare controllers
    . Declare configuration classes
    . Instantiate Jackson serializers

    The rest should be taken care of by generic code written by me.

    == The right usage of autowiring

    Controllers and configuration classes are easily taken care of by using @ComponentScan on the main configuration class located in the project’s main package. But what about serializers?

    Spring is able to collect all beans of a specific class registered into a context into a list. That means that every package can declare its serializers independently, and common code can take care of the registration:

    [source,kotlin]
    ----
    @Configuration
    @EnableWebMvc
    @ComponentScan
    class WebConfiguration : WebMvcConfigurerAdapter() {

        @Autowired
        private lateinit var serializers: List<StdSerializer<*>> // <1>

        override fun configureMessageConverters(converters: MutableList<HttpMessageConverter<*>>) {
            converters.add(MappingJackson2HttpMessageConverter().apply {
                objectMapper.registerModule(SimpleModule().apply {
                    serializers.forEach { addSerializer(it) }
                })
            })
        }
    }
    ----
    

    <1> Magic happens here. This configuration class has already been written, and new packages don't need to do anything: their serializers will automatically be part of the list.

    Here’s an example of such a configuration class, declaring a serializer bean:

    [source,kotlin]
    ----
    @Configuration
    class FooConfiguration {

        @Bean
        fun fooSerializer() = FooSerializer()
    }

    class FooSerializer : StdSerializer<Foo>(Foo::class.java) {
        ...
    }
    ----

    Even better, if packages need to be modularized further into full-fledged JARs, this setup will work in the exact same way without any change.

    == Conclusion

    A better understanding of self-annotated classes, component-scanning and autowiring is beneficial to all Spring developers.

    Moreover, while these mechanisms have a lot of drawbacks when applied to "standard" bean classes, using them within the scope of configuration classes is not only perfectly acceptable but even an advantage. In projects designed in a package-by-feature way, it improves modularization and decoupling.

    Categories: Development Tags: Spring, autowiring, component scan, good practice
  • Making sure inter-teams communication doesn't work

    The 3 monkeys

    :page-liquid:
    :experimental:

    +++{% twitter https://twitter.com/nicolas_frankel/status/781051073414979584 %}+++

    This post explains the reasons behind this tweet, sent on the spur of the moment.

    Let's face it, most developers - myself included - prefer to deal with code rather than with people. However, when communicating with other developers, we share a common context and understanding, which makes a lot of explanations unnecessary. It's the same for all specialized jobs, such as lawyers or civil engineers.

    Now, if I have to communicate with a developer (or other technical people) from another team, I'd approach him directly and have a chat. Sometimes, managers need to be involved, for they have to be in the know and/or take decisions. In that case, I'd organize a meeting with this dev and both our managers: we would chat, come to a common understanding, then explain the meat of the matter to the managers and let them decide (and perhaps influence them a little…).

    It would be crazy to spend a lot of time explaining the core matter to my non-technical manager, let him deal with the other manager without my presence, and finally let this other manager handle the issue with his team. Any child who has played https://en.wikipedia.org/wiki/Chinese_whispers[Chinese whispers^] knows that multiple layers of intermediaries passing a message result in a garbled mess. If a simple message that every layer can understand gets distorted, what would be the result when it becomes complex? And when not every player can understand it perfectly?

    I’ve dealt with this kind of “process” early in my career - a lot. I assume it was for 2 reasons:

    Silo'ed structures::
    Because companies are generally organized in columns (Chief Financial Officer, Chief Marketing Officer, Chief Technology Officer, Chief Human Resource Officer, etc.), communication is mainly vertical - from top to bottom (hardly ever reversed). As objectives are completely different, trusting another team is quite hard. So, this is a way to keep things in control… or at least to pretend to do so.
    Middle managers::
    Cohorts of middle managers were required in factories to handle assembly line workers, and most traditional companies mirror that organization perfectly. In tech companies, the workforce is composed of educated developers: the need for middle managers is nowhere near as great. Hence, they must prove their "value" to the organization or be exposed as useless. Setting themselves up as necessary intermediaries has become a survival strategy.

    I was naively hoping that with all the Agile projects around, this kind of behavior would have disappeared. So imagine my surprise when it happened to me this week. After having explained to my manager the above point - about communication efficiency - I was gently (but firmly) told that it was not in his power to change that. I pushed back again, pointing out that if nobody does anything, things will never improve, but without results.

    If you find yourself in this situation and have no way of improving the state of things, I'd suggest you vote with your feet. In my case, I'm near the end of my collaboration with this specific team, so my problem will resolve itself soon enough. I'm afraid the "process" is too deeply ingrained to change in the near future - if ever…

    Categories: Development Tags: communication, good practice
  • Should tests be ordered or not?

    Everything ordered and tidy

    :imagesdir: /assets/resources/should-tests-be-ordered-or-not/

    // Image by Karsten Seiferlin licensed under CC https://www.flickr.com/photos/timecaptured/14471140911

    Most of our day-to-day job is learned through mentorship and experience and not based upon scientific research. Once a dogma has permeated a significant minority of practitioners, it becomes very hard to challenge it.

    Yet, in this post, I'll attempt not only to challenge the dogma that tests should never be ordered, but to prove that sometimes they must be, through different use-cases.

    == Your tests shall not be ordered (shall they?)

    Some of my conference talks are more or less related to testing, and I never fail to point out that TestNG is superior to JUnit, if only because it allows for test method ordering. At the end of the talk, I'm then regularly asked why method ordering matters. It's a widespread belief that tests shouldn't be ordered. Here are some samples found here and there:

    [quote,JUnit Wiki - Test execution order,https://github.com/junit-team/junit4/wiki/test-execution-order]
    Of course, well-written test code would not assume any order, but some do.

    [quote,JUnit FAQ - How do I use a test fixture?,http://junit.org/junit4/faq.html#atests_2]
    ____
    Each test runs in its own test fixture to isolate tests from the changes made by other tests. That is, tests don't share the state of objects in the test fixture. Because the tests are isolated, they can be run in any order.
    ____

    [quote,Writing Great Unit Tests: Best and Worst Practices,http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises]
    You've definitely taken a wrong turn if you have to run your tests in a specific order […]

    [quote,Top 12 Selected Unit Testing Best Practices,http://www.sw-engineering-candies.com/blog-1/unit-testing-best-practices#TOC-Always-Write-Isolated-Test-Cases]
    ____
    Always Write Isolated Test Cases

    The order of execution has to be independent between test cases. This gives you the chance to rearrange the test cases in clusters (e.g. short-, long-running) and retest single test cases.
    ____

    And this goes on _ad nauseam_.

    In most cases, this makes perfect sense. If I'm testing an add(int, int) method, there's no reason why one test case should run before another (see the sketch below). However, this is hardly a one-size-fits-all rule. The following use-cases take advantage of test ordering.
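
    As an illustration of the common case, here's a minimal sketch using a hypothetical Adder class with a pure static add(int, int) method: each test case stands alone, and shuffling them changes nothing.

    [source,java]
    ----
    import static org.assertj.core.api.Assertions.assertThat;

    import org.testng.annotations.Test;

    public class AdderTest {

        // Each test case stands alone: running them in any order
        // yields the same result
        @Test
        public void two_plus_three_should_be_five() {
            assertThat(Adder.add(2, 3)).isEqualTo(5);
        }

        @Test
        public void minus_one_plus_one_should_be_zero() {
            assertThat(Adder.add(-1, 1)).isEqualTo(0);
        }
    }
    ----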

    == Tests should fail for a single reason

    Let’s start with a simple example: the code consists of a controller that stores a list of x Foo entities in the HTTP request under the key bar.
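
    For reference, here's a minimal sketch of the code under test, inferred from the description above (the constructor wiring and the fetchFoos() helper are assumptions):

    [source,java]
    ----
    import java.util.Arrays;
    import java.util.List;

    import javax.servlet.http.HttpServletRequest;

    public class Controller {

        private final HttpServletRequest request;

        public Controller(HttpServletRequest request) {
            this.request = request;
        }

        public void doStuff() {
            // Stores a list of Foo entities in the request under the key "bar"
            request.setAttribute("bar", fetchFoos());
        }

        private List<Foo> fetchFoos() {
            // Illustrative only: returns the x Foo entities mentioned in the text
            return Arrays.asList(new Foo(), new Foo());
        }
    }
    ----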

    === The naive approach

    The first approach would be to create a test method that asserts the following:

    . a value is stored under the key bar in the request
    . the value is of type List
    . the list is not empty
    . the list has size x
    . the list contains no null entities
    . the list contains only Foo entities

    Using AssertJ, the code looks like the following:

    [source,java]
    ----
    // 1: asserts can be chained through the API
    // 2: AssertJ features can make the code less verbose
    @Test
    public void should_store_list_of_x_Foo_in_request_under_bar_key() {
        controller.doStuff();
        Object key = request.getAttribute("bar");
        assertThat(key).isNotNull();                // <1>
        assertThat(key).isInstanceOf(List.class);   // <2>
        List list = (List) key;
        assertThat(list).isNotEmpty();              // <3>
        assertThat(list).hasSize(x);                // <4>
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();             // <5>
            assertThat(it).isInstanceOf(Foo.class); // <6>
        });
    }
    ----

    If this test method fails, the cause can be found in any of the six steps. A cursory glance at the failure report is not enough to tell exactly which one.

    image::single-test-result.png[Single test result,541,52,align="center"]

    To know that, one has to analyze the stack trace then the source code.


    ----
    java.lang.AssertionError: Expecting actual not to be null

        at ControllerTest.should_store_list_of_x_Foo_in_request_under_bar_key(ControllerTest.java:31)
    ----
    

    === A test method per assertion

    An alternative could be to refactor each assertion into its own test method:

    [source,java]
    ----
    @Test
    public void bar_should_not_be_null() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isNotNull();
    }

    @Test
    public void bar_should_of_type_list() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isInstanceOf(List.class);
    }

    @Test
    public void list_should_not_be_empty() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).isNotEmpty();
    }

    @Test
    public void list_should_be_of_size_x() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).hasSize(x);
    }

    @Test
    public void instances_should_be_of_type_foo() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();
            assertThat(it).isInstanceOf(Foo.class);
        });
    }
    ----

    Now, every failing test is correctly displayed. But if the bar attribute is not found in the request, every test will still run and still fail, whereas they should merely be skipped.

    image::unordered-test-results.png[Unordered test results,385,136,align="center"]

    Even if the waste is small, it still takes time to run unnecessary tests. Worse, it's a waste of time to analyze the cause of the failure.

    === A private method per assertion

    It seems ordering the tests makes sense. But ordering is bad, right? Let’s try to abide by the rule, by having a single test calling private methods:

    [source,java]
    ----
    @Test
    public void should_store_list_of_x_Foo_in_request_under_bar_key() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        bar_should_not_be_null(bar);
        bar_should_of_type_list(bar);
        List<?> list = (List) bar;
        list_should_not_be_empty(list);
        list_should_be_of_size_x(list);
        instances_should_be_of_type_foo(list);
    }

    private void bar_should_not_be_null(Object bar) {
        assertThat(bar).isNotNull();
    }

    private void bar_should_of_type_list(Object bar) {
        assertThat(bar).isInstanceOf(List.class);
    }

    private void list_should_not_be_empty(List<?> list) {
        assertThat(list).isNotEmpty();
    }

    private void list_should_be_of_size_x(List<?> list) {
        assertThat(list).hasSize(x);
    }

    private void instances_should_be_of_type_foo(List<?> list) {
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();
            assertThat(it).isInstanceOf(Foo.class);
        });
    }
    ----

    Unfortunately, it's back to square one: it's not possible to know at a glance in which step the test failed.

    image::single-test-result.png[Single test result,541,52,align="center"]

    At least the stack trace conveys a little more information:


    ----
    java.lang.AssertionError: Expecting actual not to be null

        at ControllerTest.bar_should_not_be_null(ControllerTest.java:40)
        at ControllerTest.should_store_list_of_x_Foo_in_request_under_bar_key(ControllerTest.java:31)
    ----
    

    How to skip unnecessary tests, and easily know the exact reason for the failure?

    === Ordering it is

    Like it or not, there’s no way to achieve skipping and easy analysis without ordering:

    [source,java]
    ----
    // Ordering is achieved using TestNG
    @Test
    public void bar_should_not_be_null() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isNotNull();
    }

    @Test(dependsOnMethods = "bar_should_not_be_null")
    public void bar_should_of_type_list() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        assertThat(bar).isInstanceOf(List.class);
    }

    @Test(dependsOnMethods = "bar_should_of_type_list")
    public void list_should_not_be_empty() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).isNotEmpty();
    }

    @Test(dependsOnMethods = "list_should_not_be_empty")
    public void list_should_be_of_size_x() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        assertThat(list).hasSize(x);
    }

    @Test(dependsOnMethods = "list_should_be_of_size_x")
    public void instances_should_be_of_type_foo() {
        controller.doStuff();
        Object bar = request.getAttribute("bar");
        List<?> list = (List) bar;
        list.stream().forEach((Object it) -> {
            assertThat(it).isNotNull();
            assertThat(it).isInstanceOf(Foo.class);
        });
    }
    ----

    The result is the following:

    image::ordered-test-results.png[Ordered test results,431,202,align="center"]

    Of course, the same result is achieved when the test is run with Maven:


    ----
    Tests run: 5, Failures: 1, Errors: 0, Skipped: 4, Time elapsed: 0.52 sec <<< FAILURE!
    bar_should_not_be_null(ControllerTest)  Time elapsed: 0.037 sec  <<< FAILURE!
    java.lang.AssertionError: Expecting actual not to be null
        at ControllerTest.bar_should_not_be_null(ControllerTest.java:31)

    Results :

    Failed tests:
      ControllerTest.bar_should_not_be_null:31 Expecting actual not to be null

    Tests run: 5, Failures: 1, Errors: 0, Skipped: 4
    ----

    In this case, by ordering unit test methods, one can achieve both an optimization of testing time and faster failure analysis, by skipping tests that are bound to fail anyway.

    == Unit testing and Integration testing

    In my talks about Integration testing, I usually use the example of a prototype car. Unit testing is akin to testing every nut and bolt of the car, while Integration testing is like taking the prototype on a test drive.

    No project manager would take the risk of sending the car on a test drive without having made sure its pieces are of good enough quality. It would be too expensive to fail just because of a faulty screw; test drives are supposed to validate higher-level concerns, those that cannot be checked by Unit testing.

    Hence, unit tests should be run first, and integration tests only afterwards. With Maven, one can rely on the https://maven.apache.org/surefire/maven-failsafe-plugin/[Maven Failsafe plugin^] to run Integration tests later in the Maven lifecycle.

    == Integration testing scenarios

    What might be seen as a corner case in unit testing is widespread in integration tests, and even more so in end-to-end tests. In the latter case, an example I regularly use is the e-commerce application. The steps of a typical scenario are as follows:

    . Browse the product catalog
    . Put one product in the cart
    . Display the summary page
    . Fill in the delivery address
    . Choose a payment type
    . Enter payment details
    . Get order confirmation

    In a context with no ordering, this has several consequences:

    • Step X+1 is dependent on step X, e.g. to enter payment details, one must have chosen a payment type first, requiring that the latter works.
    • Steps X+2 and X+1 both need to set up step X. This leads either to code duplication - as setup code is copy-pasted in all required steps - or to common setup code, which increases maintenance cost (yes, sharing is caring but it's also more expensive).
    • The initial state of step X+1 is the final state of step X, i.e. at the end of testing step X, the system is ready to start testing step X+1.
    • Trying to test step X+n if step X already failed is time wasted, both in terms of server execution time and of failure analysis time. Of course, the higher n, the more waste. A sketch of such an ordered scenario follows below.
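
    Here's a minimal TestNG sketch of such an ordered scenario (class and method names are illustrative, bodies elided):

    [source,java]
    ----
    import org.testng.annotations.Test;

    public class CheckoutScenarioIT {

        @Test
        public void browse_product_catalog() { /* ... */ }

        // A failure in any step marks every dependent step as skipped
        @Test(dependsOnMethods = "browse_product_catalog")
        public void put_product_in_cart() { /* ... */ }

        @Test(dependsOnMethods = "put_product_in_cart")
        public void display_summary_page() { /* ... */ }

        @Test(dependsOnMethods = "display_summary_page")
        public void fill_in_delivery_address() { /* ... */ }

        // ...and so on, down to the order confirmation
    }
    ----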

    This is very similar to the section above about unit test order. Given this, I have no doubt that ordering the steps of an integration testing scenario is not a bad practice but good judgement.

    == Conclusion

    As in many cases in software development, a rule has to be contextualized. While in general, it makes no sense to have ordering between tests, there are more than a few cases where it does.

    Software development is hard because the "real" stuff is not learned by sitting on university benches but through repeated practice and experimentation under the tutorship of more senior developers. If enough more-senior-than-you devs hold the same opinion on a subject, chances are you'll take it for granted as well. At some point, though, one should single out such an opinion and challenge it, to check whether it's right or not in one's own context.

  • Are you guilty of over-engineering?

    Construction site

    :imagesdir: /assets/resources/are-you-guilty-of-overengineering/

    If you listen to other language communities - such as Python or Ruby - it seems Java developers have a strong tendency toward over-engineering.

    Perhaps they're just jealous of our superior platform (wink), perhaps there is some slight reason for them to believe so. I do believe so. And it's quite interesting that I realized it by doing code reviews - while I may be guilty of over-engineering myself when writing code. But I'm working on it.

    Of course, you're a "simple" developer, and only architects have the power to design, so only they can be guilty of over-engineering, can't they? I'm afraid that's not the case: developers do it all the time. In this article, I'll focus on one symptom that I commented on a lot in my reviews, but the point can be extended to many others.

    I’ve been taught early in my career to design my class hierarchy to be as extensible as possible. It translates into a parent interface, implemented by a concrete class. Sometimes, there might even be an abstract class in between. It looks like the following diagram:

    image::class-diagram.png[Class diagram,229,367,align="center"]

    • It looks great
    • It makes for an extensible hierarchy
    • And it realizes the https://stackoverflow.com/questions/1992384/program-to-an-interface-what-does-it-mean[programming to interface^] paradigm

    For example, the Spring framework uses this design a lot, throughout its packages. One great example is the http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/web/servlet/ViewResolver.html[ViewResolver^] interface, which has a rich children hierarchy and a lot of different implementing classes e.g. InternalResourceViewResolver and VelocityLayoutViewResolver among others. Other examples abound (bean registry, context, etc.) in different parts of the framework.

    However, please note two quite important facts about Spring:

    . There are a lot of different children classes organized within a structured hierarchy
    . It's a framework, and it's meant to be open for extension and closed to modification by definition

    Back to our current project - let's say a regular web app. Locate one of the defined interfaces. From this point, it's quite easy to find its children implementations; in most cases, there's only one, and it's prefixed with Default (or alternatively suffixed with Impl). Also, most of the time, the interface resides in the x.y.z package and the implementation in x.y.z.impl. One might wonder about such an utter lack of imagination. Well, coming up with a relevant name for the implementation is hard, because there is no semantic difference between it and the interface, i.e. the former doesn't specialize the latter. The only possible conclusion is that the interface is not necessary…
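
    To make it concrete, the criticized pattern typically looks like the following sketch (all names are illustrative):

    [source,java]
    ----
    // In package x.y.z
    public interface CustomerService {
        Customer findById(long id);
    }

    // In package x.y.z.impl - a name that adds no semantics over the interface
    public class DefaultCustomerService implements CustomerService {

        @Override
        public Customer findById(long id) {
            // The one and only implementation in the whole code base
            return new Customer(id);
        }
    }
    ----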

    Some architects might argue that even if useless now, the abstraction might become useful later. Yet, adding unnecessary code only because it might become useful in the future just decreases the signal-to-noise ratio of the code base - it hinders readability for no reason. Moreover, this extra code needs to be created, tested and maintained - hence increased costs and waste. It's also opposite to the Agile way of doing "just enough" to achieve the goal. Finally, if (and only if) the need arises, introducing a parent interface is quite easy, just by changing the code of the app. It's a no-brainer, thanks to the refactoring features of any +++IDE+++ worthy of the name.

    I would qualify the above approach as over-engineering if one designs it on purpose to prepare for a hypothetical future, but as https://en.wikipedia.org/wiki/Cargo_cult[Cargo Cult^] if one does it just because it has always been done like this.

    Useless interfaces are but one simple example of over-engineering. There are many more. I'd be happy to read about the ones you've found (or practiced yourself).

  • Immutable data structures in Java

    Before being software developers, we are people - and thus creatures of habit. It's hard for someone to change one's own habits; it's harder to change someone else's habits - and for some of us, it's even harder.

    This week, during a code review, I stumbled upon this kind of structure:

    [source,java]
    ----
    public class MyStructure {

        private String myProp1;
        private String myProp2;
        // A bunch of other String properties

        public MyStructure(String myProp1, String myProp2 /* All other properties here */) {
            this.myProp1 = myProp1;
            this.myProp2 = myProp2;
            // All other properties set there
        }

        public String getMyProp1() { ... }
        public String getMyProp2() { ... }
        // All other getters

        public void setMyProp1(String myProp1) { ... }
        public void setMyProp2(String myProp2) { ... }
        // All other setters
    }
    ----
    

    Note: it looks like a JavaBean, but it's not one, because there's no no-argument constructor.

    Looking at the code, I noticed the setters are never used, making this a nice use-case for an immutable data structure - and saving a good number of lines of code:

    [source,java]
    ----
    public class MyStructure {

        private final String myProp1;
        private final String myProp2;
        // A bunch of other String properties

        public MyStructure(String myProp1, String myProp2 /* All other properties here */) {
            this.myProp1 = myProp1;
            this.myProp2 = myProp2;
            // All other properties set there
        }

        public String getMyProp1() { ... }
        public String getMyProp2() { ... }
        // All other getters
    }
    ----
    

    At this point, one realizes that Strings are themselves immutable, which leads to a second proposal, saving even more lines of code:

    [source,java]
    ----
    public class MyStructure {

        public final String myProp1;
        public final String myProp2;
        // A bunch of other String properties

        public MyStructure(String myProp1, String myProp2 /* All other properties here */) {
            this.myProp1 = myProp1;
            this.myProp2 = myProp2;
            // All other properties set there
        }
    }
    ----
    

    Given that the attributes are final and that Java Strings are immutable, the class is still safe against unwanted changes. Note that this works only because String is immutable by definition in Java. With a Date property, it wouldn't work, as Date is mutable.
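
    A minimal sketch of the pitfall, using a hypothetical MyDateStructure: final only prevents reassigning the reference, not mutating the object behind it.

    [source,java]
    ----
    import java.util.Date;

    public class MyDateStructure {

        // The reference is final, but the Date object itself is mutable
        public final Date myDate;

        public MyDateStructure(Date myDate) {
            this.myDate = myDate;
        }

        public static void main(String[] args) {
            MyDateStructure structure = new MyDateStructure(new Date());
            // Compiles and runs: the supposedly immutable structure just changed
            structure.myDate.setTime(0L);
        }
    }
    ----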

    The same can be done with stateless services whose embedded services need to be accessed from child classes. There's no need for a getter:

    [source,java]
    ----
    public class MyService {

        // Can be accessed from children classes
        protected final EmbeddedService anotherService;

        public MyService(EmbeddedService anotherService) {
            this.anotherService = anotherService;
        }
    }
    ----
    

    Note that this approach is 100% compatible with Dependency Injection, be it Spring or CDI.
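
    For example, a minimal sketch with Spring's explicit Java configuration (the configuration class name is illustrative): the container passes the dependency through the constructor, so the final field needs no setter.

    [source,java]
    ----
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    class ServiceConfiguration {

        @Bean
        public EmbeddedService embeddedService() {
            return new EmbeddedService();
        }

        @Bean
        public MyService myService() {
            // Constructor injection plays well with final fields
            return new MyService(embeddedService());
        }
    }
    ----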

    Now, you cannot imagine the amount of back and forth comments this simple review caused. Why? Because even if that makes sense from a coding point of view, it’s completely different from what we usually do.

    In that case, laziness and IDEs don't serve us well. The latter make it too easy to create accessors. I'm pretty sure that if we had to code getters and setters by hand, the above proposals would find much more favor.

    This post could easily have been titled “Don’t let habits get the best of you”. The lesson here is to regularly challenge how you code, even for simple easy stuff. There might be better alternatives after all.

    Categories: Java Tags: good practice
  • Throwing a NullPointerException... or not

    This week, I’ve lived again an experience from a few years ago, but in the opposite seat.

    As a software architect/team leader/technical lead (select the term you're most comfortable with), I was doing code reviews on a project we were working on and stumbled upon code that looked like this:

    [source,java]
    ----
    public void someMethod(Type parameter) {
        if (parameter == null) {
            throw new NullPointerException("Parameter Type cannot be null");
        }
        // Rest of the method
    }
    ----
    

    I was horribly shocked! Applicative code throwing a NullPointerException - that was a big coding mistake. So I gently pointed out to the developer that it was a bad idea, and that I'd like him to throw an IllegalArgumentException instead, which was exactly the exception type matching the use-case. This was very clear in my head, until the dev pointed me to the NullPointerException http://docs.oracle.com/javase/6/docs/api/java/lang/NullPointerException.html[Javadoc^]. For simplicity's sake, here's the juicy part:

    [quote]
    ____
    Thrown when an application attempts to use null in a case where an object is required. These include:

    • Calling the instance method of a null object.
    • Accessing or modifying the field of a null object.
    • Taking the length of null as if it were an array.
    • Accessing or modifying the slots of null as if it were an array.
    • Throwing null as if it were a Throwable value.

    Applications should throw instances of this class to indicate other illegal uses of the null object.
    ____

    Read the last sentence again: “Applications should throw instances of this class to indicate other illegal uses of the null object”. It seems to be legal for an application to throw an NPE, even recommended by the Javadocs.
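
    As a side note, the JDK itself embraced this reading: since Java 7, java.util.Objects.requireNonNull performs exactly this check and throws a NullPointerException, so the snippet above could be shortened to:

    [source,java]
    ----
    import java.util.Objects;

    public void someMethod(Type parameter) {
        // Throws NullPointerException with the given message if parameter is null
        Objects.requireNonNull(parameter, "Parameter Type cannot be null");
        // Rest of the method
    }
    ----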

    In my previous case, I read it again and again… And ruled that yeah, it was OK for an application to throw an NPE. Back to today: a dev reviewed the pull request from another one, found out it threw an NPE and ruled it was bad practice. I took the side of the dev who wrote the code and said it was OK.

    What would be your ruling and more importantly, why?

    Note: whatever the decision, this is not a big deal and probably not worth the time you spend bickering about it 😉