Archive

Posts Tagged ‘design’
  • Coping with stringly-typed

    UPDATED on March 13, 2017: Add Builder pattern section

Most developers have strong opinions regarding whether a language should be strongly-typed or weakly-typed, whatever notions they put behind those terms. Some also actively practice stringly-typed programming - mostly without even being aware of it. It happens when most attributes and parameters of a codebase are String. In this post, I will make use of the following simple snippet as an example:

    public class Person {
    
        private final String title;
        private final String givenName;
        private final String familyName;
        private final String email;
      
        public Person(String title, String givenName, String familyName, String email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
        ...
    }
    

    The original sin

    The problem with that code is that it’s hard to remember which parameter represents what and in which order they should be passed to the constructor.

    Person person = new Person("john.doe@example.com", "John", "Doe", "Sir");
    

    In the previous call, the email and the title parameter values were switched. Ooops.

    This is even worse if more than one constructor is available, offering optional parameters:

    public Person(String givenName, String familyName, String email) {
        this(null, givenName, familyName, email);
    }
    
    Person another = new Person("Sir", "John", "Doe");
    

    In that case, title was the optional parameter, not email. My bad.

    Solving the problem the OOP way

    Object-Oriented Programming and its advocates have a strong aversion to stringly-typed code for good reasons. Since everything in the world has a specific type, so must it be in the system.

    Let’s rewrite the previous code à la OOP:

    public class Title {
        private final String value;
        public Title(String value) {
        	this.value = value;
        }
    }
    
    public class GivenName {
        private final String value;
        public GivenName(String value) {
        	this.value = value;
        }
    }
    
    public class FamilyName {
        private final String value;
        public FamilyName(String value) {
        	this.value = value;
        }
    }
    
    public class Email {
        private final String value;
        public Email(String value) {
        	this.value = value;
        }
    }
    
    public class Person {
    
        private final Title title;
        private final GivenName givenName;
        private final FamilyName familyName;
        private final Email email;
      
        public Person(Title title, GivenName givenName, FamilyName familyName, Email email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
        ...
    }
    
    
    Person person = new Person(new Title(null), new GivenName("John"), new FamilyName("Doe"), new Email("john.doe@example.com"));
    

    That way drastically limits the possibility of mistakes. The drawback is a large increase in verbosity - which might lead to other bugs.

    Pattern to the rescue

    A common way to tackle this issue in Java is to use the Builder pattern. Let’s introduce a new builder class and rework the code:

    public class Person {
    
        private String title;
        private String givenName;
        private String familyName;
        private String email;
    
        private Person() {}
    
        private void setTitle(String title) {
            this.title = title;
        }
    
        private void setGivenName(String givenName) {
            this.givenName = givenName;
        }
    
        private void setFamilyName(String familyName) {
            this.familyName = familyName;
        }
    
        private void setEmail(String email) {
            this.email = email;
        }
    
        public static class Builder {
    
            private Person person;
    
            public Builder() {
                person = new Person();
            }
    
            public Builder title(String title) {
                person.setTitle(title);
                return this;
            }
    
            public Builder givenName(String givenName) {
                person.setGivenName(givenName);
                return this;
            }
    
            public Builder familyName(String familyName) {
                person.setFamilyName(familyName);
                return this;
            }
    
            public Builder email(String email) {
                person.setEmail(email);
                return this;
            }
    
            public Person build() {
                return person;
            }
        }
    }
    

    Note that in addition to the new builder class, the constructor of the Person class has been made private. Thanks to Java's nesting rules, only the Builder can create new Person instances. The same applies to the private setters.

    Using this pattern is quite straightforward:

    Person person = new Person.Builder()
                   .title("Sir")
                   .givenName("John")
                   .familyName("Doe")
                   .email("john.doe@example.com")
                   .build();
    

    The Builder pattern shifts the verbosity from the calling part to the design part. Not a bad trade-off.

    Languages to the rescue

    Verbosity is unfortunately the mark of Java. Some other languages (Kotlin, Scala, etc.) would be much more friendly to this approach, not only for class declarations, but also for object creation.

    Let’s port class declarations to Kotlin:

    class Title(val value: String?)
    class GivenName(val value: String)
    class FamilyName(val value: String)
    class Email(val value: String)
    
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    

    This is much better, thanks to Kotlin! And now object creation:

    val person = Person(Title(null), GivenName("John"), FamilyName("Doe"), Email("john.doe@example.com"))
    

    For this, verbosity is only marginally decreased compared to Java.

    Named parameters to the rescue

    OOP fanatics may stop reading here, for their way is not the only one to cope with stringly-typed code.

    One alternative relies on named parameters, a feature incidentally also found in Kotlin. Let's get back to the original stringly-typed code, port it to Kotlin, and use named parameters:

    class Person(val title: String?, val givenName: String, val familyName: String, val email: String)
    
    val person = Person(title = null, givenName = "John", familyName = "Doe", email = "john.doe@example.com")
    
    val another = Person(email = "john.doe@example.com", title = "Sir", givenName = "John", familyName = "Doe")
    

    A benefit of named parameters, besides coping with stringly-typed code, is that they are order-agnostic when invoking the constructor. Plus, they also play nice with default values:

    class Person(val title: String? = null, val givenName: String, val familyName: String, val email: String? = null)
    
    val person = Person(givenName = "John", familyName = "Doe")
    val another = Person(title = "Sir", givenName = "John", familyName = "Doe")
    

    Type aliases to the rescue

    While we're looking at Kotlin, let's describe a feature released in 1.1 that might help.

    A type alias is, as its name implies, a name for an existing type; the type can be a simple type, a collection, a lambda - whatever exists within the type system.

    Let’s create some type aliases in the stringly-typed world:

    typealias Title = String?
    typealias GivenName = String
    typealias FamilyName = String
    typealias Email = String
    
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    
    val person = Person(null, "John", "Doe", "john.doe@example.com")
    

    The declaration looks more typed. Unfortunately, object creation doesn't bring any improvement.

    Note that the main problem of type aliases is that they are just that - aliases: no new type is created, so if two aliases point to the same type, all three (the two aliases and the original type) are interchangeable with one another.

    Libraries to the rescue

    For the rest of this post, let’s go back to the Java language.

    Twisting the logic a bit, parameters can be validated at runtime instead of at compile time with the help of specific libraries. In particular, the Bean Validation library does the job:

    public Person(@Title String title, @GivenName String givenName, @FamilyName String familyName, @Email String email) {
        this.title = title;
        this.givenName = givenName;
        this.familyName = familyName;
        this.email = email;
    }
    

    Admittedly, it’s not the best solution… but it works.
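    For comparison, the same runtime check can be enforced by hand, without any library. The Email wrapper below is a hypothetical sketch (the regular expression is deliberately simplistic) that fails fast in its constructor:

    ```java
    // Hypothetical tiny value type: rejects invalid input at construction
    // time instead of relying on annotation-based validation.
    public class Email {

        private final String value;

        public Email(String value) {
            // Deliberately simplistic check, for illustration only
            if (value == null || !value.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+")) {
                throw new IllegalArgumentException("Not a valid email: " + value);
            }
            this.value = value;
        }

        public String getValue() {
            return value;
        }
    }
    ```

    The trade-off is the same as with Bean Validation: errors surface at runtime, not at compile time.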

    Tooling to the rescue

    I have already written about tooling, and that it's as important as (if not more than) the language itself.

    Tools fill gaps in languages while being non-intrusive. The downside is that everyone has to use the same tool (or find one with the same feature).

    For example, when I started my career, coding guidelines mandated that developers order methods alphabetically in the class file. Nowadays, that would be senseless, as every IDE worth its salt can display the methods of a class in order.

    Likewise, named parameters can be a feature of the IDE, for languages that lack them. In particular, the latest versions of IntelliJ IDEA emulate named parameters for the Java language for types that are deemed too generic. The following shows the Person class inside the IDE:

    Conclusion

    While proper OOP design is the historical way to cope with stringly-typed code, it's also quite verbose and unwieldy in Java. This post described alternatives, each with its specific pros and cons. Each needs to be evaluated in one's own specific context to decide which one is the best fit.

  • A use-case for local class declaration


    One of the first things one learns when starting with Java development is how to declare a class in its own file. Potential later stages include:

    But it doesn't stop there: the JLS is a trove full of surprises. I recently learned that classes can be declared inside any block, including methods. This is called a local class declaration (§14.3).

    A local class is a nested class (§8) that is not a member of any class and that has a name. All local classes are inner classes (§8.1.3). Every local class declaration statement is immediately contained by a block. Local class declaration statements may be intermixed freely with other kinds of statements in the block.

    The scope of a local class immediately enclosed by a block (§14.2) is the rest of the immediately enclosing block, including its own class declaration. The scope of a local class immediately enclosed by a switch block statement group (§14.11) is the rest of the immediately enclosing switch block statement group, including its own class declaration.

    Cool, isn't it? But using it just for the sake of it is not reason enough… until this week: I started to implement something like the Spring Boot Actuator in a non-Boot application, using Jackson to serialize the results.

    Jackson offers several ways to customize the serialization process. For objects that only require hiding fields or renaming them, and whose classes stand outside one's reach, it offers mixins. As an example, let's tweak serialization of the following class:

    public class Person {
    
        private final String firstName;
        private final String lastName;
        
        public Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
        
        public String getFirstName() {
            return firstName;
        }
        
        public String getLastName() {
            return lastName;
        }
    }
    

    Suppose the requirement is to have givenName and familyName attributes. In a regular Spring application, the mixin class should be registered during the configuration of message converters:

    public class WebConfiguration extends WebMvcConfigurerAdapter {
    
        @Override
        public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
            MappingJackson2HttpMessageConverter jackson2HttpMessageConverter = new MappingJackson2HttpMessageConverter();
            jackson2HttpMessageConverter.getObjectMapper().addMixIn(Person.class, PersonMixin.class);
            converters.add(jackson2HttpMessageConverter);
        }
    }
    

    Now, where does it make the most sense to declare this mixin class? The principle of declaring something in the smallest possible scope applies: having it in a dedicated file is obviously wrong, and even a private nested class is overkill. Hence, the most restricted scope is the method itself:

    public class WebConfiguration extends WebMvcConfigurerAdapter {
    
        @Override
        public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
            MappingJackson2HttpMessageConverter jackson2HttpMessageConverter = new MappingJackson2HttpMessageConverter();
            abstract class PersonMixin {
                @JsonProperty("givenName") abstract String getFirstName();
                @JsonProperty("familyName") abstract String getLastName();
            }
            jackson2HttpMessageConverter.getObjectMapper().addMixIn(Person.class, PersonMixin.class);
            converters.add(jackson2HttpMessageConverter);
        }
    }
    

    While this makes sense from a pure software-engineering point of view, there is a reason not to design code like this: the principle of least surprise. Unless every member of the team is aware of and comfortable with local classes, this feature shouldn't be used.

    Categories: Java Tags: class, method, design
  • Extension functions for more consistent APIs


    Kotlin’s extension functions are a great way to add behavior to a type sitting outside one’s control - the JDK or a third-party library.

    For example, the JDK’s String class offers the toLowerCase() and toUpperCase() methods but nothing to capitalize the string. In Kotlin, this can be helped by adding the desired behavior to the String class through an extension function:

    fun String.capitalize() = when {
        length < 2 -> toUpperCase()
        else -> substring(0, 1).toUpperCase() + substring(1).toLowerCase()
    }
    
    println("hello".capitalize())
    

    Extension function usage is not limited to external types, though. They can also improve one's own codebase, by handling null values more elegantly.

    This is a way one would define a class and a function in Kotlin:

    class Foo {
        fun foo() = println("foo")
    }
    

    Then, it can be used on non-nullable and nullable types respectively like this:

    val foo1 = Foo()
    val foo2: Foo? = Foo()
    foo1.foo()
    foo2?.foo()
    

    Notice that the compiler enforces the usage of the null-safe ?. operator on the nullable type instance to prevent a NullPointerException.

    Instead of defining the foo() method directly on the type, let's define it as an extension function, but with a twist: let's make the receiver type nullable in the function definition:

    class Foo
    
    fun Foo?.safeFoo() = println("null-safe foo")
    

    Usage is now slightly modified:

    val foo1 = Foo()
    val foo2: Foo? = Foo()
    val foo3: Foo? = null
    foo1.safeFoo()
    foo2.safeFoo()
    foo3.safeFoo()
    

    Whether the type is nullable or not, the calling syntax is consistent. Interestingly enough, the output is the following:

    null-safe foo
    null-safe foo
    null-safe foo
    

    Yes, it's possible to call a method on a null instance! Now, let's update the Foo class slightly:

    class Foo {
        val foo = 1
    }
    

    What if the safeFoo() extension function should print the foo property instead of a constant string?

    fun Foo?.safeFoo() = println(foo)
    

    The compiler complains:

    Only safe (?.) or non-null asserted (!!.) calls are allowed on a nullable receiver of type Foo?
    

    The extension function needs to be modified according to the compiler’s error:

    fun Foo?.safeFoo() = println(this?.foo)
    

    The output becomes:

    1
    1
    null
    

    Extension functions are a great way to make APIs more consistent and to handle null elegantly instead of pushing the burden onto caller code.

    Categories: Kotlin Tags: design, API, extension function
  • Encapsulation: I don't think it means what you think it means

    My post about immutability caused quite a stir and received plenty of comments, ranging from the daunting to the interesting, both on reddit and here.

    Comment types

    They can be more or less divided into these categories:

    • Let's not consider anything and not budge an inch - with no valid argument besides “it's terrible”
    • One thread wondered about the point of code review: to catch bugs or to share knowledge?
    • Rational counter-arguments that I'll be happy to debate in a future post
    • “It breaks encapsulation!” - this is the point I'd like to address in this post

    Encapsulation, really?

    I've already written about encapsulation, but if the wood is very hard, I guess it's natural to hammer the nail down several times.

    When I was younger and learning OOP, I was told about its characteristics:

    1. Inheritance
    2. Polymorphism
    3. Encapsulation

    This is the definition found on Wikipedia:

    Encapsulation is used to refer to one of two related but distinct notions, and sometimes to the combination thereof:
    • A language mechanism for restricting direct access to some of the object's components.
    • A language construct that facilitates the bundling of data with the methods (or other functions) operating on that data.
    -- Wikipedia

    In short, encapsulation means no direct access to an object's state, only through its methods. In Java, that directly translated into the JavaBeans conventions, with private properties and public accessors - getters and setters. This is the current sad state we are plagued with, and what many refer to when talking about encapsulation.

    For this kind of pattern is no encapsulation at all! Don’t believe me? Check this snippet:

    public class Person {
    
        private Date birthdate = new Date();
    
        public Date getBirthdate() {
            return birthdate;
        }
    }
    

    Given that there’s no setter, it shouldn’t be possible to change the date inside a Person instance. But it is:

    Person person = new Person();
    Date date = person.getBirthdate();
    date.setTime(0L);
    

    Ouch! State was not so well-encapsulated after all…

    It all boils down to one tiny little difference: we want to give access to the value of the birthdate, but we happily return the reference to the birthdate field, which holds the value. Let's change that to separate the value itself from the reference:

    public class Person {
    
        private Date birthdate = new Date();
    
        public Date getBirthdate() {
            return new Date(birthdate.getTime());
        }
    }
    

    By creating a new Date instance that shares nothing with the original reference, real encapsulation has been achieved. Now getBirthdate() is safe to call.

    Note that types that are immutable by nature - in Java, primitives, String, and classes designed to be so - are completely safe to share. Thus, it's perfectly acceptable to make fields of those types public and forget about getters.

    Note that injecting references, e.g. in the constructor, entails the exact same problem and should be treated the same way.

    public class Person {
    
        private Date birthdate;
    
        public Person(Date birthdate) {
            this.birthdate = new Date(birthdate.getTime());
        }
    
        public Date getBirthdate() {
            return new Date(birthdate.getTime());
        }
    }
    

    The problem is that most people who religiously invoke encapsulation blissfully share their field references with the outside world.

    Conclusions

    There are a couple of conclusions here:

    • If you have mutable fields, simple getters such as those generated by IDEs provide no encapsulation.
    • True encapsulation can only be achieved with mutable fields if copies of the fields are returned, and not the fields themselves.
    • Once fields are immutable, accessing them through a getter or making them public final amounts to exactly the same.

    Note: kudos to you if you understand the above meme reference.

    Categories: Development Tags: design, object-oriented programming
  • Using exceptions when designing an API

    Many know the tradeoff of using exceptions when designing an application:

    • On one hand, try-catch blocks nicely segregate regular code from exception-handling code
    • On the other hand, throwing exceptions has a definite performance cost on the JVM

    Every time I've faced this quandary, I've ruled in favor of the former, because “premature optimization is evil”. However, this week proved to me that exception handling is a very serious decision when designing an API.

    I've been working on improving the performance of our application, and I've noticed many silent catches coming from the Spring framework (with the help of the excellent dynaTrace tool). The guilty lines come from the RequestContext.initContext() method:

    if (this.webApplicationContext.containsBean(REQUEST_DATA_VALUE_PROCESSOR_BEAN_NAME)) {
        this.requestDataValueProcessor = this.webApplicationContext.getBean(
        REQUEST_DATA_VALUE_PROCESSOR_BEAN_NAME, RequestDataValueProcessor.class);
    }
    

    Looking at the JavaDocs, it is clear that this method (and the lines above) is called each time the Spring framework handles a request. For web applications under heavy load, this means quite a lot! I provided a pass-through implementation of the RequestDataValueProcessor and patched one node of the cluster. After running more tests, we noticed response times were on average 5% faster on the patched node compared to the un-patched one. This is not my point, however.

    Should an exception be thrown when the bean is not present in the context? I think not… as the above snippet confirms. Other situations, e.g. injecting dependencies, might call for an exception to be thrown, but in this case it has to be the responsibility of the caller code to throw it or not, depending on the exact situation.

    There are plenty of viable alternatives to throwing exceptions:

    • Returning null: the intent of the code is not explicit without looking at the JavaDocs, making it the worst option on this list
    • Returning an Optional<T>: this makes the intent explicit compared to returning null; of course, it requires Java 8
    • Returning Guava's Optional<T>: for those of us who are not fortunate enough to have Java 8
    • Returning one's own Optional<T>: if you don't use Guava and prefer to embed your own copy of the class instead of relying on an external library
    • Returning a Try: cook up something like Scala's Try, which wraps either the returned bean or an exception (hence its old name - Either). In this case, however, the exception is not thrown but used like any other object - hence there will be no performance problem
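    To make the last option concrete, here's a hypothetical minimal Try-like wrapper (class and method names are mine, not Scala's API). It stores either a value or an exception; the exception is carried around as a plain object and never thrown:

    ```java
    // Hypothetical sketch of a Try-like container: holds either a result
    // or an exception, which is stored rather than thrown.
    public class Try<T> {

        private final T value;
        private final Exception error;

        private Try(T value, Exception error) {
            this.value = value;
            this.error = error;
        }

        public static <T> Try<T> success(T value) {
            return new Try<>(value, null);
        }

        public static <T> Try<T> failure(Exception error) {
            return new Try<>(null, error);
        }

        public boolean isSuccess() {
            return error == null;
        }

        // Return the wrapped value, or the fallback on failure
        public T getOrElse(T fallback) {
            return isSuccess() ? value : fallback;
        }
    }
    ```

    Caller code then decides what to do with a failure, without any try-catch block in sight.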

    Conclusion: when designing an API, one should really keep using exceptions for exceptional situations only.

    As for the current situation, Spring's BeanFactory class lies at the center of a web of dependencies, and its multiple getBean() method implementations cannot easily be replaced with one of the above options without forfeiting backward compatibility. One solution, however, would be to provide additional getBeanSafe() methods (or a better, more relevant name) using one of the above options, and then replace usages of their original counterparts step by step inside the Spring framework.
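    As an illustration of what such a getBeanSafe() might look like, here's a hypothetical sketch using Optional; a plain Map stands in for the real BeanFactory so the snippet stays self-contained:

    ```java
    import java.util.Map;
    import java.util.Optional;

    // Hypothetical getBeanSafe(): returns an empty Optional instead of
    // throwing when no matching bean is registered. A Map stands in for
    // the real bean registry.
    public class SafeBeanLookup {

        public static <T> Optional<T> getBeanSafe(Map<String, Object> beans, String name, Class<T> type) {
            Object bean = beans.get(name);
            return type.isInstance(bean) ? Optional.of(type.cast(bean)) : Optional.empty();
        }
    }
    ```

    The absence of a bean is now part of the method's return type, so the caller cannot forget to handle it.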

    Categories: Java Tags: design, exception, spring
  • A single simple rule for easier Exception hierarchy design

    Each new project usually requires setting up an Exception hierarchy, which ends up being the same almost every time.

    I will not go into details whether we should extend RuntimeException or directly Exception, or whether the hierarchy roots should be FunctionalException/TechnicalException or TransientException/PersistentException. Those will be rants for another time as my current problem is completely unrelated.

    The situation is the following: when something bad happens deep in the call layers (e.g. an authentication failure from the authentication provider), a new FunctionalException is created with a known error code, say 123.

    public class FunctionalException extends RuntimeException {
    
        private long errorCode;
    
        public FunctionalException(long errorCode) {
            this.errorCode = errorCode;
        }
        // Other constructors
    }
    

    At this point, there are some nice advantages to this approach: the error code can be both logged and shown to the user with an adequate error message.

    The downside is that it's completely impossible to analyze where the authentication failure exception is effectively used in the code. As I'm stuck with the task of adding new features to this codebase, I must say this sucks big time. Dear readers, when you design an Exception hierarchy, please add the following:

    public class AuthenticationFailureException extends FunctionalException {
    
        public AuthenticationFailureException() {
            super(123L);
        }
        // Other constructors
    }
    

    This is slightly more verbose, of course, but you'll keep all the aforementioned advantages while letting poor maintainers like me analyze the code much less painfully. Many thanks in advance!

    Categories: Java Tags: design, exception
  • Ego Driven Architecture

    Whoever is in charge of software architecture - be they senior developers, whole teams as in agile practice, or architects per se - there is a deep trend toward applying what I like to call Ego Driven Architecture (EDA for short, not to be mistaken for Event Driven Architecture).

    When one has to choose an architecture, one should design it from a number of objective criteria, including:

    • business requirements,
    • technical constraints,
    • ease of use,
    • maintenance costs,
    • etc.

    One could even argue you should take care of subjective yet real constraints:

    • cross-service warfare (between the development team and the production team),
    • interpersonal problems (between a project manager and the lead developer),
    • historical bias (not using EJB3 because EJB2 was too complex to develop),
    • etc.

    Normally, each criterion is then assigned a priority, and the designed architecture's goal is to satisfy the most criteria, weighted by their priority. Alas, when using EDA, the top-priority criterion is completely different: whatever the constraints and the requirements, the technologies used on the project should make the CV of the architect(s) shine brighter.

    If you don't already know what I'm speaking of, imagine an architect using EJB3 on a project even though the production application server is not ready to run them yet. Or an architect pressing the production team to upgrade to the latest version of the application server although there's no real need to. I know you've already seen such behaviours.

    There are a number of factors that contribute to the thriving of EDA:

    1. There's no validation of project architectures by a dedicated technical team. Either there's no such cross-project team, or it has no influence over architectural choices due to the business funding (read: power). Remember that since the money comes from the business, it is always very hard to tell them 'no'.
    2. The architects in charge of the design are junior. They definitely want to put to good use what they read in articles posted on the Web. No offense to anyone, but before using Scala and the like, wouldn't it be better to wait until enough feedback is available?
    3. Architects are only hired for the duration of the project. If your contribution stops when the project ships, there's a very big incentive to try unconventional technologies. Talk with internal IS teams and you'll find the 'veni, vidi, vici' syndrome a top cause of resentment toward outsourcing, since they usually end up cleaning the crap.

    Still, the root cause of EDA is a miscalculation on the architect's part. The IS world, even globalized, is very, very finite. You always work with the same people. If you practice EDA, sooner or later you'll run into someone who remembers you as the one who proposed the architecture that killed the project. Our work is complex enough taking every business requirement, every technical constraint, and every bit of service warfare into account without bringing the architect's ego into the fray. So please, refrain from using EDA.

    This doesn't mean you shouldn't strive to use pertinent technologies in your architecture, but the choice should be based on objective reasons, not your career.

    Categories: Development Tags: architecture, design, eda, ego, ego driven architecture