Archive

Posts Tagged ‘scala’
  • Scala vs Kotlin: inline and infix

This is the third post in the Scala vs Kotlin comparison series:

    1. Pimp my library
    2. Operator overloading

    This week, I’d like to address two features: inline and infix - not because they’re related but because neither of them would be enough to fill a post.

Inlining comes from C (and then C++). In those languages, a hint can be provided to the compiler through the inline keyword; the compiler may then replace an inlined function call with the function body itself, in order to skip the overhead of a function call.

Infix notation belongs with prefix and postfix: all three refer to the position of the operator relative to its two operands. Hopefully, the following example is clear enough:

    • Prefix: + 2 2
    • Postfix: 2 2 +
    • Infix: 2 + 2

    Scala

Scala offers inlining through the @inline annotation on a function. As in C/C++, this is a hint to the compiler. As per the Scaladoc:

    An annotation on methods that requests that the compiler should try especially hard to inline the annotated method.

The compiler has the final say in whether the function will be inlined or not. On the opposite side, a function can be annotated with @noinline to prevent inlining altogether:

    An annotation on methods that forbids the compiler to inline the method, no matter how safe the inlining appears to be.
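As a sketch, both annotations simply decorate a method; whether inlining actually happens remains the compiler's decision, and the semantics are unchanged either way (MathOps is a made-up example object):

```scala
object MathOps {
  // Ask the compiler to try especially hard to inline calls to this method
  @inline def square(x: Int): Int = x * x

  // Forbid inlining of this one, no matter how safe it appears
  @noinline def cube(x: Int): Int = x * x * x
}

val sq = MathOps.square(3) // behaves identically whether inlined or not
```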

As for infix notation, Scala's take is interestingly quite different from the definition above. In this context, it means that the dot and the parentheses can be omitted when calling functions that take a single parameter. There are some additional constraints:

    • Either the function must have no side-effects - be pure
    • Or the parameter must be a function

val isLess1 = 1.<(2)
val isLess2 = 1 < 2

Lines 1 and 2 are equivalent. Obviously, line 2 is much more readable. Thanks to infix notation, Scala doesn't need dedicated operators: every function can not only look like an operator but also be called like one.
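For instance, nothing prevents user code from defining its own "operators" this way; Vector2 below is a hypothetical example:

```scala
// A hypothetical type whose method is named like an operator
class Vector2(val x: Int, val y: Int) {
  def +(other: Vector2): Vector2 = new Vector2(x + other.x, y + other.y)
}

val v = new Vector2(1, 2) + new Vector2(3, 4) // infix call to the + method
```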

    Kotlin

    In Kotlin, inlining is set with the inline keyword. However, it’s much more than just a compiler hint: it’s a requirement. Whenever inline is used, the compiler will inline the function, no matter what.

    As such, it’s very important to use inlining only on small functions. Other limitations might include keeping its use to code under our control, e.g. to use it only for application code or code that is not part of a library’s public API.

Note that inlining affects the function itself as well as the arguments that are lambdas. To keep a lambda argument from being inlined, mark it with the noinline keyword.

Infix notation is not automatic in Kotlin, as it requires the function to be marked with the infix keyword. Additionally, the function needs to be attached to a class, either as a member or as an extension. Of course, the single-parameter constraint still applies.

    // Defined in Kotlin's runtime
    infix fun and(other: kotlin.Int): kotlin.Int { /* compiled code */ }
    
val and1 = 1.and(2)
val and2 = 1 and 2
    

Be aware that infix notation only looks like an operator call: it's still a regular method call underneath.

    // This is valid
    val bool3 = 1 < 2
    
    // This is not valid, because < is an operator
    val bool4 = 1.<(2)
    
Categories: Development Tags: scala, kotlin
  • Scala vs Kotlin: Operator overloading

Last week, I started my comparison of Scala and Kotlin with the Pimp my library pattern. In the second part of this series, I’d like to address operator overloading.

    Overview

Before diving into the nitty-gritty details, let’s first outline what it’s all about.

    In every language where there are functions (or methods), a limited set of characters is allowed to define the name of said functions. Some languages are more lenient toward allowed characters: naming a function \O/ might be perfectly valid.

Others are much stricter about it. It’s interesting to note that Java eschewed the ability to use symbols in function names besides $ - probably in response to previous abuses in older languages. It definitely stands on the less lenient part of the spectrum, and the Java compiler won’t compile the previous \O/ function.

    The name operator overloading is thus slightly misleading, even if widespread. IMHO, it’s semantically more correct to talk about operator characters in function names.

    Scala

Scala stands on the far end of the leniency spectrum and allows characters such as + and £ to be used to name functions, alone or in combination. Note I couldn’t find any official documentation regarding accepted characters (but some helpful discussion is available here).

    This enables libraries to offer operator-like functions to be part of their API. One example is the foldLeft function belonging to the TraversableOnce type, which is also made available as the /: function.
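To make the doubled naming concrete, here is a minimal sketch; note that the /: alias was eventually deprecated in favor of the plain name, so it is shown only as a comment:

```scala
val numbers = List(1, 2, 3, 4)

// Regular name: fold the list from the left, starting from 0
val sum = numbers.foldLeft(0)(_ + _)

// Operator alias, written with its seed on the left (deprecated in Scala 2.13):
// val sameSum = (0 /: numbers)(_ + _)
```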

This allows great flexibility, especially in defining DSLs. For example, in mathematics, functions can be named π or ∑. On the flip side, this flexibility might be subject to abuse, as \O/, ^_^ or even |-O are perfectly valid function names. Anyone for an emoticon-based API?

    def ∑(i: Int*) = i.sum
    
    val s = ∑(1, 2, 3, 5) // = 11
    

    Kotlin

Kotlin stands in the middle of the leniency scale, as only a limited set of operators can be defined.

Each such operator has a corresponding standard function signature. To define a specific operator on a type, the associated function should be implemented and marked with the operator keyword. For example, the + operator is associated with the plus() method. The following shows how to define this operator for an arbitrary new type and how to use it:

    class Complex(val i: Int, val j: Int) {
        operator fun plus(c: Complex) = Complex(this.i + c.i, this.j + c.j)
    }
    
    val c = Complex(1, 0) + Complex(0, 1) // = Complex(1, 1)
    

    Conclusion

Scala’s flexibility allows for an almost unlimited set of operator-looking functions. This makes it suited to designing DSLs with a near one-to-one mapping between domain names and function names. But it also relies on implicitness: every operator has to be known to every member of the team, present and future.

Kotlin takes a much more secure path, as it only allows defining a limited set of operators. However, those operators are so ubiquitous that even beginning software developers know them and their meaning (and even more so experienced ones).

Categories: Development Tags: scala, kotlin
  • Scala vs Kotlin: Pimp my library

I’ve been introduced to the world of immutable data structures with the Scala programming language - to write that I’ve been introduced to the FP world would sound too presumptuous. Although I wouldn’t recommend its usage in my day-to-day projects, I’m still grateful to it for what I learned: my Java code is now definitely not the same, because Scala made me aware of some failings in both the language and my coding practices.

On the other hand, I recently became much interested in Kotlin, another language that tries to bridge the Object-Oriented and Functional worlds. In this series of articles, I’d like to compare some features of Scala and Kotlin and see how each achieves them.

    In this article, I’ll be tackling how both offer a way to improve the usage of existing Java libraries.

    Scala

    Let’s start with Scala, as it coined the term Pimp My Library 10 years ago.

Scala’s approach is based on conversion. Consider a base type lacking the desired behavior. For example, Java’s double primitive type - mapped to Scala’s scala.Double type - is pretty limited.

    The first step is to create a new type with said behavior. Therefore, Scala provides a RichDouble type to add some methods e.g. isWhole().

The second step is to provide an implicit function that converts from the base type to the improved type. The signature of such a function must obey the following rules:

    • Have a single parameter of the base type
    • Return the improved type
    • Be tagged implicit

    Here’s how the Scala library declares the Double to RichDouble conversion function:

    private[scala] abstract class LowPriorityImplicits {
        ...
        implicit def doubleWrapper(x: Double) = new runtime.RichDouble(x)
        ...
    }
    

    An alternative is to create an implicit class, which among other requirements must have a constructor with a single parameter of base type.
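A minimal sketch of that alternative, assuming a made-up RicherDouble wrapper (not the standard library's RichDouble):

```scala
object DoubleExtensions {
  // Implicit class: a single constructor parameter of the base type
  implicit class RicherDouble(val d: Double) {
    def isNearlyWhole: Boolean = math.abs(d - math.round(d)) < 1e-9
  }
}

import DoubleExtensions._
val check = 2.0.isNearlyWhole // the conversion is applied automatically
```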

The final step is to bring the conversion into scope. For conversion functions, it means importing the function in the class file where the conversion will be used. Note that in this particular case, the conversion function is part of the automatic imports (there’s no need to explicitly declare it).

At this point, if a function is not defined for a type, the compiler will look for an imported conversion function that transforms this type into a new type providing the function. In that case, the compiler wraps the instance in a call to the conversion function.

val x = 45d
val isWhole = x.isWhole // Double has no isWhole() function

// But there's a conversion function in scope which transforms Double to RichDouble,
// and RichDouble has an isWhole() function, so the compiler generates:
// doubleWrapper(x).isWhole
    

    Kotlin

    One of the main reasons I’m cautious about using Scala is indeed the implicit part: it makes it much harder to reason about the code - just like AOP. Homeopathic usage of AOP is a life saver, widespread usage is counter-productive.

    Kotlin eschews implicitness: instead of conversions, it provides extension methods (and properties).

    Let’s analyze how to add additional behavior to the java.lang.Double type.

    The first step is to provide an extension function: it’s a normal function, but grafted to an existing type. To add the same isWhole() function as above, the syntax is the following:

    fun Double.isWhole() = this == Math.floor(this) && !java.lang.Double.isInfinite(this)
    

As in Scala, the second step is to bring this function into scope; this is achieved through an import. If the previous function has been defined in any file of the ch.frankel.blog package:

import ch.frankel.blog.isWhole

val x = 45.0
val isWhole = x.isWhole // Double has no isWhole() function

// But there's an extension function in scope, so the call is compiled to:
// x == Math.floor(x) && !java.lang.Double.isInfinite(x)
    

    Note that extension methods are resolved statically.

    Extensions do not actually modify classes they extend. By defining an extension, you do not insert new members into a class, but merely make new functions callable with the dot-notation on instances of this class.

    We would like to emphasize that extension functions are dispatched statically, i.e. they are not virtual by receiver type. This means that the extension function being called is determined by the type of the expression on which the function is invoked, not by the type of the result of evaluating that expression at runtime.
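The same static-resolution caveat applies to Scala's conversions: the compiler picks a conversion based on the static type at the call site. A minimal sketch, with made-up names:

```scala
import scala.language.implicitConversions

// A made-up wrapper type and conversion function
class Shouter(val s: String) {
  def shout: String = s.toUpperCase + "!"
}

implicit def toShouter(s: String): Shouter = new Shouter(s)

// Resolved at compile time: rewritten to toShouter("hello").shout
val greeting = "hello".shout
```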

    Conclusion

    Obviously, Scala has one more indirection level - the conversion. I let anyone decide whether this is a good or a bad thing. For me, it makes it harder to reason about the code.

The other gap is the packaging of the additional functions. While in Scala those are all attached to the enriched type and can be imported as a whole, in Kotlin they have to be imported one by one.

Categories: Development Tags: scala, kotlin
  • Forget the language, the important is the tooling

Not a week passes without my stumbling upon a post claiming language X is superior to all others, offers you things you cannot do in other languages, makes your kitchenware shine brighter and sometimes even returns lost love. I wouldn’t mind these claims, because some features really open my Java developer’s mind to the shortcomings of what I’m using now, but in general they’re just bashing another language - usually Java.

    For those that love to bitch, here’s a quote that might be of interest:

    There are only two kinds of programming languages: those people always bitch about and those nobody uses.

    -- Bjarne Stroustrup

That said, my current situation spawned some thinking. I was trying to migrate the Android application I develop in my spare time to Kotlin. I used Android Studio to do that, as JetBrains provides a migration tool through the Kotlin plugin. The process is quite straightforward, requiring only minor adjustments for some files. It made me realize that the language is not the most important thing; the tooling is. You can have the best language in the world - and it seems each person has their own personal definition of “best” - but if the tooling is lacking, it amounts to just nothing.

Take Scala for example. I don’t pretend to be an expert in Scala, but I know enough to know it’s very powerful. However, if you don’t have a tool to handle advanced language features, such as implicit parameters, you’re in for a lot of trouble. You’d better have an advanced IDE to display where they come from. To go beyond mere languages, the same can be said about technologies such as Dependency Injection - whether achieved through Spring or CDI - or aspects from AOP.

Another fitting example would be XML. It might not be everyone’s opinion, but XML is still very much used in the so-called enterprise world. However, beyond a hundred lines and a few namespaces, XML becomes quite hard to read without help. Enter Eclipse or XMLSpy, and presto: XML files can be displayed in a very handy tree-like representation.

Conversely, successful languages (and technologies) come with tooling. Look around and see for yourself. Still, I don’t pretend to know which is the cause and which the consequence: are languages successful because of their tooling, or are tools built around successful languages? Perhaps it’s both…

Previously, I didn’t believe Kotlin and its brethren had many chances of success. Given that JetBrains is behind both Kotlin and IntelliJ IDEA, I now believe Kotlin might have a very bright future ahead.

Categories: Development Tags: kotlin, language, scala
  • Scala on Android and stuff: lessons learned

I’ve been playing role-playing games since I was eleven, and my posse and I still play once or twice a year. Recently, they decided to play Earthdawn again, a game we hadn’t played for more than 15 years! That triggered my desire to create an application to roll all those strangely-shaped dice. And to combine the useful with the pleasant, I decided to use technologies I’m not really familiar with: the Scala language, the Android platform and the Gradle build system.

The first step was to design a simple and generic die-rolling API in Scala, and that was the subject of one of my former articles. The second step was to build upon this API to construct something more specific to Earthdawn and to design the GUI. Here’s the write-up of my musings during this development.

Here’s a general component overview:

    Component overview

    Reworking the basics

After much internal debate, I finally changed the return type of the roll method from (Rollable[T], T) to simply T, following a relevant comment on Reddit. I concluded that it’s up to the caller to keep hold of the die itself and return it if needed. That’s what I did in the Earthdawn domain layer.

    Scala specs

Using Scala meant I also dived into the Specs2 framework. Specs2 offers a Behavior-Driven way of writing tests, as well as integration with JUnit through runners and with Mockito through traits. Test instances can be initialized separately in a dedicated class, isolated from all others.

    My biggest challenge was to configure the Maven Surefire plugin to execute Specs 2 tests:

<plugin>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.17</version>
    <configuration>
        <argLine>-Dspecs2.console</argLine>
        <includes>
            <include>**/*Spec.java</include>
        </includes>
    </configuration>
</plugin>
    

    Scala + Android = Scaloid

Though it may seem daunting at first, running Scala on Android is feasible enough. Normal Java is compiled to bytecode and then converted to Dalvik-compatible Dex files. Since Scala is also compiled to bytecode, the same process can be applied. Not only can Scala easily be ported to Android, frameworks for doing so are available online: the one that seemed the most mature was Scaloid.

Scaloid’s most important feature is eschewing the traditional declarative XML-based layout in favor of a Scala-based Domain-Specific Language, with the help of implicits:

    val layout = new SLinearLayout {
      SButton("Click").<<.Weight(1.0f).>>
    }
    

    Scaloid also offers:

    • Lifecycle management
    • Implicit conversions
    • Trait-based hierarchy
    • Improved getters and setters
    • etc.

If you want to do Scala on Android, this is the project you have to look at!

    Some more bitching about Gradle

I’m not known to be a big fan of Gradle - to say the least. The biggest reason, however, is not that I think Gradle is bad, but that choosing a tool based on its hype level is the worst reason I can think of.

I used Gradle for the API project, and I must admit it is more concise than Maven. For example, instead of adding the whole maven-scala-plugin XML snippet, it’s enough to tell Gradle to use it with:

    apply plugin: 'scala'
    

However, the biggest advantage is that Gradle keeps the state of the project, so that unnecessary tasks are not executed. For example, if the code didn’t change, compilation will not be executed again.

Now to the things that are - to put it in a politically correct way - less than optimal:

• First, interestingly enough, Gradle does not output test results to the console. For a Maven user, this is somewhat unsettling. But even putting this flaw aside, I'm afraid any potential user is interested in the test output. Yet, this can be remedied with some Groovy code:

      test {
          onOutput { descriptor, event ->
              logger.lifecycle("Test: " + descriptor + ": " + event.message)
          }
      }
• Then, as I wanted to install the resulting package into my local Maven repository, I had to add the Maven plugin: easy enough... This also required Maven coordinates - quite expected. But why am I allowed to install without executing the test phases? This is not only disturbing; I cannot accept any rationale that allows installing without testing.
• Neither of the previous points proved to be a show stopper, however. For the Scala-Android project, you might think I just needed to apply both the scala and android plugins and be done with it. Well, this is wrong! It seems that despite Android being showcased as the use case for Gradle, the scala and android plugins are not compatible. This was the end of my Gradle adventure, probably for the foreseeable future.

    Testing with ease and Genymotion

The build process - i.e. transforming every class file into Dex, packaging them into an .apk and signing it - takes ages. It would be even worse when using the emulator from Google’s Android SDK. But rejoice: Genymotion is an advanced Android emulator that is not only very fast but also easy as pie to use.

Instead of doing an adb install, installing an apk on Genymotion can be achieved by just drag’n’dropping it onto the emulated device. Even better, it doesn’t require uninstalling the old version first, and it launches the application directly. Easy as pie, I told you!

    Conclusion

I now have a working Android application, complete with tests and a repeatable build. It’s not much, but it gets the job done, and it taught me some new stuff I didn’t know before. I can only encourage you to do the same: pick a small application and develop it with languages/tools/platforms that you don’t use in your day-to-day job. In the end, you will have learned stuff and have a working application. Doesn’t it make you feel warm inside?


Categories: Java Tags: android, genymotion, gradle, maven, scala
  • Dead simple API design for Dice Rolling

I wanted to create a small project where I could achieve results fairly quickly in technologies I never (or rarely) use. At the Mix-IT conference, I realized the little Scala I had learned had been quickly forgotten. And I wanted to give Gradle a try, despite my regular bitching about it. Since my role-playing crew wants to play Earthdawn (we stopped like 20 years ago), I decided to create a dice-roller app in Scala, running on Android (all of my friends have Android devices) and built with Gradle (I promised it at Devoxx).

I soon realized that there was a definite dice-rolling API that could be isolated from the rest. Rolling a die in Earthdawn has a definite quirk - if you roll the highest result, you re-roll and add it to the total, and so on until you roll something other than the highest - but the basics of rolling a die are similar in every system. It was time to design an extensible API. By extensible, I mean something I could re-use for every RPG system.

I’d already tried to create such an API, and the root of it is the roll() method signature. Previously, I used 2 methods in conjunction:

roll: Unit
getResult: T
    

    This is a big mistake, as implementations will require state handling! This created plenty of problems, among them:

    • creating instances each time I needed to roll
• calling 2 different methods in the right order, also known as temporal coupling
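To make the coupling concrete, here is an illustrative reconstruction of what such a stateful design forces on callers (the names are mine, not the original API's):

```scala
import scala.util.Random

// Stateful version: roll() mutates, getResult reads - in that order only
class StatefulDie(sides: Int) {
  private val random = new Random
  private var result: Option[Int] = None

  def roll(): Unit = result = Some(random.nextInt(sides) + 1)
  def getResult: Int =
    result.getOrElse(sys.error("roll() must be called before getResult"))
}

val die = new StatefulDie(6)
die.roll()                  // forget this line and getResult blows up at runtime
val outcome = die.getResult
```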

    This time, having learned from my mistake, I replaced that with the following:

    roll:T
    

This time, implementations can (and will) do without state.

The next design decision is about the result type. I’m not really sure about this point, but formerly I returned only the result. This time, I return the result as well as the object itself, so I can pass both along and let users know which die was rolled as well as the result. This is possible without creating a new class structure, thanks to Scala’s out-of-the-box Tuple2. My final interface looks like:

trait Rollable[T] {
  def roll: (Rollable[T], T)
}
    

As I’m aiming toward RPGs, I just need to make the number of sides a parameter (to be able to create those strange 12- and 20-sided dice). A naive implementation is very straightforward:

class Die(val sides: Int) extends Rollable[Int] {
  val random = new SecureRandom
  override def roll: (Die, Int) = (this, random.nextInt(sides) + 1)
}
    

Seems good enough. However, how can we test this design? If I need to build upon it, I must be able to set desired results: in this case, this is not cheating, it’s faking! As it turns out, I need to pass the random generator as a constructor parameter (but without a getter).

class Die(val sides: Int, random: Random) extends Rollable[Int] {
  override def roll: (Die, Int) = (this, random.nextInt(sides) + 1)
}
    

That’s better, but only marginally. With this code, I need to pass the random parameter each time I create a new instance. A slightly better option would be to add a constructor with a default SecureRandom instance. However, what if the next Java version offers an even better Random implementation? Or if the API user prefers to rely on an external secure entropy source? They would still have to pass the new, improved random for each call - back to square one. Fortunately, Scala offers a nice language feature called implicit. With implicits, API users only have to reference the random generator once in a file to use it everywhere. The improved design now looks like:

class Die(val sides: Int)(implicit random: Random) extends Rollable[Int] {
      override def roll:(Die, Int) = (this, random.nextInt(sides) + 1)
    }
    
    object SecureDie {
      implicit val random = new SecureRandom
    }
    

Callers then just need to import the provided random, or use their own, and call the constructor with the desired number of sides:

    import SecureDie.random // or import MyQuantumRandomGenerator.random
    val die = new Die(6)
    

The final step is to create Scala objects (singletons) in order to offer a convenient API. This is only possible because we designed our classes without state:

    import SecureDie.random
    
    object d3 extends Die(3)
    object d4 extends Die(4)
    object d6 extends Die(6)
    object d8 extends Die(8)
    object d10 extends Die(10)
    object d12 extends Die(12)
    object d20 extends Die(20)
    object d100 extends Die(100)
    

    Users now just need to call d6.roll to roll dice!
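Coming back to the faking idea from earlier, the implicit parameter makes tests trivial: a rigged generator can be injected instead of SecureRandom. The snippet below repeats the definitions so it stands alone; MaxRandom is a made-up name:

```scala
import scala.util.Random

trait Rollable[T] {
  def roll: (Rollable[T], T)
}

class Die(val sides: Int)(implicit random: Random) extends Rollable[Int] {
  override def roll: (Die, Int) = (this, random.nextInt(sides) + 1)
}

// A rigged generator that always yields the highest face - faking, not cheating
object MaxRandom {
  implicit val random: Random = new Random {
    override def nextInt(n: Int): Int = n - 1
  }
}

import MaxRandom.random
val (_, result) = new Die(6).roll // always the highest face
```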

With a simple domain, I showed how Scala’s most basic features can really help in getting a clean design. Results are available on GitHub.

In the next article, I will detail how I got Scala to run on Android and the pitfalls I stumbled upon. Spoiler: there will be some Gradle involved…

Categories: Development Tags: api, scala
  • A dive into the Builder pattern

    The Builder pattern has been described in the Gang of Four “Design Patterns” book:

    The builder pattern is a design pattern that allows for the step-by-step creation of complex objects using the correct sequence of actions. The construction is controlled by a director object that only needs to know the type of object it is to create.

A common implementation of the Builder pattern uses a fluent interface, with the following caller code:

    Person person = new PersonBuilder().withFirstName("John").withLastName("Doe").withTitle(Title.MR).build();
    

    This code snippet can be enabled by the following builder:

    public class PersonBuilder {
    
        private Person person = new Person();
        public PersonBuilder withFirstName(String firstName) {
            person.setFirstName(firstName);
            return this;
        }
    
        // Other methods along the same model
        // ...
    
        public Person build() {
            return person;
        }
    }
    

The job of the Builder is done: the Person instance is well-encapsulated, and only the build() method finally returns the built instance. This is usually where most articles stop, pretending to have covered the subject. Unfortunately, some cases may arise that need deeper work.

Let’s say we need some validation on the final Person instance, e.g. the lastName attribute is mandatory. To provide this, we could easily check whether the attribute is null in the build() method and throw an exception accordingly.

public Person build() {

    if (person.getLastName() == null) {
        throw new IllegalStateException("Last name cannot be null");
    }

    return person;
}
    

Sure, this solves our problem. Unfortunately, the check happens at runtime, as developers calling our code will find out (much to their chagrin). To go all the way to a true DSL, we have to update our design - a lot. We want to enforce the following caller code:

    Person person1 = new PersonBuilder().withFirstName("John").withLastName("Doe").withTitle(Title.MR).build(); // OK
    Person person2 = new PersonBuilder().withFirstName("John").withTitle(Title.MR).build(); // Doesn't compile
    

We have to update our builder so that each method returns either itself or an invalid builder that lacks the build() method, as in the following diagram. Note the initial PersonBuilder class is kept as the entry point, so the calling code doesn’t have to deal with Valid-/InvalidPersonBuilder if it doesn’t want to.

    This may translate into the following code:

    public class PersonBuilder {
    
        private Person person = new Person();
    
        public InvalidPersonBuilder withFirstName(String firstName) {
            person.setFirstName(firstName);
            return new InvalidPersonBuilder(person);
        }
    
        public ValidPersonBuilder withLastName(String lastName) {
            person.setLastName(lastName);
            return new ValidPersonBuilder(person);
        }
    
        // Other methods, but NO build() methods
    }
    
    public class InvalidPersonBuilder {
    
        private Person person;
    
        public InvalidPersonBuilder(Person person) {
            this.person = person;
        }
    
        public InvalidPersonBuilder withFirstName(String firstName) {
            person.setFirstName(firstName);
            return this;
        }
    
        public ValidPersonBuilder withLastName(String lastName) {
            person.setLastName(lastName);
            return new ValidPersonBuilder(person);
        }
    
        // Other methods, but NO build() methods
    }
    
    public class ValidPersonBuilder {
    
        private Person person;
    
        public ValidPersonBuilder(Person person) {
            this.person = person;
        }
    
        public ValidPersonBuilder withFirstName(String firstName) {
            person.setFirstName(firstName);
            return this;
        }
    
        // Other methods
    
        // Look, ma! I can build
        public Person build() {
            return person;
        }
    }
    

This is a huge improvement, as developers now know at compile time whether their built object is valid.

    The next step is to imagine more complex use-case:

1. Builder methods have to be called in a certain order. For example, a house should have foundations, a frame and a roof. Building the frame requires having built the foundations, just as building the roof requires the frame.
2. Even more complex, some steps may depend on previous ones (e.g. having a flat roof is only possible with a concrete frame)

    The exercise is left to interested readers. Links to proposed implementations welcome in comments.

There's one flaw left in our design: just calling the withLastName() method is enough to qualify our builder as valid, so passing null defeats our design's purpose, and checking for a null value at runtime wouldn't fit our compile-time strategy. Scala's language features enable an enhancement to this design called the type-safe builder pattern.
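As a sketch of that type-safe variant, a phantom type parameter plus an =:= evidence makes build() compile only once a last name has been supplied; all names below are illustrative:

```scala
// Phantom types tracking whether the last name has been set
sealed trait LastNameState
sealed trait HasLastName extends LastNameState
sealed trait NoLastName extends LastNameState

case class Person(firstName: String, lastName: String)

class PersonBuilder[State <: LastNameState](firstName: String, lastName: String) {
  def withFirstName(f: String): PersonBuilder[State] =
    new PersonBuilder[State](f, lastName)

  def withLastName(l: String): PersonBuilder[HasLastName] =
    new PersonBuilder[HasLastName](firstName, l)

  // The evidence exists only when State is HasLastName
  def build(implicit ev: State =:= HasLastName): Person =
    Person(firstName, lastName)
}

object PersonBuilder {
  def apply(): PersonBuilder[NoLastName] = new PersonBuilder[NoLastName]("", "")
}

val person = PersonBuilder().withFirstName("John").withLastName("Doe").build
// PersonBuilder().withFirstName("John").build  // does not compile
```

The invalid chain is rejected by the compiler itself, not at runtime, which is exactly what the Valid-/InvalidPersonBuilder pair simulated in Java.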

    Summary

1. In real-life software, the builder pattern is not as easy to implement as the quick examples found here and there suggest
2. Less is more: creating an easy-to-use DSL is (very) hard
3. Scala makes complex builder implementations easier to design than Java does
Categories: Development Tags: design pattern, Java, scala
  • On the merits of verbosity and the flaws of expressiveness

Java is too verbose! Who hasn’t stumbled upon such a rant on the Internet? And on the guy bragging about [insert expressive language here], which will soon replace Java because it is much more concise: it can replace those 10 lines of Java code with a one-liner. Ah, the power!

Unfortunately, in order to correlate conciseness with power (and verbosity with lack of power), those people take many shortcuts that, once put into perspective, make no sense at all. This article aims to surgically deconstruct such shortcuts to expose their weaknesses. And because I like a factual debate - and because there are not only trolls on the Internet - this post is linked to Andy Petrella’s different point of view.

    Verbose is not bad

Having more data than strictly necessary prevents message corruption. Here are two simple but accurate examples:

• In network engineering, there’s the concept of a DBit, or Delivery Confirmation Bit. It is just a summary of all the 0s and 1s of the current sequence, and it guarantees the delivered packet has been transmitted correctly despite not-so-adequate network quality.
• In real life, bank accounts have a 2-digit control key (at least in most European countries I know of). Likewise, it serves to avoid wiring funds to an erroneous account.

    It’s exactly the same for “verbose” languages: verbosity decreases the probability of misunderstanding.

    More concise is not (necessarily) better

    Claiming that a one-liner is better than 10 lines implies that shorter is better: this is sadly not the case.

    Let us take a code structure commonly found in Java:

    public List<Product> getValidProducts(List<Product> products) {
    
        List<Product> validProducts = new ArrayList<Product>();
    
        for (Product product : products) {
            if (product.isValid()) {
                validProducts.add(product);
            }
        }
        return validProducts;
    }
    

    In this case, we have a list of Product and want to filter out invalid ones.

    An easy way to reduce verbosity is to set everything on the same line:

    public List<Product> getValidProducts(List<Product> products) { List<Product> validProducts = new ArrayList<Product>(); for (Product product : products) { if (product.isValid()) { validProducts.add(product); } } return validProducts;}

    Not concise enough? Let’s rename our variables and method obfuscated-style:

    public List<Product> g(List<Product> s) { List<Product> v = new ArrayList<Product>(); for (Product p : s) { if (p.isValid()) { v.add(p); } } return v;}

    We drastically reduced our code verbosity - and we coded everything in a single line! Who said Java was too verbose?

    Expressiveness implies implicitness

    I already wrote previously on the dangers of implicitness: those same arguments could be used for programming.

    Let us rewrite our previous snippet à la Java 8:

    public List<Product> getValidProducts(List<Product> products) {
        return products.stream().filter(p -> p.isValid()).collect(Collectors.toList());
    }
    

    Much more concise and expressive. But it requires you, me and everybody (sorry, I couldn’t resist) to have a good knowledge of the API. The same goes for Scala operators and so on. And here comes my old friend, the context:

    • The + operator is known to practically everyone on the planet
    • The ++ operator is known to practically every software developer
    • The Elvis operator is probably known to Groovy developers, but can be inferred and remembered quite easily by most programmers
    • While the /: operator (aka foldLeft()) requires Scala and FP knowledge
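
    To make that last point concrete, here is what foldLeft() boils down to in plain Scala (a minimal illustration of my own, not tied to any particular codebase):

    ```scala
    object FoldLeftDemo {
      val numbers = List(1, 2, 3, 4)

      // foldLeft threads an accumulator through the list, from the left:
      // ((((0 + 1) + 2) + 3) + 4)
      val sum = numbers.foldLeft(0)(_ + _) // 10

      // The same, spelled out without placeholder syntax.
      // In Scala 2.12 and earlier, (0 /: numbers)(_ + _) is the symbolic alias.
      val verboseSum = numbers.foldLeft(0)((acc, n) => acc + n) // 10
    }
    ```

    The named form is readable by anyone who knows what a fold is; the symbolic form requires remembering which side the colon goes on.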

    So from top to bottom, each operator requires more and more knowledge, thus reducing more and more the number of people able to read your code. As you write code to be read, readability becomes the prime factor in code quality.

    Thus, the larger the audience you want to address, the more verbose your code should be. And yes, since you will use neither advanced features nor perfect expressiveness, seasoned developers will probably think this is just average code, and you will not be praised for your skills… But ego is not the reason why you code, is it?

    Summary

    In this article, I tried to prove that “Less is more” does not apply to programming with these points:

    • Verbosity prevents communication mistakes
    • Spaces and tabs, though not needed by the compiler/interpreter, do improve readability
    • Expressiveness requires you and your readers share some common and implicit understanding

    I will be satisfied if this post made you consider perhaps verbosity is not so bad after all.

    Now is the time to read Andy’s post if you didn’t already and make your own mind.

    Categories: Development Tags: Java, scala, verbosity
  • Devoxx France 2013 - Day 2

    Object and Functions, conflict without a cause by Martin Odersky

    The aim of Scala is to merge the features of Object-Oriented Programming and Functional Programming. The first popular OOP language was Simula in 1967, aimed at simulations; the second one was Smalltalk, for GUIs. The reason OOP became popular is the things you could do with it, not its individual features (like encapsulation). Before OOP, the data structure was well known, with an unbounded number of operations; with OOP, the number of operations is fixed but the number of implementations is unbounded. Though it is possible to apply procedural languages (such as C) to the fields of simulation & GUIs, it is too cumbersome to develop with them in real-life projects.

    FP has advantages over OOP, but none of them is enough to lead to mainstream adoption (remember it has been around for 50 years). What can spark this adoption is the complexity of making OOP applications ready for multicore and cloud computing. Requirements in these areas include:

    • parallel
    • reactive
    • distributed

    In each of these, mutable state is a huge liability. Shared mutable state and concurrent threads lead to non-determinism. To avoid this, just avoid mutable state :-) or at least reduce it.

    The essence of FP is to concentrate on transformation of immutable values instead of stepwise updates of a single mutable data structure.

    In Scala, the .par member turns a collection into a parallel collection. But then, you have to become functional and forego any side-effects. With Future and Promise, non-blocking code is also possible but hard to write (and read!), while Scala’s for-expression syntax is an improvement. It also makes parallel calls very easy.
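
    As a rough sketch of that last point (my own illustration, not code from the talk): two Futures started before a for-expression run concurrently, while the for-expression composes them as if the code were sequential:

    ```scala
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object FuturesDemo {
      // Both futures are created first, so they run concurrently
      val price: Future[Int] = Future { 40 }
      val tax: Future[Int] = Future { 2 }

      // The for-expression composes them without blocking any thread
      val total: Future[Int] = for {
        p <- price
        t <- tax
      } yield p + t

      // Blocking here only for demonstration purposes
      def result: Int = Await.result(total, 5.seconds) // 42
    }
    ```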

    Objects are not to be put away: in fact, they are not about imperative programming, they are about modularization. There is no module system (yet) that is on par with OOP. It feels like using FP & OOP together is sitting between two chairs. Bridging the gap requires letting go of some luggage first.

    Objects are characterized by state, identity and behavior (Grady Booch)

    It would be better to focus on behavior…

    Ease development of offline applications in Java with GWT by Arnaud Tournier

    HTML5 opens new capabilities that were previously the domain of native applications (local storage, etc.). However, it is not stable and mature yet: know that it will have a direct impact on development costs.

    GWT is a tool of choice for developing complex Java applications leveraging HTML5 features. A module called “elemental” completes the lacking features. Moreover, the JSNI API makes it possible to use JavaScript directly. In GWT, one develops in Java and a compiler transforms the Java code into JavaScript instead of bytecode. The generated code is compatible with most modern browsers.

    Mandatory features for offline include application cache and local storage. Application cache is a way for browsers to store files locally to use when offline. It is based on a manifest file, and has to be referenced by desired HTML pages (in the <html> tag). A cache management API is provided to listen to cache-related events. GWT already manages resources: we only need to provide a linker class to generate the manifest file that includes wanted resources. Integration of the cache API is achieved through usual JSNI usage [the necessary code is not user-friendly… in fact, it is quite gory].

    Local storage is a feature that stores user data on the client side. Several standards are available: WebSQL, IndexedDB, LocalStorage, etc. Unfortunately, only the latter is truly cross-browser; it is based on a key-value map of strings. Unlike for the application cache, there’s an existing out-of-the-box GWT wrapper around local storage. Since stored objects are strings and the code runs client-side, JSON is the serialization mechanism of choice. Beware that the standard mandates a maximum of 5MB of storage (while some browsers provide more).

    We want:

    1. Offline authentication
    2. A local database to be able to run offline
    3. JPA features for developers
    4. Transparent data synch when coming online again for users

    In regard to offline authentication, it is not a real problem. Since local storage is not secured, we just have to store the password hash. Get a SHA-1 Java library and GWT takes care of the rest.
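
    The idea can be sketched as follows (my own minimal illustration on the JDK's MessageDigest, here in Scala for brevity, not the GWT-specific library the talk refers to):

    ```scala
    import java.security.MessageDigest

    object PasswordHash {
      // Hash the password once; only the hex digest ever reaches local storage
      def sha1Hex(input: String): String =
        MessageDigest.getInstance("SHA-1")
          .digest(input.getBytes("UTF-8"))
          .map("%02x".format(_))
          .mkString

      // Offline check: compare hashes, never clear-text passwords
      def matches(candidate: String, storedHash: String): Boolean =
        sha1Hex(candidate) == storedHash
    }
    ```

    Storing the hash does not make local storage secure, but at least the clear-text password never leaves memory.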

    SQL capability is a bigger issue; there are many incomplete solutions. sql.js is a JavaScript SQLite port that provides limited SQL capabilities. As for integration, it’s back to JSNI again ([* sigh *]). You will be responsible for developing a high-level API to ease its usage, as you have to talk to either a true JPA backend or local storage. Note that JBoss Errai is a proposed JPA implementation to resolve this (unfortunately, it is not ready for production use - yet).

    State sync between client and server is the final problem. It can be separated into 3 levels of ascending complexity: read-only, read-add and read-add-delete-update. For now, sync has to be done manually; only the process itself is generic. In the last case, there are no rules, only different conflict resolution strategies. What is mandatory is to have causality relation data (see Lamport timestamps).

    Conclusion is that developing offline applications now is a real burden, with a large mismatch between possible HTML5 capabilities and existing tools.

    Comparing JVM web frameworks by Matt Raible

    Starting with JVM Web frameworks history, it all began with PHP 1.0 in 1995. In the J2EE world, Struts replaced proprietary frameworks in 2001.

    Are there too many Java web frameworks? Consider Vaadin, MyFaces, Struts 2, Wicket, Play!, Stripes, Tapestry, RichFaces, Spring MVC, Rails, Sling, Grails, Flex, PrimeFaces, Lift, etc.

    And now, with the SOFEA architecture, there are again many frameworks on the client side: Backbone.js, AngularJS, HTML5, etc. But “traditional” frameworks are still relevant because of client-side development limitations, including development speed and performance issues.

    In order to make a relevant decision when faced with a choice, first set your goals and then evaluate each option against these goals. Pick your best option, and only then re-set your goals. Maximizers try to make the best possible choice, satisficers try to find the first suitable choice. Note that the former are generally more unhappy than the latter.

    Here is a proposed typology (non-exhaustive):

    Pure web
    • Apache: Wicket, Struts, Spring, Tapestry, Click
    • GWT: SmartGWT, GXT, Vaadin, Errai
    • JSF: Mojarra (RI), MyFaces, Tomahawk, IceFaces, RichFaces, PrimeFaces
    • Miscellaneous: Spring MVC, Stripes, RIFE, ZK

    Full stack
    • Rails, Grails, Play!, Lift, Spring Roo, Seam

    SOFEA
    • API: RESTEasy, Jersey, CXF, vert.x, Dropwizard
    • JavaScript MVC: Backbone.js, Batman.js, JavaScript MVC, Ember.js, Sprout Core, Knockout.js, AngularJS

    The above matrix with fine-grained criteria is fine, but you probably have to create your own, with your own weight for each criterion. There are so many ways to tweak the comparison: you can assign more fine-grained grades, compare performance, lines of code, etc. Most of the time, you are influenced by your peers and by people who have used such and such framework. Interestingly enough, performance-oriented tests show that most of the time, bottlenecks appear in the database.

    • For full stack, choose by language
    • For pure web, Spring MVC, Struts 2, Vaadin, Wicket, Tapestry, PrimeFaces. Then, eliminate further by books, job trends, available skills (i.e. LinkedIn), etc.

    Fun facts: a great thing going for Grails and Spring MVC is backward compatibility. On the opposite side, Play! is the first framework whose community revived a legacy version.

    Conclusion: web frameworks are not the productivity bottleneck (administrative tasks are, as shown in the JRebel productivity report), make your own opinion, and be neither a maximizer (things change too quickly) nor a picker.

    Puzzlers and curiosities by Guillaume Tardif & Eric Lefevre-Ardant

    [An interesting presentation on self-reference, in art and code. Impossible to summarize in written form! Wait for the Parleys video…]

    Exotic data structures, beyond ArrayList, HashMap & HashSet by Sam Bessalah

    If all you have is a hammer, everything looks like a nail

    In some cases, a problem can be solved more easily by using the right data structure instead of the one we know best. These 4 “exotic” data structures are worth knowing:

    1. Skip lists are ordered data sets. The benefit of skip lists over array lists is that every operation (insertion, removal, contains, retrieval, ranges) is in O(log N). This is achieved by adding extra levels acting as "express lanes". Within the JVM, it is even faster thanks to region localization. The type is non-locking (thread-safe) and has been included since Java 6 with ConcurrentSkipListMap and ConcurrentSkipListSet. The former is ideal for cache implementations.
    2. Tries are ordered trees. Whereas traditional trees have a complexity of O(log N), where N is the tree depth, tries have constant-time complexity whatever the depth. A specialized kind of trie is the Hash Array Mapped Trie (HAMT), a functional data structure for fast computations. Scala offers the Ctrie structure, a concurrent trie.
    3. Bloom filters are probabilistic data structures, designed to return very fast whether an element belongs to a data structure. There are no false negatives: they accurately report when an element does not belong to the structure. On the contrary, false positives are possible: a filter may return true when it is not the case. In order to reduce the probability of false positives, one can choose an optimal hash function (cryptographic functions are best suited), to avoid collisions between hashed values. To go further, one can add hash functions. The trade-off is memory consumption. Because of collisions, you cannot remove elements from Bloom filters. To achieve removal, you can enhance Bloom filters with counting, where you also store the number of elements at a specific location.
    4. Count-Min Sketches are advanced Bloom filters. They are designed to work best with highly uncorrelated, unstructured data. Heavy-hitter algorithms are based on Count-Min Sketches.
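
    As an illustration of point 1 (my own snippet, not from the talk), the JDK's skip-list map can be used from Scala as-is:

    ```scala
    import java.util.concurrent.ConcurrentSkipListMap

    object SkipListDemo {
      // Thread-safe without locks; keys are kept sorted, operations are O(log N)
      val map = new ConcurrentSkipListMap[Int, String]()
      map.put(3, "three")
      map.put(1, "one")
      map.put(2, "two")

      // Ordering comes for free: the smallest key is always first
      val smallest: Int = map.firstKey

      // Range views are cheap thanks to the "express lane" levels
      val below3 = map.headMap(3) // keys 1 and 2
    }
    ```
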
    Categories: Event Tags: devoxx, gwt, scala
  • Scala cheatsheet part 1 - collections

    As a follow-up to point 4 of my previous article, here’s a first little cheatsheet on the Scala collections API. As in Java, knowing the API well helps create code that is more relevant, productive and maintainable. Collections play such an important part in Scala that knowing the collections API is a big step toward better Scala knowledge.

    Type inference

    In Scala, collections are typed, which means you have to be extra careful with element types. Fortunately, constructors and companion object factories have the ability to infer the type by themselves (most of the time). For example:

    scala>val countries = List("France", "Switzerland", "Germany", "Spain", "Italy", "Finland")
    countries: List[java.lang.String] = List(France, Switzerland, Germany, Spain, Italy, Finland)
    

    Now, the countries value is of type List[String] since all elements of the collections are String.

    As a corollary, if you don’t explicitly set the type when the collection is empty, you’ll have a collection typed with Nothing.

    scala>val empty = List()
    empty: List[Nothing] = List()
    scala> 1 :: empty
    res0: List[Int] = List(1)
    scala> "1" :: empty
    res1: List[java.lang.String] = List(1)
    

    Adding a new element to the empty list returns a new list, typed according to the added element. The same happens when an element of another type is added to a typed collection.

    scala> 1 :: countries
    res2: List[Any] = List(1, France, Switzerland, Germany, Spain, Italy, Finland)
    

    Default immutability

    In Functional Programming, state is banished in favor of “pure” functions. Scala being both Object-Oriented and Functional in nature, it offers both mutable and immutable collections under the same names but in different packages: scala.collection.mutable and scala.collection.immutable. For example, Set and Map are found in both packages (interestingly enough, there’s a scala.collection.immutable.List but a scala.collection.mutable.MutableList). By default, the collections imported in scope are the immutable ones, through the scala.Predef companion object (which is imported implicitly).
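
    In a nutshell (a minimal illustration): the default collections return a new instance on every “modification”, while their mutable counterparts update in place:

    ```scala
    object MutabilityDemo {
      // Set refers to scala.collection.immutable.Set by default
      val original = Set(1, 2)
      val extended = original + 3 // returns a brand new set; original is untouched

      // The mutable variant must be referenced explicitly
      val growable = scala.collection.mutable.Set(1, 2)
      growable += 3 // updates the set in place
    }
    ```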

    The collections API

    The heart of the matter lies in the APIs themselves. Beyond the expected methods also found in Java (like size() and indexOf()), Scala brings to the table a unique functional approach to collections.

    Filtering and partitioning

    Scala collections can be filtered so that they return:

    • either a new collection that retain only elements that satisfy a predicate (filter())
    • or those that do not (filterNot())

    Both take a function that accepts an element as a parameter and returns a boolean. The following example returns a collection that only retains countries whose name has more than 6 characters.

    scala> countries.filter(_.length > 6)
    res3: List[java.lang.String] = List(Switzerland, Germany, Finland)
    

    Additionally, the same function type can be used to partition the original collection into a pair of two collections, one that satisfies the predicate and one that doesn’t.

    scala> countries.partition(_.length > 6)
    res4: (List[java.lang.String], List[java.lang.String]) = (List(Switzerland, Germany, Finland),List(France, Spain, Italy))
    

    Taking, dropping and splitting

    • Taking a collection means returning a collection that keeps only the first n elements of the original one
    scala> countries.take(2)
    res5: List[java.lang.String] = List(France, Switzerland)
    
    • Dropping a collection consists of returning a collection that keeps all elements but the first n elements of the original one.
    scala> countries.drop(2)
    res6: List[java.lang.String] = List(Germany, Spain, Italy, Finland)
    
    • Splitting a collection consists of returning a pair of two collections, the first one containing the elements before the specified index, the second one those after.
    scala> countries.splitAt(2)
    res7: (List[java.lang.String], List[java.lang.String]) = (List(France, Switzerland),List(Germany, Spain, Italy, Finland))
    

    Scala also offers takeRight(Int) and dropRight(Int) variant methods that do the same but start with the end of the collection.

    Additionally, there are takeWhile(f: A => Boolean) and dropWhile(f: A => Boolean) variant methods that respectively take and drop elements from the collection sequentially (starting from the left) while the predicate is satisfied.

    Grouping

    Scala collections elements can be grouped in key/value pairs according to a defined key. The following example groups countries by their name’s first character.

    scala> countries.groupBy(_(0))
    res8: scala.collection.immutable.Map[Char,List[java.lang.String]] =\
      Map(F -> List(France, Finland), S -> List(Switzerland, Spain), G -> List(Germany), I -> List(Italy))
    

    Set algebra

    Three methods are available in the set algebra domain:

    • union (::: and union())
    • difference (diff())
    • intersection (intersect())

    Those are pretty self-explanatory.

    Map

    The map(f: A => B) method returns a new collection, whose length is the same as the original one’s, and whose elements are the results of applying the function to each original element.

    For example, the following snippet returns a new collection whose country names are reversed.

    scala> countries.map(_.reverse)
    res9: List[String] = List(ecnarF, dnalreztiwS, ynamreG, niapS, ylatI, dnalniF)
    

    Folding

    Folding is the operation of, starting from an initial value, applying a function to an accumulator and each element in turn. Considering that, it can be used like the above map if the accumulator is a collection, like so:

    scala> countries.foldLeft(List[String]())((list, x) => x.reverse :: list)
    res10: List[String] = List(dnalniF, ylatI, niapS, ynamreG, dnalreztiwS, ecnarF)
    

    Alternatively, you can provide other types of accumulator, like a string, to get different results:

    scala> countries.foldLeft("")((concat, x) => concat + x.reverse)
    res11: java.lang.String = ecnarFdnalreztiwSynamreGniapSylatIdnalniF
    

    Zipping

    Zipping creates a list of pairs, from a list of single elements. There are two variants:

    • zipWithIndex() forms the pair with the index of the element and the element itself, like so:
    scala> countries.zipWithIndex
    res12: List[(java.lang.String, Int)] = List((France,0), (Switzerland,1), (Germany,2), (Spain,3), (Italy,4), (Finland,5))
    

    Note: zipping with the index is very useful when you want to iterate but still keep a reference to the index. It saves you from declaring a variable outside the iteration and incrementing it inside.

    • Additionally, you can also zip two lists together:
    scala> countries.zip(List("Paris", "Bern", "Berlin", "Madrid", "Rome", "Helsinki"))
    res13: List[(java.lang.String, java.lang.String)] = \
      List((France,Paris), (Switzerland,Bern), (Germany,Berlin), (Spain,Madrid), (Italy,Rome), (Finland,Helsinki))
    

    Note that the original collections don’t need to have the same size. The returned collection’s size will be the min of the sizes of the two original collections.

    The reverse operation is also available, in the form of the unzip() method, which returns two lists when provided with a list of pairs. The unzip3() method does the same with a list of triples.

    Conclusion

    I’ve written this article in the form of a simple fact-oriented cheat sheet, so you can use it as such. In the next months, I’ll try to add other such cheatsheets.

    Categories: Development Tags: scala
  • My view on Coursera's Scala courses

    I’ve spent my last 7 weeks trying to follow Martin Odersky’s Scala courses on the Coursera platform.

    In doing so, my intent was to widen my approach on Functional Programming in general, and Scala in particular. This article sums up my personal thoughts about this experience.

    Time, time and time

    First, the courses are quite time-consuming! The course card advises 5 to 7 hours of personal work a week, and that’s the bare minimum. Developers familiar with Scala will probably take less time, but others who have no prior experience with it will probably have to invest as much.

    Given that I followed the course during my normal work time, I can assure you it can be challenging. Other people who followed the course confirmed this assessment.

    Functional Programming

    I believe the course allowed me to put the following Functional Programming principles in practice:

    • Immutable state
    • Recursion

    Each assignment was checked for code-quality, specifically for mutable state. Since in Scala, mutable variables have to be defined with the var keyword, the check was easily enforced.

    Algorithmics

    I must admit I received only the barest formal computer programming education. I’ve picked up the tricks of the trade from hard-won experience, and thus have only the barest algorithmics skills.

    The Scala course clearly required skills in this area, and I’m afraid I couldn’t complete some assignments because of this lack.

    Area of improvement

    Since I won’t code any library or framework in Scala anytime soon, I feel my next area of improvement will be focused on the whole Scala collections API.

    I found I lacked a lot of knowledge of these APIs during my assignments, and I do think improving it will let me code better Scala applications in the future.

    What's next

    At the beginning, I aimed for a 10/10 grade in all assignments, but in the end, I only achieved this in about half of them. Some reasons for this have been given above. To be frank, it bothers the student part of me… but the more mature part sees it as a way to improve myself. I won’t be able to get much further, since Devoxx takes place the following week in Antwerp (Belgium). I’ll try to write about the conferences I’ll attend, or I’ll see you there: in the latter case, don’t miss my hands-on lab on Vaadin 7!

    Categories: Development Tags: coursera, scala
  • Why I enrolled in an online Scala course

    When I heard that the Coursera online platform offered free Scala courses, I jumped at the opportunity.

    Here are some reasons why:

    • Over the years, I've slowly become convinced that whatever language you program in professionally, learning new languages is an asset, as it changes the way you design your code. For example, the excellent LambdaJ library gave me a great overview of how functional programming can be leveraged to ease the manipulation of collections in Java.
    • Despite my pessimistic predictions toward how Scala can penetrate mainstream enterprises, I still see the language as being a strong asset in small companies with high-level developers. I do not wish to be left out of this potential market.
    • The course is offered by Martin Odersky himself, meaning I get information directly from the language's creator. I guess one cannot hope for a better teacher.
    • Being a developer means you have to keep in touch with the way the world is going. And the current trend is revolutions every day. Think about how nobody spoke about Big Data 3 or 4 years ago. Or how, at the time, you developed your UI layer with Flex. Amazing how things are changing, faster and faster. You'd better keep the rhythm...
    • The course is free!
    • There's the course, of course, but there are also weekly assignments, each assigned a score. Those assignments fill the gap of most online courses, where you lose motivation with each passing day: the regular challenge ensures a longer commitment.
    • Given that some of my colleagues have also enrolled in the course, there's some level of competition (friendly, of course). This is most elating and enables each of us not only to search for a solution, but spend time looking for the most elegant one.
    • The final reason is that I'm a geek. I love learning new concepts and new ways to do things. In this case, the concept is functional programming and the way is Scala. Other alternatives are also available: Haskell, Lisp, Erlang or F#. For me, Scala has the advantage of being able to be run on the JVM.

    In conclusion, I cannot recommend you enough to do likewise, there are so many reasons to choose from!

    Note: I also just stumbled upon this Git kata; an area where I also have considerable progress to make.

    Categories: Development Tags: course, scala
  • Java, Scala, complexity and aardvarks

    This week saw another flame war from some of the Scala crowd. This time, it was toward Stephen Colebourne, the man behind Joda time.

    The article in question can be found here, and Stephen’s answer here.

    To be frank, I tend to agree with Stephen’s conclusion, but for very different reasons. Now, if you’re a Scala user, there are basically 2 options:

    • either you react like some did before, telling me I'm too stupid or too lazy to really learn the language, and stop reading at this point. In this case, I'm afraid there's nothing I can say apart from 'Please, don't harm me', because it has the feeling of a religious war coming from supposedly scientific-minded people.
    • Or you can read on and we'll debate like educated people.

    Truth is, I’m interested in Scala. I try to program in Scala for personal projects. So far, I’ve gotten the hang of traits (what I would give to have them in Java) and closures, and I’ve understood some of the generics concepts (so long as they’re not too complex). I plan on diving into the Collections API next, to make better use of closures.

    I think that Scala, like EJB2, was designed by smart, even brilliant people, but with a different experience than mine.

    In real life, projects are full of developers of heterogeneous levels: good developers, average developers and bad developers. And please don’t blame it on the HR department, the CEO or the global strategy of the company: it’s just a Gaussian distribution, like in any population.

    In effect, that makes Scala a no-go in most contexts. Take Java as an example. What made it so successful, even with all its shortcomings? The compile-once, run-everywhere motto? Perhaps, but IMHO, the first and foremost reason behind Java’s success is the shift of responsibility for memory management away from developers (as in C/C++). Before, the worse the developer, the higher the probability of a memory management bug; Java changed all this: it’s not perfect, but much simpler.

    Like Freddy Mallet, I’m sure a technology has to be simple to become mainstream, and Scala has not taken this road. As a consequence, it’s fated to stay in a niche market… Stating this should raise no reactions from anybody: it’s just a fact.

    Note: aardvarks will be a point for a future blog post.

    Categories: Java Tags: Java, scala
  • Second try with Vaadin and Scala

    My article from last week left me mildly depressed: my efforts to ease my Vaadin development were brutally stopped when I couldn’t inherit from a Java inner class in Scala. I wondered whether it was an impossibility or a mere lack of knowledge on my part.

    Fortunately, Robert Lally and Dale gave me the solution in their comments (many thanks to them). The operator used in Scala to access a Java inner class is #. Simple, yet hard to google… This has an important consequence: I don’t have to create Java wrapper classes, and I can have a single simple Maven project!
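
    Outside of Vaadin, the same # notation (a type projection) works with any inner class; here is a self-contained illustration with hypothetical Outer/Inner classes:

    ```scala
    class Outer {
      class Inner {
        val answer = 42
      }
    }

    object InnerDemo {
      // Outer#Inner means "an Inner belonging to any Outer instance",
      // which is what Java's Outer.Inner inner-class type denotes
      def readAnswer(inner: Outer#Inner): Int = inner.answer

      val outer = new Outer
      val value: Int = readAnswer(new outer.Inner) // 42
    }
    ```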

    Afterward, my development went much better. I tried using the following Scala features.

    Conciseness

    Compared to last week, my listener does not use any Java workaround and just becomes:

    @SerialVersionUID(1L)
    class SimpleRouterClickListener(eventRouter:EventRouter) extends ClickListener {
     def buttonClick(event:Button#ClickEvent) = eventRouter.fireEvent(event)
    }
    

    Now, I can appreciate the lesser verbosity of Scala. This becomes apparent in 3 points:

    • @SerialVersionUID : 10 times better than the field of the same name
    • No braces! One single line that is as readable as the previous Java version, albeit I have to get used to it
    • A conciseness in the class declaration, since both the constructor and the getter are implicit

    Trait

    CustomComponent does not implement the Observer pattern. Since it is not the only one, it would be nice to have such a reusable feature, in other words, a Scala trait. Let’s do that:

    trait Router {
      val eventRouter:EventRouter = new EventRouter()
    }
    

    Note to self: next time, I will remember that Class<?> in Java is written Class[_] in Scala. I had the brackets right from the start but lost 10 minutes with the underscore…

    Now, I just have to declare a component class that mixes in the trait and presto, my component has access to an event router object.

    Late binding

    Having access to the event router object is nice, but not a revolution. In fact, the trait exposes it, which defeats encapsulation and loose coupling. It would definitely be better if my router trait used the listener directly. Let’s add this method to the trait:

    def addListener(clazz:Class[_], method:String) = eventRouter.addListener(clazz, this, method)
    

    I do not really know if it’s correct to use the term late binding in this case, but it looks like it, since this references nothing in the trait and will be bound later to the concrete class mixing in the trait.

    The next stop

    Now, I’ve realized that the ClickListener interface is just a thin wrapper around a single function: it’s the Java way of implementing closures. So, why should I implement this interface at all?

    Why can’t I write something like this val f(event:Button#ClickEvent) = (eventRouter.fireEvent(_)) and pass this to the addListener() method of the button? Because it doesn’t implement the right interface. So, I ask myself if there is a bridge between the Scala first-class function and single method interfaces.

    Conclusion

    I went further this week with my Scala/Vaadin experiments. It’s getting better, much better. As of now, I don’t see any problems developing Vaadin apps with Scala. When I have a little more time, I will try a simple app in order to discover other problems, if they exist.

    You can find the sources of this article here, in Maven/Eclipse format.

    To go further:

    Categories: JavaEE Tags: scala, vaadin
  • Mixing Vaadin and Scala (with a touch of Maven)

    People who are familiar with my articles know that I’m interested in Vaadin (see the “Go further” section below) and also more recently in Scala since both can increase your productivity.

    Environment preparation

    Thus it is only natural that I tried to embed the best of both worlds, so to speak, as an experience. As an added challenge, I also tried to add Maven to the mix. It wasn’t as successful as I had wished, but the conclusions are interesting (at least to me).

    I already showed how to create Scala projects in Eclipse in a previous post, so this was really a no-brainer. However, layering Maven on top of that was trickier. The Scala library was of course added to the dependencies list, but I didn't find how to make Maven compile Scala code so that each Eclipse save triggers the compilation (as is the case with Java). I found the maven-scala-plugin provided by Scala Tools. However, I wasn't able to use it to my satisfaction with m2eclipse. Forget the Maven plugin… Basically, what I did was create the Maven project, then update the Eclipse configuration from Maven with m2eclipse and finally add the Scala builder: not very clean and utterly brittle, since any update would overwrite the Eclipse files. I'm all ears if anyone knows the "right" way to do it!

    Development time

    Now to the heart of the matter: I just want a text field and a button that, when pressed, displays the content of the field. Simple enough? Not really. The first problem I encountered was creating an implementation of the button click listener in Scala. In Vaadin, the listener interface has a single method void buttonClick(Button.ClickEvent event). Notice the type of the event? It is an inner class of Button, and I wasn't able to import it in Scala! Anyone who has the solution is welcome to step forward and share it.

    Faced with this limitation, I decided to encapsulate both the listener and the event class in two standard Java classes, one in each. In order to be decoupled, not to mention to ease my life, I created a parent POM project, and two modules, one for the Java workaround classes, the other for the real application.

    Next obstacle is also Scala-related, and due to a lack of knowledge on my part. I’m a Java boy so, in order to pass a Class instance, I’m used to write something like this:

    eventRouter.addListener(VisibleClickEvent.class, this, "displayMessage")
    

    Scala seems to frown upon it and refuses to compile the previous code. The message is “identifier expected but ‘class’ found”. The correct syntax is:

    eventRouter.addListener(classOf[VisibleClickEvent], this, "displayMessage")
    

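More generally, classOf[T] is Scala's counterpart to Java's T.class for any type known at compile time:

```scala
// classOf[T] yields the same java.lang.Class instance as Java's T.class
val stringClass: Class[String] = classOf[String]
println(stringClass.getName)  // java.lang.String
```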
    Moreover, while developing, I wrote a cast the Java way:

    getWindow().showNotification(button.getCaption() + " " + (String) field.getValue())
    

    My friend Scala loudly complained that “object String is not a value” and I corrected the code like so:

    getWindow().showNotification(button.getCaption() + " " + field.getValue().asInstanceOf[String])
    

    Astute readers will have noticed that concatenating strings renders this cast unnecessary, and I gladly removed it in the end.
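For reference, asInstanceOf[T] is Scala's counterpart to Java's (T) cast, and it fails the same way, with a ClassCastException at runtime when the value is not actually a T:

```scala
// The static type is AnyRef; the cast recovers the String view
val value: AnyRef = "hello"
val text = value.asInstanceOf[String]
println(text.length)  // 5
```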

    Conclusion

    In the end, it took more time than if I had done the example in Java.

    • For sure, some of the lost time is due to a lack of Scala knowledge on my part.
    • However, I'm not sure the number of code lines would have been lower in Java, due to the extra code needed to access inner classes.
    • In fact, I'm positive that the project would have been simpler with Java instead of Scala (one project instead of a parent and 2 modules).

    The question I ask myself is: if Scala cannot extend Java inner classes (assuming that is no mistake on my part), should APIs evolve with this constraint in mind? Are inner classes really necessary to achieve a clean design, or are they only a nice-to-have that is not much used?

    In all cases, developers who want to code Vaadin applications in Scala should take extra care before diving in, and be prepared for lower productivity than in Java, because Vaadin component classes use many inner classes.

    You can find the sources for this article here in Maven/Eclipse format.

    To go further:

    Categories: JavaEE Tags: maven, scala, vaadin
  • My first Scala servlet (with Eclipse)

    In this article, I will show you how to tweak Eclipse so that you will be able to code “classical” webapps in Scala.

    Note: I know about Lift, I just want to see how Scala can integrate in my existing infrastructure step-by-step.

    Setting up your Eclipse project

    Eclipse has a plugin (found at the Scala IDE site) that helps you develop with Scala. The plugin is by no means perfect (autocompletion does not work in Ganymede), but it gets the job done, letting you create Scala projects that automatically compile Scala code with scalac.

    The Web Tools Platform, another Eclipse plugin, lets you create web application projects that can be deployed to application servers.

    Most projects created with one Eclipse plugin are compatible with the others. For example, you can create a new Dynamic Web Project, then add a JPA extension and presto, you can now use the Java Persistence API in your project. Likewise, you can add a Vaadin extension to your Web Project and use Vaadin in it. These extensions are called "Facets" in Eclipse. Each project can have many facets: Java, JPA, Vaadin, etc. depending on the plugins you installed.

    Unfortunately, the Scala plugin does not use the Facet mechanism. Thus, Dynamic Web Projects cannot be enhanced with a Scala facet. Creating a web project that compiles your Scala code requires a little tweaking. First, create a simple Dynamic Web Project. Then, open the .project file. If you do not see the file, go to Customize View in the view menu and uncheck .*resources in the Filter tab (the first one). It should look something like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <projectDescription>
        <name>scala-servlet-example</name>
        <comment></comment>
        <projects>
        </projects>
        <buildSpec>
            <buildCommand>
                <name>org.eclipse.wst.jsdt.core.javascriptValidator</name>
                <arguments>
                </arguments>
            </buildCommand>
            <buildCommand>
                <name>org.eclipse.jdt.core.javabuilder</name>
                <arguments>
                </arguments>
            </buildCommand>
            <buildCommand>
                <name>org.eclipse.wst.common.project.facet.core.builder</name>
                <arguments>
                </arguments>
            </buildCommand>
            <buildCommand>
                <name>org.eclipse.wst.validation.validationbuilder</name>
                <arguments>
                </arguments>
            </buildCommand>
        </buildSpec>
        <natures>
            <nature>org.eclipse.jem.workbench.JavaEMFNature</nature>
            <nature>org.eclipse.wst.common.modulecore.ModuleCoreNature</nature>
            <nature>org.eclipse.wst.common.project.facet.core.nature</nature>
            <nature>org.eclipse.jdt.core.javanature</nature>
            <nature>org.eclipse.wst.jsdt.core.jsNature</nature>
        </natures>
    </projectDescription>
    

    Replace the org.eclipse.jdt.core.javabuilder with org.scala-ide.sdt.core.scalabuilder. Note: you can remove the Java builder with the IDE but you cannot add another one so hacking the configuration file is a necessary evil. Now add the Scala nature to your project:

    <nature>org.scala-ide.sdt.core.scalanature</nature>
    

    The .project file should now look like this:

    <?xml version="1.0" encoding="UTF-8"?>
    <projectDescription>
        <name>scala-servlet-example</name>
        <comment></comment>
        <projects>
        </projects>
        <buildSpec>
            <buildCommand>
                <name>org.eclipse.wst.jsdt.core.javascriptValidator</name>
                <arguments>
                </arguments>
            </buildCommand>
            <buildCommand>
                <name>org.scala-ide.sdt.core.scalabuilder</name>
                <arguments>
                </arguments>
            </buildCommand>
            <buildCommand>
                <name>org.eclipse.wst.common.project.facet.core.builder</name>
                <arguments>
                </arguments>
            </buildCommand>
            <buildCommand>
                <name>org.eclipse.wst.validation.validationbuilder</name>
                <arguments>
                </arguments>
            </buildCommand>
        </buildSpec>
        <natures>
            <nature>org.scala-ide.sdt.core.scalanature</nature>
            <nature>org.eclipse.jem.workbench.JavaEMFNature</nature>
            <nature>org.eclipse.wst.common.modulecore.ModuleCoreNature</nature>
            <nature>org.eclipse.wst.common.project.facet.core.nature</nature>
            <nature>org.eclipse.jdt.core.javanature</nature>
            <nature>org.eclipse.wst.jsdt.core.jsNature</nature>
        </natures>
    </projectDescription>
    

    If you made the right change, you should see an S instead of a J on your project's icon (and it should keep the globe!).

    The last thing to do is to add the Scala library to your path. Depending on your application server configuration, either add the Scala library to your build path (Build Path -> Configure Build Path -> Add Library -> Scala Library) or manually add the needed library to your WEB-INF/lib folder. You're all set to create your first Scala servlet!

    Creating your servlet

    In order to create your servlet "the easy way", just do New -> Other -> Scala Class. Choose a package (it is good practice to follow the Java convention of matching the class location to its package name). Choose HttpServlet as the superclass. Name it however you please and click OK.

    It seems to compile, but you and I know there will be problems later since you don't override the servlet's doGet() method. Do it:

    override def doGet(request:HttpServletRequest, response:HttpServletResponse) {
    
    }
    

    Don’t forget the imports. Let’s use the Scala way:

    import javax.servlet.http.{HttpServlet, HttpServletRequest, HttpServletResponse}
    

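As an aside, the same import syntax can also rename members, which is handy to keep Java collections from clashing with Scala's (the alias below is illustrative):

```scala
// Grouped import with a rename: java.util.ArrayList is visible as JArrayList
import java.util.{ArrayList => JArrayList}

val list = new JArrayList[String]()
list.add("hello")
println(list.get(0))  // hello
```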
    Last but not least, let’s code some things for your servlet to do. Since I’m feeling very innovative, I will print the good old “Hello world!”:

    override def doGet(request: HttpServletRequest, response: HttpServletResponse) {
      response setContentType ("text/html")
      val out = response.getWriter
      out println """<html>
      <head>
          <title>Scala Servlet</title>
      </head>
      <body>
          <p>Hello world!</p>
      </body>
    </html>"""
    }
    

    Don’t forget to add the servlet to your web.xml and presto, you should see some familiar example, in a familiar IDE, but with some unorthodox language.

    Conclusion

    Once Eclipse is correctly set up, it's pretty straightforward to develop your webapp in Scala. I do hope the next version of the Scala IDE plugin will further improve Scala's integration into Eclipse and WTP.

    Here are the sources of the example, in Eclipse format.

    To go further:

    Categories: JavaEE Tags: eclipse, scala