• My case for Event Bus

    Note: as I attended the Joker conference this week, I have to take a break from my “Going the microservices way” series. However, I hope you, dear readers, will still enjoy this article.

    Informal talks with colleagues around the coffee machine are a great way to improve your developer skills. Most of the time, people don’t agree, and that’s a good way to learn about thinking in context. One of the latest subjects was the Event Bus. Though I’m no Android developer, I’m part of a Mobile team that uses an Event Bus to dispatch events among the different components of the application. Amazingly enough (at least for me), one team member doesn’t like the Event Bus. As coincidences go, I just happened to read this post a few days later.

    I guess it’s my cue to make a case for Event Bus.

    There are several abstractions available to implement messaging, one of the first that comes to mind probably being the Observer pattern. It’s implemented in a variety of UI frameworks, such as AWT, Swing, Vaadin, you name it. Though it reverses the calling responsibility from the Subscriber (aka Observer) to the Observable (aka Subject), this pattern has a big drawback: for the Subscriber to be notified when the Observable fires an event, the Observable has to hold a reference to a collection of Subscribers. This directly translates into strong coupling between them, though not in the usual sense of the term.

    To decrease coupling, the usual solution is to insert a third-party component between the two. You know where I’m headed: this component is the Event Bus. Now, instead of Observers registering to the Subject, they register to the Event Bus and Subject publishes events to the Event Bus.

    The coupling between Subject and Observer is now replaced by a coupling between Subject and Event Bus: the Observable doesn’t need to implement a register() method with the Subscriber’s type as a parameter. Thus, the event can come from any source, and that source can change over time with no refactoring needed on the Subscriber’s part.

    In the Java world, I know of three different libraries implementing an Event Bus:

    In the latter two, the implementation of the notify() method is replaced by the use of annotations, while in the former, it’s based on an agreed-upon method signature (i.e. onEvent(Event<T>)). Thus, any type can be notified, removing the need to implement an Observer interface, so that coupling is reduced even further.
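
    To make this concrete, here’s a minimal sketch using Guava’s flavor. The ProductSelectedEvent and ProductSubscriber classes are hypothetical examples of mine, not part of any of those libraries:

    import com.google.common.eventbus.EventBus;
    import com.google.common.eventbus.Subscribe;

    public class EventBusSketch {

        // Hypothetical event carrying a payload
        static class ProductSelectedEvent {
            final long productId;
            ProductSelectedEvent(long productId) { this.productId = productId; }
        }

        // Any type can subscribe: no Observer interface to implement
        static class ProductSubscriber {
            @Subscribe
            public void onProductSelected(ProductSelectedEvent event) {
                System.out.println("Product selected: " + event.productId);
            }
        }

        public static void main(String[] args) {
            EventBus eventBus = new EventBus();
            eventBus.register(new ProductSubscriber());   // Subscriber registers to the bus, not to the Subject
            eventBus.post(new ProductSelectedEvent(42L)); // The Subject only knows about the bus
        }
    }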

    Finally, all those implementations offer a way to pass an additional payload in the form of an event. This is the final design:

    Advantages of this design are manifold:

    • No need to reference both the Subscriber and the Observable in a code fragment to make the latter register the former
    • No need to implement an Observer interface
    • Ability to pass additional payload
    • Many-to-many relations: one Subscriber can subscribe to different events, and an Observable can send different kinds of events

    The only con - though it might be an issue depending on your context - is that the order in which events are delivered is not guaranteed. In such a case, a traditional coupled method-calling approach is better suited. In all other cases - especially in the user interface layer, and even more so when it’s designed with true graphical components (AWT, Swing, Android, Vaadin, etc.) - you should give the Event Bus a shot, given all its benefits.

    For example, here’s an old but still relevant article on integrating Guava Event Bus with Vaadin.

  • Going the microservices way - part 3

    In the first post of this series, I created a simple microservice based on a Spring Boot + Data JPA stack to display a list of available products in JSON format. In the second part, I demoed how this app could be deployed on Pivotal Cloud Foundry. In this post, I’ll demo the changes required to deploy on Heroku.

    Heroku

    As with PCF, Heroku requires a dedicated local tool; for Heroku, it’s called the Toolbelt. Once installed, one needs to log in to one’s account through the toolbelt:

    heroku login
    

    The next step is to create the application on Heroku. The main difference between Cloud Foundry and Heroku is that the former deploys ready-made binaries, while Heroku builds them from source. Creating the Heroku app will also create a remote Git repository to push to, as Heroku uses Git to manage code.

    heroku create
    
    Creating salty-harbor-2168... done, stack is cedar-14
    https://salty-harbor-2168.herokuapp.com/ | https://git.heroku.com/salty-harbor-2168.git
    Git remote heroku added
    

    The heroku create command also updates the .git config by adding a new heroku remote. Let’s push to the remote repository:

    git push heroku master
    
    Counting objects: 21, done.
    Delta compression using up to 8 threads.
    Compressing objects: 100% (14/14), done.
    Writing objects: 100% (21/21), 2.52 KiB | 0 bytes/s, done.
    Total 21 (delta 1), reused 0 (delta 0)
    remote: Compressing source files... done.
    remote: Building source:
    remote: 
    remote: -----> Java app detected
    remote: -----> Installing OpenJDK 1.8... done
    remote: -----> Installing Maven 3.3.3... done
    remote: -----> Executing: mvn -B -DskipTests=true clean install
    remote:        [INFO] Scanning for projects...
    
    // Here the app is built remotely with Maven
    remote:        [INFO] ------------------------------------------------------------------------
    remote:        [INFO] BUILD SUCCESS
    remote:        [INFO] ------------------------------------------------------------------------
    remote:        [INFO] Total time: 51.955 s
    remote:        [INFO] Finished at: 2015-10-03T09:13:31+00:00
    remote:        [INFO] Final Memory: 32M/473M
    remote:        [INFO] ------------------------------------------------------------------------
    remote: -----> Discovering process types
    remote:        Procfile declares types -> (none)
    remote: 
    remote: -----> Compressing... done, 74.8MB
    remote: -----> Launching... done, v5
    remote:        https://salty-harbor-2168.herokuapp.com/ deployed to Heroku
    remote: 
    remote: Verifying deploy.... done.
    
    To https://git.heroku.com/salty-harbor-2168.git
     * [new branch]      master -> master
    

    At this point, one needs to tell Heroku how to launch the newly built app. This is done through a Procfile located at the root of the project. It should contain a hint to Heroku that we are running a web application - reachable through http(s) - along with the standard Java command line. Also, the web port should be bound to a Heroku-documented environment variable ($PORT):

    web: java -Dserver.port=$PORT -jar target/microservice-sample.jar
    

    The application doesn’t work yet, as the logs will tell you. To display the remote logs, type:

    heroku logs --tail
    

    There should be something like that:

    Web process failed to bind to $PORT within 60 seconds of launch
    

    That hints that a web node is required - or in Heroku’s parlance, a dyno. Let’s start one:

    heroku ps:scale web=1
    

    The application should now be working at https://salty-harbor-2168.herokuapp.com/products (be patient: since it’s a free plan, the app is probably sleeping). Yet, something is still missing: the app is still running on the embedded H2 database. We should switch to a more resilient database. By default, Heroku provides a free PostgreSQL development database. Creating such a database can be done either through the user interface or the command line.
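
    For the command-line route, it would be something along these lines (hobby-dev being the name of the free PostgreSQL plan at the time of writing; check the current add-on plans):

    heroku addons:create heroku-postgresql:hobby-dev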

    Spring Boot’s documentation describes the Spring Cloud Connectors library. It’s able to automatically detect that the app is running on Heroku and to create a datasource bound to the PostgreSQL database provided by Heroku. However, despite my best efforts, I haven’t been able to make this work, running each time into the following:

    org.springframework.cloud.CloudException: No unique service matching interface javax.sql.DataSource found. Expected 1, found 0

    Time to get a little creative. Instead of sniffing the database and its configuration, let’s configure it explicitly. This requires creating a dedicated profile configuration properties file for Heroku, src/main/resources/application-heroku.yml:

    spring:
        datasource:
            url: jdbc:postgresql://<url>
            username: <username>
            password: <password>
            driver-class-name: org.postgresql.Driver
        jpa:
            database-platform: org.hibernate.dialect.PostgreSQLDialect
    

    The exact database connection settings can be found in the Heroku user interface. Note that the dialect seems to be required. Also, the POM should be updated to add the PostgreSQL driver to the dependencies.
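
    For reference, the dependency should look like this (the version being managed by the Spring Boot parent POM):

    <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <scope>runtime</scope>
    </dependency>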

    Finally, to activate the profile on Heroku, change the Procfile as such:

    web: java -Dserver.port=$PORT -Dspring.profiles.active=heroku -jar target/microservice-sample.jar
    

    Commit and push to Heroku again: the application should be updated to use the provided Postgres.

    Next week, I’ll create a new shopping bag microservice that depends on this one, and design for failure.

    Categories: Java Tags: heroku, microservices, spring boot
  • Going the microservices way - part 2

    In my previous post, I developed a simple microservices REST application based on Spring Boot in a few lines of code. Now is the time to put this application in the cloud. In the rest of the article, I suppose you already have an account configured for the provider.

    Pivotal Cloud Foundry

    Pivotal Cloud Foundry is the Cloud offering from Pivotal, based on Cloud Foundry. By registering, you get a 60-day free trial, which I happily use.

    Pivotal CF requires a local executable to push apps. There’s one such executable for each major platform (Windows, Mac, Linux deb and Linux rpm). Once installed, it requires authentication with the same credentials as your Pivotal CF account.
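
    Authentication itself is a one-liner (the API endpoint below being the public Pivotal CF one; adapt to your target):

    cf login -a https://api.run.pivotal.io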

    To push, one uses the cf push command. This command needs at least the application name, which can be provided either on the command line (cf push myapp) or via a manifest.yml file. If one doesn’t provide a manifest, every file (from the folder where the command is launched) will be pushed recursively to the cloud. However, Pivotal CF won’t be able to understand the format of the app in this case. Let’s be explicit and provide both a name and the path to the executable JAR in the manifest.yml:

    ---
    applications:
    -   name: product
        path: target/microservice-sample.jar
    

    Notice I didn’t put the version in the JAR name. By default, Maven will generate a standard JAR with only the app’s resources and, through the Spring Boot plugin, a fat JAR that includes all the necessary libraries. This last JAR is the one to be pushed to Pivotal CF. This requires a tweak to the POM to remove the version from the final name:

    <plugin>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-maven-plugin</artifactId>
      <configuration>
        <finalName>${project.artifactId}</finalName>
      </configuration>
    </plugin>
    

    Calling cf push at this point will push the fat JAR to Pivotal CF. The output should read something like this:

    Using manifest file /Volumes/Data/IdeaProjects/microservice-sample/manifest.yml
    Updating app product in org frankel / space development as [email protected]
    OK
    Uploading product...
    Uploading app files from: /Volumes/Data/IdeaProjects/microservice-sample/target/microservice-sample.jar
    Uploading 871.1K, 109 files
    Done uploading               
    OK
    Stopping app product in org frankel / space development as [email protected]
    OK
    Starting app product in org frankel / space development as [email protected]
    -----> Downloaded app package (26M)
    -----> Java Buildpack Version: v3.2 | https://github.com/cloudfoundry/java-buildpack.git#3b68024
    -----> Downloading Open Jdk JRE 1.8.0_60 from 
           https://download.run.pivotal.io/openjdk/trusty/x86_64/openjdk-1.8.0_60.tar.gz (0.9s)
           Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.0s)
    -----> Downloading Open JDK Like Memory Calculator 2.0.0_RELEASE from
           https://download.run.pivotal.io/memory-calculator/trusty/x86_64/memory-calculator-2.0.0_RELEASE.tar.gz (0.0s)
           Memory Settings: -XX:MetaspaceSize=104857K -Xmx768M -XX:MaxMetaspaceSize=104857K -Xss1M -Xms768M
    -----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from 
           https://download.run.pivotal.io/auto-reconfiguration/auto-reconfiguration-1.10.0_RELEASE.jar (0.0s)
    -----> Uploading droplet (71M)
    0 of 1 instances running, 1 starting
    1 of 1 instances running
    App started
    OK
    App product was started using this command 
     `CALCULATED_MEMORY=$($PWD/.java-buildpack/open_jdk_jre/bin/java-buildpack-memory-calculator-2.0.0_RELEASE
     -memorySizes=metaspace:64m.. -memoryWeights=heap:75,metaspace:10,native:10,stack:5
     -memoryInitials=heap:100%,metaspace:100%
     -totMemory=$MEMORY_LIMIT) && SERVER_PORT=$PORT $PWD/.java-buildpack/open_jdk_jre/bin/java 
     -cp $PWD/.:$PWD/.java-buildpack/spring_auto_reconfiguration/spring_auto_reconfiguration-1.10.0_RELEASE.jar
     -Djava.io.tmpdir=$TMPDIR -XX:OnOutOfMemoryError=$PWD/.java-buildpack/open_jdk_jre/bin/killjava.sh $CALCULATED_MEMORY
     org.springframework.boot.loader.JarLauncher`
    Showing health and status for app product in org frankel / space development as [email protected]
    OK
    requested state: started
    instances: 1/1
    usage: 1G x 1 instances
    urls: product.cfapps.io
    last uploaded: Sun Sep 27 09:05:49 UTC 2015
    stack: cflinuxfs2
    buildpack: java-buildpack=v3.2-https://github.com/cloudfoundry/java-buildpack.git#3b68024 java-main
                 open-jdk-like-jre=1.8.0_60 open-jdk-like-memory-calculator=2.0.0_RELEASE
                 spring-auto-reconfiguration=1.10.0_RELEASE
         state     since                    cpu    memory         disk           details   
    #0   running   2015-09-27 11:06:35 AM   0.0%   635.5M of 1G   150.4M of 1G 
    

    With the modified configuration, Pivotal CF is able to detect it’s a Java Spring Boot application and deploy it. It’s now accessible at https://product.cfapps.io/products!

    However, this application doesn’t reflect a real-world one, for it still uses the H2 embedded database. Let’s decouple the database from the application, but only when in the cloud, as H2 should still be used locally.

    The first step is to create a new database service in Pivotal CF. Nothing fancy here: as JPA is used, we will use a standard SQL database. The ClearDB MySQL service is available in the marketplace, and its Spark DB plan is available for free. Let’s create a new instance for our app and give it a relevant name (mysql-service in my case). It can be done either in the Pivotal CF GUI or via the command line.
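
    Via the command line, the creation would look something like this (cleardb and spark being the service and plan names in the marketplace at the time; check cf marketplace to confirm):

    cf create-service cleardb spark mysql-service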

    The second step is to bind the service to the app. It can also be achieved either way, or even by declaring it in the manifest.yml.
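
    For example, the manifest shown above could declare the binding with a services block - a sketch assuming the instance name created previously:

    ---
    applications:
    -   name: product
        path: target/microservice-sample.jar
        services:
            - mysql-service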

    The third step is to actually use it, and thanks to Pivotal CF’s auto-reconfiguration feature, there’s nothing to do at this point! During deployment, the platform will introspect the app and replace the existing H2 data source with one wrapping the MySQL database. This can be verified by checking the actuator (provided it has been added as a dependency).

    This is it for pushing the Spring Boot app to Pivotal CF. The code is available on Github. Next week, let’s deploy to Heroku to check if there are any differences.

  • Going the microservices way - part 1

    Microservices are trending right now, whether you like it or not. There are good reasons for that, as they resolve many issues organizations are faced with. They also open a Pandora’s box, as new issues pop up every now and then… But that is a story for another day: in the end, microservices are here to stay.

    In this series of articles, I’ll take a simple Spring Boot app ecosystem and turn it into microservices to deploy them into the Cloud. As an example, I’ll use an e-commerce shop, which requires different services such as:

    • an account service
    • a product service
    • a cart service
    • etc.

    This week is dedicated to creating a sample REST application that lists products. It is based on Spring Boot, because Boot makes developing such an application a breeze, as we’ll see in the rest of this article.

    Let’s use Maven. The relevant part of the POM is the following:

    <project>
      <modelVersion>4.0.0</modelVersion>
      <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.3.0.M5</version>
      </parent>
      <groupId>ch.frankel.microservice</groupId>
      <artifactId>microservice-sample</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <properties>
        <java.version>1.8</java.version>
      </properties>
      <dependencies>
        <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-data-rest</artifactId>
        </dependency>
        <dependency>
          <groupId>com.h2database</groupId>
          <artifactId>h2</artifactId>
          <scope>runtime</scope>
        </dependency>
      </dependencies>
    </project>
    

    Easy enough:

    1. Use Java 8
    2. Inherit from Spring Boot parent POM
    3. Add Spring Data JPA & Spring Data REST dependencies
    4. Add a database provider, h2 for now

    The next step is the application entry point; it’s the standard Spring Boot main class:

    @SpringBootApplication
    public class ProductApplication {
    
        public static void main(String... args) {
            SpringApplication.run(ProductApplication.class);
        }
    }
    

    It’s very straightforward, thanks to Spring Boot inferring which dependencies are on the classpath.

    Besides that, it’s just adding a simple Product entity class and a Spring Data repository interface:

    @Entity
    public class Product {
    
        @Id
        @GeneratedValue(strategy = AUTO)
        private long id;
    
        private String name;
    
        // Constructors and getters
    }
    
    public interface ProductRepository extends JpaRepository<Product, Long> {}
    

    At this point, we’re done. Spring Data REST will automatically provide a REST endpoint for the repository. After executing the main class of the application, browsing to http://localhost:8080/products will yield the list of available products in JSON format.
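
    For instance, a quick check with curl would return something along these lines - a trimmed sketch, with a hypothetical product and Spring Data REST’s HAL wrapping:

    curl http://localhost:8080/products

    {
      "_embedded" : {
        "products" : [ {
          "name" : "Apple",
          "_links" : {
            "self" : {
              "href" : "http://localhost:8080/products/1"
            }
          }
        } ]
      }
    }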

    If you don’t believe how easy this is, you’re welcome to take a look at the Github repo and just launch the app with mvn spring-boot:run. A script to populate initial data is provided.

    Next week, I’ll try to deploy the application to the cloud (probably among Cloud Foundry, Heroku and YaaS).

    Categories: Java Tags: microservices, spring boot
  • SpringOne2GX 2015

    This week, I had the privilege to speak at SpringOne2GX in Washington D.C., in not one but two talks:

    Apart from preparing and rehearsing, I also used the occasion to attend some talks. Here follows a summary of the best.

    Spring Framework, the ultimate configuration battle

    The talk compared 3 methods to configure a Spring application: good old XML, JavaConfig and Groovy. There were a number of use cases, and each was implemented by one speaker in his dedicated way. The talk was very entertaining, as the 3 speakers seemed to really enjoy their show. I think JavaConfig was downplayed in some aspects, such as not using an anonymous inner class, to make Groovy shinier. However, I learned that XML and Groovy return Bean Definitions that are instantiated later, while JavaConfig returns the beans directly.

    Hands on Spring Security

    A nice talk from Spring Security’s lead himself, with plenty of demos highlighting different security problems. For me, the magical moment was when the browser displayed an image that triggered a JS script - thanks to Internet Explorer’s content sniffing. I think security is often undervalued, and that talk reminds you about it.

    Intro to Spring Boot for the web tier

    Spring Boot developers showed how to kick-start a web application, from an empty app to a complete, fully-configured one. A step-by-step demo, the kind I like, with each step the basis for the next one. Special mention to the error page: it’s worth a look.

    Apache Spark for Big Data Processing

    The talk was separated into two parts, with a speaker for each: the first presented raw Apache Spark, and it was quite possible to understand it thanks to the many drawings and a demo; the second described Spring XD, but unfortunately, slides with bullet points were not enough for that, given my existing knowledge.

    Comparing Hot JavaScript Frameworks: AngularJS, Ember.js and React.js

    This had nothing to do with Spring, but since I don’t do JavaScript, I decided to go there to get an overview of the beasts. Plus, the speaker was Matt Raible, who has kind of specialized in comparing frameworks, and he’s a good speaker. The talk was entertaining, but the conclusion was the usual one: do as you want, which kind of defeats the purpose.

    Reactive Web Applications

    Last but for sure not least, this talk really blew my mind. The good folks at Spring are aware of the Reactive wave and are surfing it at full speed. In this session, the presenters demoed a web application which implemented the Reactive paradigm using different APIs (I remember RxJava and Netty, but I’m afraid there were more): the server served content in a non-blocking way, depending on the speed of the subscribing client! I think that when this is available as GA, web developers - myself included - will have to rethink the way they model client-server interactions. No more request-response but infinite streams…​

    Conclusion

    The 3 main ideas I came back with are: Spring Boot, Cloud Foundry and Reactive. Those are either already in place and evolving at light speed, or soon to be implemented. Watch for them, they’re going to be the next big things (or already are).

    However, as always, talks come only second to social interactions, meeting conference pals or getting acquainted with new people and their ideas. It has been fun, folks, hope to see you next year!

    Categories: Event Tags: spring
  • True singletons with Dagger 2

    I wrote about Dagger 2 some time ago. However, I still don’t understand every nook and cranny. In particular, the @Singleton annotation can be quite misleading, as user Zhuiden was kind enough to point out:

    If you create a new ApplicationComponent each time you inject, you will get a new instance in every place where you inject; and you will not actually have singletons where you expect singletons. An ApplicationComponent should be managed by the Application and made accessible throughout the application, and the Activity should have nothing to do with its creation.

    After a quick check, I could only agree. The Singleton pattern is only applied in the context of a specific @Component, and a new one is created each time you call:

    DaggerXXXComponent.create();
    

    Thus, the only problem is to instantiate the component once and store it in a scope available from every class in the application. Guess what, such a scope exists: only two simple steps are required.

    1. Extend the Application class and, in its onCreate() method, create a new XXXComponent instance and store it as a static attribute:

       public class Global extends Application {

           private static ApplicationComponent applicationComponent;

           public static ApplicationComponent getApplicationComponent() {
               return applicationComponent;
           }

           @Override
           public void onCreate() {
               super.onCreate();
               applicationComponent = DaggerApplicationComponent.create();
           }
       }

    2. Hook the created class into the application lifecycle in the Android manifest, through the android:name attribute of the application element:

       <application android:name=".Global" ...>

    At this point, usage is quite straightforward. Replace the first snippet of this article with:

    Global.getApplicationComponent();
    

    This achieves real singletons in Dagger 2.

    Categories: Java Tags: android, dagger, singleton
  • Running your domain on HTTPS for free

    This blog has been running on HTTP for a long time, as there is no transaction taking place, so no security is needed (I use SFTP for file transfer). However, since Google’s latest search algorithm change, I’ve noticed a sharp decrease in the number of monthly visits, from more than 20k to around 13k.

    While my goal has never been to have the highest number of visits, it’s still good feedback for me (as well as a nice warm feeling). As turning on HTTPS didn’t seem like a big deal, I thought “Why not?” and asked my hosting provider about it. The answer was that I had to purchase both a fixed IP and the Rapid SSL certificate acquired through them. As the former is a limitation of their infrastructure and the latter simply a Domain Validated certificate sold with a comfortable margin, I thought I could do without. But first, I wanted to try on a simpler site. The good thing is I have such a site already available, morevaadin.com.

    morevaadin.com is a static site generated offline with Jekyll and themed with Jekyll Bootstrap. The advantages of such an offline process are that the site is served very fast to users and that the security risks are very low (no SQL injection, etc.).

    After a quick poll among my colleagues, I found out that Github was the most suitable for my use case. Github provides Github Pages: basically, you push Markdown documents to Github, and not only does it generate the corresponding HTML for you, it also makes it available online. My previous publishing process was to generate the pages on my laptop with Jekyll and upload them through SFTP, but Github Pages does that for me; I just have to push the Jekyll-flavored Markdown pages. Plus, Github is available over HTTPS out of the box.

    Github Pages provides two kinds of sites: user/organization sites and project sites. There are a couple of differences, but I chose to create the morevaadin Github organization to neatly organize my repository. That means that everything I push to the master branch of a repository named morevaadin.github.io will end up exposed on the https://morevaadin.github.io domain, with HTTPS as icing on the cake (and the reason I started the whole thing).

    Yet, I was not finished, as the remaining issue was to set my own domain on top of that. The first move was to choose a provider that would direct the traffic to Github while providing an SSL certificate at low cost: I found Cloudflare, which does just that for… $0. Since I didn’t know about that business, I was a little wary at first, but it seems I’m not the first non-paying customer and I’m happy so far. Once registered, you have to point your domain to their name servers: art.ns.cloudflare.com and val.ns.cloudflare.com.

    The second step takes care of the routing by adding a DNS record, and depends on your custom domain. If you’re using an apex domain (i.e. morevaadin.com), then you have to add two A records named after your domain that point to the Github Pages IPs, 192.30.252.153 and 192.30.252.154. If you’re using a subdomain (i.e. www.morevaadin.com), then you have to add a single CNAME record named after your subdomain that points to your Github Pages site (i.e. morevaadin.github.io).

    The next step is to enable HTTPS: in front of your DNS record, click on the cloud so that it becomes orange. This means Cloudflare will handle the request. Conversely, a grey cloud means Cloudflare will just do the routing. On the Crypto tab, enable SSL (with SPDY) by choosing Flexible. That’s it for Cloudflare.

    Now you have to tell Github Pages that it will serve its content for another domain. At the root of your project, create a file named CNAME that contains your custom domain (but no protocol). Check this example. You also have to tell Jekyll which domain will be used, in order for absolute link generation to work. This is achieved by setting the BASE_PATH to your URL (this time with the protocol) in your _config.yml. Check what I did.
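
    For my setup, the two files would look something like this (BASE_PATH being the key Jekyll Bootstrap uses):

    CNAME:

    morevaadin.com

    _config.yml (excerpt):

    BASE_PATH: https://morevaadin.com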

    As an added bonus, I added the following:

    • Force HTTPS traffic (redirect everything to HTTPS)
    • Minify JavaScript, CSS and HTML

    This should end your journey to free HTTPS. Basically, it just boils down to letting Cloudflare act as a proxy between users and your site to add its SSL magic. Note that if you’re interested in security, it doesn’t solve everything, as the path between Cloudflare and your content is not secured within this architecture.

    Now that I know the ropes, I have several options available:

    • Just keep my current hosting provider and put Cloudflare in front
    • Choose another provider and put Cloudflare in front. Any of you know about a decent and cheap Wordpress provider besides wordpress.com?
    • Take on my current provider's offer
    • Migrate from Wordpress to Jekyll and put Cloudflare in front

    I will think about these this week…

  • Creative Rx usage in testing setup

    I don’t know whether events became part of software engineering with Graphical User Interface interactions, but they are for sure a very convenient way of modeling them. With more and more interconnected systems, asynchronous event management has become an important issue to tackle. With Functional Programming also on the rise, this gave birth to libraries such as RxJava. However, modeling a problem as the handling of a stream of events shouldn’t be restricted to system event handling. It can also be used in testing, in many different ways.

    One common use-case in testing setup is to launch a program, for example an external dependency such as a mock server. In this case, we need to wait until the program has been successfully launched. On the contrary, the test should stop as soon as the external program launch fails. If the program has a Java API, that’s easy. However, this is rarely the case, and more basic APIs are generally used, such as ProcessBuilder or Runtime.getRuntime().exec():

    ProcessBuilder builder;
    Process process; // the launched process, destroyed on teardown

    @BeforeMethod
    protected void setUp() throws IOException {
        builder = new ProcessBuilder().command("script.sh");
        process = builder.start();
    }

    @AfterMethod
    protected void tearDown() {
        process.destroy();
    }
    

    The traditional way to handle this problem was to put a big Thread.sleep() just after the launch. Not only was it system-dependent, as the launch time changed from system to system, it didn’t tackle the case where the launch failed. In this latter case, precious computing time as well as manual relaunch time were lost. Better solutions exist, but they involve a lot of lines of code as well as some (or a high) degree of complexity. Wouldn’t it be nice if we had a simple and reliable way to start the program and, depending on the output, either continue the setup or fail the test? Rx to the rescue!

    The first step is to create an Observable around the launched process’s input stream:

    Observable<String> observable = Observable.create(subscriber -> {
    
        InputStream stream = process.getInputStream();
    
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(stream))) {
            String line;
            while ((line = reader.readLine()) != null) {
                subscriber.onNext(line);
            }
            subscriber.onCompleted();
    
        } catch (Exception e) {
            subscriber.onError(e);
        }
    });
    

    In the above snippet:

    • Each time the process writes on the output, a next event is sent
    • When there is no more output, a complete event is sent
    • Finally, if an exception happens, it's an error event

    By Rx definition, either of the last two events marks the end of the sequence.

    Once the observable has been created, it just needs to be observed for events. A simple script that emits a single event can be listened to with the following snippet:

    BlockingObservable<String> blocking = observable.toBlocking();
    
    blocking.first();
    

    The thing of interest here is the wrapping of the Observable instance in a BlockingObservable. While the former can be combined with others, the latter adds methods to manage events. At this point, the first() method will listen for the first (and single) event.

    For a more complex script that emits a random number of regular events terminated by a single end event, a code snippet could be:

    BlockingObservable<String> blocking = observable
        .filter("Done"::equals)
        .toBlocking();
    
    blocking.first();
    

    In this case, whatever the number of regular events, the filter() method provides a way to listen only to the event we are interested in.

    The previous cases do not reflect reality, however. Most of the time, setup scripts should start before, and run in parallel to, the tests, i.e. the end event is never sent - at least not until after the tests have finished. In this case, there is some threading involved. Rx lets us handle that quite easily:

    BlockingObservable<String> blocking = observable
        .subscribeOn(Schedulers.newThread())
        .take(5)
        .toBlocking();
    
    blocking.first();
    

    There is a single difference here: the subscriber will listen on a new thread thanks to the subscribeOn() method. Alternatively, events could have been emitted on another thread with the observeOn() method. Note I replaced the filter() method with take() to pretend we’re interested only in the first 5 events.

    At this point, the test setup is finished. Happy testing!

    Sources for this article can be found here in IDEA/Maven format.

    Categories: Java Tags: rx, test, thread
  • Compile-time dependency injection tradeoffs in Android

    As a backend software developer, I’m used to Spring as my favorite Dependency Injection engine. Alternatives include Java EE’s CDI, which achieves the same result - in a different way. However, both inject at runtime: that means there’s a definite performance cost to pay at the start of the application, the time it takes for all dependencies to be fulfilled. On an application server, where the application lifespan is measured in days (if not weeks), this start-time overhead is acceptable. It is even fully transparent if the server is but a node in a large cluster.

    As an Android user, I’m not happy when I start an app and it lags for several seconds before opening. It would be very bad in terms of user-friendliness if we were to add several more seconds to that time. Even worse, the memory consumption of a DI engine would be a disaster. That’s the reason why Square developed a compile-time dependency injection mechanism called Dagger. Note that Dagger 2 is currently under development by Google. Before going further, I must admit that the documentation of Dagger 2 is succinct - at best. But it’s a great opportunity for another blog post :-)

    Dagger 2 works with an annotation processor: when compiling, it will analyze your annotated code and produce the wiring code between your components. The good thing is that this code is pretty similar to what you would write yourself if you were to do it manually; there’s no secret black magic (as opposed to runtime DI and its proxies). The following code displays a class to be injected:

    public class TimeSetListener implements TimePickerDialog.OnTimeSetListener {
    
        private final EventBus eventBus;
    
        public TimeSetListener(EventBus eventBus) {
            this.eventBus = eventBus;
        }
    
        @Override
        public void onTimeSet(TimePicker view, int hourOfDay, int minute) {
            eventBus.post(new TimeSetEvent(hourOfDay, minute));
        }
    }
    

    Notice the code is completely independent of Dagger in every way. One cannot infer how it will be injected in the end. The interesting part is how to use Dagger to inject the required eventBus dependency. There are two steps:

    1. Get a reference to an eventBus instance in the context
    2. Call the constructor with the relevant parameter

    The wiring configuration itself is done in a so-called module:

    @Module
    public class ApplicationModule {

        @Provides
        @Singleton
        public TimeSetListener timeSetListener(EventBus eventBus) {
            return new TimeSetListener(eventBus);
        }

        ...
    }
    

    Notice that the EventBus is passed as a parameter to the method, and it’s up to the context to provide it. Also, the scope is explicitly @Singleton.

    The binding to the factory occurs in a component, which references the required module (or more):

    @Component(modules = ApplicationModule.class)
    @Singleton
    public interface ApplicationComponent {
    
        TimeSetListener timeListener();
    
        ...
    }
    

    It’s quite straightforward… until one notices that some - if not most - objects in Android have a lifecycle managed by Android itself, with no call to our injection-friendly constructor. Activities are such objects: they are instantiated and launched by the framework. Only through dedicated lifecycle methods like onCreate() can we hook our code into the object. This use case looks much worse, as field injection is mandatory. Worse, it is also required to call Dagger explicitly: in this case, it acts as a plain factory.

    public class EditTaskActivity extends AbstractTaskActivity {

        @Inject TimeSetListener timeListener;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState); // required by the Activity lifecycle
            DaggerApplicationComponent.create().inject(this);
        }

        ...
    }
    

    For the first time we see a coupling to Dagger, but it’s a big one. What is DaggerApplicationComponent? An implementation of the former ApplicationComponent, as well as a factory providing instances of it. And since our component doesn’t declare an inject() method yet, we have to add it to the interface:

    @Component(modules = ApplicationModule.class)
    @Singleton
    public interface ApplicationComponent {

        TimeSetListener timeListener();

        void inject(EditTaskActivity editTaskActivity);

        ...
    }
    

    For the record, the generated class looks like:

    @Generated("dagger.internal.codegen.ComponentProcessor")
    public final class DaggerApplicationComponent implements ApplicationComponent {
    
      private Provider<TimeSetListener> timeSetListenerProvider;
      private MembersInjector<EditTaskActivity> editTaskActivityMembersInjector;
    
      ...
    
      private DaggerApplicationComponent(Builder builder) {  
        assert builder != null;
        initialize(builder);
      }
    
      public static Builder builder() {  
        return new Builder();
      }
    
      public static ApplicationComponent create() {  
        return builder().build();
      }
    
      private void initialize(final Builder builder) {  
        this.timeSetListenerProvider = ScopedProvider.create(ApplicationModule_TimeSetListenerFactory
            .create(builder.applicationModule, eventBusProvider));
        this.editTaskActivityMembersInjector = TimeSetListener_MembersInjector
            .create((MembersInjector) MembersInjectors.noOp(), timeSetListenerProvider);
      }
    
      @Override
      public EventBus eventBus() {  
        return eventBusProvider.get();
      }
    
      @Override
      public void inject(EditTaskActivity editTaskActivity) {  
        editTaskActivityMembersInjector.injectMembers(editTaskActivity);
      }
    
      public static final class Builder {
    
        private ApplicationModule applicationModule;
    
        private Builder() { }
    
        public ApplicationComponent build() {  
          if (applicationModule == null) {
            this.applicationModule = new ApplicationModule();
          }
          return new DaggerApplicationComponent(this);
        }
    
        public Builder applicationModule(ApplicationModule applicationModule) {  
          if (applicationModule == null) {
            throw new NullPointerException("applicationModule");
          }
    
          this.applicationModule = applicationModule;
          return this;
        }
      }
    }
    

    There’s no such thing as a free lunch. Despite compile-time DI being very appealing at first glance, it becomes much less so when used on objects whose lifecycle is not managed by our code. The downsides become apparent: coupling to the DI framework and, more importantly, an increased difficulty to unit test the class. However, considering Android’s constraints, this might be the best that can be achieved.

    Categories: Java Tags: android, dagger, dependency injection
  • More DevOps for Spring Boot

    I think Spring Boot brings something new to the table, especially concerning DevOps - and I’ve already written a post about it. However, there’s more to it than metrics and healthchecks.

    In another of my previous posts, I described how to provide versioning information for Maven-built applications. This article will describe how that setup becomes unnecessary when using Spring Boot.

    As a reminder, just adding the spring-boot-starter-actuator dependency in the POM enables many endpoints, among them:

    • /metrics for monitoring the application
    • /health to check that the application can deliver the expected service
    • /beans lists all Spring beans in the context
    • /configprops lists all configuration properties, taking the running profile(s) into account (if any)

    Among those, one is of specific interest: /info. By default, it displays… nothing - or more precisely, the string representation of an empty JSON object.

    However, any property prefixed with info. set in the application.properties file (or one of its profile flavors) will find its way into the page, stripped of the prefix. For example:

    Property file:

    info.application.name=My Demo App

    Output:

    {
      "application" : {
        "name" : "My Demo App"
      }
    }

    Setting static info is sure nice, but our objective is to get the version of the application within Spring Boot. application.properties files are automatically filtered by Spring Boot during the process-resources build phase. Any property in the POM can be used: it just needs to be set between @ characters. For example:

    Property file:

    info.application.version=@project.version@

    Output:

    {
      "application" : {
        "version" : "0.0.1-SNAPSHOT"
      }
    }

    Note that the Spring Boot Maven plugin will remove the generated resources, and thus the application will use the unfiltered properties file from the sources. In order to keep (and use) the generated resources instead, configure the plugin in the POM like this:

    <build>
      <plugins>
        <plugin>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-maven-plugin</artifactId>
          <configuration>
            <addResources>false</addResources>
          </configuration>
        </plugin>
      </plugins>
    </build>
    

    At this point, we have the equivalent of the previous article, but we can go even further. The maven-git-commit-id-plugin will generate a git.properties file stuffed with all possible git-related information. The following snippet is an example of the produced file:

    #Generated by Git-Commit-Id-Plugin
    #Fri Jul 10 23:36:40 CEST 2015
    git.tags=
    git.commit.id.abbrev=bf4afbf
    git.build.user.email=[email protected]
    git.commit.message.full=Initial commit\n
    git.commit.id=bf4afbf167d51909bd984c35ad5b85a66b9c44b9
    git.commit.id.describe-short=bf4afbf
    git.commit.message.short=Initial commit
    git.commit.user.name=Nicolas Frankel
    git.build.user.name=Nicolas Frankel
    git.commit.id.describe=bf4afbf
    git.commit.user.email=[email protected]
    git.branch=master
    git.commit.time=2015-07-10T23\:34\:46+0200
    git.build.time=2015-07-10T23\:36\:40+0200
    git.remote.origin.url=Unknown
    

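    For reference, producing this file only requires declaring the plugin in the POM - a minimal sketch with the coordinates documented by the plugin at the time (version omitted):

    <plugin>
      <groupId>pl.project13.maven</groupId>
      <artifactId>git-commit-id-plugin</artifactId>
      <executions>
        <execution>
          <goals>
            <goal>revision</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
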
    From all of this data, only the following are used in the endpoint:

    Keys:

    git.branch
    git.commit.id
    git.commit.time

    Output:

    {
      "git" : {
        "branch" : "master",
        "commit" : {
          "id" : "bf4afbf",
          "time" : "2015-07-10T23:34:46+0200"
        }
      }
    }

    Since the path and the formatting are consistent, you can devise a cronjob to parse all your applications and generate a wiki page with all this information, per server/environment. No more having to ssh into the server and dig into the filesystem to uncover the version.

    Thus, the /info endpoint can be a very powerful asset in your organization, whether you’re a DevOps yourself or only willing to help your Ops. More detailed information can be found in the Spring Boot documentation.

    Categories: Development Tags: devops, spring boot