• Fixing my own Spring Boot starter demo

    Spring Boot logo

    For a year or so, I've been trying to show the developer community that there's no magic involved in Spring Boot, just straightforward software engineering, through blog posts and conference talks. At jDays, Stéphane Nicoll was nice enough to attend my talk and pointed out an issue in the code. I didn't fix it then, and it came back to bite me last week during a Pivotal webinar. Since a lesson learned is only as useful as its audience, I'd like to share my mistake with the world, or at least with you, dear readers.

    The context is that of a Spring Boot starter for the XStream library:

    XStream is a simple library to serialize objects to XML and back again.

    1. (De)serialization capabilities are implemented as instance methods of the XStream class
    2. Customization of the (de)serialization process is implemented through converters registered in the XStream instance

    The goal of the starter is to ease the usage of the library. For example, it creates an XStream instance in the context if there’s none already:

    @Configuration
    public class XStreamAutoConfiguration {
    
        @Bean
        @ConditionalOnMissingBean(XStream.class)
        public XStream xStream() {
            return new XStream();
        }
    }
    

    Also, it will collect all custom converters from the context and register them in the existing instance:

    @Configuration
    public class XStreamAutoConfiguration {
    
        @Bean
        @ConditionalOnMissingBean(XStream.class)
        public XStream xStream() {
            return new XStream();
        }
    
        @Bean
        public Collection<Converter> converters(XStream xstream, Collection<Converter> converters) {
            converters.forEach(xstream::registerConverter);
            return converters;
        }
    }
    

    The previous snippet achieves the objective, as Spring obediently injects both the xstream and converters dependency beans into the method. Yet, a problem appeared during the webinar demo: can you spot it?

    If you answered that an extra converters bean gets registered in the context, you are right. The issue occurs when client code registers its own converters bean of a different type, i.e. not a Collection. Hence, the registration of converters must not happen in @Bean methods, but only in a @PostConstruct one.

    • Option 1

      The first option is to convert the @Bean to @PostConstruct.

      @Configuration
      public class XStreamAutoConfiguration {
      
        @Bean
        @ConditionalOnMissingBean(XStream.class)
        public XStream xStream() {
            return new XStream();
        }
      
        @PostConstruct
        public void register(XStream xstream, Collection<Converter> converters) {
            converters.forEach(xstream::registerConverter);
        }
      }
      

      Unfortunately, @PostConstruct doesn’t allow methods to have arguments: the code compiles, but the context fails to start.

    • Option 2

      The alternative is to inject both beans into attributes of the configuration class, and use them in the @PostConstruct-annotated method.

      @Configuration
      public class XStreamAutoConfiguration {
      
        @Autowired
        private XStream xstream;
      
        @Autowired
        private Collection<Converter> converters;
      
        @Bean
        @ConditionalOnMissingBean(XStream.class)
        public XStream xStream() {
            return new XStream();
        }
      
        @PostConstruct
        public void register() {
            converters.forEach(xstream::registerConverter);
        }
      }    
      

    This compiles fine, but Spring enters a cycle trying both to inject the XStream instance into the configuration and to create it as a bean at the same time.

    • Option 3

      The final (and only valid) option is to learn from the master and use another configuration class - a nested one. Looking at the Spring Boot source code, this is clearly an established pattern.

      The final code looks like this:

      @Configuration
      public class XStreamAutoConfiguration {
      
        @Bean
        @ConditionalOnMissingBean(XStream.class)
        public XStream xStream() {
            return new XStream();
        }
      
        @Configuration
        public static class XStreamConverterAutoConfiguration {
      
            @Autowired
            private XStream xstream;
      
            @Autowired
            private Collection<Converter> converters;
      
            @PostConstruct
            public void registerConverters() {
                converters.forEach(converter -> xstream.registerConverter(converter));
            }
        }
      }
      

    The fixed code is available on Github. And grab the webinar while it’s hot.

    Categories: Java Tags: Spring Boot, design
  • Custom collectors in Java 8

    Among the many features available in Java 8, streams seem to be one of the biggest game changers regarding the way to write Java code. Usage is quite straightforward: the stream is created from a collection (or from a static method of a utility class), it's processed using one or more of the available stream methods, then collected back into a collection or an object. One generally uses one of the static methods that the Collectors utility class offers:

    • Collectors.toList()
    • Collectors.toSet()
    • Collectors.toMap()
    • etc.

    Sometimes, however, there’s a need for more. The goal of this post is to describe how to achieve that.

    The Collector interface

    Every one of the above static methods returns a Collector. But what is a Collector? In a simplified view, it's built from the following functional interfaces, described here by their Javadocs:

    • Supplier - Represents a supplier of results. There is no requirement that a new or distinct result be returned each time the supplier is invoked.
    • BiConsumer - Represents an operation that accepts two input arguments and returns no result. Unlike most other functional interfaces, BiConsumer is expected to operate via side-effects.
    • Function - Represents a function that accepts one argument and produces a result.
    • BinaryOperator - Represents an operation upon two operands of the same type, producing a result of the same type as the operands. This is a specialization of BiFunction for the case where the operands and the result are all of the same type.

    The documentation of each dependent interface doesn’t tell much, apart from the obvious. Looking at the Collector documentation yields a little more:

    A Collector is specified by four functions that work together to accumulate entries into a mutable result container, and optionally perform a final transform on the result. They are:

    • creation of a new result container (supplier())
    • incorporating a new data element into a result container (accumulator())
    • combining two result containers into one (combiner())
    • performing an optional final transform on the container (finisher())
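
    To make those four functions concrete, here's a minimal sketch (not the actual JDK code) that wires them together through the Collector.of() factory method, concatenating strings and applying a final transform:

    // requires java.util.stream.Collector and java.util.stream.Stream
    Collector<String, StringBuilder, String> concatenating = Collector.of(
            StringBuilder::new,       // supplier: creates a new result container
            StringBuilder::append,    // accumulator: incorporates an element into the container
            StringBuilder::append,    // combiner: merges two containers (parallel streams only)
            StringBuilder::toString); // finisher: performs the final transform on the container

    String result = Stream.of("a", "b", "c").collect(concatenating); // "abc"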

    The Stream.collect() method

    The real insight comes from the Stream.collect() method documentation:

    Performs a mutable reduction operation on the elements of this stream. A mutable reduction is one in which the reduced value is a mutable result container, such as an ArrayList, and elements are incorporated by updating the state of the result rather than by replacing the result. This produces a result equivalent to:

        R result = supplier.get();
        for (T element : this stream)
            accumulator.accept(result, element);
        return result;
    

    Note the combiner() method is not used here - it is only used within parallel streams and, for simplification purposes, it will be set aside for the rest of this post.

    Examples

    Let’s have some examples to demo the development of custom collectors.

    Single-value example

    To start, let’s compute the size of a collection using a collector. Though not very useful, it’s a good introduction. Here are the requirements for the interfaces (the combiner being set aside, as explained above); a sketch follows the list:

    1. Since the end result should be an integer, the supplier should probably also return some kind of integer. The problem is that neither int nor Integer are mutable, and this is required for the next step. A good candidate type would be MutableInt from Apache Commons Lang.
    2. The accumulator should only increment the MutableInt, whatever the element in the collection is.
    3. Finally, the finisher just returns the int value wrapped by the MutableInt.
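
    A sketch of what such a collector could look like, built with the Collector.of() factory method and assuming Apache Commons Lang's MutableInt is on the classpath:

    // requires java.util.stream.Collector, java.util.stream.Stream and org.apache.commons.lang3.mutable.MutableInt
    Collector<Object, MutableInt, Integer> counting = Collector.of(
            MutableInt::new,                                                // supplier: a new mutable counter, starting at 0
            (counter, element) -> counter.increment(),                      // accumulator: increments for every element
            (left, right) -> { left.add(right.intValue()); return left; },  // combiner (parallel streams only)
            MutableInt::intValue);                                          // finisher: unwraps the plain int value

    int size = Stream.of("one", "two", "three").collect(counting); // 3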

    Source is available on Github.

    Grouping example

    The second example shall be more useful. From a collection of strings, let’s create an Apache Commons Collections multi-valued map:

    • The key should be a char
    • The corresponding values should be the strings that start with this char
    1. The supplier is pretty straightforward: it returns a MultiValuedMap instance
    2. The accumulator just calls the put method of the multi-valued map, using the above “specs”
    3. The finisher returns the map itself (see the sketch below)
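
    A possible sketch, assuming Apache Commons Collections' ArrayListValuedHashMap as the MultiValuedMap implementation:

    // requires java.util.stream.*, org.apache.commons.collections4.MultiValuedMap
    // and org.apache.commons.collections4.multimap.ArrayListValuedHashMap
    Collector<String, MultiValuedMap<Character, String>, MultiValuedMap<Character, String>> byFirstChar =
            Collector.of(
                    ArrayListValuedHashMap::new,                             // supplier: an empty multi-valued map
                    (map, string) -> map.put(string.charAt(0), string),      // accumulator: the key is the first character
                    (left, right) -> { left.putAll(right); return left; });  // combiner (parallel streams only)

    MultiValuedMap<Character, String> grouped =
            Stream.of("one", "two", "three").collect(byFirstChar); // {o=[one], t=[two, three]}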

    Source is available on Github.

    Partitioning example

    The third example matches a use-case I encountered this week: given a collection and a predicate, dispatch elements that match into a collection and elements that do not into another.

    1. As the supplier returns a single instance, a new data structure, e.g. DoubleList, should first be designed
    2. The accumulator must somehow be initialized with the predicate, since the signature of the accept() contract method is fixed and cannot take it as an extra parameter
    3. As for the above example, the finisher should return the DoubleList itself (see the sketch below)
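
    A hypothetical sketch of both the data structure and its collector - the DoubleList name and its methods are only illustrative, not the actual code from the repository:

    // requires java.util.*, java.util.function.Predicate and java.util.stream.Collector
    public class DoubleList<T> {

        private final Predicate<T> predicate;
        private final List<T> matching = new ArrayList<>();
        private final List<T> notMatching = new ArrayList<>();

        public DoubleList(Predicate<T> predicate) {
            this.predicate = predicate;
        }

        // dispatches an element into one of the two internal lists
        public void add(T element) {
            if (predicate.test(element)) {
                matching.add(element);
            } else {
                notMatching.add(element);
            }
        }

        // merges another DoubleList into this one (used by the combiner)
        public void addAll(DoubleList<T> other) {
            matching.addAll(other.matching);
            notMatching.addAll(other.notMatching);
        }

        public List<T> getMatching() { return matching; }
        public List<T> getNotMatching() { return notMatching; }

        // the collector itself only delegates to the structure above
        public static <T> Collector<T, DoubleList<T>, DoubleList<T>> partitioning(Predicate<T> predicate) {
            return Collector.of(
                    () -> new DoubleList<>(predicate),                        // supplier: holds the predicate
                    DoubleList::add,                                          // accumulator
                    (left, right) -> { left.addAll(right); return left; });   // combiner (parallel streams only)
        }
    }

    Client code then boils down to something like list.stream().collect(DoubleList.partitioning(s -> s.startsWith("a"))).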

    Source is available on Github.

    Final consideration

    Developing a custom collector is not that hard, provided one understands the basic concepts behind it.

    The real issue behind collectors is the whole Stream API: streams need to be created first and only then collected. Newer languages designed with the Functional Programming paradigm from the start - such as Scala or Kotlin - provide collections with such capabilities directly baked in.

    For example, to filter out something from a map in Java:

    map.entrySet().stream()
            .filter( entry -> entry.getKey().length() == 4)
            .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    

    That would translate as the following in Kotlin:

    map.entries.filter { it.key.length == 4 }
    
    Categories: Java Tags: java 8, lambda, collector
  • Kotlin for front-end developers

    Kotlin logo

    I assume that readers are already familiar with the Kotlin language. As advertised on its site:

    Statically typed programming language for the JVM, Android and the browser 100% interoperable with Java™

    — Headline from the Kotlin website

    Kotlin first penetrated the Android ecosystem and has seen a huge adoption there. There’s also a growing trend on the JVM, via Spring Boot. Since its latest 1.1 release, Kotlin also offers a production-grade Kotlin to JavaScript transpiler. This post is dedicated to the latter.

    IMHO, the biggest issue regarding the process of transpiling Kotlin to JavaScript is that the documentation is only aimed at server-side build tools such as Maven and Gradle - or worse, the IntelliJ IDEA IDE. Those work very well for backend-driven projects or prototypes, but are not an incentive for front-end developers whose bread and butter are npm, Grunt/Gulp, yarn, etc.

    Let’s fix that by creating a simple Google Maps page using Kotlin.

    Managing the build file

    This section assumes:

    • The Grunt command-line has already been installed, via npm install -g grunt-cli
    • The kotlin package has been installed and the kotlin compiler is on the path. I personally use Homebrew - brew install kotlin

    The build file looks like the following:

    Gruntfile.js
    module.exports = function (grunt) {
    
        grunt.initConfig({
            clean: 'dist/**',
            exec: {
                cmd: 'kotlinc-js src -output dist/script.js' (1)
            },
            copy: { (2)
                files: {
                    expand: true,
                    flatten: true,
                    src: ['src/**/*.html', 'src/**/*.css'],
                    dest: 'dist'
                }
            }
        });
    
        grunt.loadNpmTasks('grunt-contrib-clean');
        grunt.loadNpmTasks('grunt-contrib-copy');
        grunt.loadNpmTasks('grunt-exec');
    
        grunt.registerTask('default', ['exec', 'copy']);
    };
    1 Transpiles all Kotlin files found in the src folder to a single JavaScript file
    2 Copies CSS and HTML files to the dist folder

    Bridging Google Maps API in Kotlin

    The following snippet creates a Map object using Google Maps API which will be displayed on a specific div element:

    var element = document.getElementById('map');
    new google.maps.Map(element, {
    	center: { lat: 46.2050836, lng: 6.1090691 },
        zoom: 8
    });

    Like in TypeScript, there must be a thin Kotlin adapter to bridge the original JavaScript API. Unlike in TypeScript, there’s no existing repository of such adapters. The following is a naive first draft:

    external class Map(element: Element?, options: Json?)
    The external keyword is used to tell the transpiler the function body is implemented in another file - or library in that case.

    The first issue comes from a name collision: Map is an existing Kotlin class and is imported by default. Fortunately, the @JsName annotation allows translating the name at transpile time.

    gmaps.kt
    @JsName("Map")
    external class GoogleMap(element: Element?, options: Json?)

    The second issue occurs because the original API is in a namespace: the object is not Map, but google.maps.Map. The previous annotation doesn’t allow for dots, but a combination of other annotations can do the trick:

    /google/maps/gmaps.kt
    @JsModule("google")
    @JsQualifier("maps")
    @JsName("Map")
    external class GoogleMap(element: Element?, options: Json?)

    This still doesn’t work - it doesn’t even compile, as @JsQualifier cannot be applied to a class. The final working code is:

    /google/maps/gmaps.kt
    @file:JsModule("google")
    @file:JsQualifier("maps")
    @file:JsNonModule
    
    package google.maps
    
    @JsName("Map")
    external class GoogleMap(element: Element?, options: Json?)

    Calling Google Maps

    Calling the above code is quite straightforward, but note the second parameter of the constructor is of type Json. That is quite far from the strongly-typed code that was the goal of using Kotlin in the first place. To address that, let’s create real types:

    internal class MapOptions(val center: LatitudeLongitude, val zoom: Byte)
    internal class LatitudeLongitude(val latitude: Double, val longitude: Double)

    And with Kotlin’s extension functions - and the out-of-the-box json() function - let’s make them able to serialize themselves to JSON:

    internal fun LatitudeLongitude.toJson() = json("lat" to latitude, "lng" to longitude)
    internal fun MapOptions.toJson() = json("center" to center.toJson(), "zoom" to zoom)

    This makes it possible to write the following:

    fun initMap() {
        val div = document.getElementById("map")
        val latLng = LatitudeLongitude(latitude = -34.397, longitude = 150.644)
        val options = MapOptions(center = latLng, zoom = 8)
        GoogleMap(element = div, options = options.toJson())
    }

    Refinements

    We could stop at this point, with the feeling of having achieved something. But Kotlin allows much more.

    The low-hanging fruit would be to move the JSON serialization into the constructor of a dedicated map subclass (which requires the external GoogleMap class to be declared open):

    internal class KotlinGoogleMap(element: Element?, options: MapOptions) : GoogleMap(element, options.toJson())
    
    KotlinGoogleMap(element = div, options = options)

    Further refinements

    The "domain" is quite suited to be written using a DSL. Let’s update the "API":

    external open class GoogleMap(element: Element?) {
        fun setOptions(options: Json)
    }
    
    internal class MapOptions {
        lateinit var center: LatitudeLongitude
        var zoom: Byte = 1
        fun center(init: LatitudeLongitude.() -> Unit) {
            center = LatitudeLongitude().apply(init)
        }
        fun toJson() = json("center" to center.toJson(), "zoom" to zoom)
    }
    
    internal class LatitudeLongitude() {
        var latitude: Double = 0.0
        var longitude: Double = 0.0
        fun toJson() = json("lat" to latitude, "lng" to longitude)
    }
    
    internal class KotlinGoogleMap(element: Element?) : GoogleMap(element) {
        fun options(init: MapOptions.() -> Unit) {
            val options = MapOptions().apply(init)
            setOptions(options = options.toJson())
        }
    }
    
    internal fun kotlinGoogleMap(element: Element?, init: KotlinGoogleMap.() -> Unit) = KotlinGoogleMap(element).apply(init)

    The client code now can be written as:

    fun initMap() {
        val div = document.getElementById("map")
        kotlinGoogleMap(div) {
            options {
                zoom = 6
                center {
                    latitude = 46.2050836
                    longitude = 6.1090691
                }
            }
        }
    }

    Conclusion

    Though the documentation is rather terse, it’s possible to use only the JavaScript ecosystem to transpile Kotlin code to JavaScript. Granted, bridging existing libraries is a chore, but it’s a one-time effort that will shrink as the community starts sharing its adapters. On the other hand, the same features that make Kotlin a great language to use server-side - e.g. writing a DSL - also benefit the front-end.

    Sources for this article can be found on Github.

    Categories: Development Tags: JavaScript, Kotlin, front-end
  • Common code in Spring MVC, where to put it?

    Spring Boot logo

    During my journey coding an actuator for a non-Spring Boot application, I came upon an interesting problem regarding where to actually put a snippet of common code. This post tries to list all available options, and their respective pros and cons in a specific context.

    As a concrete example, let’s use the REST endpoint returning the map of all JVM properties, accessible through the /jvmprops sub-context. Furthermore, I wanted to offer the option to search not only for a single property, e.g. /jvmprops/java.vm.vendor, but also for a subset of properties, e.g. /jvmprops/java.vm.*.

    The current situation

    The code is designed around nothing but boring guidelines for a Spring application. The upper layer consists of controllers. They are annotated with @RestController and provide REST endpoints made available as @RequestMapping-annotated methods. In turn, those methods call the second layer implemented as services.

    As seen above, the filter pattern itself is the last path segment. It’s mapped to a method parameter via the @PathVariable annotation.

    @RestController class JvmPropsController(private val service: JvmPropsService) {
      @RequestMapping(path = arrayOf("/jvmprops/{filter}"), method = arrayOf(GET))
      fun readJvmProps(@PathVariable filter: String): Map<String, *> = service.getJvmProps()
    }
    

    To effectively implement filtering, the path segment allows star characters. In Java, however, string matching is achieved via regular expressions. It’s then mandatory to “translate” the simple calling pattern into a full-fledged regexp. Regarding the above example, not only does the dot character need to be escaped - from . to \\. - but the star character needs to be translated accordingly - from * to .*:

    val regex = filter.replace(".", "\\.").replace("*", ".*")
    

    Then, the associated service returns the filtered map, which is in turn returned by the controller. Spring Boot and Jackson take care of JSON serialization.

    Straightforward alternatives

    This is all fine and nice, until additional map-returning endpoints are required (for example, to get environment variables), and the above snippet ends up being copy-pasted into each of them.

    There surely must be a better solution, so where should this code be factored?

    In a controller parent class

    The easiest hack is to create a parent class for all controllers, put the code there and call it explicitly.

    abstract class ArtificialController() {
        fun toRegex(filter: String) = filter.replace(".", "\\.").replace("*", ".*")
    }
    
    @RestController class JvmProps(private val service: JvmPropsService): ArtificialController() {
      @RequestMapping(path = arrayOf("/jvmprops/{filter}"), method = arrayOf(GET))
      fun readJvmProps(@PathVariable filter: String): Map<String, *> {
        val regex = toRegex(filter)
        return service.getJvmProps(regex)
      }
    }
    

    This approach has three main disadvantages:

    1. It creates an artificial parent class just for the sake of sharing common code.
    2. It’s necessary for other controllers to inherit from this parent class.
    3. It requires an explicit call, putting the responsibility of the transformation in the client code. Chances are high that no developer but the one who created the method will ever use it.

    In a service parent class

    Instead of setting the code in a shared method of the controller layer, it can be set in the service layer.

    The same disadvantages as above apply.

    In a third-party dependency

    Instead of an artificial class hierarchy, let’s introduce an unrelated dependency class. This translates into the following code.

    class Regexer {
      fun toRegex(filter: String) = filter.replace(".", "\\.").replace("*", ".*")
    }
    
    @RestController class JvmProps(private val service: JvmPropsService,
                                   private val regexer: Regexer) {
      @RequestMapping(path = arrayOf("/jvmprops/{filter}"), method = arrayOf(GET))
      fun readJvmProps(@PathVariable filter: String): Map<String, *> {
        val regex = regexer.toRegex(filter)
        return service.getJvmProps(regex)
      }
    }
    

    While favoring composition over inheritance, this approach still leaves a big loophole: the client code is still required to call the shared code.

    In a Kotlin extension function

    If one is allowed to use alternate languages on the JVM, it’s possible to benefit from Kotlin’s extension functions:

    interface ArtificialController
    
    fun ArtificialController.toRegex(filter: String) = filter.replace(".", "\\.").replace("*", ".*")
    
    @RestController class JvmProps(private val service: JvmPropsService): ArtificialController {
      @RequestMapping(path = arrayOf("/jvmprops/{filter}"), method = arrayOf(GET))
      fun readJvmProps(@PathVariable filter: String): Map<String, *> {
        val regex = toRegex(filter)
        return service.getJvmProps(regex)
      }
    }
    

    Compared to putting the code in a parent controller, at least the code is localized to the file. But the same disadvantages still apply, so the gain is only marginal.

    More refined alternatives

    The refactorings described above work in every possible context. The following options apply specifically to (Spring Boot) web applications.

    They all follow the same approach: instead of explicitly calling the shared code, let’s somehow wrap controllers in a single component where it will be executed.

    In a servlet filter

    In a web application, code that needs to be executed before/after different controllers is bound to be placed in a servlet filter.

    With Spring MVC, this is achieved through a filter registration bean:

    @Bean
    fun filterBean() = FilterRegistrationBean().apply {
      urlPatterns = arrayListOf("/jvmprops/*")
      filter = object : Filter {
        override fun destroy() {}
        override fun init(config: FilterConfig) {}
        override fun doFilter(req: ServletRequest, resp: ServletResponse, chain: FilterChain) {
          val httpServletReq = req as HttpServletRequest
          chain.doFilter(httpServletReq, resp)
          val paths = httpServletReq.pathInfo.split("/")
          if (paths.size > 2) {
            val subpaths = paths.subList(2, paths.size)
            val filter = subpaths.joinToString("")
            val regex = filter.replace(".", "\\.")
                              .replace("*", ".*")
            // Change the JSON here...
          }
        }
      }
    }
    

    The good point about the above code is that it doesn’t require controllers to call the shared code explicitly. There’s a not-so-slight problem however: at this point, the map has already been serialized into JSON and written to the response. It’s mandatory to wrap the initial response in a response wrapper before proceeding with the filter chain, and then to process the JSON instead of an in-memory data structure.

    Not only is this way quite fragile, it has a huge impact on performance.

    In a Spring MVC interceptor

    Moving the above code from a filter to a Spring MVC interceptor unfortunately doesn’t improve anything.

    In an aspect

    The need to translate the string parameter and to filter the map is a typical cross-cutting concern. This is a textbook use-case for Aspect-Oriented Programming. Here’s what the code looks like:

    @Aspect class FilterAspect {
      @Around("execution(Map ch.frankel.actuator.controller.*.*(..))")
      fun filter(joinPoint: ProceedingJoinPoint): Map<String, *> {
        val map = joinPoint.proceed() as Map<String, *>
        val filter = joinPoint.args[0] as String
        val regex = filter.replace(".", "\\.").replace("*", ".*")
        return map.filter { it.key.matches(regex.toRegex()) }
      }
    }
    

    Choosing this option works in the intended way. Plus, the aspect will be applied automatically to all methods of all classes in the configured package that return a map.

    In a Spring MVC advice

    There’s a nice gem hidden in Spring MVC: a specialized advice being executed just after the controller returns but before the returned value is serialized in JSON format (thanks to @Dr4K4n for the hint).

    The class just needs to:

    1. Implement the ResponseBodyAdvice interface
    2. Be annotated with @ControllerAdvice to be scanned by Spring, and to control which package it will be applied to
    @ControllerAdvice("ch.frankel.actuator.controller")
    class TransformBodyAdvice(): ResponseBodyAdvice<Map<String, Any?>> {
    
      override fun supports(returnType: MethodParameter, converterType: Class<out HttpMessageConverter<*>>) =
      returnType.method.returnType == Map::class.java
    
      override fun beforeBodyWrite(map: Map<String, Any?>, methodParameter: MethodParameter,
                mediaType: MediaType, clazz: Class<out HttpMessageConverter<*>>,
                serverHttpRequest: ServerHttpRequest, serverHttpResponse: ServerHttpResponse): Map<String, Any?>  {
        val request = (serverHttpRequest as ServletServerHttpRequest).servletRequest
        val filterPredicate = getFilterPredicate(request)
        return map.filter(filterPredicate)
      }
    
      private fun getFilterPredicate(request: HttpServletRequest): (Map.Entry<String, Any?>) -> Boolean {
        val paths = request.pathInfo.split("/")
        if (paths.size > 2) {
          val subpaths = paths.subList(2, paths.size)
          val filter = subpaths.joinToString("")
          val regex = filter.replace(".", "\\.")
                            .replace("*", ".*")
                            .toRegex()
          return { it.key.matches(regex) }
        }
        return { true }
      }
    }
    

    This code doesn’t require an explicit call: it will be applied to all controllers in the configured package. It will also only be applied if the return type of the method is Map (though there’s no generics check, due to type erasure).

    Even better, it paves the way for future development involving further processing (ordering, paging, etc.).

    Conclusion

    There are several ways to share common code in a Spring MVC app, each having different pros and cons. In this post, for this specific use-case, the ResponseBodyAdvice has the most benefits.

    The main takeaway here is that the more tools one has in one’s toolbelt, the better the final choice. Go explore some tools you don’t already know about: what about reading some documentation today?

    Categories: JavaEE Tags: Spring MVC, Clean code, Design
  • Cloud offerings free tier - Amazon vs Google

    As a developer, you’re a pretty important resource for service providers. Without developers, service providers cannot make savings based on scale. Having developers also means higher chances of someone having a killer idea on top of your service, creating even more value. And developers attract more developers.

    In order to achieve that, service providers offer free services. In the Cloud realm, both Google and Amazon have a packaged offer valid for a year. Once the grace period has expired though, access to some limited free services is still available. This post tries to compare free services of those providers side-by-side.

    Common offerings

    Amazon Web Services Google Cloud Platform logo

    NoSQL database

    AWS - DynamoDB: Fast and flexible NoSQL database with seamless scalability
    Google - Cloud Datastore: Highly-scalable NoSQL database

    • Storage capacity: 25 GB (AWS) / 1 GB (Google)
    • Reads: 25 units (AWS) / 50k (Google)
    • Writes: 25 units (AWS) / 20k (Google)
    • Deletes: 20k (Google)

    Analytics

    AWS - Mobile Analytics: Fast, secure mobile app usage analytics
    Google - BigQuery: Fully managed, petabyte scale, analytics data warehouse

    • Events: 100 million (AWS)
    • Queries: 1 TB (Google)

    SCM

    AWS - CodeCommit: Highly scalable, managed source control service
    Google - Cloud Source Repositories: Multiple private Git repositories hosted on Google Cloud Platform

    • Storage capacity: 50 GB (AWS) / 1 GB (Google)
    • Active users: 5 (AWS)
    • Requests: 10k (AWS)

    Serverless

    AWS - Lambda: Compute service that runs your code in response to events and automatically manages the compute resources
    Google - Cloud Functions: A serverless environment to build and connect cloud services with code

    • Requests: 1 million (AWS) / 2 million, including HTTP & method invocations (Google)
    • Compute time: 3.2 million seconds (AWS) / 400k GB-seconds and 200k GHz-seconds (Google)

    Storage

    AWS - Storage Gateway: Hybrid cloud storage with seamless local integration and optimized data transfer
    Google - Google Cloud Storage: Best in class performance, reliability, and pricing for all your storage needs

    • Capacity: 100 GB (AWS) / 5 GB-months of Regional Storage (Google)
    • Other (Google): 5k Class A operations per month; 50k Class B operations per month; 1 GB network egress from North America to all region destinations per month (excluding China and Australia)

    Provider-specific offerings

    Items are listed here because:

    • Either the competitor doesn’t provide a similar offering e.g. Google Cloud Speech API
    • Or it does but the offering is outside the free tier e.g. Google Compute Engine vs Amazon EC2

    Amazon

    • Key Management Service - Managed service that provides easy encryption with administrative controls: 20k free requests per month
    • Device Farm - Test your iOS, Android and FireOS apps on real devices in the AWS cloud: free one-time trial of 250 device minutes
    • Simple Notification Service - Fast, flexible, fully managed push messaging service: 1 million publishes, 100k HTTP/S deliveries, 1k email deliveries
    • CodePipeline - Continuous delivery service for fast and reliable application updates: 1 active pipeline per month
    • CloudWatch - Monitoring for AWS cloud resources and applications: 10 custom metrics and 10 alarms; 1 million API requests; 5 GB of log data ingestion and 5 GB of log data archive; 3 dashboards with up to 50 metrics each per month
    • Simple Email Service - Cost-effective email service in the Cloud: 62k outbound messages per month to any recipient when you call Amazon SES from an Amazon EC2 instance directly or through AWS Elastic Beanstalk; 1k inbound messages per month
    • Simple Queue Service - Scalable queue for storing messages as they travel between computers: 1 million requests
    • Simple Workflow Service - Task coordination and state management service for Cloud applications: 10k activity tasks, 30k workflow-days, 1k initiated executions
    • Chime - Modern unified communications service that offers frustration-free meetings with exceptional audio and video quality: the Basic subscription is free to use for as long as you’d like
    • CodeBuild - Fully managed build service that builds and tests code in the cloud: 100 build minutes per month of build.general1.small compute type usage
    • Step Functions - Coordinate components of distributed applications: 4k state transitions per month
    • X-Ray - Analyze and debug your applications: 100k traces recorded per month; 1 million traces scanned or retrieved per month
    • Database Migration Service - Migrate databases with minimal downtime: 750 hours of Amazon DMS Single-AZ dms.t2.micro instance usage; 50 GB of included General Purpose (SSD) storage

    Google

    • Google Compute Engine - Scalable, high-performance virtual machines: 1 f1-micro instance per month (US regions only); 30 GB-months HDD and 5 GB-months snapshot; 1 GB network egress from North America to all region destinations per month (excluding China and Australia)
    • Google App Engine - Platform for building scalable web applications and mobile backends: 28 instance hours per day; 5 GB Cloud Storage; shared memcache; 1k search operations per day and 10 MB search indexing; 100 emails per day
    • Cloud Pub/Sub - A global service for real-time and reliable messaging and streaming data: 10 GB of messages per month
    • Container Engine - One-click container orchestration via Kubernetes clusters, managed by Google: basic cluster of 5 nodes or fewer; the basic cluster is free but each node is charged at standard Compute Engine pricing
    • Stackdriver - Monitoring, logging, and diagnostics for applications on Cloud Platform and AWS: 5 GB of logs with 7-day retention; read access API; basic email alerting
    • Cloud Vision API - Label detection, OCR, facial detection and more: 1k units per month
    • Cloud Speech API - Speech to text transcription, the same that powers Google’s own products: 60 minutes per month
    • Cloud Natural Language API - Derive insights from unstructured text using Google machine learning: 5k units per month
    • Cloud Shell - Manage your infrastructure and applications from the command-line in any browser: free access to Cloud Shell, including 5 GB of persistent disk storage
    • Cloud Container Builder - Fast, consistent, reliable container builds on Google Cloud Platform: 120 build-minutes per day

    Conclusion

    The free services listed above, even if limited compared to the standard versions, shouldn’t be overlooked. They can be used for prototyping, having fun with a crazy idea, teaching and/or self-learning. Have fun!


    Categories: Technical Tags: Google, Amazon, free, Cloud
  • Fully configurable mappings for Spring MVC

    Spring Boot logo

    As I wrote some weeks earlier, I’m trying to implement features of the Spring Boot actuator in a non-Boot Spring MVC application. Developing the endpoints themselves is quite straightforward. Much more challenging, however, is being able to configure the mappings in a properties file, like in the actuator. This got me to look more closely at how it’s done in the current code. This post sums up my “reverse-engineering” attempt around the subject.

    Standard MVC

    Usage

    In Spring MVC, in a class annotated with the @Controller annotation, methods can be in turn annotated with @RequestMapping. This annotation accepts a value attribute (or alternatively a path one) to define from which path it can be called.

    The path can be fixed, e.g. /user, but can also accept variables, e.g. /user/{id}, filled at runtime. In that case, path variables can be mapped to method parameters via @PathVariable:

    @RequestMapping(path = "/user/{id}", method = arrayOf(RequestMethod.GET))
    fun getUser(@PathVariable("id") id:String) = repository.findUserById(id)
    

    While well adapted to REST APIs, this has two important limitations regarding configuration:

    • The pattern is set during development time and does not change afterwards
    • The filling of parameters occurs at runtime

    Implementation

    With the above, mappings in @Controller-annotated classes will get registered during context startup through the DefaultAnnotationHandlerMapping class. Note there’s a default bean of this type registered in the context. This is summed up in the following diagram:

    DefaultAnnotationHandlerMapping sequence diagram

    In essence, the magic applies only to @Controller-annotated classes. Or, to be more strict, quoting the DefaultAnnotationHandlerMapping’s Javadoc:

    Annotated controllers are usually marked with the Controller stereotype at the type level. This is not strictly necessary when RequestMapping is applied at the type level (since such a handler usually implements the org.springframework.web.servlet.mvc.Controller interface). However, Controller is required for detecting RequestMapping annotations at the method level if RequestMapping is not present at the type level.

    Actuator

    Usage

    Spring Boot actuator allows for configuring the path associated with each endpoint in the application.properties file (or using alternative methods for Boot configuration).

    For example, the metrics endpoint is available by default via the metrics path. But it’s possible to configure a completely different path:

    endpoints.metrics.id=mymetrics
    

    Also, actuator endpoints are by default accessible directly under the root, but it’s possible to group them under a dedicated sub-context:

    management.context-path=/manage
    

    With the above configuration, the metrics endpoint is now available under /manage/mymetrics.

    Implementation

    Additional actuator endpoints should implement the MvcEndpoint interface. Methods annotated with @RequestMapping will work in the exact same way as for standard controllers above. This is achieved via a dedicated handler mapping, EndpointHandlerMapping, in the Spring context.

    HandlerMapping to map Endpoints to URLs via Endpoint.getId(). The semantics of @RequestMapping should be identical to a normal @Controller, but the endpoints should not be annotated as @Controller (otherwise they will be mapped by the normal MVC mechanisms).

    The class hierarchy is the following:

    MvcEndpoint class diagram

    This diagram shows what’s part of Spring Boot and what’s not.

    Conclusion

    Actuator endpoints reuse some of the existing Spring MVC code to handle @RequestMapping. It’s done in a dedicated mapping class, so as to keep standard MVC controllers separate from Spring Boot’s actuator endpoint class hierarchy. This is the part of the code to study, duplicate and adapt should one want fully configurable mappings in plain Spring MVC.

    Categories: JavaEE Tags: Spring MVC, actuator, Spring Boot
  • Coping with stringly-typed

    UPDATED on March 13, 2017: Add Builder pattern section

    Most developers have strong opinions regarding whether a language should be strongly-typed or weakly-typed, whatever notions they put behind those terms. Some also actively practice stringly-typed programming - mostly without even being aware of it. It happens when most of the attributes and parameters of a codebase are String. In this post, I will make use of the following simple snippet as an example:

    public class Person {
    
        private final String title;
        private final String givenName;
        private final String familyName;
        private final String email;
      
        public Person(String title, String givenName, String familyName, String email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
        ...
    }
    

    The original sin

    The problem with that code is that it’s hard to remember which parameter represents what and in which order they should be passed to the constructor.

    Person person = new Person("john.doe@example.org", "John", "Doe", "Sir");
    

    In the previous call, the email and the title parameter values were switched. Ooops.

    This is even worse if more than one constructor is available, offering optional parameters:

    public Person(String givenName, String familyName, String email) {
        this(null, givenName, familyName, email);
    }
    
    Person another = new Person("Sir", "John", "Doe");
    

    In that case, title was the optional parameter, not email. My bad.

    Solving the problem the OOP way

    Object-Oriented Programming and its advocates have a strong aversion to stringly-typed code for good reasons. Since everything in the world has a specific type, so must it be in the system.

    Let’s rewrite the previous code à la OOP:

    public class Title {
        private final String value;
        public Title(String value) {
            this.value = value;
        }
    }
    
    public class GivenName {
        private final String value;
        public GivenName(String value) {
            this.value = value;
        }
    }
    
    public class FamilyName {
        private final String value;
        public FamilyName(String value) {
            this.value = value;
        }
    }
    
    public class Email {
        private final String value;
        public Email(String value) {
            this.value = value;
        }
    }
    
    public class Person {
    
        private final Title title;
        private final GivenName givenName;
        private final FamilyName familyName;
        private final Email email;
      
        public Person(Title title, GivenName givenName, FamilyName familyName, Email email) {
            this.title = title;
            this.givenName = givenName;
            this.familyName = familyName;
            this.email = email;
        }
        ...
    }
    
    
    Person person = new Person(new Title(null), new GivenName("John"), new FamilyName("Doe"), new Email("john.doe@example.org"));
    

    That way drastically limits the possibility of mistakes. The drawback is a large increase in verbosity - which might lead to other bugs.

    Pattern to the rescue

    A common way to tackle this issue in Java is to use the Builder pattern. Let’s introduce a new builder class and rework the code:

    public class Person {
    
        private String title;
        private String givenName;
        private String familyName;
        private String email;
    
        private Person() {}
    
        private void setTitle(String title) {
            this.title = title;
        }
    
        private void setGivenName(String givenName) {
            this.givenName = givenName;
        }
    
        private void setFamilyName(String familyName) {
            this.familyName = familyName;
        }
    
        private void setEmail(String email) {
            this.email = email;
        }
    
        public static class Builder {
    
            private Person person;
    
            public Builder() {
                person = new Person();
            }
    
            public Builder title(String title) {
                person.setTitle(title);
                return this;
            }
    
            public Builder givenName(String givenName) {
                person.setGivenName(givenName);
                return this;
            }
    
            public Builder familyName(String familyName) {
                person.setFamilyName(familyName);
                return this;
            }
    
            public Builder email(String email) {
                person.setEmail(email);
                return this;
            }
    
            public Person build() {
                return person;
            }
        }
    }
    

    Note that, in addition to the new builder class, the constructor of the Person class has been made private. Using Java’s visibility rules, this allows only the Builder to create new Person instances. The same goes for the different setters.

    Using this pattern is quite straightforward:

    Person person = new Builder()
                   .title("Sir")
                   .givenName("John")
                   .familyName("Doe")
                   .email("john.doe@example.org")
                   .build();
    

    The builder pattern shifts the verbosity from the calling part to the design part. Not a bad trade-off.

    Languages to the rescue

    Verbosity is unfortunately the mark of Java. Some other languages (Kotlin, Scala, etc.) would be much more friendly to this approach, not only for class declarations, but also for object creation.

    Let’s port class declarations to Kotlin:

    class Title(val value: String?)
    class GivenName(val value: String)
    class FamilyName(val value: String)
    class Email(val value: String)
    
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    

    This is much better, thanks to Kotlin! And now object creation:

    val person = Person(Title(null), GivenName("John"), FamilyName("Doe"), Email("john.doe@example.org"))
    

    For this, verbosity is only marginally decreased compared to Java.

    Named parameters to the rescue

    OOP fanatics may stop reading there, for their way is not the only one to cope with stringly-typed.

    One alternative is named parameters, which are incidentally also found in Kotlin. Let’s get back to the original stringly-typed code, port it to Kotlin and use named parameters:

    class Person(val title: String?, val givenName: String, val familyName: String, val email: String)
    
    val person = Person(title = null, givenName = "John", familyName = "Doe", email = "john.doe@example.org")
    
    val another = Person(email = "john.doe@example.org", title = "Sir", givenName = "John", familyName = "Doe")
    

    A benefit of named parameters besides coping with stringly-typed code is that they are order-agnostic when invoking the constructor. Plus, they also play nice with default values:

    class Person(val title: String? = null, val givenName: String, val familyName: String, val email: String? = null)
    
    val person = Person(givenName = "John", familyName = "Doe")
    val another = Person(title = "Sir", givenName = "John", familyName = "Doe")
    

    Type aliases to the rescue

    While looking at Kotlin, let’s describe a feature released with 1.1 that might help.

    A type alias is, as its name implies, an alias for an existing type; the type can be a simple type, a collection, a lambda - whatever exists within the type system.

    Let’s create some type aliases in the stringly-typed world:

    typealias Title = String
    typealias GivenName = String
    typealias FamilyName = String
    typealias Email = String
    
    class Person(val title: Title, val givenName: GivenName, val familyName: FamilyName, val email: Email)
    
    val person = Person(null, "John", "Doe", "john.doe@example.org")
    

    The declaration seems more typed. Unfortunately object creation doesn’t bring any betterment.

    Note the main problem of type aliases is that they are just that - aliases: no new type is created, so if 2 aliases point to the same type, the 2 aliases and the original type are all interchangeable with one another.

    Libraries to the rescue

    For the rest of this post, let’s go back to the Java language.

    Twisting the logic a bit, parameters can be validated at runtime instead of compile-time with the help of specific libraries. In particular, the Bean validation library does the job:

    public Person(@Title String title, @GivenName String givenName, @FamilyName String familyName, @Email String email) {
        this.title = title;
        this.givenName = givenName;
        this.familyName = familyName;
        this.email = email;
    }
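
    For reference, such an annotation could be defined as a composed Bean Validation constraint. The following is only a sketch of one possible definition, not the actual code behind the snippet above:

    // sketch of a hypothetical composed constraint; requires javax.validation.Constraint,
    // javax.validation.Payload, javax.validation.constraints.Pattern and java.lang.annotation.*
    @Pattern(regexp = ".+@.+\\..+")
    @Constraint(validatedBy = {})
    @Target({ ElementType.PARAMETER, ElementType.FIELD })
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Email {
    
        String message() default "Invalid email";
    
        Class<?>[] groups() default {};
    
        Class<? extends Payload>[] payload() default {};
    }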
    

    Admittedly, it’s not the best solution… but it works.

    Tooling to the rescue

    I have already written about tooling, and how it’s as important as (if not more than) the language itself.

    Tools fill gaps in languages, while being non-intrusive. The downside is that everyone has to use them (or find a tool with the same feature).

    For example, when I started my career, coding guidelines mandated developers to order methods by alphabetical order in the class file. Nowadays, that would be senseless, as every IDE worth its salt can display the methods of a class in order.

    Likewise, named parameters can be a feature of the IDE, for languages that lack them. In particular, the latest versions of IntelliJ IDEA emulate named parameters for the Java language for types that are deemed too generic. The following shows the Person class inside the IDE:

    Conclusion

    While proper OOP design is the historical way to cope with stringly-typed code, it also makes the code quite verbose and unwieldy in Java. This post describes alternatives, each with specific pros and cons. They need to be evaluated in one’s own specific context to decide which one is the best fit.

  • A use-case for Spring component scan

    Regular readers of this blog know I’m a big proponent of the Spring framework, but I’m quite opinionated in the way it should be used. For example, I favor explicit object instantiation and explicit component wiring over self-annotated classes, component scanning and autowiring.

    Concepts

    Though those concepts are used by many Spring developers, my experience has taught me they are not always fully understood. Some explanation is in order.

    Self-annotated classes

    Self-annotated classes are classes which define how they will be instantiated by Spring via annotations on the classes themselves. @Component, @Controller, @Service and @Repository are the usual annotations used on self-annotated classes in most Spring projects.

    @Component
    class MySelfAnnotatedClass
    

    The main disadvantage of self-annotated classes is the hard coupling between the class and the bean. For example, it’s not possible to instantiate 2 singleton beans of the same class, as is possible with explicit creation. Worse, it also couples the class to the Spring framework itself.

    Note that @Configuration classes are also considered to be self-annotated.

    Component scanning

    Self-annotated classes need to be listed and registered in the context. This can be done explicitly:

    @Configuration
    class MyConfig {
    
        @Bean
        fun myClass() =  MySelfAnnotatedClass()
    }
    

    However, the most widespread option is to let the Spring framework search for every self-annotated class on the project classpath and register them according to the annotations.

    @Configuration @ComponentScan
    class MyConfig
    

    Autowiring

    Some beans require dependencies in order to be initialized. Wiring the dependency into the dependent bean can be either:

    • explicit: it’s the developer’s responsibility to tell which beans will fulfill the dependency.
    • implicit (or auto): the Spring framework is responsible to provide the dependency. In order to do that, a single bean must be eligible.

    Regarding the second option, please re-read an old post of mine for its related problems.

    Sometimes, however, there’s no avoiding autowiring. When bean Bean1 defined in configuration fragment Config1 depends on bean Bean2 defined in fragment Config2, the only possible injection option is autowiring.

    @Configuration
    class Config2 {
    
        @Bean
        fun bean2() = Bean2()
    }
    
    @Configuration
    class Config1 {
    
        @Bean @Autowired
        fun bean1(bean2: Bean2) = Bean1(bean2)
    }
    

    In the above snippet, autowiring is used without self-annotated classes.

    Reinventing the wheel

    This week, I found myself re-implementing Spring Boot’s actuator in a legacy non-Spring Boot application.

    The architecture is dead simple: the HTTP endpoint returns a Java object (or a list of them) serialized through the Jackson library. Every endpoint might return a different object, and each can be serialized using a custom serializer.

    I’ve organized the project in a package-per-endpoint way (as opposed to package-per-layer), and have already provided several endpoints. I’d like people to contribute other endpoints, and I want it to be as simple as possible. In particular, they should only:

    1. Declare controllers
    2. Declare configuration classes
    3. Instantiate Jackson serializers

    The rest should be taken care of by generic code written by me.

    The right usage of autowire

    Controllers and configuration classes are easily taken care of by using @ComponentScan on the main configuration class located in the project’s main package. But what about serializers?

    Spring is able to collect all beans of a specific class registered in the context into a list. That means that every package will declare its serializers independently, and common code can take care of the registration:

    @Configuration @EnableWebMvc @ComponentScan
    class WebConfiguration : WebMvcConfigurerAdapter() {
    
        @Autowired
        private lateinit var serializers: List<StdSerializer<*>>
    
        override fun configureMessageConverters(converters: MutableList<HttpMessageConverter<*>>) {
            converters.add(MappingJackson2HttpMessageConverter().apply {
                objectMapper.registerModule(SimpleModule().apply {
                    serializers.forEach { addSerializer(it) }
                })
            })
        }
    }
    

    Magic happens in the @Autowired serializers list above. This configuration class has already been written, and new packages don’t need to do anything: their serializers will automatically be part of the list.

    Here’s an example of such a configuration class, declaring a serializer bean:

    @Configuration
    class FooConfiguration {
    
        @Bean
        fun fooSerializer()  = FooSerializer()
    }
    
    class FooSerializer : StdSerializer<Foo>(Foo::class.java) {
        ...
    }
    

    Even better, if packages need to be modularized further into full-fledged JARs, this setup will work in the exact same way without any change.

    Conclusion

    A better understanding of self-annotated classes, component-scanning and autowiring is beneficial to all Spring developers.

    Moreover, while they have a lot of drawbacks for “standard” bean classes, it’s not only perfectly acceptable but even an advantage to use them within the scope of configuration classes. In projects designed in a package-by-feature way, it improves modularization and decoupling.

    Categories: Development Tags: Spring, autowiring, component scan, good practice
  • A use-case for local class declaration

    Polar bear in a bubble

    One of the first things one learns when starting with Java development is how to declare a class in its own file. Potential later stages include nested, inner and anonymous classes.

    But this doesn’t stop there: the JLS is a trove full of surprises. I recently learned classes can be declared inside any block, including methods. These are called local class declarations (§14.3).

    A local class is a nested class (§8) that is not a member of any class and that has a name. All local classes are inner classes (§8.1.3). Every local class declaration statement is immediately contained by a block. Local class declaration statements may be intermixed freely with other kinds of statements in the block.

    The scope of a local class immediately enclosed by a block (§14.2) is the rest of the immediately enclosing block, including its own class declaration. The scope of a local class immediately enclosed by a switch block statement group (§14.11) is the rest of the immediately enclosing switch block statement group, including its own class declaration.

    Cool, isn’t it? But using it just for the sake of it is not reason enough… until this week, when I started to implement something like the Spring Boot Actuator in a non-Boot application, using Jackson to serialize the results.

    Jackson offers several ways to customize the serialization process. For classes that are out of one’s reach and only require hiding fields or renaming them, it offers mixins. As an example, let’s tweak the serialization of the following class:

    public class Person {
    
        private final String firstName;
        private final String lastName;
        
        public Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
        
        public String getFirstName() {
            return firstName;
        }
        
        public String getLastName() {
            return lastName;
        }
    }
    

    Suppose the requirement is to have givenName and familyName attributes. In a regular Spring application, the mixin class should be registered during the configuration of message converters:

    public class WebConfiguration extends WebMvcConfigurerAdapter {
    
        @Override
        public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
            MappingJackson2HttpMessageConverter jackson2HttpMessageConverter = new MappingJackson2HttpMessageConverter();
            jackson2HttpMessageConverter.getObjectMapper().addMixIn(Person.class, PersonMixin.class);
            converters.add(jackson2HttpMessageConverter);
        }
    }
    
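    For reference, here’s what such a mixin could look like when declared the conventional way, e.g. as a top-level or nested abstract class (a sketch: the abstract getters simply mirror Person’s and carry the desired property names):

    import com.fasterxml.jackson.annotation.JsonProperty;

    // Sketch of the mixin as a regular class declaration
    abstract class PersonMixin {
        @JsonProperty("givenName") abstract String getFirstName();
        @JsonProperty("familyName") abstract String getLastName();
    }
    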

    Now, where does it make the most sense to declare this mixin class? The principle of declaring things in the smallest possible scope applies: having it in a dedicated file is obviously wrong, but even a private nested class is overkill. Hence, the most restricted scope is the method itself:

    public class WebConfiguration extends WebMvcConfigurerAdapter {
    
        @Override
        public void configureMessageConverters(List<HttpMessageConverter<?>> converters) {
            MappingJackson2HttpMessageConverter jackson2HttpMessageConverter = new MappingJackson2HttpMessageConverter();
            abstract class PersonMixin {
                @JsonProperty("givenName") abstract String getFirstName();
                @JsonProperty("familyName") abstract String getLastName();
            }
            jackson2HttpMessageConverter.getObjectMapper().addMixIn(Person.class, PersonMixin.class);
            converters.add(jackson2HttpMessageConverter);
        }
    }
    

    While this approach makes sense from a pure software engineering point of view, there is a reason not to design code like this: the principle of least surprise. Unless every member of the team is aware of and comfortable with local classes, this feature shouldn’t be used.

    Categories: Java Tags: class, method, design
  • ElasticSearch API cheatsheet

    ElasticSearch documentation is exhaustive, but the way it’s structured has some room for improvement. This post is meant as a cheat-sheet entry point into ElasticSearch APIs.

    Each entry below lists the API category, a short description, and example calls.

    Document API

    Single Document API

    Adds a new document

    PUT /my_index/my_type/1
    {
      "my_field" : "my_value"
    }
    
    POST /my_index/my_type
    {
      …​
    }
    
    PUT /my_index/my_type/1/_create
    {
      …​
    }

    Gets an existing document

    GET /my_index/my_type/0

    Deletes a document

    DELETE /my_index/my_type/0

    Updates a document

    POST /my_index/my_type/1/_update
    {
      …​
    }

    Multi-Document API

    Multi-get

    GET /_mget
    {
      "docs" : [
        {
          "_index" : "my_index",
          "_type" : "my_type",
          "_id" : "1"
        }
      ]
    }
    
    GET /my_index/_mget
    {
      "docs" : [
        {
          "_type" : "my_type",
          "_id" : "1"
        }
      ]
    }
    
    
    GET /my_index/my_type/_mget
    {
      "docs" : [
        {
          "_id" : "1"
        }
      ]
    }

    Performs many index/delete operations in one call
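
    (A sketch of the corresponding bulk call, assuming the standard newline-delimited body format:)

    POST /_bulk
    { "index" : { "_index" : "my_index", "_type" : "my_type", "_id" : "1" } }
    { "my_field" : "my_value" }
    { "delete" : { "_index" : "my_index", "_type" : "my_type", "_id" : "2" } }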

    Deletes by query

    POST /my_index/_delete_by_query
    {
      "query": {
        "match": {
          …​
        }
      }
    }

    Updates by query

    POST /my_index/_update_by_query?conflicts=proceed
    POST /my_index/_update_by_query?conflicts=proceed
    {
      "query": {
        "term": {
          "my_field": "my_value"
        }
      }
    }
    
    POST /my_index1,my_index2/my_type1,my_type2/_update_by_query

    Reindexes

    POST /_reindex
    {
      "source": {
        "index": "old_index"
      },
      "dest": {
        "index": "new_index"
      }
    }

    Search API

    URI Search

    Executes a search, either with query parameters on the URL or with a request body

    GET /my_index/my_type/_search?q=my_field:my_value
    GET /my_index/my_type/_search
    {
      "query" : {
        "term" : { "my_field" : "my_value" }
      }
    }

    Search Shards API

    Gets the indices/shards that a search would be executed against

    GET /my_index/_search_shards

    Count API

    Executes a count query

    GET /my_index/my_type/_count?q=my_field:my_value
    GET /my_index/my_type/_count
    {
      "query" : {
        "term" : { "my_field" : "my_value" }
      }
    }

    Validate API

    Validates a search query

    GET /my_index/my_type/_validate/query?q=my_field:my_value
    GET /my_index/my_type/_validate/query
    {
      "query" : {
        "term" : { "my_field" : "my_value" }
      }
    }

    Explain API

    Provides feedback on computation of a search

    GET /my_index/my_type/0/_explain
    {
      "query" : {
        "match" : { "message" : "elasticsearch" }
      }
    }
    GET /my_index/my_type/0/_explain?q=message:elasticsearch

    Profile API

    Provides timing information on individual components during a search

    GET /_search
    {
      "profile": true,
      "query" : {
        …​
      }
    }

    Field Stats API

    Finds statistical properties of fields without executing a search

    GET /_field_stats?fields=my_field
    GET /my_index/_field_stats?fields=my_field
    GET /my_index1,my_index2/_field_stats?fields=my_field

    Indices API

    Index management

    Instantiates a new index

    PUT /my_index
    {
      "settings" : {
        …​
      }
    }

    Deletes existing indices

    DELETE /my_index
    DELETE /my_index1,my_index2
    DELETE /my_index*
    DELETE /_all

    Retrieves information about indices

    GET /my_index
    GET /my_index*
    GET /my_index/_settings,_mappings

    Checks whether an index exists

    HEAD /my_index

    Closes/opens an index

    POST /my_index/_close
    POST /my_index/_open

    Shrinks an index to a new index with fewer primary shards
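
    (A sketch of the corresponding call; my_new_index is a hypothetical name for the target index:)

    POST /my_index/_shrink/my_new_index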

    Rolls over an alias to a new index if conditions are met

    POST /my_alias/_rollover
    {
      "conditions": {
        …​
      }
    }

    Mapping management

    Adds a new type to an existing index

    PUT /my_index/_mapping/new_type
    {
      "properties": {
        "my_field": {
          "type": "text"
        }
      }
    }

    Retrieves mapping definition for fields

    GET /my_index/_mapping/my_type/field/my_field
    GET /my_index1,my_index2/_mapping/my_type/field/my_field
    GET /_all/_mapping/my_type1,my_type2/field/my_field1,my_field2
    GET /_all/_mapping/my_type1*/field/my_field*

    Checks whether a type exists

    HEAD /my_index/_mapping/my_type

    Alias management

    Creates an alias over an index

    POST /_aliases
    {
      "actions" : [
        { "add" :
          { "index" : "my_index", "alias" : "my_alias" }
        }
      ]
    }
    
    POST /_aliases
    {
      "actions" : [
        { "add" :
          { "index" : ["index1", "index2"] , "alias" : "another_alias" }
        }
      ]
    }

    Removes an alias

    POST /_aliases
    {
      "actions" : [
        { "remove" :
          { "index" : "my_index", "alias" : "my_old_alias" }
        }
      ]
    }

    Index settings

    Updates settings of indices

    PUT /my_index/_settings
    {
      …​
    }

    Retrieves settings of indices

    GET /my_index/_settings

    Performs the analysis process on a text and returns the resulting tokens

    GET /_analyze
    {
      "analyzer" : "standard",
      "text" : "this is a test"
    }

    Creates a new template

    PUT /_template/my_template
    {
      …​
    }

    Deletes an existing template

    DELETE /_template/my_template

    Gets info about an existing template

    GET /_template/my_template

    Checks whether a template exists

    HEAD /_template/my_template

    Replica configuration

    Sets index data location on a disk

    Monitoring

    Provides statistics on indices

    GET /_stats
    GET /my_index1/_stats
    GET /my_index1,my_index2/_stats
    GET /my_index1/_stats/flush,merge

    Provides info on Lucene segments

    GET /_segments
    GET /my_index1/_segments
    GET /my_index1,my_index2/_segments

    Provides recovery info on indices

    GET /_recovery
    GET /my_index1/_recovery
    GET /my_index1,my_index2/_recovery

    Provides store info on shard copies of indices

    GET /_shard_stores
    GET /my_index1/_shard_stores
    GET /my_index1,my_index2/_shard_stores

    Status management

    Clears the cache of indices

    POST /_cache/clear
    POST /my_index/_cache/clear
    POST /my_index1,my_index2/_cache/clear

    Explicitly refreshes indices

    POST /_refresh
    POST /my_index/_refresh
    POST /my_index1,my_index2/_refresh

    Flushes the in-memory transaction log to disk

    POST /_flush
    POST /my_index/_flush
    POST /my_index1,my_index2/_flush

    Merges Lucene segments

    POST /_forcemerge?max_num_segments=1
    POST /my_index/_forcemerge?max_num_segments=1
    POST /my_index1,my_index2/_forcemerge?max_num_segments=1

    cat API

    cat aliases

    Shows information about aliases, including filter and routing infos

    GET /_cat/aliases?v
    GET /_cat/aliases/my_alias?v

    cat allocations

    Provides a snapshot on how many shards are allocated and how much disk space is used for each node

    GET /_cat/allocation?v

    cat count

    Provides quick access to the document count

    GET /_cat/count?v
    GET /_cat/count/my_index?v

    cat fielddata

    Shows heap memory currently being used by fielddata

    GET /_cat/fielddata?v
    GET /_cat/fielddata/my_field1,my_field2?v

    cat health

    One-line representation of the same information from /_cluster/health

    GET /_cat/health?v
    GET /_cat/health?v&ts=0

    cat indices

    Provides a node-spanning cross-section of each index

    GET /_cat/indices?v
    GET /_cat/indices?v&s=index
    GET /_cat/indices?v&health=yellow
    GET /_cat/indices/my_index*?v&health=yellow

    cat master

    Displays the master’s node ID, bound IP address, and node name

    GET /_cat/master?v

    cat nodeattrs

    Shows custom node attributes

    GET /_cat/nodeattrs?v
    GET /_cat/nodeattrs?v&h=name,id,pid,ip

    cat nodes

    Shows cluster topology

    GET /_cat/nodes?v
    GET /_cat/nodes?v&h=name,id,pid,ip

    cat pending tasks

    Provides the same information as /_cluster/pending_tasks

    GET /_cat/pending_tasks?v

    cat plugins

    Provides a node-spanning view of running plugins per node

    GET /_cat/plugins?v
    GET /_cat/plugins?v&h=name,id,pid,ip

    cat recovery

    Shows on-going and completed index shard recoveries

    GET /_cat/recovery?v
    GET /_cat/recovery?v&h=name,id,pid,ip

    cat repositories

    Shows snapshot repositories registered in the cluster

    GET /_cat/repositories?v

    cat thread pool

    Shows cluster-wide thread pool statistics per node

    GET /_cat/thread_pool?v
    GET /_cat/thread_pool?v&h=id,pid,ip

    cat shards

    Displays shard-to-node relationships

    GET /_cat/shards?v
    GET /_cat/shards/my_index?v
    GET /_cat/shards/my_ind*?v

    cat segments

    Provides information similar to _segments

    GET /_cat/segments?v
    GET /_cat/segments/my_index?v
    GET /_cat/segments/my_index1,my_index2?v

    cat snapshots

    Shows snapshots belonging to a repository

    GET /_cat/snapshots/my_repo?v

    cat templates

    Provides information about existing templates

    GET /_cat/templates?v
    GET /_cat/templates/my_template
    GET /_cat/templates/my_template*

    Cluster API

    Cluster Health

    Gets the status of a cluster’s health

    GET /_cluster/health
    GET /_cluster/health?wait_for_status=yellow&timeout=50s
    GET /_cluster/health/my_index1
    GET /_cluster/health/my_index1,my_index2

    Cluster State

    Gets state information about a cluster

    GET /_cluster/state
    GET /_cluster/state/version,nodes/my_index1
    GET /_cluster/state/version,nodes/my_index1,my_index2
    GET /_cluster/state/version,nodes/_all

    Cluster Stats

    Retrieves statistics from a cluster

    GET /_cluster/stats
    GET /_cluster/stats?human&pretty

    Pending cluster tasks

    Returns a list of any cluster-level changes

    GET /_cluster/pending_tasks

    Cluster Reroute

    Executes a cluster reroute allocation

    POST /_cluster/reroute
    {
      …​
    }

    Cluster Update Settings

    Updates cluster-wide settings

    PUT /_cluster/settings
    {
      "persistent" : {
        …​
      },
      "transient" : {
        …​
      }
    }

    Node Stats

    Retrieves cluster nodes statistics

    GET /_nodes/stats
    GET /_nodes/my_node1,my_node2/stats
    GET /_nodes/127.0.0.1/stats
    GET /_nodes/stats/indices,os,process

    Node Info

    Retrieves cluster nodes information

    GET /_nodes
    GET /_nodes/my_node1,my_node2
    GET /_nodes/_all/indices,os,process
    GET /_nodes/indices,os,process
    GET /_nodes/my_node1,my_node2/_all

    Task Management API

    Retrieves information about tasks currently executing on nodes in the cluster

    GET /_tasks
    GET /_tasks?nodes=my_node1,my_node2
    GET /_tasks?nodes=my_node1,my_node2&actions=cluster:*

    Nodes Hot Threads

    Gets current hot threads on nodes in the cluster

    GET /_nodes/hot_threads
    GET /_nodes/hot_threads/my_node
    GET /_nodes/my_node1,my_node2/hot_threads

    Cluster Allocation Explain API

    Answers the question "why is this shard unassigned?"

    GET /_cluster/allocation/explain
    GET /_cluster/allocation/explain
    {
      "index": "myindex",
      "shard": 0,
      "primary": false
    }

    Note: some of the above APIs are experimental or new, and are subject to removal or change in future versions.

    Last updated on Feb. 22nd
    Categories: Development Tags: Elastic, ElasticSearch, API