Posts Tagged ‘monitoring’
  • Feeding Spring Boot metrics to Elasticsearch


    :imagesdir: /assets/resources/feeding-spring-boot-metrics-to-elasticsearch/

    This week’s post aims to describe how to send JMX metrics taken from the JVM to an Elasticsearch instance.

    == Business app requirements

The business app itself has only minor requirements.

    The easiest use-case is to start from a Spring Boot application. In order for metrics to be available, just add the Actuator dependency to it:


[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
----

Note that when inheriting from spring-boot-starter-parent, setting the version is not necessary: it's inherited from the parent POM.

    To send data to JMX, configure a brand-new @Bean in the context:


[source,java]
----
@Bean
@ExportMetricWriter
MetricWriter metricWriter(MBeanExporter exporter) {
    return new JmxMetricWriter(exporter);
}
----

    == To-be architectural design

    There are several options to put JMX data into Elasticsearch.

    === Possible options

. The most straightforward way is to use Logstash with its JMX plugin.
. Alternatively, one can hack one's own micro-service architecture:
• Let the application send metrics to JMX - there's the Spring Boot Actuator for that, and the overhead is pretty limited
• Expose the JMX data on an HTTP endpoint using Jolokia
• Have a dedicated app poll the endpoint and send the data to Elasticsearch

This way, every component has its own responsibility: there's not much performance overhead, and the metric-handling part can fail while the main app is still available.
. An alternative would be to directly poll the JMX data from the JVM.

    === Unfortunate setback

Any architect worth his salt (read: lazy) should always consider the out-of-the-box option first. The Logstash JMX plugin looks promising. After installing the plugin, the jmx input can be configured in the Logstash configuration file:


[source]
----
input {
  jmx {
    path => "/var/logstash/jmxconf"
    polling_frequency => 5
    type => "jmx"
  }
}

output {
  stdout { codec => rubydebug }
}
----

The plugin is designed to read the connection parameters of the target JVM (such as host and port), as well as the metrics to handle, from JSON configuration files. In the above example, those files are watched in the /var/logstash/jmxconf folder. Moreover, they can be added, removed and updated on the fly.

Here's an example of such a configuration file:


[source,json]
----
{
  "host" : "localhost",
  "port" : 1616,
  "alias" : "petclinic",
  "queries" : [{
    "object_name" : "org.springframework.metrics:name=*,type=*,value=*",
    "object_alias" : "${type}.${name}.${value}"
  }]
}
----

An MBean's ObjectName can be determined inside JConsole:

image::jconsole.png[JConsole screenshot,739,314,align="center"]

The plugin allows wildcards in the metric's name and the usage of captured values in the alias. Also, by default, all attributes will be read (they can be restricted if necessary).
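To illustrate the substitution the plugin performs, here's a minimal, framework-free Kotlin sketch (the function name and parsing logic are illustrative, not the plugin's actual code): the key properties of a matched ObjectName are captured, then injected into the alias template.

```kotlin
// Illustrative only: mimics how an object_alias template is expanded
// from the key properties of a matched ObjectName.
fun expandAlias(objectName: String, aliasTemplate: String): String {
    // An ObjectName looks like "domain:key1=value1,key2=value2"
    val properties = objectName.substringAfter(':')
        .split(',')
        .associate { pair ->
            val (key, value) = pair.split('=', limit = 2)
            key to value
        }
    // Replace each ${key} placeholder with the captured value
    var alias = aliasTemplate
    for ((key, value) in properties) {
        alias = alias.replace("\${$key}", value)
    }
    return alias
}

fun main() {
    val alias = expandAlias(
        "org.springframework.metrics:name=status,type=counter,value=beans",
        "\${type}.\${name}.\${value}"
    )
    println(alias) // counter.status.beans
}
```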

Note: when starting the business app, it's highly recommended to set the JMX port explicitly through the com.sun.management.jmxremote.port system property.

    Unfortunately, at the time of this writing, running the above configuration fails with messages of this kind:

----
[WARN][logstash.inputs.jmx] Failed retrieving metrics for attribute Value on object blah blah blah
[WARN][logstash.inputs.jmx] undefined method `event' for =
----

For reference purposes, a GitHub issue has been opened on the plugin for this error.

== The do-it-yourself alternative

Considering it's easier to poll HTTP endpoints than JMX - and that implementations already exist - let's go for the micro-service option above. The libraries involved are:

• Spring Boot for the business app
  • With the Actuator starter to provide metrics
  • Configured with the JMX exporter for sending data
  • Also with the Jolokia dependency to expose JMX beans on an HTTP endpoint
• Another Spring Boot app for the "poller"
  • Configured with a scheduled service to regularly poll the endpoint and send the data to Elasticsearch

image::component-diagram.png[Draft architecture component diagram,535,283,align="center"]

    === Additional business app requirement

    To expose the JMX data over HTTP, simply add the Jolokia dependency to the business app:


[source,xml]
----
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
</dependency>
----

    From this point on, one can query for any JMX metric via the HTTP endpoint exposed by Jolokia - by default, the full URL looks like /jolokia/read/<JMX_ObjectName>.
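Building that URL is straightforward enough to sketch in a couple of lines; the helper name and the optional attribute segment are mine, but the /jolokia/read/<ObjectName>[/<attribute>] shape follows Jolokia's read operation:

```kotlin
// Illustrative helper: assembles the Jolokia read URL for a given MBean.
// An optional attribute can be appended to restrict the read.
fun jolokiaReadUrl(base: String, objectName: String, attribute: String? = null): String {
    val url = "$base/jolokia/read/$objectName"
    return if (attribute != null) "$url/$attribute" else url
}

fun main() {
    println(jolokiaReadUrl("http://localhost:8080", "java.lang:type=Memory"))
    // http://localhost:8080/jolokia/read/java.lang:type=Memory
    println(jolokiaReadUrl("http://localhost:8080", "java.lang:type=Memory", "HeapMemoryUsage"))
    // http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
}
```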

    === Custom-made broker

    The broker app responsibilities include:

    • reading JMX metrics from the business app through the HTTP endpoint at regular intervals
    • sending them to Elasticsearch for indexing

My initial move was to use Spring Data, but it seems the current release is not compatible with the latest Elasticsearch version 5, as I got the following exception:

----
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
----

    Besides, Spring Data is based on entities, which implies deserializing from HTTP and serializing back again to Elasticsearch: that has a negative impact on performance for no real added value.

    The code itself is quite straightforward:


[source,kotlin]
----
@SpringBootApplication                                            <1>
@EnableScheduling                                                 <2>
open class JolokiaElasticApplication {

  @Autowired lateinit var client: JestClient                      <6>

  @Bean open fun template() = RestTemplate()                      <4>

  @Scheduled(fixedRate = 5000)                                    <3>
  open fun transfer() {
    val result = template().getForObject(                         <5>
      "http://localhost:8080/manage/jolokia/read/org.springframework.metrics:name=status,type=counter,value=beans",
      String::class.java)
    val index = Index.Builder(result)
      .index("metrics")
      .type("metric")
      .id(UUID.randomUUID().toString())
      .build()
    client.execute(index)
  }
}

fun main(args: Array<String>) {
  SpringApplication.run(JolokiaElasticApplication::class.java, *args)
}
----

<1> Of course, it's a Spring Boot application.
<2> To poll at regular intervals, the application must be annotated with @EnableScheduling.
<3> The polling method is annotated with @Scheduled and parameterized with the interval in milliseconds.
<4> In a Spring Boot application, calling HTTP endpoints is achieved through the RestTemplate. Once created - it's a singleton - it can be (re)used throughout the application.
<5> The call result is deserialized into a String.
<6> The client to use is Jest. Jest offers a dedicated indexing API: it just requires the JSON string to be sent, as well as the index name, the object type and its id. With the Spring Boot Elastic starter on the classpath, a JestClient instance is automatically registered in the bean factory. Just autowire it in the configuration to use it.

At this point, launching the broker app will poll the business app at regular intervals for the specified metrics and send them to Elasticsearch. It's of course quite crude - everything is hard-coded - but it gets the job done.
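Stripped of Spring, the broker's core loop boils down to "fetch a JSON string, hand it to an indexer, repeat". Here's a framework-free sketch of that idea, with lambdas standing in for the RestTemplate call and the Jest client (all names here are mine, not the post's code):

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// Framework-free sketch of the broker: poll a metrics source at a fixed
// rate and hand every payload to an indexer, without deserializing it.
class MetricsBroker(
    private val fetch: () -> String,      // stands in for the Jolokia HTTP call
    private val index: (String) -> Unit   // stands in for JestClient.execute
) {
    private val scheduler = Executors.newSingleThreadScheduledExecutor()

    // One poll-and-forward cycle
    fun transferOnce() = index(fetch())

    fun start(periodMillis: Long) {
        scheduler.scheduleAtFixedRate({ transferOnce() }, 0, periodMillis, TimeUnit.MILLISECONDS)
    }

    fun stop() = scheduler.shutdown()
}

fun main() {
    val received = mutableListOf<String>()
    val broker = MetricsBroker(
        fetch = { """{"value":42}""" },
        index = { received += it }
    )
    broker.transferOnce()
    println(received) // [{"value":42}]
}
```

Note that the payload is passed through as a raw string, mirroring the post's point about avoiding a deserialize/serialize round-trip.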

    == Conclusion

    Despite the failing plugin, we managed to get the JMX data from the business application to Elasticsearch by using a dedicated Spring Boot app.

  • HTTP headers forwarding in microservices


    :page-liquid: :experimental: :imagesdir: /assets/resources/http-headers-management-with-spring/

Microservices are not a trend anymore. Like it or not, they are here to stay. Yet, there's a huge gap between embracing the microservice architecture and implementing it right. As a reminder, one might first want to check the many fallacies of distributed computing. Among all the requirements necessary to overcome them is the ability to follow one HTTP request along the microservices involved in a specific business scenario - for monitoring and debugging purposes.

One possible implementation is a dedicated HTTP header with an immutable value passed along to every microservice involved in the call chain. This week, I developed this sort of monitoring on a Spring microservice and would like to share how to achieve it.

    == Headers forwarding out-of-the-box

In the Spring ecosystem, Spring Cloud Sleuth is the library dedicated to that:

[quote]
____
Spring Cloud Sleuth implements a distributed tracing solution for Spring Cloud, borrowing heavily from Dapper, Zipkin and HTrace. For most users Sleuth should be invisible, and all your interactions with external systems should be instrumented automatically. You can capture data simply in logs, or by sending it to a remote collector service.
____

    Within Spring Boot projects, adding the Spring Cloud Sleuth library to the classpath will automatically add 2 HTTP headers to all calls:

X-B3-TraceId:: Shared by all HTTP calls of a single transaction, i.e. the wished-for transaction identifier
X-B3-SpanId:: Identifies the work of a single microservice during a transaction
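As a rough illustration of the semantics (a toy model, not Sleuth's actual implementation): the trace id is minted once per transaction and propagated unchanged, while each service in the chain gets its own span id.

```kotlin
import java.util.UUID

// Toy model of the X-B3 header semantics: one trace id per transaction,
// one span id per unit of work. Not Sleuth's real implementation.
data class TraceContext(val traceId: String, val spanId: String)

// The first service in the chain starts the trace
fun startTrace() = TraceContext(UUID.randomUUID().toString(), UUID.randomUUID().toString())

// Each downstream call keeps the trace id but gets a fresh span id
fun nextHop(parent: TraceContext) = TraceContext(parent.traceId, UUID.randomUUID().toString())

fun main() {
    val first = startTrace()
    val second = nextHop(first)
    check(first.traceId == second.traceId) // same transaction end to end
    check(first.spanId != second.spanId)   // different unit of work per service
}
```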

    Spring Cloud Sleuth offers some customisation capabilities, such as alternative header names, at the cost of some extra code.

    == Diverging from out-of-the-box features

Those features are quite handy when starting from a clean-slate architecture. Unfortunately, the project I'm working on has a different context:

• The transaction ID is not created by the first microservice in the call chain - a mandatory façade proxy creates it
• The transaction ID is not numeric - and Sleuth handles only numeric values
• Another header is required, whose objective is to group all requests related to one business scenario across different call chains
• A third header is necessary, to be incremented by each new microservice in the call chain

A solution architect's first move would be to check among API management products, such as Apigee (recently bought by Google), and find which one offers features matching those requirements. Unfortunately, the current context doesn't allow for that.

    == Coding the requirements

In the end, I coded the following using the Spring framework:

. Read and store headers from the initial request
. Write them in new microservice requests
. Read and store headers from the microservice response
. Write them in the final response to the initiator, not forgetting to increment the call counter

image::header-flow.png[UML modeling of the header flow,533,599,align="center"]
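The four steps above can be simulated with plain maps standing in for HTTP headers (names and the helper are illustrative): a service stores the incoming headers, then writes them back out with the hop counter incremented.

```kotlin
// Toy model of the header flow: maps stand in for HTTP headers.
// A service reads and stores the incoming headers (steps 1 and 3),
// then writes them back out with the hop counter incremented (steps 2 and 4).
fun propagate(incoming: Map<String, String>): Map<String, String> {
    val hop = incoming["hop"]?.toIntOrNull() ?: 0            // read and store
    return incoming + ("hop" to (hop + 1).toString())        // write, incremented
}

fun main() {
    val outgoing = propagate(mapOf("hop" to "1", "request-id" to "abc", "session-id" to "xyz"))
    check(outgoing["hop"] == "2")          // counter incremented by this hop
    check(outgoing["request-id"] == "abc") // other headers pass through untouched
}
```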

    === Headers holder

The first step is to create the entity responsible for holding all the necessary headers. It's unimaginatively called HeadersHolder. Blame me all you want, I couldn't find a more descriptive name.


[source,kotlin]
----
private const val HOP_KEY = "hop"
private const val REQUEST_ID_KEY = "request-id"
private const val SESSION_ID_KEY = "session-id"

data class HeadersHolder(var hop: Int?,
                         var requestId: String?,
                         var sessionId: String?)
----

The interesting part is deciding which scope is the most relevant for instances of this class. Obviously, there must be several instances, so the singleton scope is not suitable. Also, since data must be stored across several requests, the prototype scope won't do either. In the end, the only possible way to manage the instances is through a ThreadLocal.
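The per-thread requirement can be demonstrated without Spring: a plain ThreadLocal gives each request-handling thread its own holder instance. This is a simplified illustration, not the Spring scope itself:

```kotlin
// Plain-JVM illustration of why a thread scope fits: each thread sees
// its own holder instance. Simplified stand-in, not Spring's scope.
class Holder(var requestId: String? = null)

val holder: ThreadLocal<Holder> = ThreadLocal.withInitial { Holder() }

fun main() {
    holder.get().requestId = "main-request"

    val worker = Thread {
        // a different thread gets a fresh, empty instance
        check(holder.get().requestId == null)
        holder.get().requestId = "worker-request"
    }
    worker.start()
    worker.join()

    // the main thread's value is untouched by the worker
    check(holder.get().requestId == "main-request")
}
```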

Though it's possible to manage the ThreadLocal directly, let's leverage Spring's features instead, since it allows to easily add new scopes. There's already an out-of-the-box scope implementation for ThreadLocal; one just needs to register it in the context. This directly translates into the following code:


[source,kotlin]
----
internal const val THREAD_SCOPE = "thread"

@Scope(THREAD_SCOPE)
annotation class ThreadScope

@Configuration
open class WebConfigurer {

    @Bean @ThreadScope
    open fun headersHolder() = HeadersHolder()

    @Bean open fun customScopeConfigurer() = CustomScopeConfigurer().apply {
        addScope(THREAD_SCOPE, SimpleThreadScope())
    }
}
----

    === Headers in the server part

    Let’s implement requirements 1 & 4 above: read headers from the request and write them to the response. Also, headers need to be reset after the request-response cycle to prepare for the next.

This also mandates that the holder class be updated to be more +++OOP+++-friendly:


[source,kotlin]
----
data class HeadersHolder private constructor(private var hop: Int?,
                                             private var requestId: String?,
                                             private var sessionId: String?) {

    constructor() : this(null, null, null)

    fun readFrom(request: HttpServletRequest) {
        this.hop = request.getIntHeader(HOP_KEY)
        this.requestId = request.getHeader(REQUEST_ID_KEY)
        this.sessionId = request.getHeader(SESSION_ID_KEY)
    }

    fun writeTo(response: HttpServletResponse) {
        hop?.let { response.addIntHeader(HOP_KEY, hop as Int) }
        response.addHeader(REQUEST_ID_KEY, requestId)
        response.addHeader(SESSION_ID_KEY, sessionId)
    }

    fun clear() {
        hop = null
        requestId = null
        sessionId = null
    }
}
----

    To keep controllers free from any header management concern, related code should be located in a filter or a similar component. In the Spring MVC ecosystem, this translates into an interceptor.


[source,kotlin]
----
abstract class HeadersServerInterceptor : HandlerInterceptorAdapter() {

    abstract val headersHolder: HeadersHolder

    override fun preHandle(request: HttpServletRequest,
                           response: HttpServletResponse, handler: Any): Boolean {
        headersHolder.readFrom(request)
        return true
    }

    override fun afterCompletion(request: HttpServletRequest, response: HttpServletResponse,
                                 handler: Any, ex: Exception?) {
        with(headersHolder) {
            writeTo(response)
            clear()
        }
    }
}

@Configuration
open class WebConfigurer : WebMvcConfigurerAdapter() {

    override fun addInterceptors(registry: InterceptorRegistry) {
        registry.addInterceptor(object : HeadersServerInterceptor() {
            override val headersHolder: HeadersHolder
                get() = headersHolder()
        })
    }
}
----

    Note the invocation of the clear() method to reset the headers holder for the next request.

The most important bit is the abstract headersHolder property. As its scope - thread - is narrower than the interceptor's, it cannot be injected directly, lest it be injected only once during Spring's context startup. Hence, Spring provides link:{% post_url 2012-07-29-method-injection-with-spring %}[lookup method injection^]. The above code is its direct translation into Kotlin.

    === Headers in client calls

The previous code assumes the current microservice is at the end of the caller chain: it reads the request headers and writes them back in the response (not forgetting to increment the 'hop' counter). However, monitoring is relevant only for a caller chain with more than a single link. How can headers be passed to the next microservice (and retrieved back) - requirements 2 & 3 above?

Spring provides a handy abstraction to handle the client part - the ClientHttpRequestInterceptor, which can be registered on a REST template. Regarding the scope mismatch, the same injection trick as for the server interceptor above is used.


[source,kotlin]
----
abstract class HeadersClientInterceptor : ClientHttpRequestInterceptor {

    abstract val headersHolder: HeadersHolder

    override fun intercept(request: HttpRequest,
                           body: ByteArray, execution: ClientHttpRequestExecution): ClientHttpResponse {
        with(headersHolder) {
            writeTo(request.headers)
            return execution.execute(request, body).apply {
                readFrom(headers)
            }
        }
    }
}

@Configuration
open class WebConfigurer : WebMvcConfigurerAdapter() {

    @Bean open fun headersClientInterceptor() = object : HeadersClientInterceptor() {
        override val headersHolder: HeadersHolder
            get() = headersHolder()
    }

    @Bean open fun oAuth2RestTemplate() = OAuth2RestTemplate(clientCredentialsResourceDetails()).apply {
        interceptors = listOf(headersClientInterceptor())
    }
}
----

    With this code, every REST call using the oAuth2RestTemplate() will have headers managed automatically by the interceptor.

    The HeadersHolder just needs a quick update:


[source,kotlin]
----
data class HeadersHolder private constructor(private var hop: Int?,
                                             private var requestId: String?,
                                             private var sessionId: String?) {

    fun readFrom(headers: org.springframework.http.HttpHeaders) {
        headers[HOP_KEY]?.let {
            it.getOrNull(0)?.let { this.hop = it.toInt() }
        }
        headers[REQUEST_ID_KEY]?.let { this.requestId = it.getOrNull(0) }
        headers[SESSION_ID_KEY]?.let { this.sessionId = it.getOrNull(0) }
    }

    fun writeTo(headers: org.springframework.http.HttpHeaders) {
        hop?.let { headers.add(HOP_KEY, hop.toString()) }
        headers.add(REQUEST_ID_KEY, requestId)
        headers.add(SESSION_ID_KEY, sessionId)
    }
}
----

    == Conclusion

    Spring Cloud offers many components that can be used out-of-the-box when developing microservices. When requirements start to diverge from what it provides, the flexibility of the underlying Spring Framework can be leveraged to code those requirements.