Archive

Posts Tagged ‘metrics’
  • Feeding Spring Boot metrics to Elasticsearch


    :imagesdir: /assets/resources/feeding-spring-boot-metrics-to-elasticsearch/

    This week’s post aims to describe how to send JMX metrics taken from the JVM to an Elasticsearch instance.

    == Business app requirements

The business app has only minor requirements.

    The easiest use-case is to start from a Spring Boot application. In order for metrics to be available, just add the Actuator dependency to it:

[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
----

Note that when inheriting from spring-boot-starter-parent, setting the version is not necessary, as it is inherited from the parent POM.

    To send data to JMX, configure a brand-new @Bean in the context:

[source,java]
----
@Bean
@ExportMetricWriter
MetricWriter metricWriter(MBeanExporter exporter) {
    return new JmxMetricWriter(exporter);
}
----

    == To-be architectural design

    There are several options to put JMX data into Elasticsearch.

    === Possible options

. The most straightforward way is to use Logstash with the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jmx.html[JMX plugin^].
. Alternatively, one can hack his own micro-service architecture:

  • Let the application send metrics to JMX - there's the Spring Boot Actuator for that, and the overhead is pretty limited
  • Have a feature expose the JMX data on an HTTP endpoint using https://jolokia.org/[Jolokia^]
  • Have a dedicated app poll the endpoint and send the data to Elasticsearch

  This way, every component has its own responsibility, there's not much performance overhead and the metric-handling part can fail while the main app is still available.
. An alternative would be to poll the JMX data directly from the JVM.

    === Unfortunate setback

Any architect worth his salt (read: lazy) should always consider the out-of-the-box option first. The Logstash JMX plugin looks promising. After installing it, the jmx input can be configured in the Logstash configuration file:

[source]
----
input {
  jmx {
    path => "/var/logstash/jmxconf"
    polling_frequency => 5
    type => "jmx"
  }
}

output {
  stdout { codec => rubydebug }
}
----

The plugin is designed to read the JVM parameters (such as host and port), as well as the metrics to handle, from JSON configuration files. In the above example, those files are watched in the /var/logstash/jmxconf folder. Moreover, they can be added, removed and updated on the fly.

    Here’s an example of such configuration file:

[source,json]
----
{
  "host" : "localhost",
  "port" : 1616,
  "alias" : "petclinic",
  "queries" : [{
    "object_name" : "org.springframework.metrics:name=*,type=*,value=*",
    "object_alias" : "${type}.${name}.${value}"
  }]
}
----

An MBean's ObjectName can be determined from inside jconsole:

image::jconsole.png[JConsole screenshot,739,314,align="center"]

The plugin allows wildcards in the metric's name and the use of captured values in the alias. Also, by default, all attributes will be read (this can be restricted if necessary).

    Note: when starting the business app, it’s highly recommended to set the JMX port through the com.sun.management.jmxremote.port system property.
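For local testing, that could look like the following command - a sketch only: the port matches the JSON configuration above, the JAR name is a placeholder, and authentication/SSL are disabled purely for convenience:

[source,bash]
----
# Expose JMX on port 1616 without authentication - for local testing only
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=1616 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar petclinic.jar
----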

    Unfortunately, at the time of this writing, running the above configuration fails with messages of this kind:


[source]
----
[WARN][logstash.inputs.jmx] Failed retrieving metrics for attribute Value on object blah blah blah
[WARN][logstash.inputs.jmx] undefined method `event' for ...
----

For reference purposes, the GitHub issue can be found https://github.com/logstash-plugins/logstash-input-jmx/issues/26[here^].

== The do-it-yourself alternative

Considering it's easier to poll HTTP endpoints than JMX - and that implementations already exist - let's go for the micro-service option above. The stack will include:

• Spring Boot for the business app
  • With the Actuator starter to provide metrics
  • Configured with the JMX exporter to send data
  • Also with the dependency to expose JMX beans on an HTTP endpoint
• Another Spring Boot app for the "poller"
  • Configured with a scheduled service to regularly poll the endpoint and send the data to Elasticsearch

image::component-diagram.png[Draft architecture component diagram,535,283,align="center"]

    === Additional business app requirement

    To expose the JMX data over HTTP, simply add the Jolokia dependency to the business app:

[source,xml]
----
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
</dependency>
----

    From this point on, one can query for any JMX metric via the HTTP endpoint exposed by Jolokia - by default, the full URL looks like /jolokia/read/<JMX_ObjectName>.
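For instance, the counter used later in this post can be read through a URL of the following form (the /manage prefix is simply the management context path configured for this particular app):

[source]
----
http://localhost:8080/manage/jolokia/read/org.springframework.metrics:name=status,type=counter,value=beans
----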

    === Custom-made broker

    The broker app responsibilities include:

    • reading JMX metrics from the business app through the HTTP endpoint at regular intervals
    • sending them to Elasticsearch for indexing

My initial move was to use Spring Data, but it seems the current release is not compatible with the latest Elasticsearch 5 version, as I got the following exception:


[source]
----
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
----

    Besides, Spring Data is based on entities, which implies deserializing from HTTP and serializing back again to Elasticsearch: that has a negative impact on performance for no real added value.

    The code itself is quite straightforward:

[source,kotlin]
----
@SpringBootApplication // <1>
@EnableScheduling // <2>
open class JolokiaElasticApplication {

    @Autowired lateinit var client: JestClient // <6>

    @Bean open fun template() = RestTemplate() // <4>

    @Scheduled(fixedRate = 5000) // <3>
    open fun transfer() {
        val result = template().getForObject( // <5>
            "http://localhost:8080/manage/jolokia/read/org.springframework.metrics:name=status,type=counter,value=beans",
            String::class.java)
        val index = Index.Builder(result)
            .index("metrics")
            .type("metric")
            .id(UUID.randomUUID().toString())
            .build()
        client.execute(index)
    }
}

fun main(args: Array<String>) {
    SpringApplication.run(JolokiaElasticApplication::class.java, *args)
}
----

<1> Of course, it's a Spring Boot application.
<2> To poll at regular intervals, it must be annotated with @EnableScheduling.
<3> And have the polling method annotated with @Scheduled and parameterized with the interval in milliseconds.
<4> In a Spring Boot application, calling HTTP endpoints is achieved through the RestTemplate. Once created - it's a singleton - it can be (re)used throughout the application.
<5> The call result is deserialized into a String.
<6> The client to use is https://github.com/searchbox-io/Jest[Jest^]. Jest offers a dedicated indexing API: it just requires the JSON string to be sent, as well as the index name, the type and the document id. With the Spring Boot Elastic starter on the classpath, a JestClient instance is automatically registered in the bean factory. Just autowire it in the configuration to use it.

At this point, launching the Spring Boot application will poll the business app at regular intervals for the specified metrics and send them to Elasticsearch. It's of course quite crude - everything is hard-coded - but it gets the job done.

    == Conclusion

    Despite the failing plugin, we managed to get the JMX data from the business application to Elasticsearch by using a dedicated Spring Boot app.

  • Become a DevOps with Spring Boot

    :imagesdir: /assets/resources/become-a-devops-with-spring-boot/

Have you ever found yourself in the situation where you've just finished a project and are about to deliver it to the Ops team? You're so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file - and documented, and you even made sure logging has been implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don't know how to monitor the new application. And you missed that... Sure, you could hack something together to fulfill this requirement, but the project is already over budget. In some (most?) companies, this means someone will have to be blamed, and chances are the developer will bear all the burden. Time for some sleepless nights.

Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are part of the features known to most. However, I think there's a hidden gem that should be advertised much more. The Actuator module actually provides metrics and health checks out-of-the-box, as well as an easy way to add your own. In this article, we'll see how to access those metrics over HTTP and send them to JMX and Graphite.
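As a teaser for the "add your own" part, a custom health check is just a bean implementing the Actuator's HealthIndicator interface. The following is a minimal sketch - the class name and the checked condition are made up for illustration:

[source,java]
----
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

// Any bean implementing HealthIndicator is picked up by the Actuator
// and contributes to the /health endpoint.
@Component
public class PetClinicHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        // A real check would e.g. ping the database or a downstream service
        boolean databaseReachable = true;
        return databaseReachable
            ? Health.up().withDetail("database", "reachable").build()
            : Health.down().withDetail("database", "unreachable").build();
    }
}
----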

As an example application, let's use an https://github.com/nfrankel/enhanced-pet-clinic[update of the Spring Pet Clinic made with Boot^] - thanks to Arnaldo Piccinelli for his work. The starting point is https://github.com/nfrankel/enhanced-pet-clinic/commit/790e5d0ff889d69db1b23e63d8d2d63c97863525[commit 790e5d0^]. Now, let's add some metrics in no time.

The first step is to add the Actuator module starter to the Maven POM and let Boot do its magic:

[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
----

    At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics[http://localhost:8090/metrics^] (note that the path is protected by Spring Security, credentials are user/password) to see something like the following:

[source,json]
----
{
  "mem" : 562688,
  "mem.free" : 328492,
  "processors" : 8,
  "uptime" : 26897,
  "instance.uptime" : 18974,
  "heap.committed" : 562688,
  "heap.init" : 131072,
  "heap.used" : 234195,
  "heap" : 1864192,
  "threads.peak" : 20,
  "threads.daemon" : 17,
  "threads" : 19,
  "classes" : 9440,
  "classes.loaded" : 9443,
  "classes.unloaded" : 3,
  "gc.ps_scavenge.count" : 16,
  "gc.ps_scavenge.time" : 104,
  "gc.ps_marksweep.count" : 2,
  "gc.ps_marksweep.time" : 152
}
----
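The same data can also be fetched from the command line, e.g. with curl and the default credentials mentioned above (assuming the standard basic-authentication setup):

[source,bash]
----
# Basic authentication with the user/password credentials from above
curl -u user:password http://localhost:8090/metrics
----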

As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root, new metrics appear:

[source,json]
----
{
  "counter.status.200.metrics" : 1,
  "counter.status.200.root" : 2,
  "counter.status.304.star-star" : 4,
  "counter.status.304.webjars.star-star" : 1,
  "gauge.response.metrics" : 72.0,
  "gauge.response.root" : 16.0,
  "gauge.response.star-star" : 8.0,
  "gauge.response.webjars.star-star" : 11.0,
  ...
}
----

Those metrics are more functional in nature, and they are separated into two groups:

• Gauges are the simplest metrics and return a numeric value e.g. gauge.response.root is the time (in milliseconds) of the last response from the root path
• Counters are metrics which can be incremented/decremented e.g. counter.status.200.metrics is the number of times the /metrics path returned an HTTP 200 code

    At this point, your Ops team could probably scrape the returned JSON and make something out of it. It will be their responsibility to regularly poll the URL and to use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by putting these metrics in JMX.

    Spring Boot integrates easily with http://dropwizard.io/[Dropwizard metrics^]. By just adding the following dependency to the POM, Boot is able to provide a https://github.com/dropwizard/metrics/blob/master/metrics-core/src/main/java/com/codahale/metrics/MetricRegistry.java[MetricRegistry^], a Dropwizard registry for all metrics:

[source,xml]
----
<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>4.0.0-SNAPSHOT</version>
</dependency>
----

    Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

[source,java]
----
@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public JmxReporter jmxReporter() {
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
        return reporter;
    }
}
----

Launching jconsole lets us check that it works all right:

image::jconsole.png[JConsole screenshot,704,551,align="center"]

The Ops team now just needs to get the metrics from JMX and push them into their preferred graphical display tool, such as http://graphite.wikidot.com/[Graphite^]. One way to achieve this is through http://www.jmxtrans.org/[jmx-trans^]. However, it's also possible to send metrics directly to the Graphite server with just a few different API calls:

[source,java]
----
@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot")
                                                    .build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}
----

    The result is quite interesting given the few lines of code:

image::graphite.png[Graphite screenshot,574,423,align="center"]

    Note that going to Graphite using the JMX route makes things easier as there’s no need for a dedicated Graphite server in development environments.
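If direct Graphite reporting is still desired in some environments, a possible compromise - not part of the original setup, just a sketch using a standard Spring profile - is to register the reporter only when a dedicated profile is active:

[source,java]
----
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Only active when the "graphite" profile is enabled,
// e.g. with -Dspring.profiles.active=graphite
@Configuration
@Profile("graphite")
public class GraphiteMonitoringConfig {

    @Autowired
    private MetricRegistry registry;

    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot")
                                                    .build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}
----

This way, developer workstations keep the JMX-only setup while e.g. staging and production can report straight to Graphite.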

Categories: Java Tags: devops, metrics, spring boot
  • Metrics, metrics everywhere

    :imagesdir: /assets/resources/metrics-metrics-everywhere/

    With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and in which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind) and metrics might also be required for different architectures. And having all metrics in the same backend means they can be correlated easily.
. Metrics, as any other +++NFR+++ (e.g. logging and exception handling), should be designed and managed upfront and not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize +++NFR+++ were important. Trust me when I say it was gory - and it cost much more than if it had been designed in the early phases of the project.
. Metrics have an overhead. However, without metrics, it's not possible to increase performance. Just accept that and live with it.

    The inputs are the following: the application is http://docs.spring.io/spring/docs/current/spring-framework-reference/html/mvc.html[Spring MVC^]-based and metrics have to be aggregated in http://graphite.wikidot.com/[Graphite^]. We will start by using the excellent https://dropwizard.github.io/metrics/[Metrics project^]: not only does it get the job done, its https://dropwizard.github.io/metrics/3.1.0/manual/[documentation^] is of very high quality and it’s available under the friendly OpenSource Apache v2.0 license.

    That said, let’s imagine a “standard” base architecture to manage those components.

First, though Metrics offers a Graphite endpoint, using it would require configuration in each environment, which makes things harder, especially on developers' workstations. To manage this, we'll send metrics to JMX and introduce http://www.jmxtrans.org/[jmxtrans^] as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration when none is needed - and has no impact on performance.

Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are - or sometimes because we didn't browse the documentation. My point of view as a software engineer is that I'd rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already http://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html#production-ready-code-hale-metrics[integrates with Metrics^] through the Actuator component. However, it only provides GaugeService - to send unique values - and CounterService - to increment/decrement values. This might be good enough for +++FR+++ but not for +++NFR+++, so we might want to tweak things a little.
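For the record, here's roughly what using those two Actuator services looks like - a sketch with a made-up business service, not code from this article's project:

[source,java]
----
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.metrics.CounterService;
import org.springframework.boot.actuate.metrics.GaugeService;
import org.springframework.stereotype.Service;

@Service
public class CheckoutMetrics {

    private final CounterService counterService;
    private final GaugeService gaugeService;

    @Autowired
    public CheckoutMetrics(CounterService counterService, GaugeService gaugeService) {
        this.counterService = counterService;
        this.gaugeService = gaugeService;
    }

    public void onCheckout(double amount) {
        // Increment a business counter each time a checkout completes
        counterService.increment("checkout.completed");
        // Submit the latest checkout amount as a gauge value
        gaugeService.submit("checkout.amount", amount);
    }
}
----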

    The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

The starting point is to create an aspect, as performance metrics are a cross-cutting concern:

[source,java]
----
@Aspect
public class MetricAspect {

    private final MetricSender metricSender;

    @Autowired
    public MetricAspect(MetricSender metricSender) {
        this.metricSender = metricSender;
    }

    @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) || execution(* ch.frankel.blog.metrics.dice..*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {

        StopWatch stopWatch = metricSender.getStartedStopWatch();

        try {
            return pjp.proceed();
        } finally {
            Class<?> clazz = pjp.getTarget().getClass();
            String methodName = pjp.getSignature().getName();
            metricSender.stopAndSend(stopWatch, clazz, methodName);
        }
    }
}
----

The only thing out of the ordinary is the use of autowiring, as aspects don't seem to be able to be the target of explicit wiring (yet?). Also notice the aspect itself doesn't interact with the Metrics API; it only delegates to a dedicated component:

[source,java]
----
public class MetricSender {

    private final MetricRegistry registry;

    public MetricSender(MetricRegistry registry) {
        this.registry = registry;
    }

    private Histogram getOrAdd(String metricsName) {
        Map<String, Histogram> registeredHistograms = registry.getHistograms();
        Histogram registeredHistogram = registeredHistograms.get(metricsName);
        if (registeredHistogram == null) {
            Reservoir reservoir = new ExponentiallyDecayingReservoir();
            registeredHistogram = new Histogram(reservoir);
            registry.register(metricsName, registeredHistogram);
        }
        return registeredHistogram;
    }

    public StopWatch getStartedStopWatch() {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        return stopWatch;
    }

    private String computeMetricName(Class<?> clazz, String methodName) {
        return clazz.getName() + '.' + methodName;
    }

    public void stopAndSend(StopWatch stopWatch, Class<?> clazz, String methodName) {
        stopWatch.stop();
        String metricName = computeMetricName(clazz, methodName);
        getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
    }
}
----

    The sender does several interesting things (but with no state):

    • It returns a new StopWatch for the aspect to pass back after method execution
    • It computes the metric name depending on the class and the method
    • It stops the StopWatch and sends the time to the MetricRegistry
• Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our needs.

    The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method.

[source,java]
----
@Configuration
public class MetricsConfiguration {

    @Autowired
    private MetricRegistry metricRegistry;

    @Bean
    public MetricSender metricSender() {
        return new MetricSender(metricRegistry);
    }

    @PostConstruct
    public void connectRegistryToJmx() {
        JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
        reporter.start();
    }
}
----

JConsole should look like the following. Icing on the cake, all default Spring Boot metrics are also available:

image::jconsole.png[JConsole screenshot,662,452,align="center"]

    Sources for this article are link:/assets/resources/metrics-metrics-everywhere/spring-metrics-1.0.0.zip[available] in Maven “format”.