Archive

Posts Tagged ‘devops’
  • Exploratory Infrastructure projects

    Exploring the jungle

    :imagesdir: /assets/resources/exploratory-infrastructure-projects

Nowadays, most companies use one or another Agile methodology for their software development projects. That makes people involved in software development projects at least aware of agile principles - whether they truly try to follow agile practices or just pay lip service to them for a variety of reasons remains debatable. To avoid any association with tainted practices, I’d rather use the name “Exploratory Development”. As with software development, exploration starts with a vague feeling of the final target destination, and a more or less detailed understanding of how to get there.footnoteref:[joke, This is an HTML joke, because “emphasis on less”] Plus, on a map, the plotted path is generally not a straight line.

However, even with the rise of the DevOps movement, the operational side of projects still seems oblivious to what happens on the other side of the curtain. This post aims to provide food for thought about how true Agile can be applied to Ops.

    == The legacy approach

As an example, let’s take an e-commerce application. Sometimes, bad stuff happens and the development team needs to access logs to analyze what happened. In general, due to “security” reasons, developers cannot directly access the system, and need to ask the operations team to send the log file(s). This all-too-common process results in frustration and wasted time, and contributes to building even taller walls between Dev and Ops. To improve the situation, an option could be to set up a datastore to store the logs in, and a webapp to make them available to developers.

    Here are some hypotheses regarding the source architecture components:

    • Load-balancer
    • Apache Web server
    • Apache Tomcat servlet container, where the e-commerce application is deployed
• Solr server, which provides search and faceting features to the app

In turn, relevant data/logs that need to be stored include:

    • Web server logs
    • Application logs proper
    • Tomcat technical logs
    • JMX data
    • Solr logs

    Let’s implement those requirements with the Elastic stack. The target infrastructure could look like this:

image::target-architecture.png[Target architecture,947,472,align="center"]

Defining the architecture is generally not enough. Unless one works for a dream company that empowers employees to improve the situation on their own initiative (if you know one, please send it my resume), chances are there’s a need for some estimates regarding the setup of this architecture. And that goes double when it’s done for a customer company. You might push back, stating the obvious:

image::wewillaskfortestimates.jpg[We will ask for estimates and then treat them as deadlines,502,362,align="center"]

Engineers will probably think that managers are asking them to stick their neck out and produce estimates for no real reason, and push back. But the latter will kindly remind the former to “just” base estimates on assumptions, as if it were a real solution instead of plausible deniability. In other words, an assumption is a way to escape blame for a wrong estimate - and yes, it sounds much closer to contract law than to engineering concepts. That means that if any of the listed assumptions is not fulfilled, then it’s acceptable for estimates to be wrong. It also implies estimates are then not considered a deadline anymore - which they never were supposed to be in the first place, if only from a semantic viewpoint.footnoteref:[guesstimate, If an estimate needs to really be taken as an estimate, the compound word guesstimate is available. Aren’t enterprise semantics wonderful?]

    Notwithstanding all that, and for the sake of the argument, let’s try to review possible assumptions pertaining to the proposed architecture that might impact implementation time:

    • Hardware location: on-premise, cloud-based or a mix of both?
• Underlying operating system(s): *Nix, Windows, something else, or a mix?
    • Infrastructure virtualization degree: is the infrastructure physical, virtualized, both?
    • Co-location of the OS: are there requirements regarding the location of components on physical systems?
    • Hardware availability: should there be a need to purchase physical machines?
    • Automation readiness: is there any solution already in place to automate infrastructure management? If not, in how many environments will the implementation need to be setup and if more than 2, will replication be handled manually?
    • Clustering: is any component clustered? Which one(s)? For the application, is there a session replication solution in place? Which one?
• Infrastructure access: does one need to be on-site? Is a hardware security token required? A software one?

And those are quite basic items regarding hardware only. Other wide areas include software (volume of logs, criticality, hardware/software mis/match, versions, etc.), people (in-house support, etc.), planning (vacation seasons, etc.), and I’m probably forgetting some important ones too. Given the sheer number of items - and assuming they have all been listed - it stands to reason that at least one assumption will prove wrong, hence making the final estimates dead wrong. In that case, playing the estimate game is just another way to provide plausible deniability. A much more useful alternative would be to create an n-dimension matrix of all items and estimate all possible combinations. But as in software projects, the event space has just too many parameters to do that in an acceptable timeframe.

    == Proposal for an alternative

That said, what about a real working alternative, one that might not satisfy dashboard managers but the underlying business? It would start by implementing the most basic requirement, then add more features until it’s good enough, or enough budget has been spent. Here are some possible steps from the above example:

Foundation setup::
The initial goal of the setup is to enable log access, and the most important logs are the application logs. Hence, the first setup is the following:
+
image::foundation-architecture.png[Foundation architecture,450,294,align="center"]

More JVM logs::
From this point on, a near-zero effort is to also scrape Tomcat’s logs, which helps incident analysis by adding correlation.
+
image::more-jvm-logs.png[More JVM logs,608,294,align="center"]

Machine decoupling::
The next logical step is to move the Elasticsearch instance to its own dedicated machine, to add an extra level of modularity to the overall architecture.
+
image::machine-decoupling.png[Machine decoupling,739,365,align="center"]

Even more logs::
At this point, additional logs from other components - load balancer, Solr server, etc. - can be sent to Elasticsearch to improve issue-solving involving different components.

Performance improvement::
Given that Logstash is written in Ruby, running it directly alongside a component might cause performance issues, depending on each machine’s specific load and performance. Elastic realized this some time ago and now offers better performance via dedicated Beats: every Logstash instance can be replaced by Filebeat.

Not only logs::
With the Jolokia library, it’s possible to expose JMX beans through an HTTP interface. Unfortunately, only a few Beats are available and none of them handles HTTP. However, Logstash with the http-poller plugin gets the job done (see the sketch after this list).

Reliability::
In order to improve reliability, Elasticsearch can be clustered.
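To make the last two points more concrete, here is a minimal sketch of a Logstash pipeline using the http-poller input to scrape a Jolokia endpoint and ship the result to Elasticsearch. The URL, polling schedule and index name are assumptions, not part of the architecture above:

[source]
----
input {
  http_poller {
    urls => {
      # Jolokia endpoint exposed by the business app (URL is an assumption)
      jolokia => "http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage"
    }
    # Older plugin versions configure the frequency with 'interval', newer ones with 'schedule'
    schedule => { "every" => "10s" }
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "metrics"
  }
}
----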

The good thing about those steps is that they can be implemented in (nearly) any order. This means that after laying out the base foundation - the first step - the stakeholder can decide which one makes sense for their specific context, or stop because the added value is sufficient.

    At this point, estimates still might make sense regarding the first step. But after eliminating most complexity (and its related uncertainty), it feels much more comfortable estimating the setup of an Elastic stack in a specific context.

    == Conclusion

As stated above, whether Agile principles are implemented in software development projects can be subject to debate. However, my feeling is that they have not reached the Ops sphere yet. That’s a shame, because as in Development, projects can truly benefit from real Agile practices. However, to prevent association with Agile cargo cult, I proposed the use of the term “Exploratory Infrastructure”. This post described a proposal to apply such an exploratory approach to a sample infrastructure project. The main drawback of such an approach is that it will cost more, as the path is not a straight line; the main benefit is that at every step, the stakeholder can choose to pursue or stop, taking into account the law of diminishing returns.

Categories: Technical Tags: agile, infrastructure, ops, devops
  • Feeding Spring Boot metrics to Elasticsearch


    :imagesdir: /assets/resources/feeding-spring-boot-metrics-to-elasticsearch/

    This week’s post aims to describe how to send JMX metrics taken from the JVM to an Elasticsearch instance.

    == Business app requirements

    The business app(s) has some minor requirements.

    The easiest use-case is to start from a Spring Boot application. In order for metrics to be available, just add the Actuator dependency to it:

[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
----

Note that when inheriting from spring-boot-starter-parent, setting the version is not necessary, as it is taken from the parent POM.

    To send data to JMX, configure a brand-new @Bean in the context:

[source,java]
----
@Bean
@ExportMetricWriter
MetricWriter metricWriter(MBeanExporter exporter) {
    return new JmxMetricWriter(exporter);
}
----

    == To-be architectural design

    There are several options to put JMX data into Elasticsearch.

    === Possible options

. The most straightforward way is to use Logstash with the https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jmx.html[JMX plugin^]
. Alternatively, one can hack his own micro-service architecture:

• Let the application send metrics to JMX - there’s the Spring Boot actuator for that, and the overhead is pretty limited
    • Have a feature expose JMX data on an HTTP endpoint using https://jolokia.org/[Jolokia^]
• Have a dedicated app poll the endpoint and send data to Elasticsearch
+
This way, every component has its own responsibility, there’s not much performance overhead and the metric-handling part can fail while the main app is still available.

. An alternative would be to directly poll the JMX data from the JVM

    === Unfortunate setback

    Any architect worth his salt (read lazy) should always consider the out-of-the-box option. The Logstash JMX plugin looks promising. After installing the plugin, the jmx input can be configured into the Logstash configuration file:

[source]
----
input {
  jmx {
    path => "/var/logstash/jmxconf"
    polling_frequency => 5
    type => "jmx"
  }
}

output {
  stdout { codec => rubydebug }
}
----

    The plugin is designed to read JVM parameters (such as host and port), as well as the metrics to handle from JSON configuration files. In the above example, they will be watched in the /var/logstash/jmxconf folder. Moreover, they can be added, removed and updated on the fly.

    Here’s an example of such configuration file:

[source]
----
{
  "host" : "localhost",
  "port" : 1616,
  "alias" : "petclinic",
  "queries" : [{
    "object_name" : "org.springframework.metrics:name=*,type=*,value=*",
    "object_alias" : "${type}.${name}.${value}"
  }]
}
----

An MBean’s ObjectName can be determined from inside JConsole:

image::jconsole.png[JConsole screenshot,739,314,align="center"]

The plugin allows wildcards in the metric’s name and usage of captured values in the alias. Also, by default, all attributes will be read (they can be restricted if necessary).

    Note: when starting the business app, it’s highly recommended to set the JMX port through the com.sun.management.jmxremote.port system property.
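For illustration, launching the business app with JMX remoting enabled might look like the following. The jar name is hypothetical, and the flags disabling authentication and SSL are assumptions suitable only for a local setup:

[source]
----
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=1616 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar petclinic.jar
----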

    Unfortunately, at the time of this writing, running the above configuration fails with messages of this kind:


----
[WARN][logstash.inputs.jmx] Failed retrieving metrics for attribute Value on object blah blah blah
[WARN][logstash.inputs.jmx] undefined method `event' for =
----

For reference purposes, the GitHub issue can be found https://github.com/logstash-plugins/logstash-input-jmx/issues/26[here^].

== The do-it-yourself alternative

    Considering it’s easier to poll HTTP endpoints than JMX - and that implementations already exist, let’s go for option 3 above. Libraries will include:

• Spring Boot for the business app
• With the Actuator starter to provide metrics
• Configured with the JMX exporter for sending data
• Also with the dependency to expose JMX beans on an HTTP endpoint
• Another Spring Boot app for the “poller”
• Configured with a scheduled service to regularly poll the endpoint and send the data to Elasticsearch

image::component-diagram.png[Draft architecture component diagram,535,283,align="center"]

    === Additional business app requirement

    To expose the JMX data over HTTP, simply add the Jolokia dependency to the business app:

[source,xml]
----
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
</dependency>
----

    From this point on, one can query for any JMX metric via the HTTP endpoint exposed by Jolokia - by default, the full URL looks like /jolokia/read/<JMX_ObjectName>.
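A quick manual check is possible with curl. The following is only a sketch - the host, port, MBean and returned figures are illustrative - but the response structure (request, value, status) is the one Jolokia uses:

[source]
----
$ curl http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage
{"request":{"mbean":"java.lang:type=Memory","attribute":"HeapMemoryUsage","type":"read"},
 "value":{"init":131072,"committed":562688,"max":1864192,"used":234195},"status":200}
----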

    === Custom-made broker

    The broker app responsibilities include:

    • reading JMX metrics from the business app through the HTTP endpoint at regular intervals
    • sending them to Elasticsearch for indexing

My initial move was to use Spring Data, but it seems the current release is not compatible with the latest Elasticsearch 5 version, as I got the following exception:


----
java.lang.IllegalStateException: Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]
----

    Besides, Spring Data is based on entities, which implies deserializing from HTTP and serializing back again to Elasticsearch: that has a negative impact on performance for no real added value.

    The code itself is quite straightforward:

[source,kotlin]
----
@SpringBootApplication                              <1>
@EnableScheduling                                   <2>
open class JolokiaElasticApplication {

    @Autowired lateinit var client: JestClient      <6>

    @Bean open fun template() = RestTemplate()      <4>

    @Scheduled(fixedRate = 5000)                    <3>
    open fun transfer() {
        val result = template().getForObject(       <5>
            "http://localhost:8080/manage/jolokia/read/org.springframework.metrics:name=status,type=counter,value=beans",
            String::class.java)
        val index = Index.Builder(result)
            .index("metrics")
            .type("metric")
            .id(UUID.randomUUID().toString())
            .build()
        client.execute(index)
    }
}

fun main(args: Array<String>) {
    SpringApplication.run(JolokiaElasticApplication::class.java, *args)
}
----

<1> Of course, it’s a Spring Boot application.
<2> To poll at regular intervals, it must be annotated with @EnableScheduling.
<3> And have the polling method annotated with @Scheduled and parameterized with the interval in milliseconds.
<4> In a Spring Boot application, calling HTTP endpoints is achieved through the RestTemplate. Once created - it’s a singleton - it can be (re)used throughout the application.
<5> The call result is deserialized into a String.
<6> The client to use is https://github.com/searchbox-io/Jest[Jest^]. Jest offers a dedicated indexing API: it just requires the JSON string to be sent, as well as the index name, the type name and its id. With the Spring Boot Elastic starter on the classpath, a JestClient instance is automatically registered in the bean factory. Just autowire it in the configuration to use it.

At this point, launching the Spring Boot application will poll the business app at regular intervals for the specified metrics and send them to Elasticsearch. It’s of course quite crude - everything is hard-coded - but it gets the job done.
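To verify the whole chain, one can query Elasticsearch directly. A minimal check might look like the following, assuming Elasticsearch runs on localhost; the index and type names match the code above:

[source]
----
# Fetch one of the indexed metric documents
$ curl http://localhost:9200/metrics/metric/_search?size=1
----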

    == Conclusion

    Despite the failing plugin, we managed to get the JMX data from the business application to Elasticsearch by using a dedicated Spring Boot app.

  • More DevOps for Spring Boot

    :page-liquid: :experimental:

    I think Spring Boot brings something new to the table, especially concerning DevOps - and I’ve already written a link:{% post_url 2015-03-08-become-a-devops-with-spring-boot %}[post about it^]. However, there’s more than metrics and healthchecks.

In link:{% post_url 2015-04-06-whats-the-version-of-my-deployed-application %}[another^] of my previous posts, I described how to provide versioning information for Maven-built applications. This article will describe how the approach from that post is not necessary when using Spring Boot.

As a reminder, just adding the spring-boot-starter-actuator dependency in the POM enables many endpoints, among them:

    • /metrics for monitoring the application
    • /health to check the application can deliver the expected service
• /beans lists all Spring beans in the context
    • /configprops lists all properties regarding the running profile(s) (if any)

    Among those, one of them is of specific interest: /info. By default, it displays… nothing - or more precisely, the string representation of an empty JSON object.

    However, any property set in the application.properties file (or one of its profile flavor) will find its way into the page. For example:

[options="header"]
|===
2+h| Property file .2+h| Output
h| Key
h| Value

| info.application.name
| My Demo App
a|
[source,json]
----
{
  "application" : {
    "name" : "My Demo App"
  }
}
----
|===

Setting static info is sure nice, but our objective is to get the version of the application into Spring Boot. application.properties files are automatically filtered by Spring Boot during the process-resources build phase. Any property in the POM can be used: it just needs to be set between @ characters. For example:

[options="header"]
|===
2+h| Property file .2+h| Output
h| Key
h| Value

| info.application.version
| @project.version@
a|
[source,json]
----
{
  "application" : {
    "version" : "0.0.1-SNAPSHOT"
  }
}
----
|===

    Note that Spring Boot Maven plugin will remove the generated resources, and thus the application will use the unfiltered resource properties file from the sources. In order to keep (and use) the generated resources instead, configure the plugin in the POM like this:

[source,xml]
----
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <addResources>false</addResources>
    </configuration>
</plugin>
----

At this point, we have the equivalent of the previous article, but we can go even further. The maven-git-commit-id-plugin will generate a git.properties file stuffed with all possible git-related information. The following snippet is an example of the produced file:


----
#Generated by Git-Commit-Id-Plugin
#Fri Jul 10 23:36:40 CEST 2015
git.tags=
git.commit.id.abbrev=bf4afbf
[email protected]
git.commit.message.full=Initial commit\n
git.commit.id=bf4afbf167d51909bd984c35ad5b85a66b9c44b9
git.commit.id.describe-short=bf4afbf
git.commit.message.short=Initial commit
git.commit.user.name=Nicolas Frankel
git.build.user.name=Nicolas Frankel
git.commit.id.describe=bf4afbf
[email protected]
git.branch=master
git.commit.time=2015-07-10T23:34:46+0200
git.build.time=2015-07-10T23:36:40+0200
git.remote.origin.url=Unknown
----
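For reference, declaring the plugin in the POM can be as simple as the following sketch, assuming the usual pl.project13.maven coordinates and a version managed elsewhere (e.g. by the parent POM):

[source,xml]
----
<plugin>
    <groupId>pl.project13.maven</groupId>
    <artifactId>git-commit-id-plugin</artifactId>
</plugin>
----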

    From all of this data, only the following are used in the endpoint:

[options="header"]
|===
h| Key h| Output

| git.branch
.3+a|
[source,json]
----
{
  "git" : {
    "branch" : "master",
    "commit" : {
      "id" : "bf4afbf",
      "time" : "2015-07-10T23:34:46+0200"
    }
  }
}
----

| git.commit.id
| git.commit.time
|===

Since the path and the formatting are consistent, you can devise a cron job to parse all your applications and generate a wiki page with all that information, per server/environment. No more having to ssh into the server and dig into the filesystem to uncover the version.
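As an illustration, such a job could be a small script run from cron; the host list, port and output path below are all assumptions:

[source]
----
#!/bin/sh
# Collect the /info endpoint of every known application instance into a dated report
for host in app1.example.com app2.example.com; do
    curl -s "http://$host:8080/info" >> "/var/reports/versions-$(date +%F).json"
done
----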

    Thus, the /info endpoint can be a very powerful asset in your organization, whether you’re a DevOps yourself or only willing to help your Ops.

Categories: Development Tags: devops, spring boot
  • Become a DevOps with Spring Boot

    :imagesdir: /assets/resources/become-a-devops-with-spring-boot/

Have you ever found yourself in the situation of finishing a project and being about to deliver it to the Ops team? You’re so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file - and documented, and you even made sure logging has been implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don’t know how to monitor the new application. And you missed that… Sure, you could hack something to fulfill this requirement, but the project is already over-budget. In some (most?) companies, this means someone will have to be blamed and chances are the developer will bear all the burden. Time for some sleepless nights.

Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are part of the features known to most. However, I think there’s a hidden gem that should be much more advertised. The actuator module actually provides metrics and health checks out-of-the-box, as well as an easy way to add your own. In this article, we’ll see how to access those metrics from HTTP and send them to JMX and Graphite.
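Regarding the “easy way to add your own” mentioned above, a custom health check boils down to exposing a HealthIndicator bean. The following is only a sketch - the repository used to ping the database is hypothetical:

[source,java]
----
@Component
public class DatabaseHealthIndicator implements HealthIndicator {

    // Hypothetical repository used to check that the database is reachable
    @Autowired
    private PetRepository petRepository;

    @Override
    public Health health() {
        try {
            petRepository.count();
            return Health.up().build();
        } catch (Exception e) {
            return Health.down(e).build();
        }
    }
}
----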

As an example application, let’s use an https://github.com/nfrankel/enhanced-pet-clinic[update of the Spring Pet Clinic made with Boot^] - thanks to Arnaldo Piccinelli for his work. The starting point is https://github.com/nfrankel/enhanced-pet-clinic/commit/790e5d0ff889d69db1b23e63d8d2d63c97863525[commit 790e5d0^]. Now, let’s add some metrics in no time.

The first step is to add the actuator module starter in the Maven POM and let Boot do its magic:

[source,xml]
----
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
----

    At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics[http://localhost:8090/metrics^] (note that the path is protected by Spring Security, credentials are user/password) to see something like the following:

[source,json]
----
{
  "mem" : 562688,
  "mem.free" : 328492,
  "processors" : 8,
  "uptime" : 26897,
  "instance.uptime" : 18974,
  "heap.committed" : 562688,
  "heap.init" : 131072,
  "heap.used" : 234195,
  "heap" : 1864192,
  "threads.peak" : 20,
  "threads.daemon" : 17,
  "threads" : 19,
  "classes" : 9440,
  "classes.loaded" : 9443,
  "classes.unloaded" : 3,
  "gc.ps_scavenge.count" : 16,
  "gc.ps_scavenge.time" : 104,
  "gc.ps_marksweep.count" : 2,
  "gc.ps_marksweep.time" : 152
}
----

As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root page, new metrics appear:

[source,json]
----
{
  "counter.status.200.metrics" : 1,
  "counter.status.200.root" : 2,
  "counter.status.304.star-star" : 4,
  "counter.status.304.webjars.star-star" : 1,
  "gauge.response.metrics" : 72.0,
  "gauge.response.root" : 16.0,
  "gauge.response.star-star" : 8.0,
  "gauge.response.webjars.star-star" : 11.0,
  ...
}
----

Those metrics are more functional in nature, and they are separated into two groups:

    • Gauges are the simplest metrics and return a numeric value e.g. gauge.response.root is the time (in milliseconds) of the last response from the /metrics path
• Counters are metrics which can be incremented/decremented e.g. counter.status.200.metrics is the number of times the /metrics path returned an HTTP 200 code

    At this point, your Ops team could probably scrape the returned JSON and make something out of it. It will be their responsibility to regularly poll the URL and to use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by putting these metrics in JMX.

    Spring Boot integrates easily with http://dropwizard.io/[Dropwizard metrics^]. By just adding the following dependency to the POM, Boot is able to provide a https://github.com/dropwizard/metrics/blob/master/metrics-core/src/main/java/com/codahale/metrics/MetricRegistry.java[MetricRegistry^], a Dropwizard registry for all metrics:

[source,xml]
----
<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
    <version>4.0.0-SNAPSHOT</version>
</dependency>
----

    Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

[source,java]
----
@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;
    
    @Bean
    public JmxReporter jmxReporter() {
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
        return reporter;
    }
}
----
    

Launching jconsole lets us check it works alright:

image::jconsole.png[JConsole screenshot,704,551,align="center"]

    The Ops team now just needs to get metrics from JMX and push them into their preferred graphical display tool, such as http://graphite.wikidot.com/[Graphite^]. One such way to achieve this is through http://www.jmxtrans.org/[jmx-trans^]. However, it’s also possible to directly send metrics to the Graphite server with just a few different API calls:

[source,java]
----
@Configuration
public class MonitoringConfig {

    @Autowired
    private MetricRegistry registry;
    
    @Bean
    public GraphiteReporter graphiteReporter() {
        Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                                                    .prefixedWith("boot").build(graphite);
        reporter.start(500, TimeUnit.MILLISECONDS);
        return reporter;
    }
}
----
    

    The result is quite interesting given the few lines of code:

image::graphite.png[Graphite screenshot,574,423,align="center"]

    Note that going to Graphite using the JMX route makes things easier as there’s no need for a dedicated Graphite server in development environments.

Categories: Java Tags: devops, metrics, spring boot
  • Metrics, metrics everywhere

    :imagesdir: /assets/resources/metrics-metrics-everywhere/

    With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and in which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind) and metrics might also be required for different architectures. And having all metrics in the same backend means they can be correlated easily.
. Metrics, as any other +++NFR+++ (e.g. logging and exception handling), should be designed and managed upfront and not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize +++NFR+++ were important. Trust me when I say it was gory - and it cost much more than if it had been designed in the early phases of the project.
. Metrics have an overhead. However, without metrics, it’s not possible to increase performance. Just accept that and live with it.

    The inputs are the following: the application is http://docs.spring.io/spring/docs/current/spring-framework-reference/html/mvc.html[Spring MVC^]-based and metrics have to be aggregated in http://graphite.wikidot.com/[Graphite^]. We will start by using the excellent https://dropwizard.github.io/metrics/[Metrics project^]: not only does it get the job done, its https://dropwizard.github.io/metrics/3.1.0/manual/[documentation^] is of very high quality and it’s available under the friendly OpenSource Apache v2.0 license.

    That said, let’s imagine a “standard” base architecture to manage those components.

First, though Metrics offers a Graphite endpoint, this requires configuration in each environment and makes things harder, especially on developers’ workstations. To manage this, we’ll send metrics to JMX and introduce http://www.jmxtrans.org/[jmxtrans^] as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration when none is needed - and has no impact on performance.
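To give an idea of what that middle component needs, a jmxtrans configuration file might look like the following sketch; hosts, ports and the queried ObjectName are assumptions:

[source,json]
----
{
  "servers" : [ {
    "host" : "localhost",
    "port" : "1616",
    "queries" : [ {
      "obj" : "metrics:name=*",
      "outputWriters" : [ {
        "@class" : "com.googlecode.jmxtrans.model.output.GraphiteWriter",
        "settings" : {
          "host" : "graphite.example.com",
          "port" : 2003
        }
      } ]
    } ]
  } ]
}
----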

Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are - or sometimes because we didn’t browse the documentation. My point of view as a software engineer is that I’d rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already http://docs.spring.io/spring-boot/docs/current/reference/html/production-ready-metrics.html#production-ready-code-hale-metrics[integrates with Metrics^] through the Actuator component. However, it only provides GaugeService - to send unique values, and CounterService - to increment/decrement values. This might be good enough for +++FR+++ but not for +++NFR+++, so we might want to tweak things a little.
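For the record, those two services are used along these lines; the checkout service and the metric names below are hypothetical:

[source,java]
----
@Service
public class CheckoutService {

    private final CounterService counterService;
    private final GaugeService gaugeService;

    @Autowired
    public CheckoutService(CounterService counterService, GaugeService gaugeService) {
        this.counterService = counterService;
        this.gaugeService = gaugeService;
    }

    public void checkout(Order order) {
        // Business KPI: count completed checkouts
        counterService.increment("counter.checkout.completed");
        // Send a unique value: the amount of the last order
        gaugeService.submit("gauge.checkout.amount", order.getAmount());
    }
}
----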

    The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

    The starting point is to create an aspect, as performance metric is a cross-cutting concern:

[source,java]
----
@Aspect
public class MetricAspect {

    private final MetricSender metricSender;
    
    @Autowired
    public MetricAspect(MetricSender metricSender) {
        this.metricSender = metricSender;
    }
    
    @Around("execution(* ch.frankel.blog.metrics.ping..*(..)) ||execution(* ch.frankel.blog.metrics.dice..*(..))")
    public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
    
        StopWatch stopWatch = metricSender.getStartedStopWatch();
    
        try {
            return pjp.proceed();
        } finally {
            Class<?> clazz = pjp.getTarget().getClass();
            String methodName = pjp.getSignature().getName();
            metricSender.stopAndSend(stopWatch, clazz, methodName);
        }
    }
}
----
    

    The only thing outside of the ordinary is the usage of autowiring as aspects don’t seem to be able to be the target of explicit wiring (yet?). Also notice the aspect itself doesn’t interact with the Metrics API, it only delegates to a dedicated component:

[source,java]
----
public class MetricSender {

    private final MetricRegistry registry;
    
    public MetricSender(MetricRegistry registry) {
        this.registry = registry;
    }
    
    private Histogram getOrAdd(String metricsName) {
        Map<String, Histogram> registeredHistograms = registry.getHistograms();
        Histogram registeredHistogram = registeredHistograms.get(metricsName);
        if (registeredHistogram == null) {
            Reservoir reservoir = new ExponentiallyDecayingReservoir();
            registeredHistogram = new Histogram(reservoir);
            registry.register(metricsName, registeredHistogram);
        }
        return registeredHistogram;
    }
    
    public StopWatch getStartedStopWatch() {
        StopWatch stopWatch = new StopWatch();
        stopWatch.start();
        return stopWatch;
    }
    
    private String computeMetricName(Class<?> clazz, String methodName) {
        return clazz.getName() + '.' + methodName;
    }
    
    public void stopAndSend(StopWatch stopWatch, Class<?> clazz, String methodName) {
        stopWatch.stop();
        String metricName = computeMetricName(clazz, methodName);
        getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
    }
}
----
    

    The sender does several interesting things (but with no state):

    • It returns a new StopWatch for the aspect to pass back after method execution
    • It computes the metric name depending on the class and the method
    • It stops the StopWatch and sends the time to the MetricRegistry
• Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our need.

    The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method.

[source,java]
----
@Configuration
public class MetricsConfiguration {

    @Autowired
    private MetricRegistry metricRegistry;
    
    @Bean
    public MetricSender metricSender() {
        return new MetricSender(metricRegistry);
    }
    
    @PostConstruct
    public void connectRegistryToJmx() {
        JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
        reporter.start();
    }
}
----
    

JConsole should look like the following. As icing on the cake, all default Spring Boot metrics are also available:

image::jconsole.png[JConsole screenshot,662,452,align="center"]

    Sources for this article are link:/assets/resources/metrics-metrics-everywhere/spring-metrics-1.0.0.zip[available] in Maven “format”.

  • Vagrant your Drupal

    :page-liquid: :experimental:

In one of my link:{% post_url 2012-03-18-i-made-the-step-forward-into-virtualization %}[recent posts^], I described how I used VMware to create a Drupal instance I could play with before deploying updates to http://morevaadin.com/[morevaadin.com^]. Then, at Devoxx France, I attended a session where the speaker detailed how he set up a whole infrastructure for after-work training sessions with Vagrant.

Meanwhile, a little turn of fate put me in charge of some Drupal projects and I had to get better at it… fast. I put my hands on the http://definitivedrupal.org/[Definitive Guide to Drupal 7^], which talks about using Drupal with Vagrant. This was definitely too much: I decided to take this opportunity to automatically manage my own Drupal infrastructure. These are the steps I followed and the lessons I learned.

    NOTE: My host OS is Windows 7 🙂

    == Download VirtualBox

Oracle’s VirtualBox is the virtualization product used by Vagrant. Go to their https://www.virtualbox.org/wiki/Downloads[download page^] and take your pick.

    == Download Vagrant

The Vagrant download page is http://downloads.vagrantup.com/[here^]. Once Vagrant is installed on your system, you should put its bin directory on your PATH.

Now, get the Ubuntu Lucid Lynx box ready with:

[source]
----
vagrant box add base http://files.vagrantup.com/lucid32.box
----

This downloads the box into your %USER_HOME%/.vagrant.d/boxes directory, under the lucid32 folder (at least on Windows).

    == Get Drupal Vagrant project

    Download the current version of https://drupal.org/project/vagrant[Drupal Vagrant^] and extract it in the directory of your choice. Edit the Vagrantfile according to the following guidelines:

[source]
----
config.vm.box = "lucid32"                    // References the right box
…
config.vm.network :hostonly, "33.33.33.10"   // Creates a machine with this IP
----

    Then, boot the virtual box with vagrant up and let Vagrant take care of all things (boot the VM, get the necessary applications, configure all, etc.).

Update your etc/hosts file to have the 2 following domains point to 33.33.33.10:

[source]
----
33.33.33.10 drupal.vbox.local
33.33.33.10 dev-site.vbox.local
----

At the end of the process (it could be a lengthy one), browsing on your host system to http://33.33.33.10[http://drupal.vbox.local/install.php^] should get you the familiar Drupal installation screen. You’re on!

    == SSH into the virtual box

Now is the time to get into the guest system, with vagrant ssh.

If on Windows, here comes the hard part. Since there’s no SSH utility out-of-the-box, you have to get one. Personally, I used http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html[PuTTY^]. Note that the VM uses an SSH key for authentication and, unfortunately, the format of Vagrant’s provided key is not compatible with PuTTY, so we have to use PuTTYgen to translate %USER_HOME%/.vagrant.d/insecure_private_key into a format PuTTY is able to use. When done, connect with PuTTY to the system (finally).

    == Conclusion

All in all, this approach works alright, although https://drupal.org/project/drush[Drush^] is present in /usr/share/drush but doesn’t seem to work (http://git-scm.com/[Git^] is installed and works fine).

[NOTE]
====
I recently stumbled upon this other http://community.opscode.com/cookbooks/drupal[Drupal cookbook^] but it cannot be used as is. Better DevOps than me can probably fix it.
====

Categories: Development Tags: cms, devops, drupal