Posts Tagged ‘devops’
  • Exploratory Infrastructure projects

    Exploring the jungle

    :imagesdir: /assets/resources/exploratory-infrastructure-projects

    Nowadays, most companies use one or another Agile methodology for their software development projects. That makes people involved in software development projects at least aware of agile principles - whether they truly try to follow agile practices or just pay lip service to them for a variety of reasons remains debatable. To avoid any association with tainted practices, I’d rather use the name “Exploratory Development”. As with software development, exploration starts with a vague feeling of the final target destination, and a more or less detailed understanding of how to get there.footnoteref:[joke, This is an HTML joke, because “emphasis on less”] Plus, on a map, the plotted path is generally not a straight line.

    However, even with the rise of the DevOps movement, the operational side of projects still seems oblivious to what happens on the other side of the curtain. This post aims to provide food for thought about how true Agile can be applied to Ops.

    == The legacy approach

    As an example, let’s take an e-commerce application. Sometimes, bad stuff happens and the development team needs to access logs to analyze what happened. In general, due to “security” reasons, developers cannot directly access the system, and need to ask the operations team to send the log file(s). This all-too-common process results in frustration and wasted time, and contributes to building even taller walls between Dev and Ops. To improve the situation, an option could be to set up a datastore to store the logs in, and a webapp to make them available to developers.

    Here are some hypotheses regarding the source architecture components:

    • Load-balancer
    • Apache Web server
    • Apache Tomcat servlet container, where the e-commerce application is deployed
    • Solr server, which provides search and faceting features to the app

    In turn, relevant data/logs that need to be stored include:

    • Web server logs
    • Application logs proper
    • Tomcat technical logs
    • JMX data
    • Solr logs

    Let’s implement those requirements with the Elastic stack. The target infrastructure could look like this:

    image::target-architecture.png[Target architecture,947,472,align="center"]

    Defining the architecture is generally not enough. Unless one works for a dream company that empowers employees to improve the situation on their own initiative (please send it my resume), chances are there’s a need for some estimates regarding the setup of this architecture. And that goes double if it’s done for a customer company. You might push back, stating the evidence:

    image::wewillaskfortestimates.jpg[We will ask for estimates and then treat them as deadlines,502,362,align="center"]

    Engineers will probably think that managers are asking them to stick their necks out to produce estimates for no real reason, and push back. But the latter will kindly remind the former to “just” base estimates on assumptions, as if it were a real solution instead of plausible deniability. In other words, an assumption is a way to escape blame for a wrong estimate - and yes, it sounds much closer to contract law than to engineering concepts. That means that if any of the listed assumptions is not fulfilled, then it’s acceptable for estimates to be wrong. It also implies estimates are then not considered a deadline anymore - which they were never supposed to be in the first place, if only from a semantic viewpoint.footnoteref:[guesstimate, If an estimate needs to really be taken as an estimate, the compound word guesstimate is available. Aren’t enterprise semantics wonderful?]

    Notwithstanding all that, and for the sake of the argument, let’s try to review possible assumptions pertaining to the proposed architecture that might impact implementation time:

    • Hardware location: on-premise, cloud-based or a mix of both?
    • Underlying operating system(s): *nix, Windows, something else, or a mix?
    • Infrastructure virtualization degree: is the infrastructure physical, virtualized, both?
    • Co-location of the OS: are there requirements regarding the location of components on physical systems?
    • Hardware availability: should there be a need to purchase physical machines?
    • Automation readiness: is there any solution already in place to automate infrastructure management? If not, in how many environments will the implementation need to be set up, and if more than 2, will replication be handled manually?
    • Clustering: is any component clustered? Which one(s)? For the application, is there a session replication solution in place? Which one?
    • Infrastructure access: does one need to be on-site? Is a hardware security token required? A software one?

    And those are quite basic items regarding hardware only. Other wide areas include software (volume of logs, criticality, hardware/software mis/match, versions, etc.), people (in-house support, etc.), planning (vacation seasons, etc.), and I’m probably forgetting some important ones too. Given the sheer number of items - and assuming they all have been listed, it stands to reason that at least one assumption will prove wrong, hence making the final estimates dead wrong. In that case, playing the estimate game is just another way to provide plausible deniability. A much more useful alternative would be to create an n-dimension matrix of all items, and estimate all possible combinations. But as in software projects, the event space has just too many parameters to do that in an acceptable timeframe.
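To make the combinatorial explosion concrete, here is a toy sketch - the option counts per axis are made up - showing how fast the assumption space grows for just the eight hardware-related items above:

```java
// Toy illustration: the estimation space is the product of the number of
// options on each assumption axis. The counts below are hypothetical.
public class EstimateSpace {

    public static long combinations(int[] optionsPerAxis) {
        long total = 1;
        for (int options : optionsPerAxis) {
            total *= options;
        }
        return total;
    }

    public static void main(String[] args) {
        // hypothetical option counts for the eight axes listed above
        int[] axes = {3, 3, 2, 2, 2, 3, 3, 3};
        System.out.println(combinations(axes)); // 1944 combinations to estimate
    }
}
```

Even with only two or three options per axis, estimating every combination is clearly intractable.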

    == Proposal for an alternative

    That said, what about a real working alternative that aims to satisfy not dashboard managers but the underlying business? It would start by implementing the most basic requirement, and add more features until it’s good enough, or until enough budget has been spent. Here are some possible steps for the above example:

    Foundation setup:: The initial goal of the setup is to enable log access, and the most important logs are application logs. Hence, the first setup is the following:
    +
    image::foundation-architecture.png[Foundation architecture,450,294,align="center"]
    More JVM logs:: From this point on, a near-zero effort is to also scrape Tomcat’s logs, to help with incident analysis by adding correlation.
    +
    image::more-jvm-logs.png[More JVM logs,608,294,align="center"]
    Machine decoupling:: The next logical step is to move the Elasticsearch instance to its own dedicated machine, to add an extra level of modularity to the overall architecture.
    +
    image::machine-decoupling.png[Machine decoupling,739,365,align="center"]
    Even more logs:: At this point, additional logs from other components - load balancer, Solr server, etc. - can be sent to Elasticsearch to improve issue-solving involving different components.
    Performance improvement:: Given that Logstash is written in Ruby, there might be some performance issues in running Logstash directly alongside a component, depending on each machine’s specific load and performance. Elastic realized this some time ago and now proposes better performance via dedicated Beats. Every Logstash instance can be replaced by Filebeat.
    Not only logs:: With the Jolokia library, it’s possible to expose JMX beans through an HTTP interface. Unfortunately, there are only a few available Beats and none of them handles HTTP. However, Logstash with the http_poller plugin gets the job done.
    Reliability:: In order to improve reliability, Elasticsearch can be clustered.
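To give an idea of the Filebeat step, a minimal configuration shipping Tomcat logs straight to Elasticsearch could look like the following sketch - the paths, the host name and the Filebeat 5.x syntax are all assumptions:

```yaml
filebeat.prospectors:
  - input_type: log
    paths:
      - /var/log/tomcat/catalina.out
output.elasticsearch:
  hosts: ["elasticsearch.internal:9200"]
```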

    The good thing about those steps is that they can be implemented in (nearly) any order. This means that after laying out the base foundation - the first step, stakeholders can decide which one makes sense for their specific context, or stop because the added value is already enough.

    At this point, estimates might still make sense for the first step. But after eliminating most complexity (and its related uncertainty), it feels much more comfortable to estimate the setup of an Elastic stack in a specific context.

    == Conclusion

    As stated above, whether Agile principles are implemented in software development projects can be subject to debate. However, my feeling is that they have not reached the Ops sphere yet. That’s a shame, because as in development, projects can truly benefit from real Agile practices. However, to prevent association with the Agile cargo cult, I proposed the use of the term “Exploratory Infrastructure”. This post described a proposal to apply such an exploratory approach to a sample infrastructure project. The main drawback of such an approach is that it will cost more, as the path is not straight; the main benefit is that at every step, stakeholders can choose to continue or stop, taking into account the law of diminishing returns.

    Categories: Technical Tags: agile, infrastructure, ops, devops
  • Feeding Spring Boot metrics to Elasticsearch

    Elasticsearch logo

    This week’s post aims to describe how to send JMX metrics taken from the JVM to an Elasticsearch instance.

    Business app requirements

    The business app(s) has some minor requirements.

    The easiest use-case is to start from a Spring Boot application. In order for metrics to be available, just add the Actuator dependency to it:
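The dependency snippet did not survive the page conversion; for a Maven build, the Actuator starter declaration looks like this:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```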


    Note that when inheriting from spring-boot-starter-parent, setting the version is not necessary and taken from the parent POM.

    To send data to JMX, configure a brand-new @Bean in the context:

    @Bean @ExportMetricWriter
    MetricWriter metricWriter(MBeanExporter exporter) {
        return new JmxMetricWriter(exporter);
    }
    To-be architectural design

    There are several options to put JMX data into Elasticsearch.

    Possible options

    1. The most straightforward way is to use Logstash with the JMX plugin
    2. Alternatively, one can hack his own micro-service architecture:
    • Let the application send metrics to JMX - there’s the Spring Boot actuator for that, the overhead is pretty limited
      • Have a feature expose JMX data on an HTTP endpoint using Jolokia
      • Have a dedicated app poll the endpoint and send data to Elasticsearch

      This way, every component has its own responsibility, there’s not much performance overhead and the metric-handling part can fail while the main app is still available.

    3. An alternative would be to directly poll the JMX data from the JVM

    Unfortunate setback

    Any architect worth his salt (read: lazy) should always consider the out-of-the-box option first. The Logstash JMX plugin looks promising. After installing the plugin, the jmx input can be configured in the Logstash configuration file:

    input {
      jmx {
        path => "/var/logstash/jmxconf"
        polling_frequency => 5
        type => "jmx"
      }
    }
    output {
      stdout { codec => rubydebug }
    }

    The plugin is designed to read JVM parameters (such as host and port), as well as the metrics to handle from JSON configuration files. In the above example, they will be watched in the /var/logstash/jmxconf folder. Moreover, they can be added, removed and updated on the fly.

    Here’s an example of such configuration file:

    {
      "host" : "localhost",
      "port" : 1616,
      "alias" : "petclinic",
      "queries" : [{
        "object_name" : "org.springframework.metrics:name=*,type=*,value=*",
        "object_alias" : "${type}.${name}.${value}"
      }]
    }
    An MBean’s ObjectName can be determined inside JConsole:

    JConsole screenshot

    The plugin allows wildcards in the metric’s name and the usage of captured values in the alias. Also, by default, all attributes will be read (they can be restricted if necessary).

    Note: when starting the business app, it’s highly recommended to set the JMX port through the system property.
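A possible way to do so - with remote JMX security disabled for brevity, and a hypothetical jar name - is the following; the port matches the configuration file above:

```shell
java -Dcom.sun.management.jmxremote \
     -Dcom.sun.management.jmxremote.port=1616 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar app.jar
```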

    Unfortunately, at the time of this writing, running the above configuration fails with messages of this kind:

    [WARN][logstash.inputs.jmx] Failed retrieving metrics for attribute Value on object blah blah blah
    [WARN][logstash.inputs.jmx] undefined method `event' for #<LogStash::Inputs::Jmx:0x70836e5d>

    For reference purposes, the GitHub issue can be found here.

    The do-it-yourself alternative

    Considering it’s easier to poll HTTP endpoints than JMX - and that implementations already exist, let’s go for option 2 above. Libraries will include:

    • Spring Boot for the business app
    • With the Actuator starter to provide metrics
    • Configured with the JMX exporter for sending data
    • Also with the dependency to expose JMX beans on an HTTP endpoint
    • Another Spring Boot app for the “poller”
    • Configured with a scheduled service to regularly poll the endpoint and send it to Elasticsearch

    Draft architecture component diagram

    Additional business app requirement

    To expose the JMX data over HTTP, simply add the Jolokia dependency to the business app:
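The dependency snippet did not make it into the page; assuming a Maven build, the Jolokia core artifact is declared like this:

```xml
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
</dependency>
```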


    From this point on, one can query for any JMX metric via the HTTP endpoint exposed by Jolokia - by default, the full URL looks like /jolokia/read/<JMX_ObjectName>.
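For example, reading the standard memory MBean - the host and port are assumptions:

```shell
curl http://localhost:8080/jolokia/read/java.lang:type=Memory
```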

    Custom-made broker

    The broker app responsibilities include:

    • reading JMX metrics from the business app through the HTTP endpoint at regular intervals
    • sending them to Elasticsearch for indexing

    My initial move was to use Spring Data, but it seems the current release is not compatible with the latest Elasticsearch version 5, as I got the following exception:

      Received message from unsupported version: [2.0.0] minimal compatible version is: [5.0.0]

    Besides, Spring Data is based on entities, which implies deserializing from HTTP and serializing back again to Elasticsearch: that has a negative impact on performance for no real added value.

    The code itself is quite straightforward:

    {% highlight java linenos %}
    @SpringBootApplication
    @EnableScheduling
    open class JolokiaElasticApplication {

      @Autowired lateinit var client: JestClient

      @Bean open fun template() = RestTemplate()

      @Scheduled(fixedRate = 5000)
      open fun transfer() {
        val result = template().getForObject(
          "http://localhost:8080/manage/jolokia/read/org.springframework.metrics:name=status,type=counter,value=beans",
        val index = Index.Builder(result).index("metrics").type("metric").id(UUID.randomUUID().toString()).build()
        client.execute(index)
      }
    }

    fun main(args: Array<String>) {, *args)
    }
    {% endhighlight %}

    Of course, it’s a Spring Boot application (line 1). To poll at regular intervals, it must be annotated with @EnableScheduling (line 2) and have the polling method annotated with @Scheduled, parameterized with the interval in milliseconds (line 9).

    In a Spring Boot application, calling HTTP endpoints is achieved through the RestTemplate. Once created (line 7) - it’s a singleton - it can be (re)used throughout the application. The call result is deserialized into a String (line 11).

    The client to use is Jest. Jest offers a dedicated indexing API: it just requires the JSON string to be sent, as well as the index name, the object type and its id (line 14). With the Spring Boot Elastic starter on the classpath, a JestClient instance is automatically registered in the bean factory. Just autowire it in the configuration (line 5) to use it (line 15).

    At this point, launching the Spring Boot application will poll the business app at regular intervals for the specified metrics and send them to Elasticsearch. It’s of course quite crude - everything is hard-coded - but it gets the job done.


    Despite the failing plugin, we managed to get the JMX data from the business application to Elasticsearch by using a dedicated Spring Boot app.

  • More DevOps for Spring Boot

    I think Spring Boot brings something new to the table, especially concerning DevOps - and I’ve already written a post about it. However, there’s more than metrics and healthchecks.

    In another of my previous posts, I described how to provide versioning information for Maven-built applications. This article will describe how that earlier post becomes unnecessary when using Spring Boot.

    As a reminder, just adding the spring-boot-starter-actuator dependency to the POM enables many endpoints, among them:

    • /metrics for monitoring the application
    • /health to check the application can deliver the expected service
    • /beans lists all Spring beans in the context
    • /configprops lists all properties regarding the running profile(s) (if any)

    Among those, one is of specific interest: /info. By default, it displays… nothing - or more precisely, the string representation of an empty JSON object.

    However, any property set in the file (or one of its profile flavors) will find its way into the page. For example:

    Property file:
    Demo App

    Output:
    {% highlight javascript %} { "application" : { "name" : "My Demo App" } } {% endhighlight %}

    Setting static info is sure nice, but our objective is to get the version of the application within Spring Boot. files are automatically filtered by Spring Boot during the process-resources build phase. Any property in the POM can be used: it just needs to be set between @ characters. For example:

    Property file:

    Output:
    {% highlight javascript %} { "application" : { "version" : "0.0.1-SNAPSHOT" } } {% endhighlight %}

    Note that the Spring Boot Maven plugin will remove the generated resources, and thus the application will use the unfiltered properties file from the sources. In order to keep (and use) the generated resources instead, configure the plugin in the POM like this:
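The configuration block did not survive the page conversion. Based on the plugin's addResources parameter - treat the exact configuration as an assumption - it would look something like:

```xml
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <addResources>false</addResources>
    </configuration>
</plugin>
```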


    At this point, we have the equivalent of the previous article, but we can go even further. The maven-git-commit-id-plugin will generate a file stuffed with all possible git-related information. The following snippet is an example of the produced file:

    #Generated by Git-Commit-Id-Plugin
    #Fri Jul 10 23:36:40 CEST 2015
    [email protected]
    git.commit.message.full=Initial commit\n
    git.commit.message.short=Initial commit Frankel Frankel
    [email protected]

    From all of this data, only the following entries are used in the /info endpoint:

    Keys used: git.branch,,

    Output:
    {% highlight javascript %} { "git" : { "branch" : "master", "commit" : { "id" : "bf4afbf", "time" : "2015-07-10T23:34:46+0200" } } } {% endhighlight %}

    Since the path and the formatting are consistent, you can devise a cron job to parse all your applications and generate a wiki page with all that information, per server/environment. No more having to ssh into the server and dig into the filesystem to uncover the version.

    Thus, the /info endpoint can be a very powerful asset in your organization, whether you’re a DevOps yourself or only willing to help your Ops. More detailed information can be found in the Spring Boot documentation.

    Categories: Development Tags: devops, spring boot
  • Become a DevOps with Spring Boot

    Have you ever found yourself in the situation of finishing a project just as you’re about to deliver it to the Ops team? You’re so happy because this time, you covered all the bases: the documentation contains the JNDI datasource name the application will use, all environment-dependent parameters have been externalized in a property file - and documented, and you even made sure logging has been implemented at key points in the code. Unfortunately, Ops refuse your delivery since they don’t know how to monitor the new application. And you missed that… Sure, you could hack something together to fulfill this requirement, but the project is already over budget. In some (most?) companies, this means someone will have to be blamed, and chances are the developer will bear all the burden. Time for some sleepless nights.

    Spring Boot is a product from Spring that brings many out-of-the-box features to the table. Convention over configuration, an in-memory default datasource and an embedded Tomcat are among the features known to most. However, I think there’s a hidden gem that should be much more advertised. The actuator module provides metrics and health checks out-of-the-box, as well as an easy way to add your own. In this article, we’ll see how to access those metrics from HTTP and send them to JMX and Graphite.

    As an example application, let’s use an update of the Spring Pet Clinic made with Boot - thanks to Arnaldo Piccinelli for his work. The starting point is commit 790e5d0. Now, let’s add some metrics in no time.

    The first step is to add the actuator module starter to the Maven POM and let Boot do its magic:
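The dependency block is missing from the page; it boils down to declaring the Actuator starter:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```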


    At this point, we can launch the Spring Pet Clinic with mvn spring-boot:run and navigate to http://localhost:8090/metrics (note that the path is protected by Spring Security, credentials are user/password) to see something like the following:

    {
      "mem" : 562688,
      "" : 328492,
      "processors" : 8,
      "uptime" : 26897,
      "instance.uptime" : 18974,
      "heap.committed" : 562688,
      "heap.init" : 131072,
      "heap.used" : 234195,
      "heap" : 1864192,
      "threads.peak" : 20,
      "threads.daemon" : 17,
      "threads" : 19,
      "classes" : 9440,
      "classes.loaded" : 9443,
      "classes.unloaded" : 3,
      "gc.ps_scavenge.count" : 16,
      "gc.ps_scavenge.time" : 104,
      "gc.ps_marksweep.count" : 2,
      "gc.ps_marksweep.time" : 152
    }

    As can be seen, Boot provides hardware- and Java-related metrics without further configuration. Even better, if one browses the app, e.g. repeatedly refreshes the root, new metrics appear:

    {
      "counter.status.200.metrics" : 1,
      "counter.status.200.root" : 2,
      "" : 4,
      "" : 1,
      "gauge.response.metrics" : 72.0,
      "gauge.response.root" : 16.0,
      "" : 8.0,
      "" : 11.0
    }

    Those metrics are more functional in nature, and they are separated into two groups:

    • Gauges are the simplest metrics and return a numeric value e.g. gauge.response.root is the time (in milliseconds) of the last response from the /metrics path
    • Counters are metrics which can be incremented/decremented e.g. counter.status.200.metrics is the number of times the /metrics path returned an HTTP 200 code

    At this point, your Ops team could probably scrape the returned JSON and make something out of it. It will be their responsibility to regularly poll the URL and to use the figures the way they want. However, with just a little more effort, we can ease the life of our beloved Ops team by putting these metrics in JMX.
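For instance, a crude polling loop could be as simple as the following - credentials and port taken from above, the interval made up:

```shell
watch -n 10 "curl -s -u user:password http://localhost:8090/metrics"
```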

    Spring Boot integrates easily with Dropwizard metrics. By just adding the following dependency to the POM, Boot is able to provide a MetricRegistry, a Dropwizard registry for all metrics:
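The dependency snippet is missing from the page; the Dropwizard Metrics core artifact is the one needed (note the group id changed across versions - io.dropwizard.metrics for recent ones, com.codahale.metrics for older):

```xml
<dependency>
    <groupId>io.dropwizard.metrics</groupId>
    <artifactId>metrics-core</artifactId>
</dependency>
```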


    Using the provided registry, one is able to send metrics to JMX in addition to the HTTP endpoint. We just need a simple configuration class as well as a few API calls:

    @Configuration
    public class MonitoringConfig {

        @Autowired
        private MetricRegistry registry;

        @Bean
        public JmxReporter jmxReporter() {
            JmxReporter reporter = JmxReporter.forRegistry(registry).build();
            reporter.start();
            return reporter;
        }
    }
    Launching jconsole lets us check it works alright. The Ops team now just needs to get metrics from JMX and push them into their preferred graphical display tool, such as Graphite. One way to achieve this is through jmxtrans. However, it’s also possible to directly send metrics to the Graphite server with just a few different API calls:

    @Configuration
    public class MonitoringConfig {

        @Autowired
        private MetricRegistry registry;

        @Bean
        public GraphiteReporter graphiteReporter() {
            Graphite graphite = new Graphite(new InetSocketAddress("localhost", 2003));
            GraphiteReporter reporter = GraphiteReporter.forRegistry(registry).build(graphite);
            reporter.start(500, TimeUnit.MILLISECONDS);
            return reporter;
        }
    }
    The result is quite interesting given the few lines of code. Note that going through JMX on the way to Graphite makes things easier, as there’s no need for a dedicated Graphite server in development environments.

    Categories: Java Tags: devops, metrics, spring boot
  • Metrics, metrics everywhere

    With DevOps, metrics are starting to be among the non-functional requirements any application has to bring into scope. Before going further, there are several comments I’d like to make:

    1. Metrics are not only about non-functional stuff. Many metrics represent very important KPIs for the business. For example, for an e-commerce shop, the business needs to know how many customers leave the checkout process, and in which screen. True, there are several solutions to achieve this, though they are all web-based (Google Analytics comes to mind) and metrics might also be required for different architectures. And having all metrics in the same backend means they can be correlated easily.
    2. Metrics, as any other NFR (e.g. logging and exception handling), should be designed and managed upfront and not pushed in as an afterthought. How do I know that? Well, one of my last projects focused on functional requirements only, and only at the end did project management realize NFRs were important. Trust me when I say it was gory - and it cost much more than if it had been designed in the early phases of the project.
    3. Metrics have an overhead. However, without metrics, it's not possible to increase performance. Just accept that and live with it.

    The inputs are the following: the application is Spring MVC-based and metrics have to be aggregated in Graphite. We will start by using the excellent Metrics project: not only does it get the job done, its documentation is of very high quality and it’s available under the friendly OpenSource Apache v2.0 license.

    That said, let’s imagine a “standard” base architecture to manage those components.

    First, though Metrics offers a Graphite endpoint, this requires configuration in each environment, which makes things harder, especially on developers’ workstations. To manage this, we’ll send metrics to JMX and introduce jmxtrans as a middle component between JMX and Graphite. As every JVM provides JMX services, this requires no configuration when none is needed - and has no impact on performance.

    Second, as developers, we usually enjoy developing everything from scratch in order to show off how good we are - or sometimes because we didn’t browse the documentation. My point of view as a software engineer is that I’d rather not reinvent the wheel and focus on the task at hand. Actually, Spring Boot already integrates with Metrics through the Actuator component. However, it only provides a GaugeService - to send unique values, and a CounterService - to increment/decrement values. This might be good enough for FRs but not for NFRs, so we might want to tweak things a little.

    The flow would be designed like this: Code > Spring Boot > Metrics > JMX > Graphite

    The starting point is to create an aspect, as performance metric is a cross-cutting concern:

    @Aspect
    public class MetricAspect {

        private final MetricSender metricSender;

        @Autowired
        public MetricAspect(MetricSender metricSender) {
            this.metricSender = metricSender;
        }

        @Around("execution(**(..)) || execution(**(..))")
        public Object doBasicProfiling(ProceedingJoinPoint pjp) throws Throwable {
            StopWatch stopWatch = metricSender.getStartedStopWatch();
            try {
                return pjp.proceed();
            } finally {
                Class<?> clazz = pjp.getTarget().getClass();
                String methodName = pjp.getSignature().getName();
                metricSender.stopAndSend(stopWatch, clazz, methodName);
            }
        }
    }
    The only thing outside of the ordinary is the usage of autowiring as aspects don’t seem to be able to be the target of explicit wiring (yet?). Also notice the aspect itself doesn’t interact with the Metrics API, it only delegates to a dedicated component:

    public class MetricSender {

        private final MetricRegistry registry;

        public MetricSender(MetricRegistry registry) {
            this.registry = registry;
        }

        private Histogram getOrAdd(String metricsName) {
            Map<String, Histogram> registeredHistograms = registry.getHistograms();
            Histogram registeredHistogram = registeredHistograms.get(metricsName);
            if (registeredHistogram == null) {
                Reservoir reservoir = new ExponentiallyDecayingReservoir();
                registeredHistogram = new Histogram(reservoir);
                registry.register(metricsName, registeredHistogram);
            }
            return registeredHistogram;
        }

        public StopWatch getStartedStopWatch() {
            StopWatch stopWatch = new StopWatch();
            stopWatch.start();
            return stopWatch;
        }

        private String computeMetricName(Class<?> clazz, String methodName) {
            return clazz.getName() + '.' + methodName;
        }

        public void stopAndSend(StopWatch stopWatch, Class<?> clazz, String methodName) {
            stopWatch.stop();
            String metricName = computeMetricName(clazz, methodName);
            getOrAdd(metricName).update(stopWatch.getTotalTimeMillis());
        }
    }

    The sender does several interesting things (but with no state):

    • It returns a new StopWatch for the aspect to pass back after method execution
    • It computes the metric name depending on the class and the method
    • It stops the StopWatch and sends the time to the MetricRegistry
    • Note it also lazily creates and registers a new Histogram with an ExponentiallyDecayingReservoir instance. The default behavior is to provide a UniformReservoir, which keeps data forever and is not suitable for our needs.

    The final step is to tell the Metrics API to send data to JMX. This can be done in one of the configuration classes, preferably the one dedicated to metrics, using the @PostConstruct annotation on the desired method.

    @Configuration
    public class MetricsConfiguration {

        @Autowired
        private MetricRegistry metricRegistry;

        @Bean
        public MetricSender metricSender() {
            return new MetricSender(metricRegistry);
        }

        @PostConstruct
        public void connectRegistryToJmx() {
            JmxReporter reporter = JmxReporter.forRegistry(metricRegistry).build();
            reporter.start();
        }
    }
    The JConsole should look like the following. Icing on the cake, all default Spring Boot metrics are also available:

    Sources for this article are available in Maven “format”.


  • Vagrant your Drupal

    In one of my recent posts, I described how I used VMware to create a Drupal instance I could play with before deploying updates to my live site. Then, at Devoxx France, I attended a session where the speaker detailed how he set up a whole infrastructure for after-work training sessions with Vagrant.

    Meanwhile, a little turn of fate put me in charge of some Drupal projects and I had to get better at it… fast. I put my hands on The Definitive Guide to Drupal 7, which talks about using Drupal with Vagrant. This was definitely too much: I decided to take this opportunity to automatically manage my own Drupal infrastructure. These are the steps I followed and the lessons I learned. Note my host OS is Windows 7 :-)

    Download VirtualBox

    Oracle’s VirtualBox is the virtualization provider used by Vagrant. Go to their download page and take your pick.

    Download Vagrant

    Vagrant download page is here. Once installed on your system, you should put the bin directory on your PATH.

    Now, get the Ubuntu Lucid Lynx box ready with:

    vagrant box add base

    This downloads the box to your %USER_HOME%/.vagrant.d/boxes directory, under the lucid32 folder (at least on Windows).

    Get Drupal Vagrant project

    Download the current version of Drupal Vagrant and extract it in the directory of your choice. Edit the Vagrantfile according to the following guidelines: = "lucid32"         # references the right box
    ... :hostonly, ""  # creates a machine with this IP

    Then, boot the virtual box with `vagrant up` and let Vagrant take care of everything (boot the VM, get the necessary applications, configure them, etc.).

    Update your etc/hosts file to have the 2 following domains pointing to the box:	drupal.vbox.local	dev-site.vbox.local

    At the end of the process (it can be a lengthy one), browsing on your host system to http://drupal.vbox.local/install.php should get you the familiar Drupal installation screen. You’re on!

    SSH into the virtual box

    Now it’s time to get into the guest system, with vagrant ssh.

    If on Windows, here comes the hard part. Since there’s no SSH utility out-of-the-box, you have to get one. Personally, I used PuTTY. Note the VM uses an SSH key for authentication and, unfortunately, the format of Vagrant’s provided key is not compatible with PuTTY, so we have to use PuTTYGen to translate %USER_HOME%/.vagrant.d/insecure_private_key into a format PuTTY is able to use. When done, connect with PuTTY to the system (finally).


    All in all, this approach works alright, although Drush is present in /usr/share/drush but doesn’t seem to work (Git is installed and works fine).

    Note: I recently stumbled upon this other Drupal cookbook but it cannot be used as is. Better DevOps than me can probably fix it.

    Categories: Development Tags: cms, devops, drupal