
I made the step forward into virtualization

I first heard of virtualization 5 or 6 years ago, and at the time I didn’t understand the real implications of its use, except that it was a tool toward a more agile infrastructure. More recently, I heard that a virtual platform could run on top of an undetermined number of physical platforms: if you need more power, you add platforms; if a platform goes down, there’s no effect (except a decrease in performance). That’s when I realized all the power we could gain from virtualization, or more precisely, all the power operations could gain from it.

As for myself, I once tried VMware to get my hands on a Linux distro, but I quickly lost interest since I had no real need for it. Meanwhile, I had some seemingly unrelated problems bothering me in parallel:

  • When writing a book (ok, I only wrote one, but it still applies), I installed all the needed software on my main system. Never mind the latest product versions: I had to install specific versions and play with scripts to switch between different environment variable configurations. Moreover, I had to describe the install process on my own operating system (Windows), which may suit the reader… or not.
  • The same could be said for proofs of concept: I had to manage a whole new bunch of products and a new set of environment variables that had the potential to wreak havoc on my existing system.
  • When I have to do pre-sales work, this is even more damaging, since the demo effect enters the scene.
  • Finally, despite having both a WordPress blog and a Drupal site, I couldn’t resign myself to installing PHP on my system, so I made all changes, whatever their criticality, directly on the production platform. I admit it’s bad, very bad indeed, but well, the heart has its reasons and all that.

Very recently, I found myself teaching basic Linux to students, and we used virtualization to run an Ubuntu distro on top of Windows. That’s when it became clear to me: I only had to create a virtual machine for each of the above problems!

Since my top priority was to create a staging area for my blog and CMS, here are the steps I took to cover my needs:

  1. Download and install the latest VMware Player (free, but registration is required)
  2. Download the latest Ubuntu distribution in ISO format
  3. Create a new virtual machine with the ISO
  4. Run the new VM, welcome to Ubuntu!
  5. Install versions of Apache Web Server, PHP, MySQL and the PHP MySQL driver that mirror those provided by my host. With Ubuntu, this is easy as pie with my friend the apt-get command (see the sketch after this list).
  6. Configure my /etc/hosts file to resolve my sites’ URLs to 127.0.0.1
  7. Configure Apache with two virtual hosts, one pointing to /var/www/blogfrankel, the other to /var/www/morevaadin
  8. Export the SQL from both sites through phpMyAdmin (provided by the Plesk interface) and import it into my virtualized MySQL instance
  9. FTP files from both sites to my virtualized filesystem
  10. Optional but just to be sure: configure both sites to have a big “Staging” warning as the site name to avoid confusion
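
For the record, here is roughly what steps 5 to 8 look like on the Ubuntu guest. It’s only a sketch: the package names are the standard Ubuntu ones, but the hostnames and database names are illustrative, and the exact versions should be pinned to whatever your host provides.

    # Step 5: install the LAMP pieces from the Ubuntu repositories
    sudo apt-get update
    sudo apt-get install apache2 php5 libapache2-mod-php5 mysql-server php5-mysql

    # Step 6: make the production URLs resolve to the guest itself
    # (hostnames are illustrative; use your real domains)
    echo "127.0.0.1 blogfrankel.example morevaadin.example" | sudo tee -a /etc/hosts

    # Step 7: one virtual host per site; /etc/apache2/sites-available/blogfrankel
    # contains roughly:
    #     <VirtualHost *:80>
    #         ServerName blogfrankel.example
    #         DocumentRoot /var/www/blogfrankel
    #     </VirtualHost>
    # (the morevaadin one is analogous); then enable both and reload Apache
    sudo a2ensite blogfrankel
    sudo a2ensite morevaadin
    sudo service apache2 reload

    # Step 8: create the databases and import the dumps exported through phpMyAdmin
    # (database and dump file names are illustrative)
    mysql -u root -p -e "CREATE DATABASE blogfrankel"
    mysql -u root -p blogfrankel < blogfrankel.sql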

That was it. I fumbled a little here and there since I’m not an Ubuntu guru, but the process was fairly straightforward. The part I spent the most time on was configuring Apache to redirect requests to the right site (I’m not an Apache guru either). Now I can do and undo whatever I want with no effect on my production environment until I decide otherwise. As an added bonus, I get no security holes from having a PHP environment, since I shut the VM down when I don’t need it.

It’s a big step from where I started, but I still see some improvements to be made:

  • Being able to directly push updates to the right site. At present, I have to redo actions manually (and thus risk doing something wrong or forgetting something)
  • Being able to synchronize the VM with the production data (SQL and files), either periodically or on demand, so as to have the latest image on my VM

Any ideas on how to accomplish these would be welcome; even more welcome if they are simple to implement (KISS); remember, I’m neither a sysadmin nor a devops engineer.

  1. March 18th, 2012 at 14:15 | #1

    As ThoughtWorks advises here: http://www.thoughtworks.com/articles/technology-radar-march-2012

    Think infrastructure as code. This technique treats infrastructure configuration in the same way as code; checking configuration into source control, then carefully pushing changes out to the data center.

    That’s what I’ve been doing since 2010, thanks to @olange

  2. Guy
    March 18th, 2012 at 21:43 | #2

    Hi Nicolas,

    As Tom suggests, and as we do in my company (we develop a lot of PHP-based websites), we use infrastructure as code to sync our dev, test and production environments:

    We simply install a version control client (e.g. svn or git) on each of the servers we want to sync. Then, when some code has been updated, we use the client’s update commands to align the different parts. Naturally, this can be run on either some parts or the whole code base. Additionally, we can roll back changes on the live site when something goes wrong, without much effort.
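
    A minimal sketch of that workflow with git (the path, remote and branch names are illustrative):

        # on the server to be synced, the docroot is a working copy of the repository
        cd /var/www/blogfrankel
        git pull origin master            # align the server with what was pushed
        # if the update broke something, roll back to a known-good revision
        git checkout <last-good-tag-or-commit>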

    Also check the rsync tool, available on all *nix platforms (there are also ports to Windows). It runs over ssh, so it’s more secure, and it lets you sync whole directory trees automatically (https://en.wikipedia.org/wiki/Rsync). Pretty powerful when run as a daily job.
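
    For instance, something along these lines (host name and paths are illustrative) pulls the production files into the staging VM, and can be scheduled with cron:

        # mirror the production docroot into the VM over ssh
        rsync -avz --delete user@production.example:/var/www/blogfrankel/ /var/www/blogfrankel/
        # as a daily cron entry (4 a.m. every day):
        # 0 4 * * * rsync -avz --delete user@production.example:/var/www/blogfrankel/ /var/www/blogfrankel/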

    Finally, there are more sophisticated environments like Zivios http://zivios.org/ that let you manage your infrastructure automatically, keeping your server base consistent, running scripts, managing users, …

  3. Roger Parkinson
    March 18th, 2012 at 21:48 | #3

    Or you can go the other way around. There are still a couple of Windows-only things I need to do from time to time, so I run a copy of XP under VirtualBox on my Ubuntu host. I find VirtualBox a little simpler to drive than VMware, but there isn’t much in it.

    For synchronization you can probably do something with shared folders. Your build copies the war file to the shared folder, and the application server running on the VM monitors the shared folder for updates. It will probably be a bit slower than a non-shared folder, but otherwise okay. SQL updates and file updates could be done in the same way, but you’d need an app to unpack them whenever it reloads, and that probably gets more complicated than is worthwhile.
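
    A rough sketch of the shared-folder part with VirtualBox (the VM name and paths are illustrative; the guest needs the Guest Additions installed):

        # on the host: expose a directory to the VM
        VBoxManage sharedfolder add "staging-vm" --name deploy --hostpath /home/me/deploy
        # in the guest: mount it where the application server can watch it
        sudo mount -t vboxsf deploy /opt/deploy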

  4. Jean-François Lamy
    March 19th, 2012 at 21:20 | #4

    Install Artifactory and push your jars/wars there.
    Look at DbUnit as a way to push databases around: treat the data it produces as code, and check it in.

    For version control, get a VM from TurnKey Linux. They have one with all the currently trendy version control systems; Git is a pain to set up, Mercurial a tiny bit less so.

  5. March 29th, 2012 at 19:50 | #5

    Nicolas, virtualization is very important and will become more and more so in the future. I’ve come to the conclusion that in some development scenarios it’s fundamental, or you run into trouble. I’ve been using it for a few years; the problem is that it’s somewhat “expensive” in the time you need to keep it up and running (I’m thinking of more complex scenarios, where you have to deploy to several physical nodes: think of multiple virtual boxes, virtual networks and different configurations, and you must be able to quickly set them up and tear them down). I’ve started playing with Vagrant, VirtualBox and Puppet, and it’s a definite improvement. I’m going to show some examples to a customer next week, and I hope to be able to post about my experience soon; in the meantime, googling around already provides some useful starting points.
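
    For the curious, the basic Vagrant workflow looks roughly like this (the box name and URL are illustrative, from the Vagrant 1.x era):

        # fetch a base box, describe a VM with it, then bring it up and throw it away at will
        vagrant box add precise64 http://files.vagrantup.com/precise64.box
        vagrant init precise64      # generates a Vagrantfile in the current directory
        vagrant up                  # boots the VirtualBox VM
        vagrant ssh                 # logs into it
        vagrant destroy             # tears it down for a clean start next time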
