Automating a conference submission workflow: deploying to production

In the first post of this series, we detailed the setup of an application to automate submissions to conferences. In the second one, we configured the integration endpoints. This third post is dedicated to deploying the solution to production.

To Cloud or not to Cloud?

To decide what to do, the first step is to ask oneself where to host the application:

  1. On-premise
  2. In the Cloud
  3. Or even on my own machine

First, let’s remove on-premise from the options: it wouldn’t make sense, as I’m the only user. That leaves my own machine, and there are two reasons to prefer the Cloud over it:

  1. The app needs to react to events, the event being a card moving on the Trello board. Hence, it needs to be up at all times, as well as its endpoint. Using a Cloud provider allows that. The alternative would be to start the app on my machine each time, interact with Trello, and stop the app again.
  2. The app needs to be accessible from Trello. I already wrote how to configure ngrok to receive HTTP requests from the Web on one’s machine. While that’s fine for debugging, it’s not a great idea in production.

Which Cloud platform to choose?

Now that the option of hosting in the Cloud has been settled, it’s time to choose one’s PaaS provider. Several choices are available:

  • Google Cloud Platform
  • Microsoft Azure
  • Amazon Web Services
  • IBM Cloud
  • Oracle Cloud
  • And a couple of others

Additionally, a build pipeline is required: I need a full-fledged CI job. Built-in Continuous Deployment would be a nice-to-have: the idea is to automatically build and deploy the app at each commit. My main requirement, however, is ease of use: I’d like to avoid setting up a stack that requires a full-time Site Reliability Engineer and 24/7 monitoring. My understanding of the big providers' stacks is that they are quite complex.

Heroku to the rescue

Fortunately, besides those, I knew of a provider that fits the bill: Heroku.

Heroku is a platform as a service based on a managed container system, with integrated data services and a powerful ecosystem, for deploying and running modern apps. The Heroku developer experience is an app-centric approach for software delivery, integrated with today’s most popular developer tools and workflows.

Within Heroku, the basic building block is known as a dyno. Heroku is built on containerization: a dyno is an isolated, containerized process dedicated to executing code with a specific amount of RAM. Depending on the load, one can add or remove dynos to scale horizontally. Some higher pricing plans even allow scaling to happen automatically depending on the load. As the sole user, a single dyno is enough for my needs.
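Manual scaling is driven through the CLI. As a sketch, assuming an app named dummy and a plan that allows more than one dyno:

```shell
# List the dynos currently running for the app
heroku ps --app dummy

# Scale the web process type to two dynos (horizontal scale-up)
heroku ps:scale web=2 --app dummy

# Scale back down to a single dyno
heroku ps:scale web=1 --app dummy
```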

In that case, the free plan fits the requirements. Here are some of its characteristics:

  • The dyno switches to sleep mode after 30 minutes of inactivity
  • 1,000 hours of activity per month
  • Custom subdomain

I have two main usages of the app:

  1. When I submit to conferences, I do that in "bursts". I block a half-a-day timeslot in my calendar, and submit to multiple conferences during that time. After each submission, I move the Trello card from the Backlog column to the Submitted one as explained in the first post of this series.
  2. When I receive a conference update, I just move the card from the Submitted column to the relevant one - Accepted or Refused.

I can cope with the sleeping behavior in both cases, as there’s no requirement for either the calendar or the sheet to be updated within a specific timeframe.

However, the biggest strength of Heroku is IMHO its embedded Continuous Deployment model, associated with a dedicated Git repository: every push to the master branch triggers a build that creates the package and deploys it to production. Let’s see how it’s done.

5-minute crash course on Heroku

Heroku comes with a web interface, as well as a dedicated Command-Line Interface. I’d suggest installing the CLI:

brew tap heroku/brew && brew install heroku

It’s now possible to authenticate to one’s account:

heroku login

From that point on, let’s create an app:

heroku create dummy

The app is bound to a subdomain: the dummy app is accessible from https://dummy.herokuapp.com. In addition, the underlying Git repository is hosted at https://git.heroku.com/dummy.git. By running the previous command from the app’s root folder, the newly-created remote Git repo is added as the heroku remote. From now on, each push to master builds the artifact and deploys it:

git push heroku master

Heroku does so by inferring the tech stack and the build tool. For example, it recognizes a Maven POM located at the app’s root. The previous push displays something like this:

remote: Compressing source files... done.
remote: Building source:
remote: -----> Java app detected
remote: -----> Installing JDK 1.8... done
remote: -----> Executing Maven
remote:  $ ./mvnw -DskipTests clean dependency:list install
remote:  [INFO] Scanning for projects...
remote:  [INFO]
remote:  [INFO] ----------------< ch.frankel.conftools:conf-automation >----------------
remote:  [INFO] Building conf-automation 0.0.1-SNAPSHOT
remote:  [INFO] --------------------------------[ jar ]---------------------------------
remote:  [INFO]
remote:  [INFO] --- maven-clean-plugin:3.1.0:clean (default-clean) @ conf-automation ---
remote:  [INFO]
remote:  [INFO] --- maven-dependency-plugin:3.1.1:list (default-cli) @ conf-automation ---
remote:  [INFO]
remote:  [INFO] --- maven-resources-plugin:3.1.0:resources (default-resources) @ conf-automation ---
remote:  [INFO] Using 'UTF-8' encoding to copy filtered resources.
remote:  [INFO] Copying 2 resources
remote:  [INFO] Copying 2 resources
remote:  [INFO]
remote:  [INFO] --- maven-compiler-plugin:3.8.1:compile (default-compile) @ conf-automation ---
remote:  [INFO] Nothing to compile - all classes are up to date
remote:  [INFO]
remote:  [INFO] --- kotlin-maven-plugin:1.3.50:compile (compile) @ conf-automation ---
remote:  [INFO] Applied plugin: 'spring'
remote:  [WARNING] /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/src/main/kotlin/ch/frankel/conf/automation/TriggerHandler.kt: (25, 14) Parameter 'request' is never used
remote:  [WARNING] /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/src/main/kotlin/ch/frankel/conf/automation/action/AddSheetRow.kt: (51, 31) Unchecked cast: Any! to Collection<Any>
remote:  [INFO]
remote:  [INFO] --- maven-resources-plugin:3.1.0:testResources (default-testResources) @ conf-automation ---
remote:  [INFO] Using 'UTF-8' encoding to copy filtered resources.
remote:  [INFO] skip non existing resourceDirectory /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/src/test/resources
remote:  [INFO]
remote:  [INFO] --- maven-compiler-plugin:3.8.1:testCompile (default-testCompile) @ conf-automation ---
remote:  [INFO] Changes detected - recompiling the module!
remote:  [INFO]
remote:  [INFO] --- kotlin-maven-plugin:1.3.50:test-compile (test-compile) @ conf-automation ---
remote:  [INFO] Applied plugin: 'spring'
remote:  [INFO]
remote:  [INFO] --- maven-surefire-plugin:2.22.2:test (default-test) @ conf-automation ---
remote:  [INFO] Tests are skipped.
remote:  [INFO]
remote:  [INFO] --- maven-jar-plugin:3.1.2:jar (default-jar) @ conf-automation ---
remote:  [INFO] Building jar: /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/target/conf-automation-0.0.1-SNAPSHOT.jar
remote:  [INFO]
remote:  [INFO] --- spring-boot-maven-plugin:2.2.0.RELEASE:repackage (repackage) @ conf-automation ---
remote:  [INFO] Replacing main artifact with repackaged archive
remote:  [INFO]
remote:  [INFO] --- maven-install-plugin:2.5.2:install (default-install) @ conf-automation ---
remote:  [INFO] Installing /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/target/conf-automation-0.0.1-SNAPSHOT.jar to /app/tmp/cache/.m2/repository/ch/frankel/conftools/conf-automation/0.0.1-SNAPSHOT/conf-automation-0.0.1-SNAPSHOT.jar
remote:  [INFO] Installing /tmp/build_bc06910fc8b3d2c530bcec797ce67d25/pom.xml to /app/tmp/cache/.m2/repository/ch/frankel/conftools/conf-automation/0.0.1-SNAPSHOT/conf-automation-0.0.1-SNAPSHOT.pom
remote:  [INFO] ------------------------------------------------------------------------
remote:  [INFO] ------------------------------------------------------------------------
remote:  [INFO] Total time:  21.087 s
remote:  [INFO] Finished at: 2020-04-13T08:17:18Z
remote:  [INFO] ------------------------------------------------------------------------
remote: -----> Discovering process types
remote:  Procfile declares types -> web
remote: -----> Compressing...
remote:  Done: 86.1M
remote: -----> Launching...
remote:  Released v41
remote:  https://dummy.herokuapp.com/ deployed to Heroku
remote: Verifying deploy... done.

Deployment parameters can be configured through a dedicated Procfile located at the root of the repo. An alternative is to use a heroku.yml file. In my case, here is the Procfile:

web: java $JAVA_OPTS -jar target/conf-automation-0.0.1-SNAPSHOT.jar --spring.profiles.active=production --server.port=$PORT
  1. web configures the app to be accessible via HTTP
  2. The rest is the actual command line to launch the application. It’s pretty recognizable if you’ve ever launched a Spring Boot application from the CLI
  3. The only gotcha is the --server.port=$PORT parameter. Heroku decides which port the application should bind to, and exports it in the $PORT environment variable. This parameter makes Spring Boot listen for HTTP requests on that port.
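For reference, the heroku.yml alternative mentioned above is geared toward Docker-based deployments. A minimal sketch, assuming a Dockerfile at the repo’s root and mirroring the Procfile’s web entry:

```yaml
# heroku.yml - a sketch, not the configuration actually used in this project
build:
  docker:
    web: Dockerfile
run:
  web: java $JAVA_OPTS -jar target/conf-automation-0.0.1-SNAPSHOT.jar --spring.profiles.active=production --server.port=$PORT
```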

The deployment stops the running JVM, deploys the new JAR, and starts the JVM again with it. With a single dyno, downtime is to be expected; it lasts as long as the JVM takes to start.

Finally, the application requires a database. By default, Spring Boot uses the H2 in-memory database. That means that when the application goes down, e.g. because of sleeping, all data is lost. The database is used by Camunda under the hood for all workflow-related data. For that reason, I configured Spring Boot to use PostgreSQL instead, so as to be able to access the persisted data in case of issues.
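The Heroku Java buildpack exports the attached database’s coordinates as JDBC-friendly environment variables, which Spring Boot can pick up. A sketch of what the production profile could look like (the file name and the exact wiring are assumptions, not the project’s actual configuration):

```properties
# application-production.properties - a sketch
# JDBC_DATABASE_* variables are set by the Heroku Java buildpack
spring.datasource.url=${JDBC_DATABASE_URL}
spring.datasource.username=${JDBC_DATABASE_USERNAME}
spring.datasource.password=${JDBC_DATABASE_PASSWORD}
```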

Heroku allows setting up additional services, called add-ons, for applications. Add-ons come in different kinds, such as data stores, logging, monitoring, search, etc. One of them wraps PostgreSQL and makes it available to the app.
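Attaching the add-on is a one-liner; as a sketch, again assuming the app is named dummy (hobby-dev was the free PostgreSQL plan at the time of writing):

```shell
# Attach the free PostgreSQL add-on to the app
heroku addons:create heroku-postgresql:hobby-dev --app dummy

# The add-on exposes the connection string through the DATABASE_URL config var
heroku config:get DATABASE_URL --app dummy
```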

The free plan limits storage to 10,000 rows. Hence, I need to manually reset the stored data on a regular basis, whenever I receive an email warning that I’m approaching the limit. I’m at the point of going back to H2, as I never actually needed to debug the persisted data: everything works as expected.
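The reset itself can be sketched with the CLI as well, assuming the app is named dummy:

```shell
# Drop and recreate the database to stay under the 10,000-row limit.
# The --confirm flag (set to the app name) skips the interactive prompt.
heroku pg:reset DATABASE_URL --app dummy --confirm dummy
```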


In this series, I described how boring administrative tasks around conference submission could be automated. First, I showed both context and requirements, as well as a way to test locally. Then, I went on to detail the necessary steps to integrate the application with Google Calendar and Google Sheets. Finally, I described how to deploy the application on Heroku.

Once a developer, always a developer. Once you’ve learned how to program, repetitive manual work becomes just a problem to code away.

Nicolas Fränkel

Developer Advocate with 15+ years of experience consulting for many different customers, in a wide range of contexts (such as telecoms, banking, insurance, large retail and the public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, Testing, CI/CD and DevOps. Also doubles as a trainer and triples as a book author.
