How We Moved From Heroku To Containers With No Docker Experience

ProLeads
ProLeads is the holy grail of social selling. We provide a set of tools for B2B companies to reduce their CAC and increase conversion rates on closing new business. Our mission is to build the world's leading social selling platform. Whatever tools salespeople need, we'll engineer the best version of them!

Editor's note: By Anders Fredriksson, Co-founder & CEO at ProLeads


Background

As a growing startup that provides a SaaS platform to automate B2B sales lead management and social selling at scale, ProLeads has been pressed to deliver more valuable features while cutting our server hosting costs. Our top two priorities have been (1) ensuring zero downtime in production, and (2) pushing changes to production seamlessly and instantly. We had been using Heroku to host our production environment, and the platform certainly provided the high availability we needed while letting us make seamless updates with a simple git push heroku master.

There was only one problem with Heroku: the cost. Our monthly bill for Heroku and Compose (which was hosting our MongoDB Replica Sets) was $6,500. After graduating from the 500 Startups program before the summer, we were fortunate enough to get a generous offer from Rackspace that allowed us to move our entire production environment to Rackspace Cloud Servers essentially for free for two years. After the offer expires, however, we expect to pay Rackspace around $3,200 per month for a production environment that will not need to be scaled for years. That is still a cost reduction of more than 50% compared to Heroku. So it was very clear what we had to do. The problem was that we had become accustomed to the simplicity and high availability of Heroku.

We realized that unless we re-architected our application so that it could be deployed on any cloud anywhere, we would always be locked into whatever hosting provider we were using. That's when we started looking into Docker containers. The idea that the same container would run exactly the same on any Linux host, anywhere, was extremely appealing to us.

Unfortunately, containerizing applications is still a challenge, mostly because existing application composition frameworks do not address complex dependencies, high availability, or post-provision workflows like auto-scaling. This is actually the most common complaint I hear about why companies don't just containerize their applications: it takes too much time and effort to get it right. That's when we thought of our fellow batchmates from 500 Startups, DCHQ. Their platform addressed all of our challenges and allowed us to get almost the same PaaS experience as Heroku, but on our Rackspace Cloud Servers.

DCHQ simplifies the containerization of applications through a framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling. My personal favorite feature is that the platform automates building Docker images from our private GitHub project and pushing those images to our private Docker Hub repository.

Once an application is provisioned, a user gets access to backups, scale in/out, log viewing, monitoring, and application updates.

In this post, we'll go over our application architecture and how we made the move to containers in production.

Application Architecture

ProLeads helps enterprises target qualified leads with remarkably personalized e-mails. The application consists of a Ruby app along with Redis and MongoDB Replica Sets. You can check out our entire stack here. Heroku simplified things for us by providing a load balancing service and a very simple way to push our changes from GitHub to production with minimal to zero downtime. Moreover, Heroku's Compose add-on took care of hosting the MongoDB Replica Sets while ensuring high availability.

So to move to containers, we needed to ensure that the application met our high-availability requirements and was easy to update with minimal to zero downtime.

DCHQ had some out-of-the-box templates for Docker-based MongoDB Replica Sets and Redis, but we decided to go with the Rackspace "Stack Templates" that are built using Chef recipes. The Stack Templates gave us high availability across multiple servers, so we decided not to containerize those services.

Additionally, Rackspace provided us with a load balancing service that can route traffic to multiple Cloud Servers.

We ended up containerizing the actual Ruby application along with Nginx for load balancing across the Ruby containers.

Here’s the end result in production:

[Production architecture diagram: the Rackspace Load Balancer in front of an Nginx container, which routes to the clustered Ruby containers, with the MongoDB Replica Sets and Redis running on separate Cloud Servers]

The MongoDB and Redis services are distributed across multiple hosts and are treated as "existing services" that our Ruby containers connect to.

We then containerized our Ruby application using a Dockerfile in our GitHub repo. We use DCHQ to automate building the Ruby image from our GitHub project and pushing the new image to our private Docker Hub repository. We have two image builds scheduled on DCHQ (a command-line sketch of the equivalent follows the list):

  • One that creates an image with the timestamp in the tag name using {{timestamp}}. This allows us to revert to older images at any point in time.

  • One that creates an image with the tag latest (i.e., always overwriting the latest image with the most recent code commits).
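
Conceptually, what DCHQ automates for us is equivalent to something like the following commands; the proleads/app image name is hypothetical, but the two-tag pattern is the point:

    # Illustrative sketch of the two scheduled builds (image name is hypothetical).

    # Build the image from the Dockerfile at the root of the GitHub repo.
    docker build -t proleads/app:latest .

    # Tag the same build with a timestamp so we can always roll back,
    # then push both tags to the private Docker Hub repository.
    TS=$(date +%Y%m%d%H%M%S)
    docker tag proleads/app:latest "proleads/app:${TS}"
    docker push proleads/app:latest
    docker push "proleads/app:${TS}"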



We used DCHQ to create the application template for our Ruby containers and Nginx. This template is deployed on two separate Rackspace Cloud Servers. However, we point the Rackspace Load Balancer to only one of those servers, which allows us to deploy the latest Ruby containers on the other cloud server whenever an update or a new feature is available. Once we complete the load and functional testing, we simply flip the routing in the Load Balancer to point to the latest deployment (effectively a blue-green deployment); a sketch of the pre-flip check appears below.
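
The cut-over itself happens in the Rackspace Load Balancer, but before flipping we verify the freshly deployed server directly. A minimal sketch, assuming the app exposes a health endpoint (the hostname and /health path here are hypothetical):

    # Minimal pre-flip smoke test (hostname and /health endpoint are hypothetical).
    STAGING_HOST="cloud-server-b.example.com"

    # Fail loudly if the new deployment doesn't answer with a 2xx response.
    if curl -fsS "https://${STAGING_HOST}/health" > /dev/null; then
        echo "New deployment is healthy; safe to flip the Load Balancer."
    else
        echo "Health check failed; keep the Load Balancer on the old deployment." >&2
        exit 1
    fi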

Here’s the application template we created using DCHQ:

[Screenshot of the DCHQ application template]

You will notice that we’re invoking a BASH script plug-in in Nginx to update the default.conf file with the array of private IPs for the Ruby containers. The plug-in is executed at request time, and the IPs are resolved dynamically by DCHQ.

The Ruby containers in this case do not expose their ports on the host, but Nginx is still able to route traffic to the clustered containers over their private IPs.
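
We can’t paste our exact plug-in here, but a minimal sketch of what such a script could look like, assuming the platform injects the Ruby containers' private IPs as a space-separated environment variable (the variable name, app port, and file paths are hypothetical):

    #!/bin/bash
    # Sketch of an Nginx plug-in that regenerates default.conf from the
    # Ruby containers' private IPs. RUBY_CONTAINER_IPS is a hypothetical
    # variable, e.g. "10.0.0.4 10.0.0.5", resolved by the platform.

    CONF=/etc/nginx/conf.d/default.conf

    {
        echo "upstream ruby_app {"
        for ip in ${RUBY_CONTAINER_IPS}; do
            # Port 3000 is a placeholder for the Ruby app's port; it is not
            # published on the host, so only Nginx can reach it.
            echo "    server ${ip}:3000;"
        done
        echo "}"
        echo "server {"
        echo "    listen 443 ssl;"
        echo "    ssl_certificate     /etc/nginx/ssl/server.crt;"
        echo "    ssl_certificate_key /etc/nginx/ssl/server.key;"
        echo "    location / { proxy_pass http://ruby_app; }"
        echo "}"
    } > "${CONF}"

    # Reload Nginx so the new upstream list takes effect without downtime.
    nginx -s reload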

The volumes parameter allowed us to securely store the SSL certificates on the cloud servers on which these containers run and then map that directory to a volume on the Nginx container.

The host parameter is optional, but it allows us to distribute containers across multiple hosts if we opt to use DCHQ’s software-defined networking. For example, if we wanted to split the Ruby containers across multiple hosts for high availability, DCHQ would let us do that.

The registry_id parameter allowed us to pull images from our private Docker Hub repository.

The mem_min parameter allowed us to specify the minimum amount of memory to allocate to the container. Of course, as the load on the containers increases, the containers will automatically use whatever memory is available on the host.

Lastly, we used environment variables to point to the URLs of the MongoDB and Redis services.
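
Pulling those parameters together, a rough docker run equivalent of one Ruby container and the Nginx container looks something like the sketch below. Image names, paths, and environment variable names are illustrative, and mem_min roughly corresponds to Docker's soft memory limit:

    # Illustrative docker run equivalents of the DCHQ template (all names hypothetical).

    # Ruby container: no published ports (Nginx reaches it by private IP),
    # image pulled from the private Docker Hub repo (registry_id), a soft
    # memory floor (mem_min), and environment variables pointing at the
    # existing MongoDB Replica Set and Redis services.
    docker run -d --name ruby-1 \
        --memory-reservation 512m \
        -e MONGODB_URI="mongodb://mongo-1:27017,mongo-2:27017/proleads?replicaSet=rs0" \
        -e REDIS_URL="redis://redis-1:6379" \
        proleads/app:latest

    # Nginx container: publishes 443 on the host and mounts the host directory
    # holding the SSL certificates as a read-only volume (the volumes parameter).
    docker run -d --name nginx \
        -p 443:443 \
        -v /etc/ssl/proleads:/etc/nginx/ssl:ro \
        nginx:latest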

Using DCHQ in Production

We believe that our setup addresses our original objective, which is never being tied down to a specific cloud provider again. By containerizing Ruby, which is the constantly changing part of the application, we can now move to any cloud at any time.

Our highest priorities continue to be:

  1. Achieving high availability for our application in production

  2. Pushing updates to the application seamlessly and with zero downtime

DCHQ allows us to achieve both of these things. The Ruby container cluster means that there is no single point of failure. DCHQ allowed us to route traffic through an Nginx container to multiple Ruby containers that are all talking to the same shared Redis and MongoDB services.

Moreover, by automating the image builds, DCHQ allows us to continuously deploy our latest Ruby containers in an automated and seamless fashion. Additionally, by supporting the {{timestamp}} tag name, we can always revert to older Ruby images if we need to roll back.

If we ever wanted to scale out the Ruby container cluster, we could simply use the scale-out feature in DCHQ. The plug-in framework allows us to update the Nginx configuration as part of the scale-out operation to make the process more seamless. The scale-in and scale-out operations can also be scheduled. We don’t use this today, but in the future we may define a scale-out policy for business hours and a scale-in policy for weekends.

With DCHQ we also get monitoring for the running containers. We can always track the CPU, memory, and I/O of the Nginx and Ruby containers and get notifications/alerts if the metrics exceed a pre-defined threshold.
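
For quick ad-hoc checks outside of DCHQ’s dashboard, the same per-container numbers can also be pulled straight from the Docker daemon on each host (the container names are the hypothetical ones used in the sketches above):

    # One-shot snapshot of CPU, memory, and network/block I/O for the
    # running containers (omit --no-stream for a continuously updating view).
    docker stats --no-stream nginx ruby-1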



Conclusion

Now that we’ve containerized our Ruby application stack, we’re no longer bound to a specific cloud provider or PaaS solution. Since DCHQ integrates with 13 different cloud providers, we can easily deploy the same template on any Linux host running anywhere.

An interesting side effect of this is that onboarding new developers is much easier, since they can install our entire platform on their own machines using DCHQ.io’s Hosted PaaS one-click deploy button. More importantly, however, our move to containers did not compromise our priorities in production: (1) high availability, and (2) pushing updates to production with zero downtime.

We still have some work to do to make the deploy procedure as efficient as git push, but it is well underway.

DCHQ’s application framework allowed us to easily containerize our Ruby stack, with support for integrating with the existing MongoDB, Redis, and Rackspace Load Balancer services, clustering our Ruby containers, scaling out the Ruby cluster while updating the load balancer, and monitoring the containers in production.

By moving from Heroku to DCHQ & Rackspace, we expect to save more than $150,000 over two years. This is quite a lot of money for a growing startup like ours!
