Heroku
We recently moved our main applications from Heroku to Kubernetes. The three main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and cost (GCP is cheaper for raw computing resources).
We prefer using managed services, so we are using Google Kubernetes Engine, Google Cloud SQL for PostgreSQL for our databases, and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we use CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.
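For a flavor of what "managed with Terraform" can look like for this kind of GKE + Cloud SQL + Memorystore setup, here is a minimal, hypothetical sketch; names, region, and machine tiers are assumptions, not the actual configuration:

```hcl
# Hypothetical Terraform sketch: a GKE cluster, a private-IP Cloud SQL Postgres
# instance, and a Memorystore Redis instance. Names, regions and tiers are placeholders.

resource "google_compute_network" "vpc" {
  name = "main-vpc" # assumed network name
}

resource "google_container_cluster" "primary" {
  name               = "main-cluster"
  location           = "europe-west1"
  network            = google_compute_network.vpc.id
  initial_node_count = 3
}

resource "google_sql_database_instance" "postgres" {
  name             = "main-postgres"
  database_version = "POSTGRES_11"
  region           = "europe-west1"

  settings {
    tier = "db-custom-2-7680"

    ip_configuration {
      ipv4_enabled    = false                         # no public IP
      private_network = google_compute_network.vpc.id # private networking was a key driver
      # (a google_service_networking_connection is also required; omitted here)
    }
  }
}

resource "google_redis_instance" "cache" {
  name           = "main-redis"
  memory_size_gb = 1
  region         = "europe-west1"
}
```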
Read the blog post to go more in depth.
We undertook the task of building a manufacturing ERP for small branded manufacturers. We needed to build a lot, fast, with a small team, and keep a clear focus on product delivery. We chose JavaScript / Node.js (a React + LoopBack full stack), Heroku, and Heroku Postgres (plus Heroku Redis). This decision guided us in picking other key technologies, and it has given us a high pace of product delivery and good service availability while operating with a small team.
We use Heroku because it's easy and fast. We spend 0 time on DevOps stuff (and I've spent a lot of time on that before), and it just keeps running. One-click install of add-ons and consolidated sign-on with billing are awesome.
If you're going to use a lot of memory or run many processes it gets expensive fast. But you probably shouldn't use that much memory, and you rarely need to run many processes. Heroku will start a new process if one dies (rare), so if you need extreme uptime you can pay for running multiples. :)
Their support is stellar even though we don't pay for top-tier support. Since we're in a different timezone it might take some time to get responses. But they always connect me to someone with deep technical insight who gives concrete feedback and helpful information, even when my problems are among the less common ones.
We began our hosting journey, as many do, on Heroku, because they make it easy to deploy your application and automate some of the routine tasks associated with deployments. However, as our team grew and our product matured, our needs outgrew Heroku. I will dive into the history and reasons for this in a future blog post.
We decided to migrate our infrastructure to Kubernetes running on Amazon EKS. Although Google Kubernetes Engine has a slightly more mature Kubernetes offering and is more user-friendly, we decided to go with EKS because we were already using other AWS services (including a previous migration from Heroku Postgres to AWS RDS). We are still in the process of moving our main website workloads to EKS; however, we have successfully migrated all our staging and testing PR apps to run in a staging cluster. We developed a Slack chatops application (also running in the cluster) which automates all the common tasks of spinning up and managing a production-like cluster for a pull request. This allows our engineering team to iterate quickly and safely test code in a full production environment. Helm plays a central role when deploying our staging apps into the cluster. We use CircleCI to build Docker containers for each PR push, which are then published to Amazon Elastic Container Registry (ECR). An upgrade-operator process watches the ECR repository for new containers and then uses Helm to roll out updates to the staging environments. All this happens automatically and makes it really easy for developers to get code onto servers quickly. The immutable and isolated nature of our staging environments means that we can do anything we want in that environment and quickly re-create or restore the environment to start over.
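For a rough idea of the CircleCI side of this, here is a hypothetical config fragment that builds an image for each push and publishes it to ECR; account ID, region, and repository name are placeholders, and the actual pipeline may differ:

```yaml
# Hypothetical .circleci/config.yml fragment: build an image per push and publish it
# to ECR so an operator in the cluster can pick it up and roll it out with Helm.
version: 2.1

jobs:
  build-and-push:
    docker:
      - image: cimg/base:stable # assumes the AWS CLI is baked in or installed in an earlier step
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build image tagged with the commit SHA
          command: |
            docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:${CIRCLE_SHA1} .
      - run:
          name: Log in to ECR and push
          command: |
            aws ecr get-login-password --region us-east-1 \
              | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
            docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:${CIRCLE_SHA1}

workflows:
  build:
    jobs:
      - build-and-push
```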
The next step in our journey is to migrate our production workloads to an EKS cluster and build out the CD workflows to get our containers promoted to that cluster after our QA testing is complete in our staging environments.
Sometimes #ad-blocking add-ons can cause a real headache when working with JavaScript apps. Onboarding assistants (Appcues + elevio), chat (Intercom) and product usage insight (Hotjar) have all landed on their blacklists. I guess there is a perfectly good reason for this that I just don't know.
In order to fix this, we had to set up our own content delivery service. We chose Amazon CloudFront and Amazon S3 to do the job because they work well with the Heroku PaaS we are already using.
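As an illustration of the idea (not the exact implementation), one way this plays out on the front end is loading a copy of the vendor script from your own CloudFront + S3 domain instead of the vendor's; the domain and path below are placeholders:

```js
// Hypothetical sketch: serve a copy of a vendor script from our own CloudFront + S3
// domain so ad-block filter lists keyed on the vendor's domain no longer match.
const script = document.createElement('script');
script.src = 'https://static.our-app-cdn.com/vendor/analytics.js'; // CloudFront distribution in front of S3
script.async = true;
document.head.appendChild(script);
```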
When adding a new feature to Checkly or rearchitecting some older piece, I tend to pick Heroku for rolling it out. But not always, because sometimes I pick AWS Lambda. The short story:
- Developer Experience trumps everything.
- AWS Lambda is cheap. Up to a limit though. This impacts not only your wallet.
- If you need geographic spread, AWS is lonely at the top.
Recently, I was doing a brainstorm at a startup here in Berlin on the future of their infrastructure. They were ready to move on from their initial, almost 100% EC2 + Chef based setup. Everything was on the table. But we crossed out a lot quite quickly:
- Pure, uncut, self hosted Kubernetes — way too much complexity
- Managed Kubernetes in various flavors — still too much complexity
- Zeit — Maybe, but no Docker support
- Elastic Beanstalk — Maybe, bit old but does the job
- Heroku
- Lambda
It became clear a mix of PaaS and FaaS was the way to go. What a surprise! That is exactly what I use for Checkly! But when do you pick which model?
I chopped that question up into the following categories:
- Developer Experience / DX 🤓
- Ops Experience / OX 🐂 (?)
- Cost 💵
- Lock in 🔐
Read the full post linked below for all the details.
We are preparing to deploy a MERN-stack application (PWA) for a client. The app will be a public-facing real estate platform for listing, buying, and selling homes. While presenting a user experience much like a website, it retains the scalability and functionality of a web application.
I am weighing the pros and cons of using Microsoft Azure over Heroku, especially now that Heroku no longer supports mLab for connecting Mongo databases. Suggestions and feedback are always welcome.
Fair enough. I will give Digital Ocean some continued consideration as well. Thank you for the advice!
Even if the integration is no longer available on Heroku, you can still spin up a hosted MongoDB database and deploy it in one of the regions that Heroku uses, for good latency (e.g. AWS Oregon for North America): https://www.dropbox.com/s/k2y2xbpoy95b09l/Pasted_Image_9_14_20__11_55_PM.png?dl=0
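For illustration, a minimal Node.js sketch of what that looks like from the app side, assuming the connection string for the hosted database is stored in a Heroku config var; the URI and db/collection names are placeholders:

```js
// Hypothetical sketch: connect to an externally hosted MongoDB (e.g. a cluster in
// AWS Oregon / us-west-2, close to Heroku's US region) via a Heroku config var.
// Set it once with:
//   heroku config:set MONGODB_URI="mongodb+srv://user:pass@cluster0.example.mongodb.net/app"
const { MongoClient } = require('mongodb');

async function main() {
  const client = new MongoClient(process.env.MONGODB_URI);
  await client.connect();

  const listings = client.db('app').collection('listings'); // placeholder db/collection names
  console.log('listings count:', await listings.countDocuments());

  await client.close();
}

main().catch(console.error);
```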
I really like how simple the Heroku interface is, how reliable their services are, and in general how well their CLI tools work.
The Azure control panel has grown to a point where it's very convoluted, and in general it's a bit more expensive than the rest. They also stopped their entrepreneur incentive program (Spark?) so there's little incentive to start something new on it.
Depending on what I'm building I usually go for: a) Vercel + serverless functions if it's a React SPA, b) Heroku for NodeJS/Express + Postgres + any FE framework you like, c) DigitalOcean if I need full control of the server.
That said... if latency is REALLY important then go with Azure. If you have tradeoffs, go for the ones that make your customer's experience better, even if you're annoyed at Azure's interface, or have to pay a few extra bucks
Hope that helps
This is definitely useful information to be aware of. Thank you for your input. Right now we are leaning toward Heroku but these recommendations for Digital Ocean are also something to consider. Thank you for your advice, I appreciate it!
Heroku is one of the leading cloud platforms and it is the ideal candidate for our workflow. It will provide us with instant deployment capabilities, which will be really helpful in publishing the early stages of our product. It also provides vertical and horizontal scalability, which is going to be of real importance to us in the future.
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for documentation, GettingStarted and HowTo's) as our collaborative review and code management tool
- Git as the underlying revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (to automate the development process)
- Prettier / TSLint / ESLint as code formatters and linters
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management; see the sketch after this list)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as a facade/reverse-proxy server in the production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
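To make the Docker Compose piece above more concrete, here is a minimal, hypothetical docker-compose.yml in the spirit of this stack; image versions, credentials, and service names are placeholders, not our actual configuration:

```yaml
# Hypothetical docker-compose.yml: the app container plus PostgreSQL, Redis,
# and an nginx facade wired together for local/multi-container use.
version: "3.8"

services:
  app:
    build: .
    environment:
      DATABASE_URL: postgres://postgres:postgres@db:5432/app
      REDIS_URL: redis://cache:6379
    depends_on: [db, cache]

  db:
    image: postgres:12
    environment:
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: app

  cache:
    image: redis:6

  nginx:
    image: nginx:1.19
    ports: ["80:80"]
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on: [app]
```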
The main reasons we chose Kubernetes over Docker Swarm are the following:
- Key features: easy and flexible installation, clear dashboard, great scaling operations, monitoring as an integral part, great load-balancing concepts, and health monitoring with automatic compensation in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: it supports multiple logging and monitoring options when services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), has a huge community among container orchestration tools, and is an open-source, modular tool that works with any OS.
So why is your deployment different for your (Heroku) test/dev and your stage/production?
When it comes to testing our web app we do not demand great computational resources, and we need a very simple, convenient, and fast PaaS solution for deploying the app to our testers. In production, though, the demand for computational resources can rise very fast, and with Amazon we are able to control that in a better way.