What is Laravel Forge and what are its top alternatives?
Top Alternatives to Laravel Forge
- Envoyer
Envoyer deploys your PHP applications with zero downtime. Just push your code, and let Envoyer deliver your application to one or many servers without interrupting a single customer. In this series, we'll discuss each feature of Envoyer, demonstrating how to use them with a sample project. ...
- Runcloud
SaaS-based PHP cloud server control panel. Supports DigitalOcean, Linode, AWS, Vultr, Azure and other custom VPS providers. Git deployment webhooks and the easiest control panel to manage Laravel, CakePHP, Symfony or WordPress. ...
- ServerPilot
It is a SaaS platform for hosting PHP websites on Ubuntu servers. You can think of it as a modern, centralized hosting control panel. Manage all servers and sites through a single control panel or automate using our API. ...
- Vapor
Vapor is the first true web framework for Swift. It provides a beautifully expressive foundation for your app without tying you to any single server implementation. ...
- Docker
The Docker Platform is the industry-leading container platform for continuous, high-velocity innovation, enabling organizations to seamlessly build and share any application — from legacy to what comes next — and securely run them anywhere ...
- Heroku
Heroku is a cloud application platform – a new way of building and deploying web apps. Heroku lets app developers spend 100% of their time on their application code, not managing servers, deployment, ongoing operations, or scaling. ...
- DigitalOcean
We take the complexities out of cloud hosting by offering blazing fast, on-demand SSD cloud servers, straightforward pricing, a simple API, and an easy-to-use control panel. ...
- Git
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...
Laravel Forge alternatives & related posts
Pros of Envoyer
- Easy to use (3)
related Envoyer posts
related Runcloud posts
related ServerPilot posts
Pros of Vapor
- Fast (13)
- Swift (11)
- Type-safe (10)
- Great for APIs (6)
- Readable (5)
- Compiled to machine code (5)
- Good abstraction (5)
- Asynchronous (5)
- Maintainable (3)
- Complete (1)
- Mature (1)
Cons of Vapor
- Server-side Swift is still in its infancy (1)
- Not as much support available (1)
related Vapor posts
Pros of Docker
- Rapid integration and build up (823)
- Isolation (692)
- Open source (521)
- Testability and reproducibility (505)
- Lightweight (460)
- Standardization (218)
- Scalable (185)
- Upgrading / downgrading / application versions (106)
- Security (88)
- Private PaaS environments (85)
- Portability (34)
- Limit resource usage (26)
- Game changer (17)
- I love the way Docker has changed virtualization (16)
- Fast (14)
- Concurrency (12)
- Docker's Compose tools (8)
- Fast and portable (6)
- Easy setup (6)
- Because it's fun (5)
- Makes shipping to production very simple (4)
- It's dope (3)
- Highly useful (3)
- Does a nice job hogging memory (2)
- Open source and highly configurable (2)
- Simplicity, isolation, resource effective (2)
- MacOS support FAKE (2)
- It's cool (2)
- Docker Hub for the FTW (2)
- High throughput (2)
- Very easy to set up, integrate and build (2)
- Package the environment with the application (2)
- Super (2)
Cons of Docker
- New versions == broken features (8)
- Unreliable networking (6)
- Documentation not always in sync (6)
- Moves quickly (4)
- Not secure (3)
related Docker posts
Our whole DevOps stack consists of the following tools:
- GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
- Respectively Git as revision control system
- SourceTree as Git GUI
- Visual Studio Code as IDE
- CircleCI for continuous integration (to automate the development process)
- Prettier / TSLint / ESLint as code linter
- SonarQube as quality gate
- Docker as container management (incl. Docker Compose for multi-container application management)
- VirtualBox for operating system simulation tests
- Kubernetes as cluster management for docker containers
- Heroku for deploying in test environments
- nginx as web server (preferably used as facade server in production environment)
- SSLMate (using OpenSSL) for certificate management
- Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
- PostgreSQL as preferred database system
- Redis as preferred in-memory database/store (great for caching)
The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:
- Key features: easy and flexible installation, a clear dashboard, great scaling operations, monitoring as an integral part, great load-balancing concepts, and health checks that monitor components and replace them in the event of failure.
- Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services); a minimal sketch follows this list.
- Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
- Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
- Scalability: All-in-one framework for distributed systems.
- Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
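To make the pods/deployments/services point above concrete, here is a minimal, hypothetical sketch using the official Kubernetes Python client; the image, names, and namespace are placeholders, not part of the stack described above:

```python
# Minimal sketch: create a Deployment (which manages pods) and a Service in front of it.
# Names, image and namespace are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # uses your local ~/.kube/config

container = client.V1Container(
    name="web",
    image="nginx:1.25",
    ports=[client.V1ContainerPort(container_port=80)],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
```

The same objects are more commonly written as YAML manifests and applied with kubectl; the client-library form is just a compact way to show how pods, deployments, and services fit together.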
Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of repeating the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the available readmes while Vagrant and VirtualBox install and update. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean `vagrant up` or `vagrant reload` gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid or too loose versioning? Sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing a dev environment into a proper, production-grade product.
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no extra work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as `vagrant up`, but the meat of our product lies in Ansible, which does the bulk of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers, reports, and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do mostly the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the commonly handy plugins like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
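As a rough illustration of rule 2 (and not the author's actual setup), a build step could source its credentials from a Vault KV store with the hvac Python client along these lines; the Vault address, token variable, and secret path are all assumptions:

```python
# Hypothetical sketch: fetch deployment credentials from a Vault KV v2 store.
# URL, token env var, and secret path are placeholders, not a real setup.
import os
import hvac

vault = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.internal:8200"),
    token=os.environ["VAULT_TOKEN"],  # injected by the CI/CD agent only
)
assert vault.is_authenticated()

secret = vault.secrets.kv.v2.read_secret_version(path="myapp/production")
db_password = secret["data"]["data"]["db_password"]
# Hand the value to the deployment step instead of baking it into the repo.
```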
Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I'm also constantly peeking at the load and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. After that, all you must write are Ansible scripts to manage the Proxmox hosts, much the same way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
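"Peeking at the load" to find that firm baseline can be as simple as pulling utilization metrics for a period and looking at the floor. A hedged sketch with boto3 and CloudWatch; the region, instance ID, and time window are placeholders:

```python
# Hypothetical sketch: estimate the baseline CPU load of an EC2 instance
# over the last two weeks to judge whether it belongs on bare metal.
import datetime as dt
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")  # placeholder region
end = dt.datetime.utcnow()
start = end - dt.timedelta(days=14)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=start,
    EndTime=end,
    Period=3600,             # hourly datapoints
    Statistics=["Average"],
)
averages = [d["Average"] for d in stats["Datapoints"]]
print(f"Baseline (lowest hourly average CPU): {min(averages):.1f}%")
print(f"Peak (highest hourly average CPU): {max(averages):.1f}%")
```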
Heroku
Pros of Heroku
- Easy deployment (703)
- Free for side projects (459)
- Huge time-saver (374)
- Simple scaling (348)
- Low devops skills required (261)
- Easy setup (190)
- Add-ons for almost everything (174)
- Beginner friendly (153)
- Better for startups (150)
- Low learning curve (133)
- Postgres hosting (48)
- Easy to add collaborators (41)
- Faster development (30)
- Awesome documentation (24)
- Simple rollback (19)
- Focus on product, not deployment (19)
- Natural companion for Rails development (15)
- Easy integration (15)
- Great customer support (12)
- GitHub integration (8)
- Painless & well documented (6)
- No-ops (6)
- I love that they make it free to launch a side project (4)
- Free (4)
- Great UI (3)
- Just works (3)
- PostgreSQL forking and following (2)
- MySQL extension (2)
- Security (1)
- Able to host stuff like a Discord bot (1)
Cons of Heroku
- Super expensive (27)
- Not a whole lot of flexibility (9)
- No usable MySQL option (7)
- Storage (7)
- Low performance on free tier (5)
- 24/7 support is $1,000 per month (2)
related Heroku posts
StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independently of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together, all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split our front-end application out into a separate repo in GitHub. The driving factor in this decision was mostly the limitations imposed by Heroku, specifically that processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in sync with changes.
Related to this, we need a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 / Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The results of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit
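The last rollout step described above - push the freshly built Webpack bundles to S3, then flip a key in Redis so the Rails app serves them - might look roughly like this sketch; the bucket name, key layout, and Redis key are invented for illustration:

```python
# Hypothetical sketch of an "upload bundles, then point the app at them" rollout.
# Bucket, paths, and Redis key are placeholders, not StackShare's actual values.
import glob
import os
import boto3
import redis

BUCKET = "frontend-bundles"                # placeholder bucket
BUILD_ID = os.environ["CIRCLE_BUILD_NUM"]  # assuming CircleCI's build number

s3 = boto3.client("s3")
for path in glob.glob("dist/*.js"):
    key = f"{BUILD_ID}/{os.path.basename(path)}"
    s3.upload_file(path, BUCKET, key,
                   ExtraArgs={"ContentType": "application/javascript"})

# Tell the Rails app which bundle set to serve from the CDN.
r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"))
r.set("frontend:current_build", BUILD_ID)
```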
DigitalOcean
Pros of DigitalOcean
- Great value for money (560)
- Simple dashboard (364)
- Good pricing (362)
- SSDs (300)
- Nice UI (250)
- Easy configuration (191)
- Great documentation (156)
- SSH access (138)
- Great community (135)
- Ubuntu (24)
- Docker (13)
- IPv6 support (12)
- Private networking (10)
- 99.99% uptime SLA (8)
- Simple API (7)
- Great tutorials (7)
- 55-second provisioning (6)
- One-click applications (5)
- Dokku (4)
- LAMP (4)
- Debian (4)
- CoreOS (4)
- Node.js (4)
- 1 Gb/sec servers (3)
- WordPress (3)
- MEAN (3)
- LEMP (3)
- Simple control panel (3)
- Ghost (3)
- Runs CoreOS (2)
- Quick and no-nonsense service (2)
- Django (2)
- Good tutorials (2)
- Speed (2)
- Ruby on Rails (2)
- GitLab (2)
- Hex-core machines with dedicated ECC RAM and RAID SSDs (2)
- CentOS (1)
- Spaces (1)
- KVM virtualization (1)
- Amazing hardware (1)
- Transfer globally (1)
- Fedora (1)
- FreeBSD (1)
- Drupal (1)
- FreeBSD AMP (1)
- Magento (1)
- ownCloud (1)
- RedMine (1)
- My go-to server provider (1)
- Ease and simplicity (1)
- Nice (1)
- Find it super fitting with my requirements (SSD, SSH) (1)
- Easy setup (1)
- Cheap (1)
- Static IP (1)
- It's the easiest to get started for small projects (1)
- Automatic backup (1)
- Great support (1)
- Quick and easy to set up (1)
- Servers on demand - literally (1)
- Reliability (1)
- Variety of services (0)
- Managed Kubernetes (0)
Cons of DigitalOcean
- No live support chat (3)
- Pricing (3)
related DigitalOcean posts
This week, we finally released NurseryPeople.com. In the end, I chose to provision our server on DigitalOcean. So far, I am SO happy with that decision. Although setting everything up was a challenge, and I learned a lot, DigitalOcean's blogs helped in so many ways. I was able to set up nginx and the Laravel web app pretty smoothly. I am also using Buddy for deploying changes made in git, which is super awesome. All I have to do in order to deploy is push my code to my private repo, and Buddy transfers everything over to DigitalOcean. So far, we haven't had any downtime, and DigitalOcean's prices are quite fair for the power under the hood.
Hello, I'm currently writing an e-commerce website with Laravel and Laravel Nova (as an admin panel). I want to start deploying the app and created a DigitalOcean account. After some searching about the deployment process, I saw that the setup via DigitalOcean (using Droplets) isn't very easy for beginners. Now I'm not sure how to deploy my app. I am torn between Laravel Forge and DigitalOcean (App Platform or Droplets?). I've read that Heroku and Laravel Vapor are a bit expensive, which is why I haven't considered them yet. I'd be happy to read your opinions on that topic!
Pros of Git
- Distributed version control system (1.4K)
- Efficient branching and merging (1.1K)
- Fast (959)
- Open source (845)
- Better than SVN (726)
- Great command-line application (368)
- Simple (306)
- Free (291)
- Easy to use (232)
- Does not require server (222)
- Distributed (27)
- Small & fast (22)
- Feature-based workflow (18)
- Staging area (15)
- Most widespread VCS (13)
- Role-based codelines (11)
- Disposable experimentation (11)
- Frictionless context switching (7)
- Data assurance (6)
- Efficient (5)
- Just awesome (4)
- GitHub integration (3)
- Easy branching and merging (3)
- Compatible (2)
- Flexible (2)
- Possible to lose history and commits (2)
- Rebase supported natively; reflog; access to plumbing (1)
- Light (1)
- Team integration (1)
- Fast, scalable, distributed revision control system (1)
- Easy (1)
- Flexible, easy, safe, and fast (1)
- CLI is great, but the GUI tools are awesome (1)
- It's what you do (1)
- Phinx (0)
Cons of Git
- Hard to learn (16)
- Inconsistent command line interface (11)
- Easy to lose uncommitted work (9)
- Worst documentation ever possibly made (8)
- Awful merge handling (5)
- Nonexistent preventive security flows (3)
- Rebase hell (3)
- Ironically even die-hard supporters screw up badly (2)
- When --force is disabled, cannot rebase (2)
- Doesn't scale for big data (1)
related Git posts