
Alternatives to Chef

Ansible, Puppet Labs, Terraform, Jenkins, and Git are the most popular alternatives and competitors to Chef.

What is Chef and what are its top alternatives?

Chef is a powerful automation platform that allows users to define infrastructure as code, making it easy to manage and deploy complex systems. Its key features include creating reusable code templates, tracking infrastructure changes, and automating configuration management. However, Chef can have a steep learning curve for beginners and may require additional resources to fully utilize its capabilities.
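To make the "infrastructure as code" idea concrete before the list: configuration-management tools like Chef converge a machine toward a declared state, and only act when reality differs from the declaration (idempotence). Here is a minimal, hypothetical sketch of that loop in Python; the FileResource class is an illustration, not Chef's actual API (real Chef recipes are written in Ruby):

```python
# Toy illustration of declarative, idempotent configuration management.
# FileResource is a hypothetical name, not part of Chef or any real library.
from pathlib import Path


class FileResource:
    """Declare desired state; apply() only acts when reality differs."""

    def __init__(self, path: str, content: str):
        self.path = Path(path)
        self.content = content

    def apply(self) -> bool:
        # Already converged: do nothing (this is what makes the run idempotent).
        if self.path.exists() and self.path.read_text() == self.content:
            return False
        self.path.write_text(self.content)
        return True  # state was changed


resources = [FileResource("/tmp/motd", "Welcome to the demo host\n")]
for resource in resources:
    changed = resource.apply()
    print(f"{resource.path}: {'changed' if changed else 'up to date'}")
```

Running it twice prints "changed" the first time and "up to date" the second, which is the behavior every tool in the list below aims for at much larger scale.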

  1. Puppet: Puppet is an open-source configuration management tool that helps automate infrastructure tasks. Its key features include a declarative language for defining configurations and support for multiple operating systems. Pros of Puppet include scalability and extensive community support, while cons include complexity for beginners.
  2. Ansible: Ansible is an open-source automation tool that focuses on simplicity and ease of use. Key features include agentless architecture, playbooks for defining tasks, and a large library of modules. Pros of Ansible include quick setup and minimal configuration, while cons include limited scalability for larger environments.
  3. SaltStack: SaltStack is a configuration management and orchestration tool that offers flexibility and scalability. Its key features include remote execution, event-driven automation, and support for cloud infrastructure. Pros of SaltStack include fast performance and comprehensive documentation, while cons include a steeper learning curve compared to other tools.
  4. Terraform: Terraform is an infrastructure as code tool that allows for provisioning and managing resources in various cloud environments. Key features include declarative configuration, support for multiple providers, and a module system for organizing code. Pros of Terraform include multi-cloud support and version control integration, while cons include a learning curve and initial setup overhead.
  5. CFEngine: CFEngine is an automation tool for managing and securing IT infrastructure. Its key features include policy-based configuration management, reporting and monitoring capabilities, and support for compliance automation. Pros of CFEngine include efficient resource management and real-time insights, while cons include complexity and limited community support.
  6. Juju: Juju is a model-driven operations tool that focuses on simplicity and repeatability of infrastructure deployment. Key features include a charm store for reusable components, cross-cloud compatibility, and an intuitive graphical interface. Pros of Juju include ease of use and scalability, while cons include limited support for certain cloud providers.
  7. Docker: Docker is a containerization platform that allows for packaging and running applications in isolated environments. Its key features include lightweight containers, portability across different systems, and flexibility in deployment. Pros of Docker include resource efficiency and rapid development cycles, while cons include potential security risks and complexity in networking configurations.
  8. Kubernetes: Kubernetes is an open-source container orchestration platform that helps automate deployment, scaling, and management of containerized applications. Key features include container scheduling, service discovery, and self-healing capabilities. Pros of Kubernetes include high scalability and fault tolerance, while cons include complexity in configuration and maintenance.
  9. Octopus Deploy: Octopus Deploy is a release management tool that automates the deployment of applications and infrastructure. Its key features include release pipelines, multi-tenancy support, and integration with various tools and platforms. Pros of Octopus Deploy include ease of use and comprehensive deployment tracking, while cons include licensing costs for larger environments.
  10. Cobbler: Cobbler is an installation server that automates the provisioning of servers with support for various operating systems and configurations. Key features include network booting, system profiling, and configuration management. Pros of Cobbler include ease of setup and integration with existing tools, while cons include limited scalability for larger environments.

Top Alternatives to Chef

  • Ansible

    Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates. Ansible’s goals are foremost those of simplicity and maximum ease of use. ...

  • Puppet Labs

    Puppet is an automated administrative engine for your Linux, Unix, and Windows systems and performs administrative tasks (such as adding users, installing packages, and updating server configurations) based on a centralized specification. ...

  • Terraform

    With Terraform, you describe your complete infrastructure as code, even as it spans multiple service providers. Your servers may come from AWS, your DNS may come from CloudFlare, and your database may come from Heroku. Terraform will build all these resources across all these providers in parallel. ...

  • Jenkins

    In a nutshell Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project. ...

  • Git

    Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...

  • GitHub

    GitHub is the best place to share code with friends, co-workers, classmates, and complete strangers. Over three million people use GitHub to build amazing things together. ...

  • Visual Studio Code

    Build and debug modern web and cloud applications. Code is free and available on your favorite platform - Linux, Mac OSX, and Windows. ...

  • Docker

    The Docker Platform is the industry-leading container platform for continuous, high-velocity innovation, enabling organizations to seamlessly build and share any application — from legacy to what comes next — and securely run them anywhere ...

Chef alternatives & related posts


Ansible

Radically simple configuration-management, application deployment, task-execution, and multi-node orchestration engine
PROS OF ANSIBLE
  • Agentless (284)
  • Great configuration (210)
  • Simple (199)
  • Powerful (176)
  • Easy to learn (155)
  • Flexible (69)
  • Doesn't get in the way of getting s--- done (55)
  • Makes sense (35)
  • Super efficient and flexible (30)
  • Powerful (27)
  • Dynamic Inventory (11)
  • Backed by Red Hat (9)
  • Works with AWS (7)
  • Cloud oriented (6)
  • Easy to maintain (6)
  • Vagrant provisioner (4)
  • Simple and powerful (4)
  • Multi language (4)
  • Simple (4)
  • Because SSH (4)
  • Procedural or declarative, or both (4)
  • Easy (4)
  • Consistency (3)
  • Well-documented (2)
  • Masterless (2)
  • Debugging is simple (2)
  • Merge hash to get final configuration similar to hiera (2)
  • Fast as hell (2)
  • Manage any OS (1)
  • Works on Windows, but difficult to manage (1)
  • Certified Content (1)
CONS OF ANSIBLE
  • Dangerous (8)
  • Hard to install (5)
  • Doesn't run on Windows (3)
  • Bloated (3)
  • Backward compatibility (3)
  • No immutable infrastructure (2)
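The most-voted pro above, agentless operation, comes down to pushing work over plain SSH instead of installing a daemon on every node. A rough sketch of that model follows; the host and command are placeholders, and real Ansible adds modules, facts, and templating on top:

```python
# Hedged sketch of the agentless push model: nothing runs on the target
# except sshd, and commands are pushed over SSH from the control machine.
import subprocess


def run_remote(host: str, command: str) -> str:
    """Run one command on a remote host over SSH, like an ad-hoc task."""
    result = subprocess.run(
        ["ssh", host, command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout


if __name__ == "__main__":
    # "deploy@example.com" is a placeholder host for illustration.
    print(run_remote("deploy@example.com", "uptime"))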

related Ansible posts

Tymoteusz Paul
Devops guy at X20X Development LTD · | 23 upvotes · 9.8M views

Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - converting all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do mostly the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the commonly handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible to a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I'm also constantly peeking at the loads to see whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied in to cloud providers and getting out is expensive. Here, to embrace bare-metal hosting all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, in a similar way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
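For the curious, the Vagrant-then-Ansible loop described above reduces to two commands; a minimal wrapper might look like the sketch below. The playbook and inventory names are assumptions for illustration, and both CLIs must already be installed:

```python
# Hedged sketch of the dev-box flow: boot the Vagrant box, then converge it
# with an Ansible playbook. "site.yml" and "inventory.ini" are placeholders.
import subprocess


def provision(playbook: str = "site.yml", inventory: str = "inventory.ini") -> None:
    subprocess.run(["vagrant", "up"], check=True)         # boot the dev box
    subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],  # converge it
        check=True,
    )


if __name__ == "__main__":
    provision()
```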

See more
Sebastian Gębski

Heroku was a decent choice to start a business, but at some point our platform was too big, too complex & too heterogeneous, so Heroku started to be a constraint, not a benefit. First, we started containerizing our apps with Docker to eliminate the "works on my machine" syndrome & standardize the environment setup. The first orchestration was composed with Docker Compose, but at some point it made sense to move it to Kubernetes. Fortunately, we made a very good technical decision when starting our work with containers - all the container configuration & provisioning HAD (since the beginning) to be done in code (Infrastructure as Code) - we used Terraform & Ansible for that (respectively). This general trend of containerisation was accompanied by another, parallel & equally big project: migrating environments from Heroku to AWS, using Amazon EC2, Amazon EKS, Amazon S3 & Amazon RDS.

See more

Puppet Labs

Server automation framework and application
PROS OF PUPPET LABS
  • DevOps (52)
  • Automate it (44)
  • Reusable components (26)
  • Dynamic and idempotent server configuration (21)
  • Great community (18)
  • Very scalable (12)
  • Cloud management (12)
  • Easy to maintain (10)
  • Free tier (9)
  • Works with Amazon EC2 (6)
  • Declarative (4)
  • Ruby (4)
  • Works with Azure (3)
  • Works with OpenStack (3)
  • Nginx (2)
  • Ease of use (1)
CONS OF PUPPET LABS
  • Steep learning curve (3)
  • Custom types idempotence (1)

related Puppet Labs posts

Shared insights on Salt, Puppet Labs, and Ansible at Lyft

By 2014, the DevOps team at Lyft decided to port their infrastructure code from Puppet to Salt. At that point, the Puppet codebase included around "10,000 lines of spaghetti-code," which was unfamiliar and challenging to the relatively new members of the DevOps team.

“The DevOps team felt that the Puppet infrastructure was too difficult to pick up quickly and would be impossible to introduce to [their] developers as the tool they’d use to manage their own services.”

To determine a path forward, the team assessed both Ansible and Salt, exploring four key areas: simplicity/ease of use, maturity, performance, and community.

They found that "Salt's execution and state module support is more mature than Ansible's, overall," and that "Salt was faster than Ansible for state/playbook runs." And while both have high levels of community support, Salt exceeded expectations in terms of friendliness and responsiveness to opened issues.

See more
Marcel Kornegoor

Since #ATComputing is a vendor-independent Linux and open source specialist, we do not have a favorite Linux distribution. We mainly use Ubuntu, CentOS, Debian, Red Hat Enterprise Linux and Fedora during our daily work. These are also the distributions we see most often used in our customers' environments.

For our #ci/cd training, we use an open source pipeline that is built around Visual Studio Code, Jenkins, VirtualBox, GitHub, Docker, Kubernetes and Google Compute Engine.

For #ServerConfigurationAndAutomation, we have embraced and contributed to Ansible, mainly because it is not only flexible and powerful, but also straightforward and easier to learn than some other (open source) solutions. On the other hand: we are not afraid of Puppet Labs and Chef either.

Currently, our most popular #programming #Language course is Python. The reason Python is so popular has to do with its versatility, but also with its low complexity. This helps sysadmins write scripts or simple programs to make their job less repetitive and automating things more fun. Python is also widely used to communicate with (REST) APIs and for data analysis.

See more

Terraform

Describe your complete infrastructure as code and build resources across providers
PROS OF TERRAFORM
  • Infrastructure as code (121)
  • Declarative syntax (73)
  • Planning (45)
  • Simple (28)
  • Parallelism (24)
  • Well-documented (8)
  • Cloud agnostic (8)
  • It's like coding your infrastructure in simple English (6)
  • Immutable infrastructure (6)
  • Platform agnostic (5)
  • Extendable (4)
  • Automation (4)
  • Automates infrastructure deployments (4)
  • Portability (4)
  • Lightweight (2)
  • Scales to hundreds of hosts (2)
CONS OF TERRAFORM
  • Doesn't have full support for GKE (1)
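As a taste of the declarative, plan-first workflow the pros above describe: Terraform also accepts JSON configuration (*.tf.json), so a minimal config can be generated and previewed from a script. A hedged sketch follows; the local_file resource assumes the hashicorp/local provider, which terraform init downloads, and the terraform CLI must be on PATH:

```python
# Hedged sketch: generate a minimal Terraform JSON config and preview it.
# `terraform plan` shows the intended changes without applying anything.
import json
import subprocess

config = {
    "resource": {
        "local_file": {
            "demo": {"filename": "hello.txt", "content": "hello from terraform\n"}
        }
    }
}

with open("main.tf.json", "w") as f:
    json.dump(config, f, indent=2)

subprocess.run(["terraform", "init"], check=True)  # fetch providers
subprocess.run(["terraform", "plan"], check=True)  # preview only, apply nothing
```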

related Terraform posts

Context: I wanted to create an end-to-end IoT data pipeline simulation in Google Cloud IoT Core and other GCP services. I never touched Terraform meaningfully until working on this project, and it's one of the best explorations in my development career. The documentation and syntax are incredibly human-readable and friendly. I'm used to building infrastructure through the Google APIs via Python, but I'm so glad past Sung did not make that decision. I was tempted to use Google Cloud Deployment Manager, but the templates were a bit convoluted by first impression. I'm glad past Sung did not make this decision either.

Solution: Leveraging Google Cloud Build, Google Cloud Run, Google Cloud Bigtable, Google BigQuery, Google Cloud Storage, and Google Compute Engine along with some other fun tools, I can deploy over 40 GCP resources using Terraform!

Check Out My Architecture: CLICK ME

Check out the GitHub repo attached

See more
Emanuel Evans
Senior Architect at Rainforest QA · | 20 upvotes · 1.5M views

We recently moved our main applications from Heroku to Kubernetes. The 3 main driving factors behind the switch were scalability (database size limits), security (the inability to set up PostgreSQL instances in private networks), and costs (GCP is cheaper for raw computing resources).

We prefer using managed services, so we are using Google Kubernetes Engine with Google Cloud SQL for PostgreSQL for our PostgreSQL databases and Google Cloud Memorystore for Redis. For our CI/CD pipeline, we are using CircleCI and Google Cloud Build to deploy applications managed with Helm. The new infrastructure is managed with Terraform.

Read the blog post to go more in depth.

See more

Jenkins

An extendable open source continuous integration server
PROS OF JENKINS
  • Hosted internally (523)
  • Free open source (469)
  • Great to build, deploy or launch anything async (318)
  • Tons of integrations (243)
  • Rich set of plugins with good documentation (211)
  • Has support for build pipelines (111)
  • Easy setup (68)
  • It is open-source (66)
  • Workflow plugin (53)
  • Configuration as code (13)
  • Very powerful tool (12)
  • Many plugins (11)
  • Continuous Integration (10)
  • Great flexibility (10)
  • Git and Maven integration is better (9)
  • 100% free and open source (8)
  • GitHub integration (7)
  • Slack integration (plugin) (7)
  • Easy customisation (6)
  • Self-hosted GitLab integration (plugin) (6)
  • Docker support (5)
  • Pipeline API (5)
  • Fast builds (4)
  • Platform independence (4)
  • Hosted externally (4)
  • Excellent Docker integration (4)
  • It's worked (3)
  • Customizable (3)
  • Can be run as a Docker container (3)
  • It's everywhere (3)
  • Job DSL (3)
  • AWS integration (3)
  • Easily extendable with seamless integration (2)
  • PHP support (2)
  • Build PR branch only (2)
  • NodeJS support (2)
  • Ruby/Rails support (2)
  • Universal controller (2)
  • Loose coupling (2)
CONS OF JENKINS
  • Workarounds needed for basic requirements (13)
  • Groovy with cumbersome syntax (10)
  • Plugin compatibility issues (8)
  • Lack of support (7)
  • Limited abilities with declarative pipelines (7)
  • No YAML syntax (5)
  • Too tied to plugin versions (4)
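One reason Jenkins integrates with so many tools is its remote-access API: a build can be queued with a single authenticated POST. A hedged sketch follows; the URL, job name, user, and token are placeholders, and CSRF-protected servers may additionally require a crumb header:

```python
# Hedged sketch: queue a Jenkins build by POSTing to /job/<name>/build
# with HTTP basic auth (username + API token). All values are placeholders.
import base64
import urllib.request

JENKINS = "https://jenkins.example.com"
JOB, USER, TOKEN = "deploy-app", "ci-bot", "11abc-placeholder-token"

request = urllib.request.Request(f"{JENKINS}/job/{JOB}/build", method="POST")
credentials = base64.b64encode(f"{USER}:{TOKEN}".encode()).decode()
request.add_header("Authorization", f"Basic {credentials}")

with urllib.request.urlopen(request) as response:
    # Jenkins answers 201 when the build has been queued.
    print("queued, HTTP status:", response.status)
```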

related Jenkins posts

Tymoteusz Paul's CI/CD pipeline write-up, shown in full under "related Ansible posts" above, is equally relevant here: it weighs Jenkins against TeamCity in detail.
Thierry Schellenbach

Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.

Common tasks such as installing dependencies for the Go project, or building a binary are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.

Travis has come a long way over the past years. I used to prefer Jenkins in some cases since it was easier to debug broken builds. With the addition of the aptly named “debug build” button, Travis is now the clear winner. It’s easy to use and free for open source, with no need to maintain anything.

#ContinuousIntegration #CodeCollaborationVersionControl

See more

Git

Fast, scalable, distributed revision control system
PROS OF GIT
  • Distributed version control system (1.4K)
  • Efficient branching and merging (1.1K)
  • Fast (959)
  • Open source (845)
  • Better than SVN (726)
  • Great command-line application (368)
  • Simple (306)
  • Free (291)
  • Easy to use (232)
  • Does not require a server (222)
  • Distributed (27)
  • Small & fast (22)
  • Feature-based workflow (18)
  • Staging area (15)
  • Most widespread VCS (13)
  • Role-based codelines (11)
  • Disposable experimentation (11)
  • Frictionless context switching (7)
  • Data assurance (6)
  • Efficient (5)
  • Just awesome (4)
  • GitHub integration (3)
  • Easy branching and merging (3)
  • Compatible (2)
  • Flexible (2)
  • Possible to lose history and commits (2)
  • Rebase supported natively; reflog; access to plumbing (1)
  • Light (1)
  • Team integration (1)
  • Fast, scalable, distributed revision control system (1)
  • Easy (1)
  • Flexible, easy, safe, and fast (1)
  • CLI is great, but the GUI tools are awesome (1)
  • It's what you do (1)
CONS OF GIT
  • Hard to learn (16)
  • Inconsistent command line interface (11)
  • Easy to lose uncommitted work (9)
  • Worst documentation ever possibly made (8)
  • Awful merge handling (5)
  • Nonexistent preventive security flows (3)
  • Rebase hell (3)
  • Ironically even die-hard supporters screw up badly (2)
  • When --force is disabled, cannot rebase (2)
  • Doesn't scale for big data (1)
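The "efficient branching and merging" pro above is easy to feel from a script: branches are instant, local operations. A small sketch driving the git CLI follows (run it in a scratch directory; the repository contents are illustrative, and a reasonably recent Git with `git switch` is assumed):

```python
# Sketch of Git's cheap-branching workflow via the CLI.
import subprocess


def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)


git("init", "demo-repo")
# Identity config so commits work on a fresh machine (placeholder values).
git("-C", "demo-repo", "config", "user.email", "demo@example.com")
git("-C", "demo-repo", "config", "user.name", "Demo")

git("-C", "demo-repo", "commit", "--allow-empty", "-m", "initial commit")
git("-C", "demo-repo", "switch", "-c", "feature/greeting")  # branch is instant
git("-C", "demo-repo", "commit", "--allow-empty", "-m", "work on feature")
git("-C", "demo-repo", "switch", "-")                       # back to the main branch
git("-C", "demo-repo", "merge", "feature/greeting")         # fast-forward merge
```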

related Git posts

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · | 30 upvotes · 11.2M views

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management
  • Respectively Git as revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automating the development process)
  • Prettier / TSLint / ESLint as code linter
  • SonarQube as quality gate
  • Docker as container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

  • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
  • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: All-in-one framework for distributed systems.
  • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.
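As one concrete example of the monitoring point above, cluster health can be scripted against kubectl's JSON output. A hedged sketch (assumes kubectl is installed and a kubeconfig points at a reachable cluster):

```python
# Hedged sketch: list node readiness via kubectl's JSON output.
import json
import subprocess

raw = subprocess.run(
    ["kubectl", "get", "nodes", "-o", "json"],
    capture_output=True, text=True, check=True,
).stdout

for node in json.loads(raw)["items"]:
    name = node["metadata"]["name"]
    # Each node carries a list of conditions; "Ready" is the health signal.
    ready = next(
        c["status"] for c in node["status"]["conditions"] if c["type"] == "Ready"
    )
    print(f"{name}: Ready={ready}")
```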
See more
Tymoteusz Paul's CI/CD pipeline write-up, shown in full under "related Ansible posts" above, also applies here: it ties deployment builds directly to specific Git branches and tags.

GitHub

Powerful collaboration, review, and code management for open source and private development projects
PROS OF GITHUB
  • Open source friendly (1.8K)
  • Easy source control (1.5K)
  • Nice UI (1.3K)
  • Great for team collaboration (1.1K)
  • Easy setup (867)
  • Issue tracker (504)
  • Great community (487)
  • Remote team collaboration (483)
  • Great way to share (449)
  • Pull request and features planning (442)
  • Just works (147)
  • Integrated in many tools (132)
  • Free public repos (122)
  • GitHub Gists (116)
  • GitHub Pages (113)
  • Easy to find repos (83)
  • Open source (62)
  • Easy to find projects (60)
  • It's free (60)
  • Network effect (56)
  • Extensive API (49)
  • Organizations (43)
  • Branching (42)
  • Developer profiles (34)
  • Git-powered wikis (32)
  • Great for collaboration (30)
  • It's fun (24)
  • Clean interface and good integrations (23)
  • Community SDK involvement (22)
  • Learn from others' source code (20)
  • Because: Git (16)
  • It integrates directly with Azure (14)
  • Standard in open source collab (10)
  • Newsfeed (10)
  • Fast (8)
  • Beautiful user experience (8)
  • It integrates directly with Hipchat (8)
  • Easy to discover new code libraries (7)
  • Smooth integration (6)
  • Integrations (6)
  • Graphs (6)
  • Nice API (6)
  • It's awesome (6)
  • Cloud SCM (6)
  • Quick onboarding (5)
  • Remarkable uptime (5)
  • CI integration (5)
  • Reliable (5)
  • Hands down best online Git service available (5)
  • Version control (4)
  • Unlimited public repos at no cost (4)
  • Simple but powerful (4)
  • Loved by developers (4)
  • Free HTML hosting (4)
  • Uses Git (4)
  • Security options (4)
  • Easy to use and collaborate with others (4)
  • Easy deployment via SSH (3)
  • CI (3)
  • IAM (3)
  • Nice to use (3)
  • Easy and efficient maintenance of the projects (2)
  • Beautiful (2)
  • Self hosted (2)
  • Issues tracker (2)
  • Easy source control and everything is backed up (2)
  • Never dethroned (2)
  • All in one development service (2)
  • Good tools support (2)
  • Free HTML hosting (2)
  • IAM integration (2)
  • Very easy to use (2)
  • Easy to use (2)
  • Leads the copycats (2)
  • Free private repos (2)
  • Profound (1)
CONS OF GITHUB
  • Owned by Microsoft (55)
  • Expensive for lone developers that want private repos (38)
  • Relatively slow product/feature release cadence (15)
  • API scoping could be better (10)
  • Only 3 collaborators for private repos (9)
  • Limited feature set for issue management (4)
  • Does not have a graph for showing history like GitLens (3)
  • GitHub Packages does not support SNAPSHOT versions (2)
  • No multilingual interface (1)
  • Takes a long time to commit (1)
  • Expensive (1)
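The "Extensive API" pro above is easy to demonstrate: GitHub's public REST API serves repository metadata as JSON. A minimal sketch (the username is a placeholder; unauthenticated requests are rate-limited, and private repos need a token):

```python
# Sketch: list a user's public repositories via GitHub's REST API.
import json
import urllib.request

USER = "octocat"  # placeholder username
request = urllib.request.Request(
    f"https://api.github.com/users/{USER}/repos",
    headers={"Accept": "application/vnd.github+json"},
)
with urllib.request.urlopen(request) as response:
    repos = json.load(response)

for repo in repos:
    print(repo["full_name"], "-", repo["stargazers_count"], "stars")
```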

related GitHub posts

Johnny Bell

I was building a personal project for which I needed to store items in a real-time database. I am more comfortable with my frontend skills than my backend skills, so I didn't want to spend time building out anything in Ruby or Go.

I stumbled on Firebase by #Google, and it was really all I needed. It had realtime data, an area for storing file uploads, and, best of all, for the amount of data I needed it was free!

I built out my application using tools I was familiar with, React for the framework, Redux.js to manage my state across components, and styled-components for the styling.

Now as this was a project I was just working on in my free time for fun I didn't really want to pay for hosting. I did some research and I found Netlify. I had actually seen them at #ReactRally the year before and deployed a Gatsby site to Netlify already.

Netlify was very easy to set up and link to my GitHub account: you select a repo, and with very little configuration you have a live site that deploys every time you push to master.

With the selection of these tools I was able to build out my application, connect it to a realtime database, and deploy to a live environment all with $0 spent.

If you're looking to build out a small app I suggest giving these tools a go as you can get your idea out into the real world for absolutely no cost.

See more

The Terraform/GCP IoT pipeline post shown in full under "related Terraform posts" above also belongs here; the project's code lives in the attached GitHub repo.

Visual Studio Code

Build and debug modern web and cloud applications, by Microsoft
PROS OF VISUAL STUDIO CODE
  • Powerful multilanguage IDE (340)
  • Fast (308)
  • Front-end development out of the box (193)
  • Supports TypeScript IntelliSense (158)
  • Very basic but free (142)
  • Git integration (126)
  • IntelliSense (106)
  • Faster than Atom (78)
  • Better UI, easy plugins, and nice Git integration (53)
  • Great refactoring tools (45)
  • Good plugins (44)
  • Terminal (42)
  • Superb Markdown support (38)
  • Open source (36)
  • Extensions (35)
  • Awesome UI (26)
  • Large & up-to-date extension community (26)
  • Powerful and fast (24)
  • Portable (22)
  • Best code editor (18)
  • Best editor (18)
  • Easy to get started with (17)
  • Lots of extensions (15)
  • Good for beginners (15)
  • Cross-platform (15)
  • Built on Electron (15)
  • Extensions for everything (14)
  • Open, cross-platform, fast, monthly updates (14)
  • All languages support (14)
  • Easy to use and learn (13)
  • "Fast, stable & easy to use" (12)
  • Extensible (12)
  • UI design is great (11)
  • Totally customizable (11)
  • Git out of the box (11)
  • Useful for beginners (11)
  • Faster editing on slow computers (11)
  • SSH support (10)
  • Great community (10)
  • Fast startup (10)
  • Works with almost everything you need (9)
  • Great language support (9)
  • Powerful debugger (9)
  • It has a terminal and there are lots of shortcuts in it (9)
  • Can compile and run .py files (8)
  • Python extension is fast (8)
  • Feature-rich (7)
  • Great document formatter (7)
  • He is not Michael (6)
  • Extension ecosystem (6)
  • She is not Rachel (6)
  • Awesome multi-cursor support (6)
  • VSCode.pro course makes it easy to learn (5)
  • Language server client (5)
  • SFTP workspace (5)
  • Very professional (5)
  • Easy Azure (5)
  • Has better support and more extensions for debugging (4)
  • Supports lots of operating systems (4)
  • Excellent as Git difftool and mergetool (4)
  • Virtualenv integration (4)
  • Better autocompletes than Atom (3)
  • Has more than enough languages for any developer (3)
  • 'Batteries included' (3)
  • More tools to integrate with VS (3)
  • Emmet preinstalled (3)
  • VS Code Server: browser version of VS Code (2)
  • CMake support with autocomplete (2)
  • Microsoft (2)
  • Customizable (2)
  • Light (2)
  • Big extension marketplace (2)
  • Fast, and Ruby is built right in (2)
CONS OF VISUAL STUDIO CODE
  • Slow startup (46)
  • Resource hog at times (29)
  • Poor refactoring (20)
  • Poor UI designer (13)
  • Weak UI design tools (11)
  • Poor autocomplete (10)
  • Super slow (8)
  • Huge CPU usage with few installed extensions (8)
  • Microsoft sends telemetry data (8)
  • Poor in PHP (7)
  • It's MicroSoft (6)
  • Poor in Python (3)
  • No built-in browser preview (3)
  • No color integrator (3)
  • Very basic for Java development and buggy at times (3)
  • No built-in live preview (3)
  • Electron (3)
  • Bad plugin architecture (2)
  • Powered by Electron (2)
  • Terminal does not identify path vars sometimes (1)
  • Slow C++ language server (1)
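Many of the extension-related pros above can be scripted through the `code` command-line launcher that ships with VS Code. A hedged sketch that syncs a team's extension list (the extension IDs are examples; the launcher must be on PATH, which on macOS is set up from the command palette):

```python
# Hedged sketch: install any missing extensions from a wanted list,
# using VS Code's `code` CLI (--list-extensions / --install-extension).
import subprocess

WANTED = ["ms-python.python", "dbaeumer.vscode-eslint"]  # example IDs

installed = subprocess.run(
    ["code", "--list-extensions"], capture_output=True, text=True, check=True
).stdout.splitlines()

for extension in WANTED:
    if extension not in installed:
        subprocess.run(["code", "--install-extension", extension], check=True)
```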

related Visual Studio Code posts

Yshay Yaacobi

Our first experience with .NET Core was when we developed our OSS feature management platform - Tweek (https://github.com/soluto/tweek). We wanted to create a solution that is able to run anywhere (super important for OSS), has excellent performance characteristics and can fit in a multi-container architecture. We decided to implement our rule engine processor in F#, our main service was implemented in C#, and other components were built using JavaScript / TypeScript and Go.

Visual Studio Code worked really well for us too: it handled all our polyglot services well, and the .NET Core integration had a great cross-platform developer experience (to be fair, F# was a bit trickier) - in fact, each of our team members used a different OS (Ubuntu, macOS, Windows). Our production deployment ran for a time on Docker Swarm until we decided to adopt Kubernetes, with an almost seamless migration process.

After our positive experience of running .NET Core workloads in containers and developing Tweek's .NET services on non-Windows machines, C# regained some of its popularity (originally lost to Node.js), and other teams have been using it for developing microservices, k8s sidecars (like https://github.com/Soluto/airbag), CLI tools, serverless functions and other projects...

See more
Simon Reymann's DevOps-stack rundown, shown in full under "related Git posts" above, also applies here: Visual Studio Code is the stack's IDE of choice.

Docker

Enterprise Container Platform for High-Velocity Innovation.
PROS OF DOCKER
  • Rapid integration and build up (823)
  • Isolation (692)
  • Open source (521)
  • Testability and reproducibility (505)
  • Lightweight (460)
  • Standardization (218)
  • Scalable (185)
  • Upgrading / downgrading / application versions (106)
  • Security (88)
  • Private PaaS environments (85)
  • Portability (34)
  • Limit resource usage (26)
  • Game changer (17)
  • I love the way Docker has changed virtualization (16)
  • Fast (14)
  • Concurrency (12)
  • Docker's Compose tools (8)
  • Fast and portable (6)
  • Easy setup (6)
  • Because it's fun (5)
  • Makes shipping to production very simple (4)
  • It's dope (3)
  • Highly useful (3)
  • Does a nice job hogging memory (2)
  • Open source and highly configurable (2)
  • Simplicity, isolation, resource effective (2)
  • MacOS support FAKE (2)
  • It's cool (2)
  • Docker Hub for the FTW (2)
  • High throughput (2)
  • Very easy to set up, integrate and build (2)
  • Package the environment with the application (2)
  • Super (2)
CONS OF DOCKER
  • New versions == broken features (8)
  • Unreliable networking (6)
  • Documentation not always in sync (6)
  • Moves quickly (4)
  • Not secure (3)
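The build-once, run-anywhere flow behind many of the pros above fits in a few lines around the docker CLI. A hedged sketch (assumes the docker CLI and a running daemon; the image name is illustrative):

```python
# Sketch: write a minimal Dockerfile, build an image, and run it once.
import pathlib
import subprocess

pathlib.Path("Dockerfile").write_text(
    "FROM python:3.12-slim\n"
    'CMD ["python", "-c", "print(\'hello from a container\')"]\n'
)

subprocess.run(["docker", "build", "-t", "demo-app", "."], check=True)
subprocess.run(["docker", "run", "--rm", "demo-app"], check=True)
```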

related Docker posts

Simon Reymann's DevOps-stack rundown and Tymoteusz Paul's CI/CD pipeline write-up, both shown in full under "related Git posts" and "related Ansible posts" above, lean heavily on Docker as well.