Alternatives to AWS CodePipeline

AWS CodeDeploy, Jenkins, AWS CodeBuild, TeamCity, and Bamboo are the most popular alternatives and competitors to AWS CodePipeline.

What is AWS CodePipeline and what are its top alternatives?

AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on the release process models you define. It is a tool in the Continuous Deployment category of a tech stack.
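To make "release process model" concrete: a pipeline is just a declarative structure of stages and actions. The sketch below (using boto3's CodePipeline client) builds such a declaration; every name, ARN, and bucket is a placeholder, and the actual API call is left commented out.

```python
# Minimal sketch of a Source -> Build -> Deploy pipeline declaration,
# shaped like the "pipeline" argument of codepipeline.create_pipeline().
# All names/ARNs below are placeholders, not real resources.
def build_pipeline_declaration(name, role_arn, artifact_bucket):
    def action(action_name, category, provider, configuration):
        return {
            "name": action_name,
            "actionTypeId": {
                "category": category,   # Source | Build | Deploy
                "owner": "AWS",
                "provider": provider,
                "version": "1",
            },
            "runOrder": 1,
            "configuration": configuration,
        }

    return {
        "name": name,
        "roleArn": role_arn,
        "artifactStore": {"type": "S3", "location": artifact_bucket},
        "stages": [
            {"name": "Source", "actions": [action(
                "Checkout", "Source", "CodeCommit",
                {"RepositoryName": "my-repo", "BranchName": "main"})]},
            {"name": "Build", "actions": [action(
                "Compile", "Build", "CodeBuild",
                {"ProjectName": "my-build-project"})]},
            {"name": "Deploy", "actions": [action(
                "Release", "Deploy", "CodeDeploy",
                {"ApplicationName": "my-app",
                 "DeploymentGroupName": "prod"})]},
        ],
    }

# With AWS credentials configured, creating it would look like:
# import boto3
# boto3.client("codepipeline").create_pipeline(
#     pipeline=build_pipeline_declaration(
#         "my-app-pipeline",
#         "arn:aws:iam::123456789012:role/pipeline-role",
#         "my-artifact-bucket"))
```

Once created, every change on the source branch flows through the stages in order, with artifacts handed between them via the S3 artifact store.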

Top Alternatives to AWS CodePipeline

  • AWS CodeDeploy — a service that automates code deployments to Amazon EC2 instances. AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during deployment, and handles the complexity of updating your applications.

  • Jenkins — in a nutshell, Jenkins CI is the leading open-source continuous integration server. Built with Java, it provides over 300 plugins to support building and testing virtually any project.

  • AWS CodeBuild — a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers.

  • TeamCity — a user-friendly continuous integration (CI) server for professional developers, build engineers, and DevOps. It is trivial to set up and absolutely free for small teams and open-source projects.

  • Bamboo — focus on coding and count on Bamboo as your CI and build server. Create multi-stage build plans, set up triggers to start builds upon commits, and assign agents to your critical builds and deployments.

  • AWS CodeStar — start new software projects on AWS in minutes using templates for web applications, web services, and more.

  • Azure DevOps — provides unlimited private Git hosting, cloud build for continuous integration, agile planning, and release management for continuous delivery to the cloud and on-premises. Includes broad IDE support.

  • CircleCI — a continuous integration and delivery platform that helps software teams rapidly release code with confidence by automating the build, test, and deploy process.

AWS CodePipeline alternatives & related posts

AWS CodeDeploy

Coordinate application deployments to Amazon EC2 instances
PROS OF AWS CODEDEPLOY
  • Automates code deployments (17)
  • Backed by Amazon (9)
  • Adds autoscaling lifecycle hooks (7)
  • Git integration (5)
CONS OF AWS CODEDEPLOY
  • No cons listed yet
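The "automates code deployments" point above boils down to one API call once a revision bundle is in S3. A hedged sketch (application, group, bucket, and key are placeholders):

```python
# Sketch: kicking off a CodeDeploy deployment from an S3 revision bundle.
def codedeploy_revision(bucket, key, bundle_type="zip"):
    # Shape of the "revision" argument expected by
    # codedeploy.create_deployment(); bucket/key are placeholders.
    return {
        "revisionType": "S3",
        "s3Location": {"bucket": bucket, "key": key, "bundleType": bundle_type},
    }

# With AWS credentials configured, the actual call would look like:
# import boto3
# boto3.client("codedeploy").create_deployment(
#     applicationName="my-app",
#     deploymentGroupName="prod",
#     revision=codedeploy_revision("my-artifacts", "releases/app-1.2.3.zip"))
```

The deployment group then drives the rollout (in-place or blue/green) across its EC2 instances, including the autoscaling lifecycle hooks mentioned above.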

    related AWS CodeDeploy posts

Chris McFadden, VP of Engineering at SparkPost:

The recent move of our CI/CD tooling to AWS CodeBuild / AWS CodeDeploy (with GitHub), as well as moving to Amazon EC2 Container Service / AWS Lambda for our deployment architecture for most of our services, has helped us significantly reduce our deployment times while improving both feature velocity and overall reliability. In one extreme case, we got one service down from 90 minutes to a very reasonable 15 minutes. Container-based builds and deployments have made so many things simpler and easier, and the integration between the tools has been helpful. There is still some work to do on our service mesh & API proxy approach to further simplify our environment.

Sathish Raju, Founder/CTO at Kloudio:

At Kloud.io we use Node.js for our backend microservices and Angular 2 for the frontend. We also use React for a couple of our internal applications. Writing services in Node.js with TypeScript improved developer productivity, and we could catch bugs well before they occur in production. The use of Angular 2 in our production environment reduced the time to release any new features. At the same time, we are also exploring React by using it in our internal tools. So far we have enjoyed what React has to offer. We are an enterprise SaaS product and also offer an on-premise or hybrid cloud version of #kloudio. We heavily use Docker for shipping our on-premise version. We also use Docker internally for automated testing. Using Docker reduced the install-time errors in customer environments. Our cloud version is deployed in #AWS. We use AWS CodePipeline and AWS CodeDeploy for our CI/CD. We also use AWS Lambda for automation jobs.

Jenkins

An extendable open source continuous integration server
PROS OF JENKINS
  • Hosted internally (521)
  • Free open source (463)
  • Great to build, deploy or launch anything async (313)
  • Tons of integrations (243)
  • Rich set of plugins with good documentation (208)
  • Has support for build pipelines (108)
  • Open source and tons of integrations (72)
  • Easy setup (63)
  • It is open-source (61)
  • Workflow plugin (54)
  • Configuration as code (11)
  • Very powerful tool (10)
  • Many plugins (9)
  • Great flexibility (8)
  • Git and Maven integration is better (8)
  • Continuous integration (7)
  • GitHub integration (6)
  • Slack integration (plugin) (6)
  • 100% free and open source (5)
  • Self-hosted GitLab integration (plugin) (5)
  • Easy customisation (5)
  • Docker support (4)
  • Pipeline API (3)
  • Excellent Docker integration (3)
  • Platform independence (3)
  • Fast builds (3)
  • Hosted externally (2)
  • It's worked (2)
  • Can be run as a Docker container (2)
  • Customizable (2)
  • AWS integration (2)
  • It's everywhere (2)
  • Job DSL (2)
  • NodeJS support (1)
  • PHP support (1)
  • Ruby/Rails support (1)
  • Universal controller (1)
  • Easily extendable with seamless integration (1)
  • Build PR branch only (1)
CONS OF JENKINS
  • Workarounds needed for basic requirements (12)
  • Groovy with cumbersome syntax (8)
  • Plugin compatibility issues (6)
  • Limited abilities with declarative pipelines (6)
  • Lack of support (5)
  • No YAML syntax (4)
  • Too tied to plugin versions (2)
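One integration point worth knowing when comparing Jenkins with hosted services: Jenkins exposes a remote build-trigger endpoint (POST to /job/&lt;name&gt;/build, or /buildWithParameters for parameterized jobs). A small sketch that only builds the URL; the server, job name, and token are placeholders:

```python
from urllib.parse import urlencode, quote

def jenkins_trigger_url(base, job, params=None, token=None):
    """Build the URL for Jenkins' remote build-trigger endpoint.

    POST /job/<name>/build starts a plain build; /buildWithParameters
    passes build parameters. All values here are placeholders.
    """
    path = f"{base.rstrip('/')}/job/{quote(job)}/"
    query = dict(params or {})
    if token:
        query["token"] = token   # the job's "Trigger builds remotely" token
    if params:
        return path + "buildWithParameters?" + urlencode(query)
    return path + "build" + ("?" + urlencode(query) if query else "")
```

In practice you would POST to this URL with your Jenkins credentials (and a CSRF crumb if crumb issuing is enabled on the server).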

    related Jenkins posts

Thierry Schellenbach:

    Releasing new versions of our services is done by Travis CI. Travis first runs our test suite. Once it passes, it publishes a new release binary to GitHub.

    Common tasks such as installing dependencies for the Go project, or building a binary are automated using plain old Makefiles. (We know, crazy old school, right?) Our binaries are compressed using UPX.

    Travis has come a long way over the past years. I used to prefer Jenkins in some cases since it was easier to debug broken builds. With the addition of the aptly named “debug build” button, Travis is now the clear winner. It’s easy to use and free for open source, with no need to maintain anything.

    #ContinuousIntegration #CodeCollaborationVersionControl

Tymoteusz Paul, Devops guy at X20X Development LTD:

Often enough I have to explain my way of setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of saying the same thing every single time, I've decided to write it up and share it with the world, and send people to read it instead ;). I will explain it on a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to get over - convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do mostly the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built-in with TeamCity). It also comes with all the common handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me:

1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing, so appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the loads and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare-metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied to cloud providers and getting out is expensive. To embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the hardware of Proxmox, in a similar way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.

AWS CodeBuild

Build and test code with continuous scaling
PROS OF AWS CODEBUILD
  • Pay per minute (6)
  • Parameter Store integration for passing secrets (4)
  • Integrated with AWS (4)
  • Bitbucket integration (3)
  • GitHub webhooks support (2)
  • Streaming logs to Amazon CloudWatch (2)
  • Local build debug support (1)
  • Native support for accessing Amazon VPC resources (1)
  • VPC PrivateLinks to invoke the service without internet (1)
  • Docker-based build environment (1)
  • Support for bringing custom Docker images (1)
  • Fully managed (no installation/updates or servers to maintain) (1)
  • PCI, SOC, ISO, HIPAA compliant (1)
  • Full API/SDK/CLI support (1)
  • YAML-based configuration (1)
  • Great support (forums, premium support, SO, GitHub) (1)
  • Perpetual free tier option (100 mins/month) (1)
  • AWS Config and Config rule integration for compliance (1)
  • GitHub Enterprise support (1)
  • Windows/.NET support (1)
  • Jenkins plugin integration (1)
  • On-demand scaling of build jobs (1)
  • Scheduled builds with CloudWatch Events integration (1)
CONS OF AWS CODEBUILD
  • No cons listed yet
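The "YAML-based configuration" mentioned above refers to the buildspec, which drives each CodeBuild run. A sketch of its structure as a plain Python dict (serialize it to YAML, e.g. with PyYAML, before using it as a buildspec.yml or as a buildspecOverride); the commands and versions below are illustrative placeholders:

```python
# In-memory equivalent of a CodeBuild buildspec.yml; the phase commands
# and runtime versions here are placeholders, not a prescribed setup.
def buildspec(build_cmds, artifact_files, runtime=None):
    spec = {
        "version": 0.2,
        "phases": {"build": {"commands": list(build_cmds)}},
        "artifacts": {"files": list(artifact_files)},
    }
    if runtime:
        # e.g. {"python": "3.11"} under the install phase
        spec["phases"]["install"] = {"runtime-versions": dict(runtime)}
    return spec

spec = buildspec(
    ["pip install -r requirements.txt", "pytest"],
    ["dist/**/*"],
    runtime={"python": "3.11"},
)
```

From there, a build can be queued with boto3's codebuild.start_build(projectName=...), which is also how the "Jenkins plugin integration" drives CodeBuild as a worker fleet.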

      related AWS CodeBuild posts


Hi, I need advice. In my project we are using Bitbucket hosted on-prem, Jenkins, and Jira. We also have restrictions not to use any plugins for code review, code quality, code security, etc., with Bitbucket. Now we want to migrate to AWS CodeCommit, which would mean that we could use, say, Amazon CodeGuru for code reviews and move to AWS CodeBuild and AWS CodePipeline for build automation in the future rather than using Jenkins.

Now I want advice on the points below.

1. Is it a good idea to migrate from Bitbucket to AWS CodeCommit?
2. If we want to integrate Jira with AWS CodeCommit, how can we do this? If a developer makes any changes in Jira, then a build should be triggered automatically in AWS, and a Jira ticket should be created if the build fails. How can we achieve this?
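On the second point, one common wiring for the "create a Jira ticket if the build fails" half is an EventBridge rule on CodeBuild's "Build State Change" event that targets a small Lambda calling Jira's issue-creation REST endpoint (POST /rest/api/2/issue). A hedged sketch; the Jira domain, project key, and credentials are placeholders you would supply:

```python
import json
from urllib import request

def jira_issue_payload(project_key, summary, description):
    # Body for Jira's POST /rest/api/2/issue endpoint; "OPS" and
    # issue type "Bug" below are placeholders for your own project setup.
    return {"fields": {"project": {"key": project_key},
                       "summary": summary,
                       "description": description,
                       "issuetype": {"name": "Bug"}}}

def handler(event, context=None):
    # Lambda target for an EventBridge rule matching
    # source "aws.codebuild", detail-type "CodeBuild Build State Change".
    detail = event.get("detail", {})
    if detail.get("build-status") != "FAILED":
        return None  # only failed builds become tickets
    payload = jira_issue_payload(
        "OPS",
        f"Build failed: {detail.get('project-name')}",
        json.dumps(detail, indent=2))
    req = request.Request(
        "https://your-domain.atlassian.net/rest/api/2/issue",  # placeholder
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": "Basic <base64 email:api-token>"})
    with request.urlopen(req) as resp:  # network call; needs real credentials
        return resp.status
```

The other half (Jira change → build) is usually done the opposite way round: commits reference the Jira issue key, and Jira's development-panel integrations pick the link up, rather than Jira itself triggering CodeBuild.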
TeamCity

TeamCity is the ultimate Continuous Integration tool for professionals
PROS OF TEAMCITY
  • Easy to configure (59)
  • Reliable and high-quality (37)
  • User friendly (31)
  • GitHub integration (31)
  • On premise (31)
  • Great UI (18)
  • Smart (16)
  • Free for open source (12)
  • Can run jobs in parallel (12)
  • Cross-platform (8)
  • Chain dependencies (4)
  • Great support by JetBrains (4)
  • Fully functional out of the box (4)
  • Projects hierarchy (4)
  • REST API (4)
  • 100+ plugins (3)
  • Free for small teams (3)
  • Per-project permissions (3)
  • Personal notifications (3)
  • Build templates (3)
  • IDE plugins (2)
  • GitLab integration (2)
  • Smart build failure analysis and tracking (2)
  • Upload build artifacts (2)
  • Artifact dependencies (2)
  • Build progress messages promoted from the running process (2)
  • TeamCity Professional is free (1)
  • Powerful build chains / pipelines (1)
  • Built-in artifacts repository (1)
  • Repository-stored, full settings DSL with IDE support (1)
  • Official reliable support (0)
  • High availability (0)
  • Hosted internally (0)
CONS OF TEAMCITY
  • Proprietary (1)
  • High costs for more than three build agents (1)
  • User friendly (1)
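The "REST API" pro above is worth a concrete example: TeamCity queues a build via POST /app/rest/buildQueue with an XML body naming the build configuration. A small sketch that only constructs that body; the build configuration id and branch are placeholders:

```python
def teamcity_trigger_xml(build_type_id, branch=None):
    # Body for TeamCity's POST /app/rest/buildQueue endpoint
    # (sent with Content-Type: application/xml and your credentials).
    # The id "MyProj_Build" used in tests is a placeholder.
    branch_attr = f' branchName="{branch}"' if branch else ""
    return f'<build{branch_attr}><buildType id="{build_type_id}"/></build>'
```

Posting this with HTTP Basic auth (or a bearer token) to &lt;server&gt;/app/rest/buildQueue enqueues the build, which is how external systems hook into the build chains mentioned above.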

      related TeamCity posts

Sarah Elson, Product Growth at LambdaTest:

@producthunt LambdaTest Selenium JavaScript Java Python PHP Cucumber TeamCity CircleCI With this new release of LambdaTest automation, you can run tests across an online Selenium Grid of 2000+ browser and OS combinations to perform cross-browser testing. This saves you from the pain of maintaining the infrastructure and also saves you the licensing costs for browsers and operating systems. #testing #Seleniumgrid #Selenium #testautomation #automation #webdriver #producthunt

Bamboo

Tie automated builds, tests, and releases together in a single workflow
PROS OF BAMBOO
  • Integrates with other Atlassian tools (10)
  • Great notification scheme (4)
  • Great UI (2)
  • Has deployment projects (1)
CONS OF BAMBOO
  • Expensive (5)
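Like the other servers here, Bamboo can be driven remotely: its REST API queues a plan build with a POST to the queue resource. A sketch that only builds the URL; the server address and plan key are placeholders:

```python
def bamboo_trigger_url(base, plan_key):
    # POST here (with your Bamboo credentials) queues a build of the plan;
    # plan keys look like PROJECTKEY-PLANKEY. Both values are placeholders.
    return f"{base.rstrip('/')}/rest/api/latest/queue/{plan_key}"
```

Query parameters on the same endpoint can pass plan variables, which is how the multi-stage build plans above get parameterized from outside.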

      related Bamboo posts

xie zhifeng shared insights on Bamboo, Jenkins, and GitLab:

I am choosing a DevOps toolset for my team. GitLab is open source and quite cloud-native. Jenkins has a very popular ecosystem but old-style technology. Bamboo is very nice but integrates only with Atlassian products.

AWS CodeStar

Quickly Develop, Build, and Deploy Applications on AWS
PROS OF AWS CODESTAR
  • Simple to set up (2)
  • Manual steps available (1)
  • GitHub integration (0)
CONS OF AWS CODESTAR
  • No cons listed yet

Azure DevOps

Services for teams to share code, track work, and ship software
PROS OF AZURE DEVOPS
  • Complete and powerful (47)
  • Huge extension ecosystem (28)
  • Azure integration (24)
  • Flexible and powerful (24)
  • One-stop shop for build server, project mgmt, CI/CD (23)
  • Everything I need; simple and intuitive UI (14)
  • Supports open source (13)
  • Integrations (8)
  • GitHub integration (7)
  • Project mgmt features (6)
  • Crap (5)
  • Cost-free for stakeholders (5)
  • One 4 all (5)
  • Runs in the cloud (4)
  • Jenkins integration (2)
  • Agent on-premise (Linux/Windows) (2)
  • AWS integration (2)
  • GCP integration (1)
CONS OF AZURE DEVOPS
  • Still dependent on C# for agents (5)
  • Capacity across cross-functional teams not visible (3)
  • Half-baked (3)
  • Poor Jenkins integration (2)
  • Many in DevOps disregard MS altogether (2)
  • Not a requirements management tool (2)
  • Jack of all trades, master of none (2)
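Azure Pipelines runs are also queueable over REST: a POST to the pipeline's runs resource starts a run. A sketch that only builds the URL; the organization, project, pipeline id, and API version are placeholders you would check against your instance:

```python
def azdo_run_pipeline_url(org, project, pipeline_id, api_version="7.1"):
    # POST here (authenticated with a Personal Access Token) queues a run
    # of the given pipeline; org/project/id are placeholders, and the
    # api-version should match what your Azure DevOps instance supports.
    return (f"https://dev.azure.com/{org}/{project}/_apis/pipelines/"
            f"{pipeline_id}/runs?api-version={api_version}")
```

An empty JSON body runs the default branch; the body can also select a branch or set template parameters.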

        related Azure DevOps posts

Farzad Jalali, Senior Software Architect at BerryWorld:

Visual Studio Azure DevOps Azure Functions Azure Websites #Azure #AzureKeyVault #AzureAD #AzureApps

#Azure Cloud: Since Amazon is potentially our competitor, we needed a different cloud vendor; also, our programmers are Microsoft-oriented, so the choice was obviously #Azure for us.

Azure DevOps: Because we need to be able to develop a new pipeline in the Azure environment in a few minutes.

Azure Kubernetes Service: We are already in #Azure and need to use K8s, so let's use AKS, as it's a managed Kubernetes in #Azure.

Nicholas Rogoff:

Secure Membership Web API backed by SQL Server. This is the backing API to store additional profile and complex membership metadata outside of an Azure AD B2C provider. The front end uses Azure AD B2C to allow 3rd-party trusted identity providers to authenticate. This API provides a way to add and manage more complex permission structures than can easily be maintained in Azure AD.

We have .NET developers and an Azure infrastructure environment using serverless functions, logic apps, and SaaS wherever possible. For this service I opted to keep it as a classic Web API project and deployed it to App Service.

        • Trusted Authentication Provider: @AzureActiveDirectoryB2C
        • Frameworks: .NET Core
        • Language: C# , Microsoft SQL Server , JavaScript
        • IDEs: Visual Studio Code , Visual Studio
        • Libraries: jQuery @EntityFramework, @AutoMapper, @FeatureToggle , @Swashbuckle
        • Database: @SqlAzure
        • Source Control: Git
        • Build and Release Pipelines: Azure DevOps
        • Test tools: Postman , Newman
        • Test framework: @nUnit, @moq
        • Infrastructure: @AzureAppService, @AzureAPIManagement
CircleCI

Automate your development process quickly, safely, and at scale
PROS OF CIRCLECI
  • GitHub integration (223)
  • Easy setup (175)
  • Fast builds (151)
  • Competitively priced (94)
  • Slack integration (73)
  • Docker support (54)
  • Awesome UI (44)
  • Great customer support (33)
  • iOS support (18)
  • HipChat integration (14)
  • SSH debug access (12)
  • Free for open source (11)
  • Mobile support (5)
  • Bitbucket integration (5)
  • NodeJS support (4)
  • AWS CodeDeploy integration (4)
  • YAML configuration (3)
  • Free for GitHub private repos (3)
  • Great support (3)
  • Clojure (2)
  • Simple, clean UI (2)
  • ClojureScript (2)
  • OSX support (2)
  • Continuous deployment (2)
  • Android support (1)
  • Autoscaling (1)
  • Fair pricing (1)
  • All-inclusive testing (1)
  • Helpful documentation (1)
  • Japanese in RSpec comments appears OK (1)
  • Favorite (1)
  • Build PR branch only (1)
  • Really easy to use (1)
  • Unstable (1)
  • So circular (1)
  • Easy setup, easy to understand, fast and reliable (1)
  • Parallel builds for slow test suites (1)
  • Easy setup; 2.0 is fast! (1)
  • Parallelism (1)
  • Extremely configurable (1)
  • Easy to deploy to private servers (1)
  • Works (1)
CONS OF CIRCLECI
  • Unstable (11)
  • Scammy pricing structure (6)
  • Aggressive GitHub permissions (0)
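CircleCI pipelines can likewise be triggered programmatically through API v2's trigger-pipeline endpoint, authenticated with a Circle-Token header. A sketch that only assembles the request pieces; the project slug (vcs/org/repo form) and token are placeholders:

```python
import json

def circleci_trigger_request(project_slug, branch, token):
    # Pieces of a POST to CircleCI API v2's trigger-pipeline endpoint.
    # project_slug looks like "gh/acme/widgets"; token is a placeholder
    # for a personal API token.
    url = f"https://circleci.com/api/v2/project/{project_slug}/pipeline"
    headers = {"Circle-Token": token, "Content-Type": "application/json"}
    body = json.dumps({"branch": branch}).encode()
    return url, headers, body
```

Sending that POST (for example with urllib.request or any HTTP client) queues a pipeline on the given branch, the same mechanism behind the "Build PR branch only" style workflows above.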

        related CircleCI posts

        Tymoteusz Paul
        Devops guy at X20X Development LTD · | 21 upvotes · 4.3M views

        Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share with the world this way, and send people to read it instead ;). I will explain it on "live-example" of how the Rome got built, basing that current methodology exists only of readme.md and wishes of good luck (as it usually is ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to get over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. Once our Vagrant environment is functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
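The Vagrant stage above can be sketched with a minimal Vagrantfile that delegates provisioning to the same Ansible playbook used everywhere else (the box name and playbook path are illustrative):

```ruby
# Vagrantfile - dev box provisioned by the same playbook as production
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"          # illustrative base box

  config.vm.provision "ansible" do |ansible|
    # Reuse the shared playbook so dev and prod never drift apart
    ansible.playbook = "provision/playbook.yml"
  end
end
```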

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which does the bulk of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, on the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and easy to maintain through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elasticsearch and Kibana is generally a breeze, including some quite complex aggregations.
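A minimal Logstash pipeline of the kind described might look like this - a sketch only, with the input port, grok pattern, and Elasticsearch host as placeholders:

```
# logstash.conf - ship parsed app logs into Elasticsearch
input {
  beats { port => 5044 }                      # e.g. Filebeat on each box
}
filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```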

If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do much the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like the quality REST API which comes built-in with TeamCity). It also comes with all the commonly handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it:

1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.

2. All security credentials besides the development environment must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be in place. TeamCity shines in this department with excellent secrets management.

3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.

4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
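The last rule - deployment builds tied to specific branches/tags - can be sketched in TeamCity's Kotlin DSL. This is a sketch under assumptions: the tag pattern, project name, and deploy command are all hypothetical.

```kotlin
import jetbrains.buildServer.configs.kotlin.v2019_2.*
import jetbrains.buildServer.configs.kotlin.v2019_2.buildSteps.script
import jetbrains.buildServer.configs.kotlin.v2019_2.triggers.vcs

version = "2019.2"

project {
    buildType(DeployProduction)
}

// Deployment build that only ever runs from release tags
object DeployProduction : BuildType({
    name = "Deploy (production)"
    vcs {
        root(DslContext.settingsRoot)
        branchFilter = "+:refs/tags/release-*"      // hypothetical tag scheme
    }
    steps {
        script {
            scriptContent = "ansible-playbook deploy.yml"   // hypothetical playbook
        }
    }
    triggers {
        vcs { }
    }
})
```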

Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I am also constantly peeking at the load and whether we get the value of what we are paying for. Often enough the pattern of use is not erratic, but rather has a firm baseline which could be migrated away from the cloud and onto bare-metal boxes. That is another area where this approach strongly triumphs over the common Docker-and-CircleCI setup, where you are very much tied into using cloud providers and getting out is expensive. Here, to embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. After that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
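Driving Proxmox LXC containers from Ansible can be sketched with the community.general.proxmox module - the API endpoint, node name, hostname, template, and credential variable below are all placeholders:

```yaml
# create-lxc.yml - provision an LXC container on Proxmox via Ansible
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create an LXC container on the Proxmox node
      community.general.proxmox:
        api_host: proxmox.example.com                 # placeholder API endpoint
        api_user: root@pam
        api_password: "{{ vault_proxmox_password }}"  # sourced from Vault, per rule 2
        node: pve1
        hostname: web01
        ostemplate: "local:vztmpl/ubuntu-20.04-standard_20.04-1_amd64.tar.gz"
        state: present
```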

Tim Abbott
Shared insights on Travis CI and CircleCI
        We actually started out on Travis CI, but we've migrated our main builds to CircleCI, and it's been a huge improvement.

        The reason it's been a huge improvement is that Travis CI has a fundamentally bad design for their images, where they start with a standard base Linux image containing tons of packages (several versions of postgres, every programming language environment, etc). This is potentially nice for the "get builds for a small project running quickly" use case, but it's a total disaster for a larger project that needs a decent number of dependencies and cares about the performance and reliability of their build.

        This issue is exacerbated by their networking infrastructure being unreliable; we usually saw over 1% of builds failing due to transient networking errors in Travis CI, even after we added retries to the most frequently failing operations like apt update or pip install. And they never install Ubuntu's point release updates to their images. So doing an apt update, apt install, or especially apt upgrade would take forever. We ended up writing code to actually uninstall many of their base packages and pin the versions of hundreds of others to get a semi-fast, semi-reliable build. It was infuriating.
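Retrying the most transient-failure-prone operations, as described above, can be sketched with a plain-shell wrapper; the retry count, backoff, and the wrapped commands are illustrative:

```shell
# retry.sh - run a flaky command up to N times with exponential backoff
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts: $*" >&2
      return 1
    fi
    sleep $((1 << n))   # back off 2s, 4s, 8s, ...
    n=$((n + 1))
  done
}

# Usage (illustrative): wrap the steps that fail on transient network errors
# retry 3 apt-get update -qq
# retry 3 pip install -r requirements.txt
```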

        The CircleCI v2.0 system has the right design for a CI system: we can customize the base image to start with any expensive-to-install packages we need for our build, and we can update that image if and when we want to. The end result is that when migrating, we were able to delete all the hacky optimizations mentioned above, while still ending up with a 50% faster build latency. And we've also had 5-10x fewer issues with networking-related flakes, which means one doesn't have to constantly check whether a build failure is actually due to an issue with the code under test or "just another networking flake".
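The custom-base-image approach can be sketched in a CircleCI 2.0 config - the image name and test command are hypothetical; the point is that expensive-to-install dependencies live in the prebuilt image, not in the build steps:

```yaml
# .circleci/config.yml - build on a prebuilt image with dependencies baked in
version: 2
jobs:
  build:
    docker:
      - image: yourorg/ci-base:2020-04   # hypothetical image, rebuilt when deps change
    steps:
      - checkout
      - run: ./tools/test                # hypothetical test entry point
```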
