
Alternatives to ScyllaDB

Cassandra, Redis, Aerospike, MongoDB, and Kraken.io are the most popular alternatives and competitors to ScyllaDB.

What is ScyllaDB and what are its top alternatives?

ScyllaDB is the database for data-intensive apps that require high performance and low latency. It enables teams to harness the ever-increasing computing power of modern infrastructures – eliminating barriers to scale as data grows.
ScyllaDB is a tool in the Databases category of a tech stack.
ScyllaDB is an open source tool; its source code is available in ScyllaDB's repository on GitHub.
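
Because ScyllaDB is wire-compatible with Apache Cassandra, the standard CQL drivers can talk to it. Here is a minimal sketch of a connection check with the Python cassandra-driver; the contact point is an assumption, not taken from ScyllaDB's documentation:

```python
# Minimal sketch: ScyllaDB speaks the Cassandra protocol (CQL), so the standard
# Python cassandra-driver can connect to it. The contact point is an assumption.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # one or more ScyllaDB nodes
session = cluster.connect()

# system.local is a standard CQL system table; this simply verifies the connection.
row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)

cluster.shutdown()
```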

Top Alternatives to ScyllaDB

  • Cassandra

    Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL. ...

  • Redis

    Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams. ...

  • Aerospike

    Aerospike is an open-source, modern database built from the ground up to push the limits of flash storage, processors and networks. It was designed to operate with predictable low latency at high throughput with uncompromising reliability – both high availability and ACID guarantees. ...

  • MongoDB

    MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding. ...

  • Kraken.io

    It supports JPEG, PNG and GIF files. You can optimize your images in two ways - by providing a URL of the image you want to optimize or by uploading an image file directly to its API. ...

  • Clickhouse

    It allows analysis of data that is updated in real time. It offers instant results in most cases: the data is processed faster than it takes to create a query. ...

  • JavaScript

    JavaScript is best known as the scripting language for Web pages, but it is also used in many non-browser environments such as node.js or Apache CouchDB. It is a prototype-based, multi-paradigm scripting language that is dynamic, and supports object-oriented, imperative, and functional programming styles. ...

  • Git

    Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency. ...

ScyllaDB alternatives & related posts


Cassandra

A partitioned row store. Rows are organized into tables with a required primary key.
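
As a sketch of that model, here is a hypothetical table driven from the Python cassandra-driver; the keyspace, table, and column names are illustrative assumptions, and a local node is assumed:

```python
# Sketch of Cassandra's partitioned row model via the Python cassandra-driver.
# Keyspace, table, and column names are illustrative assumptions.
import datetime
import uuid

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# user_id is the partition key: Cassandra hashes it to pick the owning node(s).
# ts is a clustering column: rows inside a partition are stored ordered by it.
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.feed_items (
        user_id uuid,
        ts      timestamp,
        payload text,
        PRIMARY KEY ((user_id), ts)
    )
""")

insert = session.prepare(
    "INSERT INTO demo.feed_items (user_id, ts, payload) VALUES (?, ?, ?)")
user = uuid.uuid4()
session.execute(insert, (user, datetime.datetime.utcnow(), "hello"))

# Reads that include the partition key hit a single partition -- the CQL looks
# like SQL, but the WHERE clause is constrained by the primary key design.
for row in session.execute(
        "SELECT ts, payload FROM demo.feed_items WHERE user_id = %s", (user,)):
    print(row.ts, row.payload)

cluster.shutdown()
```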
PROS OF CASSANDRA
  • 119
    Distributed
  • 98
    High performance
  • 81
    High availability
  • 74
    Easy scalability
  • 53
    Replication
  • 26
    Reliable
  • 26
    Multi datacenter deployments
  • 10
    Schema optional
  • 9
    OLTP
  • 8
    Open source
  • 2
    Workload separation (via MDC)
  • 1
    Fast
CONS OF CASSANDRA
  • 3
    Reliability of replication
  • 1
    Size
  • 1
    Updates

related Cassandra posts

Thierry Schellenbach
Shared insights on Golang, Python, and Cassandra

After years of optimizing our existing feed technology, we decided to make a larger leap with 2.0 of Stream. While the first iteration of Stream was powered by Python and Cassandra, for Stream 2.0 of our infrastructure we switched to Go.

The main reason why we switched from Python to Go is performance. Certain features of Stream such as aggregation, ranking and serialization were very difficult to speed up using Python.

We’ve been using Go since March 2017 and it’s been a great experience so far. Go has greatly increased the productivity of our development team. Not only has it improved the speed at which we develop, it’s also 30x faster for many components of Stream. Initially we struggled a bit with package management for Go. However, using Dep together with the VG package contributed to creating a great workflow.

Go as a language is heavily focused on performance. The built-in PPROF tool is amazing for finding performance issues. Uber’s Go-Torch library is great for visualizing data from PPROF and will be bundled in PPROF in Go 1.10.

The performance of Go greatly influenced our architecture in a positive way. With Python we often found ourselves delegating logic to the database layer purely for performance reasons. The high performance of Go gave us more flexibility in terms of architecture. This led to a huge simplification of our infrastructure and a dramatic improvement of latency. For instance, we saw a 10 to 1 reduction in web-server count thanks to the lower memory and CPU usage for the same number of requests.

#DataStores #Databases

Thierry Schellenbach
Shared insights on Redis, Cassandra, and RocksDB

1.0 of Stream leveraged Cassandra for storing the feed. Cassandra is a common choice for building feeds. Instagram, for instance, started out with Redis but eventually switched to Cassandra to handle their rapid usage growth. Cassandra can handle write-heavy workloads very efficiently.

Cassandra is a great tool that allows you to scale write capacity simply by adding more nodes, though it is also very complex. This complexity made it hard to diagnose performance fluctuations. Even though we had years of experience with running Cassandra, it still felt like a bit of a black box. When building Stream 2.0 we decided to go for a different approach and build Keevo. Keevo is our in-house key-value store built upon RocksDB, gRPC and Raft.

RocksDB is a highly performant embeddable database library developed and maintained by Facebook’s data engineering team. RocksDB started as a fork of Google’s LevelDB that introduced several performance improvements for SSDs. Nowadays RocksDB is a project on its own and is under active development. It is written in C++ and it’s fast. Have a look at how this benchmark handles 7 million QPS. In terms of technology it’s much simpler than Cassandra.

This translates into reduced maintenance overhead, improved performance and, most importantly, more consistent performance. It’s interesting to note that LinkedIn also uses RocksDB for their feed.
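
Keevo itself is Stream's in-house store, but to give a feel for what an embeddable key-value library like RocksDB looks like from application code, here is a rough sketch using the python-rocksdb bindings; the database path and key layout are made-up assumptions:

```python
# Rough sketch of RocksDB's embedded key-value API via the python-rocksdb bindings.
# The database path and key layout are illustrative assumptions.
import rocksdb

opts = rocksdb.Options(create_if_missing=True)
db = rocksdb.DB("feed.db", opts)  # the store runs inside your own process

db.put(b"feed:42:001", b'{"actor": "alice", "verb": "post"}')
print(db.get(b"feed:42:001"))

# Feed-style reads map naturally onto prefix scans over ordered keys.
it = db.iteritems()
it.seek(b"feed:42:")
for key, value in it:
    if not key.startswith(b"feed:42:"):
        break
    print(key, value)
```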

#InMemoryDatabases #DataStores #Databases


Redis

Open source (BSD licensed), in-memory data structure store
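
A quick sketch of a few of those data structures with the redis-py client; the key names are made up for illustration and a local server is assumed:

```python
# Quick sketch of Redis data structures via redis-py; key names are illustrative.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Strings with a TTL -- the classic cache use case.
r.setex("session:abc123", 3600, "user:42")

# Hashes: field/value maps stored under a single key.
r.hset("user:42", mapping={"name": "Ada", "plan": "pro"})
print(r.hgetall("user:42"))

# Sorted sets with range queries, e.g. a leaderboard.
r.zadd("leaderboard", {"alice": 310, "bob": 287})
print(r.zrevrange("leaderboard", 0, 9, withscores=True))

# Pub/Sub: fire-and-forget messaging to whoever subscribes to a channel.
r.publish("events", "deploy-finished")
```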
PROS OF REDIS
  • 886
    Performance
  • 542
    Super fast
  • 513
    Ease of use
  • 444
    In-memory cache
  • 324
    Advanced key-value cache
  • 194
    Open source
  • 182
    Easy to deploy
  • 164
    Stable
  • 155
    Free
  • 121
    Fast
  • 42
    High-Performance
  • 40
    High Availability
  • 35
    Data Structures
  • 32
    Very Scalable
  • 24
    Replication
  • 22
    Great community
  • 22
    Pub/Sub
  • 19
    "NoSQL" key-value data store
  • 16
    Hashes
  • 13
    Sets
  • 11
    Sorted Sets
  • 10
    NoSQL
  • 10
    Lists
  • 9
    Async replication
  • 9
    BSD licensed
  • 8
    Bitmaps
  • 8
    Integrates super easy with Sidekiq for Rails background
  • 7
    Keys with a limited time-to-live
  • 7
    Open Source
  • 6
    Lua scripting
  • 6
    Strings
  • 5
    Awesomeness for Free
  • 5
    Hyperloglogs
  • 4
    Transactions
  • 4
    Outstanding performance
  • 4
    Runs server side LUA
  • 4
    LRU eviction of keys
  • 4
    Feature Rich
  • 4
    Written in ANSI C
  • 4
    Networked
  • 3
    Data structure server
  • 3
    Performance & ease of use
  • 2
    Don't save data if no subscribers are found
  • 2
    Automatic failover
  • 2
    Easy to use
  • 2
    Temporarily kept on disk
  • 2
    Scalable
  • 2
    Existing Laravel Integration
  • 2
    Channels concept
  • 2
    Object [key/value] size each 500 MB
  • 2
    Simple
CONS OF REDIS
  • 15
    Cannot query objects directly
  • 3
    No secondary indexes for non-numeric data types
  • 1
    No WAL

related Redis posts

Russel Werner
Lead Engineer at StackShare · 32 upvotes · 1.9M views

StackShare Feed is built entirely with React, Glamorous, and Apollo. One of our objectives with the public launch of the Feed was to enable a Server-side rendered (SSR) experience for our organic search traffic. When you visit the StackShare Feed, and you aren't logged in, you are delivered the Trending feed experience. We use an in-house Node.js rendering microservice to generate this HTML. This microservice needs to run and serve requests independent of our Rails web app. Up until recently, we had a mono-repo with our Rails and React code living happily together and all served from the same web process. In order to deploy our SSR app into a Heroku environment, we needed to split out our front-end application into a separate repo in GitHub. The driving factor in this decision was mostly due to limitations imposed by Heroku specifically with how processes can't communicate with each other. A new SSR app was created in Heroku and linked directly to the frontend repo so it stays in-sync with changes.

Related to this, we need a way to "deploy" our frontend changes to various server environments without building & releasing the entire Ruby application. We built a hybrid Amazon S3 Amazon CloudFront solution to host our Webpack bundles. A new CircleCI script builds the bundles and uploads them to S3. The final step in our rollout is to update some keys in Redis so our Rails app knows which bundles to serve. The result of these efforts were significant. Our frontend team now moves independently of our backend team, our build & release process takes only a few minutes, we are now using an edge CDN to serve JS assets, and we have pre-rendered React pages!
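
That final "update some keys in Redis" step is essentially a pointer swap: publish the new, immutable bundle manifest, then repoint a well-known key that the web app reads. A hedged sketch of the pattern in Python (the key names and manifest shape are hypothetical, not StackShare's actual code):

```python
# Hypothetical sketch of the "repoint Redis at the newest Webpack bundles" step.
# Key names and manifest shape are made up; this shows the pattern only.
import json

import redis

r = redis.Redis(host="localhost", port=6379)

manifest = {
    "app.js":    "https://cdn.example.com/assets/app.3f9c1a.js",
    "vendor.js": "https://cdn.example.com/assets/vendor.91b2de.js",
}

# Publish the versioned manifest, then flip the pointer the web app reads,
# so a deploy is a single atomic switch rather than a file overwrite.
pipe = r.pipeline()
pipe.set("assets:manifest:3f9c1a", json.dumps(manifest))
pipe.set("assets:current", "assets:manifest:3f9c1a")
pipe.execute()
```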

#StackDecisionsLaunch #SSR #Microservices #FrontEndRepoSplit

Simon Reymann
Senior Fullstack Developer at QUANTUSflow Software GmbH · 30 upvotes · 9M views

Our whole DevOps stack consists of the following tools:

  • GitHub (incl. GitHub Pages/Markdown for Documentation, GettingStarted and HowTo's) for collaborative review and code management tool
  • Respectively Git as revision control system
  • SourceTree as Git GUI
  • Visual Studio Code as IDE
  • CircleCI for continuous integration (automatize development process)
  • Prettier / TSLint / ESLint as code linter
  • SonarQube as quality gate
  • Docker as container management (incl. Docker Compose for multi-container application management)
  • VirtualBox for operating system simulation tests
  • Kubernetes as cluster management for docker containers
  • Heroku for deploying in test environments
  • nginx as web server (preferably used as facade server in production environment)
  • SSLMate (using OpenSSL) for certificate management
  • Amazon EC2 (incl. Amazon S3) for deploying in stage (production-like) and production environments
  • PostgreSQL as preferred database system
  • Redis as preferred in-memory database/store (great for caching)

The main reason we have chosen Kubernetes over Docker Swarm is related to the following artifacts:

  • Key features: Easy and flexible installation, Clear dashboard, Great scaling operations, Monitoring is an integral part, Great load balancing concepts, Monitors the condition and ensures compensation in the event of failure.
  • Applications: An application can be deployed using a combination of pods, deployments, and services (or micro-services).
  • Functionality: Kubernetes has a complex installation and setup process, but it is not as limited as Docker Swarm.
  • Monitoring: It supports multiple versions of logging and monitoring when the services are deployed within the cluster (Elasticsearch/Kibana (ELK), Heapster/Grafana, Sysdig cloud integration).
  • Scalability: All-in-one framework for distributed systems.
  • Other Benefits: Kubernetes is backed by the Cloud Native Computing Foundation (CNCF), huge community among container orchestration tools, it is an open source and modular tool that works with any OS.

Aerospike

Flash-optimized in-memory open source NoSQL database
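
A minimal sketch of Aerospike's record (key and bins) model with the official Python client; the namespace, set, and bin names are illustrative assumptions:

```python
# Minimal sketch of Aerospike's key/bin model with the aerospike Python client.
# Namespace ("test"), set ("users"), and bin names are illustrative assumptions.
import aerospike

config = {"hosts": [("127.0.0.1", 3000)]}
client = aerospike.client(config).connect()

key = ("test", "users", "user42")        # (namespace, set, user key)
client.put(key, {"name": "Ada", "visits": 1})

client.increment(key, "visits", 1)       # atomic update of a single bin
_, meta, bins = client.get(key)
print(bins)                              # e.g. {'name': 'Ada', 'visits': 2}

client.close()
```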
PROS OF AEROSPIKE
  • 16
    Ram and/or ssd persistence
  • 12
    Easy clustering support
  • 5
    Easy setup
  • 4
    Acid
  • 3
    Petabyte Scale
  • 3
    Scale
  • 3
    Performance better than Redis
  • 2
    Ease of use
CONS OF AEROSPIKE
    (none listed yet)

    related Aerospike posts

    Gagan Jakhotiya
    Engineering Manager at BigBasket · 5 upvotes · 58.1K views
    Shared insights on Tile38, MySQL, and Aerospike

    I have a very limited but significant use case for a spatial index in a routing service. I see these indexes not growing beyond 10,000 geometries over the next year, and maybe 100,000 over the next 3 years. The solution needs to be approached mostly from a delivery-timeline perspective, because the use case also comes with a slightly relaxed compute-time SLA and a cost-optimal implementation point of view.

    We have chosen an R-Tree-based index as a suitable choice for our use case. We are already using Aerospike and MySQL in our stack. MySQL supports R-Tree and has good docs as well. I couldn't find anything specific to R-Tree with Aerospike. Also, I would generally like to understand, from a performance perspective, how these two choices would fare against something like Tile38?

    Suggestions beside these are also most welcome.


    MongoDB

    The database for giant ideas
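
MongoDB stores JSON-like documents with a flexible schema, as described above. A small sketch of that document model with PyMongo; the database, collection, and field names are illustrative assumptions:

```python
# Small sketch of MongoDB's flexible document model via PyMongo.
# Database, collection, and field names are illustrative assumptions.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
movies = client.demo.movies  # database "demo", collection "movies"

# Documents in the same collection can have different shapes -- no migration needed.
movies.insert_one({"title": "Alien", "year": 1979, "ratings": [5, 4, 5]})
movies.insert_one({"title": "Arrival", "year": 2016, "tags": ["sci-fi", "drama"]})

movies.create_index([("year", ASCENDING)])
for doc in movies.find({"year": {"$gte": 2000}}, {"_id": 0, "title": 1}):
    print(doc)
```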
    PROS OF MONGODB
    • 827
      Document-oriented storage
    • 593
      No sql
    • 553
      Ease of use
    • 464
      Fast
    • 410
      High performance
    • 257
      Free
    • 218
      Open source
    • 180
      Flexible
    • 145
      Replication & high availability
    • 112
      Easy to maintain
    • 42
      Querying
    • 39
      Easy scalability
    • 38
      Auto-sharding
    • 37
      High availability
    • 31
      Map/reduce
    • 27
      Document database
    • 25
      Easy setup
    • 25
      Full index support
    • 16
      Reliable
    • 15
      Fast in-place updates
    • 14
      Agile programming, flexible, fast
    • 12
      No database migrations
    • 8
      Easy integration with Node.Js
    • 8
      Enterprise
    • 6
      Enterprise Support
    • 5
      Great NoSQL DB
    • 4
      Support for many languages through different drivers
    • 3
      Drivers support is good
    • 3
      Aggregation Framework
    • 3
      Schemaless
    • 2
      Fast
    • 2
      Managed service
    • 2
      Easy to Scale
    • 2
      Awesome
    • 2
      Consistent
    • 1
      Good GUI
    • 1
      Acid Compliant
    CONS OF MONGODB
    • 6
      Very slow for connected models that require joins
    • 3
      Not acid compliant
    • 1
      Proprietary query language

    related MongoDB posts

    Shared insights on Node.js, GraphQL, and MongoDB

    I just finished the very first version of my new hobby project: #MovieGeeks. It is a minimalist online movie catalog for you to save the movies you want to see and for rating the movies you already saw. This is just the beginning as I am planning to add more features on the lines of sharing and discovery

    For the #BackEnd I decided to use Node.js , GraphQL and MongoDB:

    1. Node.js has a huge community so it will always be a safe choice in terms of libraries and finding solutions to problems you may have

    2. GraphQL because I needed to improve my skills with it and because I was never comfortable with the usual REST approach. I believe GraphQL is a better option as it feels more natural to write APIs, it improves development velocity, by definition it fixes the over-fetching and under-fetching problems that are so common with REST APIs, and on top of that, the community is getting bigger and bigger.

    3. MongoDB was my choice for the database as I already have a lot of experience working with it and because, despite some bad reputation it has acquired in recent months, I still believe it is a powerful database for at least a very long list of use cases, such as the one I needed for my website.

    Vaibhav Taunk
    Team Lead at Technovert · 31 upvotes · 3.6M views

    I am starting to become a full-stack developer by choosing and learning .NET Core for API Development, Angular CLI / React for UI Development, MongoDB for the database (as it is a NoSQL DB), and Flutter / React Native for Mobile App Development. I am using Postman, Markdown and Visual Studio Code for development.


    Kraken.io

    Image optimization and compression API
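
The "optimize by URL" flow described earlier boils down to a single authenticated POST. A rough sketch with Python's requests library; the endpoint path and payload fields are assumptions made for illustration, so check Kraken.io's API documentation for the actual contract:

```python
# Rough sketch of an "optimize by URL" call to an image-optimization API.
# The endpoint path and payload fields are assumptions, not taken from
# Kraken.io's documentation -- consult their docs for the real contract.
import requests

payload = {
    "auth": {"api_key": "YOUR_KEY", "api_secret": "YOUR_SECRET"},
    "url": "https://example.com/photo.jpg",  # image to optimize
    "wait": True,                            # block until the optimized image is ready
}

resp = requests.post("https://api.kraken.io/v1/url", json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # expected to contain the optimized image URL and size savings
```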
    PROS OF KRAKEN.IO
    • 6
      Free
    • 1
      Magento plugin
    CONS OF KRAKEN.IO
      (none listed yet)



      Clickhouse

      A column-oriented database management system
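
ClickHouse can be queried over its plain HTTP interface (one of several client options), so a columnar, SQL-flavoured workflow needs nothing more than an HTTP client. A sketch assuming a local server on the default HTTP port; the table and column names are made up:

```python
# Sketch of querying ClickHouse over its HTTP interface (default port 8123).
# The table and column names are illustrative assumptions.
import requests

CLICKHOUSE = "http://localhost:8123/"

def ch(query: str, data: bytes = None) -> str:
    resp = requests.post(CLICKHOUSE, params={"query": query}, data=data, timeout=30)
    resp.raise_for_status()
    return resp.text

ch("CREATE TABLE IF NOT EXISTS hits (ts DateTime, url String, ms UInt32) "
   "ENGINE = MergeTree ORDER BY ts")

# Append-heavy bulk inserts are the intended write pattern.
ch("INSERT INTO hits FORMAT CSV",
   data=b"2024-01-01 00:00:00,/home,12\n2024-01-01 00:00:01,/search,48\n")

# Aggregations only read the referenced columns, which is where the speed comes from.
print(ch("SELECT url, count(), avg(ms) FROM hits GROUP BY url ORDER BY count() DESC"))
```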
      PROS OF CLICKHOUSE
      • 19
        Fast, very very fast
      • 11
        Good compression ratio
      • 6
        Horizontally scalable
      • 5
        Great CLI
      • 5
        Utilizes all CPU resources
      • 5
        RESTful
      • 4
        Buggy
      • 4
        Open-source
      • 4
        Great number of SQL functions
      • 3
        Server crashes, it's normal :(
      • 3
        Has no transactions
      • 2
        Flexible connection options
      • 2
        Highly available
      • 2
        ODBC
      • 2
        Flexible compression options
      • 1
        In IDEA data import via HTTP interface not working
      CONS OF CLICKHOUSE
      • 5
        Slow insert operations



      JavaScript

      Lightweight, interpreted, object-oriented language with first-class functions
      PROS OF JAVASCRIPT
      • 1.7K
        Can be used on frontend/backend
      • 1.5K
        It's everywhere
      • 1.2K
        Lots of great frameworks
      • 896
        Fast
      • 745
        Light weight
      • 425
        Flexible
      • 392
        You can't get a device today that doesn't run js
      • 286
        Non-blocking i/o
      • 236
        Ubiquitousness
      • 191
        Expressive
      • 55
        Extended functionality to web pages
      • 49
        Relatively easy language
      • 46
        Executed on the client side
      • 30
        Relatively fast to the end user
      • 25
        Pure Javascript
      • 21
        Functional programming
      • 15
        Async
      • 13
        Full-stack
      • 12
        Setup is easy
      • 12
        Its everywhere
      • 11
        JavaScript is the New PHP
      • 11
        Because I love functions
      • 10
        Like it or not, JS is part of the web standard
      • 9
        Can be used in backend, frontend and DB
      • 9
        Expansive community
      • 9
        Future Language of The Web
      • 9
        Easy
      • 8
        No need to use PHP
      • 8
        For the good parts
      • 8
        Can be used both as frontend and backend as well
      • 8
        Everyone use it
      • 8
        Most Popular Language in the World
      • 8
        Easy to hire developers
      • 7
        Love-hate relationship
      • 7
        Powerful
      • 7
        Photoshop has 3 JS runtimes built in
      • 7
        Evolution of C
      • 7
        Popularized Class-Less Architecture & Lambdas
      • 7
        Agile, packages simple to use
      • 7
        Supports lambdas and closures
      • 6
        It's fun
      • 6
        Hard not to use
      • 6
        Nice
      • 6
        Client side JS uses the visitors CPU to save Server Res
      • 6
        Versitile
      • 6
        It let's me use Babel & Typescript
      • 6
        Easy to make something
      • 6
        Its fun and fast
      • 6
        Can be used on frontend/backend/Mobile/create PRO Ui
      • 5
        Function expressions are useful for callbacks
      • 5
        What to add
      • 5
        Client processing
      • 5
        Everywhere
      • 5
        Scope manipulation
      • 5
        Stockholm Syndrome
      • 5
        Promise relationship
      • 5
        Clojurescript
      • 4
        Because it is so simple and lightweight
      • 4
        Only Programming language on browser
      • 1
        Hard to learn
      • 1
        Test
      • 1
        Test2
      • 1
        Easy to understand
      • 1
        Not the best
      • 1
        Easy to learn
      • 1
        Subskill #4
      • 0
        Hard 彤
      CONS OF JAVASCRIPT
      • 22
        A constant moving target, too much churn
      • 20
        Horribly inconsistent
      • 15
        Javascript is the New PHP
      • 9
        No ability to monitor memory utilization
      • 8
        Shows Zero output in case of ANY error
      • 7
        Thinks strange results are better than errors
      • 6
        Can be ugly
      • 3
        No GitHub
      • 2
        Slow

      related JavaScript posts

      Zach Holman

      Oof. I have truly hated JavaScript for a long time. Like, for over twenty years now. Like, since the Clinton administration. It's always been a nightmare to deal with all of the aspects of that silly language.

      But wowza, things have changed. Tooling is just way, way better. I'm primarily web-oriented, and using React and Apollo together the past few years really opened my eyes to building rich apps. And I deeply apologize for using the phrase rich apps; I don't think I've ever said such Enterprisey words before.

      But yeah, things are different now. I still love Rails, and still use it for a lot of apps I build. But it's that silly rich apps phrase that's the problem. Users have way more comprehensive expectations than they did even five years ago, and the JS community does a good job at building tools and tech that tackle the problems of making heavy, complicated UI and frontend work.

      Obviously there's a lot of things happening here, so just saying "JavaScript isn't terrible" might encompass a huge amount of libraries and frameworks. But if you're like me, yeah, give things another shot- I'm somehow not hating on JavaScript anymore and... gulp... I kinda love it.

      Conor Myhrvold
      Tech Brand Mgr, Office of CTO at Uber · 44 upvotes · 9.6M views

      How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

      Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

      Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

      https://eng.uber.com/distributed-tracing/

      (GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

      Bindings/Operator: Python Java Node.js Go C++ Kubernetes JavaScript OpenShift C# Apache Spark


      Git

      Fast, scalable, distributed revision control system
      PROS OF GIT
      • 1.4K
        Distributed version control system
      • 1.1K
        Efficient branching and merging
      • 959
        Fast
      • 845
        Open source
      • 726
        Better than svn
      • 368
        Great command-line application
      • 306
        Simple
      • 291
        Free
      • 232
        Easy to use
      • 222
        Does not require server
      • 27
        Distributed
      • 22
        Small & Fast
      • 18
        Feature based workflow
      • 15
        Staging Area
      • 13
        Most wide-spread VSC
      • 11
        Role-based codelines
      • 11
        Disposable Experimentation
      • 7
        Frictionless Context Switching
      • 6
        Data Assurance
      • 5
        Efficient
      • 4
        Just awesome
      • 3
        Github integration
      • 3
        Easy branching and merging
      • 2
        Compatible
      • 2
        Flexible
      • 2
        Possible to lose history and commits
      • 1
        Rebase supported natively; reflog; access to plumbing
      • 1
        Light
      • 1
        Team Integration
      • 1
        Fast, scalable, distributed revision control system
      • 1
        Easy
      • 1
        Flexible, easy, Safe, and fast
      • 1
        CLI is great, but the GUI tools are awesome
      • 1
        It's what you do
      • 0
        Phinx
      CONS OF GIT
      • 16
        Hard to learn
      • 11
        Inconsistent command line interface
      • 9
        Easy to lose uncommitted work
      • 7
        Worst documentation ever possibly made
      • 5
        Awful merge handling
      • 3
        Unexistent preventive security flows
      • 3
        Rebase hell
      • 2
        When --force is disabled, cannot rebase
      • 2
        Ironically even die-hard supporters screw up badly
      • 1
        Doesn't scale for big data

      related Git posts

      Tymoteusz Paul
      Devops guy at X20X Development LTD · 23 upvotes · 8M views

      Often enough I have to explain my way of going about setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of yapping the same every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it with a "live example" of how Rome got built, assuming that the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).

      It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle to go over - convert all the instructions/scripts into Ansible playbook(s), and only stop when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do stuff, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment to produce a proper, production-grade product.

      I should probably digress here for a moment and explain why. I firmly believe that the way you deploy to production is the same way you should deploy to develop, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works vs how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools should mean no more work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, docker, LXC, in open net, behind vpn - you name it.

      We must also give proper consideration to monitoring and logging hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and are well supported in maintenance through Ansible, which as I've mentioned earlier, are at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

      If we are happy with the state of the Ansible it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust and unlike most of the light-weight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like quality REST API which comes built-in with TeamCity). It also comes with all the common-handy plugins like Slack or Apache Maven integration.

      The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me in it: 1. Make build steps as small as possible. This way when something breaks, we know exactly where, without needing to dig and root around. 2. All security credentials besides development environment must be sources from individual Vault instances. Keys to those containers should exist only on the CI/CD box and accessible by a few people (the less the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing. Because of that appropriate security must be present. TeamCity shines in this department with excellent secrets-management. 3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way if any issue shows up with any environment or version, all developer has to do it is grab appropriate artifacts to reproduce the issue locally. 4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automated identifying and tagging the author (nothing like automated regression testing!).

      Speaking of deployments, I generally try to keep it simple but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but also constantly peeking at the loads and do we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline which could be migrated away from the cloud and into bare metal boxes. That is another part where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied in to use cloud providers and getting out is expensive. Here to embrace bare-metal hosting all you need is a help of some container-based self-hosting software, my personal preference is with Proxmox and LXC. Following that all you must write are ansible scripts to manage hardware of Proxmox, similar way as you do for Amazon EC2 (ansible supports both greatly) and you are good to go. One does not exclude another, quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
