What is ELK and what are its top alternatives?
Top Alternatives to ELK
Datadog is the leading service for cloud-scale monitoring. It is used by IT, operations, and development teams who build and operate applications that run on dynamic or hybrid cloud infrastructure. Start monitoring in minutes with Datadog!
Splunk provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze, and visualize machine data.
Graylog centralizes and aggregates all your log files for 100% visibility, and its powerful query language lets you search through terabytes of log data to discover and analyze important information.
Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (for searching, for example). If you store them in Elasticsearch, you can view and analyze them with Kibana.
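To make that collect-parse-store pipeline concrete, here is a minimal Logstash configuration sketch; the file path, grok pattern, and index name are illustrative assumptions, not part of the original description:

```
input {
  # Tail application log files; the path is a hypothetical example
  file {
    path => "/var/log/app/*.log"
    start_position => "beginning"
  }
}

filter {
  # Parse each raw line into structured fields with a stock Apache pattern
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  # Ship parsed events to Elasticsearch, one index per day
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```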
Papertrail helps detect, resolve, and avoid infrastructure problems using log messages. Papertrail's practicality comes from our own experience as sysadmins, developers, and entrepreneurs.
Fluentd collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure.
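A hedged sketch of what that unification looks like in a Fluentd config; the tag, paths, and JSON format are assumptions for illustration:

```
<source>
  # Tail a JSON-formatted application log; path and tag are hypothetical
  @type tail
  path /var/log/app/app.log
  pos_file /var/log/fluent/app.log.pos
  tag app.access
  <parse>
    @type json
  </parse>
</source>

<match app.**>
  # Write matching events to local files; swap this for the
  # elasticsearch, s3, or other output plugins as needed
  @type file
  path /var/log/fluent/app
</match>
```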
The Simple Logging Facade for Java (SLF4J) serves as a facade or abstraction for various logging frameworks, allowing the end user to plug in the desired logging framework at deployment time.
Logback is intended as a successor to the popular log4j project. It is divided into three modules: logback-core, logback-classic, and logback-access. The logback-core module lays the groundwork for the other two; logback-classic natively implements the SLF4J API, so you can readily switch between Logback and other logging frameworks; and logback-access integrates with Servlet containers, such as Tomcat and Jetty, to provide HTTP-access log functionality.
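A minimal sketch of the facade in action (the service class is hypothetical): the code touches only the SLF4J API, and with logback-classic on the classpath SLF4J binds to Logback automatically, so the backend can be swapped at deployment time.

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical class, used purely for illustration
public class PaymentService {
    private static final Logger log = LoggerFactory.getLogger(PaymentService.class);

    public void charge(String customerId, long amountCents) {
        // Parameterized messages avoid string concatenation when the level is disabled
        log.info("Charging customer {} for {} cents", customerId, amountCents);
        try {
            // ... payment logic would go here ...
        } catch (RuntimeException e) {
            // Passing the exception as the last argument logs the full stack trace
            log.error("Charge failed for customer {}", customerId, e);
        }
    }
}
```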
ELK alternatives & related posts
Pros of Datadog
- Monitoring for many apps (databases, web servers, etc.)
- Easy setup
- Powerful UI
- Powerful integrations
- Great value
- Great visualization
- Events + metrics = clarity
- Custom metrics
- Free & paid plans
- Great customer support
- Makes my life easier
- Adapts automatically as I scale up
- Easy setup and plugins
- Super easy and powerful
- In-context collaboration
- AWS support
- Rich in features
- Docker support
- Monitor almost everything
- Full visibility of applications
- Easy to analyze
- Cute logo
- Simple, powerful, great for infra
- Source control and bug tracking
- Better than others
- Automation tools
- Best in the field
- Good for startups
- Free setup
Cons of Datadog
- No error/exception tracking
- If the external network goes down, you won't be logging
related Datadog posts
Our primary source of monitoring and alerting is Datadog. We’ve got prebuilt dashboards for every scenario and integration with PagerDuty to manage routing any alerts. We’ve definitely scaled past the point where managing dashboards is easy, but we haven’t had time to invest in using features like Anomaly Detection. We’ve started using Honeycomb for some targeted debugging of complex production issues and we are liking what we’ve seen. We capture any unhandled exceptions with Rollbar and, if we realize one will keep happening, we quickly convert the metrics to point back to Datadog, to keep Rollbar as clean as possible.
We use Segment to consolidate all of our trackers, the most important of which goes to Amplitude to analyze user patterns. However, if we need a more consolidated view, we push all of our data to our own data warehouse running PostgreSQL; this is available for analytics and dashboard creation through Looker.
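As a sketch of the custom-metrics piece mentioned above (the metric names, tags, and agent address are hypothetical, not from the post), emitting a custom metric to the Datadog agent via DogStatsD is a one-liner:

```python
# A minimal sketch using the official `datadog` Python package;
# metric names, tags, and the agent address are hypothetical.
from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)

# DogStatsD sends fire-and-forget UDP, so instrumenting hot paths stays cheap
statsd.increment("checkout.completed", tags=["env:prod", "service:payments"])
statsd.gauge("queue.depth", 42, tags=["service:payments"])
```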
We are looking for a centralised monitoring solution for our application deployed on Amazon EKS. We would like to monitor using metrics from Kubernetes, AWS services (NeptuneDB, AWS Elastic Load Balancing (ELB), Amazon EBS, Amazon S3, etc.) and custom metrics from our application microservices.
We expect to run around 80 microservices (not replicas); I think there will be a total of 200-250 microservices in the system, with 10-12 slave nodes.
We tried Prometheus, but it looks like maintenance is a big issue: we need to manage scaling, maintain the storage, and deal with multiple exporters and Grafana. I felt this alone needs a few dedicated resources (at least 2-3 people) to manage. Not sure if I am thinking in the correct direction. Please confirm.
You mentioned that Datadog and Sysdig charge per host. Do they charge per slave node?
Pros of Splunk
- API for searching logs and running reports
- Query engine supports joining, aggregation, stats, etc. (see the example query below)
- Ability to style search results into reports
- Query any log as key-value pairs
- Splunk language supports string and date manipulation, math, etc.
- Granular scheduling and time-window support
- Alert system based on custom query results
- Custom log parsing as well as automatic parsing
- Dashboarding on any log contents
- Rich GUI for searching live logs
Cons of Splunk
- Query language is rich, so there is a lot to learn
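As a sketch of that query language (the index and field names here are hypothetical), one SPL pipeline can filter, aggregate, and sort in a single pass:

```
index=web_logs status>=500 earliest=-24h
| stats count AS errors BY host
| sort -errors
```

This counts the last day's server errors per host and lists the noisiest hosts first.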
related Splunk posts
I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years ahead of grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.
Pros of Graylog
- Open source
- Well documented
- Flexible query and parsing language
- User authentication
- Easy query language and English parsing
- Alerts and dashboards
- User management
- Easy to install
- A large community
- Manage users and permissions
Cons of Graylog
- Honestly the worst tool I ever used
- Does not handle frozen indices at all
related Graylog posts
Pros of Logstash
- Easy but powerful filtering
- Kibana provides machine-learning-based analytics for logs
- Great for meeting GDPR goals
- Well documented
Cons of Logstash
- Documentation difficult to use
related Logstash posts
Often enough I have to explain my way of setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of saying the same thing every single time, I've decided to write it up and share it with the world this way, and send people to read it instead ;). I will explain it with a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually is ;)).
It always starts with an app, whatever it may be, and reading the available readmes while Vagrant and VirtualBox install and update. Following that is the first hurdle to go over: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean `vagrant up` or `vagrant reload` gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid or too loose versioning? Sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.
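A minimal Vagrantfile sketch of that setup (the box name and playbook path are assumptions): everything the readme once asked you to do by hand moves into the playbook, so a clean `vagrant up` becomes the entire setup procedure.

```ruby
# Vagrantfile -- a minimal sketch; box name and playbook path are assumptions
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/focal64"

  # Run the same Ansible playbook that will later provision production
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "provision/playbook.yml"
  end
end
```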
I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with the use of proper tools it should mean no more work for the developers. That's why we start with Vagrant: developer boxes should be as easy as `vagrant up`. But the meat of our product lies in Ansible, which does the heavy lifting and can be applied to almost anything: AWS, bare metal, Docker, LXC, in the open net, behind a VPN; you name it.
We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and easy to maintain through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers, reports, and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.
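As a hedged sketch of that Ansible-managed Logstash setup (the file names, paths, and handler are assumptions, not from the post):

```yaml
# roles/logstash/tasks/main.yml -- a minimal sketch; names and paths are assumptions
- name: Deploy Logstash pipeline configuration
  ansible.builtin.template:
    src: logstash-pipeline.conf.j2
    dest: /etc/logstash/conf.d/pipeline.conf
    mode: "0644"
  notify: restart logstash

# roles/logstash/handlers/main.yml
- name: restart logstash
  ansible.builtin.service:
    name: logstash
    state: restarted
```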
If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and, unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and it doesn't limit your ways to deploy, or test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do most of the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the common handy plugins, like Slack or Apache Maven integration.
The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me:
1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).
Speaking of deployments, I generally try to keep it simple, but also with a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I constantly watch the load and whether we are getting the value of what we are paying for. Often enough, the pattern of use is not erratic but has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another area where this approach strongly triumphs over the common Docker-and-CircleCI setup, where you are very much tied to cloud providers and getting out is expensive. To embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way you do for Amazon EC2 (Ansible supports both greatly), and you are good to go. One does not exclude the other; quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
ELK Stack (Elasticsearch, Logstash, Kibana) is widely known as the de facto way to centralize logs from operational systems. The assumption is that Elasticsearch (a "search engine") is a good place to put text-based logs for the purposes of free-text search. And indeed, simply searching text-based logs for the word "error" or filtering logs based on a set of well-known tags is extremely powerful, and is often where most users start.
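For instance, the free-text search described above boils down to a single Elasticsearch query; the index pattern and the service filter below are hypothetical:

```
POST /logs-*/_search
{
  "query": {
    "bool": {
      "must":   [ { "match": { "message": "error" } } ],
      "filter": [ { "term": { "service": "checkout" } } ]
    }
  }
}
```

The match clause handles the free-text part, while the term filter narrows results to a well-known tag, exactly the two starting points mentioned above.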
Pros of Papertrail
- Log search
- Easy log aggregation across multiple machines
- Integrates with Heroku
- Simple interface
- Backup to S3
- Easy setup, independent of existing logging setup
- Heroku add-on
- Command-line interface
- Good for startups
Cons of Papertrail
- If the external network goes down, you won't be logging
related Papertrail posts
Pros of Fluentd
- Great for Kubernetes node container log forwarding