Alternatives to Seq

Splunk, Elasticsearch, ELK, Graylog, and Logstash are the most popular alternatives and competitors to Seq.

What is Seq and what are its top alternatives?

Seq is a self-hosted server for structured log search, analysis, and alerting. It can be hosted on Windows or Linux/Docker, and has integrations for most popular structured logging libraries.
Seq is a tool in the Logging Tools category of a tech stack.
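
To give a flavor of those integrations: beyond library sinks, applications can post events straight to Seq's HTTP ingestion endpoint in its compact JSON (CLEF) format. A minimal Python sketch, assuming a local Seq instance on the default ingestion port 5341; the endpoint and headers follow Seq's documented raw-ingestion API, but treat the details as assumptions to verify against your install:

```python
import json
import urllib.request
from datetime import datetime, timezone

SEQ_URL = "http://localhost:5341/api/events/raw?clef"  # default local ingestion endpoint

def log_to_seq(message_template, level="Information", **properties):
    """Post one structured event to Seq in CLEF format."""
    event = {
        "@t": datetime.now(timezone.utc).isoformat(),  # timestamp
        "@mt": message_template,                       # message template
        "@l": level,                                   # log level
        **properties,                                  # structured properties
    }
    req = urllib.request.Request(
        SEQ_URL,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/vnd.serilog.clef"},
        # add an "X-Seq-ApiKey" header above if your server requires one
    )
    urllib.request.urlopen(req)  # sends a POST, since a body is supplied

log_to_seq("User {UserId} logged in", UserId=42)
```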

Top Alternatives to Seq

  • Splunk

    Splunk provides a platform for Operational Intelligence. Customers use it to search, monitor, analyze, and visualize machine data.

  • Elasticsearch

    Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats, and Logstash together form the Elastic Stack (sometimes called the ELK Stack).

  • ELK

    ELK is the acronym for three open source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize the data in Elasticsearch with charts and graphs.

  • Graylog

    Graylog centralizes and aggregates all your log files for full visibility, and provides a powerful query language for searching through terabytes of log data.

  • Logstash

    Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (for example, for searching). If you store them in Elasticsearch, you can view and analyze them with Kibana.

  • Kibana

    Kibana is an open source (Apache-licensed), browser-based analytics and search dashboard for Elasticsearch. It aims to be easy to get started with while remaining flexible and powerful, just like Elasticsearch.

  • Log4j

    Log4j is an open source Java logging framework. Logging behavior can be controlled by editing a configuration file alone, without touching the application binary, and it is commonly used to capture logs from Selenium automation flows.

  • Castle Core

    Castle Core provides common Castle Project abstractions, including logging services. It also features Castle DynamicProxy, a lightweight runtime proxy generator, and Castle DictionaryAdapter.

Seq alternatives & related posts

Splunk

Search, monitor, analyze and visualize machine data
PROS OF SPLUNK
  • API for searching logs and running reports (see the search-API sketch below)
  • Alert system based on custom query results
  • Dashboarding on any log contents
  • Custom log parsing as well as automatic parsing
  • Ability to style search results into reports
  • Query engine supports joins, aggregation, stats, etc.
  • Splunk language supports string and date manipulation, math, etc.
  • Rich GUI for searching live logs
  • Query any log as key-value pairs
  • Granular scheduling and time-window support
CONS OF SPLUNK
  • The Splunk query language is rich, so there is a lot to learn
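
The search API called out in the pros is a plain REST endpoint. A minimal Python sketch, assuming a local Splunk instance on the default management port 8089 and the documented "oneshot" search mode; credentials, index name, and query are placeholders:

```python
import requests  # third-party: pip install requests

SPLUNK = "https://localhost:8089"  # default management port
AUTH = ("admin", "changeme")       # placeholder credentials

# Run a blocking "oneshot" search and read the results back as JSON.
resp = requests.post(
    f"{SPLUNK}/services/search/jobs",
    auth=AUTH,
    verify=False,  # local instance with a self-signed certificate
    data={
        "search": "search index=main error | stats count by host",
        "exec_mode": "oneshot",
        "output_mode": "json",
    },
)
resp.raise_for_status()
for result in resp.json().get("results", []):
    print(result)
```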

related Splunk posts

Shared insights on Kibana, Splunk, and Grafana

I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years above grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.

Shared insights on Splunk and Elasticsearch

We are currently exploring Elasticsearch and Splunk for our centralized logging solution. I need some feedback about these two tools. We expect our logs to be in the range of upwards of 10 TB of logging data.

Elasticsearch

Open Source, Distributed, RESTful Search Engine
PROS OF ELASTICSEARCH
  • Powerful API
  • Great search engine
  • Open source
  • RESTful
  • Near real-time search
  • Free
  • Search everything
  • Easy to get started
  • Analytics
  • Distributed
  • Fast search
  • More than a search engine
  • Highly available
  • Awesome, great tool
  • Great docs
  • Easy to scale
  • Fast
  • Easy setup
  • Great customer support
  • Intuitive API
  • Great piece of software
  • Reliable
  • NoSQL DB
  • Document store
  • Scalability
  • Actively developed
  • Responsive maintainers on GitHub
  • Ecosystem
  • Easy to get hot data
  • Community
CONS OF ELASTICSEARCH
  • Resource hungry
  • Difficult to get started
  • Expensive
  • Hard to keep stable at large scale
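
The RESTful design called out above means indexing and searching are plain HTTP calls. A minimal Python sketch against a local, security-disabled Elasticsearch node on the default port 9200; the index name, document, and query are illustrative:

```python
import requests  # third-party: pip install requests

ES = "http://localhost:9200"

# Index a document (creates the "logs" index on first write).
requests.put(
    f"{ES}/logs/_doc/1",
    json={"level": "error", "message": "disk full", "host": "web-1"},
).raise_for_status()

# Refresh so the document is immediately visible to search,
# then run a full-text query against the message field.
requests.post(f"{ES}/logs/_refresh").raise_for_status()
hits = requests.get(
    f"{ES}/logs/_search",
    json={"query": {"match": {"message": "disk"}}},
).json()["hits"]["hits"]

for hit in hits:
    print(hit["_id"], hit["_source"])
```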

related Elasticsearch posts

Tim Abbott

We've been using PostgreSQL since the very early days of Zulip, but we actually didn't use it from the beginning. Zulip started out as a MySQL project back in 2012, because we'd heard it was a good choice for a startup with a wide community. However, we found that even though we were using the Django ORM for most of our database access, we spent a lot of time fighting with MySQL. Issues ranged from bad collation defaults to bad query plans that required a lot of manual query tweaks.

We ended up getting so frustrated that we tried out PostgreSQL, and the results were fantastic. We didn't have to do any real customization (just some tuning settings for how big a server we had), and all of our most important queries were faster out of the box. As a result, we were able to delete a bunch of custom queries escaping the ORM that we'd written to make the MySQL query planner happy (because Postgres just did the right thing automatically).

And since then, we've gotten a ton of value out of Postgres. We use its excellent built-in full-text search, which has helped us avoid needing to bring in a tool like Elasticsearch, and we've really enjoyed features like its partial indexes, which saved us from adding a lot of unnecessary extra tables to get good performance for things like our "unread messages" and "starred messages" indexes.

I can't recommend it highly enough.
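
The partial indexes mentioned above deserve a concrete illustration: in PostgreSQL, an index can carry a WHERE clause so it covers only the rows a hot query touches. A hypothetical sketch (table and column names are invented for illustration, not Zulip's actual schema):

```sql
-- Index only unread rows: the index stays tiny and "unread messages"
-- lookups stay fast, with no extra tracking table required.
CREATE INDEX user_message_unread_idx
    ON user_messages (user_id, message_id)
    WHERE NOT read;

-- Queries whose predicate matches the index's WHERE clause can use it:
SELECT message_id
FROM user_messages
WHERE user_id = 42 AND NOT read;
```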

Tymoteusz Paul
DevOps guy at X20X Development LTD · 23 upvotes · 8M views

Often enough I have to explain my way of setting up a CI/CD pipeline with multiple deployment platforms. Since I am a bit tired of saying the same thing every single time, I've decided to write it up and share it with the world, and send people to read it instead ;). I will explain it with a "live example" of how Rome got built, assuming the current methodology consists only of a readme.md and wishes of good luck (as it usually does ;)).

It always starts with an app, whatever it may be, and reading the readmes available while Vagrant and VirtualBox are installing and updating. Following that is the first hurdle: convert all the instructions/scripts into Ansible playbook(s), stopping only when a clean vagrant up or vagrant reload gives us a fully working environment. As our Vagrant environment is now functional, it's time to break it! This is the moment to look for how things can be done better (too rigid/too loose versioning? Sloppy environment setup?) and replace them with the right way to do things, one that won't bite us in the backside. This is the point, and the best opportunity, to upcycle the existing way of doing the dev environment into a proper, production-grade product.

I should probably digress here for a moment and explain why. I firmly believe that the way you deploy production is the same way you should deploy development, shy of a few debugging-friendly settings. This way you avoid the discrepancy between how production works and how development works, which almost always causes major pains in the back of the neck, and with proper tools it should mean no extra work for the developers. That's why we start with Vagrant, as developer boxes should be as easy as vagrant up, but the meat of our product lies in Ansible, which will do the meat of the work and can be applied to almost anything: AWS, bare metal, Docker, LXC, on the open net, behind a VPN - you name it.

We must also give proper consideration to monitoring and log hoovering at this point. My generic answer here is to grab Elasticsearch, Kibana, and Logstash. While for different use cases there may be better solutions, this one is well battle-tested, performs reasonably, and is very easy to scale both vertically (within some limits) and horizontally. Logstash rules are easy to write and easy to maintain through Ansible, which, as I've mentioned earlier, is at the very core of things, and creating triggers/reports and alerts based on Elastic and Kibana is generally a breeze, including some quite complex aggregations.

If we are happy with the state of the Ansible setup, it's time to move on and put all those roles and playbooks to work. Namely, we need something to manage our CI/CD pipelines. For me, the choice is obvious: TeamCity. It's modern, robust, and unlike most of the lightweight alternatives, it's transparent. What I mean by that is that it doesn't tell you how to do things, and doesn't limit your ways to deploy, test, or package for that matter. Instead, it provides a developer-friendly and rich playground for your pipelines. You can do mostly the same with Jenkins, but it has a quite dated look and feel to it, while also missing some key functionality that must be brought in via plugins (like a quality REST API, which comes built in with TeamCity). It also comes with all the commonly handy plugins like Slack or Apache Maven integration.

The exact flow between CI and CD varies too greatly from one application to another to describe, so I will outline a few rules that guide me:

  1. Make build steps as small as possible. This way, when something breaks, we know exactly where, without needing to dig and root around.
  2. All security credentials besides the development environment's must be sourced from individual Vault instances. Keys to those containers should exist only on the CI/CD box and be accessible by a few people (the fewer the better). This is pretty self-explanatory, as anything besides dev may contain sensitive data and, at times, be public-facing; because of that, appropriate security must be present. TeamCity shines in this department with excellent secrets management.
  3. Every part of the build chain shall consume and produce artifacts. If it creates nothing, it likely shouldn't be its own build. This way, if any issue shows up with any environment or version, all a developer has to do is grab the appropriate artifacts to reproduce the issue locally.
  4. Deployment builds should be directly tied to specific Git branches/tags. This enables much easier tracking of what caused an issue, including automatically identifying and tagging the author (nothing like automated regression testing!).

Speaking of deployments, I generally try to keep it simple but also keep a close eye on the wallet. Because of that, I am more than happy with AWS or another cloud provider, but I constantly check the load and whether we get the value of what we are paying for. Often enough the pattern of use is not constantly erratic, but rather has a firm baseline, which could be migrated away from the cloud and onto bare-metal boxes. That is another area where this approach strongly triumphs over the common Docker and CircleCI setup, where you are very much tied into cloud providers and getting out is expensive. To embrace bare-metal hosting, all you need is the help of some container-based self-hosting software; my personal preference is Proxmox and LXC. Following that, all you must write are Ansible scripts to manage the Proxmox hardware, much the same way as you do for Amazon EC2 (Ansible supports both well), and you are good to go. One does not exclude the other - quite the opposite, as they can live in great synergy and cut your costs dramatically (the heavier your base load, the bigger the savings) while providing production-grade resiliency.
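
As a taste of the script-to-playbook conversion described above, a typical provisioning script collapses into a few declarative Ansible tasks. A minimal hypothetical sketch (package names, paths, and handler are invented for illustration):

```yaml
# site.yml - provision an app box; `vagrant up` can apply this via the ansible provisioner
- hosts: all
  become: true
  tasks:
    - name: Install runtime packages
      apt:
        name: [nginx, python3, python3-venv]
        state: present
        update_cache: true

    - name: Deploy application config from a template
      template:
        src: templates/app.conf.j2
        dest: /etc/myapp/app.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      service:
        name: nginx
        state: restarted
```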

ELK

The acronym for three open source projects: Elasticsearch, Logstash, and Kibana
PROS OF ELK
  • Open source
  • Can run locally
  • Good for startups with monetary limitations
  • If the external network goes down, you aren't without logging
  • Easy to set up
  • JSON log support
  • Live logging
CONS OF ELK
  • Elasticsearch is a resource hog
  • Logstash configuration is a pain
  • Bad for startups with personnel limitations

related ELK posts

Wallace Alves
Cyber Security Analyst · 2 upvotes · 855.2K views

Docker, Docker Compose, Portainer, ELK (Elasticsearch, Kibana, Logstash), nginx
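
A single-node ELK stack like the one in this post is commonly stood up with Docker Compose. A minimal sketch for local experimentation only, assuming the official Elastic images; the version tag is illustrative, and security is disabled for simplicity:

```yaml
# docker-compose.yml - minimal single-node ELK for local experiments
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.13.4
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false   # local experimentation only
    ports:
      - "9200:9200"

  logstash:
    image: docker.elastic.co/logstash/logstash:8.13.4
    depends_on:
      - elasticsearch
    ports:
      - "5044:5044"   # Beats input

  kibana:
    image: docker.elastic.co/kibana/kibana:8.13.4
    environment:
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"
```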

Graylog

Open source log management that actually works
PROS OF GRAYLOG
  • Open source
  • Powerful
  • Well documented
  • Alerts
  • User authentication
  • Flexible query and parsing language
  • User management
  • Easy query language and English parsing
  • Alerts and dashboards
  • Easy to install
  • A large community
  • Manage users and permissions
  • Free version
CONS OF GRAYLOG
  • Does not handle frozen indices at all
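
Graylog typically ingests structured messages in its GELF format. A minimal Python sketch that sends one message to a GELF UDP input on the conventional port 12201; host, port, and the extra field are assumptions, and a matching input must be configured in Graylog first:

```python
import json
import socket

GRAYLOG_HOST, GELF_UDP_PORT = "localhost", 12201  # assumed GELF UDP input

def send_gelf(short_message, host="web-1", level=3, **extra):
    """Send one GELF message over UDP; extra fields get the required '_' prefix."""
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "level": level,  # syslog severity: 3 = error
        **{f"_{k}": v for k, v in extra.items()},
    }
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(json.dumps(msg).encode("utf-8"), (GRAYLOG_HOST, GELF_UDP_PORT))
    sock.close()

send_gelf("disk full", service="billing")
```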


Logstash

Collect, Parse, & Enrich Data
PROS OF LOGSTASH
  • Free
  • Easy but powerful filtering
  • Scalable
  • Kibana provides machine-learning-based analytics for its logs
  • Great for meeting GDPR goals
  • Well documented
CONS OF LOGSTASH
  • Memory-intensive
  • Documentation difficult to use
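
To show how events typically enter Logstash: with a TCP input and JSON codec configured in the pipeline (for example input { tcp { port => 5000 codec => json_lines } }), any client can ship newline-delimited JSON. A minimal Python sketch under that assumption; host and port are placeholders:

```python
import json
import socket

LOGSTASH_HOST, LOGSTASH_PORT = "localhost", 5000  # assumed tcp/json_lines input

def ship(event: dict) -> None:
    """Send one event as newline-delimited JSON to a Logstash TCP input."""
    with socket.create_connection((LOGSTASH_HOST, LOGSTASH_PORT)) as sock:
        sock.sendall((json.dumps(event) + "\n").encode("utf-8"))

ship({"level": "error", "message": "disk full", "service": "billing"})
```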

related Logstash posts

Tymoteusz Paul's CI/CD pipeline write-up (see the related Elasticsearch posts above) covers Logstash as part of the same stack.

Hi everyone. I'm trying to set up my own personal syslog monitoring.

  1. To get the logs, I am unsure which way to choose:
     1.1 Use Logstash as a TCP server.
     1.2 Implement a Go TCP server.

  2. To store and plot the data:
     2.1 Use the Elasticsearch tools.
     2.2 Use InfluxDB and Grafana.

I would like to know which is the cheaper and more scalable solution, or whether there is a better way to do it altogether.
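
For a feel of what option 1.2 involves (shown in Python rather than Go, purely for illustration), a hand-rolled TCP syslog receiver is roughly this much code; parsing and storage are left out:

```python
import socketserver

class SyslogTCPHandler(socketserver.StreamRequestHandler):
    """Accept newline-delimited syslog lines from one client connection."""
    def handle(self):
        for raw in self.rfile:  # one line per syslog message
            line = raw.decode("utf-8", errors="replace").rstrip("\n")
            # Real code would parse the RFC 3164/5424 structure and store it.
            print(f"{self.client_address[0]}: {line}")

if __name__ == "__main__":
    # 514 is the standard syslog port but requires root; 1514 is a common alternative.
    with socketserver.ThreadingTCPServer(("0.0.0.0", 1514), SyslogTCPHandler) as srv:
        srv.serve_forever()
```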

Kibana

Visualize your Elasticsearch data and navigate the Elastic Stack
PROS OF KIBANA
  • Easy to set up
  • Free
  • Can search text
  • Has pie charts
  • X-axis is not restricted to timestamps
  • Easy queries and a good way to view logs
  • Supports plugins
  • Dev Tools (see the console example below)
  • Can build dashboards
  • More "user-friendly"
  • Out-of-the-box dashboards/analytics for Metrics/Heartbeat
  • Easy to drill down
  • Quick to get up and running
CONS OF KIBANA
  • Unintuitive
  • Elasticsearch is huge
  • Heavyweight UI
  • Works on top of Elasticsearch only
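
The Dev Tools console noted in the pros lets you run raw Elasticsearch requests in Kibana's shorthand syntax. An illustrative query (the index pattern and field names are assumptions) that buckets error events by hour, the kind of result you would then chart:

```
GET logs-*/_search
{
  "size": 0,
  "query": { "match": { "level": "error" } },
  "aggs": {
    "errors_per_hour": {
      "date_histogram": { "field": "@timestamp", "calendar_interval": "1h" }
    }
  }
}
```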

related Kibana posts

Tymoteusz Paul's CI/CD pipeline write-up (see the related Elasticsearch posts above) also discusses Kibana as part of the same stack.

Tassanai Singprom

This is my stack in Application & Data:

JavaScript, PHP, HTML5, jQuery, Redis, Amazon EC2, Ubuntu, Sass, Vue.js, Firebase, Laravel, Lumen, Amazon RDS, GraphQL, MariaDB

My utilities tools:

Google Analytics, Postman, Elasticsearch

My DevOps tools:

Git, GitHub, GitLab, npm, Visual Studio Code, Kibana, Sentry, BrowserStack

My business tools:

Slack

Log4j

A Java-based logging utility
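
To illustrate the overview's point that Log4j's behavior is controlled by a configuration file alone: with Log4j 2, a log4j2.properties file on the classpath defines levels, appenders, and layouts without touching the application binary. A minimal sketch (names and pattern are illustrative):

```properties
# log4j2.properties - route everything at INFO and above to the console
status = error

appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} [%t] %-5p %c - %m%n

rootLogger.level = info
rootLogger.appenderRef.stdout.ref = STDOUT
```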

Castle Core

It provides common Castle Project abstractions, including logging services