What is OpenTracing and what are its top alternatives?
OpenTracing is an open-source project that provides a vendor-neutral API and instrumentation for distributed tracing. It allows developers to instrument their code to capture and propagate traces across microservices. Key features of OpenTracing include support for multiple programming languages, integration with popular tracing systems like Jaeger and Zipkin, and the ability to create custom instrumentation. However, a limitation of OpenTracing is that it does not provide built-in capabilities for log correlation or metrics collection.
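As a rough illustration of what that instrumentation looks like in practice, here is a minimal sketch using the `opentracing` Python package and its built-in no-op global tracer; the operation name, tag, and headers are made up for the example, and in a real service you would register a concrete tracer such as Jaeger's or Zipkin's:

```python
# Minimal OpenTracing instrumentation sketch (Python).
# Uses the no-op global tracer from the `opentracing` package; a real service
# would register a concrete tracer (e.g. Jaeger or Zipkin) behind this API.
import opentracing
from opentracing.propagation import Format

tracer = opentracing.global_tracer()

def handle_checkout(request_headers):
    # Continue a trace started by an upstream service, if any.
    parent_ctx = tracer.extract(Format.HTTP_HEADERS, request_headers)

    with tracer.start_active_span("checkout", child_of=parent_ctx) as scope:
        scope.span.set_tag("http.method", "POST")

        # Propagate the trace context to a downstream call.
        outbound_headers = {}
        tracer.inject(scope.span.context, Format.HTTP_HEADERS, outbound_headers)
        # ... make the downstream HTTP request with outbound_headers ...

handle_checkout({})  # no parent context; starts a new trace
```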
- Jaeger: Jaeger is a distributed tracing platform that is compatible with OpenTracing. Key features include high scalability, support for multiple storage backends, and a user-friendly web interface. Pros: seamless integration with OpenTracing, rich visualization capabilities. Cons: may require additional resources for large-scale deployments.
- Zipkin: Zipkin is another popular distributed tracing system that supports the OpenTracing API. Features include a simple and lightweight design, support for multiple storage backends, and integration with various programming languages. Pros: easy to deploy and use, active community support. Cons: not as scalable as some other tools.
- SkyWalking: Apache SkyWalking is an APM (application performance monitoring) tool that provides distributed tracing capabilities. Key features include support for multiple languages and frameworks, advanced visualization of traces, and alerting functionality. Pros: comprehensive monitoring solution, active development and community support. Cons: may have a learning curve for beginners.
- Instana: Instana is an APM platform that includes distributed tracing as part of its monitoring capabilities. Features include automatic distributed tracing setup, deep visibility into applications, and AI-powered analytics. Pros: easy setup and configuration, advanced monitoring and alerting features. Cons: may be expensive for small teams or startups.
- Datadog APM: Datadog APM is a performance monitoring tool that offers distributed tracing functionality. Key features include seamless integration with other Datadog services, customizable dashboards, and detailed performance metrics. Pros: robust monitoring features, extensive documentation. Cons: pricing may be a concern for some users.
- New Relic: New Relic is a popular APM and monitoring tool that includes distributed tracing capabilities. Features include end-to-end visibility into transactions, real-time performance monitoring, and anomaly detection. Pros: user-friendly interface, comprehensive monitoring solutions. Cons: pricing may be prohibitive for small teams or startups.
- Prometheus: Prometheus is an open-source monitoring and alerting system focused on metrics rather than traces; it is commonly paired with a tracing tool to cover the metrics side of observability. Key features include a multi-dimensional data model, the powerful PromQL query language, and easy integration with other tools. Pros: scalable and flexible, active community support. Cons: no distributed tracing of its own, and it requires more setup and configuration than all-in-one APM solutions.
- Elastic APM: Elastic APM is part of the Elastic Stack and provides distributed tracing capabilities. Features include real-time performance monitoring, detailed transaction traces, and integration with other Elastic products. Pros: seamless integration with Elastic Stack, powerful search and analysis capabilities. Cons: may require some familiarity with Elastic products.
- Dynatrace: Dynatrace is an APM platform that offers distributed tracing functionality. Key features include AI-driven monitoring, automatic root cause analysis, and support for cloud-native technologies. Pros: advanced monitoring capabilities, automatic problem resolution. Cons: may be expensive for small teams or startups.
- OpenTelemetry: OpenTelemetry is a project that aims to provide a unified standard for observability by merging OpenTracing and OpenCensus. Key features include support for multiple programming languages, compatibility with various tracing backends, and flexibility in instrumentation. Pros: community-driven development, a documented migration path from OpenTracing. Cons: still in active development, and it may have some compatibility issues with existing systems (a minimal sketch of OpenTelemetry instrumentation follows this list).
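For teams weighing that transition, here is a rough sketch of the equivalent instrumentation with the OpenTelemetry Python API and SDK (packages `opentelemetry-api` and `opentelemetry-sdk`); the tracer name, span names, and attribute are illustrative, and the console exporter stands in for a real backend:

```python
# Minimal OpenTelemetry tracing sketch (Python), roughly equivalent to the
# OpenTracing example above. Requires opentelemetry-api and opentelemetry-sdk.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Print finished spans to stdout; in production you would export to a backend
# such as Jaeger, Zipkin, or an OTLP collector instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("checkout") as span:
    span.set_attribute("http.method", "POST")
    with tracer.start_as_current_span("charge-card"):
        pass  # nested spans are parented automatically via context
```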
Top Alternatives to OpenTracing
- Zipkin
It helps gather timing data needed to troubleshoot latency problems in service architectures. Features include both the collection and lookup of this data. ...
- Datadog
Datadog is the leading service for cloud-scale monitoring. It is used by IT, operations, and development teams who build and operate applications that run on dynamic or hybrid cloud infrastructure. Start monitoring in minutes with Datadog! ...
- Jaeger
Jaeger, a Distributed Tracing System
- Fluentd
Fluentd collects events from various data sources and writes them to files, RDBMS, NoSQL, IaaS, SaaS, Hadoop and so on. Fluentd helps you unify your logging infrastructure. ...
- OpenCensus
It is a set of libraries for various languages that allow you to collect application metrics and distributed traces, then transfer the data to a backend of your choice in real time. This data can be analyzed by developers and admins to understand the health of the application and debug problems. ...
- Prometheus
Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true. ...
- Brave
It is a distributed tracing instrumentation library for the JVM, maintained by the OpenZipkin community. Brave captures timing data for operations in Java services and reports it to Zipkin-compatible backends, making it a common instrumentation layer in Zipkin-based setups.
- Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data. ...
OpenTracing alternatives & related posts
Zipkin
Pros of Zipkin (community upvotes in parentheses):
- Open Source (10)
Datadog
Pros of Datadog:
- Monitoring for many apps (databases, web servers, etc) (139)
- Easy setup (107)
- Powerful UI (87)
- Powerful integrations (84)
- Great value (70)
- Great visualization (54)
- Events + metrics = clarity (46)
- Notifications (41)
- Custom metrics (41)
- Flexibility (39)
- Free & paid plans (19)
- Great customer support (16)
- Makes my life easier (15)
- Adapts automatically as I scale up (10)
- Easy setup and plugins (9)
- Super easy and powerful (8)
- In-context collaboration (7)
- AWS support (7)
- Rich in features (6)
- Docker support (5)
- Cute logo (4)
- Source control and bug tracking (4)
- Monitor almost everything (4)
- Cost (4)
- Full visibility of applications (4)
- Simple, powerful, great for infra (4)
- Easy to analyze (4)
- Better than others (4)
- Automation tools (4)
- Best in the field (3)
- Free setup (3)
- Good for startups (3)
- Expensive (3)
- APM (2)
Cons of Datadog:
- Expensive (19)
- No error/exception tracking (4)
- If the external network goes down, you won't be logging (2)
- Complicated (1)
related Datadog posts
We just launched the Segment Config API (try it out for yourself here) — a set of public REST APIs that enable you to manage your Segment configuration. Behind the scenes, the Config API is built with Go, gRPC, and Envoy.
At Segment, we build new services in Go by default. The language is simple so new team members quickly ramp up on a codebase. The tool chain is fast so developers get immediate feedback when they break code, tests or integrations with other systems. The runtime is fast so it performs great at scale.
For the newest round of APIs we adopted the gRPC service #framework.
The Protocol Buffer service definition language makes it easy to design type-safe and consistent APIs, thanks to ecosystem tools like the Google API Design Guide for API standards, uber/prototool for formatting and linting .protos, lyft/protoc-gen-validate for defining field validations, and grpc-gateway for defining REST mappings.
With a well-designed .proto, it's easy to generate a Go server interface and a TypeScript client, providing type-safe RPC between languages.
For the API gateway and RPC we adopted the Envoy service proxy.
The internet-facing segmentapis.com endpoint is an Envoy front proxy that rate-limits and authenticates every request. It then transcodes a #REST / #JSON request to an upstream gRPC request. The upstream gRPC servers are running an Envoy sidecar configured for Datadog stats.
The result is API #security, #reliability, and consistent #observability through Envoy configuration, not code.
We experimented with Swagger service definitions, but the spec is sprawling and the generated clients and server stubs leave a lot to be desired. gRPC, .proto, and the Go implementation feel better designed and implemented. Thanks to the gRPC tooling and ecosystem you can generate Swagger from .protos, but it's effectively impossible to go the other way.
Our primary source of monitoring and alerting is Datadog. We’ve got prebuilt dashboards for every scenario and integration with PagerDuty to manage routing any alerts. We’ve definitely scaled past the point where managing dashboards is easy, but we haven’t had time to invest in using features like Anomaly Detection. We’ve started using Honeycomb for some targeted debugging of complex production issues and we are liking what we’ve seen. We capture any unhandled exceptions with Rollbar and, if we realize one will keep happening, we quickly convert the metrics to point back to Datadog, to keep Rollbar as clean as possible.
We use Segment to consolidate all of our trackers, the most important of which goes to Amplitude to analyze user patterns. However, if we need a more consolidated view, we push all of our data to our own data warehouse running PostgreSQL; this is available for analytics and dashboard creation through Looker.
Jaeger
Pros of Jaeger:
- Easy to install (6)
- Open Source (6)
- Feature-rich UI (5)
- CNCF project (4)
related Jaeger posts
How Uber developed the open-source, end-to-end distributed tracing system Jaeger, now a CNCF project:
Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.
Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:
https://eng.uber.com/distributed-tracing/
(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)
Bindings/Operator: Python, Java, Node.js, Go, C++, Kubernetes, JavaScript, OpenShift, C#, Apache Spark
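For readers who want to try Jaeger with the OpenTracing API shown earlier, here is a rough sketch of wiring in a concrete Jaeger tracer using the older jaeger-client Python package (since superseded by the OpenTelemetry SDKs); the service name and sampler settings are only illustrative:

```python
# Rough sketch: replace the no-op OpenTracing tracer with a real Jaeger tracer.
# Uses the (now-legacy) jaeger-client package; spans are reported over UDP to a
# Jaeger agent on localhost by default.
import opentracing
from jaeger_client import Config

config = Config(
    config={
        "sampler": {"type": "const", "param": 1},  # sample every trace
        "logging": True,
    },
    service_name="checkout-service",  # illustrative name
    validate=True,
)
tracer = config.initialize_tracer()
opentracing.tracer = tracer  # make it the global tracer used by instrumentation

with tracer.start_active_span("demo-operation"):
    pass  # instrumented work goes here

tracer.close()  # flush buffered spans before the process exits
```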
Fluentd
Pros of Fluentd:
- Open-source (11)
- Easy (9)
- Great for Kubernetes node container log forwarding (9)
- Lightweight (9)
Prometheus
Pros of Prometheus:
- Powerful, easy-to-use monitoring (47)
- Flexible query language (38)
- Dimensional data model (32)
- Alerts (27)
- Active and responsive community (23)
- Extensive integrations (22)
- Easy to set up (19)
- Beautiful model and query language (12)
- Easy to extend (7)
- Nice (6)
- Written in Go (3)
- Good for experimentation (2)
- Easy for monitoring (1)
Cons of Prometheus:
- Just for metrics (12)
- Bad UI (6)
- Needs monitoring to access metrics endpoints (6)
- Not easy to configure and use (4)
- Supports only active agents (3)
- Written in Go (2)
- TLS is quite difficult to understand (2)
- Requires multiple applications and tools (2)
- Single point of failure (1)
related Prometheus posts
Grafana and Prometheus together, running on Kubernetes, are a powerful combination. These tools are cloud-native and offer a large community and easy integrations. At PayIt we're exporting Java application metrics using a Dropwizard metrics exporter, and our Node.js services now use the prom-client npm library to serve metrics.
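The post above mentions Dropwizard and prom-client exporters; as a rough, language-agnostic illustration of the same pull model, here is a minimal sketch using the Python prometheus_client library (the port, metric names, and route label are made up for the example):

```python
# Minimal Prometheus exposition sketch using the Python prometheus_client
# library: the service exposes /metrics and the Prometheus server scrapes it.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["route"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request():
    REQUESTS.labels(route="/checkout").inc()
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work

if __name__ == "__main__":
    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
    while True:
        handle_request()
```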
Why we spent several years building an open source, large-scale metrics alerting system, M3, built for Prometheus:
By late 2014, all services, infrastructure, and servers at Uber emitted metrics to a Graphite stack that stored them using the Whisper file format in a sharded Carbon cluster. We used Grafana for dashboarding and Nagios for alerting, issuing Graphite threshold checks via source-controlled scripts. While this worked for a while, expanding the Carbon cluster required a manual resharding process and, due to lack of replication, any single node’s disk failure caused permanent loss of its associated metrics. In short, this solution was not able to meet our needs as the company continued to grow.
To ensure the scalability of Uber’s metrics backend, we decided to build out a system that provided fault tolerant metrics ingestion, storage, and querying as a managed platform...
(GitHub: https://github.com/m3db/m3)
Splunk
Pros of Splunk:
- API for searching logs, running reports (3)
- Alert system based on custom query results (3)
- Splunk language supports string, date manipulation, math, etc. (2)
- Dashboarding on any log contents (2)
- Custom log parsing as well as automatic parsing (2)
- Query engine supports joining, aggregation, stats, etc. (2)
- Rich GUI for searching live logs (2)
- Ability to style search results into reports (2)
- Granular scheduling and time window support (1)
- Query any log as key-value pairs (1)
Cons of Splunk:
- Splunk query language is rich, so there is a lot to learn (1)
related Splunk posts
I am designing a Django application for my organization which will be used as an internal tool. The infra team said that I will not have SSH access to the production server and I will have to log all my backend application messages to Splunk. I have no knowledge of Splunk, so these are the approaches I am considering:
- Approach 1: Create an hourly cron job that uploads the server log file to some Splunk storage for later analysis. Is this possible?
- Approach 2: Stream the logs directly to some Splunk endpoint. (If this is possible, I worry that network usage and communication overhead will be a pain point for my application.)
Is there any better or standard approach? Thanks in advance.
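For Approach 2, the standard mechanism is Splunk's HTTP Event Collector (HEC). Here is a rough sketch of a Python logging handler that forwards records over HEC; the endpoint URL, token, index, sourcetype, and logger name are placeholders you would get from your Splunk administrator:

```python
# Rough sketch of Approach 2: a logging handler that forwards Django log
# records to Splunk's HTTP Event Collector (HEC). The endpoint URL, token,
# and index are placeholders supplied by your Splunk administrator.
import logging
import requests

HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

class SplunkHECHandler(logging.Handler):
    def emit(self, record):
        payload = {
            "event": self.format(record),
            "sourcetype": "django_app",
            "index": "my_app_index",
        }
        try:
            requests.post(
                HEC_URL,
                json=payload,
                headers={"Authorization": f"Splunk {HEC_TOKEN}"},
                timeout=2,
            )
        except requests.RequestException:
            pass  # never let logging failures break the request path

logger = logging.getLogger("myapp")
logger.addHandler(SplunkHECHandler())
logger.error("checkout failed for order 1234")
```

In practice you would batch events or use a dedicated forwarder rather than one HTTP call per record, which addresses the network-overhead concern in Approach 2.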
I use Kibana because it ships with the ELK stack. I don't find it as powerful as Splunk; however, it is light years ahead of grepping through log files. We previously used Grafana but found it annoying to maintain a separate tool outside of the ELK stack. We were able to get everything we needed from Kibana.