Kafka vs Scribe


Overview

Kafka: 24.2K Stacks · 22.3K Followers · 607 Votes · 31.2K GitHub Stars · 14.8K Forks
Scribe: 36 Stacks · 31 Followers · 0 Votes · 3.9K GitHub Stars · 787 Forks

Kafka vs Scribe: What are the differences?

Introduction

In this post, we compare Kafka and Scribe and highlight the key differences between these two systems for collecting and streaming log and event data.

  1. Architecture: Kafka is designed as a distributed streaming platform, providing a publish-subscribe model for real-time data streaming. It uses a distributed commit log to store and manage the data streams, providing fault-tolerance and high throughput. On the other hand, Scribe is a simple log-serving system that enables reliable and scalable log data publishing and subscription. It uses a centralized architecture with a single server managing the log data.

  2. Scalability: Kafka is known for its ability to handle high throughput and large volumes of streaming data. It is horizontally scalable, allowing the addition of more brokers to accommodate an increasing data load. Scribe, while scalable to a certain extent, has limitations in throughput and data volume handling, as it relies on a single server for log data management.

  3. Fault-Tolerance: Kafka provides fault-tolerance by replicating the data streams across multiple brokers, ensuring data durability even in the event of a broker failure. It also supports automatic leader election and data recovery mechanisms. Scribe, on the other hand, does not provide built-in fault-tolerance mechanisms. It relies on external backup and recovery processes to ensure data integrity.

  4. Message Delivery Guarantee: Kafka provides at-least-once delivery semantics, meaning every message published to a topic will be delivered to its subscribed consumers at least once. It achieves this by tracking consumer offsets and providing consistency guarantees in the data replication process (a minimal producer/consumer sketch after the summary below illustrates this pattern). Scribe, on the other hand, does not provide any built-in message delivery guarantees. It follows a best-effort approach, where message loss can occur in certain failure scenarios.

  5. API and Language Support: Kafka provides a rich set of APIs and client libraries for various programming languages like Java, Scala, Python, and more. It also supports a comprehensive set of integrations with popular data processing frameworks and tools. Scribe, however, has limited language support, primarily focusing on C++ and Python. It may require additional effort to integrate with other languages or frameworks.

  6. Use Cases: Kafka is widely used for scenarios involving real-time data streaming, event sourcing, and high-throughput data processing. It finds applications in various domains such as log aggregation, stream processing, and microservices communication. Scribe, on the other hand, is commonly used for log collection, aggregation, and storage. It is suitable for scenarios that require simple log data publishing and subscription without the need for real-time processing or high throughput.

In summary, Kafka and Scribe differ in terms of architecture, scalability, fault-tolerance, message delivery guarantee, API and language support, and use cases. While Kafka excels in handling real-time data streaming and provides high throughput, fault-tolerance, and strong delivery guarantees, Scribe focuses on simplicity and scalability for log data publishing and aggregation.
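
To make points 4 and 5 above concrete, here is a minimal producer/consumer sketch using the kafka-python client. The topic name ("page-views"), consumer group ("analytics"), and process() step are illustrative assumptions, not part of either project; disabling auto-commit and committing offsets only after processing is what produces the at-least-once behavior described in point 4.

```python
from kafka import KafkaProducer, KafkaConsumer

# --- Producer: publish an event to a (hypothetical) "page-views" topic ---
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    acks="all",    # wait for all in-sync replicas before acknowledging
    retries=5,     # retry transient send failures (at-least-once on the producer side)
)
producer.send("page-views", b'{"user": "alice", "page": "/home"}')
producer.flush()

# --- Consumer: subscribe as part of a consumer group and commit offsets manually ---
def process(payload: bytes) -> None:
    # placeholder for real processing logic
    print("processing", payload)

consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    group_id="analytics",
    auto_offset_reset="earliest",
    enable_auto_commit=False,   # commit only after processing succeeds
)
for record in consumer:
    process(record.value)
    consumer.commit()           # committing after processing yields at-least-once delivery
```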


Advice on Kafka and Scribe

viradiya

Apr 12, 2020

Needs advice on AngularJS, ASP.NET Core, and MSSQL

We are going to develop a microservices-based application. It consists of AngularJS, ASP.NET Core, and MSSQL.

We have three types of microservices: Emailservice, Filemanagementservice, and Filevalidationservice.

I am a beginner with microservices. I have read about RabbitMQ, but I've since come across Redis and Kafka as well. I want to know which is best for this scenario.

933k views
Comments
Ishfaq

Feb 28, 2020

Needs advice

Our backend application sends messages to a third-party application at the end of each backend (CRUD) API call (triggered from the UI). Building, processing, sending, and logging these external messages takes too much extra time, and the UI application has no interest in them.

So currently we send these third-party messages by creating a new child thread at the end of each REST API call, so that the UI doesn't have to wait for these extra third-party calls.

I want to integrate Apache Kafka for these extra third-party API calls so that I can queue them, retry failed calls, and add logging. (Currently the third-party messages are sent from multiple threads at the same time, which uses too much processing and too many resources.)

Question 1: Is this a use case of a message broker?

Question 2: If it is, which is better: Kafka or RabbitMQ?
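
As a rough illustration of the offloading pattern described above (not an official recommendation), here is a minimal kafka-python sketch. The topic name, consumer group, and send_to_third_party helper are hypothetical placeholders: the API handler only enqueues a message and returns, while a separate worker performs the third-party call and commits the offset only on success, so failed calls can be replayed.

```python
import json
from kafka import KafkaProducer, KafkaConsumer

# --- In the REST API handler: enqueue the third-party call and return immediately ---
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

def after_crud_call(entity_id: str, action: str) -> None:
    # hypothetical topic holding third-party notifications to be sent
    producer.send("third-party-notifications", {"entity": entity_id, "action": action})

# --- In a separate worker process: perform the call, commit only on success ---
def send_to_third_party(payload: dict) -> None:
    # placeholder for the real HTTP call; raise an exception on failure so the
    # offset is not committed and the message is reprocessed after a restart
    print("sending to third party:", payload)

consumer = KafkaConsumer(
    "third-party-notifications",
    bootstrap_servers="localhost:9092",
    group_id="third-party-worker",
    enable_auto_commit=False,
)
for record in consumer:
    send_to_third_party(json.loads(record.value))
    consumer.commit()
```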

804k views
Comments
Roman

Senior Back-End Developer, Software Architect

Feb 12, 2019

Review of Kafka

I use Kafka because it has almost infinite scalability in terms of processing events (it can be scaled to process hundreds of thousands of events) and great monitoring (all sorts of metrics are exposed via JMX).

Downsides of using Kafka are:

  • you have to deal with Zookeeper
  • you have to implement advanced routing yourself (unlike RabbitMQ, Kafka has no advanced routing built in); see the sketch below
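
As an illustration of that second point, routing that RabbitMQ does in the broker (exchanges, bindings, routing keys) typically becomes either one topic per message type or consumer-side filtering in Kafka. A rough kafka-python sketch, using a hypothetical "orders" topic with record keys standing in for routing keys:

```python
from kafka import KafkaConsumer

def handle_eu_order(payload: bytes) -> None:
    # placeholder for region-specific handling
    print("EU order:", payload)

consumer = KafkaConsumer(
    "orders",                          # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="eu-billing",
)
for record in consumer:
    # Kafka has no broker-side routing rules, so the application filters by key
    if record.key == b"eu":
        handle_eu_order(record.value)
```
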
10.8k views
Comments

Detailed Comparison

Kafka

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

  • Written at LinkedIn in Scala
  • Used by LinkedIn to offload processing of all page and other views
  • Defaults to using persistence and uses the OS disk cache for hot data, giving it higher throughput than comparable systems with persistence enabled
  • Supports both online and offline processing

Scribe

Scribe is a server for aggregating log data streamed in real time from a large number of servers. It is designed to be scalable and reliable.

  • Aggregates log data
  • Streams it in real time
Statistics

GitHub Stars: Kafka 31.2K · Scribe 3.9K
GitHub Forks: Kafka 14.8K · Scribe 787
Stacks: Kafka 24.2K · Scribe 36
Followers: Kafka 22.3K · Scribe 31
Votes: Kafka 607 · Scribe 0
Pros & Cons

Kafka

Pros
  • High-throughput (126)
  • Distributed (119)
  • Scalable (92)
  • High-Performance (86)
  • Durable (66)

Cons
  • Non-Java clients are second-class citizens (32)
  • Needs Zookeeper (29)
  • Operational difficulties (9)
  • Terrible Packaging (5)

Scribe

No community feedback yet
Integrations

Kafka: No integrations available
Scribe: Python, Hadoop, Apache Thrift

What are some alternatives to Kafka and Scribe?

RabbitMQ

RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.

Celery

Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.

Papertrail

Papertrail helps detect, resolve, and avoid infrastructure problems using log messages. Papertrail's practicality comes from our own experience as sysadmins, developers, and entrepreneurs.

Logmatic

Get a clear overview of what is happening across your distributed environments, and spot the needle in the haystack in no time. Build dynamic analyses and identify improvements for your software, your user experience and your business.

Amazon SQS

Transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be always available. With SQS, you can offload the administrative burden of operating and scaling a highly available messaging cluster, while paying a low price for only what you use.

Loggly

It is a SaaS solution to manage your log data. There is nothing to install and updates are automatically applied to your Loggly subdomain.

NSQ

NSQ is a realtime distributed messaging platform designed to operate at scale, handling billions of messages per day. It promotes distributed and decentralized topologies without single points of failure, enabling fault tolerance and high availability coupled with a reliable message delivery guarantee.

Logentries

Logentries makes machine-generated log data easily accessible to IT operations, development, and business analysis teams of all sizes. With the broadest platform support and an open API, Logentries brings the value of log-level data to any system, to any team member, and to a community of more than 25,000 worldwide users.

Logstash

Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). If you store them in Elasticsearch, you can view and analyze them with Kibana.

ActiveMQ

Apache ActiveMQ is fast, supports many Cross Language Clients and Protocols, comes with easy to use Enterprise Integration Patterns and many advanced features while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.

Related Comparisons

  • Bitbucket vs GitHub vs GitLab
  • Bootstrap vs Materialize
  • Django vs Laravel vs Node.js
  • Bootstrap vs Foundation vs Material UI
  • Node.js vs Spring Boot