Cassandra vs Hadoop: What are the differences?

## Introduction
Here are the key differences between Cassandra and Hadoop:

1. **Data Model**: Cassandra follows a NoSQL data model, specifically a wide-column store, while Hadoop is built around HDFS (Hadoop Distributed File System) and follows a distributed file system model.
2. **Query Language**: Cassandra uses CQL (Cassandra Query Language) for querying, whereas Hadoop relies on MapReduce jobs for processing and querying large datasets.
3. **Consistency**: In Cassandra, consistency can be tuned per query, allowing eventual or strong consistency depending on requirements (see the sketch after this list), whereas Hadoop maintains data consistency through its replication factor and block replication in HDFS.
4. **Scalability**: Cassandra is designed to scale horizontally, making it suitable for large data volumes and high write throughput, while Hadoop also scales horizontally but is optimized for batch processing and analytics over vast datasets.
5. **Real-time Processing**: Cassandra excels at real-time data processing and low-latency workloads, whereas Hadoop is better suited to batch processing and offline analytics.
6. **Fault Tolerance**: Hadoop provides fault tolerance through data replication in HDFS, keeping data available in case of hardware failures, while Cassandra ensures fault tolerance through its distributed architecture and replication of data across multiple nodes.
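To make the tunable-consistency point concrete, here is a minimal sketch using the DataStax Python driver for Cassandra; the contact points, keyspace, table, and column names are hypothetical placeholders, not anything defined above.

```python
# pip install cassandra-driver
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Hypothetical contact points and keyspace, for illustration only.
cluster = Cluster(["10.0.0.1", "10.0.0.2"])
session = cluster.connect("telemetry")

# Strong consistency for this read: a QUORUM of replicas must answer.
strong_read = SimpleStatement(
    "SELECT * FROM readings WHERE device_id = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
rows = session.execute(strong_read, ("sensor-42",))

# Eventual consistency for this write: one replica acknowledgement is enough.
fast_write = SimpleStatement(
    "INSERT INTO readings (device_id, ts, value) VALUES (%s, toTimestamp(now()), %s)",
    consistency_level=ConsistencyLevel.ONE,
)
session.execute(fast_write, ("sensor-42", 21.5))
```

The same statement API lets each query pick its own trade-off between latency and consistency, which is the per-query tuning described in point 3.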

In summary, Cassandra and Hadoop differ in their data models, query languages, consistency models, scalability approaches, real-time processing capabilities, and fault tolerance mechanisms.
Advice on Cassandra and Hadoop
Needs advice on Hadoop, MarkLogic, and Snowflake

For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake. We are trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions or best practices for when to use each platform and what data to store in MarkLogic versus Snowflake versus Hadoop, or are all three of these platforms redundant with one another?

Replies (1)
Ivo Dinis Rodrigues
Recommends

As I see it, you can use Snowflake as your data warehouse and MarkLogic as a data lake. You can load all your raw data into MarkLogic and curate it into a company data model, then supply that data to Snowflake. You could try to implement the data warehouse functionality in MarkLogic, but it would just cost you a lot of time. If you are using the AWS version of Snowflake, you can use the MarkLogic Spark connector to access the data. As an extra, you can also use MarkLogic as an operational reporting system if you pair it with a reporting tool like Power BI. With additional APIs you can also provide data to other systems with MarkLogic as the source.

Umair Iftikhar
Technical Architect at ERP Studio
Needs advice on Cassandra, Druid, and TimescaleDB

Developing a solution that collects telemetry data from different devices: at least 1,000 devices and up to 12,000, each sending 2 packets per second. This is time-series data, while the data definitions and different reports, like building information, maintenance records, etc., are stored in PostgreSQL. I want to know about the best solution. This data is required for math and ML pipelines running different algorithms. The telemetry data itself is raw, without the definitions and metadata stored in PostgreSQL. Initially, I went with TimescaleDB due to its PostgreSQL support, but as the number of sites increased, I started facing many issues with TimescaleDB in terms of flexibility of storing data.

My major requirement is also replication of the database for reporting and other purposes. You may also suggest options other than Druid and Cassandra, but an open-source solution is appreciated.

Replies (1)
Recommends MongoDB

Hi Umair, did you try MongoDB? We are using MongoDB in a production environment and collecting data from devices in a scenario like yours. We have a MongoDB cluster with three replicas. Data from the devices is written to the primary node, and the real-time dashboard UI uses the secondary nodes for read operations. With this setup, write operations are not affected by read operations either.
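As a rough illustration of that setup, here is a minimal sketch with PyMongo; the connection string, database, and collection names are hypothetical placeholders.

```python
# pip install pymongo
from pymongo import MongoClient, ReadPreference

# Hypothetical replica-set connection string, for illustration only.
client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0"
)
db = client["telemetry"]

# Writes go to the primary (the default for write operations).
db.readings.insert_one({"device_id": "sensor-42", "ts": 1700000000, "value": 21.5})

# Dashboard reads are routed to secondaries so they don't compete with writes.
dashboard_readings = db.get_collection(
    "readings", read_preference=ReadPreference.SECONDARY_PREFERRED
)
latest = dashboard_readings.find({"device_id": "sensor-42"}).sort("ts", -1).limit(10)
```

Routing dashboard reads to the secondaries matches the setup described above, with the usual caveat that secondaries can lag slightly behind the primary.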

Needs advice on Hadoop, InfluxDB, and Kafka

I have a lot of data currently sitting in a MariaDB database: many tables that weigh around 200 GB with indexes. Most of the large tables have a date column that is always filtered on, plus usually 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, loading a large dataset is slow.

Replies (1)
Recommends Druid

Druid could be an amazing solution for your use case. My understanding and assumption is that you are looking to export your data from MariaDB for an analytical workload. Druid can be used as a time-series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case (see the sketch after this list):

1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local or cloud storage, or databases like MySQL and Postgres); in your case MariaDB, which uses the same drivers as MySQL.
2. It is a columnar database, so you can query just the fields that are required, which automatically makes your queries faster.
3. Druid intelligently partitions data based on time, so time-based queries are significantly faster than in traditional databases.
4. Scale up or down by just adding or removing servers; Druid automatically rebalances, and its fault-tolerant architecture routes around server failures.
5. It provides an amazing centralized UI to manage data sources, queries, and tasks.
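For a sense of what querying Druid looks like once the data has been ingested, here is a minimal sketch that sends a Druid SQL query to the router's HTTP endpoint; the host, datasource, and column names are hypothetical placeholders. Batch ingestion from MariaDB itself would be configured separately through an ingestion spec; this only shows the query side.

```python
# pip install requests
import requests

# Hypothetical Druid router address and datasource name, for illustration only.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"

query = """
SELECT FLOOR(__time TO DAY) AS day, category, COUNT(*) AS events
FROM sales_events
WHERE __time >= TIMESTAMP '2023-01-01'
GROUP BY 1, 2
ORDER BY day
"""

# Druid's SQL API accepts a JSON payload containing the query string.
response = requests.post(DRUID_SQL_URL, json={"query": query})
response.raise_for_status()
for row in response.json():
    print(row)
```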

Vinay Mehta
Needs advice on Cassandra and ScyllaDB

The problem I have is this: we need to process and change (update/insert) 55M records every 2 minutes, and the updated data must be available to a REST API for filtering and selection. Response time for the REST API should be less than 1 second.

The most important factor for me is the 2-minute window for processing and storing. There need to be two views of the data: one for selection, and one for the changed data.

Replies (4)
Recommends ScyllaDB

Scylla can handle 1M events per second with a simple data model quite easily. The API for queries is CQL; there is a REST API, but that is for control and monitoring.

Alex Peake
Recommends Cassandra

Cassandra is quite capable of the task, in a highly available way, given appropriate scaling of the system. Remember that updates are only inserts, and that efficient retrieval is only by key (which can be a complex key). Talking of keys, make sure that the keys are well distributed.
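To illustrate that key-centric modelling advice, here is a minimal sketch of a table definition and keyed lookup via the Python driver; the keyspace, table, and column names are hypothetical placeholders, not a schema taken from the question.

```python
# pip install cassandra-driver
import datetime
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1"])          # hypothetical contact point
session = cluster.connect("analytics")   # hypothetical keyspace

# A composite partition key (entity_id, day_bucket) spreads writes across the
# cluster; the clustering column keeps the latest change first within a partition.
session.execute("""
    CREATE TABLE IF NOT EXISTS entity_changes (
        entity_id   text,
        day_bucket  date,
        changed_at  timestamp,
        payload     text,
        PRIMARY KEY ((entity_id, day_bucket), changed_at)
    ) WITH CLUSTERING ORDER BY (changed_at DESC)
""")

# An "update" is just another insert; reads are efficient because they hit a
# single partition identified by the full partition key.
session.execute(
    "INSERT INTO entity_changes (entity_id, day_bucket, changed_at, payload) "
    "VALUES (%s, %s, toTimestamp(now()), %s)",
    ("order-123", datetime.date(2023, 6, 1), '{"status": "shipped"}'),
)
rows = session.execute(
    "SELECT changed_at, payload FROM entity_changes "
    "WHERE entity_id = %s AND day_bucket = %s LIMIT 10",
    ("order-123", datetime.date(2023, 6, 1)),
)
```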

Pankaj Soni
Chief Technical Officer at Software Joint
Recommends Cassandra

I love Scylla for pet projects; however, its license, which is based on a per-server model, is an issue. Thus I recommend Cassandra.

Recommends ScyllaDB

By 55M do you mean 55 million entity changes per 2 minutes? That is relatively high, almost 460k per second. If I had to choose between Scylla and Cassandra, I would opt for Scylla, as it promises better performance for simple operations. However, it may be worth considering yet another technology. Take into consideration the required consistency, reliability, and high availability, and you may realize that there are more suitable ones. A REST API should not be the main driver, because you can always develop the API yourself if it is not supported by a given technology.

Decisions about Cassandra and Hadoop
Micha Mailänder
CEO & Co-Founder at Dechea

Fauna is a serverless database where you store data as JSON. It also has a built-in HTTP GraphQL interface with a full authentication and authorization layer, which means you can skip your backend and call it directly from the frontend. Because you can write data transformation functions within Fauna in its own language, FQL, we get a blazing-fast application.

Also, Fauna takes care of scaling and backups (all data is sharded across three different locations around the globe), which means we can fully focus on writing business logic and no longer have to worry about infrastructure.
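As a rough illustration of calling that built-in GraphQL interface over HTTP, here is a minimal sketch; the endpoint URL, the secret, and the `Task`/`allTasks` schema are assumptions for illustration and depend entirely on your own Fauna setup.

```python
# pip install requests
import requests

# Assumptions for illustration: the classic Fauna GraphQL endpoint and a schema
# that defines a Task type with an auto-generated allTasks query.
FAUNA_GRAPHQL_URL = "https://graphql.fauna.com/graphql"
FAUNA_SECRET = "fnAE...your-key-here"  # hypothetical key, scoped by Fauna's auth layer

query = """
query {
  allTasks {
    data {
      title
      completed
    }
  }
}
"""

response = requests.post(
    FAUNA_GRAPHQL_URL,
    json={"query": query},
    headers={"Authorization": f"Bearer {FAUNA_SECRET}"},
)
response.raise_for_status()
print(response.json()["data"]["allTasks"]["data"])
```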

Pros of Cassandra
  • Distributed (119)
  • High performance (98)
  • High availability (81)
  • Easy scalability (74)
  • Replication (53)
  • Reliable (26)
  • Multi datacenter deployments (26)
  • Schema optional (10)
  • OLTP (9)
  • Open source (8)
  • Workload separation (via MDC) (2)
  • Fast (1)

Pros of Hadoop
  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon aws (1)
  • Java syntax (1)


Cons of Cassandra
  • Reliability of replication (3)
  • Size (1)
  • Updates (1)

Cons of Hadoop
  • None listed yet


What is Cassandra?

Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added to and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.

What is Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
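As a small illustration of those "simple programming models", here is a minimal word-count sketch for Hadoop Streaming in Python; the script name, the map/reduce argument convention, and the paths in the usage note below are assumptions for illustration.

```python
#!/usr/bin/env python3
# wordcount.py -- a minimal Hadoop Streaming word count; the same script acts as
# the mapper or the reducer depending on its first argument.
import sys

def mapper():
    # Emit "word<TAB>1" for every word read from stdin.
    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by word, so the counts for one word are contiguous.
    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t")
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

A job like this would typically be launched with the Hadoop Streaming jar, along the lines of `hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar -files wordcount.py -mapper "python3 wordcount.py map" -reducer "python3 wordcount.py reduce" -input /data/in -output /data/out`; the jar location and HDFS paths are assumptions that vary by installation.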



What are some alternatives to Cassandra and Hadoop?

HBase
Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.

Google Cloud Bigtable
Google Cloud Bigtable offers you a fast, fully managed, massively scalable NoSQL database service that's ideal for web, mobile, and Internet of Things applications requiring terabytes to petabytes of data. Unlike comparable market offerings, Cloud Bigtable doesn't require you to sacrifice speed, scale, or cost efficiency when your applications grow. Cloud Bigtable has been battle-tested at Google for more than 10 years: it's the database driving major applications such as Google Analytics and Gmail.

Redis
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams.

Couchbase
Developed as an alternative to traditionally inflexible SQL databases, the Couchbase NoSQL database is built on an open source foundation and architected to help developers solve real-world problems and meet high scalability demands.

MySQL
The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.