Apache Spark vs Cassandra: What are the differences?

Key Differences between Apache Spark and Cassandra

Apache Spark and Cassandra are two popular technologies used in big data processing and analytics. They serve different purposes, and there are several key differences between them.

1. Data Processing Model: Apache Spark is a distributed computing system that utilizes in-memory processing for faster data processing. It supports batch processing, interactive queries, streaming, and machine learning workloads. On the other hand, Cassandra is a distributed database management system designed for high scalability and fault-tolerance. It provides fast read and write operations for large-scale, structured data sets.

2. Data Storage Model: Spark does not have its own data storage system; it processes data from sources such as the Hadoop Distributed File System (HDFS) or Amazon S3, and it can also read from and write to databases like Cassandra. Cassandra, on the other hand, is a NoSQL database that stores data as rows grouped into partitions by a partition key. It provides a highly distributed and fault-tolerant architecture for storing large volumes of data.

3. Query Language: Spark includes Spark SQL, which provides a SQL-like interface for querying structured data, and it supports programming languages such as Python, Java, and Scala for data processing. Cassandra uses its own query language, CQL (Cassandra Query Language), which resembles SQL but differs in syntax and functionality — for example, it does not support joins or arbitrary WHERE clauses.

4. Data Consistency and Availability: Spark does not provide built-in mechanisms for data consistency and availability. It relies on the underlying storage system, such as HDFS or Cassandra, to ensure data durability and availability. Cassandra, on the other hand, guarantees high availability and fault tolerance by replicating data across multiple nodes in a cluster. It also supports tunable consistency levels to balance consistency and performance based on application requirements.
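
The quorum arithmetic behind Cassandra's tunable consistency can be sketched in a few lines. This is a toy illustration (not the driver API; the function names are mine): with replication factor N, a read of R replicas and a write of W replicas are guaranteed to overlap on at least one up-to-date replica whenever R + W > N.

```python
def quorum(n: int) -> int:
    """Size of a majority quorum for replication factor n."""
    return n // 2 + 1

def is_strongly_consistent(r: int, w: int, n: int) -> bool:
    """True if every read quorum must intersect every write quorum (R + W > N)."""
    return r + w > n

# With N=3, QUORUM reads and QUORUM writes (2 + 2 > 3) give strong consistency;
# consistency level ONE on both sides (1 + 1 = 2) does not, trading
# consistency for lower latency and higher availability.
```

This is exactly the trade-off "tunable consistency" refers to: the application picks R and W per query to balance consistency against performance.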

5. Data Model: Spark operates on a distributed collection of objects called Resilient Distributed Datasets (RDDs), which are fault-tolerant and can be cached in memory for faster processing. It also supports DataFrames and Datasets, which provide a higher-level abstraction for working with structured data. Cassandra, on the other hand, uses a wide-column data model: rows are grouped into partitions, and each row can hold a flexible set of columns. This gives flexibility in schema design and efficient reads and writes for query patterns that are known in advance.
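
The defining property of RDDs is that transformations are recorded lazily and only evaluated when an action runs. The sketch below is a toy stand-in for that idea (it is not Spark's API; `ToyRDD` is invented for illustration):

```python
class ToyRDD:
    """Toy model of an RDD: transformations build a lineage, actions evaluate it."""

    def __init__(self, data, ops=None):
        self.data = data
        self.ops = ops or []  # recorded lineage of transformations, not yet run

    def map(self, fn):
        return ToyRDD(self.data, self.ops + [("map", fn)])

    def filter(self, pred):
        return ToyRDD(self.data, self.ops + [("filter", pred)])

    def collect(self):
        # Action: only here is the recorded lineage actually executed.
        out = list(self.data)
        for kind, fn in self.ops:
            out = [fn(x) for x in out] if kind == "map" else [x for x in out if fn(x)]
        return out

# Nothing is computed until collect() is called:
squares_of_evens = ToyRDD(range(10)).filter(lambda x: x % 2 == 0).map(lambda x: x * x)
```

In real Spark the same shape appears as `rdd.filter(...).map(...).collect()`, with the lineage additionally enabling fault tolerance by recomputation.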

6. Use Cases: Spark is commonly used for various big data processing tasks, such as data transformation, analytics, and machine learning. It is suitable for scenarios that require fast and iterative data processing, real-time analytics, and complex data pipelines. On the other hand, Cassandra is often used for handling large-scale, high-volume data with high write throughput and low latency requirements. It is commonly used in applications that require fast data ingestion, real-time querying, and high availability.

In summary, Apache Spark and Cassandra differ in their data processing and storage models, query languages, data consistency and availability mechanisms, data models, and use cases. They offer unique capabilities and are suited for different types of big data applications and analytical requirements.

Advice on Cassandra and Apache Spark
Nilesh Akhade
Technical Architect at Self Employed · 5 upvotes · 530.1K views

We have a Kafka topic having events of type A and type B. We need to perform an inner join on both type of events using some common field (primary-key). The joined events to be inserted in Elasticsearch.

Usually, type A and type B events with the same key arrive within about 15 minutes of each other. In some cases, though, they may be as far apart as 6 hours, and sometimes an event of one of the types never arrives at all.

In all cases, we should be able to find joined events immediately after they are joined, and un-joined events within 15 minutes.

Replies (2)
Recommends Elasticsearch

The first solution that comes to mind is to use upserts to update Elasticsearch:

  1. Use the primary-key as ES document id
  2. Upsert the records to ES as soon as you receive them. As you are using upsert, the 2nd record of the same primary-key will not overwrite the 1st one, but will be merged with it.

Cons: The load on ES will be higher, due to upsert.
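
The merge behaviour step 2 relies on can be sketched in memory. In elasticsearch-py the real call would be `es.update(...)` with `doc_as_upsert=True`; the index and field names below are made up for illustration:

```python
index = {}  # stands in for the ES index: document id -> merged document

def upsert(doc_id, partial_doc):
    """Merge partial_doc into the stored document instead of overwriting it,
    mirroring Elasticsearch's doc_as_upsert semantics at top-level fields."""
    merged = dict(index.get(doc_id, {}))
    merged.update(partial_doc)
    index[doc_id] = merged

# Type A and type B events for the same primary key, arriving in either order:
upsert("order-42", {"type_a": {"amount": 10}})
upsert("order-42", {"type_b": {"status": "shipped"}})
# index["order-42"] now holds both halves of the joined event.
```

Because the primary key is the document id, the joined document is queryable the instant the second half arrives, which satisfies the "instantly after they are joined" requirement.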

The second solution is to use Flink:

  1. Create a KeyedDataStream by the primary-key
  2. In the ProcessFunction, save the first record in a State. At the same time, create a Timer for 15 minutes in the future
  3. When the 2nd record comes, read the 1st record from the State, merge those two, and send out the result, and clear the State and the Timer if it has not fired
  4. When the Timer fires, read the 1st record from the State and send it out as the output record.
  5. Add a 2nd Timer of 6 hours (or more), if you are not using windowing, to clean up the State.

Pro: this works well if you already have Flink ingesting this stream. Otherwise, I would just go with the 1st solution.
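
The keyed-state logic in steps 2-4 can be simulated in plain Python. A real implementation would use Flink's `KeyedProcessFunction` with `ValueState` and registered timers; the names and the sweep-based "timer" below are illustrative only:

```python
TIMEOUT = 15 * 60  # emit an un-joined event after 15 minutes

state = {}    # key -> (first event, arrival time): the buffered 1st record
output = []   # joined events, or lone events whose timer fired

def on_event(key, event, now):
    if key in state:
        first, _ = state.pop(key)          # 2nd record arrived: merge and emit,
        output.append({**first, **event})  # clearing state (and, in Flink, the timer)
    else:
        state[key] = (event, now)          # 1st record: buffer and start the timer

def on_timer_sweep(now):
    """Stand-in for timer callbacks: flush records older than TIMEOUT."""
    for key in [k for k, (_, t) in state.items() if now - t >= TIMEOUT]:
        event, _ = state.pop(key)
        output.append(event)               # partner never came: emit alone

on_event("k1", {"a": 1}, now=0)
on_event("k1", {"b": 2}, now=60)           # joined well within the window
on_event("k2", {"a": 3}, now=0)
on_timer_sweep(now=TIMEOUT)                # k2 never got its partner
```

Step 5's longer cleanup timer would be a second sweep with a 6-hour threshold, bounding how long orphaned state can live.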

Akshaya Rawat
Senior Specialist Platform at Publicis Sapient · 3 upvotes · 372.5K views
Recommends Apache Spark

Please refer to the "Structured Streaming" feature of Spark, specifically the "Stream-Stream Joins" section at https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html#stream-stream-joins. In short, you need to define watermark delays on both inputs and define a constraint on event time across the two inputs.
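
In Spark the watermark is set with `withWatermark` on each input plus a time-range join condition. The toy model below (not Spark's API; names are mine) shows the idea the watermark encodes: track the maximum event time seen and evict buffered join state older than that maximum minus the delay, which bounds state for the join.

```python
WATERMARK_DELAY = 6 * 3600  # tolerate events up to 6 hours apart, per the question

buffered = {}        # key -> (event, event time): state awaiting its partner
max_event_time = 0   # high-water mark of event time seen so far

def observe(key, event, event_time):
    """Return the joined event if this completes a pair, else None."""
    global max_event_time
    max_event_time = max(max_event_time, event_time)
    joined = None
    if key in buffered:
        other, _ = buffered.pop(key)
        joined = {**other, **event}
    else:
        buffered[key] = (event, event_time)
    # Evict state that can no longer be joined under the watermark.
    watermark = max_event_time - WATERMARK_DELAY
    for k in [k for k, (_, t) in buffered.items() if t < watermark]:
        del buffered[k]
    return joined
```

Spark derives the same eviction rule automatically from the watermark delays and the event-time constraint in the join condition.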

Vinay Mehta
Needs advice on Cassandra and ScyllaDB

The problem I have is: we need to process and change (update/insert) 55M records every 2 minutes, and the updated data must be available to a REST API for filtering/selection. Response time for the REST API should be less than 1 second.

The most important factor for me is the processing and storing time of 2 minutes. There need to be two views of the data: 1. for selection, and 2. the changed data.

Replies (4)
Recommends ScyllaDB

Scylla can handle 1M/s events with a simple data model quite easily. The query API is CQL; there is a REST API, but that is for control/monitoring.

Alex Peake
Recommends Cassandra

Cassandra is quite capable of the task, in a highly available way, given appropriate scaling of the system. Remember that updates are only inserts, and that efficient retrieval is only by key (which can be a complex key). Talking of keys, make sure that the keys are well distributed.
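
The point about well-distributed keys can be made concrete. Cassandra maps each partition key to a node via a hash of the key (Murmur3 in practice; Python's `md5` is used below purely for illustration), so a low-cardinality key piles all writes onto a few nodes:

```python
import hashlib
from collections import Counter

def node_for(key: str, num_nodes: int = 4) -> int:
    """Hash a partition key onto one of num_nodes nodes (illustrative hash)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_nodes

# High-cardinality key (e.g. a user id) vs. a 2-value key (e.g. a country code):
good = Counter(node_for(f"user-{i}") for i in range(1000))
bad = Counter(node_for(f"country-{i % 2}") for i in range(1000))

# `good` spreads 1000 writes across all nodes; `bad` can hit at most 2 of the 4.
```

For the 55M-updates workload above, a skewed key would turn a cluster-wide problem into a hot-spot on one or two nodes.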

Recommends ScyllaDB

By 55M, do you mean 55 million entity changes per 2 minutes? That is relatively high — almost 460k per second. If I had to choose between Scylla and Cassandra, I would opt for Scylla, as it promises better performance for simple operations. However, it may be worth considering other technologies as well. Take into account the required consistency, reliability, and high availability, and you may find there are more suitable ones. The REST API should not be the main driver, because you can always develop the API yourself if a given technology does not provide it.
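
For reference, the arithmetic behind that estimate:

```python
# 55 million changes applied within a 2-minute window, expressed per second.
changes = 55_000_000
window_seconds = 2 * 60
per_second = changes / window_seconds  # roughly 458k sustained writes/s
```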

Pankaj Soni
Chief Technical Officer at Software Joint · 2 upvotes · 152.1K views
Recommends Cassandra

I love Scylla for pet projects; however, its per-server licensing model is an issue, so I recommend Cassandra.

Pros of Cassandra
  • 119 Distributed
  • 98 High performance
  • 81 High availability
  • 74 Easy scalability
  • 53 Replication
  • 26 Reliable
  • 26 Multi datacenter deployments
  • 10 Schema optional
  • 9 OLTP
  • 8 Open source
  • 2 Workload separation (via MDC)
  • 1 Fast

Pros of Apache Spark
  • 61 Open-source
  • 48 Fast and flexible
  • 8 One platform for every big data problem
  • 8 Great for distributed SQL-like applications
  • 6 Easy to install and use
  • 3 Works well for most data science use cases
  • 2 Interactive query
  • 2 Machine learning libraries, streaming in real time
  • 2 In-memory computation


Cons of Cassandra
  • 3 Reliability of replication
  • 1 Size
  • 1 Updates

Cons of Apache Spark
  • 4 Speed


What is Cassandra?

Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added to and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.

What is Apache Spark?

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.


What are some alternatives to Cassandra and Apache Spark?
HBase
Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.
Google Cloud Bigtable
Google Cloud Bigtable offers you a fast, fully managed, massively scalable NoSQL database service that's ideal for web, mobile, and Internet of Things applications requiring terabytes to petabytes of data. Unlike comparable market offerings, Cloud Bigtable doesn't require you to sacrifice speed, scale, or cost efficiency when your applications grow. Cloud Bigtable has been battle-tested at Google for more than 10 years—it's the database driving major applications such as Google Analytics and Gmail.
Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Redis
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes, and streams.
Couchbase
Developed as an alternative to traditionally inflexible SQL databases, the Couchbase NoSQL database is built on an open source foundation and architected to help developers solve real-world problems and meet high scalability demands.