
Apache Spark vs Dremio


Overview

Apache Spark: 3.1K stacks · 3.5K followers · 140 votes · 42.2K GitHub stars · 28.9K forks
Dremio: 116 stacks · 348 followers · 8 votes

Apache Spark vs Dremio: What are the differences?

Introduction

Apache Spark and Dremio are both popular tools used for data processing and analysis. While they share some similarities, there are key differences that set them apart from each other. Here are six important differences between Apache Spark and Dremio:

  1. Architecture: Apache Spark follows a distributed computing architecture, allowing it to process large-scale datasets across a cluster of machines. Dremio, by contrast, follows a distributed SQL architecture designed to accelerate query performance directly against data lake storage.

  2. Data Processing: Spark is a general-purpose data processing engine that supports various workloads, including batch processing, real-time streaming, and machine learning. Dremio, on the other hand, is specifically designed for SQL-based data processing tasks and offers high-speed query execution.

  3. Data Sources: Spark is known for its versatility when it comes to data sources. It supports a wide range of data formats and integrates with various storage systems, such as the Hadoop Distributed File System (HDFS), Apache Cassandra, and Amazon S3. Dremio, in contrast, focuses on optimized, self-service access to data stored in data lakes, including popular file formats such as Parquet, JSON, and CSV.

  4. SQL Optimization: While both Spark and Dremio support SQL queries, Dremio incorporates advanced query optimization techniques to improve query performance. It leverages acceleration techniques such as columnar in-memory caching, indexing, and reflections, which allow for faster query execution. Spark does not provide this kind of built-in query acceleration and relies more on its parallel processing capabilities.

  5. Governance and Security: Dremio places a strong emphasis on data governance and security. It provides fine-grained access control, auditing, and data lineage features to ensure data compliance and security. Spark, on the other hand, does not have built-in governance and security features but can integrate with external tools to meet these requirements.

  6. Data Catalog and Discovery: Dremio includes a built-in data catalog that provides a unified view of data from multiple sources within the data lake. It also offers data discovery capabilities, making it easier to explore and analyze data. In contrast, Spark does not provide native data catalog and data discovery functionality, although it can be integrated with external tools like Apache Hive for similar capabilities.

In summary, Apache Spark and Dremio differ in their architecture, data processing capabilities, supported data sources, SQL optimization techniques, governance and security features, and data catalog and discovery functionality.
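
To make differences 2 and 3 above concrete, here is a minimal PySpark sketch that reads Parquet files from object storage and queries them with SQL. The bucket, path, and column names are hypothetical, and the S3A connector and credentials are assumed to be configured on the cluster.

```python
from pyspark.sql import SparkSession

# Minimal sketch: Spark as a general-purpose engine querying data lake files with SQL.
# Assumes the s3a:// connector (hadoop-aws) and credentials are already configured;
# the bucket, path, and column names below are hypothetical.
spark = SparkSession.builder.appName("spark-vs-dremio-sketch").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/lake/orders/")  # hypothetical path
orders.createOrReplaceTempView("orders")

daily_totals = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")

daily_totals.show()
spark.stop()
```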


Advice on Apache Spark and Dremio

karunakaran

Consultant

Jun 26, 2020

Needs advice

I am trying to build a data lake by pulling data from multiple data sources (custom-built tools, Excel files, CSV files, etc.) and use the data lake to generate dashboards.

My question is which is the best tool to do the following:

  1. Create pipelines to ingest the data from multiple sources into the data lake
  2. Help me in aggregating and filtering data available in the data lake.
  3. Create new reports by combining different data elements from the data lake.

I need to use only open-source tools for this activity.

I appreciate your valuable inputs and suggestions. Thanks in advance.
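
One open-source way to cover all three points is Apache Spark landing data as Parquet in the lake. The sketch below is a minimal illustration with hypothetical paths and column names; Excel sources would additionally need a reader such as the community spark-excel package or a pandas pre-processing step.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal open-source sketch: ingest CSV exports into a Parquet "data lake" folder,
# then aggregate for dashboards. Paths and column names are hypothetical.
spark = SparkSession.builder.appName("lake-ingest-sketch").getOrCreate()

# 1. Ingest: read raw CSVs and land them as partitioned Parquet in the lake.
raw = spark.read.option("header", True).csv("/ingest/sales/*.csv")
raw.write.mode("append").partitionBy("region").parquet("/lake/sales/")

# 2. Aggregate / filter: build a summary table for the dashboard layer.
sales = spark.read.parquet("/lake/sales/")
summary = (sales
           .filter(F.col("status") == "completed")
           .groupBy("region", "month")
           .agg(F.sum("amount").alias("revenue")))

# 3. Report: persist the curated result for a BI tool to pick up.
summary.write.mode("overwrite").parquet("/lake/reports/monthly_revenue/")
spark.stop()
```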

Nilesh

Technical Architect at Self Employed

Jul 8, 2020

Needs advice on Elasticsearch and Kafka

We have a Kafka topic with events of type A and type B. We need to perform an inner join on the two event types using a common field (the primary key). The joined events are to be inserted into Elasticsearch.

In the usual case, type A and type B events with the same key arrive within about 15 minutes of each other. But in some cases they may be much further apart, let's say 6 hours. Sometimes an event of one of the types never arrives.

In all cases, we should be able to find joined events as soon as they are joined, and unjoined events within 15 minutes.
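
Since Spark appears in this comparison, one hedged way to model this is a Structured Streaming stream-stream inner join with watermarks sized around the 15-minute window (the 6-hour stragglers would need a larger watermark or a separate batch reconciliation). The topic name, schema, and field names below are hypothetical, and the console sink stands in for an Elasticsearch sink such as the elasticsearch-hadoop connector.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kafka-join-sketch").getOrCreate()

# Hypothetical payload schema shared by both event types.
schema = StructType([
    StructField("event_type", StringType()),
    StructField("record_id", StringType()),     # the common join key
    StructField("event_time", TimestampType()),
    StructField("payload", StringType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
          .option("subscribe", "events-topic")                  # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Split the single topic into the two event types and watermark each side
# so the join state can eventually be cleaned up.
a = (events.filter(F.col("event_type") == "A")
     .withWatermark("event_time", "20 minutes")
     .select(F.col("record_id").alias("a_id"),
             F.col("event_time").alias("a_time"),
             F.col("payload").alias("a_payload")))

b = (events.filter(F.col("event_type") == "B")
     .withWatermark("event_time", "20 minutes")
     .select(F.col("record_id").alias("b_id"),
             F.col("event_time").alias("b_time"),
             F.col("payload").alias("b_payload")))

# Inner join with an event-time range condition around the 15-minute window.
joined = a.join(
    b,
    F.expr("""
        a_id = b_id AND
        b_time BETWEEN a_time - INTERVAL 15 MINUTES AND a_time + INTERVAL 15 MINUTES
    """),
    "inner")

# Console sink for the sketch; a real pipeline would write to Elasticsearch,
# e.g. via the elasticsearch-hadoop connector.
query = (joined.writeStream
         .outputMode("append")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/kafka-join")  # hypothetical
         .start())
query.awaitTermination()
```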

datocrats-org

Jul 29, 2020

Needs advice on Amazon EC2, Tableau, and PowerBI

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • give users custom permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or PowerBI for reporting and data analysis.
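
Spark is open source and runs on commodity EC2 instances, so one hedged sketch of the extract-and-land step looks like the following: pull a table over JDBC, keep a raw Parquet copy, and write a curated copy for Tableau or PowerBI to query through a SQL layer. The connection details, table, and column names are hypothetical, and the PostgreSQL JDBC driver is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hedged sketch of a database -> data lake ETL step on a single EC2 instance.
# Connection details, table, and column names are hypothetical; the PostgreSQL
# JDBC driver is assumed to be available via --jars or spark.jars.packages.
spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:postgresql://db.internal:5432/sales")  # hypothetical
          .option("dbtable", "public.orders")
          .option("user", "etl_user")
          .option("password", "***")
          .load())

# Keep the raw extract queryable as-is...
orders.write.mode("overwrite").parquet("/lake/raw/orders/")

# ...and a transformed, analysis-ready copy for the BI tools.
curated = (orders
           .withColumn("order_month", F.date_trunc("month", F.col("order_ts")))
           .groupBy("order_month", "region")
           .agg(F.sum("amount").alias("revenue")))
curated.write.mode("overwrite").parquet("/lake/curated/monthly_revenue/")

spark.stop()
```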


Detailed Comparison

Apache Spark

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

Highlights: run programs up to 100x faster than Hadoop MapReduce in memory, or 10x faster on disk; write applications quickly in Java, Scala, or Python; combine SQL, streaming, and complex analytics; runs on Hadoop, Mesos, standalone, or in the cloud, and can access diverse data sources including HDFS, Cassandra, HBase, and S3.

Dremio

Dremio, the data lake engine, operationalizes your data lake storage and speeds up your analytics processes with a high-performance, high-efficiency query engine, while also democratizing data access for data scientists and analysts.

Highlights: democratize all your data; make your data engineers more productive; accelerate your favorite tools; self-service for everybody.
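
On the Dremio side, client access is plain SQL over standard interfaces (ODBC, JDBC, REST, or Arrow Flight). The sketch below uses pyarrow against Dremio's Arrow Flight endpoint, assuming the default port 32010; the host, credentials, and dataset path in the query are hypothetical.

```python
from pyarrow import flight

# Hedged sketch: run a SQL query against Dremio over Arrow Flight.
# Assumes Dremio's Flight endpoint on the default port 32010; the host,
# credentials, and dataset path in the query are hypothetical.
client = flight.FlightClient("grpc+tcp://dremio.internal:32010")

# Basic authentication returns a bearer-token header to attach to later calls.
token = client.authenticate_basic_token(b"analyst", b"secret")
options = flight.FlightCallOptions(headers=[token])

query = 'SELECT region, SUM(amount) FROM lake."sales" GROUP BY region'
info = client.get_flight_info(
    flight.FlightDescriptor.for_command(query), options)

# Fetch the result stream and read it into a pandas DataFrame.
reader = client.do_get(info.endpoints[0].ticket, options)
df = reader.read_pandas()
print(df)
```
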
Pros & Cons

Apache Spark

Pros
  • Open-source (61 votes)
  • Fast and flexible (48 votes)
  • One platform for every big data problem (8 votes)
  • Great for distributed SQL-like applications (8 votes)
  • Easy to install and to use (6 votes)

Cons
  • Speed (4 votes)

Dremio

Pros
  • Nice GUI to enable more people to work with data (3 votes)
  • Connects NoSQL databases with RDBMSs (2 votes)
  • Easier to deploy (2 votes)
  • Free (1 vote)

Cons
  • Works only on Iceberg-structured data (1 vote)

Integrations

Apache Spark: no integrations listed
Dremio: Amazon S3, Python, Tableau, Azure Database for PostgreSQL, Qlik Sense, PowerBI

What are some alternatives to Apache Spark and Dremio?

Google BigQuery

Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.

Amazon Redshift

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

Qubole

Qubole is a cloud-based service that makes big data easy for analysts and data engineers.

Presto

Distributed SQL Query Engine for Big Data

Amazon EMR

It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.

Amazon Athena

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Apache Flink

Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.

lakeFS

It is an open-source data version control system for data lakes. It provides a “Git for data” platform enabling you to implement best practices from software engineering on your data lake, including branching and merging, CI/CD, and production-like dev/test environments.

Druid

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful calculations.

Altiscale

We run Apache Hadoop for you. We not only deploy Hadoop, we monitor, manage, fix, and update it for you. Then we take it a step further: we monitor your jobs, notify you when something's wrong with them, and can help with tuning.
