
Amazon EMR vs Amazon Redshift: What are the differences?

Amazon EMR (Elastic MapReduce) and Amazon Redshift are both services offered by Amazon Web Services (AWS) for big data processing and analysis. While they may serve similar purposes, there are several key differences between the two services that make them suitable for different use cases.

  1. Storage and Processing: Amazon EMR is designed for distributed processing of large datasets using frameworks like Apache Hadoop and Apache Spark. It allows users to run custom applications and process data in parallel across a cluster of EC2 instances. On the other hand, Amazon Redshift is a fully-managed data warehousing service that is optimized for online analytical processing (OLAP). It is ideal for running complex analytic queries on large datasets stored in a columnar format.

  2. Data Structure and Query Performance: Amazon EMR can handle both structured and unstructured data, and provides flexibility in schema design. It excels at handling iterative data analysis and machine learning tasks. In contrast, Amazon Redshift is designed for structured data and supports SQL-based queries. It uses massively parallel processing to deliver fast query performance on large datasets, making it suitable for reporting and business intelligence scenarios.

  3. Data Volume and Scalability: Amazon EMR can handle petabytes of data and scales horizontally by adding or removing compute nodes as needed. It provides an elastic and cost-effective solution for processing large volumes of data. On the other hand, Amazon Redshift is optimized for large-scale datasets and can handle terabytes to petabytes of data. It automatically distributes and parallelizes data across nodes for high performance, making it suitable for data warehousing scenarios.

  4. Data Transfer and Integration: Amazon EMR supports various data sources and integrates well with other AWS services like Amazon S3 and Amazon DynamoDB. It provides tools for data import/export and enables seamless integration with other AWS services. Amazon Redshift also supports data import from various sources and can integrate with different data sources through JDBC/ODBC drivers. Additionally, it provides native integration with AWS Glue for data cataloging and integration.

  5. Cost Structure: Amazon EMR offers a flexible pricing model based on EC2 instance usage, storage, and data transfer. It provides cost optimization options like spot instances for cost-effective processing. Amazon Redshift, on the other hand, has a separate cost structure based on compute node hours and the amount of data stored. It offers options for resize and pause, allowing users to scale up or down based on usage requirements.

  6. Data Availability and Durability: Amazon EMR provides data durability by storing data on Amazon S3, which offers 99.999999999% durability. It also supports data replication for fault tolerance. Amazon Redshift provides high availability and fault tolerance by replicating data within the cluster and across multiple availability zones. It also offers automated backups and snapshots for data recovery.
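To make the processing-model difference concrete, here is a minimal pure-Python sketch of the map/shuffle/reduce pattern that frameworks like Hadoop on EMR parallelize across cluster nodes. The chunking and function names are illustrative, not an EMR or Hadoop API:

```python
from collections import defaultdict

def map_phase(chunk):
    # Map: emit (word, 1) for every word in this chunk of the input.
    return [(word, 1) for word in chunk.split()]

def shuffle(mapped):
    # Shuffle: group all values by key, as the framework does across nodes.
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

# Each chunk stands in for an input split processed on a separate node.
chunks = ["big data big", "data processing"]
mapped = [pair for chunk in chunks for pair in map_phase(chunk)]
counts = reduce_phase(shuffle(mapped))
print(counts)  # {'big': 2, 'data': 2, 'processing': 1}
```

On a real EMR cluster, the map and reduce steps run on different EC2 instances and the shuffle moves data between them; the logic per record is the same.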

In summary, Amazon EMR is designed for distributed processing of large datasets using frameworks like Hadoop and Spark, while Amazon Redshift is a fully-managed data warehousing service optimized for OLAP queries on large structured datasets. EMR provides flexibility, scalability, and cost-effectiveness for big data processing, while Redshift offers fast query performance, integration with various data sources, and high availability for data warehousing.
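The columnar-storage point can be illustrated with a small sketch (plain Python stand-ins, not Redshift internals): an aggregate over one column only needs to read that column in a columnar layout, while a row layout touches every record in full.

```python
# Row-oriented layout: each record stored together (typical OLTP database).
rows = [
    {"order_id": 1, "region": "EU", "amount": 120.0},
    {"order_id": 2, "region": "US", "amount": 75.5},
    {"order_id": 3, "region": "EU", "amount": 30.0},
]

# Column-oriented layout: each column stored contiguously (as in Redshift).
columns = {
    "order_id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [120.0, 75.5, 30.0],
}

# SELECT SUM(amount): the row layout must touch every field of every record,
# while the columnar layout reads only the single column it needs.
total_row_scan = sum(r["amount"] for r in rows)
total_col_scan = sum(columns["amount"])
assert total_row_scan == total_col_scan == 225.5
```

For wide tables with billions of rows, reading one column instead of all of them (plus per-column compression) is where the OLAP speedup comes from.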

Advice on Amazon EMR and Amazon Redshift

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • give users the ability to set custom permissions and use SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use inexpensive Amazon EC2 instances only, on medium-sized data sets (16 GB to 32 GB) feeding into Tableau Server or Power BI for reporting and data analysis purposes.
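As a rough illustration of the extract/transform/load flow described above, here is a sketch using in-memory SQLite stand-ins for the source databases and the warehouse. All table and column names are hypothetical:

```python
import sqlite3

# Hypothetical source and warehouse; in-memory SQLite stands in for the
# on-premises databases and the cloud data warehouse.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (id INTEGER, amount_cents INTEGER)")
source.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 1999), (2, 500)])

warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE fact_orders (id INTEGER, amount_usd REAL)")

# Extract raw rows, transform cents to dollars, load into the warehouse.
raw = source.execute("SELECT id, amount_cents FROM orders").fetchall()
transformed = [(oid, cents / 100.0) for oid, cents in raw]
warehouse.executemany("INSERT INTO fact_orders VALUES (?, ?)", transformed)

total = warehouse.execute("SELECT SUM(amount_usd) FROM fact_orders").fetchone()[0]
print(round(total, 2))  # 24.99
```

The same shape ports between on-premises and cloud targets by swapping the connection objects, which matters for the portability requirement above.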

Replies (3)
John Nguyen
Airflow · AWS Lambda

You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
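A minimal sketch of what such a scheduled Lambda might look like in Python. The event shape and table name are assumptions for illustration; in AWS, the CloudWatch Events rule supplies the event and context:

```python
import json

def handler(event, context):
    # A minimal AWS Lambda handler: a scheduled CloudWatch Events rule
    # would invoke this on a cron-like timer; the ETL step would go here.
    table = event.get("table", "unknown")
    # ... extract/transform/load for `table` would run here ...
    return {"statusCode": 200, "body": json.dumps({"loaded": table})}

# Invoked locally with a fake event for illustration.
result = handler({"table": "orders"}, None)
print(result["statusCode"])  # 200
```

Because the handler is an ordinary function, it can be unit-tested locally like this before being deployed.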

But if you need to orchestrate ETL pipelines, it makes sense to use Apache Airflow. This requires Python knowledge.


Though we have always built something custom, Apache Airflow stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.


You may want to look into a Data Virtualization product called Conduit. It connects to disparate data sources in AWS, on prem, Azure, GCP, and exposes them as a single unified Spark SQL view to PowerBI (direct query) or Tableau. Allows auto query and caching policies to enhance query speeds and experience. Has a GPU query engine and optimized Spark for fallback. Can be deployed on your AWS VM or on prem, scales up and out. Sounds like the ideal solution to your needs.

Decisions about Amazon EMR and Amazon Redshift
Julien Lafont

A cloud data warehouse is the centerpiece of a modern data platform. Choosing the most suitable solution is therefore fundamental.

Our benchmark compared BigQuery and Snowflake. Both solutions match our goals, but they take very different approaches.

BigQuery is notably the only 100% serverless cloud data warehouse, requiring absolutely NO maintenance: no re-clustering, no compression, no index optimization, no storage management, no performance management. Snowflake requires setting up (paid) reclustering processes, managing the performance allocated to each profile, etc. We can also mention Redshift, which we eliminated because that technology requires even more ops work.

BigQuery can therefore be set up with almost zero human-resource cost. Its on-demand pricing is particularly well suited to small workloads: zero cost when the solution is not used, paying only for the queries you run. As usage grows, though, slots (with a monthly or per-minute commitment) drastically reduce the cost. We cut the cost of our nightly batches by a factor of 10 by using flex slots.
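As a back-of-the-envelope illustration of why committed slots can undercut on-demand pricing for heavy batch workloads (the figures below are hypothetical, not current GCP list prices, and the actual ratio depends entirely on the workload):

```python
# Illustrative only: hypothetical prices, not current GCP list prices.
ON_DEMAND_PER_TB = 5.00             # $ per TB scanned, on-demand
FLEX_SLOTS_PER_100_PER_HOUR = 4.00  # $ per 100 flex slots per hour

# A hypothetical nightly batch scanning 8 TB on on-demand pricing...
on_demand_cost = 8 * ON_DEMAND_PER_TB

# ...versus reserving 200 flex slots for the 1-hour batch window.
flex_cost = (200 / 100) * FLEX_SLOTS_PER_100_PER_HOUR * 1

print(on_demand_cost, flex_cost)  # 40.0 8.0
```

The general point: on-demand bills per byte scanned regardless of duration, while flex slots bill per unit of reserved compute time, so short, scan-heavy batches are where commitments pay off.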

Finally, a major advantage of BigQuery is its almost perfect integration with Google Cloud Platform services: Cloud functions, Dataflow, Data Studio, etc.

BigQuery is still evolving very quickly. The next milestone, BigQuery Omni, will allow running queries over data stored in an external cloud platform (Amazon S3, for example). It will be a major breakthrough in the history of cloud data warehouses. Omni will compensate for a weakness of BigQuery: transferring data in near real time from S3 to BQ is not easy today, and is simpler to implement via Snowflake's Snowpipe solution.

We also plan to use the machine learning features built into BigQuery to accelerate our deployment of data-science-based projects, an opportunity only offered by the BigQuery solution.

Pros of Amazon EMR
  • On demand processing power (15)
  • Don't need to maintain Hadoop Cluster yourself (12)
  • Hadoop Tools (7)
  • Backed by Amazon (4)
  • Economic - pay as you go, easy to use CLI and SDKs (3)
  • Don't need a dedicated Ops group (2)
  • Massive data handling (1)
  • Great support (1)

Pros of Amazon Redshift
  • Data Warehousing (41)
  • Backed by Amazon (14)
  • Cheap and reliable (1)
  • Best Cloud DW Performance (1)
  • Fast columnar storage (1)


What is Amazon EMR?

It is used in a variety of applications, including log analysis, data warehousing, machine learning, financial analysis, scientific simulation, and bioinformatics.

What is Amazon Redshift?

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.



What are some alternatives to Amazon EMR and Amazon Redshift?
Amazon EC2
It is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale computing easier for developers.
Apache Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Amazon DynamoDB
With it, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.
Azure HDInsight
It is a cloud-based service from Microsoft for big data analytics that helps organizations process large amounts of streaming or historical data.
Databricks
Databricks Unified Analytics Platform, from the original creators of Apache Spark™, unifies data science and engineering across the Machine Learning lifecycle from data preparation to experimentation and deployment of ML applications.