Need advice about which tool to choose? Ask the StackShare community!


Amazon Redshift vs Amazon S3: What are the differences?

Introduction

Amazon Redshift and Amazon S3 are both popular products offered by Amazon Web Services (AWS). However, they serve different purposes and have distinct features and capabilities. Understanding the key differences between Amazon Redshift and Amazon S3 is essential for determining which service is the best fit for a particular use case.

  1. Data Storage Structure: One of the most significant differences between Amazon Redshift and Amazon S3 is how they store data. Amazon Redshift is a fully managed data warehousing service that uses columnar storage, where data is organized and stored by column rather than by row. Amazon S3, on the other hand, is an object storage service that stores data in a flat structure, treating each object (file) as a separate entity. This difference in storage structure has implications for data access and query performance.
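To make the "flat structure" point concrete: S3 has no real directories. What look like folders are just shared key prefixes that list operations filter on. The sketch below mimics that behaviour in plain Python (the keys are made up for illustration):

```python
# S3 objects live in a flat namespace: each object is identified by its full key.
# "Folders" are just key prefixes that ListObjectsV2 groups with a delimiter.
keys = [
    "logs/2024/01/app.log",
    "logs/2024/02/app.log",
    "exports/users.csv",
]

def list_prefixes(keys, prefix="", delimiter="/"):
    """Mimic S3's prefix/delimiter listing behaviour in plain Python."""
    common = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
    return sorted(common)

print(list_prefixes(keys))                  # ['exports/', 'logs/']
print(list_prefixes(keys, prefix="logs/"))  # ['logs/2024/']
```

The "folders" only exist as the output of this kind of prefix grouping; nothing in the store itself is hierarchical.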

  2. Data Querying and Processing: Another key difference is how data is queried and processed in Amazon Redshift and Amazon S3. Amazon Redshift supports SQL queries and is optimized for quickly analyzing and querying large datasets. It provides a massively parallel processing (MPP) architecture and advanced query optimization features, making it suitable for complex analytical queries. In contrast, Amazon S3 does not offer built-in querying capabilities. To query data stored in Amazon S3, additional tools or services like Amazon Athena or AWS Glue are required.
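The kind of query Redshift is built for is ordinary analytical SQL run directly against warehouse tables; the same query over raw S3 objects would need Athena or a similar engine pointed at them first. As a runnable stand-in, the sketch below uses Python's built-in sqlite3 (table and column names are illustrative, not from the source):

```python
# sqlite3 stands in for a warehouse connection so this runs locally.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("us-east", 120.0), ("us-east", 80.0), ("eu-west", 50.0)])

# A typical aggregation of the kind Redshift's columnar MPP engine
# is optimized for; on bare S3 data you would submit this via Athena instead.
query = """
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    ORDER BY total DESC
"""
for region, total in conn.execute(query):
    print(region, total)
```

The SQL itself is the portable part; what differs between the two services is whether an engine capable of executing it is built in (Redshift) or attached on top (S3 plus Athena).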

  3. Data Scalability and Concurrency: The scalability and concurrency capabilities also differ. Amazon Redshift is designed to handle high-performance analytics workloads, with the ability to scale vertically (by moving to larger node types) and horizontally (by adding nodes to a cluster). It also supports concurrent queries, allowing multiple users to run queries simultaneously, with workload management to limit contention. Amazon S3, on the other hand, is highly scalable and can store virtually unlimited amounts of data, but it has no built-in query engine, so directly querying and processing that data requires other services.

  4. Data Ingestion and Updates: When it comes to data ingestion and updates, Amazon Redshift and Amazon S3 have distinct capabilities. Amazon Redshift is optimized for bulk data loading and updates, making it suitable for scenarios where data is regularly added or modified. It provides mechanisms such as the COPY command and data manipulation language (DML) statements to efficiently load and update data. Amazon S3, by contrast, is designed for storing and retrieving unstructured or semi-structured data; it does not support in-place updates the way traditional databases do, as objects are written and replaced whole.
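The COPY command mentioned above is the standard bulk-load path from S3 into Redshift. The sketch below just assembles such a statement as a string; the bucket, table, and IAM role are placeholders, and in practice you would send the statement over a live connection (e.g. psycopg2 or the Redshift Data API):

```python
def build_copy_statement(table, bucket, prefix, iam_role):
    """Assemble a Redshift COPY statement for gzipped CSV files in S3."""
    return (
        f"COPY {table} "
        f"FROM 's3://{bucket}/{prefix}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS CSV GZIP"
    )

stmt = build_copy_statement(
    table="sales",
    bucket="my-data-bucket",      # hypothetical bucket name
    prefix="exports/sales/",      # hypothetical key prefix
    iam_role="arn:aws:iam::123456789012:role/RedshiftLoad",  # placeholder ARN
)
print(stmt)
```

COPY reads many objects under the prefix in parallel across the cluster's nodes, which is why it is strongly preferred over row-by-row INSERTs for bulk ingestion.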

  5. Data Durability and Resilience: Amazon Redshift and Amazon S3 have different durability and resilience features. Amazon S3 is designed for 99.999999999% (11 nines) object durability, meaning that objects stored in S3 are highly reliable and protected against data loss. It automatically replicates data across multiple devices and facilities. Amazon Redshift also provides data durability through automatic replication, but it uses a different replication mechanism optimized for data warehousing workloads.

  6. Data Pricing Model: The pricing models for Amazon Redshift and Amazon S3 also differ. Amazon Redshift pricing is based on factors like the type and size of nodes, the amount of data stored, and data transfer. It offers different pricing options such as on-demand, reserved instances, and per-second billing. Amazon S3 pricing, on the other hand, is based on factors like the amount of data stored, data transfer, and additional features such as data retrieval options. It also provides pricing tiers based on the storage class used (Standard, Intelligent-Tiering, Glacier, etc.).
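The practical upshot of the two models is that S3 costs scale with stored bytes while Redshift costs scale with provisioned compute hours. A back-of-the-envelope comparison is sketched below; the rates are assumptions for illustration only, so check the current AWS pricing pages before relying on them:

```python
# Assumed example rates -- NOT current AWS prices.
S3_STANDARD_PER_GB_MONTH = 0.023   # assumed $/GB-month for S3 Standard
REDSHIFT_NODE_PER_HOUR = 0.25      # assumed $/hour for a small on-demand node

def s3_monthly_cost(gb):
    """Storage-driven: you pay for bytes stored."""
    return gb * S3_STANDARD_PER_GB_MONTH

def redshift_monthly_cost(nodes, hours_per_month=730):
    """Compute-driven: you pay for node-hours whether or not queries run."""
    return nodes * hours_per_month * REDSHIFT_NODE_PER_HOUR

print(f"S3, 500 GB:        ${s3_monthly_cost(500):.2f}/month")
print(f"Redshift, 2 nodes: ${redshift_monthly_cost(2):.2f}/month")
```

Under these assumed rates, idle data is far cheaper to keep in S3, which is one reason lake-plus-warehouse architectures keep raw data in S3 and load only what needs fast interactive querying into Redshift.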

In summary, Amazon Redshift is a columnar data warehousing service optimized for analytical queries, with superior query performance and scalability, while Amazon S3 is an object storage service suitable for storing and retrieving large volumes of unstructured or semi-structured data, but requires additional tools for querying and processing. Pricing, data storage structure, querying capabilities, data updates, scalability, and durability are the key differences between Amazon Redshift and Amazon S3.

Advice on Amazon Redshift and Amazon S3

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • support custom per-user permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use inexpensive Amazon EC2 instances only, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.

Replies (3)
John Nguyen
Recommends
on
Airflow, AWS Lambda

You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you could use any language and the respective database client.
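The Lambda-plus-schedule approach above boils down to a handler like the following. The database work is stubbed out (any client library could go where extract_and_load() is called), and all names are illustrative:

```python
def extract_and_load():
    """Placeholder for the actual ETL step (query source DB, write to target)."""
    return {"rows_copied": 0}

def handler(event, context):
    # Scheduled CloudWatch Events / EventBridge payloads carry a "time" field
    # with the trigger timestamp.
    result = extract_and_load()
    return {
        "triggered_at": event.get("time"),
        **result,
    }

# Local smoke test with a fake scheduled-event payload:
print(handler({"time": "2024-01-01T00:00:00Z"}, None))
```

In AWS you would configure the schedule (e.g. a cron or rate expression) as the function's trigger; the handler itself stays this simple.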

But if you orchestrate ETLs then it makes sense to use Apache Airflow. This requires Python knowledge.

Recommends
on
Airflow

Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift as such is expensive.

Recommends

You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (direct query) or Tableau. It allows auto-query and caching policies to enhance query speed and experience, has a GPU query engine with optimized Spark for fallback, and can be deployed on your AWS VM or on-prem, scaling up and out. Sounds like the ideal solution to your needs.

Decisions about Amazon Redshift and Amazon S3
Gabriel Pa

We offer our customers HIPAA-compliant storage. After analyzing the market, we decided to go with Google Storage. The Node.js API is OK, though still not ES6, and can be very confusing to use. For each new customer, we created a separate bucket so they have isolated data and don't have to worry about data loss. After 1000+ customers, we started seeing many problems with creating new buckets and with saving or retrieving files. Many false positives: the Promise resolved OK, but in reality the operation had failed.

That's why we switched to S3, which just works.

Pros of Amazon Redshift

  • 41 Data Warehousing
  • 27 Scalable
  • 17 SQL
  • 14 Backed by Amazon
  • 5 Encryption
  • 1 Cheap and reliable
  • 1 Isolation
  • 1 Best Cloud DW Performance
  • 1 Fast columnar storage

Pros of Amazon S3

  • 590 Reliable
  • 492 Scalable
  • 456 Cheap
  • 329 Simple & easy
  • 83 Many SDKs
  • 30 Logical
  • 13 Easy setup
  • 11 REST API
  • 11 1000+ POPs
  • 6 Secure
  • 4 Plug and play
  • 4 Easy
  • 3 Web UI for uploading files
  • 2 Faster on response
  • 2 Flexible
  • 2 GDPR ready
  • 1 Easy to use
  • 1 Pluggable
  • 1 Easy integration with CloudFront


Cons of Amazon Redshift

  (none listed yet)

Cons of Amazon S3

  • 7 Permissions take some time to get right
  • 6 Requires a credit card
  • 6 Takes time/work to organize buckets & folders properly
  • 3 Complex to set up


What is Amazon Redshift?

Amazon Redshift is a fully managed data warehouse service. It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

What is Amazon S3?

Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web.


What are some alternatives to Amazon Redshift and Amazon S3?

Google BigQuery
Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.

Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.

Amazon DynamoDB
With it, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.

Amazon Redshift Spectrum
With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 "data lake" -- without having to load or transform any data.

Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.