Amazon Redshift vs Snowflake: What are the differences?

Introduction

When choosing a data warehouse solution for your business, it's crucial to understand the key differences between Amazon Redshift and Snowflake. Both are popular options that offer unique features and capabilities, making it essential to assess which one aligns best with your specific requirements.

  1. Architecture: Amazon Redshift uses a shared-nothing MPP architecture, where data is distributed across compute nodes and a leader node coordinates query planning and result aggregation. Snowflake, by contrast, uses a multi-cluster, shared-data architecture in which storage and compute are separate, allowing each to scale independently. This can make Snowflake a more cost-effective and elastic choice for organizations with highly variable workloads.

  2. Concurrency: Amazon Redshift limits the number of queries that can run concurrently, based on its workload-management configuration and cluster size. Snowflake, in contrast, scales concurrency by adding compute: multi-cluster virtual warehouses let many queries run simultaneously without competing for the same resources. This can be advantageous for businesses with high concurrency requirements.

  3. Data Sharing: Snowflake provides robust data sharing capabilities that allow organizations to securely share data with external partners or other departments without the need for data movement. In comparison, Amazon Redshift requires data to be replicated or transferred to external users, rendering it less efficient for real-time data sharing scenarios.

  4. Performance: Snowflake's architecture provides automatic query optimization out of the box, through features such as automatic clustering and result caching, while Amazon Redshift users may need to manage sort keys, distribution styles, and vacuuming themselves for optimal performance. Snowflake can therefore perform well with less tuning, especially for teams without deep warehouse-administration expertise.

  5. Cost Model: Snowflake uses consumption-based pricing: users are billed per second for the virtual-warehouse compute they run, plus storage. Amazon Redshift, on the other hand, follows a more traditional pricing model based on the type and number of nodes in the cluster. For organizations that want flexibility and cost transparency, Snowflake's pricing model may be more appealing.

  6. Ecosystem Integrations: While both platforms support various integrations with popular BI tools and data sources, Snowflake has a larger ecosystem of connectors and partnerships with third-party vendors. This can be beneficial for organizations looking to seamlessly integrate Snowflake into their existing data ecosystem or leverage specific tools for analytics and reporting.
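To make the cost-model difference in point 5 concrete, here is a back-of-the-envelope sketch in Python. Every rate below is a hypothetical placeholder chosen for illustration, not a current list price for either product:

```python
# Rough sketch of node-based vs. consumption-based pricing.
# All rates below are illustrative placeholders, NOT current list prices.

REDSHIFT_NODE_HOURLY = 0.25      # hypothetical $ per node-hour (always-on cluster)
SNOWFLAKE_CREDIT = 2.00          # hypothetical $ per credit
SNOWFLAKE_CREDITS_PER_HOUR = 1   # e.g. a small virtual warehouse

def redshift_monthly(nodes: int, hours: int = 730) -> float:
    """Node-based pricing: you pay for the cluster whether or not it's busy."""
    return nodes * hours * REDSHIFT_NODE_HOURLY

def snowflake_monthly(active_hours: float) -> float:
    """Consumption-based pricing: you pay only while a warehouse is running."""
    return active_hours * SNOWFLAKE_CREDITS_PER_HOUR * SNOWFLAKE_CREDIT

# A 2-node cluster running 24/7 vs. a warehouse active 4 hours a day:
print(redshift_monthly(nodes=2))               # 365.0
print(snowflake_monthly(active_hours=4 * 30))  # 240.0
```

The takeaway is the shape of the two models, not the numbers: an idle Redshift cluster still bills, while a suspended Snowflake warehouse does not, so bursty workloads tend to favor the consumption model.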

In summary, when comparing Amazon Redshift and Snowflake, it's essential to consider factors such as architecture, concurrency, data sharing capabilities, performance, cost model, and ecosystem integrations to make an informed decision based on your organization's requirements and priorities.

Advice on Amazon Redshift and Snowflake

We need to perform ETL from several databases into a data warehouse or data lake. We want to

  • keep raw and transformed data available to users to draft their own queries efficiently
  • support custom user permissions and SSO
  • move between open-source on-premises development and cloud-based production environments

We want to use only inexpensive Amazon EC2 instances for medium-sized data sets (16 GB to 32 GB) feeding into Tableau Server or Power BI for reporting and data analysis.

Replies (3)
John Nguyen
Recommends Airflow and AWS Lambda

You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you could use any language and the respective database client.

But if you orchestrate ETLs then it makes sense to use Apache Airflow. This requires Python knowledge.

Recommends Airflow

Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift as such is expensive.
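Conceptually, Airflow models an ETL pipeline as a directed acyclic graph of tasks and runs each task once its upstreams succeed. The core idea (minus Airflow's scheduler, retries, and operators) is just a topological ordering, which can be sketched with the standard library; the three-step pipeline below is a hypothetical example:

```python
from graphlib import TopologicalSorter

# Hypothetical three-step ETL pipeline, expressed as task -> upstream
# dependencies -- the same shape Airflow's `extract >> transform >> load`
# operator chaining produces.
pipeline = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
}

# A scheduler runs tasks in an order where every upstream finishes first.
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'transform', 'load']
```

Airflow adds scheduling, retries, backfills, and monitoring on top of this ordering, which is what makes it worth the Python investment for recurring ETL.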

Recommends

You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on-premises, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (direct query) or Tableau. It supports automatic query and caching policies to improve query speed and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VMs or on-premises, scaling up and out. It sounds like an ideal fit for your needs.

Decisions about Amazon Redshift and Snowflake
Julien Lafont

A cloud data warehouse is the centerpiece of a modern data platform, so choosing the right solution is fundamental.

Our benchmark compared BigQuery and Snowflake. Both solutions match our goals, but they take very different approaches.

BigQuery is notably the only 100% serverless cloud data warehouse, requiring absolutely NO maintenance: no re-clustering, no compression, no index optimization, no storage management, no performance management. Snowflake requires you to set up (paid) reclustering processes, manage the compute allocated to each profile, and so on. Redshift is also worth mentioning; we eliminated it because it requires even more operational work.

BigQuery can therefore be set up with almost zero cost in human resources. Its on-demand pricing is particularly well suited to small workloads: zero cost when the solution is not used, since you pay only for the queries you run. For heavier use, committed slots (monthly or per-minute) drastically reduce the cost; we cut the cost of our nightly batches tenfold by using flex slots.
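To make the on-demand vs. flex-slot trade-off concrete, here is a rough sketch in Python. The dollar rates are illustrative placeholders, not Google's current list prices, and the batch size is a hypothetical example:

```python
# Illustrative comparison of BigQuery on-demand vs. flex-slot pricing for a
# nightly batch. All rates are hypothetical placeholders, NOT list prices.

ON_DEMAND_PER_TB = 5.00             # hypothetical $ per TB scanned
FLEX_SLOTS_PER_100_PER_HOUR = 4.00  # hypothetical $ per 100 slots per hour

def on_demand_cost(tb_scanned: float) -> float:
    """On-demand: billed by data scanned, regardless of how long it takes."""
    return tb_scanned * ON_DEMAND_PER_TB

def flex_slot_cost(slots: int, hours: float) -> float:
    """Flex slots: billed by reserved capacity over time, regardless of bytes."""
    return (slots / 100) * hours * FLEX_SLOTS_PER_100_PER_HOUR

# A nightly batch scanning 10 TB on demand vs. reserving 500 slots for 1 hour:
print(on_demand_cost(10))      # 50.0
print(flex_slot_cost(500, 1))  # 20.0
```

The crossover depends entirely on how much data your queries scan per unit of reserved time, which is why scan-heavy nightly batches are the classic case where slot commitments pay off.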

Finally, a major advantage of BigQuery is its almost perfect integration with Google Cloud Platform services: Cloud functions, Dataflow, Data Studio, etc.

BigQuery is still evolving very quickly. The next milestone, BigQuery Omni, will allow running queries over data stored in an external cloud platform (Amazon S3, for example). It will be a major breakthrough in the history of cloud data warehouses. Omni will compensate for a weakness of BigQuery: transferring data in near real time from S3 to BigQuery is not easy today; it was simpler to implement via Snowflake's Snowpipe.

We also plan to use the machine learning features built into BigQuery to accelerate our deployment of data-science-based projects, an opportunity currently offered only by BigQuery.

Pros of Amazon Redshift

  • 41 Data Warehousing
  • 27 Scalable
  • 17 SQL
  • 14 Backed by Amazon
  • 5 Encryption
  • 1 Cheap and reliable
  • 1 Isolation
  • 1 Best Cloud DW Performance
  • 1 Fast columnar storage

Pros of Snowflake

  • 7 Public and Private Data Sharing
  • 4 Multicloud
  • 4 Good Performance
  • 4 User Friendly
  • 3 Great Documentation
  • 2 Serverless
  • 1 Economical
  • 1 Usage based billing
  • 1 Innovative


What is Amazon Redshift?

It is optimized for data sets ranging from a few hundred gigabytes to a petabyte or more and costs less than $1,000 per terabyte per year, a tenth the cost of most traditional data warehousing solutions.

What is Snowflake?

Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn.




What are some alternatives to Amazon Redshift and Snowflake?
Google BigQuery
Run super-fast, SQL-like queries against terabytes of data in seconds, using the processing power of Google's infrastructure. Load data with ease. Bulk load your data using Google Cloud Storage or stream it in. Easy access. Access BigQuery by using a browser tool, a command-line tool, or by making calls to the BigQuery REST API with client libraries such as Java, PHP or Python.
Amazon Athena
Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
Amazon DynamoDB
With DynamoDB, you can offload the administrative burden of operating and scaling a highly available distributed database cluster, while paying a low price for only what you use.
Amazon Redshift Spectrum
With Redshift Spectrum, you can extend the analytic power of Amazon Redshift beyond data stored on local disks in your data warehouse to query vast amounts of unstructured data in your Amazon S3 “data lake” -- without having to load or transform any data.
Hadoop
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
See all alternatives