# Amazon RDS for PostgreSQL vs Amazon Redshift: What are the differences?
## Introduction
Amazon RDS for PostgreSQL and Amazon Redshift are both popular data storage solutions offered by Amazon Web Services (AWS), each tailored for specific use cases. Here are the key differences between the two services:
1. **Database Type**: Amazon RDS for PostgreSQL is a managed relational database service that runs PostgreSQL, a powerful open-source object-relational database system. Amazon Redshift, on the other hand, is a fully managed data warehouse service that is optimized for online analytical processing (OLAP) workloads and is based on a modified version of PostgreSQL, so both can be reached with standard PostgreSQL client drivers (see the connection sketch after the summary below).
2. **Use Case**: Amazon RDS for PostgreSQL is ideal for OLTP (Online Transaction Processing) applications that require ACID-compliant transactions and full relational database features. In contrast, Amazon Redshift is designed for running complex queries over large datasets for analytics and business intelligence, making it suitable for decision-support workloads.
3. **Scalability**: Amazon RDS for PostgreSQL scales vertically: you resize the instance or its storage, or add read replicas to offload read traffic. Amazon Redshift, on the other hand, is designed for horizontal scalability, letting you add nodes to the cluster to increase storage and compute capacity as needed.
4. **Data Modeling**: In Amazon RDS for PostgreSQL, data modeling follows traditional relational database principles with normalized tables and relationships. Conversely, Amazon Redshift encourages denormalized data models to optimize query performance, allowing greater parallelism when processing large volumes of data.
5. **Concurrency**: Amazon RDS for PostgreSQL supports the high levels of concurrent transactions typical of OLTP workloads. Amazon Redshift, in contrast, is tuned for a smaller number of heavy analytical queries, leveraging massively parallel processing (MPP) to speed up query performance on large datasets.
6. **Cost Structure**: Both services offer a pay-as-you-go pricing model, but Amazon RDS for PostgreSQL is typically more cost-effective for small to medium-sized relational database workloads, whereas Amazon Redshift can be more cost-efficient for extensive data warehousing and analytics requirements thanks to its analytics-optimized architecture.
In summary, Amazon RDS for PostgreSQL is a managed relational database service optimized for OLTP workloads, while Amazon Redshift is a fully managed data warehouse service tailored for analytics and business intelligence tasks.
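Because Redshift is derived from PostgreSQL and speaks the PostgreSQL wire protocol, both services can be queried with standard PostgreSQL client libraries; in practice only the endpoint, the default port (5432 vs. 5439), and the kind of SQL you send differ. Here is a minimal sketch in Python, with purely hypothetical hostnames, credentials, and table names:

```python
# Minimal sketch: querying RDS for PostgreSQL and Redshift with the same driver.
# Hostnames, credentials, and table names are hypothetical placeholders.
import psycopg2

# OLTP: row-level lookups against RDS for PostgreSQL (default port 5432).
rds_conn = psycopg2.connect(
    host="my-app-db.abc123.us-east-1.rds.amazonaws.com",
    port=5432, dbname="appdb", user="app_user", password="...",
)
with rds_conn, rds_conn.cursor() as cur:
    cur.execute("SELECT id, status FROM orders WHERE id = %s", (42,))
    print(cur.fetchone())

# OLAP: an aggregate scan against Redshift (default port 5439).
rs_conn = psycopg2.connect(
    host="my-warehouse.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="bi_user", password="...",
)
with rs_conn, rs_conn.cursor() as cur:
    cur.execute("SELECT order_date, SUM(amount) FROM fact_orders GROUP BY order_date")
    print(cur.fetchall())
```

(AWS also publishes a dedicated `redshift_connector` Python driver; psycopg2 is used here only to underline the shared PostgreSQL heritage.)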
We need to perform ETL from several databases into a data warehouse or data lake. We want to:
- keep raw and transformed data available to users so they can draft their own queries efficiently
- give users custom permissions and SSO
- move between open-source on-premises development and cloud-based production environments
We want to use only inexpensive Amazon EC2 instances, on a medium-sized data set (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.
You could also use AWS Lambda with a CloudWatch Events (EventBridge) schedule if you know when the function should be triggered. The benefit is that you can use any language and the respective database client.
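As a rough illustration of that approach, here is a minimal Lambda handler in Python that pulls yesterday's rows from a source PostgreSQL database and loads them into a staging table in the warehouse; the environment variables, table names, and query are hypothetical placeholders, not a prescription:

```python
# Minimal sketch of a scheduled ETL Lambda, triggered by an EventBridge/CloudWatch cron rule.
# All connection strings and table names are hypothetical placeholders.
import os
import psycopg2

def lambda_handler(event, context):
    # Source: an OLTP PostgreSQL database (e.g. RDS).
    src = psycopg2.connect(os.environ["SOURCE_DSN"])
    # Target: the warehouse (Redshift is PostgreSQL-compatible, so the same driver works).
    dst = psycopg2.connect(os.environ["TARGET_DSN"])
    try:
        with src.cursor() as read_cur, dst.cursor() as write_cur:
            read_cur.execute(
                "SELECT id, amount, created_at FROM orders "
                "WHERE created_at::date = current_date - 1"
            )
            rows = read_cur.fetchall()
            write_cur.executemany(
                "INSERT INTO staging.orders (id, amount, created_at) VALUES (%s, %s, %s)",
                rows,
            )
        dst.commit()
        return {"rows_loaded": len(rows)}
    finally:
        src.close()
        dst.close()
```

Note that psycopg2 has to be bundled with the deployment package or supplied via a Lambda layer, and for larger volumes you would normally stage files in S3 and use Redshift's COPY command rather than row-by-row inserts.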
But if you need to orchestrate several ETL jobs, it makes sense to use Apache Airflow. This requires Python knowledge.
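For orientation, a minimal Airflow 2.x DAG for a nightly extract-and-load task could look like the sketch below; the schedule, connection handling, and task body are illustrative assumptions rather than a recommended pipeline:

```python
# Minimal sketch of a nightly ETL DAG (Airflow 2.x); the task body is a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(**context):
    # Placeholder: pull from the source databases and load into the warehouse,
    # e.g. with psycopg2 or Airflow's Postgres hooks.
    pass

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 2 * * *",  # run every night at 02:00
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```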
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative when it comes to open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.
You may want to look into a data virtualization product called Conduit. It connects to disparate data sources in AWS, on premises, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It supports automatic query and caching policies to improve query speed and user experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VMs or on premises, scaling both up and out. It sounds like the ideal solution to your needs.
Considering moving part of our PostgreSQL database infrastructure to the cloud; however, we're not quite sure between AWS, Heroku, Azure, and Google Cloud. The main reason is to back up and centralize all our data in the cloud. With that in mind, the main elements to consider are:
- Pricing for storage
- Small team
- No need for high throughput
- Support for Docker Swarm and Kubernetes
Good balance between ease of management, pricing, docs, and features.
DigitalOcean's offering is pretty solid. Easy to scale, great UI, automatic daily backups, decent pricing.
A cloud data warehouse is the centerpiece of a modern data platform, so choosing the most suitable solution is fundamental.
Our benchmark covered BigQuery and Snowflake. Both solutions seem to match our goals, but they take very different approaches.
BigQuery is notably the only 100% serverless cloud data warehouse, requiring absolutely NO maintenance: no re-clustering, no compression, no index optimization, no storage management, no performance management. Snowflake requires you to set up (paid) re-clustering processes, manage the compute allocated to each profile, and so on. We can also mention Redshift, which we eliminated because it requires even more operational work.
BigQuery can therefore be run with almost zero human-resource cost. Its on-demand pricing is particularly well suited to small workloads: zero cost when the solution is not used, and you only pay for the queries you run. Beyond a certain volume, though, using slots (with monthly or per-minute commitments) drastically reduces the cost of use; we cut the cost of our nightly batches by a factor of 10 using flex slots.
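To make the on-demand vs. flex-slot trade-off concrete, here is a back-of-the-envelope comparison; every price and workload figure below is an illustrative assumption (check the current GCP price list), the only point being that a short, heavy nightly batch can be far cheaper on reserved slots than on per-TB on-demand billing:

```python
# Back-of-the-envelope cost comparison; every figure is an illustrative assumption.
ON_DEMAND_PER_TB = 5.00         # assumed $ per TB scanned (on-demand pricing)
FLEX_SLOT_PER_SLOT_HOUR = 0.04  # assumed $ per slot per hour (flex slots)

tb_scanned_per_night = 8        # hypothetical nightly batch volume
slots_reserved = 100            # hypothetical flex-slot reservation
batch_hours = 1.0               # hypothetical batch duration

on_demand_cost = tb_scanned_per_night * ON_DEMAND_PER_TB
flex_slot_cost = slots_reserved * FLEX_SLOT_PER_SLOT_HOUR * batch_hours

print(f"on-demand:  ${on_demand_cost:.2f} per night")   # $40.00 with these assumptions
print(f"flex slots: ${flex_slot_cost:.2f} per night")   # $4.00 with these assumptions
```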
Finally, a major advantage of BigQuery is its almost perfect integration with Google Cloud Platform services: Cloud Functions, Dataflow, Data Studio, etc.
BigQuery is still evolving very quickly. The next milestone, BigQuery Omni, will allow running queries over data stored in an external cloud platform (Amazon S3, for example). It will be a major breakthrough in the history of cloud data warehouses. Omni will compensate for a weakness of BigQuery: transferring data in near real time from S3 to BigQuery is not easy today, and is simpler to implement via Snowflake's Snowpipe.
We also plan to use the machine learning features built into BigQuery to accelerate our deployment of data-science-based projects, an opportunity offered only by the BigQuery solution.
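As an illustration of those built-in ML features, BigQuery ML lets you train and apply a model with plain SQL. Below is a minimal sketch using the google-cloud-bigquery Python client, where the dataset, tables, and columns are hypothetical placeholders:

```python
# Minimal BigQuery ML sketch: train a logistic-regression model with plain SQL.
# Dataset, table, and column names are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses the default GCP project and credentials

train_sql = """
CREATE OR REPLACE MODEL `analytics.churn_model`
OPTIONS (model_type = 'logistic_reg', input_label_cols = ['churned']) AS
SELECT churned, tenure_months, monthly_spend, support_tickets
FROM `analytics.customer_features`
"""
client.query(train_sql).result()  # blocks until the training job finishes

predict_sql = """
SELECT customer_id, predicted_churned
FROM ML.PREDICT(MODEL `analytics.churn_model`,
                (SELECT * FROM `analytics.customer_features_today`))
"""
for row in client.query(predict_sql).result():
    print(row.customer_id, row.predicted_churned)
```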
## Pros of Amazon RDS for PostgreSQL
- Easy setup, backup, monitoring (25)
- Geospatial support (13)
- Master-master replication using Multi-AZ instance (2)

## Pros of Amazon Redshift
- Data Warehousing (41)
- Scalable (27)
- SQL (17)
- Backed by Amazon (14)
- Encryption (5)
- Cheap and reliable (1)
- Isolation (1)
- Best Cloud DW Performance (1)
- Fast columnar storage (1)