Amazon Redshift vs Azure HDInsight: What are the differences?
Key Differences between Amazon Redshift and Azure HDInsight
- Architecture: Amazon Redshift is a fully-managed data warehouse service in the cloud with massively parallel processing (MPP) architecture, while Azure HDInsight is a fully-managed cloud-based service that makes it easy to process big data using popular open-source frameworks such as Hadoop, Spark, and Hive.
- Technology Stack: Amazon Redshift uses its own columnar storage format and SQL-based query engine, whereas Azure HDInsight is built on top of the open-source Apache Hadoop ecosystem, integrating technologies like HDFS, YARN, and MapReduce.
- Use Case: Amazon Redshift is best suited for online analytical processing (OLAP) workloads requiring fast and complex queries on structured data, while Azure HDInsight is ideal for big data processing, analytics, and machine learning on unstructured or semi-structured data.
- Pricing Model: Amazon Redshift charges for provisioned cluster capacity (per node-hour), while Azure HDInsight has a more flexible pricing model based on the actual usage of resources, allowing users to pay only for what they consume.
- Integration with Ecosystem: Amazon Redshift seamlessly integrates with other AWS services such as S3, EC2, and IAM for data storage, compute, and security, respectively, while Azure HDInsight integrates with other Azure services like Azure Data Lake Store, Azure Blob Storage, and Azure Active Directory.
- Managed Service: Amazon Redshift is a fully managed service that handles upgrades, backups, and performance tuning automatically, whereas Azure HDInsight is a platform as a service (PaaS) offering where users have more control over the deployment, configuration, and management of the cluster.
In summary, Amazon Redshift and Azure HDInsight differ in architecture, technology stack, use cases, pricing models, ecosystem integration, and level of managed service.
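The columnar-storage point above is worth making concrete. The toy sketch below is not Redshift internals; it only illustrates why a column-oriented layout suits OLAP aggregations: an aggregate over one column reads just that column, not every field of every record.

```python
# Toy data standing in for a fact table.
rows = [
    {"region": "us-east", "sales": 120},
    {"region": "us-west", "sales": 80},
    {"region": "us-east", "sales": 200},
]

# Row-oriented scan: every field of every record is touched.
total_rows = sum(r["sales"] for r in rows)

# Column-oriented layout: each column is stored contiguously, so an
# aggregate reads only the one column it needs.
columns = {
    "region": [r["region"] for r in rows],
    "sales": [r["sales"] for r in rows],
}
total_columnar = sum(columns["sales"])

assert total_rows == total_columnar == 400
```

The same answer comes out either way; the difference in a real MPP warehouse is how much data has to be scanned to get it.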
We need to perform ETL from several databases into a data warehouse or data lake. We want to
- keep raw and transformed data available to users to draft their own queries efficiently
- give users the ability to give custom permissions and SSO
- move between open-source on-premises development and cloud-based production environments
We want to use inexpensive Amazon EC2 instances only, on medium-sized data sets (16 GB to 32 GB), feeding into Tableau Server or Power BI for reporting and data analysis purposes.
You could also use AWS Lambda with a CloudWatch Events schedule if you know when the function should be triggered. The benefit is that you can use any language and its respective database client.
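A minimal sketch of what such a scheduled Lambda handler could look like. The helper names (`extract_since`, `load_to_warehouse`) are hypothetical placeholders; a real handler would call an actual database client (psycopg2, pymysql, etc.) inside them.

```python
# Hedged sketch of an AWS Lambda handler fired by a CloudWatch Events
# (EventBridge) schedule. The scheduled event includes a "time" field,
# which is handy for incremental loads.

def extract_since(ts):
    # Hypothetical stub: a real version would query the source database
    # for rows changed since `ts`.
    return [{"id": 1, "value": 10}, {"id": 2, "value": 20}]

def load_to_warehouse(records):
    # Hypothetical stub: a real version would insert into the warehouse
    # and return the number of rows written.
    return len(records)

def handler(event, context):
    run_time = event.get("time", "unknown")
    records = extract_since(run_time)
    loaded = load_to_warehouse(records)
    return {"run_time": run_time, "rows_loaded": loaded}
```

Invoking `handler({"time": "2024-01-01T00:00:00Z"}, None)` returns a small summary dict, which is a convenient shape for CloudWatch logging.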
But if you are orchestrating multiple ETLs, then it makes sense to use Apache Airflow. This requires Python knowledge.
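As an illustration of the Python involved, here is the kind of extract/transform/load callables you would typically wrap in Airflow tasks (e.g. via `PythonOperator`). Airflow itself is omitted so the sketch runs standalone; `sqlite3` stands in for a real source database, and all table and column names are made up.

```python
import sqlite3

def extract(conn):
    # Pull the raw rows from the (stand-in) source database.
    return conn.execute("SELECT id, amount FROM orders").fetchall()

def transform(rows):
    # Example transform: keep orders over 50 and tag them.
    return [(oid, amount, "large") for oid, amount in rows if amount > 50]

def load(conn, rows):
    # Write the transformed rows to the (stand-in) warehouse table.
    conn.execute("CREATE TABLE IF NOT EXISTS large_orders (id, amount, tag)")
    conn.executemany("INSERT INTO large_orders VALUES (?, ?, ?)", rows)
    return conn.execute("SELECT COUNT(*) FROM large_orders").fetchone()[0]

# Local demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id, amount)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 30), (2, 75), (3, 120)])
loaded = load(conn, transform(extract(conn)))
```

In an Airflow DAG, each of these functions would become a task, with dependencies declared between them and data passed via intermediate storage rather than in-process.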
Though we have always built something custom, Apache Airflow (https://airflow.apache.org/) stood out as a key contender/alternative among open-source options. On the commercial side, Amazon Redshift combined with Amazon Kinesis (for complex manipulations) is great for BI, though Redshift itself is expensive.
You may want to look into a Data Virtualization product called Conduit. It connects to disparate data sources in AWS, on-prem, Azure, and GCP, and exposes them as a single unified Spark SQL view to Power BI (DirectQuery) or Tableau. It allows auto-query and caching policies to improve query speed and experience, has a GPU query engine with optimized Spark as a fallback, and can be deployed on your AWS VM or on-prem, scaling up and out. Sounds like the ideal solution to your needs.
Pros of Amazon Redshift
- Data Warehousing (41)
- Scalable (27)
- SQL (17)
- Backed by Amazon (14)
- Encryption (5)
- Cheap and reliable (1)
- Isolation (1)
- Best Cloud DW Performance (1)
- Fast columnar storage (1)