What is Apache Parquet?
It is a columnar storage format available to any project in the Hadoop ecosystem, regardless of the choice of data processing framework, data model or programming language.
Apache Parquet is a tool in the Databases category of a tech stack.
Apache Parquet is an open source tool with 2K GitHub stars and 1.3K GitHub forks; its open source repository is hosted on GitHub.
Who uses Apache Parquet?
25 companies reportedly use Apache Parquet in their tech stacks, including Walmart, Skyscanner, and platform.
57 developers on StackShare have stated that they use Apache Parquet.
Apache Parquet Integrations
Java, Hadoop, Apache Hive, Apache Impala, and Apache Thrift are some of the 10 popular tools that integrate with Apache Parquet.
Decisions about Apache Parquet
Here are some stack decisions, common use cases and reviews by companies and developers who chose Apache Parquet in their tech stack.
We are currently storing the data in Amazon S3 using Apache Parquet format. We are using Presto to query the data from S3 and catalog it using AWS Glue catalog. We have Metabase sitting on top of Presto, where our reports are present. Currently, Presto is becoming too costly for us, and we are looking for alternatives for it but want to use the remaining setup (S3, Metabase) as much as possible. Please suggest alternative approaches.
Apache Parquet's Features
- Columnar storage format
- Type-specific encoding
- Pig integration
- Cascading integration
- Crunch integration
- Apache Arrow integration
- Apache Scrooge integration
- Adaptive dictionary encoding
- Predicate pushdown
- Column stats
Apache Parquet Alternatives & Comparisons
What are some alternatives to Apache Parquet?
Apache Avro is a row-oriented remote procedure call and data serialization framework developed within Apache's Hadoop project. It uses JSON for defining data types and protocols, and serializes data in a compact binary format.
A new addition to the open source Apache Hadoop ecosystem, Kudu completes Hadoop's storage layer to enable fast analytics on fast data.
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
Apache HBase is an open-source, distributed, versioned, column-oriented store modeled after Google's Bigtable: A Distributed Storage System for Structured Data by Chang et al. Just as Bigtable leverages the distributed data storage provided by the Google File System, HBase provides Bigtable-like capabilities on top of Apache Hadoop.