Pachyderm
Pachyderm is an open source MapReduce engine that uses Docker containers for distributed computations.
Key features: Git-like file system; Dockerized MapReduce; microservice architecture; deployed with CoreOS.

DVC
DVC (Data Version Control) is an open-source version control system for data science and machine learning projects. It is designed to handle large files, data sets, machine learning models, and metrics as well as code.
Key features: Git-compatible; storage-agnostic; reproducible; low-friction branching; metric tracking; ML pipeline framework; language- and framework-agnostic; works with HDFS, Hive & Apache Spark; failure tracking.
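To make the comparison concrete, here is a rough sketch of each tool's workflow: a Pachyderm pipeline is declared as a JSON spec that runs a Docker image over a versioned data repo, while DVC exposes versioned files through its Python API. The repo names, Docker image, file paths, and revision below are hypothetical, and the spec is a minimal illustration rather than a complete configuration.

```python
import json

import dvc.api  # pip install dvc

# --- Pachyderm: a pipeline is a JSON spec that runs a container over a data repo ---
# Hypothetical input repo ("images") and image ("my-org/edge-detect"); minimal sketch only.
pachyderm_pipeline = {
    "pipeline": {"name": "edge-detect"},
    "input": {"pfs": {"repo": "images", "glob": "/*"}},  # process each top-level file
    "transform": {
        "image": "my-org/edge-detect:latest",            # Docker image containing user code
        "cmd": ["python3", "/edge.py"],
    },
}
print(json.dumps(pachyderm_pipeline, indent=2))  # e.g. save and pass to `pachctl create pipeline -f`

# --- DVC: read a specific version of a tracked file straight from a Git repo ---
# Hypothetical repo URL and path; rev can be any Git branch, tag, or commit.
with dvc.api.open(
    "data/train.csv",
    repo="https://github.com/example/project",
    rev="v1.0",
) as f:
    header = f.readline()
```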
Statistics

| Statistic | Pachyderm | DVC |
| GitHub Stars | - | 15.1K |
| GitHub Forks | - | 1.3K |
| Stacks | 24 | 57 |
| Followers | 95 | 91 |
| Votes | 5 | 2 |
Pros & Cons | |
Pros
Cons
| Pros
Cons
|
Integrations

Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
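As a minimal illustration of that version-control workflow, the snippet below uses the GitPython package (an assumption; the plain git CLI works equally well) to initialize a repository, commit a file, and read the history. The path, file, and message are hypothetical.

```python
from pathlib import Path

from git import Repo  # pip install GitPython

repo_dir = Path("/tmp/demo-repo")        # hypothetical location
repo = Repo.init(repo_dir)               # equivalent to `git init`

(repo_dir / "README.md").write_text("hello\n")
repo.index.add(["README.md"])            # `git add README.md`
repo.index.commit("Add README")          # `git commit -m "Add README"`

# Walk the history, newest first.
for commit in repo.iter_commits():
    print(commit.hexsha[:8], commit.summary)
```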

Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
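A short PySpark sketch of the batch side of that description: reading a CSV (the path and column names are hypothetical) and running an aggregate query. The same DataFrame API also underlies Spark's streaming and machine-learning workloads.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("demo").getOrCreate()

# Hypothetical input; Spark reads the same way from HDFS, S3, Hive tables, etc.
events = spark.read.csv("events.csv", header=True, inferSchema=True)

# Batch aggregation, similar in spirit to a MapReduce job but expressed declaratively.
daily = events.groupBy("event_date").agg(F.count("*").alias("events"))
daily.show()

spark.stop()
```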

Mercurial is dedicated to speed and efficiency with a sane user interface. It is written in Python. Mercurial's implementation and data structures are designed to be fast. You can generate diffs between revisions, or jump back in time within seconds.
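The claims about diffing and jumping between revisions map onto two commands, shown here driven from Python via subprocess (the revision numbers are hypothetical); calling `hg` directly from a shell is equivalent.

```python
import subprocess

def hg(*args: str) -> str:
    """Run an hg command in the current repository and return its output."""
    return subprocess.run(["hg", *args], check=True,
                          capture_output=True, text=True).stdout

print(hg("diff", "-r", "10", "-r", "12"))  # diff between two revisions
hg("update", "-r", "10")                   # "jump back in time" to revision 10
```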

Presto is a distributed SQL query engine for big data.

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL. Athena is serverless, so there is no infrastructure to manage, and you pay only for the queries that you run.
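A minimal boto3 sketch of that workflow: submit a SQL query against data in S3 and poll until it finishes. The database, table, and results bucket are hypothetical, and real code would add error handling and result pagination.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical database/table and results bucket.
qid = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS n FROM logs GROUP BY status",
    QueryExecutionContext={"Database": "web"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)["QueryExecutionId"]

# Poll the serverless query, then fetch the rows.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    print(rows)
```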

Subversion exists to be universally recognized and adopted as an open-source, centralized version control system characterized by its reliability as a safe haven for valuable data; the simplicity of its model and usage; and its ability to support the needs of a wide variety of users and projects, from individuals to large-scale enterprise operations.

Apache Flink is an open source system for fast and versatile data analytics in clusters. Flink supports batch and streaming analytics, in one system. Analytical programs can be written in concise and elegant APIs in Java and Scala.
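Although the blurb highlights the Java and Scala APIs, the same batch-plus-streaming model is reachable from Python through PyFlink's Table/SQL API. The sketch below (table names and the generated schema are hypothetical) streams synthetic rows into a printing sink.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# One TableEnvironment handles both streaming and batch programs.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical source: Flink's built-in datagen connector producing random rows.
t_env.execute_sql("""
    CREATE TABLE clicks (
        user_id BIGINT,
        url     STRING
    ) WITH (
        'connector' = 'datagen',
        'rows-per-second' = '5'
    )
""")

# Hypothetical sink that prints results to stdout.
t_env.execute_sql("""
    CREATE TABLE out (url STRING, hits BIGINT) WITH ('connector' = 'print')
""")

# A continuous aggregation over the stream.
t_env.execute_sql(
    "INSERT INTO out SELECT url, COUNT(*) AS hits FROM clicks GROUP BY url"
).wait()
```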

lakeFS is an open-source data version control system for data lakes. It provides a "Git for data" platform enabling you to implement best practices from software engineering on your data lake, including branching and merging, CI/CD, and production-like dev/test environments.
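One way to see the "Git for data" model in practice is through lakeFS's S3-compatible gateway, where the repository acts as the bucket and the branch is the first path element. The endpoint, credentials, repository, and branch below are hypothetical, and lakeFS also ships CLI and Python clients for the branch and merge operations themselves.

```python
import boto3

# Hypothetical lakeFS endpoint and access keys issued by the lakeFS server.
s3 = boto3.client(
    "s3",
    endpoint_url="https://lakefs.example.com",
    aws_access_key_id="AKIA...",
    aws_secret_access_key="...",
)

# Repository "datalake", branch "dev": objects are addressed as <branch>/<path>,
# so the same key can hold different data on different branches.
s3.put_object(Bucket="datalake", Key="dev/tables/users/part-0.parquet", Body=b"...")

# Reading from "main" is unaffected until the dev branch is merged.
listing = s3.list_objects_v2(Bucket="datalake", Prefix="main/tables/users/")
print(listing.get("Contents", []))
```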

Druid is a distributed, column-oriented, real-time analytics data store that is commonly used to power exploratory dashboards in multi-tenant environments. Druid excels as a data warehousing solution for fast aggregate queries on petabyte-sized data sets. Druid supports a variety of flexible filters, exact calculations, approximate algorithms, and other useful computations.
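Druid exposes that query model over a SQL HTTP endpoint; the sketch below posts an aggregate query with the requests library. The broker URL, datasource, and column names are hypothetical.

```python
import requests

# Hypothetical Druid broker address and datasource.
DRUID_SQL = "http://druid-broker:8082/druid/v2/sql/"

query = """
    SELECT channel, COUNT(*) AS edits
    FROM wikipedia
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
    GROUP BY channel
    ORDER BY edits DESC
    LIMIT 10
"""

resp = requests.post(DRUID_SQL, json={"query": query}, timeout=30)
resp.raise_for_status()
for row in resp.json():  # one JSON object per result row by default
    print(row)
```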

Apache Kylin™ is an open source distributed analytics engine designed to provide a SQL interface and multi-dimensional analysis (OLAP) on Hadoop/Spark, supporting extremely large datasets. It was originally contributed by eBay Inc.
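Kylin's SQL interface is also reachable over its REST API. The sketch below is an assumption-heavy illustration (host, credentials, project, and table schema are all hypothetical) of submitting a query with basic authentication.

```python
import requests

# Hypothetical Kylin server, credentials, and project.
KYLIN = "http://kylin.example.com:7070/kylin/api"
AUTH = ("ADMIN", "KYLIN")

payload = {
    "sql": "SELECT part_dt, SUM(price) AS revenue FROM kylin_sales GROUP BY part_dt",
    "project": "learn_kylin",
    "limit": 100,
}

resp = requests.post(f"{KYLIN}/query", json=payload, auth=AUTH, timeout=60)
resp.raise_for_status()
print(resp.json().get("results", []))
```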