Alternatives to Pachyderm

Hadoop, Apache Spark, Airflow, Kafka, and DVC are the most popular alternatives and competitors to Pachyderm.

What is Pachyderm and what are its top alternatives?

Pachyderm is an open source MapReduce engine that uses Docker containers for distributed computations.
Pachyderm is a tool in the Big Data Tools category of a tech stack.
Pachyderm is an open-source tool; its repository is available on GitHub at https://github.com/pachyderm/pachyderm.
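
Pachyderm pipelines are declared as JSON specs that name a Docker image, the command to run inside it, and the input repo whose files it processes. A minimal sketch of such a spec built from Python; the repo name, image, and script here are hypothetical:

```python
import json

# Minimal Pachyderm pipeline spec: run a containerized script over every
# file in a (hypothetical) "images" input repo. It would be submitted
# with `pachctl create pipeline -f edges.json`.
spec = {
    "pipeline": {"name": "edges"},
    "transform": {
        "image": "example/edge-detector:1.0",  # hypothetical image
        "cmd": ["python3", "/edges.py"],       # hypothetical entrypoint
    },
    "input": {
        "pfs": {"repo": "images", "glob": "/*"}  # one datum per input file
    },
}

with open("edges.json", "w") as f:
    json.dump(spec, f, indent=2)
```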

Top Alternatives to Pachyderm

  • Hadoop

    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

  • Apache Spark

    Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.

  • Airflow

    Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command-line utilities make performing complex surgery on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

  • DVC

    It is an open-source Version Control System for data science and machine learning projects. It is designed to handle large files, data sets, machine learning models, and metrics as well as code.

  • Argo

    Argo is an open source container-native workflow engine for getting work done on Kubernetes. Argo is implemented as a Kubernetes CRD (Custom Resource Definition).

  • Kubeflow

    The Kubeflow project is dedicated to making machine learning on Kubernetes easy, portable, and scalable by providing a straightforward way to spin up best-of-breed OSS solutions.

  • MLflow

    MLflow is an open source platform for managing the end-to-end machine learning lifecycle.

Pachyderm alternatives & related posts

Hadoop

Open-source software for reliable, scalable, distributed computing

PROS OF HADOOP
  • Great ecosystem (38)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon AWS (1)
  • Java syntax (1)
CONS OF HADOOP
  • No cons listed yet
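
The "simple programming models" in Hadoop's blurb means MapReduce, and Hadoop Streaming lets any executable act as mapper or reducer. A minimal word-count sketch in Python (file names and the streaming-jar path vary by installation):

```python
#!/usr/bin/env python3
# mapper.py: emit "<word>\t1" for every word on stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py: sum counts per word. Hadoop sorts mapper output by key,
# so all lines for a given word arrive consecutively.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

Both scripts would be handed to the streaming jar with something like `hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py -input /in -output /out` (paths illustrative).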

related Hadoop posts

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.1M views

Why we built Marmaray, an open source generic data ingestion and dispersal framework and library for Apache Hadoop:

Built and designed by our Hadoop Platform team, Marmaray is a plug-in-based framework built on top of the Hadoop ecosystem. Users can add support to ingest data from any source and disperse it to any sink, leveraging Apache Spark. The name, Marmaray, comes from a tunnel in Turkey connecting Europe and Asia. Similarly, we envisioned Marmaray within Uber as a pipeline connecting data from any source to any sink depending on customer preference:

https://eng.uber.com/marmaray-hadoop-ingestion-open-source/

(Direct GitHub repo: https://github.com/uber/marmaray)

Shared insights on Kafka and Hadoop at Pinterest

The early data ingestion pipeline at Pinterest used Kafka as the central message transporter, with the app servers writing messages directly to Kafka, which then uploaded log files to S3.

For databases, a custom Hadoop streamer pulled database data and wrote it to S3.

Challenges cited for this infrastructure included high operational overhead, as well as potential data loss occurring when Kafka broker outages led to an overflow of in-memory message buffering.

Apache Spark

Fast and general engine for large-scale data processing

PROS OF APACHE SPARK
  • Open-source (58)
  • Fast and flexible (48)
  • One platform for every big data problem (7)
  • Easy to install and to use (6)
  • Great for distributed SQL-like applications (6)
  • Works well for most data science use cases (3)
  • Machine learning libraries, streaming in real time (2)
  • In-memory computation (2)
  • Interactive query (0)
CONS OF APACHE SPARK
  • Speed (3)
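
To make the "fast and general engine" claim concrete, here is a minimal PySpark batch job; the S3 path and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# One SparkSession drives batch, SQL, and (with extra deps) streaming/ML.
spark = SparkSession.builder.appName("events-rollup").getOrCreate()

# Hypothetical input: an events table with `user_id` and `amount` columns.
df = spark.read.csv("s3://bucket/events.csv", header=True, inferSchema=True)

# Batch aggregation: total spend per user, highest first.
totals = (df.groupBy("user_id")
            .agg(F.sum("amount").alias("total"))
            .orderBy(F.desc("total")))
totals.show(10)  # top 10 users by spend

spark.stop()
```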

related Apache Spark posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.1M views

The algorithms and data infrastructure at Stitch Fix is housed in #AWS. Data acquisition is split between events flowing through Kafka, and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on YARN is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling YARN clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

#DataScience #DataStack #Data

Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 7 upvotes · 1.1M views

(Same Marmaray post as under Hadoop above.)

Airflow

A platform to programmatically author, schedule and monitor data pipelines, by Airbnb

PROS OF AIRFLOW
  • Features (45)
  • Task dependency management (14)
  • Beautiful UI (12)
  • Cluster of workers (11)
  • Extensibility (10)
  • Open source (5)
  • Python (4)
  • Complex workflows (4)
  • K (3)
  • Custom operators (2)
  • Apache project (2)
  • Dashboard (2)
  • Good API (2)
CONS OF AIRFLOW
  • Running it on a Kubernetes cluster is relatively complex (2)
  • Open source: provides minimal or no support (2)
  • Logical separation of DAGs is not straightforward (1)
  • Observability is not great when the DAGs exceed 250 (1)
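
For a feel of the authoring model, a minimal Airflow DAG with two tasks and one dependency; the commands are placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Two bash tasks with an explicit dependency; the scheduler runs
# `extract` before `load` on every daily run.
with DAG(
    dag_id="example_etl",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    load = BashOperator(task_id="load", bash_command="echo load")

    extract >> load  # ">>" declares the edge in the DAG
```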

related Airflow posts

Shared insights on Jenkins and Airflow

I am looking for an open-source scheduler tool with cross-functional application dependencies. Some of the tasks I am looking to schedule are as follows:

1. Trigger Matillion ETL loads
2. Trigger Attunity Replication tasks that have downstream ETL loads
3. Trigger GoldenGate Replication tasks
4. Shell scripts, wrappers, file watchers
5. Event-driven schedules

I have used Airflow in the past, and I know we need to create DAGs for each pipeline. I am not familiar with Jenkins, but I know it works with configuration without much underlying code. I want to evaluate both and would appreciate any advice.

Shared insights on AWS Step Functions and Airflow

I am working on a project that grabs a set of input data from AWS S3, pre-processes and divvies it up, spins up 10K batch containers to process the divvied data in parallel on AWS Batch, post-aggregates the data, and pushes it to S3.

I already have software patterns from other projects for Airflow + Batch but have not dealt with the scaling factors of 10k parallel tasks. Airflow is nice since I can look at which tasks failed and retry a task after debugging. But dealing with that many tasks on one Airflow EC2 instance seems like a barrier. Another option would be to have one task that kicks off the 10k containers and monitors it from there.

I have no experience with AWS Step Functions but have heard it's AWS's Airflow. There looks to be plenty of patterns online for Step Functions + Batch. Do Step Functions seem like a good path to check out for my use case? Do you get the same insights on failing jobs / ability to retry tasks as you do with Airflow?

Kafka

Distributed, fault tolerant, high throughput pub-sub messaging system

PROS OF KAFKA
  • High-throughput (122)
  • Distributed (116)
  • Scalable (87)
  • High-performance (81)
  • Durable (65)
  • Publish-subscribe (36)
  • Simple to use (19)
  • Open source (15)
  • Written in Scala and Java; runs on the JVM (10)
  • Message broker + streaming system (6)
  • Avro schema integration (4)
  • Supports multiple clients (2)
  • Robust (2)
  • KSQL (2)
  • Partitioned, replayable log (2)
  • Fun (1)
  • Extremely good parallelism constructs (1)
  • Simple publisher / multi-subscriber model (1)
  • Flexible (1)
CONS OF KAFKA
  • Non-Java clients are second-class citizens (27)
  • Needs ZooKeeper (26)
  • Operational difficulties (7)
  • Terrible packaging (2)
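
A minimal produce/consume round trip against that commit log, sketched with the kafka-python client; the broker address and topic are assumptions:

```python
from kafka import KafkaConsumer, KafkaProducer

# Assumes a broker on localhost:9092 and an "events" topic.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("events", key=b"user-1", value=b'{"action": "click"}')
producer.flush()  # block until the broker acknowledges the send

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",  # replay the partitioned log from the start
    consumer_timeout_ms=5000,      # stop iterating after 5s of silence
)
for record in consumer:
    print(record.partition, record.offset, record.value)
```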

related Kafka posts

Eric Colson
Chief Algorithms Officer at Stitch Fix · 21 upvotes · 2.1M views

(Same Stitch Fix data-platform post as under Apache Spark above.)

John Kodumal

As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data; this is made highly available with Patroni and Consul.

We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.

DVC

Open-source Version Control System for Machine Learning Projects

PROS OF DVC
  • Full reproducibility (1)
CONS OF DVC
  • No cons listed yet
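
Beyond the CLI, DVC exposes a small Python API for reading a tracked file at a specific Git revision. A sketch; the repo URL, path, and tag are hypothetical:

```python
import dvc.api

# Read a DVC-tracked file as it existed at a given Git revision.
# Repo URL, file path, and tag are hypothetical.
data = dvc.api.read(
    "data/train.csv",
    repo="https://github.com/example/project",
    rev="v1.0",
)
print(len(data), "characters in the v1.0 training set")
```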

related DVC posts

Shared insights on MLflow and DVC

I already use DVC to track and store my datasets in my machine learning pipeline. I have also started to use MLflow to keep track of my experiments. However, I still don't know whether to use DVC for my model files or to use the MLflow artifact store for this purpose. Or maybe these two serve different purposes, and it may be good to do both! Can anyone help, please?

Argo

Container-native workflows for Kubernetes

PROS OF ARGO
  • Online service, no need to install anything (1)
  • Auto-synchronizes the changes to deploy (1)
  • Open source (1)
CONS OF ARGO
  • No cons listed yet
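
Because an Argo workflow is just a Kubernetes custom resource, it can be sketched as a manifest. A minimal single-step example built from Python (names are illustrative), which could be submitted with the argo CLI or kubectl:

```python
import json

# Minimal Argo Workflow CRD: one container step.
workflow = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Workflow",
    "metadata": {"generateName": "hello-"},
    "spec": {
        "entrypoint": "hello",
        "templates": [
            {
                "name": "hello",
                "container": {
                    "image": "alpine:3.18",
                    "command": ["echo", "hello from Argo"],
                },
            }
        ],
    },
}

with open("hello-workflow.json", "w") as f:
    json.dump(workflow, f, indent=2)
```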

related Argo posts

Kubeflow

Machine Learning Toolkit for Kubernetes

PROS OF KUBEFLOW
  • System designer (8)
  • Customisation (3)
  • KFP DSL (3)
  • Google-backed (2)
CONS OF KUBEFLOW
  • No cons listed yet
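
The "KFP DSL" pro above refers to the Kubeflow Pipelines SDK, where a pipeline is a decorated Python function compiled to a workflow spec. A minimal sketch in the v2-style kfp SDK (the arithmetic components are illustrative):

```python
from kfp import compiler, dsl

# Each component runs as its own container step on the cluster.
@dsl.component
def add(a: float, b: float) -> float:
    return a + b

@dsl.pipeline(name="add-pipeline")
def add_pipeline(x: float = 1.0, y: float = 2.0):
    first = add(a=x, b=y)
    add(a=first.output, b=3.0)  # depends on `first` via its output

# Compile to a workflow spec the Kubeflow Pipelines backend can run.
compiler.Compiler().compile(add_pipeline, package_path="add_pipeline.yaml")
```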

related Kubeflow posts

Biswajit Pathak
Project Manager at Sony · 5 upvotes · 43.1K views

Can you please advise which one to choose, FastText or Gensim, in terms of:

1. Operability with MLOps tools such as MLflow, Kubeflow, etc.
2. Performance
3. Customization of intermediate steps
4. Whether FastText and Gensim share the same underlying libraries
5. Use cases each one tries to solve
6. Unsupervised vs. supervised dimensions
7. Ease of use

Please mention any other points that I may have missed here.

Amazon SageMaker constrains you to its own MXNet package and does not offer a strong Kubernetes backbone. At the same time, Kubeflow is still quite buggy and cumbersome to use. Which tool is a better pick for MLOps pipelines (both from the perspective of scalability and depth)?

MLflow

An open source machine learning platform

PROS OF MLFLOW
  • Simplified logging (4)
  • Code first (4)
CONS OF MLFLOW
  • No cons listed yet
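
MLflow's tracking API in a nutshell: open a run, then log parameters, metrics, and artifacts against it. A minimal sketch (the values are made up):

```python
import mlflow

# One tracked run: parameters in, metrics and artifacts out.
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("alpha", 0.5)
    mlflow.log_metric("rmse", 0.87)          # made-up value
    mlflow.log_metric("rmse", 0.79, step=1)  # metrics can form a series
    with open("notes.txt", "w") as f:
        f.write("baseline linear model")
    mlflow.log_artifact("notes.txt")         # any file can be attached
```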

related MLflow posts

Shared insights on MLflow and DVC

(Same DVC vs. MLflow question as under DVC above.)

Biswajit Pathak
Project Manager at Sony · 5 upvotes · 43.1K views

(Same FastText vs. Gensim question as under Kubeflow above.)