Airflow vs Kubeflow: What are the differences?
Airflow: A platform to programmatically author, schedule, and monitor data pipelines, by Airbnb. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.
Kubeflow: Machine Learning Toolkit for Kubernetes. The Kubeflow project is dedicated to making machine learning on Kubernetes easy, portable, and scalable by providing a straightforward way to spin up best-of-breed OSS solutions.
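To make "workflows as DAGs of tasks" concrete, here is a minimal sketch of an Airflow DAG (assuming Airflow 2.x; the DAG ID, task names, and callables are hypothetical examples, not anything from this comparison):

```python
# A minimal, illustrative Airflow DAG: two Python tasks with a dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("fetching data")


def load():
    print("loading data")


with DAG(
    dag_id="example_pipeline",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # The >> operator declares the edge: load runs only after extract succeeds.
    extract_task >> load_task
```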
Airflow and Kubeflow are primarily classified as "Workflow Manager" and "Machine Learning" tools, respectively.
Airflow and Kubeflow are both open source tools. Airflow, with 13.3K GitHub stars and 4.91K forks, appears to have more adoption than Kubeflow, with 7.23K GitHub stars and 1.08K forks.
Airbnb, Slack, and 9GAG are some of the popular companies that use Airflow, whereas Kubeflow is used by Eliiza, Hepsiburada, and Big Insight. Airflow has broader approval, being mentioned in 98 company stacks and 162 developer stacks, compared to Kubeflow, which is listed in 3 company stacks and 8 developer stacks.
I am so confused. I need a tool that will allow me to go to about 10 different URLs to get a list of objects. Those object lists will be hundreds or thousands in length. I then need to get detailed data lists about each object. Those detailed data lists can have hundreds of elements that could be map/reduced somehow. My batch process sometimes dies halfway through, which means hours of processing are gone, i.e. time wasted. I need something like a directed graph that will keep the results of successful data collection and allow me, either programmatically or manually, to retry the failed ones any number of times (0 to forever). I then want it to process everything that has succeeded or been effectively ignored, and load the data store with the aggregation of some couple thousand data points. I know hitting this many endpoints is not good practice, but I can't put collectors on all the endpoints or anything like that; it is pretty much the only way to get the data.
For a non-streaming approach:
You could consider using more checkpoints throughout your Spark jobs. Furthermore, you could separate your workload into multiple jobs with an intermediate data store (Cassandra, for example, or whatever fits your needs and availability) to persist results, then perform the aggregations and store the results of those.
- Spark Job 1: fetch data from the 10 URLs and store the data and metadata in a data store (Cassandra)
- Spark Jobs 2..n: check the data store for unprocessed items and continue the aggregation
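A rough sketch of what one of the follow-up jobs could look like in PySpark, assuming the spark-cassandra-connector is on the classpath; the keyspace, table, column names, and the stand-in aggregation are all hypothetical placeholders:

```python
# Sketch of the two-job pattern with Cassandra as the intermediate store.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("job2-aggregate")
    .config("spark.cassandra.connection.host", "cassandra-host")
    .getOrCreate()
)

# Job 1 would have written one row per fetched object, tagged with a status.
raw = (
    spark.read.format("org.apache.spark.sql.cassandra")
    .options(table="fetched_objects", keyspace="pipeline")
    .load()
)

# Jobs 2..n pick up only rows that have not been aggregated yet, so a crash
# halfway through never throws away completed work.
pending = raw.filter(raw.status == "unprocessed")

aggregated = pending.groupBy("object_id").count()  # stand-in aggregation

(
    aggregated.withColumnRenamed("count", "datapoints")
    .write.format("org.apache.spark.sql.cassandra")
    .options(table="aggregates", keyspace="pipeline")
    .mode("append")
    .save()
)
```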
Alternatively, for a streaming approach: treating your data as a stream might also be useful. Spark Streaming lets you set a checkpoint interval - https://spark.apache.org/docs/latest/streaming-programming-guide.html#checkpointing
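A minimal sketch of the checkpoint-recovery pattern from that guide, using the PySpark DStream API; the checkpoint directory, source host/port, and the processing step are placeholders:

```python
# On restart, the driver rebuilds the StreamingContext from the checkpoint
# directory instead of recomputing everything from scratch.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

CHECKPOINT_DIR = "hdfs:///checkpoints/fetcher"  # any fault-tolerant storage


def create_context():
    sc = SparkContext(appName="streaming-fetcher")
    ssc = StreamingContext(sc, batchDuration=60)  # 60-second micro-batches
    ssc.checkpoint(CHECKPOINT_DIR)  # enable metadata + data checkpointing

    lines = ssc.socketTextStream("ingest-host", 9999)  # placeholder source
    lines.count().pprint()  # stand-in processing
    return ssc


# Recover from the checkpoint if one exists; otherwise start fresh.
ssc = StreamingContext.getOrCreate(CHECKPOINT_DIR, create_context)
ssc.start()
ssc.awaitTermination()
```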
Pros of Airflow
- Features (50)
- Task Dependency Management (14)
- Beautiful UI (12)
- Cluster of workers (12)
- Extensibility (10)
- Open source (6)
- Complex workflows (5)
- Python (5)
- Good API (3)
- Apache project (3)
- Custom operators (3)
- Dashboard (2)
Pros of Kubeflow
- System designer (9)
- Google backed (3)
- Customisation (3)
- KFP DSL (3)
Cons of Airflow
- Observability is not great when the DAGs exceed 250 (2)
- Running it on a Kubernetes cluster is relatively complex (2)
- Open source - provides minimal or no support (2)
- Logical separation of DAGs is not straightforward (1)