Amazon SWF vs AWS Data Pipeline: What are the differences?
Amazon SWF: Automate the coordination, auditing, and scaling of applications across multiple machines. Amazon Simple Workflow (Amazon SWF) lets you structure the various processing steps of an application that runs across one or more machines as a set of "tasks." Amazon SWF manages dependencies between the tasks, schedules them for execution, and runs any logic that needs to execute in parallel. The service also stores the tasks, reliably dispatches them to application components, tracks their progress, and keeps their latest state.

AWS Data Pipeline: Process and move data between different AWS compute and storage services. AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the "data sources" that contain your data, the "activities" or business logic such as EMR jobs or SQL queries, and the "schedule" on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)–based analysis on that hour's Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email.
Amazon SWF belongs to the "Cloud Task Management" category of the tech stack, while AWS Data Pipeline falls primarily under "Data Transfer".
Some of the features offered by Amazon SWF are:
- Maintaining application state
- Tracking workflow executions and logging their progress
- Holding and dispatching tasks
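The coordination SWF handles for you — tracking dependencies between tasks and dispatching each task only once its prerequisites have finished — can be illustrated with a minimal, self-contained sketch. This is plain Python (Kahn's topological-sort algorithm), not the SWF API; in real SWF the equivalent logic lives in your decider and activity workers:

```python
from collections import deque

def run_workflow(tasks, deps):
    """Run tasks in dependency order (a toy stand-in for SWF's decider).

    tasks: dict mapping task name -> callable
    deps:  dict mapping task name -> list of prerequisite task names
    """
    # Count unmet prerequisites and build a reverse-dependency map.
    indegree = {name: len(deps.get(name, [])) for name in tasks}
    dependents = {name: [] for name in tasks}
    for name, prereqs in deps.items():
        for prereq in prereqs:
            dependents[prereq].append(name)

    # Tasks with no prerequisites are immediately dispatchable.
    ready = deque(name for name, count in indegree.items() if count == 0)
    order = []
    while ready:
        name = ready.popleft()
        tasks[name]()          # dispatch the task to its "worker"
        order.append(name)     # track progress / latest completed state
        for dependent in dependents[name]:
            indegree[dependent] -= 1
            if indegree[dependent] == 0:
                ready.append(dependent)

    if len(order) != len(tasks):
        raise RuntimeError("cycle detected: workflow cannot complete")
    return order
```

For example, an extract-transform-load chain runs in prerequisite order regardless of how the tasks were declared:

```python
steps = {name: (lambda n=name: print(n)) for name in ("load", "extract", "transform")}
run_workflow(steps, {"transform": ["extract"], "load": ["transform"]})
# → ["extract", "transform", "load"]
```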
On the other hand, AWS Data Pipeline provides the following key features:
- You can find (and use) a variety of popular AWS Data Pipeline tasks in the AWS Management Console’s template section.
- Hourly analysis of Amazon S3‐based log data
- Daily replication of Amazon DynamoDB data to Amazon S3
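As a rough illustration of the definition behind the hourly-EMR example above, here is a sketch of how pipeline objects could be assembled for boto3's `datapipeline` `put_pipeline_definition` call. Identifiers like `HourlySchedule` and `LogAnalysis` are illustrative, and the field encoding is simplified (in the real API, references to other pipeline objects use `refValue` rather than `stringValue`):

```python
def to_pipeline_object(obj_id, name, fields):
    """Convert a plain dict of fields into the key/stringValue pairs
    put_pipeline_definition expects for each pipeline object."""
    return {
        "id": obj_id,
        "name": name,
        "fields": [{"key": k, "stringValue": v} for k, v in fields.items()],
    }

pipeline_objects = [
    # Default object: shared configuration for the whole pipeline.
    to_pipeline_object("Default", "Default", {"scheduleType": "cron"}),
    # A schedule that fires every hour.
    to_pipeline_object("HourlySchedule", "HourlySchedule", {
        "type": "Schedule",
        "period": "1 hour",
        "startDateTime": "2024-01-01T00:00:00",
    }),
    # An EMR activity tied to that schedule (reference simplified here).
    to_pipeline_object("LogAnalysis", "LogAnalysis", {
        "type": "EmrActivity",
        "schedule": "HourlySchedule",
    }),
]

# An actual deployment (requires AWS credentials) would then call:
# boto3.client("datapipeline").put_pipeline_definition(
#     pipelineId="df-example", pipelineObjects=pipeline_objects)
```

The Console's template section mentioned above generates definitions of this same shape, so templates are usually a quicker starting point than hand-writing objects.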