AWS Data Pipeline
AWS Data Pipeline is a web service that provides a simple management system for data-driven workflows. Using AWS Data Pipeline, you define a pipeline composed of the “data sources” that contain your data, the “activities” or business logic such as EMR jobs or SQL queries, and the “schedule” on which your business logic executes. For example, you could define a job that, every hour, runs an Amazon Elastic MapReduce (Amazon EMR)-based analysis on that hour’s Amazon Simple Storage Service (Amazon S3) log data, loads the results into a relational database for future lookup, and then automatically sends you a daily summary email. (A minimal boto3 sketch of this model follows the feature lists below.)

FlyData for Amazon Redshift
FlyData for Amazon Redshift allows you to transfer your data easily and securely to Amazon Redshift. Getting your data onto Amazon Redshift and keeping it up-to-date can be a real hassle. With FlyData for Amazon Redshift, you can automatically upload and migrate your data to Amazon Redshift after only a few simple steps.
AWS Data Pipeline features
- You can find (and use) a variety of popular AWS Data Pipeline tasks in the AWS Management Console’s template section.
- Hourly analysis of Amazon S3-based log data
- Daily replication of Amazon DynamoDB data to Amazon S3
- Periodic replication of on-premises JDBC database tables into RDS

FlyData for Amazon Redshift features
- Supports four data formats for uploading data to Amazon Redshift: JSON, CSV, TSV, and Apache logs.
- FlyData sends your data to Amazon Redshift every 5 minutes. This process is automated, so once the setup is complete, your data on Amazon Redshift stays up-to-date.
- FlyData for Heroku backs up all of your logs to your Amazon S3 bucket just by adding the FlyData add-on to your application and configuring it with your S3 bucket.
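To make the pipeline model described above concrete, here is a minimal sketch using boto3’s Data Pipeline client. The pipeline name, schedule period, and object IDs are placeholder assumptions, and a real definition would also set IAM roles, a pipeline log URI, and at least one activity:

```python
import boto3

# Sketch: create a pipeline, attach a definition (a schedule plus the
# Default configuration object), and activate it. All names and IDs
# below are illustrative placeholders.
client = boto3.client("datapipeline")

pipeline = client.create_pipeline(
    name="hourly-log-analysis", uniqueId="hourly-log-analysis-1"
)
pipeline_id = pipeline["pipelineId"]

client.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {
            "id": "DefaultSchedule",
            "name": "RunHourly",
            "fields": [
                {"key": "type", "stringValue": "Schedule"},
                {"key": "period", "stringValue": "1 hours"},
                {"key": "startAt", "stringValue": "FIRST_ACTIVATION_DATE_TIME"},
            ],
        },
        {
            "id": "Default",
            "name": "Default",
            "fields": [
                {"key": "scheduleType", "stringValue": "cron"},
                {"key": "schedule", "refValue": "DefaultSchedule"},
            ],
        },
    ],
)

client.activate_pipeline(pipelineId=pipeline_id)
```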
Statistics
- Stacks: AWS Data Pipeline 95, FlyData 3
- Followers: AWS Data Pipeline 398, FlyData 4
- Votes: AWS Data Pipeline 1, FlyData 0

Pros & Cons
- No community feedback yet

Integrations
- No integrations available

AWS Snowball Edge is a 100 TB data transfer device with on-board storage and compute capabilities. You can use Snowball Edge to move large amounts of data into and out of AWS, as a temporary storage tier for large local datasets, or to support local workloads in remote or offline locations.
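Snowball jobs can be requested through the AWS SDK; the sketch below uses boto3 and assumes a shipping address and an IAM service role already exist (the AddressId, RoleARN, and bucket name are placeholders):

```python
import boto3

# Sketch: request an import job fulfilled by a Snowball Edge device.
# AddressId and RoleARN are placeholders; create them beforehand
# (snowball.create_address, IAM) before running this for real.
snowball = boto3.client("snowball")

job = snowball.create_job(
    JobType="IMPORT",
    SnowballType="EDGE",
    Resources={"S3Resources": [{"BucketArn": "arn:aws:s3:::my-ingest-bucket"}]},
    AddressId="ADID00000000-0000-0000-0000-000000000000",
    RoleARN="arn:aws:iam::123456789012:role/snowball-import-role",
    ShippingOption="SECOND_DAY",
    Description="Bulk import of on-premises archive",
)
print(job["JobId"])
```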

It is an elegant and simple HTTP library for Python, built for human beings. It allows you to send HTTP/1.1 requests extremely easily. There’s no need to manually add query strings to your URLs, or to form-encode your POST data.
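This description matches the Python Requests library; a quick sketch of the two conveniences it mentions, using httpbin.org as an example endpoint:

```python
import requests

# Query-string parameters are encoded into the URL for you.
resp = requests.get("https://httpbin.org/get", params={"q": "data pipeline", "page": 2})
resp.raise_for_status()
print(resp.url)            # ...get?q=data+pipeline&page=2
print(resp.json()["args"])

# Form data is automatically form-encoded for POST requests.
resp = requests.post("https://httpbin.org/post", data={"name": "example"})
print(resp.status_code)
```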

Its focus is on performance; specifically, end-user perceived latency and network and server resource usage.

It is an open-source bulk data loader that helps transfer data between various databases, storage systems, file formats, and cloud services.
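This description matches Embulk, which is driven by a YAML config naming an input and an output plugin. A sketch, assuming the embulk executable is installed and on PATH (the path prefix and column layout are placeholder assumptions):

```python
import subprocess

# Illustrative Embulk job: read local CSV files, print records to stdout.
# Swap the out section for a database or cloud-storage plugin in practice.
CONFIG = """\
in:
  type: file
  path_prefix: ./data/sample_
  parser:
    type: csv
    skip_header_lines: 1
    columns:
      - {name: id, type: long}
      - {name: name, type: string}
out:
  type: stdout
"""

with open("config.yml", "w") as f:
    f.write(CONFIG)

# Requires embulk to be installed and on PATH.
subprocess.run(["embulk", "run", "config.yml"], check=True)
```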

BigQuery Data Transfer Service lets you focus your efforts on analyzing your data. You can set up a data transfer with a few clicks. Your analytics team can lay the foundation for a data warehouse without writing a single line of code.
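If you do want to script it rather than click through the console, the official Python client can create transfer configs; the sketch below schedules a daily query, with the project ID, dataset, table name, and SQL all placeholder assumptions:

```python
from google.cloud import bigquery_datatransfer

# Sketch: schedule a daily query via the BigQuery Data Transfer Service.
client = bigquery_datatransfer.DataTransferServiceClient()
parent = client.common_project_path("my-project")  # placeholder project ID

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="analytics",
    display_name="Daily rollup",
    data_source_id="scheduled_query",
    params={
        "query": "SELECT CURRENT_DATE() AS day, COUNT(*) AS n FROM `my-project.logs.events`",
        "destination_table_name_template": "daily_rollup",
        "write_disposition": "WRITE_TRUNCATE",
    },
    schedule="every 24 hours",
)

config = client.create_transfer_config(parent=parent, transfer_config=transfer_config)
print(config.name)
```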

A cloud-based solution engineered to fill the gaps between cloud applications. The software utilizes Intelligent 2-way Contact Sync technology to sync contacts in real time between your favorite CRM and marketing apps.

It offers an industry-leading data synchronization tool, trusted by millions of users and thousands of companies across the globe: resilient, fast, and scalable p2p file sync software for enterprises and individuals.

The drop-in data importer that you can implement in hours, not weeks. Give your users the import experience you always dreamed of, but never had time to build.

Import/Export supports importing and exporting data into and out of Amazon S3 buckets. For significant data sets, AWS Import/Export is often faster than Internet transfer and more cost effective than upgrading your connectivity.

It is a .NET library that can read/write Office formats without Microsoft Office installed. No COM+, no interop.