**Amazon S3**

Amazon Simple Storage Service provides a fully redundant data storage infrastructure for storing and retrieving any amount of data, at any time, from anywhere on the web.

Key features:

- Write, read, and delete objects containing from 1 byte to 5 terabytes of data each. The number of objects you can store is unlimited.
- Each object is stored in a bucket and retrieved via a unique, developer-assigned key.
- A bucket can be stored in one of several Regions. You can choose a Region to optimize for latency, minimize costs, or address regulatory requirements. Amazon S3 is currently available in the US Standard, US West (Oregon), US West (Northern California), EU (Ireland), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), South America (São Paulo), and GovCloud (US) Regions. The US Standard Region automatically routes requests to facilities in Northern Virginia or the Pacific Northwest using network maps.
- Objects stored in a Region never leave the Region unless you transfer them out. For example, objects stored in the EU (Ireland) Region never leave the EU.
- Authentication mechanisms are provided to ensure that data is kept secure from unauthorized access. Objects can be made private or public, and rights can be granted to specific users.
- Options for secure data upload/download and encryption of data at rest are provided for additional data protection.
- Uses standards-based REST and SOAP interfaces designed to work with any Internet development toolkit.
- Built to be flexible so that protocol or functional layers can easily be added. The default download protocol is HTTP. A BitTorrent protocol interface is provided to lower costs for high-scale distribution.
- Provides functionality to simplify manageability of data through its lifetime, including options for segregating data by bucket, monitoring and controlling spend, and automatically archiving data to even lower-cost storage options. These options can be administered from the Amazon S3 Management Console.
- Reliability backed by the Amazon S3 Service Level Agreement.
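The bucket/key model and Region choice described above map directly onto SDK calls. Below is a minimal sketch using the AWS SDK for Java (v1); the bucket name and object key are hypothetical placeholders, and credentials are assumed to come from the SDK's default provider chain:

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3Example {
    public static void main(String[] args) {
        // Pin the client to a specific Region, e.g. EU (Ireland);
        // objects written here stay in this Region unless transferred out.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.EU_WEST_1)
                .build();

        // Store an object: the bucket plus a developer-assigned key
        // uniquely identify it. Bucket and key here are placeholders.
        s3.putObject("example-bucket", "reports/summary.txt", "hello, S3");

        // Retrieve the object by the same bucket/key pair.
        String body = s3.getObjectAsString("example-bucket", "reports/summary.txt");
        System.out.println(body);
    }
}
```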
**Apache Hadoop**

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. (No feature list is given for Hadoop in this comparison; see the MapReduce sketch after the Bigtable material below.)

**Google Cloud Bigtable**

Google Cloud Bigtable offers you a fast, fully managed, massively scalable NoSQL database service that's ideal for web, mobile, and Internet of Things applications requiring terabytes to petabytes of data. Unlike comparable market offerings, Cloud Bigtable doesn't require you to sacrifice speed, scale, or cost efficiency when your applications grow. Cloud Bigtable has been battle-tested at Google for more than 10 years; it's the database driving major applications such as Google Analytics and Gmail.

Key features:

- Unmatched performance: Single-digit millisecond latency and over 2X the performance per dollar of unmanaged NoSQL alternatives.
- Open-source interface: Because Cloud Bigtable is accessed through the HBase API, it is natively integrated with much of the existing big data and Hadoop ecosystem and supports Google's big data products. Additionally, data can be imported from or exported to existing HBase clusters through simple bulk ingestion tools using industry-standard formats.
- Low cost: By providing a fully managed service and exceptional efficiency, Cloud Bigtable's total cost of ownership is less than half the cost of its direct competition.
- Security: Cloud Bigtable is built with a replicated storage strategy, and all data is encrypted both in flight and at rest.
- Simplicity: Creating or reconfiguring a Cloud Bigtable cluster is done through a simple user interface and can be completed in less than 10 seconds. As data is put into Cloud Bigtable, the backing storage scales automatically, so there's no need to do complicated estimates of capacity requirements.
- Maturity: Over the past 10+ years, Bigtable has driven Google's most critical applications. In addition, the HBase API is an industry-standard interface for combined operational and analytical workloads.
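Because Bigtable is reached through the standard HBase client API (as the feature list notes), basic reads and writes look like ordinary HBase code. A minimal sketch using the bigtable-hbase adapter, assuming a hypothetical project, instance, and pre-created table "metrics" with column family "stats":

```java
import com.google.cloud.bigtable.hbase.BigtableConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class BigtableExample {
    public static void main(String[] args) throws Exception {
        // Project and instance IDs are placeholders; the Connection speaks
        // the regular HBase client API but is backed by Cloud Bigtable.
        try (Connection connection =
                 BigtableConfiguration.connect("my-project", "my-instance")) {
            Table table = connection.getTable(TableName.valueOf("metrics"));

            // Write one cell: row key, column family, qualifier, value.
            Put put = new Put(Bytes.toBytes("device#1234"));
            put.addColumn(Bytes.toBytes("stats"), Bytes.toBytes("temp"),
                          Bytes.toBytes("21.5"));
            table.put(put);

            // Read the cell back by row key.
            Result row = table.get(new Get(Bytes.toBytes("device#1234")));
            byte[] value = row.getValue(Bytes.toBytes("stats"),
                                        Bytes.toBytes("temp"));
            System.out.println(Bytes.toString(value));
        }
    }
}
```

The same client code runs against a plain HBase cluster if the connection is built from an HBase configuration instead, which is what makes the import/export path between existing HBase clusters and Bigtable straightforward.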
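Returning to Hadoop: the "simple programming models" in its description refer chiefly to MapReduce. The canonical illustration is the word-count job from the Hadoop tutorial, sketched below. The framework splits the input across the cluster, runs the mapper on each split in parallel (local computation on each node), and groups intermediate values by key before the reducer sums them; input and output paths come from the command line.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: one mapper per input split, running in parallel across
  // the cluster; emits (word, 1) for every token in its split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: the framework has already grouped all counts for a
  // given word, so the reducer only has to sum them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // pre-aggregate on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```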