
Hadoop

Open-source software for reliable, scalable, distributed computing

What is Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
Hadoop is a tool in the Databases category of a tech stack.
Hadoop is an open source tool with 14.2K GitHub stars and 8.7K GitHub forks. Here's a link to Hadoop's open source repository on GitHub.
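The "simple programming models" mentioned above typically means MapReduce: a job is expressed as a map step that emits key/value pairs and a reduce step that aggregates them per key. As a rough illustration (a sketch, not taken from the Hadoop documentation), here is what a word-count job could look like using Hadoop Streaming, which lets plain scripts act as the mapper and reducer by reading stdin and writing stdout; the file names are hypothetical.

```python
#!/usr/bin/env python3
# mapper.py (hypothetical): emit one "<word><TAB>1" line per token read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py (hypothetical): sum the counts for each word. Hadoop sorts the mapper
# output by key, so all lines for a given word arrive contiguously.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

A job like this would be submitted with the Hadoop Streaming jar, roughly `hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out` (paths are illustrative, and the jar location depends on the installation). Each mapper typically runs against its own input split on a node that holds that block, which is the "local computation and storage" the description refers to.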

Who uses Hadoop?

Companies
367 companies reportedly use Hadoop in their tech stacks, including Uber, Airbnb, and Pinterest.

Developers
1745 developers on StackShare have stated that they use Hadoop.

Hadoop Integrations

Datadog, Azure Cosmos DB, Oracle PL/SQL, Apache Flink, and Presto are some of the popular tools that integrate with Hadoop. Here's a list of all 43 tools that integrate with Hadoop.
Pros of Hadoop
Great ecosystem (39 upvotes)
One stack to rule them all (11 upvotes)
Great load balancer (4 upvotes)
Amazon AWS (1 upvote)
Java syntax (1 upvote)
Decisions about Hadoop

Here are some stack decisions, common use cases and reviews by companies and developers who chose Hadoop in their tech stack.

Shehryar Mallick
Associate Data Engineer at Virtuosoft
Needs advice on Apache Hive and OpenRefine

I've been going over the documentation and couldn't find answers to different questions like:

Apache Hive is built on top of Hadoop, meaning that if I wanted to scale it up I could do either horizontal or vertical scaling. But if I want to scale up OpenRefine to handle more data, how can that be achieved? The only thing I could find was to allocate more memory (e.g., 2 or 4 GB), but with that approach we would eventually run out of memory to allot. Thoughts on this?

Secondly, Hadoop has MapReduce, meaning a task is split across many mappers running in parallel, which in turn increases processing speed. Is there a similar mechanism in OpenRefine, or does it only have a single processing unit (since it runs locally)? Thoughts?

Needs advice on Hadoop, MarkLogic, and Snowflake

For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake. We are trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions or best practices for when to use each platform and what data to store in MarkLogic versus Snowflake versus Hadoop, or are all three of these platforms redundant with one another?

Needs advice on Airflow and Apache NiFi

I am looking for the best tool to orchestrate #ETL workflows in non-Hadoop environments, mainly for regression testing use cases. Would Airflow or Apache NiFi be a good fit for this purpose?

For example, I want to run an Informatica ETL job and then run an SQL task as a dependency, followed by another task from Jira. What tool is best suited to set up such a pipeline?
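For what it's worth, a chain like that (an ETL job, then a dependent SQL task, then a Jira task) maps naturally onto an Airflow DAG. Below is a minimal sketch; the DAG id, task ids, and bash commands are placeholders, not real Informatica or Jira integrations.

```python
# A minimal Airflow DAG sketch; task ids and bash commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="regression_etl_pipeline",  # hypothetical name
    start_date=datetime(2023, 1, 1),
    schedule=None,                     # no schedule; trigger manually (Airflow 2.4+ keyword)
    catchup=False,
) as dag:
    run_informatica_job = BashOperator(
        task_id="run_informatica_job",
        bash_command="echo 'kick off the Informatica workflow here'",
    )
    run_sql_task = BashOperator(
        task_id="run_sql_task",
        bash_command="echo 'run the dependent SQL task here'",
    )
    update_jira = BashOperator(
        task_id="update_jira",
        bash_command="echo 'create or update the Jira ticket here'",
    )

    # Declare the dependency chain: ETL job, then SQL task, then Jira task.
    run_informatica_job >> run_sql_task >> update_jira
```

Broadly speaking, Airflow is oriented toward exactly this kind of dependency and scheduling logic between discrete tasks, while NiFi is oriented toward continuous data flow between systems, which is worth weighing for a regression-testing pipeline.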


Blog Posts

A post from Segment (Aug 28 2019) covering MySQL, Kafka, Apache Spark, and 6 other tools
A post covering Python, Java, Amazon S3, and 16 other tools

Hadoop Alternatives & Comparisons

What are some alternatives to Hadoop?
Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.
MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.
Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).
Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data.
Snowflake
Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS)—no infrastructure to manage and no knobs to turn.

Hadoop's Followers
2272 developers follow Hadoop to keep up with related blogs and decisions.