Hadoop vs Oracle: What are the differences?

Introduction:

Hadoop and Oracle are both widely used for storing, managing, and analyzing large volumes of data. While they overlap in purpose, there are key differences between the two that make each suitable for distinct use cases. Below are the crucial disparities that distinguish Hadoop from Oracle.

  1. Architecture: Hadoop is built around a distributed file system (HDFS) and distributed processing, allowing it to store and process massive amounts of data across a cluster of commodity hardware. Oracle, on the other hand, is based on a centralized database architecture, which is well suited to transactional processing and structured data.

  2. Scalability: Hadoop scales horizontally by adding commodity nodes to the cluster, letting it absorb growing data volumes and processing requirements. In contrast, Oracle's scalability is typically achieved through costly hardware upgrades and may face limitations in handling petabytes of data efficiently.

  3. Data Types: Hadoop is designed to handle both structured and unstructured data, making it ideal for processing diverse data formats such as text, images, and videos. Oracle, on the other hand, excels in managing structured relational data and is optimized for complex transactional processing.

  4. Cost: Hadoop is generally perceived as a cost-effective solution for big data analytics due to its open-source nature and ability to run on commodity hardware. In contrast, Oracle is a commercial database system that involves licensing fees, maintenance costs, and, for engineered systems such as Exadata, expenses for proprietary hardware.

  5. Data Processing Model: Hadoop employs a batch processing model suited to large datasets where high latency is acceptable, making it ideal for tasks like log processing and ETL jobs. Oracle, on the other hand, supports real-time transaction processing and complex queries, making it preferable for interactive applications with low latency demands (a sketch contrasting the two models follows this list).

  6. Ecosystem: Hadoop offers a rich ecosystem of tools and frameworks such as Apache Spark, Pig, and Hive, enhancing its capabilities for data processing, analysis, and machine learning. Oracle, while also having a robust ecosystem of tools, may require additional licensing for accessing certain features and functionalities.
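To make the processing-model contrast concrete, here is a minimal, hypothetical sketch: a PySpark batch job that aggregates raw log files from HDFS, next to a single low-latency transactional lookup against Oracle through the python-oracledb driver. All paths, table names, and credentials are illustrative assumptions, not a prescribed setup.

```python
# Hypothetical sketch: batch analytics on Hadoop vs. a transactional Oracle query.
# All paths, table names, and credentials below are illustrative assumptions.

# --- Batch side: a PySpark job scanning raw log files stored in HDFS ---
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-log-rollup").getOrCreate()

logs = spark.read.json("hdfs:///data/raw/logs/2024-01-*")   # assumed HDFS path
daily_errors = (
    logs.filter(F.col("level") == "ERROR")
        .groupBy(F.to_date("timestamp").alias("day"))
        .count()
)
# High-latency, high-throughput: the whole dataset is scanned and rewritten.
daily_errors.write.mode("overwrite").parquet("hdfs:///data/curated/daily_errors")

# --- Transactional side: a single low-latency lookup against Oracle ---
import oracledb  # the python-oracledb driver

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()
cur.execute(
    "SELECT status, premium FROM policies WHERE policy_id = :1",  # assumed table
    [1042],
)
print(cur.fetchone())
conn.close()
```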

In summary, Hadoop and Oracle differ significantly in architecture, scalability, data types, cost, data processing models, and ecosystem, making each better suited to specific data management and analytics requirements.

Advice on Hadoop and Oracle
Needs advice on Hadoop, MarkLogic, and Snowflake

For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake. We're trying to figure out how Snowflake fits into the picture. Does anybody have good suggestions or best practices for when to use each, and what data to store in MarkLogic versus Snowflake versus Hadoop? Or are all three of these platforms redundant with one another?

Replies (1)
Ivo Dinis Rodrigues
None of your business at MarkLogic · | 1 upvote · 20.1K views
Recommends

As I see it, you can use Snowflake as your data warehouse and MarkLogic as a data lake. You can land all your raw data in MarkLogic and curate it to a company data model, then supply that to Snowflake. You could try to implement the data warehouse functionality on MarkLogic, but it would just cost you a lot of time. If you are using the AWS version of Snowflake, you can use the MarkLogic Spark connector to access the data. As an extra, you can also use MarkLogic as an operational reporting system if you pair it with a reporting tool like Power BI. With additional APIs, you can also provide data to other systems with MarkLogic as the source.
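To sketch the suggested split in practice: a minimal, hypothetical example assuming curated records have already been exported from MarkLogic (for instance via its Spark connector) into a Snowflake stage, then loaded and queried with the snowflake-connector-python package. The account, stage, warehouse, and table names are all made up for illustration.

```python
# Hypothetical sketch of the suggested split: MarkLogic holds the raw/curated
# data lake, Snowflake serves warehouse queries. Names and credentials are
# illustrative assumptions, not a confirmed setup.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # assumed Snowflake account identifier
    user="etl_user",
    password="secret",
    warehouse="ANALYTICS_WH",
    database="INSURANCE",
    schema="CURATED",
)
cur = conn.cursor()

# Load curated claim records previously exported from MarkLogic to a stage.
cur.execute("""
    COPY INTO claims
    FROM @curated_stage/claims/          -- assumed stage path
    FILE_FORMAT = (TYPE = PARQUET)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")

# Warehouse-style analytical query served by Snowflake.
cur.execute("""
    SELECT line_of_business, SUM(paid_amount)
    FROM claims
    GROUP BY line_of_business
""")
for row in cur:
    print(row)
conn.close()
```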

Needs advice on Hadoop, InfluxDB, and Kafka

I have a lot of data currently sitting in a MariaDB database: many tables weighing in around 200 GB with indexes. Most of the large tables have a date column that is always filtered on, but there are usually 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, loading a large dataset is slow.

Replies (1)
Recommends Druid

Druid could be an amazing solution for your use case. My understanding, and the assumption, is that you are looking to export your data from MariaDB for an analytical workload. Druid can be used as a time-series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:

  1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local and cloud storage, or databases like MySQL and Postgres). In your case, MariaDB uses the same drivers as MySQL.
  2. It is a columnar database, so you can query just the fields that are required, which automatically makes queries faster.
  3. Druid intelligently partitions data based on time, and time-based queries are significantly faster than in traditional databases.
  4. Scale up or down by just adding or removing servers; Druid automatically rebalances, and its fault-tolerant architecture routes around server failures.
  5. It provides an amazing centralized UI to manage data sources, queries, and tasks.
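For a feel of what querying Druid looks like once the MariaDB data has been ingested, here is a minimal sketch against Druid's HTTP SQL endpoint (/druid/v2/sql). The host, datasource, and column names are assumptions for illustration.

```python
# Minimal sketch: a time-filtered aggregation against Druid's SQL API.
# Host, datasource, and column names are illustrative assumptions.
import requests

DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"  # assumed router address

query = """
    SELECT TIME_FLOOR(__time, 'P1D') AS "day",
           country,
           COUNT(*) AS events
    FROM "events"                      -- assumed datasource name
    WHERE __time >= TIMESTAMP '2024-01-01'
    GROUP BY 1, 2
    ORDER BY 1
"""

resp = requests.post(DRUID_SQL_URL, json={"query": query}, timeout=30)
resp.raise_for_status()
for row in resp.json():
    print(row)
```

The date-based WHERE clause is the part that benefits most from Druid's time partitioning, which matches the always-filtered date column described in the question.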

Decisions about Hadoop and Oracle
Daniel Moya
Data Engineer at Dimensigon · | 4 upvotes · 451.4K views

We have chosen Tibero over Oracle because we want to offer PL/SQL-as-a-Service that users can deploy in any cloud, from our website, at a standard cost. With Oracle Database, developers would have to worry about what they implement and the related cost of each feature, whereas Tibero's licensing model is a single price with all features included, so neither we nor the developers using our SQLaaS have to worry. PostgreSQL would be the open-source option, but we need to offer an SQLaaS with encryption and more enterprise features behind the scenes, and the best-value option we found for PL/SQL-based applications was Tibero.


We wanted a JSON datastore that could save the state of our bioinformatics visualizations without destructive normalization. As a leading NoSQL data storage technology, MongoDB has been a perfect fit for our needs. Plus it's open source and has an enterprise SLA scale-out path, with support for hosted solutions like Atlas. Mongo has been an absolute champ, so much so that SQL Server and Oracle have begun shipping JSON column types as a new feature for their databases. And when Fast Healthcare Interoperability Resources (FHIR) announced support for JSON, we basically had our FHIR data lake technology.


In the field of bioinformatics, we regularly work with hierarchical and unstructured document data. Unstructured text data from PDFs, image data from radiographs, phylogenetic trees and cladograms, network graphs, streaming ECG data... none of it fits into a traditional SQL database particularly well. As such, we prefer to use document-oriented databases.

MongoDB is probably the oldest component in our stack besides JavaScript, having been in it for over 5 years. At the time, we were looking for a technology that could simply cache our data visualization state (stored in JSON) in a database as-is, without any destructive normalization. MongoDB was the perfect tool, and it has been exceeding expectations ever since.
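A minimal sketch of that pattern with PyMongo: store a visualization's JSON state document exactly as produced and fetch it back with one lookup. The database, collection, and field names here are hypothetical.

```python
# Minimal sketch: cache a visualization's JSON state in MongoDB as-is,
# with no destructive normalization. Names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
states = client["bioinf"]["viz_states"]   # assumed database/collection

# The nested document is stored exactly as the front end produced it.
state = {
    "viz_id": "phylo-tree-42",
    "zoom": 1.5,
    "selected_nodes": ["OTU_17", "OTU_23"],
    "layout": {"type": "radial", "collapsed_clades": ["Firmicutes"]},
}
states.replace_one({"viz_id": state["viz_id"]}, state, upsert=True)

# Later, restore the view with a single lookup.
restored = states.find_one({"viz_id": "phylo-tree-42"})
print(restored["layout"]["type"])
```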

Trivia fact: some of the earliest electronic medical records (EMRs) used a document-oriented database called MUMPS as early as the 1960s, prior to the invention of SQL. MUMPS is still in use today in systems like Epic and VistA, and stores upwards of 40% of all medical records at hospitals. So, we saw MongoDB as something of a 21st-century version of the MUMPS database.

Pros of Hadoop

  • 39 Great ecosystem
  • 11 One stack to rule them all
  • 4 Great load balancer
  • 1 Amazon aws
  • 1 Java syntax

Pros of Oracle

  • 44 Reliable
  • 33 Enterprise
  • 15 High Availability
  • 5 Hard to maintain
  • 5 Expensive
  • 4 Maintainable
  • 4 Hard to use
  • 3 High complexity


Cons of Hadoop

  • Be the first to leave a con

Cons of Oracle

  • 14 Expensive



What is Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.
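As a taste of the "simple programming models" mentioned above, here is the classic word-count pair written for Hadoop Streaming, where the map and reduce phases are just scripts reading stdin and writing stdout. This is a sketch; the invocation shown in the comment assumes the standard hadoop-streaming jar.

```python
# Classic word count for Hadoop Streaming: a sketch of the "simple
# programming model" - plain stdin/stdout scripts that Hadoop distributes
# across the cluster. Illustrative invocation (paths are assumptions):
#   hadoop jar hadoop-streaming.jar -input /in -output /out \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce"
import sys

def mapper():
    # Emit one (word, 1) pair per word; Hadoop sorts pairs by key between phases.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives grouped by key; sum the counts for each consecutive word.
    current, total = None, 0
    for line in sys.stdin:
        word, count = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = word, 0
        total += int(count)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    # One script doubling as both phases: select the phase with an argument.
    mapper() if sys.argv[1:] == ["map"] else reducer()
```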

What is Oracle?

Oracle Database is an RDBMS. An RDBMS that implements object-oriented features such as user-defined types, inheritance, and polymorphism is called an object-relational database management system (ORDBMS). Oracle Database has extended the relational model to an object-relational model, making it possible to store complex business models in a relational database.
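To illustrate the object-relational features described above (user-defined types and inheritance), here is a hypothetical sketch that creates a supertype, a subtype, and an object table from Python via the python-oracledb driver. Connection details and names are assumptions.

```python
# Hypothetical sketch of Oracle's object-relational features: a user-defined
# type, a subtype inheriting from it, and a table of objects. Connection
# details and names are illustrative assumptions.
import oracledb

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()

# A supertype that allows inheritance (NOT FINAL).
cur.execute("""
    CREATE TYPE person_t AS OBJECT (
        name  VARCHAR2(100),
        email VARCHAR2(100)
    ) NOT FINAL
""")

# A subtype inheriting person_t's attributes and adding its own.
cur.execute("""
    CREATE TYPE employee_t UNDER person_t (
        employee_id NUMBER,
        salary      NUMBER
    )
""")

# An object table whose rows are person_t instances (or any subtype).
cur.execute("CREATE TABLE people OF person_t")
conn.close()
```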


What are some alternatives to Hadoop and Oracle?

Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.

MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.

Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).

Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data.

Snowflake
Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS), with no infrastructure to manage and no knobs to turn.