Hadoop vs Microsoft SQL Server

Hadoop vs Microsoft SQL Server: What are the differences?

Introduction

In this article, we will discuss the key differences between Hadoop and Microsoft SQL Server. Both Hadoop and SQL Server are widely used data management and analytics platforms, but they have distinct characteristics and functionalities. Understanding these differences is crucial for organizations to make informed decisions regarding their data processing and analysis needs.

  1. Scalability: One of the key differences between Hadoop and SQL Server is their scalability. Hadoop is designed to handle massive amounts of data and can scale horizontally by adding more commodity hardware to the cluster. On the other hand, SQL Server is primarily built for vertical scalability, where a single server can be scaled vertically by adding more resources such as CPU, memory, and storage. This makes Hadoop more suitable for big data processing and analysis tasks that require distributed computing power.

  2. Data Types and Schema: Hadoop and SQL Server take different approaches to data types and schema. Hadoop, built around a distributed file system, can handle structured, semi-structured, and unstructured data without any predefined schema. It allows for schema-on-read, where the structure of the data is determined at processing time. SQL Server, on the other hand, requires a predefined schema and enforces strict data typing. It is well suited to structured data management and supports SQL queries and relational data modeling. (A sketch contrasting the two approaches appears after this list.)

  3. Processing Paradigm: Another significant difference between Hadoop and SQL Server is their processing paradigms. Hadoop is designed for batch processing and can efficiently process large volumes of data sequentially, excelling at complex processing jobs written with models like MapReduce. SQL Server, on the other hand, is optimized for transactional processing and supports real-time query processing. It is well suited to online transaction processing (OLTP) scenarios where low latency is critical.

  4. Cost: Cost is a factor that differentiates Hadoop and SQL Server deployments. Hadoop, being an open-source framework, is generally more cost-effective compared to SQL Server, which is a commercial database management system. Hadoop allows organizations to use commodity hardware and offers flexible licensing options, making it more affordable for large-scale data processing and analysis requirements. SQL Server, on the other hand, involves licensing costs for both the software and additional resources for vertical scalability.

  5. Ecosystem and Integration: Hadoop has a vast ecosystem of tools and frameworks, providing capabilities for data ingestion, processing, analytics, and visualization. It integrates well with various open-source technologies, such as Apache Hive, Apache Pig, and Apache Spark, offering a comprehensive data processing and analytics platform. SQL Server, on the other hand, provides a comprehensive suite of tools and services that are tightly integrated with the Microsoft technology stack. It offers seamless integration with other Microsoft products like Excel, Power BI, and Azure services.

  6. Maturity and Support: Hadoop and SQL Server also differ in maturity and support. Hadoop, the newer technology, has a rapidly evolving ecosystem and is supported by the Apache Software Foundation and a large community of contributors. SQL Server, on the other hand, is a mature and widely adopted database management system. It has been on the market for decades and has a well-established support structure from Microsoft, including regular updates, patches, and comprehensive documentation.

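To make the schema difference in point 2 concrete, here is a minimal sketch (in Python, with hypothetical file, table, and field names) of schema-on-read versus schema-on-write: a Hadoop-style pipeline discovers structure while parsing each record, whereas SQL Server needs a typed schema before any row is stored.

    import json

    # Schema-on-read: structure is discovered as each JSON record is parsed;
    # no table definition exists ahead of time.
    with open("events.jsonl") as f:
        for line in f:
            record = json.loads(line)  # any shape is accepted
            print(record.get("user_id"), record.get("event_type"))

    # Schema-on-write: SQL Server requires a typed schema up front.
    CREATE_TABLE = """
    CREATE TABLE events (
        user_id    INT         NOT NULL,
        event_type VARCHAR(50) NOT NULL,
        created_at DATETIME2   NOT NULL
    );
    """
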
In summary, Hadoop and SQL Server differ in terms of scalability, data types and schema, processing paradigms, cost, ecosystem and integration, and maturity and support. Understanding these differences is crucial for organizations to determine which platform best fits their specific data processing, analysis, and management requirements.

Advice on Hadoop and Microsoft SQL Server

I have a project in production where one part generates HTML from a JSON object. Normally we use Microsoft SQL Server as our only main database, but for this part some team members suggest working with a NoSQL database, since we are going to handle JSON data for both retrieval and querying. Others replied that this would add complexity and we would lose SQL Server's unit of work, which would break the atomic behavior, and they suggest continuing with SQL Server since it supports working with JSON. If you have practical experience using JSON with SQL Server, kindly share your feedback.

Replies (2)
TwoBySea

I agree with the advice you have been given to stick with SQL Server. If you are on the latest SQL Server version, you can query inside the JSON field. Set up a test database with a JSON field and try some queries. Once you understand it and can demonstrate it, show it to the other developers who are suggesting MongoDB. Once they see it working with their own eyes, they may drop their preference for Mongo over SQL. I would only seriously consider MongoDB if there were no other SQL requirements. I wouldn't do both; I'd be all SQL or all Mongo.

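As a minimal sketch of the kind of test the reply above suggests, assuming a hypothetical orders table with an NVARCHAR payload column holding JSON (and the pyodbc package): SQL Server 2016 and later ship JSON functions such as JSON_VALUE that can filter and project inside the JSON text.

    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=testdb;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # JSON_VALUE extracts a scalar from JSON stored in an NVARCHAR column.
    cursor.execute("""
        SELECT id, JSON_VALUE(payload, '$.customer.name') AS customer_name
        FROM orders
        WHERE JSON_VALUE(payload, '$.status') = 'shipped';
    """)
    for row in cursor.fetchall():
        print(row.id, row.customer_name)
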
Kevin Deyne
Principal Software Engineer at Accurate Background · 2 upvotes · 48.7K views

I think the key thing to look at is what kind of queries you're expecting to run on that JSON and how stable that data is going to be (and whether you actually need to store the data as JSON at all; it's generally pretty inexpensive to generate a JSON object).

MongoDB gets rid of the relational aspect of data in favor of data being very fluid in structure.

So if your JSON is going to vary a lot/is unpredictable/will change over time and you need to run queries efficiently like 'records where the field x exists and its value is higher than 3', that's a great use case for MongoDB.

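For what it's worth, the query described above is a one-liner in MongoDB. Here is a minimal sketch with the pymongo package (database, collection, and field names are hypothetical):

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    records = client["mydb"]["records"]

    # "records where the field x exists and its value is higher than 3"
    for doc in records.find({"x": {"$exists": True, "$gt": 3}}):
        print(doc)
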
It's hard to solve this in a standard relational model: indexing a single column that holds wildly different values is pretty much impossible to do efficiently, and pulling the data out into its own columns is hard because you can't predict how many columns you'd have or what their datatypes would be. If this sounds like your predicament, 100% go for MongoDB.

If this is always going to be more or less the same JSON, with predictably the same fields, then the fact that it's JSON doesn't particularly matter much. Your indexes are going to treat it much like a long string.

If the queried fields are very predictable, you should probably consider storing the fields as separate columns to get better querying capabilities. I.e., if you have {"x":1, "y":2}, {"x":5, "y":6}, {"x":9, "y":0}, just make a table with an x and y column and generate the JSON, as in the sketch below. The CPU hit is worth it compared to the querying capabilities.

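A minimal sketch of that last suggestion: keep x and y as typed, indexable columns and generate the JSON only when it is needed (the row data here is the example from the reply):

    import json

    rows = [(1, 2), (5, 6), (9, 0)]  # x, y values as they would come from a query

    payloads = [json.dumps({"x": x, "y": y}) for x, y in rows]
    print(payloads)  # ['{"x": 1, "y": 2}', '{"x": 5, "y": 6}', '{"x": 9, "y": 0}']
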
Needs advice on Hadoop, MarkLogic, and Snowflake

For a property and casualty insurance company, we currently use MarkLogic and Hadoop for our raw data lake, and we're trying to figure out where Snowflake fits in the picture. Does anybody have good suggestions or best practices for when to use each platform and what data to store in MarkLogic versus Snowflake versus Hadoop? Or are all three of these platforms redundant with one another?

Replies (1)
Ivo Dinis Rodrigues
None of your business at MarkLogic · 1 upvote · 20.3K views

As I see it, you can use Snowflake as your data warehouse and MarkLogic as a data lake. You can land all your raw data in MarkLogic and curate it into a company data model, then supply that to Snowflake. You could try to implement the data warehouse functionality on MarkLogic, but it would just cost you a lot of time. If you are using the AWS version of Snowflake, you can use the MarkLogic Spark connector to access the data. As an extra, you can also use MarkLogic as an operational reporting system if you pair it with a reporting tool like Power BI. With additional APIs, you can also provide data to other systems with MarkLogic as the source.

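As a rough illustration of the curate-then-supply flow described above (not MarkLogic-specific), here is a minimal sketch using the snowflake-connector-python package; the account, stage, and table names are hypothetical. Curated files exported from the data lake land in a stage, and COPY INTO loads them into a warehouse table:

    import snowflake.connector

    conn = snowflake.connector.connect(
        account="my_account", user="my_user", password="my_password",
        warehouse="LOAD_WH", database="DW", schema="CURATED",
    )
    cur = conn.cursor()

    # Load curated Parquet files from an external stage into a table.
    cur.execute("""
        COPY INTO claims
        FROM @curated_stage/claims/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
    """)
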
Needs advice on Hadoop, InfluxDB, and Kafka

I have a lot of data currently sitting in a MariaDB database: many tables that weigh 200 GB with indexes. Most of the large tables have a date column that is always filtered on, but there are usually 4-6 additional columns that are filtered on and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, loading a large dataset is pretty slow.

Replies (1)
Recommends Druid

Druid could be an amazing solution for your use case. My understanding and assumption is that you are looking to export your data from MariaDB for analytical workloads. Druid can serve as a time series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:

  1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local and cloud storage, or databases like MySQL and Postgres); in your case, MariaDB, which uses the same drivers as MySQL.
  2. It is a columnar database, so you can query just the fields that are required, which automatically makes your queries faster.
  3. Druid intelligently partitions data based on time, so time-based queries are significantly faster than in traditional databases.
  4. You can scale up or down by just adding or removing servers, and Druid automatically rebalances; the fault-tolerant architecture routes around server failures.
  5. It provides an amazing centralized UI to manage data sources, queries, and tasks.

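To give a feel for querying Druid once data is ingested, here is a minimal sketch against Druid's SQL endpoint (POST /druid/v2/sql on the router) using the requests package; the host, datasource, and column names are hypothetical:

    import requests

    resp = requests.post(
        "http://druid-router:8888/druid/v2/sql",
        json={"query": """
            SELECT status_code, COUNT(*) AS hits
            FROM web_events
            WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '7' DAY
            GROUP BY status_code
        """},
    )
    resp.raise_for_status()
    print(resp.json())  # __time is Druid's built-in timestamp column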

I am a Microsoft SQL Server programmer who is a bit out of practice, and I have been asked to assist on a new project. The overall purpose is to organize a large number of recordings so that they can be searched. It is an enormous music library, and the songs are several hours long. I need to include things like the time, date, and location of each recording. I don't have a problem with the general database design. I have two primary questions:

  1. I need to use either MySQL or PostgreSQL on a Linux-based OS. Which would be better for this application?
  2. I have not dealt with a sound-based data type before. How do I store that and put it in a table? Thank you.

Replies (6)

Hi Erin,

Honestly both databases will do the job just fine. I personally prefer Postgres.

Much more important is how you store the audio. While you could technically use a blob-type column, it's really not ideal to store audio files that are "several hours long" in a database row. Instead, consider storing the audio files in an object store (hosted options include Backblaze B2 or AWS S3) and persisting the key that references each object in your database column.

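A minimal sketch of that pattern, assuming S3 via boto3 and PostgreSQL via psycopg2 (bucket, table, and file names are hypothetical): upload the audio to the object store and persist only the key plus searchable metadata in the database.

    import boto3
    import psycopg2

    s3 = boto3.client("s3")
    key = "recordings/2021/05/show-042.flac"
    s3.upload_file("show-042.flac", "my-audio-bucket", key)

    # Only the object key and metadata go in the database row.
    conn = psycopg2.connect("dbname=recordings user=app")
    with conn, conn.cursor() as cur:
        cur.execute(
            "INSERT INTO recordings (title, recorded_at, location, s3_key) "
            "VALUES (%s, %s, %s, %s)",
            ("Show 42", "2021-05-01", "Austin, TX", key),
        )
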
Aaron Westley
Recommends PostgreSQL

Hi Erin, chances are you would want to store the files in a blob type. Both MySQL and Postgres support this. Can you explain a little more about your need to store the files in the database? It may be more effective to store the files on a file system or something like S3. To answer your question based on what you are describing, I would lean slightly towards PostgreSQL, since it tends to be a little better on the data warehousing side.

Christopher Wray
Web Developer at Soltech LLC · 3 upvotes · 466.1K views
Recommends Directus

Hey Erin! I would recommend checking out Directus before you start building your own app for them. I just stumbled upon it, and so far I'm extremely happy with its functionality. If your client is just looking for a simple web app for their own data, Directus may be a great option. It offers "database mirroring," so you can connect it to any database and set up functionality around it!

Julien DeFrance
Principal Software Engineer at Tophatter · 3 upvotes · 465.7K views
Recommends Amazon Aurora

Hi Erin! First of all, you'd probably want to go with a managed service; don't spin up your own MySQL installation on your own Linux box. If you are on AWS, they have different offerings for database services: standard RDS vs. Aurora. Aurora would be my preferred choice given the benefits it offers, the storage optimizations it comes with, etc. Such managed services easily allow you to apply new security patches and upgrades, and to set up backups, replication, etc. Doing this on your own would be risky or inefficient, or you might just give up. As far as which database to choose, you'll have the choice between PostgreSQL, MySQL, MariaDB, SQL Server, etc. I personally would recommend MySQL (the latest version available), as the official tooling for it (MySQL Workbench) is great, stable, and moreover free. Other database services exist; I'd recommend you also explore DynamoDB.

Regardless, you'd certainly keep only high-level records and metadata in the database, and the actual files most likely in S3, so that you can keep all options open in terms of what you'll do with them.

Recommends PostgreSQL

Hi Erin,

  • Coming from "big" DB engines such as Oracle or MSSQL, go for PostgreSQL. You'll get all the features you need with PostgreSQL.
  • Your case seems to point to a "NoSQL" or document database use case. You are covered here with PostgreSQL, which achieves excellent performance on JSON-based objects (a sketch follows below), so this is a second reason to choose PostgreSQL. MongoDB might be an excellent option as well if you need sharding and excellent map-reduce mechanisms for very massive data sets. You really should investigate the NoSQL option for your use case.
  • Starting with AWS Aurora is excellent advice, since vendor lock-in is limited, but I did not check it for JSON-based object / NoSQL features.
  • If you stick to a Linux server, the PostgreSQL or MySQL packages provided with your distribution are straightforward to install (e.g., apt install postgresql). For PostgreSQL, make sure you're comfortable with pg_hba.conf, especially for IP restrictions and access control.

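On the JSON point above, a minimal sketch of PostgreSQL's JSONB support with psycopg2 (table and field names are hypothetical): a GIN index plus the @> containment operator gives efficient queries inside the document.

    import psycopg2

    conn = psycopg2.connect("dbname=media user=app")
    with conn, conn.cursor() as cur:
        # A GIN index accelerates containment queries on the JSONB column.
        cur.execute("CREATE INDEX IF NOT EXISTS idx_meta ON recordings USING GIN (meta);")
        # @> tests whether the stored document contains the given fragment.
        cur.execute("SELECT id, title FROM recordings WHERE meta @> %s::jsonb;",
                    ('{"genre": "jazz"}',))
        print(cur.fetchall())
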
Regards,

Klaus Nji
Staff Software Engineer at SailPoint Technologies · 1 upvote · 465.8K views
Recommends PostgreSQL

I recommend Postgres as well. Superior performance overall and a more robust architecture.

Decisions about Hadoop and Microsoft SQL Server
Asif Khan
Software Development Engineer at Stier Solution Private Limited · 10 upvotes · 69.8K views

Easy to start, lightweight and open source.

When I started with PHP, MySQL was everywhere, so that is how I started with it. I am no expert in databases, but I started learning joins, stored procedures, triggers, etc. with MySQL.

I recently used it in one of my projects, Picfam.com, with a Node.js + Express backend.

Josip Užarević
Senior frontend developer · 6 upvotes · 70.8K views

We needed to transform an intranet desktop application into a web-based one as a mid-term project. My choice was the Django/Angular stack: Django since, in conjunction with Python, it enabled rapid development, and Angular since it was a stable, enterprise-level framework. Deadlines were somewhat tight, since the project being migrated had been developed over several years and had a lot of domain knowledge integrated into it. It was definitely a good decision: the deadlines were manageable, juniors were able to get up to speed on the project very quickly, and we were able to deploy continuously very well.

Pros of Hadoop

  • Great ecosystem (39)
  • One stack to rule them all (11)
  • Great load balancer (4)
  • Amazon AWS (1)
  • Java syntax (1)

Pros of Microsoft SQL Server

  • Reliable and easy to use (139)
  • High performance (101)
  • Great with .NET (95)
  • Works well with .NET (65)
  • Easy to maintain (56)
  • Azure support (21)
  • Full index support (17)
  • Always On (17)
  • Enterprise Manager is fantastic (10)
  • In-Memory OLTP engine (9)
  • Security is forefront (2)
  • Easy to set up and configure (2)
  • Docker delivery (1)
  • Columnstore indexes (1)
  • Great documentation (1)
  • Faster than Oracle (1)
  • Decent management tools (1)

Cons of Hadoop

  • Be the first to leave a con

Cons of Microsoft SQL Server

  • Expensive licensing (4)
  • Microsoft (2)

What is Hadoop?

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage.

What is Microsoft SQL Server?

Microsoft® SQL Server is a database management and analysis system for e-commerce, line-of-business, and data warehousing solutions.

What are some alternatives to Hadoop and Microsoft SQL Server?

Cassandra
Partitioning means that Cassandra can distribute your data across multiple machines in an application-transparent manner. Cassandra will automatically repartition as machines are added and removed from the cluster. Row store means that, like relational databases, Cassandra organizes data by rows and columns. The Cassandra Query Language (CQL) is a close relative of SQL.

MongoDB
MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.

Elasticsearch
Elasticsearch is a distributed, RESTful search and analytics engine capable of storing data and searching it in near real time. Elasticsearch, Kibana, Beats and Logstash are the Elastic Stack (sometimes called the ELK Stack).

Splunk
It provides the leading platform for Operational Intelligence. Customers use it to search, monitor, analyze and visualize machine data.

Snowflake
Snowflake eliminates the administration and management demands of traditional data warehouses and big data platforms. Snowflake is a true data warehouse as a service running on Amazon Web Services (AWS): no infrastructure to manage and no knobs to turn.