Kafka vs MongoDB


Kafka vs MongoDB: What are the differences?

Introduction

Kafka and MongoDB are two popular technologies used in the field of data management. While Kafka is a distributed streaming platform, MongoDB is a NoSQL document database. There are several key differences between these two technologies that set them apart in terms of their architecture, data handling capabilities, and use cases.

  1. Scalability and Performance: One key difference between Kafka and MongoDB lies in their ability to handle large volumes of data and provide high-performance capabilities. Kafka is known for its high-throughput, low-latency, and fault-tolerant nature, making it ideal for streaming and real-time data processing scenarios. On the other hand, MongoDB offers horizontal scalability and can handle large volumes of structured and unstructured data efficiently, making it suitable for applications requiring high write and read operations.

  2. Data Model: Another significant difference between Kafka and MongoDB lies in their data models. Kafka is primarily designed for processing streams of records in a fault-tolerant and scalable manner; it offers no rich querying capabilities, and its storage is an append-only log rather than a queryable database. In contrast, MongoDB is a document database with a flexible schema. It allows the storage of structured, semi-structured, and unstructured data and provides powerful querying capabilities for data retrieval and analysis.

  3. Data Persistence: Kafka and MongoDB also differ in terms of data persistence mechanisms. Kafka is a distributed publish-subscribe messaging system, where data is typically stored in a log-based manner for a defined retention period. It relies on replication and fault-tolerance mechanisms to ensure data durability. MongoDB, in contrast, stores data persistently in a document-based format. It offers ACID transactions and supports various storage engines, providing durability and consistency guarantees to the stored data.

  4. Data Processing Paradigm: Kafka and MongoDB employ different data processing paradigms. Kafka follows the publish-subscribe model, where data is continuously streamed, processed, and consumed by multiple consumers; it enables real-time data processing, stream processing, and event-driven architectures. MongoDB, on the other hand, takes a document-oriented approach: data is stored in JSON-like documents and can be accessed and processed with a variety of query types, including aggregation pipelines and document joins via $lookup (see the sketch after this list).

  5. Use Case Focus: Kafka and MongoDB have different use case focuses. Kafka is commonly used for building real-time streaming pipelines, messaging systems, event sourcing, and complex event processing. It excels at handling large amounts of data in motion, connecting disparate systems, and enabling data streaming architectures. MongoDB, by contrast, is widely used for content management systems, real-time analytics, customer data management, and Internet of Things (IoT) applications, where flexibility, scalability, and rich querying capabilities are desired.

  6. Ecosystem and Integrations: Finally, Kafka and MongoDB differ in their ecosystems and integrations. Kafka has a vast ecosystem of connectors, integrations, and tooling for data ingestion, processing, and integration with external systems like Apache Spark or Elasticsearch. MongoDB likewise has a mature ecosystem of drivers, libraries, and framework integrations across popular programming languages, making application development seamless.
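
To make the paradigm difference concrete, here is a minimal TypeScript sketch of the two models side by side, assuming a local Kafka broker on localhost:9092 and a local mongod; the topic, collection, and field names are illustrative, not defaults of either product.

```typescript
import { Kafka } from "kafkajs";
import { MongoClient } from "mongodb";

// Kafka side: append an event to a topic, then consume it as a stream.
async function kafkaExample() {
  const kafka = new Kafka({ clientId: "demo", brokers: ["localhost:9092"] });

  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "orders", // hypothetical topic
    messages: [{ key: "order-1", value: JSON.stringify({ total: 42 }) }],
  });

  const consumer = kafka.consumer({ groupId: "billing" });
  await consumer.connect();
  await consumer.subscribe({ topic: "orders", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      console.log("consumed:", message.value?.toString());
    },
  });
}

// MongoDB side: store a document, then query it back by a field.
async function mongoExample() {
  const client = new MongoClient("mongodb://localhost:27017");
  await client.connect();
  const orders = client.db("shop").collection("orders");

  await orders.insertOne({ orderId: "order-1", total: 42, status: "open" });
  const open = await orders.find({ status: "open" }).toArray();
  console.log("open orders:", open);

  await client.close();
}
```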

In summary, Kafka and MongoDB differ in scalability, data models, data persistence, data processing paradigms, use case focus, and ecosystem. Understanding these distinctions is crucial for choosing the right technology for the specific requirements of a project.

Advice on Kafka and MongoDB
Needs advice on MongoDB and PostgreSQL

I need urgent advice from you all! I am making a web-based food ordering platform which includes 3 different ordering methods (dine-in using QR code scanning + takeaway + home delivery) and a table reservation system. We are using React for the front end, and I need your advice on whether I should use NestJS or ExpressJS for the backend. And regarding the database, which should I use, MongoDB or PostgreSQL? Which combination will be better? PS: We want to follow a microservice architecture, as scalability, reliability, and usability are our most important non-functional requirements. Expert advice is needed, please. A load of thanks in advance. Kind regards, Miqdad

Replies (3)
Stephen Badger | Vital Beats
Senior DevOps Engineer at Vital Beats · 9 upvotes · 251.3K views
Recommends PostgreSQL
I can't speak to the NestJS vs ExpressJS discussion, but I can give a viewpoint on databases.

The main thing to consider around database choice is what "shape" the data will be in, and the kind of read/write patterns you expect of that data. The blog example shows up so often for DBMSs like MongoDB because it showcases what NoSQL / document storage is very scalable and performant at: mostly isolated documents with a few views / ways to order and filter them. In your case, I can imagine a number of "relations" already, which suggests a more traditional SQL solution would work well: you have restaurants, which have maybe a few menus (regular, gluten-free, etc.), with menu items that have different prices over time (25% discount on Christmas food just after Christmas, 50% off pizzas on Wednesdays). Then there's a whole different set of "relations" for people ordering, like showing them past orders, which need to refer to the restaurant, plus credit card transaction information for refunds. That to me suggests PostgreSQL, which will scale quite well if your database design is okay.

PostgreSQL also offers some extensions which are just amazing for your use case. https://postgis.net/ for example will let you query for restaurants based on location, without the big cost that comes from constantly using something like the Google Maps API to work out which restaurants are near to someone ordering. Partitioning and window functions will be great for your own internal use too, for answering questions like "What types of takeaways perform the best for us: Italian, Mexican?" or, in combination with PostGIS, "What kind of takeaways do we need to market to, to improve our selection?".
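
As an illustration of the PostGIS point, a minimal sketch using the node-postgres driver; the restaurants table, its geography column, and the connection string are assumptions made for the example.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/food" });

// Find restaurants within `radiusMeters` of a point, assuming a hypothetical
// `restaurants` table with a PostGIS geography(Point) column named `location`.
async function restaurantsNear(lon: number, lat: number, radiusMeters: number) {
  const { rows } = await pool.query(
    `SELECT id, name
       FROM restaurants
      WHERE ST_DWithin(
              location,
              ST_SetSRID(ST_MakePoint($1, $2), 4326)::geography,
              $3)`,
    [lon, lat, radiusMeters]
  );
  return rows;
}
```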

While these things can all be implemented in MongoDB, you tend to lose some of the convenience of ACID or have to deal with things like eventual consistency, which requires more thinking on the part of your engineers. PostgreSQL offers decent (if more complex) scalability and redundancy solutions, is honestly very well proven, and plenty of documentation exists on optimising queries.

Anis Zehani
Recommends MongoDB

Hello, I build microservice systems using Angular and Spring (Java), so I can't help with your backend choice. BUT I definitely advise you to use a NoSQL database — MongoDB, of course, or even Cassandra if you're looking for extreme scalability with no single point of failure. Anyway, NoSQL is much faster than SQL (in your case the PostgreSQL DB), and all you want to do with SQL can also be done with NoSQL (not the opposite, of course). I also advise you to use Docker containers + Kubernetes to orchestrate them if you need scalability and replication; that way your app can auto-scale in case your user numbers go high. Best of luck.

Carlos Iglesias

About PostgreSQL vs MongoDB: short answer, both are great. Choose what you like the most. Only if you expect millions of users would I lean toward MongoDB.

Needs advice on Elasticsearch, Kafka, and MongoDB

Hello StackShare. I'm currently doing some research on real-time reporting and analytics architectures. We have a use case with 1 million+ user records, 4 million+ activities, and messages that we want to report against. We started by presenting it directly from MySQL, which didn't go well and put a heavy load on the database. Can anybody suggest something we can feed the data into and report on in real time? I've read some articles about Elasticsearch and Kafka: https://medium.com/@D11Engg/building-scalable-real-time-analytics-alerting-and-anomaly-detection-architecture-at-dream11-e20edec91d33 EDIT: also considering Neo4j

Replies (4)
Daniel Zurawski
Technical Lead at SuperAwesome · 10 upvotes · 12.2K views

One of the reasons why your real-time reporting built on top of MySQL might not be performing so well is that you are most likely interested in aggregates (e.g. GROUP BY with SUM, AVG, TopN). In data warehousing there is a distinction between column-oriented and row-oriented databases — the key here is that in column-oriented DBMSs you access more precisely the data you need to answer a question, avoiding a scan of the entire table to calculate an answer. Most of the time, pre-aggregates can be calculated on insertion instead of at query time.

An excellent modern OLAP tool that I successfully used for many years to index events from Kafka at a staggering rate and query millions of events in less than a second is Apache Druid, an example of a distributed column-oriented data store. There are of course many more technologies out there for answering OLAP business intelligence questions, but personally I think you won't go very far with a traditional RDBMS, or a Lucene-based search engine like Elasticsearch, for building a business intelligence database over vast amounts of data.

"Apache Druid is an open-source data store designed for sub-second queries on real-time and historical data. It is primarily used for business intelligence (OLAP) queries on event data. Druid provides low latency (real-time) data ingestion, flexible data exploration, and fast data aggregation."

If you don't want to invest resources into deploying and hosting it yourself, there are other companies out there that can host it for you, but I will leave that up to you to research.

Here is an excellent article by my former work colleagues explaining how they implemented real-time analytics on top of Druid: https://medium.com/superawesome-engineering/how-we-use-apache-druids-real-time-analytics-to-power-kidtech-at-superawesome-8da6a0fb28b1. Also, I recommend reading through this HackerNews thread that talks in-depth about time-series databases: https://news.ycombinator.com/item?id=18403507.
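
For a sense of what querying Druid looks like, a small sketch against Druid's SQL-over-HTTP endpoint (typically served on port 8888 by the router); the datasource and column names are hypothetical, and Node 18+ is assumed for the global fetch.

```typescript
// Top users by activity over the last day, via Druid SQL over HTTP.
// `activities`, `userId`, and the port are assumptions for the example.
async function topActivities(): Promise<unknown[]> {
  const response = await fetch("http://localhost:8888/druid/v2/sql/", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `SELECT "userId", COUNT(*) AS activity_count
              FROM activities
              WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
              GROUP BY "userId"
              ORDER BY activity_count DESC
              LIMIT 10`,
    }),
  });
  return response.json();
}
```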

snid chakravarty
Data Engineer at Westpac · 3 upvotes · 9.4K views
Recommends Druid and KSQL

Given the nature of the application you're building, you might even consider setting up some KSQL streams. I recently finished a PoC establishing a streaming analytics pipeline with ksqlDB (both standalone and Confluent-supported), set up with Kafka Streams. They also have a headless deployment mode for production, which keeps your KSQL scripts pretty secure.
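
A rough sketch of what such a ksqlDB setup could look like, submitted through ksqlDB's REST API (default port 8088); the stream, topic, and column names are invented for the example.

```typescript
// Define a stream over a Kafka topic, then a continuously maintained
// per-minute aggregate table. All names here are illustrative.
async function createAggregation(): Promise<void> {
  const statements = `
    CREATE STREAM activities (user_id VARCHAR, action VARCHAR)
      WITH (KAFKA_TOPIC = 'activities', VALUE_FORMAT = 'JSON');

    CREATE TABLE activity_counts AS
      SELECT user_id, COUNT(*) AS activity_count
      FROM activities
      WINDOW TUMBLING (SIZE 1 MINUTE)
      GROUP BY user_id
      EMIT CHANGES;
  `;

  await fetch("http://localhost:8088/ksql", {
    method: "POST",
    headers: { "Content-Type": "application/vnd.ksql.v1+json" },
    body: JSON.stringify({ ksql: statements, streamsProperties: {} }),
  });
}
```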

Recommends Elasticsearch

One important thing in your design will be the arrival rate of new data.

Michał Kwieciński
CTO at Platforma Detalistów sp. z o.o. · 1 upvote · 4.7K views
Recommends Druid

I suppose your reporting will not be static. For live queries at a large scale, Druid is a popular tool. ClickHouse, Kylin, and Pinot are also worth considering.

Needs advice on MongoDB and MySQL

Hello, I am developing a new project with an internal chat between users. There are also complex relationships between the other project entities, but I would like to build something scalable and fast, and right now I am designing the data model. What kind of database would you recommend for managing all the application data? A relational one like MySQL, a non-relational one like MongoDB, or a mix of the two? Thank you

Replies (6)
Recommends PostgreSQL

In MongoDB, a write operation is atomic at the level of a single document, so it's harder to deal with consistency without transactions.
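
For the cases where single-document atomicity is not enough, MongoDB 4.0+ does offer multi-document transactions (a replica set is required). A minimal sketch with the Node driver; the collection and field names are illustrative.

```typescript
import { MongoClient } from "mongodb";

// Move credits between two accounts atomically across two documents.
async function transferCredits(
  client: MongoClient,
  from: string,
  to: string,
  amount: number
) {
  const accounts = client
    .db("app")
    .collection<{ _id: string; credits: number }>("accounts");

  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await accounts.updateOne({ _id: from }, { $inc: { credits: -amount } }, { session });
      await accounts.updateOne({ _id: to }, { $inc: { credits: amount } }, { session });
    });
  } finally {
    await session.endSession();
  }
}
```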

Recommends MongoDB

MongoDB supports horizontal scaling through sharding, distributing data across several machines and facilitating high-throughput operations on large sets of data. Sharding allows you to add additional instances to increase capacity when required.
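
A sketch of what enabling this looks like from the Node driver, using the admin commands that the mongosh sh.* helpers wrap; the database, collection, and key names are illustrative, and a cluster reached through mongos is assumed.

```typescript
import { MongoClient } from "mongodb";

// Enable sharding for a database and shard one collection by a hashed key,
// which spreads writes evenly across shards. Names are illustrative.
async function shardEvents(client: MongoClient) {
  const admin = client.db("admin");
  await admin.command({ enableSharding: "app" });
  await admin.command({
    shardCollection: "app.events",
    key: { userId: "hashed" },
  });
}
```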

Recommends ArangoDB

If you are working with "complex relationships", give ArangoDB and graph databases a chance to prove themselves. Their data structures allow this with faster and simpler queries. The database is not as strict as others and allows arbitrary data. The data model is really like a neural network, and you will never need foreign-key tables anymore. There is a free course on Udemy to get started.
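
A small sketch of the kind of query this enables, using the arangojs driver; the users vertex collection and knows edge collection are invented for the example.

```typescript
import { Database, aql } from "arangojs";

// Graph traversal: friends and friends-of-friends, no join tables needed.
async function friendsOfFriends(userKey: string) {
  const db = new Database({ url: "http://localhost:8529", databaseName: "app" });

  const cursor = await db.query(aql`
    FOR friend IN 1..2 OUTBOUND ${"users/" + userKey} knows
      RETURN DISTINCT friend.name
  `);
  return cursor.all();
}
```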

Kit Ruparel
Recommends Amazon Aurora

The most important question is where are you planning to host? On-premise, or in the cloud.

Particularly if you are planning to host in either AWS or Azure, your first port of call should be the PaaS (Platform as a Service) databases supplied by these vendors, as you will find yourself requiring a lot less effort to support them, with much easier disaster recovery options, and — depending on how pay-as-you-go the database you use is — potentially much lower costs than a dedicated database server.

Your question regarding 'relational or not' is obviously key, and you need to consider both your required data structure and the ACID requirements of your application model, as well as the non-functional requirements in terms of scalability and resilience, and whether you want security authorisation at the highest application tier or right down to 'row' level in the database. However, please don't fall into the trap of considering 'NoSQL' as a single category. MongoDB, with its document-store type solution, is a very different model from key-value-pair stores (like AWS DynamoDB), column stores (like AWS Redshift), entity graph stores for more complex data relationships (like AWS Neptune), or stores designed for tokenisation and text search (Elasticsearch).

Also critical in all this is how many items you believe you need to index by. RDBMS/SQL stores are great for having as many indexes as you want, other than the slow-down in write speed, whereas databases like Amazon DynamoDB provide blisteringly fast read/write performance, but are very limited on key indexing capabilities.

It feels like you have most experience with SQL/RDBMS technologies, so for the simplest learning curve, and if your application fits it, then I'd personally start by looking at AWS Aurora https://aws.amazon.com/rds/aurora/ .

Daniel Mwakanema
Software Developer at Kuunika - Data for Action · 2 upvotes · 595.3K views
Recommends MySQL

Firstly, it may help if you explain what you mean by "complex relationships between project entities". Secondly, you can build a fast and scalable solution using either. With that said, however, the data sounds relational, so I would recommend MySQL.

RODIALSON Tojo
FullStack Developer / CTO at O2Development · 2 upvotes · 595.7K views
Recommends MySQL

I think it depends on your project type and your skills. MySQL is good and simple for maintenance, but MongoDB needs more skills and knowledge. If you are working on a little project, use MySQL. For your project type, MySQL is enough, and you can migrate to PostgreSQL later if needed.

Prithvi Singh
Application Developer at Montaigne Smart Business Solutions · 8 upvotes · 848.9K views
Needs advice on MongoDB, MySQL, and PostgreSQL

I am going to work on a real estate project and have to decide on a database. Now, SQL databases can be very efficient if appropriately designed. More relations between the data and less redundancy. But with a #NoSQL database, the development time is reduced, and it is easy to query. Since this is my first time working on the real estate domain, I would like to pick a database that would be efficient in the long run.

Replies (4)
Aric Fedida
Founder, CTO at ASK Technologies Inc · 15 upvotes · 841.1K views
Recommends PostgreSQL

I recommend PostgreSQL as it's the most powerful of the 3 databases you mentioned. It supports JSON objects, so you can mimic MongoDB's functionality, but I would also argue that SQL is actually quite powerful and in many cases significantly easier to work with than NoSQL databases.

Stay away from foreign keys; keep it fast and simple. Define your data structures well in advance. Try to model your data structures based on your system's vision — where it's going — and not solely on what you currently need it to do. This will help you avoid drastic changes to your database after your system is launched. Populate the database with fake data and run tests. PostgreSQL allows you to create views from multiple tables; try to create those views and make sure you can easily build useful ones. Run EXPLAIN on those view queries to make sure you created your indexes correctly. Make sure it's fast!
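
A minimal sketch of the view-plus-EXPLAIN workflow described above, using node-postgres; the tables and columns are invented for a real-estate-flavoured example.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/realestate" });

// Create a view over several tables, then EXPLAIN ANALYZE a query against it
// to verify the indexes actually get used. All names are illustrative.
async function checkListingView() {
  await pool.query(`
    CREATE OR REPLACE VIEW listing_summary AS
    SELECT l.id, l.price, a.city, ag.name AS agent
      FROM listings l
      JOIN addresses a ON a.listing_id = l.id
      JOIN agents ag ON ag.id = l.agent_id
  `);

  const plan = await pool.query(
    "EXPLAIN ANALYZE SELECT * FROM listing_summary WHERE city = $1",
    ["Toronto"]
  );
  plan.rows.forEach((row) => console.log(row["QUERY PLAN"]));
}
```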

Matthew Rothstein
Recommends PostgreSQL

Any of those three databases are going to be efficient, scalable, and reliable in the long term if you configure and use them correctly. They all also have solid hosting solutions.

All things being equal, I would agree with other posters that Postgres is my preference among the three, but there are caveats.

MongoDB and MySQL have better support for multi-region replication in your big three cloud environments. Azure recently bought Citus Data, which was a best-in-class Postgres replication solution, so they might be the only one I trust to provide cross-region replication at the moment.

If you have a single region deployment and are on AWS, I can't recommend Aurora Postgres highly enough. It's a very good implementation and extremely performant.

Josh Dzielak
Co-Founder & CTO at Orbit · 4 upvotes · 836.5K views
Recommends PostgreSQL

I'll second another piece of advice: PostgreSQL's JSON columns are a dream when it comes to productivity, and I use them frequently with our Rails application. In these cases, no migration is required to change the schema. We store payloads with dozens or hundreds of keys, and performance has not been an issue. We also have a lot of relational tables, so the joins we get with SQL are very important to us and hard to replicate with a NoSQL solution.
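
A short sketch of the pattern using node-postgres: a jsonb payload column with a GIN index so containment queries stay fast. Table and key names are illustrative.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/app" });

// Schemaless payloads inside a relational table: no migration needed to add
// new keys to `payload`. Names here are invented for the example.
async function setupAndQuery() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS events (
      id      bigserial PRIMARY KEY,
      payload jsonb NOT NULL
    )
  `);
  await pool.query(
    "CREATE INDEX IF NOT EXISTS events_payload_idx ON events USING GIN (payload)"
  );

  await pool.query("INSERT INTO events (payload) VALUES ($1)", [
    { type: "signup", plan: "pro", utm: { source: "ads" } },
  ]);

  // @> is jsonb containment: find events whose payload includes this shape.
  const { rows } = await pool.query(
    "SELECT id, payload->>'plan' AS plan FROM events WHERE payload @> $1",
    [{ type: "signup" }]
  );
  return rows;
}
```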

Danilo Kaltner
Recommends PostgreSQL

That really depends on where you see your application in the long run. For any application, any of those choices is excellent. You could argue about good support for JSON binaries, but even MySQL has excellent support for that in its latest versions.

In the long run, when your application gets hundreds of thousands of requests per second, you might start thinking about how many inputs you will have in the database compared to outputs. PostgreSQL is excellent at giving you outputs, but table corruption can happen when you start receiving a massive number of inputs (which was the reason Uber switched from Postgres to MySQL).

On our Ops Platform at CTO.ai, we decided to use Postgres because we need a reliable and agile way to send output to our users, so that was our best choice in the long run for our product.

Needs advice on MongoDB and PostgreSQL

I am one of those who believes that MongoDB can be used for everything — thanks to MongoDB's advertising.

We are creating an e-commerce platform. We know it has many relationships; with MongoDB we can avoid some of them, but in the end some relationships have to exist.

We have a single developer creating two native applications in Flutter, a web application with React, and a backend of multiple microservices hosted on Google Cloud Run. PostgreSQL can be heavy because it is usually paired with an ORM; with MongoDB, by contrast, you can avoid some relationships and skip the ORM/ODM.

We need advice from someone who has the experience and has had to choose between these two databases for an e-commerce site.

Replies (4)
Recommends PostgreSQL

The real question here is not about the technology but rather your real needs and your data. Do you need to manage data that has core concepts and relations (such as a family, with parents and children), or do you need to manage a basic collection of similar data (such as blog entries)? PostgreSQL is definitely a relational database for managing entities and their relationships, whereas MongoDB (I may be strongly opinionated here ;-) ) is more targeted at managing collections of entities (such as the blog entries). For an e-commerce site (with products, product categories, user ratings and comments, prices, bundles...) I would go for PostgreSQL, as it will support and guide you in creating a structured data set with all your products, organized in categories and with user ratings/comments attached to them. HTH

Damián Gil
Advisor at Empresa En Crecimiento · 3 upvotes · 582.2K views
Recommends MongoDB

I am in your spot, exactly. A few months ago I had decided to use Postgres, because since version 9 it has shown a lot of progress as a high-availability database. However, frankly, I didn't want to model all the data statically, since I have several distinct schemas (like for different product types) and I wanted some flexibility to add or remove as I saw fit.

One of the main challenges in analyzing a NoSQL database when you are familiar with the SQL ways is that it's easy to look for "analogies" for what makes SQL useful — relationship enforcement, transactions, and the cascading effect on deletes, updates, and inserts — and that limits your vision a lot when analyzing a tool like Mongo, especially in a microservices pattern.

Nowadays, I have really found my solution in Mongo. Not just because it is NoSQL, but because of all the support I find in the Node.js community through packages and utilities that make it dead easy to use for several use cases. Whatever Postgres offers, Mongo does a little easier and better, like text search and geo-queries. What you need to do is model your data in a way that makes sense with Mongo. For instance, I've got a User service that has all the auth-related information of a user. But then I have the same user in the Profile service, with the same id but totally different fields.

You have two de facto ways to connect data, by reference and by embedding, and in e-commerce both have big uses — like using references to relate a User to a Profile, and an embed to relate a Product to an Order. There's even a third, albeit a little more "manual" to implement here: the graph relationship, in which you can easily model event-driven documents, like a Purchase that goes from "a customer" to "a store", which you can later use for much easier and deeper analytics than with the classical SQL stance. MariaDB has this readily available, and also has many improvements over MySQL and Postgres, especially for NoSQL features and scalability. Sadly it is just seen as a MySQL clone, but it offers more than that (although its documentation could be improved).

Using Mongo in a microservice environment is even better, because your models can be smaller, meaning less burden from relationships — and although you do compensate with a bit of duplication, a well-designed schema will keep that impact minimal. Any tool might do the job, but I want to cheer for the newer generation. Hope it helps.

Valeriy Bykanov
Founder, CEO at X1 Group · 3 upvotes · 583.8K views
Recommends PostgreSQL
Had exactly the same question when selecting data storage for our new product. Not e-commerce, though — rather an interactive, content-focused HR SaaS for SMEs.

The key arguments for PostgreSQL:

  • It gives you the opportunity to use relationships where you really need it and just go with key-value tables where you don't.

  • With the jsonb datatype you can store documents/objects/arrays as JSON, then use JSON elements in queries and even indexes.

  • There are more tools/integrations working with PostgreSQL which you can use out of the box, e.g. Hasura

Needs advice on MongoDB, MySQL, and PostgreSQL

Hello,

I am trying to design an online ordering app similar to DoorDash or Uber Eats. I'm having a hard time trying to finalise what database (or mixture of databases) to use. I'm leaning towards a relational database like MySQL or PostgreSQL, but when the application grows, I don't want to join 20 tables to get the data. Any help would be greatly appreciated. Thank you for your time.

Replies (2)
Rupen Makhecha
Recommends MySQL

Hello Suhas. We built our product www.voilacabs.com, which is along the same lines as yours, but we used a combination of MySQL and MongoDB. When using MySQL, I would recommend the following:

1. Use MySQL for storage only, and MongoDB for realtime updates.
2. Don't try to join more than 3 tables; the moment you reach 3 joins, stop there and try to denormalize the database instead.
3. Never, or very rarely, use auto-increments; we recommend using UUIDs instead. If you are using PostgreSQL, please check https://instagram-engineering.com/sharding-ids-at-instagram-1cf5a71e5a5c — there is a stored procedure that generates unique keys instead of auto-increment keys, which will help you shard or cluster the database without sync errors.
4. For MongoDB, if you can put a layer of Redis cache in front, that will boost your API performance under large loads.
5. Use Node.js as the programming language, as it works asynchronously.

Let me know if you still need any suggestions. Thanks & regards, Rupen Makhecha, CTO @ Voila Cab's, www.voilacabs.com
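
A sketch of point 3 (UUIDs instead of auto-increment) with the mysql2 driver; it assumes MySQL 8.0.13+ for the expression default, and the table and columns are invented for the example.

```typescript
import mysql from "mysql2/promise";

// UUID primary keys instead of auto-increment, stored as BINARY(16) to keep
// the index compact. Table and column names are illustrative.
async function createAndInsert() {
  const conn = await mysql.createConnection({ host: "localhost", database: "cabs" });

  // Expression defaults require MySQL 8.0.13+ (note the parentheses).
  await conn.query(`
    CREATE TABLE IF NOT EXISTS orders (
      id       BINARY(16) PRIMARY KEY DEFAULT (UUID_TO_BIN(UUID())),
      rider_id BINARY(16) NOT NULL,
      total    DECIMAL(10, 2) NOT NULL
    )
  `);

  await conn.execute(
    "INSERT INTO orders (rider_id, total) VALUES (UUID_TO_BIN(?), ?)",
    ["c6f8e9a0-1b2c-4d3e-8f90-123456789abc", 12.5]
  );

  await conn.end();
}
```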

Rafey Iqbal Rahman
Recommends MySQL

I would recommend a mixture of MySQL and MongoDB. Using MongoDB for the Content Distribution Network (CDN) will make it easy to store high-volume incoming data. MySQL is recommended for the business logic. PostgreSQL is not recommended, since you would face inefficient database replication features and constant migration from one PostgreSQL version to another.

Needs advice on IndexedDB, MongoDB, and PostgreSQL

I'm currently developing an app that ranks trending stuff (such as games, memes, or movies) or events in a particular country or region. Here are the specs: my app does not require registration; it uses cookies and localStorage to track users. Users can add new entries to each trending category, provided that their country of origin is recorded in cookies. If a category contains more than 100 items, the oldest items get deleted. The question is: what kind of database should I use for managing this app? Thanks in advance

Replies (1)
Recommends MongoDB

I think your best and cheapest choice is going to be MongoDB. Although Postgres is probably the more scalable approach, you likely have a good idea of how you want to present your data, and the app seems small enough that you shouldn't need to worry about scaling issues. It also sounds like your app can grow in a linear capacity based on the number of users and the amount of data, which is the perfect use case for NoSQL databases (linear, predictable scaling).

Correct me if I have any of these assumptions wrong:

1. You're looking at a relatively high read volume with a lower write volume.
2. Your app is essentially a list of objects that can belong to a category.
3. Users can create objects in this list.

I think Mongo is going to be what you're looking for, on the following basis:

1. You absolutely need a database that is shared by all users of your app, therefore IndexedDB is out of the question.
2. You have semi-structured data.
3. You probably want the cheapest solution.

I think Postgres is wrong for the following reasons:

1. Your app is pretty simple in concept; SQL databases will add unnecessary complexity to your system, either through ORMs or SQL queries (use an ORM if you go with SQL).
2. Hosting SQL databases for production is not cheap! The cheapest solution I know of for Postgres is ElephantSQL. It provides 20 MB for free with 5 concurrent connections; you should be okay managing those limitations if you decide to go with Postgres in the end. MongoDB Atlas, meanwhile, has some great free-tier options.

Although your data might be easier to model in Postgres, you can certainly model it as a single list of items that each have a category attached.
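
A sketch of that single-collection model with the Node MongoDB driver, including the "keep only the newest 100 per category" rule from the question; names are illustrative.

```typescript
import { MongoClient, ObjectId } from "mongodb";

interface Trend {
  _id?: ObjectId;
  category: string;
  country: string;
  title: string;
  createdAt: Date;
}

// Insert a new entry, then trim the category to its 100 newest entries.
async function addTrend(client: MongoClient, entry: Omit<Trend, "_id">) {
  const trends = client.db("app").collection<Trend>("trends");
  await trends.insertOne(entry);

  // Everything past the 100 newest in this category gets deleted.
  const stale = await trends
    .find({ category: entry.category })
    .sort({ createdAt: -1 })
    .skip(100)
    .project({ _id: 1 })
    .toArray();

  if (stale.length > 0) {
    await trends.deleteMany({ _id: { $in: stale.map((d) => d._id) } });
  }
}
```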

I don't want to officially recommend another tool, but you should really check out Prisma, Firebase, Amplify, or Azure App Services for this app! Just go completely backend-less: [Firebase] https://firebase.google.com/ [Amplify] https://aws.amazon.com/amplify/ [Prisma] https://www.prisma.io/ [Azure App Services] https://azure.microsoft.com/en-us/services/app-service/?v=18.51

Needs advice on MongoDB and PostgreSQL

Hi everybody, I'm developing an application to be used in a gym setting, where athletes fill out a health survey and coaches analyze the results. However, due to the dynamic nature of some aspects of the app and the more static nature of others, I am wondering if/how I should integrate MongoDB with my existing PostgreSQL database. I would like to store things like registrations, license information, and club information in Postgres, while moving things like user surveys, logging, and user settings over to MongoDB. Some fields on the survey are integers, some are large blocks of text, and some are arrays. My thought is that if I moved that data to MongoDB, it would give us greater flexibility in adding and removing fields, it would scale more easily than Postgres, and it would be easier to organize that kind of data. Is that overkill, or am I approaching this issue the right way? Thank you!

Replies (4)
Brian Ploetz
Recommends PostgreSQL

You can have your cake and eat it too. If you really need the flexibility of a document store, PostgreSQL's JSONB support allows you to mix and match relational data and document data within the same database/table. You can just as easily run analytical queries against JSONB data in PostgreSQL as you can against "normal" relational data. MongoDB comes with significant operational overhead and cost (hello, replica sets), so unless you really need MongoDB's sharding capabilities (which you shouldn't until you get to extreme scaling numbers), just stick with PostgreSQL and use JSONB where you need it.

Recommends MongoDB

With PostgreSQL you could easily integrate JSON or array-type columns and develop a simple interface for adding columns in your application. However, handling all the data this way will require some intermediate skill with the PostgreSQL dialect and a mix-and-match of syntaxes for your analytical queries, and you will need a good backend design to handle it all. MongoDB handles all of this in a more natural way, and I believe it integrates more easily with a Node.js backend.

Max Musing
Founder & CEO at BaseDash · 4 upvotes · 394.4K views
Recommends PostgreSQL

How are you managing your PostgreSQL schema? It doesn't have to be hard to add or remove fields. We're working on a SQL database client at BaseDash that lets you add/remove columns in a couple of clicks.

If you decide to migrate some of your data to MongoDB, you can definitely manage the two databases in parallel. For any records that need to be linked, you can treat it just like a foreign key by creating a column that points to an ID in the other database. For example, you might store user settings in MongoDB, and include a UserId field that points to your User record in your Postgres database.
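
A minimal sketch of that cross-database link, with pg on one side and the MongoDB Node driver on the other; table, collection, and field names are illustrative.

```typescript
import { Pool } from "pg";
import { MongoClient } from "mongodb";

const pg = new Pool({ connectionString: "postgres://localhost/app" });
const mongo = new MongoClient("mongodb://localhost:27017");

// The settings document in Mongo carries the Postgres user id, acting as a
// manually maintained "foreign key" across the two databases.
async function getUserWithSettings(userId: number) {
  const { rows } = await pg.query(
    "SELECT id, email FROM users WHERE id = $1",
    [userId]
  );
  const user = rows[0];

  const settings = await mongo
    .db("app")
    .collection("user_settings")
    .findOne({ userId }); // points back at users.id in Postgres

  return { ...user, settings };
}
```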

Recommends PostgreSQL

Those types of things should fit fine in a Postgres JSON column. You'll actually have more flexibility with Postgres, because a field can live as a normal column or inside a json column, and you can have constraints and indexes on fields within a json column (or not).
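
A short sketch of both tricks — an expression index and a CHECK constraint on fields inside a json column — using node-postgres; the surveys table and its keys are invented for the example, and this is one-time setup (the ADD CONSTRAINT will error if re-run).

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/gym" });

async function hardenSurveys() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS surveys (
      id         bigserial PRIMARY KEY,
      athlete_id bigint NOT NULL,  -- a normal column
      data       jsonb NOT NULL    -- free-form survey answers
    )
  `);

  // Index a field inside the json document (note the double parentheses
  // required around the expression).
  await pool.query(
    "CREATE INDEX IF NOT EXISTS surveys_sleep_idx ON surveys (((data->>'sleep_hours')::int))"
  );

  // Constrain a json field just like a regular column.
  await pool.query(`
    ALTER TABLE surveys
      ADD CONSTRAINT sleep_hours_range
      CHECK ((data->>'sleep_hours')::int BETWEEN 0 AND 24)
  `);
}
```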

Needs advice on InfluxDB, MongoDB, and TimescaleDB

We are building an IoT service with heavy write throughput and fewer reads (we need downsampled records). We want good reliability for the data and prefer policy-based data retention.

So we are looking for the best underlying DB for ingesting a lot of data and querying it easily.

Replies (3)
Yaron Lavi
Recommends PostgreSQL

We had a similar challenge. We started with DynamoDB, Timescale, and even InfluxDB and Mongo, to eventually settle on PostgreSQL. Assuming the inbound data pipeline is queued (for example, Kinesis/Kafka -> S3 -> some Lambda functions), PostgreSQL gave us better performance by far.
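
If you do land on Postgres, the TimescaleDB extension the question mentions layers policy-based retention and easy downsampling on top of it. A rough sketch via node-postgres; the readings table and the 30-day policy are assumptions for the example, and the setup statements are one-time.

```typescript
import { Pool } from "pg";

const pool = new Pool({ connectionString: "postgres://localhost/iot" });

async function setupReadings() {
  await pool.query(`
    CREATE TABLE IF NOT EXISTS readings (
      ts        timestamptz NOT NULL,
      device_id text        NOT NULL,
      value     double precision
    )
  `);

  // Turn the plain table into a time-partitioned hypertable.
  await pool.query(
    "SELECT create_hypertable('readings', 'ts', if_not_exists => TRUE)"
  );

  // Policy-based retention: drop raw data older than 30 days.
  await pool.query("SELECT add_retention_policy('readings', INTERVAL '30 days')");

  // Downsample on read: hourly averages per device.
  const { rows } = await pool.query(`
    SELECT time_bucket('1 hour', ts) AS bucket,
           device_id,
           avg(value) AS avg_value
      FROM readings
     GROUP BY bucket, device_id
     ORDER BY bucket
  `);
  return rows;
}
```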

Recommends Druid

Druid is amazing for this use case. It is a cloud-native solution that can be deployed on any cloud infrastructure or on Kubernetes:

  • Easy to scale horizontally
  • Column-oriented database
  • SQL to query data
  • Streaming and batch ingestion
  • Native search indexes

It can serve as a time-series DB or a data warehouse, and has time-optimized partitioning.

Ankit Malik
Software Developer at CloudCover · 3 upvotes · 321.5K views
Recommends Google BigQuery

If you want a serverless solution with a lot of storage capacity and SQL-style querying, then Google BigQuery is the best solution for that.

Needs advice on Kafka, RabbitMQ, and Redis

We are going to develop a microservices-based application. It consists of AngularJS, ASP.NET Core, and MSSQL.

We have 3 types of microservices: Emailservice, Filemanagementservice, and Filevalidationservice.

I am a beginner in microservices. I have read about RabbitMQ, but have come to know that Redis and Kafka are also on the market. So, I want to know which is best.

Replies (4)
Maheedhar Aluri
Recommends Kafka

Kafka is an enterprise messaging framework, whereas Redis is an enterprise cache broker and a high-performance, in-memory database. Both have their own advantages, but they differ in usage and implementation. Now, if you are creating microservices, check the user consumption volumes, the logs being generated, scalability, the systems to be integrated, and so on. For your scenario, I feel you can initially go with Kafka, and as the throughput, consumption, and other factors scale up, you can gradually add Redis accordingly.

Recommends Angular

I first recommend that you choose Angular over AngularJS if you are starting something new, as AngularJS is no longer getting enhancements — but perhaps you meant Angular. Regarding microservices, I recommend considering them when you have different development teams for each service that may want to use different programming languages and backend data stores. If it is all the same team, the same language, and the same data store, I would not use microservices. I might use a message queue, in which case RabbitMQ is a good one. But you may also be able to simply write your own, in which you write a record into a table in MSSQL and one of your services reads the record from the table and processes it. The most challenging part of doing it yourself is writing a service that does a good job of reading the queue without processing the same message multiple times or missing one, and that is where RabbitMQ can help.
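
For the write-your-own option, a common T-SQL trick is to dequeue with READPAST, so competing workers skip rows another worker has locked instead of double-processing them. A rough sketch with the mssql Node package; the queue table and connection settings are invented, and plain DELETE TOP (1) makes no strict FIFO guarantee.

```typescript
import sql from "mssql";

// Atomically claim one message: the DELETE both removes and returns the row,
// and READPAST makes other workers skip it while it is locked.
async function dequeueEmail(pool: sql.ConnectionPool) {
  const result = await pool.request().query(`
    DELETE TOP (1) FROM dbo.EmailQueue WITH (ROWLOCK, READPAST)
    OUTPUT DELETED.Id, DELETED.Payload
  `);
  return result.recordset[0]; // undefined when the queue is empty
}

async function main() {
  const pool = await sql.connect({
    server: "localhost",
    database: "app",
    user: "app_user",
    password: "change-me", // placeholder credentials for the sketch
    options: { trustServerCertificate: true },
  });

  const message = await dequeueEmail(pool);
  if (message) {
    console.log("processing", message.Id, message.Payload);
  }
}
```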

Amit Mor
Software Architect at Payoneer · 3 upvotes · 769.1K views
Recommends Kafka

I think something is missing here, and you should consider answering it for yourself. You are building a couple of services — why are you considering an event-sourcing architecture using message brokers such as the above? Won't a simple REST-based service architecture suffice? Read about CQRS and the problems it entails (state vs. command impedance, for example). Do you need pub/sub or push/pull? Is queuing of messages enough, or would you need querying or filtering of messages before consumption? Also, someone has to manage these brokers (unless you use a managed, cloud-provider-based solution), automate their deployment, take care of backups, clustering if needed, disaster recovery, and so on.

I have good past experience with the manageability/DevOps of Kafka and Redis, not so much with RabbitMQ. Both are very performant. But note that Redis is not a pure message broker (at the time of writing) but more of a general-purpose in-memory key-value store, while Kafka nowadays is much more than a distributed message broker.

Long story short: for my taste, you should go with a minimalistic approach and try to avoid both if you can, especially if your architecture does not fall nicely into event sourcing. If not, I'd examine Kafka. If you need more capabilities, I'd consider Redis and use it for all sorts of other things such as a cache.

Recommends NATS

We found that the CNCF landscape is a good advisor when going into the cloud / microservices space: https://landscape.cncf.io/fullscreen=yes. When choosing a technology, one important criterion for me is whether it is cloud native or not. Neither Redis, RabbitMQ, nor Kafka is cloud native. They try to adapt, but they will eventually be replaced by technologies that are.

We have gone with NATS and have never looked back. We haven't spent a single minute on server maintenance in the last year, and the setup of a cluster is way too easy. With the new features NATS incorporates now (and the ones still on the roadmap), it already is — and will be — so much more than Redis, RabbitMQ, and Kafka are. It can replace service discovery, load balancing, global multi-clusters and failover, etc., etc.

Your thought might be: but I don't need all of that! Well, at the same time it is much more lightweight than Redis, RabbitMQ, and especially Kafka.
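
For a feel of the API, a minimal NATS publish/subscribe round trip with the nats Node package; the subject name and server address are illustrative.

```typescript
import { connect, StringCodec } from "nats";

async function demo() {
  const nc = await connect({ servers: "localhost:4222" });
  const sc = StringCodec();

  // Subscribe and print messages as they arrive.
  const sub = nc.subscribe("file.validated");
  (async () => {
    for await (const msg of sub) {
      console.log("received:", sc.decode(msg.data));
    }
  })();

  // Publish a message on the same subject.
  nc.publish("file.validated", sc.encode(JSON.stringify({ fileId: "abc" })));

  await nc.drain(); // flush pending messages and close
}
```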

Decisions about Kafka and MongoDB
Sergey Rodovinsky

At Pushnami we were looking at several alternative databases that would support the following architectural requirements:

  • very quick prototyping for an unknown domain
  • ability to support large amounts of data
  • native ability to replicate and fail over
  • a full-stack approach for Node.js development

After careful consideration, MongoDB came out on top, and 3 years later we are still very happy with that decision. Currently we keep almost 2 TB of data in our cluster and are starting to think about sharding.

Gabriel Pa

After using Couchbase for over 4 years, we migrated to MongoDB, and that was the best decision ever! I'm very disappointed with Couchbase's technical performance. Even though we received enterprise support and were a listed Couchbase partner, the experience was horrible. With every contact, the sales team was trying to get me onto a $7k+ license for access to features all other open-source NoSQL databases get for free.

Here's why you should not use Couchbase

Full-text search: the full-text search often returns a different number of results if you run the same query multiple times.

N1QL queries: configuring the indexes correctly is next to impossible. It's poorly documented, and nobody seems to know what to do — even the Couchbase support engineers have no clue what they are doing.

Community support: I posted several problems on the forum and never once received a useful answer.

Enterprise support: very expensive, $7k+. The team constantly tried to get me to buy, even though the community edition wasn't working great.

Autonomous Operator: it's actually just a poorly configured Kubernetes role that, no matter what I did, I couldn't get to work. The support team was useless, with the same lack of documentation. If you do get it to work, you need at least 6 servers to meet their minimum requirements.

Couchbase Cloud: typical for Couchbase, the user experience is awful and I could never get it to work.

Minimum requirements: the minimum requirement in production is 6 servers. On AWS the calculated monthly cost would be ~$600. We achieved better performance using a $16 MongoDB instance on MongoDB Atlas.

Writing queries is a nightmare: while N1QL is similar to SQL and supposedly easier to write because of the familiarity, that isn't entirely true. The "smart index" that Couchbase advertises is not smart at all. If you create an index with 5 fields and then only use 4 of them, Couchbase won't use the same index, so you have to create a new one.

Couchbase UI: the UI that comes with every database deployment is full of bugs and barely functional, and the developer experience is poor. When I asked Couchbase about it, they basically said they don't care, because real developers use SQL directly from code.

Consumes too much RAM: Couchbase ships with a smaller Memcached instance to handle the in-memory cache. Memcached ends up using 8 GB of RAM for 5,000 documents! I'm not kidding: we had fewer than 5,000 docs and fewer than 20 indexes on a Couchbase instance, and RAM consumption was always over 8 GB.

Memory allocations are useless: I asked the Couchbase team a question: if a bucket has 1 GB allocated, what happens when I have more than 1 GB stored? Does it overflow? Does it cache somewhere? Do I get an error? I always received the same answer: if you buy Couchbase Enterprise, then we can guide you.

Omran Jamal
CTO & Co-founder at Bonton Connect · 4 upvotes · 521.2K views

We actually use both Mongo and SQL databases in production. Mongo excels in both speed and developer friendliness when it comes to geospatial data and queries on that data, but we also like ACID compliance, hence most of our other data (except on-site logs) is stored in a SQL database (MariaDB for now).

Kyle Harrison
Web Application Developer at Fortinet · 11 upvotes · 905.6K views

MySQL has a lot of strengths working for it. It's simple and easy to set up and use, and its JSON engine is also really good these days. Mongo is also simple to set up and use, and its speed as a document-object storage engine is first class.

Where Postgres beats both is in combining the features that make MySQL and Mongo great while adding enterprise-grade scalability and replication on top. It's Postgres' stability and robustness, while still fulfilling the roles of its contemporaries extremely well, that gives it the edge for me.


When I was new to web development, I used PHP for the backend and MySQL for the database. But after improving my JS skills, I chose Node.js, for many reasons: npm, Express, the community, fast coding, etc. MongoDB pairs very well with Node.js. If your JS skills are good enough, I recommend migrating to Node.js and MongoDB.

David Österreicher

Easier scalability of MongoDB prompted this migration from MySQL.

As Runtastic grew, at some point it would have outgrown our MySQL installation. We looked for a couple of alternatives and found MongoDB as a great replacement for our use case. Read how a migration of live data from one database to another worked for us.

Chose MongoDB over MySQL

My data was inherently hierarchical, but there was not enough content in each level of the hierarchy to justify a relational DB (SQL) with a one-to-many approach. It was also far easier to share data between the frontend (Angular), backend (Node.js), and DB (MongoDB), as they all pass around JSON natively, which allowed me to skip the translation layer from relational to hierarchical. You do need to think about correct indexes in MongoDB, and make sure your objects have finite size — for instance, an object in your DB shouldn't have a property which is an array that grows over time without limit. In addition, I did use MySQL for other types of data, such as a catalog of products which (a) has a lot of data, (b) is flat rather than hierarchical, and (c) needs very fast queries.
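
On keeping embedded arrays finite: a small sketch of the usual $push-with-$slice idiom in the Node driver, which caps an array at its newest 100 entries; the collection and field names are illustrative.

```typescript
import { MongoClient } from "mongodb";

interface UserActivity {
  _id: string;
  events: { at: Date }[];
}

// Append an event but keep only the newest 100, so the document's embedded
// array never grows without limit.
async function recordEvent(
  client: MongoClient,
  userId: string,
  event: Record<string, unknown>
) {
  await client
    .db("app")
    .collection<UserActivity>("user_activity")
    .updateOne(
      { _id: userId },
      {
        $push: {
          events: {
            $each: [{ ...event, at: new Date() }],
            $slice: -100, // negative slice keeps the last (newest) 100
          },
        },
      },
      { upsert: true }
    );
}
```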


We used Mongo for the first iterations of our app, but the relational nature of our data was an awkward fit for a database that is not relational. We sorely lacked relational database integrity features, which had to be reimplemented on the application side (poorly), and it was a huge relief when we managed to port our application over to Postgres, which performs great and never gives us trouble, while having very user-friendly extensions like JSON and pub/sub that made the transition easy.


We wanted a JSON datastore that could save the state of our bioinformatics visualizations without destructive normalization. As a leading NoSQL data storage technology, MongoDB has been a perfect fit for our needs. Plus it's open source, and has an enterprise SLA scale-out path, with support for hosted solutions like Atlas. Mongo has been an absolute champ — so much so that SQL Server and Oracle have begun shipping JSON column types as a new feature for their databases. And when Fast Healthcare Interoperability Resources (FHIR) announced support for JSON, we basically had our FHIR data lake technology.


In the field of bioinformatics, we regularly work with hierarchical and unstructured document data. Unstructured text data from PDFs, image data from radiographs, phylogenetic trees and cladograms, network graphs, streaming ECG data... none of it fits into a traditional SQL database particularly well. As such, we prefer to use document oriented databases.

MongoDB is probably the oldest component in our stack besides JavaScript, having been in it for over 5 years. At the time, we were looking for a technology that could simply cache our data visualization state (stored in JSON) in a database as-is, without any destructive normalization. MongoDB was the perfect tool and has been exceeding expectations ever since.

Trivia fact: some of the earliest electronic medical records (EMRs) used a document-oriented database called MUMPS as early as the 1960s, prior to the invention of SQL. MUMPS is still in use today in systems like Epic and VistA, and stores upwards of 40% of all medical records at hospitals. So we saw MongoDB as something of a 21st-century version of the MUMPS database.

Pros of Kafka
  • 126
    High-throughput
  • 119
    Distributed
  • 92
    Scalable
  • 86
    High-Performance
  • 66
    Durable
  • 38
    Publish-Subscribe
  • 19
    Simple-to-use
  • 18
    Open source
  • 12
    Written in Scala and Java; runs on the JVM
  • 9
    Message broker + Streaming system
  • 4
    KSQL
  • 4
    Avro schema integration
  • 4
    Robust
  • 3
    Supports multiple clients
  • 2
    Extremely good parallelism constructs
  • 2
    Partitioned, replayable log
  • 1
    Simple publisher / multi-subscriber model
  • 1
    Fun
  • 1
    Flexible
Pros of MongoDB

  • 827
    Document-oriented storage
  • 593
    NoSQL
  • 553
    Ease of use
  • 464
    Fast
  • 410
    High performance
  • 257
    Free
  • 218
    Open source
  • 180
    Flexible
  • 145
    Replication & high availability
  • 112
    Easy to maintain
  • 42
    Querying
  • 39
    Easy scalability
  • 38
    Auto-sharding
  • 37
    High availability
  • 31
    Map/reduce
  • 27
    Document database
  • 25
    Easy setup
  • 25
    Full index support
  • 16
    Reliable
  • 15
    Fast in-place updates
  • 14
    Agile programming, flexible, fast
  • 12
    No database migrations
  • 8
    Easy integration with Node.Js
  • 8
    Enterprise
  • 6
    Enterprise Support
  • 5
    Great NoSQL DB
  • 4
    Support for many languages through different drivers
  • 3
    Drivers support is good
  • 3
    Aggregation Framework
  • 3
    Schemaless
  • 2
    Fast
  • 2
    Managed service
  • 2
    Easy to Scale
  • 2
    Awesome
  • 2
    Consistent
  • 1
    Good GUI
  • 1
    ACID compliant


Cons of Kafka
  • 32
    Non-Java clients are second-class citizens
  • 29
    Needs Zookeeper
  • 9
    Operational difficulties
  • 5
    Terrible Packaging
Cons of MongoDB

  • 6
    Very slow for connected models that require joins
  • 3
    Not ACID compliant
  • 1
    Proprietary query language


What is Kafka?

Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design.

What is MongoDB?

MongoDB stores data in JSON-like documents that can vary in structure, offering a dynamic, flexible schema. MongoDB was also designed for high availability and scalability, with built-in replication and auto-sharding.




What are some alternatives to Kafka and MongoDB?
ActiveMQ
Apache ActiveMQ is fast, supports many cross-language clients and protocols, and comes with easy-to-use enterprise integration patterns and many advanced features, while fully supporting JMS 1.1 and J2EE 1.4. Apache ActiveMQ is released under the Apache 2.0 License.
RabbitMQ
RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received.
Amazon Kinesis
Amazon Kinesis can collect and process hundreds of gigabytes of data per second from hundreds of thousands of sources, allowing you to easily write applications that process information in real-time, from sources such as web site click-streams, marketing and financial information, manufacturing instrumentation and social media, and operational logs and metering data.
Apache Spark
Spark is a fast and general processing engine compatible with Hadoop data. It can run in Hadoop clusters through YARN or Spark's standalone mode, and it can process data in HDFS, HBase, Cassandra, Hive, and any Hadoop InputFormat. It is designed to perform both batch processing (similar to MapReduce) and new workloads like streaming, interactive queries, and machine learning.
Akka
Akka is a toolkit and runtime for building highly concurrent, distributed, and resilient message-driven applications on the JVM.