InfluxDB vs MySQL: What are the differences?
InfluxDB and MySQL are two popular database management systems. Let's explore the key differences between them:
Data Model: InfluxDB is a time series database designed to handle large amounts of data that is timestamped. It is optimized for storing and analyzing time-based data, making it ideal for applications that require tracking and monitoring various metrics over time. On the other hand, MySQL is a relational database management system (RDBMS) that organizes data into tables and supports relational operations among the tables. It is suitable for applications with complex relationships between different types of data.
Scalability and Performance: InfluxDB is built to handle high write and query loads efficiently. It can handle large volumes of time series data with minimal impact on performance. With clustering and sharding capabilities (available in its commercial offering), InfluxDB can scale horizontally to handle increasing data ingest rates. By contrast, while MySQL can handle moderate workloads effectively, its performance can degrade when handling massive amounts of data, and scaling MySQL horizontally across multiple servers is a more complex task than with InfluxDB.
Query Language: InfluxDB uses InfluxQL, a SQL-like query language specifically designed for time series data. It provides specialized functions and syntax for working with timestamps, intervals, and aggregations, allowing efficient filtering and manipulation of time series data. MySQL, on the other hand, supports SQL, a widely used language for querying and managing relational databases. SQL provides a broad range of features for complex joins, subqueries, and data relationships. (A short sketch contrasting the two query styles follows the summary below.)
Data Storage: InfluxDB stores data in a compressed, columnar format optimized for time series data. It uses a Time-Structured Merge tree (TSM), a log-structured-merge-style data structure, which provides high write throughput and efficient data compaction. MySQL, on the other hand, uses a row-based storage format in its common storage engines such as InnoDB; row-based storage is more suitable for handling relational data with complex relationships.
Data Retention Policies: InfluxDB provides a flexible mechanism for defining data retention policies. It allows users to specify how long data should be retained and at what precision, enabling automatic handling of data retention and efficient storage management. In contrast, MySQL has no built-in mechanism for data retention policies; it is up to the user to manage data retention through manual deletion or archiving procedures. (The sketch after the summary below includes a retention-policy example.)
Use Cases: InfluxDB is commonly used in applications that require continuous monitoring and analysis of time series data, such as IoT sensor data, application metrics, and financial trading data. Its ability to handle high write and query loads makes it suitable for real-time monitoring and analytics. On the other hand, MySQL is widely used for general-purpose applications that require complex relational data modeling, including e-commerce, content management systems, and enterprise resource planning (ERP) systems.
In summary, InfluxDB is a time series database optimized for handling large volumes of timestamped data with efficient write and query performance. It provides a specialized query language and flexible data retention policies. On the other hand, MySQL is a relational database management system suitable for applications with complex relationships and general-purpose data modeling needs.
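To make the query-language and retention-policy differences above concrete, here is a minimal Python sketch. The measurement/table name ("cpu"), field names ("usage_pct", "host", "ts"), and database name ("metrics") are hypothetical, and it assumes the `influxdb` and `mysql-connector-python` client packages with reachable servers; treat it as an illustration under those assumptions, not a definitive implementation.

```python
from influxdb import InfluxDBClient
import mysql.connector

# --- InfluxDB: time bucketing and retention are built in --------------------
influx = InfluxDBClient(host="localhost", port=8086, database="metrics")

# InfluxQL: GROUP BY time() handles the 5-minute bucketing.
influxql = (
    "SELECT MEAN(usage_pct) FROM cpu "
    "WHERE time > now() - 1h GROUP BY time(5m), host"
)
print(list(influx.query(influxql).get_points()))

# Retention policy: keep raw points 30 days, then InfluxDB drops them itself.
influx.create_retention_policy(
    name="thirty_days", duration="30d", replication=1,
    database="metrics", default=True,
)

# --- MySQL: the same bucketing is spelled out on a timestamp column ---------
conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="metrics"
)
cur = conn.cursor()
cur.execute(
    "SELECT host, "
    "       FROM_UNIXTIME(FLOOR(UNIX_TIMESTAMP(ts) / 300) * 300) AS bucket, "
    "       AVG(usage_pct) "
    "FROM cpu "
    "WHERE ts > NOW() - INTERVAL 1 HOUR "
    "GROUP BY host, bucket"
)
print(cur.fetchall())

# Retention in MySQL is typically a DELETE you schedule yourself (cron/EVENT).
cur.execute("DELETE FROM cpu WHERE ts < NOW() - INTERVAL 30 DAY")
conn.commit()
```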
Hello everyone. We have a project that is like a candidate tracking system. It has candidates, projects, assessments, etc. A senior consultant developer started it using MongoDB. The thing is that he designed the database as if it were a relational DB.
Personally, I didn't think that was a good idea, because you lose the power of SQL functionality like joins, ON DELETE behaviour, and more. You have to be very careful; I think things can become unmaintainable very fast. I asked him about this and he said, "I don't see a problem doing it like this."
What are your thoughts on this? Did you see examples like this? Should I avoid it or go for it? Any advice is appreciated.
Here is what it looks like in Moon Modeler: https://imgur.com/a/RNwNBNY
I've seen this happen: you end up constructing a relational schema in MongoDB. It is not good. You don't use the modeling benefits of MongoDB, and you don't get the benefits of SQL either. So I recommend moving it into MySQL. Since you think in a relational way, it is best you move to MySQL.
Specifically, do you need non-normalized data? If not, MySQL is best. Otherwise, MongoDB is best. If you think non-relationally, you do not need joins, and the problems with cascades disappear.
What is the best way to think? If you work in terms of a whole tree of related objects, then you are thinking non-relationally and non-normalized.
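A hedged sketch of that "whole tree of related objects" idea, using the candidate-tracking domain from the question: the candidate, their project links, and assessments live in one document instead of three joined tables. Collection and field names are invented for illustration; it assumes pymongo and a local mongod.

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["ats"]

# The whole aggregate is one document: no joins needed to read it back.
db.candidates.insert_one({
    "name": "Jane Doe",
    "projects": [{"title": "Backend Hire Q3", "stage": "interview"}],
    "assessments": [{"kind": "coding", "score": 87}],
})

# One read returns the tree -- but there is also no ON DELETE cascade:
# removing a project from every candidate means updating many documents yourself.
candidate = db.candidates.find_one({"name": "Jane Doe"})
print(candidate["assessments"])
```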
It makes no sense to use MongoDB primarily as a relational database. As you scale, MongoDB will be more expensive than SQL and, as you said, without the advantages of joins, etc.
We use MongoDB in our company. It is useful for us, as we work with different types of devices, and we love being able to add fields whenever we have a new device type, etc. Mongo also enables easy scaling and fault tolerance. However, you will have to learn how to manage it.
If you are already comfortable with SQL and don't need NoSQL, stick to SQL. At scale, it is cheaper than Mongo.
I have been using Firebase with almost all my web projects as well as SwiftUI projects. I use it for the database as well as the user authentication via Google.
Is it good enough?? I have learned MySQL but I'm not that comfortable…
So for user authentication and database should I keep using firebase or switch to MySQL or MongoDB?? Or any other combination?
Hi!
I’m not an expert, but I can tell you some things:
- Firebase is a great option for a very simple to implement, fast and reliable authentication method. Nonetheless, the free authentications are limited, so if you will potentially have millions of monthly authentications, it’s probably best to take the time to build it into your app directly.
- MySQL is great for simple tables where the data structures are not too complex, but it lacks some speed when you are trying to retrieve time series data. Also, I believe it's a bit more difficult to distribute.
- MongoDB is great when your information is a bit more complex and you need very peculiar data structures, nested data, dynamic structures, etc. For me at least, it's a bit more complex to master than MySQL, but the freedom it gives you is incredible. It also performs super fast, especially with time series data, and if I'm not wrong, it's more scalable.
In general, almost all technologies have their good things, it’s just a matter of what you want to do and then choosing the right ones.
Look, if you are comfortable with Firebase you can go with it; after all, it's all about developing and running your program bug-free and fast. But Firebase is costly in the long run, and if you are comfortable with that cost then I suggest you go with it.
Doing user authentication (OAuth) and session management yourself is kind of challenging, so if possible use Firebase itself, since it provides these features out of the box.
I'm starting to work on a Jira-like bug tracker web app. This is a hobby project that is mostly a way for me to learn about different technologies and development processes (CI/CD, etc.) so I can be more prepared when I start applying for programming jobs.
I'm debating between MySQL, which I'm less familiar with, and MongoDB which I have used in the past.
My two points of consideration are the following:
1) Which one is more likely to be relevant for web dev jobs? While I want to learn new technologies, I prefer learning ones that will make me more hireable in the future.
2) Which one is more flexible when it comes to changing the shape of the stored data? I expect to need to make some changes as the project goes on.
Thanks, everyone!
MySQL is still more popular than MongoDB if you look at Google Trends. I've also added MariaDB, which is pretty much a fork of MySQL with the same features, and PostgreSQL, which is also a popular relational database.
This is a very good article for comparing MySQL to MongoDB and which one you should use: MongoDB vs MySQL: A Comparative Study on Databases.
If you just want to learn and you have the time, I would opt for using both MySQL and MongoDB: for example, MySQL for most of the site content and MongoDB for saving log messages. As you accumulate more and more logs, you start to see the benefits of MongoDB's faster document fetching.
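A rough sketch of that split for a bug-tracker app: relational site content in MySQL, free-form log documents in MongoDB. The database, table, and collection names are hypothetical; it assumes `mysql-connector-python` and `pymongo` against local servers.

```python
import datetime
import mysql.connector
from pymongo import MongoClient

# Structured content goes into a relational table.
mysql_conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="tracker"
)
cur = mysql_conn.cursor()
cur.execute(
    "INSERT INTO issues (title, status) VALUES (%s, %s)", ("Login broken", "open")
)
mysql_conn.commit()

# Log entries go into MongoDB, where each document can carry a different shape.
logs = MongoClient("mongodb://localhost:27017")["tracker"]["logs"]
logs.insert_one({
    "level": "error",
    "message": "Login broken reported",
    "context": {"user_agent": "Firefox", "route": "/login"},
    "at": datetime.datetime.utcnow(),
})
```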
There's really not an awful lot of difference between the two: they have wildly different storage mechanisms, but they offer fairly similar benefits. If you want to learn something that might be a requisite skill for a job, I would also look at alternatives such as time-based and column-based systems like InfluxDB and the unbelievably fast and flexible ClickHouse. While they may seem like an unlikely fit for a personal bug tracker app, there's no reason not to use them. Since I got into InfluxDB, people have been requesting it a lot, and I'll be using ClickHouse for all large databases, probably forever. Expand your horizons beyond your competition's.
Hey everyone, my users love Microsoft Excel, and so do I. I've been making tools for them in the form of workbooks for years; these tools usually have databases included in the spreadsheets or communicate with free APIs around the web, but now I want to distribute these tools in the form of Excel Add-ins for several reasons.
I want these Add-ins to communicate to a personal server to authorize users, read from my databases, and write to them while they're using their Excel environment. I have never built a website, so what would be a good solution for this, considering I'm new to all of these technologies? I know about the existence of Microsoft Azure, Microsoft SharePoint, and Google Sheets, but I don't know how to feel about those.
Just definitely don't use Firebase. MongoDB, MySQL, MariaDB and PostgreSQL all have a lot of community support and history.
Snowflake is a cloud data warehouse which accepts SQL calls. Users can obtain an ODBC driver for Snowflake, which would allow your Excel apps to read from and write to the backend locally.
I am building a fintech startup with a friend. We decided to use Go for its performance and friendly syntax. We want to know whether we should use a web framework or just the pure net/http lib. Also, for the database we have PostgreSQL and MySQL on the table, and we want to know which one is better, from community support to the best open-source implementation.
Postgres is a better option to consider compared to MySQL; with respect to performance, Postgres has an edge over MySQL. Don't use net/http for production. Read this: https://medium.com/@nate510/don-t-use-go-s-default-http-client-4804cb19f779 I prefer gorilla/mux as it is simple and provides all the basic features. Other libraries seem like overhead if you just need basic routing.
MySQL and Postgres are both great, with awesome community support. Either will be good. Postgres has some small advantages.
I recommend Elixir. Even though I work at a fintech with Go, Elixir is an FP language, so in my opinion immutability is an important consideration when working with money.
I'm planning to build a freelance marketplace website using tools like Next.js, Firebase Authentication, and Node.js, but I need to know which type of database is suitable in terms of performance and powerful features. I'm trying to figure out what the best stack is for this project. If anyone has advice, I'd love to hear more details. Thanks.
Postgres and MySQL are very similar, but Mongo has differences in terms of storage type and the CAP theorem. For your requirements, I prefer Postgres (or MySQL) over MongoDB. Mongo gives you no schema, which is not always good. On the other hand, it is more common in the Node.js community, so you may find more articles about Node-Mongo stuff. I suggest staying with an RDBMS if possible.
This is a little about experience. PostgreSQL is fine. You can use either a relational table structure or a JSON column structure.
We have a ready-made engine for an online exchange and marketplace. To customize it, you only need to know SQL. Connecting any database is not a problem. https://falconspace.site/list/solutions
I need to add a DBMS to my stack, but I don't know which. I'm tempted to learn SQLite since it would be useful to me with its focus on local access without concurrency. However, doing so feels like I would be defeating the purpose of trying to expand my skill set since it seems like most enterprise applications have the opposite requirements.
To be able to apply what I learn to more projects, what should I try to learn? MySQL? PostgreSQL? Something else? Is there a comfortable middle ground between high applicability and ease of use?
You can easily start with SQLite. It is really easy to start with, since it doesn't require you to install any additional software; it is self-contained. It has interfaces in almost any language and GUIs as well. Start by learning SQL basics and simpler data models and structures; there are many tutorials, also available on the official website. From there you will easily migrate to another database. MySQL could be next, since it's easier to learn at first and has more resources available. PostgreSQL is less widespread, more challenging and has fewer resources, but once you have some experience with MySQL it is really easy to learn as well. All these technologies are really widespread and used across the industry, so you won't make a wrong decision with any of them.
A question you might want to think about is "What kind of experience do I want to gain by using a DBMS?". If your aim is to gain experience with SQL and any related libraries and frameworks for your language of choice (Python, I think?), then it doesn't matter too much which you pick. As others have said, SQLite would let you get started very easily, and would give you a reasonably standard (if a little basic) SQL dialect to work with.
If your aim is actually to get a bit of "operational" experience, in terms of things like what command-line tools are available as standard for the DBMS, understanding how the DBMS handles multiple databases, when to use multiple schemas vs multiple databases, some basic privilege management, etc., then I would recommend PostgreSQL. SQLite's simplicity actually avoids most of these experiences, which is not helpful to you if that is what you hope to learn. MySQL has a few "quirks" in how it manages things like multiple databases, which may lead you to make less good decisions if you tried to carry your experience over to a different DBMS, especially in bigger enterprise roles. PostgreSQL is kind of a happy middle ground here, with the ability to start PostgreSQL servers via docker or docker-compose making the actual day-to-day management pretty easy, while still giving you experience of the kinds of considerations I have listed above.
At Vital Beats we make use of PostgreSQL, largely because it offers us a happy balance between good management and backup of data, and good standard command line tools, which is essential for us where we are deploying our solutions within Kubernetes / docker, and so more graphical tools are not always appropriate for us. PostgreSQL is also pretty universally supported in terms of language libraries and frameworks, without having to make compromises on how we want to store and layout our data.
MySQL is very popular, easy to install, and also available as a managed service across most popular cloud offerings. The support and default tooling (such as MySQL Workbench) is certainly a little more polished than what you'll find for Postgres.
I have a lot of data that's currently sitting in a MariaDB database, with many tables that weigh 200 GB with indexes. Most of the large tables have a date column which is always filtered, but there are usually 4-6 additional columns that are filtered and used for statistics. I'm trying to figure out the best tool for storing and analyzing large amounts of data, preferably self-hosted or a cheap solution. The current problem I'm running into is speed: even with pretty good indexes, if I'm trying to load a large dataset, it's pretty slow.
Druid could be an amazing solution for your use case. My understanding and assumption is that you are looking to export your data from MariaDB for an analytical workload. Druid can be used as a time series database as well as a data warehouse, and it can be scaled horizontally as your data grows. It's pretty easy to set up in any environment (cloud, Kubernetes, or a self-hosted *nix system). Some important features that make it a good fit for your use case:
1. It can do streaming ingestion (Kafka, Kinesis) as well as batch ingestion (files from local and cloud storage, or databases like MySQL and Postgres); in your case MariaDB, which uses the same drivers as MySQL.
2. It is a columnar database, so you can query just the fields that are required, which automatically makes your queries faster.
3. Druid intelligently partitions data based on time, and time-based queries are significantly faster than in traditional databases.
4. Scale up or down by just adding or removing servers, and Druid automatically rebalances. Its fault-tolerant architecture routes around server failures.
5. It provides an amazing centralized UI to manage data sources, queries, and tasks.
Hello, I am developing a new project with an internal chat between users. There are also complex relationships between the other project entities, but I would like to build something scalable and fast, and right now I am designing the data model. What kind of database would you recommend for managing all the application data? A relational one like MySQL, a non-relational one like MongoDB, or a mix? Thank you.
In MongoDB, a write operation is atomic at the level of a single document, so it's harder to maintain consistency across documents without transactions.
MongoDB supports horizontal scaling through sharding, distributing data across several machines and facilitating high-throughput operations on large sets of data. Sharding allows you to add additional instances to increase capacity when required.
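A minimal sketch of the consistency point above: a change that touches several documents needs an explicit transaction (and a replica-set deployment) to be atomic, whereas a single-document write is atomic on its own. The collection and field names are hypothetical; it assumes pymongo against a replica set.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
db = client["chat"]

# Insert a message and bump the room counter atomically, as one transaction.
with client.start_session() as session:
    with session.start_transaction():
        db.messages.insert_one({"room": "general", "text": "hi"}, session=session)
        db.rooms.update_one(
            {"name": "general"}, {"$inc": {"message_count": 1}}, session=session
        )
```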
If you are dealing with "complex relationships", take the chance to learn ArangoDB and graph databases. Their data structures allow you to do this with faster and simpler queries. The database is not as strict as others and allows arbitrary data. The data model is really like a neural network, and you will never need foreign key tables anymore. There is a free course on Udemy to get started.
The most important question is where you are planning to host: on-premises or in the cloud.
Particularly if you are planning to host in either AWS or Azure, your first port of call should be the PaaS (Platform as a Service) databases supplied by these vendors, as they require a lot less effort to support, offer much easier disaster recovery options, and, depending on how pay-as-you-go the database you use is, potentially much lower costs than a dedicated database server.
Your question regarding 'relational or not' is obviously key, and you need to consider your required data structure, the ACID requirements of your application model, and the non-functional requirements in terms of scalability, resilience, whether you want security authorisation at the highest application tier or right down to 'row' level in the database, etc. However, please don't fall into the trap of considering 'NoSQL' as a single category. MongoDB, with its document-store solution, is a very different model from key-value stores (like AWS DynamoDB), column stores (like AWS Redshift), entity graph stores for more complex data relationships (like AWS Neptune), or stores designed for tokenisation and text search (Elasticsearch), etc.
Also critical in all this is how many items you believe you need to index by. RDBMS/SQL stores are great for having as many indexes as you want, other than the slow-down in write speed, whereas databases like Amazon DynamoDB provide blisteringly fast read/write performance, but are very limited on key indexing capabilities.
It feels like you have the most experience with SQL/RDBMS technologies, so for the simplest learning curve, and if your application fits it, I'd personally start by looking at AWS Aurora: https://aws.amazon.com/rds/aurora/
Firstly, it may help if you explain what you mean by "complex relationships between project entities". Secondly, you can build a fast and scalable solution using either. With that said, however, the data sounds relational, so I would recommend MySQL.
I think it depends on your project type and your skills. MySQL is good and simple to maintain, but MongoDB needs more skills and knowledge. If you are working on a little project, use MySQL. For your project type, MySQL is enough; later you can migrate to PostgreSQL.
I am going to work on a real estate project and have to decide on a database. Now, SQL databases can be very efficient if appropriately designed: more relations between the data and less redundancy. But with a #NoSQL database, the development time is reduced, and it is easy to query. Since this is my first time working on the real estate domain, I would like to pick a database that would be efficient in the long run.
I recommend PostgreSQL as it’s the most powerful out of the 3 databases you mentioned. It supports JSON objects so you can mimic the MongoDB functionality, but I would also argue that SQL is actually quite powerful and in many cases significantly easier to work with than with NoSQL databases.
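A hedged sketch of the "mimic MongoDB" point above: a JSONB column in Postgres holds schemaless listing details next to ordinary relational columns. The table and field names are invented and the database name ("realestate") is hypothetical; it assumes psycopg2 and a local Postgres.

```python
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("dbname=realestate user=app password=secret host=localhost")
cur = conn.cursor()

# Relational columns plus a schemaless JSONB "details" column.
cur.execute("""
    CREATE TABLE IF NOT EXISTS listings (
        id serial PRIMARY KEY,
        price numeric NOT NULL,
        details jsonb
    )
""")
cur.execute(
    "INSERT INTO listings (price, details) VALUES (%s, %s)",
    (450000, Json({"bedrooms": 3, "heating": "gas", "extras": ["garage"]})),
)

# Query inside the document with the ->> operator, much like a document store.
cur.execute("SELECT price FROM listings WHERE details->>'heating' = %s", ("gas",))
print(cur.fetchall())
conn.commit()
```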
Stay away from foreign keys, keep it fast and simple. Define your data structures well in advance. Try to model your data structures based on your system's vision, based on where it's going and not based solely on what you currently need it to do. This will help you avoid drastic changes to your database after your system is launched. Populate the database with fake data and run tests. PostgreSQL allows you to create views from multiple tables. Try to create those views and make sure you can easily build useful views from multiple tables. Run EXPLAIN on those view queries to make sure you created your indexes correctly. Make sure it's fast!
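A small follow-on sketch of that advice: wrap a common multi-table read in a view, then run EXPLAIN to check the plan. The "listings" and "owners" tables are hypothetical and assumed to already exist with the referenced columns; it assumes psycopg2 and a local Postgres.

```python
import psycopg2

conn = psycopg2.connect("dbname=realestate user=app password=secret host=localhost")
cur = conn.cursor()

# A reusable view over two tables.
cur.execute("""
    CREATE OR REPLACE VIEW listing_owners AS
    SELECT l.id, l.price, o.name AS owner_name
    FROM listings l
    JOIN owners o ON o.id = l.owner_id
""")

# EXPLAIN shows the query plan; look for index scans rather than sequential scans.
cur.execute("EXPLAIN SELECT * FROM listing_owners WHERE price < 500000")
for (plan_line,) in cur.fetchall():
    print(plan_line)
```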
Any of those three databases are going to be efficient, scalable, and reliable in the long term if you configure and use them correctly. They all also have solid hosting solutions.
All things being equal, I would agree with other posters that Postgres is my preference among the three, but there are caveats.
MongoDB and MySQL have better support for multi-region replication in your big three cloud environments. Microsoft recently bought Citus Data, which was a best-in-class Postgres replication solution, so Azure might be the only one I trust to provide cross-region replication at the moment.
If you have a single region deployment and are on AWS, I can't recommend Aurora Postgres highly enough. It's a very good implementation and extremely performant.
I'll second another piece of advice. PostgreSQL's JSON columns are a dream when it comes to productivity, and I use them frequently with our Rails application. In these cases, no migration is required to change the schema. We store payloads with dozens or hundreds of keys, and performance has not been an issue. We also have a lot of relational tables, so the joins we get with SQL are very important to us and hard to replicate with a NoSQL solution.
That really depends on where you see your application in the long run. For any application, any of those choices is excellent. You could argue about good support for JSON and binary data, but even MySQL has excellent support for that in the latest versions.
In the long run, when your application gets hundreds of thousands of requests per second, you might start thinking about how many inputs you will have in the database compared to the outputs. PostgreSQL is excellent at giving you outputs, but table corruption can happen when you start receiving a massive number of inputs (which was the reason Uber switched from Postgres to MySQL).
On our Ops Platform at CTO.ai, we decided to use Postgres because we need a reliable and agile way to send output to our users, so that was our best choice in the long run for our product.
As an advanced user, I prefer Postgres over MySQL. MySQL was the first database I learned at my institute. I always had to go through that infamous date and time dilemma many Java devs know. Both are adequate for a small project. When I worked on a project with date- and time-intensive data, I spent a lot of time dealing with the conversion and transition, leaving me frustrated. I tried Postgres to see how well it could perform. To my surprise, everything became a breeze, and the transactions were faster too. I've been using Postgres ever since, and no more dilemma.
The tool I was hosting was a relatively small NodeJS application which utilized the Spotify API - it was meant to be very low maintenance, but still required intervention (to renew certificates, restart the Node app when it crashed, etc). It was also using old NodeJS frameworks that were either deprecated or very outdated.
I made the decision to migrate the service to Google Cloud Run, and change the underlying database from MySQL to RavenDB, for performance and ease-of-use reasons. The move was relatively easy; the only challenge was migrating away from the old libraries I was using to perform REST requests, and of course adjusting from a password authentication system to client certificates.
I chose to migrate to RavenDB for their advanced dashboard, which allows you to monitor databases, queries, and cluster node performance. Working with RavenDB has been a much smoother and user-friendly experience compared to MySQL.
Hosting the application in Cloud Run, rather than on a dedicated Linux VM, meant that costs were drastically reduced (from £10/month for an AWS EC2 micro instance to £0 for Cloud Run). The serverless architecture means you're only paying when a request is made to the URL - for a small service such as mine, this was a life saver.
Best of all? I get advanced monitoring statistics from Google Cloud, showing me exactly how many requests I get per day, how much memory/CPU is used, and how many container instances are active to serve traffic. When an error occurs, Cloud Trace keeps track of the exception, the line it occurred on, and how many times the error has been seen.
I knew this migration would lead to a low-maintenance solution that I was hoping for, but I didn't realize how low maintenance it would truly be - I haven't needed to even look at the service since it was migrated, aside from checking I allocated enough cores/memory to the containers.
As a startup, managing my own database, backups, and even the schemas/migrations are all overhead. On top of that, I needed both backend and frontend ways to write to the database. With Firebase this is possible, which saved us some time: some API calls were not needed because I could fetch data directly in the FE.
Offline support and realtime data updates are also supported out of the box. No need to write your own websockets.
Once the startup grows, moving to a different relational database might make sense. But in a pre-product-market-fit startup, Firebase is a good, and cheaper fit!
The pricing model of Firebase Firestore is a bit risky, but it saves a lot of time in getting to market quickly.
Easy to start, lightweight and open source.
When I started with PHP, MySQL was everywhere so this is how I started with it. I am no expert in databases but I started learning joins, stored procedures, triggers, etc. with MySQL.
I recently used it in one of my projects, Picfam.com, with a Node.js + Express backend.
We will be getting data in the form of CSVs. Because the data in a CSV is highly structured, it will be easy to create schemas, and it works well in a SQL database as opposed to NoSQL. For a SQL database, both MySQL and Postgres are very viable options. Both of them are highly performant, definitely enough for our application, even if we needed to scale drastically. Postgres does include some extra features over MySQL, such as table inheritance and function overloading; however, the extra features are not advantageous to us given our database use case. Because both databases seemed to suit our use case perfectly, we chose to use MySQL simply because it is more familiar tech within our team.
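A minimal sketch of that CSV-to-MySQL path, assuming `mysql-connector-python` and a CSV whose header row contains the (hypothetical) columns recorded_at, sensor, and reading; the table and database names are illustrative only.

```python
import csv
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="app", password="secret", database="ingest"
)
cur = conn.cursor()

# A schema that mirrors the structured CSV columns.
cur.execute("""
    CREATE TABLE IF NOT EXISTS measurements (
        id INT AUTO_INCREMENT PRIMARY KEY,
        recorded_at DATETIME,
        sensor VARCHAR(64),
        reading DOUBLE
    )
""")

# Read the CSV and bulk-insert the rows.
with open("measurements.csv", newline="") as fh:
    rows = [(r["recorded_at"], r["sensor"], r["reading"]) for r in csv.DictReader(fh)]

cur.executemany(
    "INSERT INTO measurements (recorded_at, sensor, reading) VALUES (%s, %s, %s)",
    rows,
)
conn.commit()
```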
One of our biggest technical pillars is to "let the pros manage it", so we settled on using Heroku PostgreSQL to manage our SQL cluster. We can take advantage of the free tier, and requests will be fast since it is integrated with Heroku. PostgreSQL also supports full-text search, which can come in handy when manually searching through the tables.
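A hedged sketch of that full-text search point, assuming psycopg2 and a hypothetical "posts" table with a text "body" column; the search terms are placeholders.

```python
import psycopg2

conn = psycopg2.connect("dbname=app user=app password=secret host=localhost")
cur = conn.cursor()

# to_tsvector/to_tsquery give ranked word-based matching without extra services.
cur.execute("""
    SELECT id, ts_rank(to_tsvector('english', body), query) AS rank
    FROM posts, to_tsquery('english', 'heroku & postgres') AS query
    WHERE to_tsvector('english', body) @@ query
    ORDER BY rank DESC
""")
print(cur.fetchall())
```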
At Pushnami we were looking at several alternative databases that would support the following architectural requirements:
- very quick prototyping for an unknown domain
- ability to support large amounts of data
- native ability to replicate and fail over
- a full-stack approach for Node.js development
After careful consideration MongoDB came out on top, and 3 years later we are still very happy with that decision. Currently we keep almost 2 TB of data in our cluster, and are starting to think about sharding.
MySQL has a lot of strengths working for it. It's simple and easy to set up and use. Its JSON engine is also really good these days. Mongo is also simple to set up and use, and its speed as a document-object storage engine is first class.
Where Postgres beats both is in combining all of the features that make MySQL and Mongo great, while adding enterprise-grade scalability and replication. It's Postgres's stability and robustness, while still fulfilling the roles of its contemporaries extremely well, that gives Postgres the edge for me.
When I was new to web development, I used PHP for the backend and MySQL for the database. But after improving my JS skills, I chose Node.js for many reasons, including npm, Express, the community, fast coding, etc. MongoDB is great to use with Node.js. If your JS skills are good enough, I recommend migrating to Node.js and MongoDB.
Easier scalability of MongoDB prompted this migration from MySQL.
As Runtastic grew, at some point it would have outgrown our MySQL installation. We looked for a couple of alternatives and found MongoDB as a great replacement for our use case. Read how a migration of live data from one database to another worked for us.
Pros of InfluxDB
- Time-series data analysis (59)
- Easy setup, no dependencies (30)
- Fast, scalable & open source (24)
- Open source (21)
- Real-time analytics (20)
- Continuous Query support (6)
- Easy Query Language (5)
- HTTP API (4)
- Out-of-the-box, automatic Retention Policy (4)
- Offers Enterprise version (1)
- Free Open Source version (1)
Pros of MySQL
- SQL (800)
- Free (679)
- Easy (562)
- Widely used (528)
- Open source (490)
- High availability (180)
- Cross-platform support (160)
- Great community (104)
- Secure (79)
- Full-text indexing and searching (75)
- Fast, open, available (26)
- Reliable (16)
- SSL support (16)
- Robust (15)
- Enterprise Version (9)
- Easy to set up on all platforms (7)
- NoSQL access to JSON data type (3)
- Relational database (1)
- Easy, light, scalable (1)
- Sequel Pro (best SQL GUI) (1)
- Replica Support (1)
Cons of InfluxDB
- Instability (4)
- Proprietary query language (1)
- HA or Clustering is only in paid version (1)
Cons of MySQL
- Owned by a company with their own agenda (16)
- Can't roll back schema changes (3)