ActiveMQ vs Amazon SQS vs RabbitMQ: What are the differences?
Introduction
ActiveMQ, Amazon SQS, and RabbitMQ are all messaging systems that facilitate the communication between different components of a distributed system. However, there are key differences between these three messaging systems.
Messaging Model: ActiveMQ and RabbitMQ both support multiple messaging models, including point-to-point (queues) and publish-subscribe (topics), while Amazon SQS only supports the point-to-point model with queues. This means that with ActiveMQ and RabbitMQ, a message can be sent to one or many consumers, whereas with SQS, a message can only be received by one consumer.
Delivery Guarantees: All three systems provide at-least-once delivery, meaning a message may occasionally be duplicated or delayed but is not lost. ActiveMQ and RabbitMQ layer stronger options on top of this, such as transactions, persistent messages, and publisher confirms, while Amazon SQS standard queues are strictly at-least-once; SQS FIFO queues additionally offer exactly-once processing.
Message Size: ActiveMQ and RabbitMQ accept much larger messages than Amazon SQS. ActiveMQ's default maximum frame size is around 100 MB and RabbitMQ's default limit is 128 MB (both configurable), while SQS caps messages at 256 KB for both standard and FIFO queues; larger payloads require the SQS Extended Client Library, which offloads message bodies of up to 2 GB to Amazon S3.
Message Ordering: ActiveMQ and RabbitMQ preserve ordering within a single queue, delivering messages in the order they were published (although competing consumers and redeliveries can still reorder processing). Amazon SQS standard queues make only a best-effort attempt at ordering, whereas SQS FIFO queues guarantee strict ordering within a message group; ordering across different message groups is not guaranteed.
Scalability and Availability: ActiveMQ and RabbitMQ require manual setup and management of infrastructure for scalability and availability. Amazon SQS, being a managed service, automatically scales and provides high availability without requiring manual intervention. With SQS, developers can focus on building applications instead of managing infrastructure.
Pricing Model: ActiveMQ and RabbitMQ are self-hosted messaging systems, so their cost comes from server infrastructure, software licenses, and maintenance. Amazon SQS, on the other hand, follows a pay-per-use pricing model, where users are charged per request (sending, receiving, and deleting messages) plus data transfer.
In summary, ActiveMQ and RabbitMQ provide more flexibility in messaging models and delivery options, while Amazon SQS offers the benefits of managed infrastructure, automatic scaling, and simple pay-per-use pricing.
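To make the messaging-model difference concrete, here is a minimal Python sketch, assuming boto3 and pika; the queue URL, exchange name, and broker host are placeholders rather than real resources. Sending to an SQS queue delivers each message to a single consumer, while publishing to a RabbitMQ fanout exchange copies it to every bound queue.

```python
import json

import boto3  # AWS SDK for Python
import pika   # RabbitMQ client

payload = json.dumps({"event": "order_created", "order_id": 42})

# Amazon SQS: point-to-point -- the message lands in one queue and is
# received by exactly one consumer.
sqs = boto3.client("sqs", region_name="us-east-1")
sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/orders",  # placeholder
    MessageBody=payload,
)

# RabbitMQ: publish/subscribe -- a fanout exchange copies the message to
# every queue bound to it, so many consumers can each get their own copy.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="orders", exchange_type="fanout")
channel.basic_publish(exchange="orders", routing_key="", body=payload)
conn.close()
```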
We are currently moving to a microservice architecture and are debating about the different options there are to handle communication between services. We are currently considering Kafka, Redis or RabbitMQ as a message broker. As RabbitMQ is a little bit older, we thought that it may be outdated. Is that true? Can RabbitMQ hold up to more modern tools like Redis and Kafka?
We faced the same question some time ago. Before I begin: DO NOT use Redis as a message broker. It is fast and easy to set up in the beginning, but it does not scale. It is not made to be reliable at scale, and that is mentioned in the official docs. This analysis of our problems with Redis may help you.
We have used both Kafka and RabbitMQ at scale. We concluded that RabbitMQ is a really good general-purpose message broker (for our case) and that Kafka is really fast but limited in features. That's the trade-off we took away from using them. In fact, I blogged about the trade-offs between Kafka and RabbitMQ to document it. I hope it helps you choose the best pub-sub layer for your use case.
It depends on your requirements: the number of messages to be processed per second, real-time vs. delayed messages, the number of servers available for your cluster, whether you need streaming, etc. Kafka works for most use cases. Not related to the answer, but I would like to add: whichever broker you choose, connect to it with the library provided by the broker rather than Spring Kafka or Spring AMQP. If you use Spring, you will be stuck with specific Spring versions, and if you find bugs in Spring it becomes difficult because you will have to upgrade the entire application to a later Spring core version. In general, use as few libraries as possible to avoid the nuisance of upgrading them when they become outdated or buggy.
I would not recommend RabbitMQ. It does not scale well.
I would recommend Redis if you want non-durable, fast, distributed, and scalable messaging. Note it has almost no guarantees around at-least-once delivery and the like, so you will have to handle that yourself.
I would recommend Kafka if you want the whole deal with a bit more bloat. It is durable, fast, distributed and scalable. It has few downsides other than a learning curve and higher cost.
I currently use Redis as my message queue and plan to upgrade to Kafka in Q1
Hi, first of all, understand the difference. They all work as message brokers and they all understand JSON, but Redis is not a queue or topic; it's an in-memory cache. RabbitMQ and Kafka persist data on the file system, while Redis holds it in volatile memory: if Redis goes down and you can't use the cache dump wisely, your data is gone. Redis is a very short-lived broker, though it is fast because it does fewer I/O operations and commits than Kafka does. I don't have much experience with RabbitMQ, so I can't comment in depth, but RabbitMQ is easier to administer than open-source Kafka, in case your manager doesn't want to pay money… 😂
Coming to the decision: if you are ready to risk or compromise the in-memory data in the cache, go for Redis. If you are not concerned about horizontal scaling and can do the job with vertical scaling, go for Redis. If you want horizontal scaling and want to persist data on disk for fetching later, go for Kafka; but you won't get the speed unless you fine-tune your consumers, which needs a good understanding of consumer threads, partitioning, poll intervals, and so on. And if you use the Confluent Platform, then don't even compare Kafka with Redis and RabbitMQ.
Cheers! Sunil.
Hi! I am creating a scraping system in Django, which involves long-running tasks of between 1 minute and 1 day. As I am new to message brokers and task queues, I need advice on which architecture to use for my system (Amazon SQS, RabbitMQ, or Celery). The system should be autoscalable using Kubernetes (K8s) based on the number of pending tasks in the queue.
Hello, I highly recommend Apache Kafka; to me it's the best. You can deploy it in cluster mode inside K8s, so you can have a highly available system that is also auto-scalable.
Good luck
I am just a beginner at these two technologies.
Problem statement: I am getting lakhs of users (hundreds of thousands) from SQL Server, for whom I need to create caches in MongoDB by making different REST API requests.
Here these users can be treated as messages. Each REST API request is a task.
I am confused about whether I should go for RabbitMQ alone or Celery.
If I have to go with RabbitMQ, I prefer to use Python with the Pika module. But the challenge with Pika is that it is not thread-safe, so I am not finding a way to execute a lakh of API requests in parallel across multiple threads with Pika.
If I have to go with Celery, I don't know how I can achieve better scalability in executing these API requests in parallel.
For large amounts of small tasks and caches I have had good luck with Redis and RQ. I have not personally used Celery, but I am fairly sure it would scale well, and I have not used RabbitMQ for anything besides communication between services. If you prefer Python, my suggestions should feel comfortable.
Sorry, I do not have more information.
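For what it's worth, here is a minimal sketch of the Redis + RQ approach described above; the build_user_cache task, API URL, and connection details are hypothetical, not a definitive implementation.

```python
import requests
from redis import Redis
from rq import Queue

def build_user_cache(user_id):
    """Call the REST API for one user and write the result to the MongoDB cache."""
    resp = requests.get(f"https://api.example.com/users/{user_id}")  # placeholder URL
    resp.raise_for_status()
    # ... write resp.json() to MongoDB here ...
    return resp.status_code

q = Queue("user-cache", connection=Redis(host="localhost", port=6379))

# Enqueue one small task per user; run `rq worker user-cache` in as many
# processes/pods as you need to work through the backlog in parallel.
for user_id in range(100_000):
    q.enqueue(build_user_cache, user_id)
```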
Hi, we have a ZeroMQ setup in a push/pull pattern, and we are starting to see more traffic and cases where the service is unavailable or stuck. We want to:
* Not lose messages during service outages
* Safely restart a service without losing messages (ZeroMQ seems to require manually closing the receiver's socket before restarting)
Do you have experience with this setup with ZeroMQ? Would you suggest RabbitMQ or Amazon SQS (we are on AWS) instead? Something else?
Thank you for your time
ZeroMQ is fast, but you need to build reliability yourself; there are a number of patterns described in the ZeroMQ guide. I have used RabbitMQ before, which gives you a lot of functionality out of the box; you can probably use the worker queues example from the tutorial, and it can also persist messages in the queue.
I haven't used Amazon SQS before. Another tool you could use is Kafka.
Both would do the trick, but there are some nuances. We work with both.
From the sound of it, your main focus is "not losing messages". In that case, I would go with RabbitMQ with a high availability policy (ha-mode=all) and a main/retry/error queue pattern.
Push messages to an exchange, which sends them to the main queue. If an error occurs, push the errored-out message to the retry exchange, which forwards it to the retry queue. Give the retry queue an x-message-ttl and set the main exchange as its dead-letter exchange. If a message has been retried several times, push it to the error exchange, where it can remain until someone has time to look at it.
This is a very useful and resilient pattern that lets you never lose messages. With the high availability policy, you make sure that if one of your RabbitMQ nodes dies, another can take over because messages are already mirrored to it.
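A minimal sketch of that main/retry/error topology with pika might look like the following; the exchange and queue names and the 30-second retry TTL are arbitrary placeholders, not a definitive setup.

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

ch.exchange_declare("main", exchange_type="direct", durable=True)
ch.exchange_declare("retry", exchange_type="direct", durable=True)
ch.exchange_declare("error", exchange_type="direct", durable=True)

# Main queue: rejected (nacked) messages are dead-lettered to the retry exchange.
ch.queue_declare("main.q", durable=True,
                 arguments={"x-dead-letter-exchange": "retry"})
ch.queue_bind("main.q", "main", routing_key="work")

# Retry queue: messages sit here for 30 seconds, then expire back to the main exchange.
ch.queue_declare("retry.q", durable=True,
                 arguments={"x-message-ttl": 30_000,
                            "x-dead-letter-exchange": "main"})
ch.queue_bind("retry.q", "retry", routing_key="work")

# Error (parking-lot) queue: messages that exhausted their retries wait here
# until someone has time to look at them.
ch.queue_declare("error.q", durable=True)
ch.queue_bind("error.q", "error", routing_key="work")

conn.close()
```

The consumer then nacks a failed message (sending it to the retry queue) and, after a configured number of attempts, publishes it to the error exchange instead.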
This is not really possible with SQS, because SQS is much more focused on throughput and scaling. Combined with SNS it can do interesting things like deduplication of messages. That said, one thing core to its design is that messages have a maximum retention time: the idea is that a message that has sat in an SQS queue long enough no longer serves a purpose, so it gets removed rather than tying up listener resources. You can also set up a DLQ here, but it similarly does not hold onto messages forever. Since you seem to depend on messages surviving at all costs, I would suggest that the scaling/throughput benefit of SQS does not outweigh its different approach to message retention.
Hello dear developers, our company is starting a new project for a new web app, and we are currently designing the architecture (we will be using .NET Core). We want to try something new, so we are thinking about migrating from a monolithic approach to a microservices one. We wish to containerize those microservices and make them independent from each other. Is an ESB the best way for microservices to communicate with each other, or is there a newer way of doing this? Maybe complementing it with an API Gateway? Can you recommend something other than the two tools I mentioned?
We want something good for Cost/Benefit; performance should be high too (but not the primary constraint).
Thank you very much in advance :)
There are many different messaging frameworks available for IPC use. It's not really a question of how "new" the technology is, but of what you need it to do. Azure Service Bus can be a great service to use, but it can also take a lot of effort to administer and maintain, which can make it costly unless you need the more advanced features it offers for routing, sequencing, delivery, etc. I would recommend checking out this link to get a basic idea of different messaging architectures. These only cover Azure services, but there are many other solutions that use similar architectural models.
https://docs.microsoft.com/en-us/azure/event-grid/compare-messaging-services
I want to schedule a message. Amazon SQS provides a delay of up to 15 minutes, but I want a delay of several hours.
Example: let's say Message1 is consumed by consumer A but somehow fails inside the consumer. I would want to put it back in a queue and retry it after 4 hours. Can I do this in Amazon MQ? I have seen some Amazon MQ videos saying message scheduling can be done, but I'm not sure how.
Mithiridi, I believe you are talking about two different things. 1. If you need to process messages with delays of more than 15 minutes or at specific times, it's not a good idea to use queues, regardless of the tool (SQS, Rabbit, or Amazon MQ); you should consider another approach using a scheduled job. 2. For dead-letter queues and retry policies, RabbitMQ, for example, doesn't support your use case directly: https://medium.com/@kiennguyen88/rabbitmq-delay-retry-schedule-with-dead-letter-exchange-31fb25a440fc. I'm not sure whether SNS/SQS supports it; they have a maximum delay for delivery retries (maxDelayTarget) in seconds, but the exact limit isn't clear. You can check this out: https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html
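To illustrate the delay cap from the question: SQS's per-message DelaySeconds parameter accepts at most 900 seconds (15 minutes), so a 4-hour retry cannot be expressed this way and needs a scheduler instead. A minimal boto3 sketch with a placeholder queue URL:

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

sqs.send_message(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/retries",  # placeholder
    MessageBody='{"retry_of": "Message1"}',
    DelaySeconds=900,  # 15 minutes is the maximum; 4 * 3600 would be rejected
)
```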
We are going to develop a microservices-based application. It consists of AngularJS, ASP.NET Core, and MSSQL.
We have 3 types of microservices: Emailservice, Filemanagementservice, and Filevalidationservice.
I am a beginner in microservices. I have read about RabbitMQ, but have come to know that Redis and Kafka are also on the market, so I want to know which is best.
Kafka is an enterprise messaging framework, whereas Redis is an enterprise cache broker and high-performance in-memory database. Both have their own advantages, but they differ in usage and implementation. If you are creating microservices, check the user consumption volumes, the logs being generated, scalability, the systems to be integrated, and so on. For your scenario, I feel you can initially go with Kafka, and as throughput, consumption, and other factors scale up, you can gradually add Redis accordingly.
I first recommend that you choose Angular over AngularJS if you are starting something new. AngularJs is no longer getting enhancements, but perhaps you meant Angular. Regarding microservices, I recommend considering microservices when you have different development teams for each service that may want to use different programming languages and backend data stores. If it is all the same team, same code language, and same data store I would not use microservices. I might use a message queue, in which case RabbitMQ is a good one. But you may also be able to simply write your own in which you write a record in a table in MSSQL and one of your services reads the record from the table and processes it. The most challenging part of doing it yourself is writing a service that does a good job of reading the queue without reading the same message multiple times or missing a message; and that is where RabbitMQ can help.
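If you do roll your own queue table, one common trick is to dequeue with READPAST so competing workers skip rows another worker has already locked, which avoids processing the same message twice. Below is a rough sketch using pyodbc and a hypothetical MessageQueue table, not a definitive implementation.

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=app;Trusted_Connection=yes;"
)

# DELETE ... OUTPUT atomically removes and returns one pending row; READPAST
# makes other workers skip rows that are locked by this transaction.
DEQUEUE_SQL = """
DELETE TOP (1) FROM dbo.MessageQueue WITH (ROWLOCK, READPAST)
OUTPUT deleted.Id, deleted.Payload;
"""

with conn:  # pyodbc commits on success, rolls back if processing raises
    row = conn.cursor().execute(DEQUEUE_SQL).fetchone()
    if row is not None:
        message_id, payload = row
        # ... process the message here; the delete only commits if this succeeds ...
```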
I think something is missing here and you should consider answering it for yourself. You are building a couple of services. Why are you considering an event-sourcing architecture using message brokers such as the above? Won't a simple REST-based architecture suffice? Read about CQRS and the problems it entails (state vs. command impedance, for example). Do you need pub/sub or push/pull? Is queuing of messages enough, or do you need querying or filtering of messages before consumption? Also, someone has to manage these brokers (unless you use a managed, cloud-provider-based solution), automate their deployment, and take care of backups, clustering if needed, disaster recovery, etc. I have good past experience in terms of manageability/devops with Kafka and Redis, not so much with RabbitMQ. Both are very performant. But also note that Redis is not a pure message broker (at the time of writing) but more of a general-purpose in-memory key-value store, and Kafka nowadays is much more than a distributed message broker. Long story short: for my taste, you should go with a minimalistic approach and try to avoid either of them if you can, especially if your architecture does not fall nicely into event sourcing. If you can't, I'd examine Kafka; and if you need more capabilities, I'd consider Redis and use it for all sorts of other things, such as a cache.
We found that the CNCF landscape is a good advisor when going into the cloud/microservices space: https://landscape.cncf.io/fullscreen=yes. When choosing a technology, one important criterion for me is whether it is cloud native or not. Neither Redis, RabbitMQ nor Kafka is cloud native; they try to adapt, but will eventually be replaced by technologies that are.
We have gone with NATS and have never looked back. We haven't spent a single minute on server maintenance in the last year, and setting up a cluster is almost too easy. With the new features NATS incorporates now (and the ones still on the roadmap), it already is, and will be, so much more than Redis, RabbitMQ, and Kafka are: it can replace service discovery, load balancing, global multi-clusters and failover, and so on.
Your thought might be: but I don't need all of that! Well, at the same time it is much more lightweight than Redis, RabbitMQ, and especially Kafka.
Our backend application sends some external messages to a third-party application at the end of each backend (CRUD) API call (from the UI), and these external messages take too much extra time (message building, processing, sending to the third party, and logging success/failure). The UI application has no interest in these extra third-party messages.
So currently we send these third-party messages by creating a new child thread at the end of each REST API call, so the UI application doesn't wait for these extra third-party API calls.
I want to integrate Apache Kafka for these extra third-party API calls, so I can also retry failed third-party calls from a queue (currently third-party messages are sent from multiple threads at the same time, which uses too much processing and resources), add logging, etc.
Question 1: Is this a use case of a message broker?
Question 2: If it is, which is better: Kafka or RabbitMQ?
RabbitMQ is great for queuing and retrying. You can send the requests to your backend, which will further queue these requests in RabbitMQ (or Kafka, too). The consumer on the other end can take care of the processing. For a detailed analysis, check this blog about choosing between Kafka and RabbitMQ.
Well, first off, it's good practice to do as little non-UI work on the foreground thread as possible, regardless of whether the requests take a long time. You don't want the UI thread blocked.
This sounds like a good use case for RabbitMQ. Primarily because you don't need each message processed by more than one consumer. If you wanted to process a single message more than once (say for different purposes), then Apache Kafka would be a much better fit as you can have multiple consumer groups consuming from the same topics independently.
Have your API publish messages containing the data necessary for the third-party request to a Rabbit queue and have consumers reading off it. If a request fails, you can either retry immediately, or publish to a dead-letter queue where you can reprocess the messages whenever you want (shovel them back into the regular queue).
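As a rough sketch of that flow (the queue name, third-party URL, and function names are hypothetical, using pika and requests): the API handler publishes a small message and a separate worker process makes the actual third-party call.

```python
import json

import pika
import requests

QUEUE = "third-party-notifications"

def publish_notification(channel, data):
    """Called at the end of a CRUD API handler: fire-and-forget, so the UI
    response does not wait for the third party."""
    channel.basic_publish(
        exchange="",
        routing_key=QUEUE,
        body=json.dumps(data),
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )

def consume_forever():
    """Separate worker process: reads queued requests and calls the third party."""
    conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    ch = conn.channel()
    ch.queue_declare(queue=QUEUE, durable=True)

    def on_message(channel, method, properties, body):
        try:
            requests.post("https://third-party.example.com/events", data=body, timeout=10)
            channel.basic_ack(delivery_tag=method.delivery_tag)
        except requests.RequestException:
            # Reject without requeue; a dead-letter queue can pick it up for later retry.
            channel.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

    ch.basic_consume(queue=QUEUE, on_message_callback=on_message)
    ch.start_consuming()
```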
In my opinion RabbitMQ fits your case better because you don't need ordering in the queue: you can process your messages in any order, and you don't need to store the data you sent. Kafka is persistent storage, a bit like a blockchain, while RabbitMQ is a message broker. Kafka is not a good solution for a system that needs per-message delivery confirmations.
As far as I understand, Kafka is like a persisted event state manager where you can plug in various sources of data and transform/query them as events via a stream API. Regarding your use case, I would consider RabbitMQ if your intent is to implement service inter-communication. RabbitMQ is a good choice for one-to-one publisher/consumer scenarios, and I think you can also have multiple consumers by configuring a fanout exchange. RabbitMQ also provides message retries, message cancellation, durable queues, message requeueing, message ACKs, and more.
Hello! [Client sends live video frames -> server computes and responds with the result.] Web clients send video frames from their webcam, then on the back end we need to run them through some algorithm and send the result back as a response. Since everything needs to work in live mode, we want something fast and also suitable for our case (as everyone does). Currently we are considering RabbitMQ for the purpose, but recently I have noticed that there are Redis and Kafka too. Could you please help us choose among them, or anything more suitable beyond these? I think something similar to our product would be people using their webcam to get Snapchat masks on their faces: the calculated face points are returned by the server, then the client side draws the mask on the user's face. I hope this helps. Thank you!
For your use case, the tool that fits best is definitely Kafka. RabbitMQ was not invented to handle data streams, but messages: plenty of them, of course, but individual messages. Redis is an in-memory database, which is what makes it so fast; Redis recently added features to handle data streams, but it cannot best Kafka on this, or at least not yet. Kafka is not only super fast, it also provides lots of features to help you create software to handle those streams.
I've used all of them, and Kafka is hard to set up and maintain; it's mostly a Java dinosaur. I've used it with Storm, but that is another big dinosaur. Redis is mostly for caching, and its queue mechanism is not very scalable across multiple processors. Depending on the speed and reliability you need, I would use RabbitMQ. You can store the frames (if they are too big) somewhere else and just pass a link to them; moving the data itself through any of these will increase the cost of transport. With Rabbit, you can always have multiple consumers and check for redundancy. Hope it clears up your thoughts!
For this kind of use case I would recommend either RabbitMQ or Kafka depending on the needs for scaling, redundancy and how you want to design it.
Kafka's true value comes into play when you need to distribute the streaming load over lots of resources. If you were passing the video frames directly into the queue, you'd probably want to go with Kafka; however, if you can just pass a pointer to the frames, then RabbitMQ should be fine and will be much simpler to run.
Bear in mind too that Kafka is a persistent log, not just a message bus so any data you feed into it is kept available until it expires (which is configurable). This can be useful if you have multiple clients reading from the queue with their own lifecycle but in your case it doesn't sound like that would be necessary. You could also use a RabbitMQ fanout exchange if you need that in the future.
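A rough sketch of the "pass a pointer to the frames" idea: park the heavy frame bytes somewhere else (Redis with a short TTL is used here purely as an example) and send only a small reference through RabbitMQ. All names are hypothetical.

```python
import json
import uuid

import pika
import redis

r = redis.Redis(host="localhost", port=6379)
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="frames", durable=True)

def submit_frame(frame_bytes: bytes, session_id: str) -> None:
    # Keep the heavy payload out of the broker; let it expire after 60 seconds.
    frame_key = f"frame:{uuid.uuid4()}"
    r.set(frame_key, frame_bytes, ex=60)

    # The queued message is just a pointer the processing worker can resolve.
    ch.basic_publish(
        exchange="",
        routing_key="frames",
        body=json.dumps({"session_id": session_id, "frame_key": frame_key}),
    )
```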
Maybe not an obvious comparison with Kafka, since Kafka is pretty different from RabbitMQ. But for a small service, Rabbit as a pub/sub platform is super easy to use and pretty powerful. Kafka was the original choice as an alternative, but it's really overkill for a small-to-medium service, especially if you are not planning to use K8s, since a pure Docker deployment can be a pain because of the networking setup. Google Pub/Sub was another alternative; it's actually pretty cheap, but I never tested it since Rabbit was a really good match for mailing/notification services.
In addition to being a lot cheaper, Google Cloud Pub/Sub allowed us to stop worrying about maintaining more infrastructure than needed.
We moved from a self-hosted RabbitMQ over to CloudAMQP and decided that since we use GCP anyway, why not try their managed PubSub?
It is one of the better decisions that we made, and we can just focus on building more important stuff!
Pros of ActiveMQ
- Easy to use (18)
- Open source (14)
- Efficient (13)
- JMS compliant (10)
- High Availability (6)
- Scalable (5)
- Distributed Network of brokers (3)
- Persistence (3)
- Support XA (distributed transactions) (3)
- Docker delivery (1)
- Highly configurable (1)
Pros of Amazon SQS
- Easy to use, reliable (62)
- Low cost (40)
- Simple (28)
- Doesn't need to maintain it (14)
- It is Serverless (8)
- Has a max message size (currently 256K) (4)
- Triggers Lambda (3)
- Easy to configure with Terraform (3)
- Delayed delivery up to 15 mins only (3)
- Delayed delivery up to 12 hours (3)
- JMS compliant (1)
- Support for retry and dead letter queue (1)
Pros of RabbitMQ
- It's fast and it works with good metrics/monitoring (235)
- Ease of configuration (80)
- I like the admin interface (60)
- Easy to set up and start with (52)
- Durable (22)
- Standard protocols (19)
- Intuitive work through Python (19)
- Written primarily in Erlang (11)
- Simply superb (9)
- Completeness of messaging patterns (7)
- Reliable (4)
- Scales to 1 million messages per second (4)
- Better than most traditional queue-based message brokers (3)
- Distributed (3)
- Supports MQTT (3)
- Supports AMQP (3)
- Clear documentation with different scripting languages (2)
- Better routing system (2)
- Inubit Integration (2)
- Great UI (2)
- High performance (2)
- Reliability (2)
- Open-source (2)
- Runs on Open Telecom Platform (2)
- Clusterable (2)
- Delayed messages (2)
- Supports Streams (1)
- Supports STOMP (1)
- Supports JMS (1)
Cons of ActiveMQ
- ONLY Vertically Scalable (1)
- Support (1)
- Low resilience to exceptions and interruptions (1)
- Difficult to scale (1)
Cons of Amazon SQS
- Has a max message size (currently 256K) (2)
- Proprietary (2)
- Difficult to configure (2)
- Has a maximum 15 minutes of delayed messages only (1)
Cons of RabbitMQ
- Too complicated cluster/HA config and management (9)
- Needs Erlang runtime. Need ops good with Erlang runtime (6)
- Configuration must be done first, not by your code (5)
- Slow (4)