AWS Lambda vs RabbitMQ: What are the differences?
1. Invocation Model: AWS Lambda is an event-driven computing service that executes functions in response to events, such as changes to data in an Amazon S3 bucket or a new record being inserted into a DynamoDB table. In contrast, RabbitMQ is a message broker that allows applications to communicate by sending and receiving messages. It uses a publish-subscribe model where producers send messages to exchanges and consumers receive messages from queues bound to those exchanges.
2. Scaling: AWS Lambda automatically scales the execution environment to handle incoming requests. It can rapidly scale out to support a high volume of concurrent invocations, ensuring that functions are executed without the need for manual intervention. RabbitMQ, on the other hand, requires manual scaling of consumer applications to handle increased message load. It provides features like message acknowledgments and prefetch limits to control the flow of messages and avoid overwhelming consumer applications.
3. Event Sources: AWS Lambda can be triggered by various event sources such as Amazon S3, Amazon DynamoDB, Amazon Kinesis, Amazon CloudWatch Events, and more. This allows developers to build serverless applications that respond to events generated by AWS services. RabbitMQ, on the other hand, is not limited to AWS services and can be integrated with any applications or systems capable of communicating over protocols like AMQP (Advanced Message Queuing Protocol).
4. Execution Environment: AWS Lambda provides a fully managed execution environment for running functions, abstracting away the underlying infrastructure and operating system. Developers can focus on writing code without worrying about provisioning servers or managing resources. RabbitMQ, on the other hand, requires setting up and managing the RabbitMQ message broker infrastructure, including servers, queues, exchanges, and connections.
5. Message Persistence: AWS Lambda does not provide built-in message persistence. It functions as an event-driven compute service and does not retain messages for subsequent processing by default. RabbitMQ, on the other hand, provides message persistence by storing messages in queues until they are consumed or expire. This ensures that messages are not lost in case of consumer failures or system restarts.
6. Message Delivery Guarantees: AWS Lambda provides at-least-once execution semantics, meaning that functions are guaranteed to be invoked at least once, but they can be invoked multiple times in case of failures or retries. RabbitMQ provides different levels of message delivery guarantees, including at-most-once (default) and at-least-once delivery options. Developers can choose the level of reliability based on their application requirements.
In summary, AWS Lambda and RabbitMQ differ in invocation model, scaling behavior, event sources, execution environment, message persistence, and message delivery guarantees; the sketch below illustrates the invocation-model difference.
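To make that difference concrete, here is a minimal sketch in Python: a Lambda-style handler that the platform calls once per event, next to a RabbitMQ consumer that owns a long-lived connection and pulls messages from a queue via Pika. The queue name and broker address are illustrative, and the snippet assumes the pika package and a broker running on localhost.

```python
import pika

# AWS Lambda: the platform invokes this function for each event; there is no
# long-lived process or queue to manage in your code.
def lambda_handler(event, context):
    records = event.get("Records", [])
    return {"processed": len(records)}

# RabbitMQ: your application runs a long-lived consumer that pulls messages
# from a queue and acknowledges them explicitly.
def on_message(channel, method, properties, body):
    print("got message:", body)
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)
channel.basic_consume(queue="events", on_message_callback=on_message)
channel.start_consuming()
```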
Need advice on what platform, systems and tools to use.
Evaluating whether to start a new digital business for which we will need to build a website that handles all traffic. Website only right now. May add smartphone apps later. No desktop app will ever be added. Website to serve various countries and languages. B2B and B2C type customers. Need to handle heavy traffic, be low cost, and scale well.
We are open to either build it on AWS or on Microsoft Azure.
Apologies if I'm leaving out some info. My first post. :) Thanks in advance!
I recommend this:
- Spring Reactive for the back end: because it's reactive (async), it consumes roughly half the resources a synchronous platform needs (less CPU, less money).
- Angular for the web front end: it gives you the option of a PWA, which is a cheap replacement for a mobile app (though less popular).
- Docker images.
- Kubernetes to orchestrate all the containers.
- Jenkins / Blue Ocean and Ansible for CI/CD (with GitHub, of course).
- AWS: you can run a K8s cluster there, make it multi-AZ (availability zones) to be highly available, add a load balancer and an auto scaler, and you're good to go.
- For data, use any managed DB, or deploy your own (cheaper but riskier).
You pay less money, but you need two or three technical people to get it done.
Good luck
My advice would be:
- Front end: React.
- Backend languages: Java, Kotlin.
- Database: SQL (Postgres, MySQL, Aurora) or NoSQL (MongoDB).
- Caching: Redis.
- Public API: Spring WebFlux for async public-facing operations.
- Admin API: Spring Boot, Hibernate, REST API.
- Build container images; for orchestration use AWS EKS, AWS ECS, or Google GKE.
- Use Jenkins for the CI/CD pipeline; Buddy Works is also good for AWS.
- Static content: host on an AWS S3 bucket and use CloudFront or Cloudflare as the CDN.
Serverless alternative: API Gateway + Lambda, Serverless Aurora (SQL), AWS S3 bucket.
Hi! I am creating a scraping system in Django, which involves long-running tasks of between 1 minute and 1 day. As I am new to message brokers and task queues, I need advice on which architecture to use for my system (Amazon SQS, RabbitMQ, or Celery). The system should autoscale using Kubernetes (K8s) based on the number of pending tasks in the queue.
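For reference, a minimal Celery sketch of how such a long-running scrape task could be configured, assuming RabbitMQ as the broker; the names, limits and broker URL are illustrative:

```python
import requests
from celery import Celery

app = Celery("scraper", broker="amqp://guest:guest@localhost//")
app.conf.task_acks_late = True           # re-queue the task if a worker dies mid-run
app.conf.worker_prefetch_multiplier = 1  # long tasks: reserve one task per worker at a time

@app.task(bind=True, max_retries=3, soft_time_limit=60 * 60 * 24)
def scrape(self, url):
    try:
        return requests.get(url, timeout=300).text  # placeholder for the real scraping work
    except Exception as exc:
        raise self.retry(exc=exc, countdown=60)
```

Autoscaling on pending tasks is then a matter of reading the queue length (for example via the RabbitMQ management API, or a tool such as KEDA) and adjusting the number of worker pods.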
Hello, I highly recommend Apache Kafka; to me it's the best. You can deploy it in cluster mode inside K8s, so you can have a highly available system that is also auto-scalable.
Good luck
I am just a beginner at these two technologies.
Problem statement: I am getting lakhs (hundreds of thousands) of users from SQL Server, for whom I need to create caches in MongoDB by making different REST API requests.
Here these users can be treated as messages. Each REST API request is a task.
I am confused about whether I should go for RabbitMQ alone or Celery.
If I have to go with RabbitMQ, I prefer to use Python with the Pika module. But the challenge with Pika is that it is not thread-safe, so I am not finding a way to execute a lakh of API requests in parallel using multiple threads with Pika.
If I have to go with Celery, I don't know how I can achieve better scalability in executing these API requests in parallel.
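For context, one hedged sketch of the Celery route, with illustrative names and endpoint: each REST call becomes a task, and parallelism comes from worker concurrency (for example a gevent pool) rather than hand-managed threads around a single Pika connection.

```python
import requests
from celery import Celery
from pymongo import MongoClient

app = Celery("cache_builder", broker="amqp://guest:guest@localhost//")
mongo = MongoClient()

@app.task(bind=True, max_retries=3, rate_limit="100/s")
def build_cache(self, user_id):
    try:
        # One REST call per task; the result is cached in MongoDB.
        data = requests.get(f"https://api.example.com/users/{user_id}", timeout=30).json()
        mongo["cache"]["users"].replace_one({"_id": user_id}, data, upsert=True)
    except Exception as exc:
        raise self.retry(exc=exc, countdown=30)

# Enqueue a lakh of tasks, then run e.g. `celery -A cache_builder worker -P gevent -c 200`
# so a single worker process drives hundreds of I/O-bound requests concurrently.
def enqueue_all(user_ids):
    for uid in user_ids:
        build_cache.delay(uid)
```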
For large amounts of small tasks and caches I have had good luck with Redis and RQ. I have not personally used celery but I am fairly sure it would scale well, and I have not used RabbitMQ for anything besides communication between services. If you prefer python my suggestions should feel comfortable.
Sorry, I do not have more information.
Hi, we are on a ZeroMQ setup with a push/pull pattern, and we are starting to see more traffic and cases where the service is unavailable or stuck. We want to:
* Not lose messages during service outages
* Safely restart the service without losing messages (ZeroMQ seems to require manually closing the socket in the receiver before a restart)
Do you have experience with this setup with ZeroMQ? Would you suggest RabbitMQ or Amazon SQS (we are on an AWS setup) instead? Something else?
Thank you for your time
ZeroMQ is fast, but you need to build reliability yourself; there are a number of patterns described in the ZeroMQ guide. I have used RabbitMQ before, which gives you a lot of functionality out of the box. You can probably use the worker-queue example from the tutorial, and it can also persist messages in the queue (see the sketch below).
I haven't used Amazon SQS before. Another tool you could use is Kafka.
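As a rough illustration of the persistence point above (queue name and broker address are placeholders), publishing to a durable queue with persistent delivery looks like this in Pika:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# A durable queue survives a broker restart.
channel.queue_declare(queue="work", durable=True)

# delivery_mode=2 marks the message itself as persistent.
channel.basic_publish(
    exchange="",
    routing_key="work",
    body=b"some task payload",
    properties=pika.BasicProperties(delivery_mode=2),
)
```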
Both would do the trick, but there are some nuances. We work with both.
From the sound of it, your main focus is "not losing messages". In that case, I would go with RabbitMQ with a high availability policy (ha-mode=all) and a main/retry/error queue pattern.
Push messages to an exchange, which sends them to the main queue. If an error occurs, push the errored-out message to the retry exchange, which forwards it to the retry queue. Give the retry queue an x-message-ttl and set the main exchange as its dead-letter exchange. If your message has been retried several times, push it to the error exchange, where the message can remain until someone has time to look at it (see the sketch after this answer).
This is a very useful and resilient pattern that allows you to never lose messages. With the high availability policy, you make sure that if one of your rabbitmq nodes dies, another can take over and messages are already mirrored to it.
This is not really possible with SQS, because SQS is much more focused on throughput and scaling. Combined with SNS it can do interesting things like deduplication of messages. That said, one thing core to its design is that messages have a maximum retention time: the idea is that a message that has sat in an SQS queue for a while no longer serves a purpose, so it gets removed rather than tying up listener resources indefinitely. You can also set up a DLQ here, but these similarly do not hold onto messages forever. Since you seem to depend on messages surviving at all cost, I would suggest that the scaling/throughput benefit of SQS does not outweigh its different approach to message retention.
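A rough Pika sketch of that main/retry/error topology, with illustrative exchange/queue names, routing key and TTL (the high availability policy is applied on the broker, not in application code):

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()

# Main, retry and error exchanges (direct, durable).
for name in ("work", "work.retry", "work.error"):
    ch.exchange_declare(exchange=name, exchange_type="direct", durable=True)

# Main queue: the consumer republishes failed messages to the retry exchange.
ch.queue_declare(queue="work", durable=True)
ch.queue_bind(queue="work", exchange="work", routing_key="task")

# Retry queue: messages expire after 30s and are dead-lettered back to the main exchange.
ch.queue_declare(
    queue="work.retry",
    durable=True,
    arguments={"x-message-ttl": 30000, "x-dead-letter-exchange": "work"},
)
ch.queue_bind(queue="work.retry", exchange="work.retry", routing_key="task")

# Error ("parking lot") queue for messages that exhausted their retries.
ch.queue_declare(queue="work.error", durable=True)
ch.queue_bind(queue="work.error", exchange="work.error", routing_key="task")

# Mirroring is a broker policy rather than code, e.g.:
#   rabbitmqctl set_policy ha-all "^work" '{"ha-mode":"all"}'
```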
Hello dear developers, our company is starting a new project for a new web app, and we are currently designing the architecture (we will be using .NET Core). We want to embark on something new, so we are thinking about moving from a monolithic to a microservices perspective. We wish to containerize those microservices and make them independent from each other. Is an ESB the best way for microservices to communicate with each other, or is there a newer way of doing this? Maybe complementing it with an API gateway? Can you recommend something different from the two tools I mentioned?
We want something good for Cost/Benefit; performance should be high too (but not the primary constraint).
Thank you very much in advance :)
There are many different messaging frameworks available for IPC use. It's not really a question of how "new" the technology is, but what you need it to do. Azure Service Bus can be a great service to use, but it can also take a lot of effort to administer and maintain, which can make it costly unless you need the more advanced features it offers for routing, sequencing, delivery, etc. I would recommend checking out the link below to get a basic idea of different messaging architectures. It only covers Azure services, but there are many other solutions that use similar architectural models.
https://docs.microsoft.com/en-us/azure/event-grid/compare-messaging-services
We are going to develop a microservices-based application. It consists of AngularJS, ASP.NET Core, and MSSQL.
We have 3 types of microservices. Emailservice, Filemanagementservice, Filevalidationservice
I am a beginner in microservices. I have read about RabbitMQ, but I have come to know that Redis and Kafka are also on the market. So, I want to know which is best.
Kafka is an enterprise messaging framework, whereas Redis is an enterprise cache broker and high-performance in-memory database. Both have their own advantages, but they differ in usage and implementation. Now, if you are creating microservices, check the user consumption volumes, the logs being generated, scalability, the systems to be integrated, and so on. For your scenario, I feel you can initially go with Kafka, and as throughput, consumption and other factors scale up, you can gradually add Redis accordingly.
I first recommend that you choose Angular over AngularJS if you are starting something new. AngularJs is no longer getting enhancements, but perhaps you meant Angular. Regarding microservices, I recommend considering microservices when you have different development teams for each service that may want to use different programming languages and backend data stores. If it is all the same team, same code language, and same data store I would not use microservices. I might use a message queue, in which case RabbitMQ is a good one. But you may also be able to simply write your own in which you write a record in a table in MSSQL and one of your services reads the record from the table and processes it. The most challenging part of doing it yourself is writing a service that does a good job of reading the queue without reading the same message multiple times or missing a message; and that is where RabbitMQ can help.
I think something is missing here and you should consider answering it for yourself. You are building a couple of services. Why are you considering an event-sourcing architecture using message brokers such as the above? Wouldn't a simple REST-based architecture suffice? Read about CQRS and the problems it entails (state vs. command impedance, for example). Do you need pub/sub or push/pull? Is queuing of messages enough, or would you need querying or filtering of messages before consumption?
Also, someone has to manage these brokers (unless you use a managed, cloud-provider-based solution), automate their deployment, take care of backups, clustering if needed, disaster recovery, etc. I have good past experience in terms of manageability/devops with Kafka and Redis, not so much with RabbitMQ. Both are very performant. But also note that Redis is not a pure message broker (at the time of writing) but more of a general-purpose in-memory key-value store, and Kafka nowadays is much more than a distributed message broker.
Long story short: in my taste, you should go with a minimalistic approach and try to avoid either of them if you can, especially if your architecture does not fall nicely into event sourcing. If you cannot, I'd examine Kafka first; if you need more capabilities, then I'd consider Redis and use it for all sorts of other things such as a cache.
We found that the CNCF landscape is a good advisor when going into the cloud / microservices space: https://landscape.cncf.io/fullscreen=yes. When choosing a technology, one important criterion for me is whether it is cloud native or not. Neither Redis, RabbitMQ nor Kafka is cloud native. They try to adapt, but will eventually be replaced by technologies that are cloud native.
We have gone with NATS and have never looked back. We haven't spent a single minute on server maintenance in the last year, and setting up a cluster is almost too easy. With the new features NATS incorporates now (and the ones still on the roadmap), it already is, and will be, so much more than Redis, RabbitMQ and Kafka are. It can replace service discovery, load balancing, global multi-clusters and failover, etc.
Your thought might be: but I don't need all of that! Well, at the same time it is much more lightweight than Redis, RabbitMQ and especially Kafka.
Our backend application sends some external messages to a third-party application at the end of each backend (CRUD) API call (from the UI), and these external messages take too much extra time (message building, processing, sending to the third party, and logging success/failure). The UI application has no concern with these extra third-party messages.
So currently we send these third-party messages by creating a new child thread at the end of each REST API call, so the UI application doesn't wait for these extra third-party API calls.
I want to integrate Apache Kafka for these extra third-party API calls, so I can also retry failed third-party API calls from a queue (currently third-party messages are sent from multiple threads at the same time, which uses too much processing and resources), add logging, etc.
Question 1: Is this a use case of a message broker?
Question 2: If it is, then which is better, Kafka or RabbitMQ?
RabbitMQ is great for queuing and retrying. You can send the requests to your backend, which will queue them in RabbitMQ (or Kafka). The consumer on the other end can take care of the processing. For a detailed analysis, check this blog about choosing between Kafka and RabbitMQ.
Well, first off, it's good practice to do as little non-UI work on the foreground thread as possible, regardless of whether the requests take a long time. You don't want the UI thread blocked.
This sounds like a good use case for RabbitMQ. Primarily because you don't need each message processed by more than one consumer. If you wanted to process a single message more than once (say for different purposes), then Apache Kafka would be a much better fit as you can have multiple consumer groups consuming from the same topics independently.
Have your API publish messages containing the data necessary for the third-party request to a Rabbit queue and have consumers reading off there. If it fails, you can either retry immediately, or publish to a deadletter queue where you can reprocess them whenever you want (shovel them back into the regular queue).
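A minimal Pika sketch of that flow, replacing the per-request child thread with a queue hop; the queue name, dead-letter exchange and third-party endpoint are illustrative, and the dead-letter exchange itself is assumed to be declared separately:

```python
import json

import pika
import requests

params = pika.ConnectionParameters("localhost")

# API side: instead of spawning a thread, publish the third-party payload and return.
def publish_third_party_call(payload: dict) -> None:
    conn = pika.BlockingConnection(params)
    ch = conn.channel()
    ch.queue_declare(queue="third-party", durable=True,
                     arguments={"x-dead-letter-exchange": "third-party.dlx"})
    ch.basic_publish(exchange="", routing_key="third-party",
                     body=json.dumps(payload).encode(),
                     properties=pika.BasicProperties(delivery_mode=2))
    conn.close()

# Worker side: call the third party; on failure, reject so the message dead-letters.
def on_message(ch, method, properties, body):
    try:
        requests.post("https://third-party.example.com/notify",
                      json=json.loads(body), timeout=30)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)
```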
In my opinion RabbitMQ fits your case better because you don't need ordering in the queue: you can process your messages in any order, and you don't need to store the data you sent. Kafka is persistent storage, a bit like a blockchain; RabbitMQ is a message broker. Kafka is not a good fit for a system that needs per-message delivery confirmations.
As far as I understand, Kafka is like a persisted event-state manager where you can plug in various sources of data and transform/query them as events via a streams API. Regarding your use case, I would consider RabbitMQ if your intent is to implement service inter-communication. RabbitMQ is a good choice for one-to-one publisher/consumer setups, and I think you can also have multiple consumers by configuring a fanout exchange. RabbitMQ also provides message retries, message cancellation, durable queues, message requeueing, message ACKs...
Hello! [Client sends live video frames -> server computes and responds with the result.] Web clients send video frames from their webcam, and on the backend we need to run them through some algorithm and send the result back as a response. Since everything needs to work live, we want something fast and also suitable for our case (as everyone does). Currently we are considering RabbitMQ for the purpose, but recently I have noticed that there are Redis and Kafka too. Could you please help us choose among them, or anything more suitable beyond these? I think something similar to our product would be people using their webcam to get Snapchat masks on their faces: the calculated face points are returned by the server, and the client side draws the mask on the user's face. I hope this helps. Thank you!
For your use case, the tool that fits best is definitely Kafka. RabbitMQ was not invented to handle data streams, but messages: plenty of them, of course, but individual messages. Redis is an in-memory database, which is what makes it so fast; it recently added features to handle data streams, but it cannot beat Kafka on this, or at least not yet. Kafka is not only super fast, it also provides lots of features to help create software that handles those streams.
I've used all of them, and Kafka is hard to set up and maintain; it is mostly a Java dinosaur. I've used it with Storm, but that is another big dinosaur. Redis is mostly for caching, and its queue mechanism is not very scalable across multiple processors. Depending on the speed and reliability you need, I would use RabbitMQ. You can store the frames somewhere else (if they are too big) and just queue a link to them; moving data through any of these will increase the cost of transport. With Rabbit, you can always have multiple consumers and check for redundancy. Hope it clears up your thoughts!
For this kind of use case I would recommend either RabbitMQ or Kafka depending on the needs for scaling, redundancy and how you want to design it.
Kafka's true value comes into play when you need to distribute the streaming load over lots of resources. If you were passing the video frames directly into the queue, then you'd probably want to go with Kafka; however, if you can just pass a pointer to the frames, then RabbitMQ should be fine and will be much simpler to run (see the sketch after this answer).
Bear in mind too that Kafka is a persistent log, not just a message bus so any data you feed into it is kept available until it expires (which is configurable). This can be useful if you have multiple clients reading from the queue with their own lifecycle but in your case it doesn't sound like that would be necessary. You could also use a RabbitMQ fanout exchange if you need that in the future.
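A hedged sketch of that pointer-passing idea, assuming Redis holds the short-lived frame bytes and RabbitMQ carries only a small reference; names and the TTL are illustrative:

```python
import json
import uuid

import pika
import redis

r = redis.Redis()
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.queue_declare(queue="frames", durable=True)

def submit_frame(session_id: str, frame_bytes: bytes) -> None:
    # Park the heavy payload out of band with a short TTL...
    key = f"frame:{session_id}:{uuid.uuid4()}"
    r.setex(key, 30, frame_bytes)
    # ...and queue only a small pointer for the workers to pick up.
    ch.basic_publish(exchange="", routing_key="frames",
                     body=json.dumps({"session": session_id, "frame_key": key}))
```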
In addition to being a lot cheaper, Google Cloud Pub/Sub allowed us to stop worrying about maintaining any more infrastructure than needed.
We moved from a self-hosted RabbitMQ over to CloudAMQP and decided that since we use GCP anyway, why not try their managed PubSub?
It is one of the better decisions that we made, and we can just focus on building more important stuff!
When adding a new feature to Checkly or rearchitecting some older piece, I tend to pick Heroku for rolling it out. But not always, because sometimes I pick AWS Lambda. The short story:
- Developer Experience trumps everything.
- AWS Lambda is cheap. Up to a limit, though. This impacts not only your wallet.
- If you need geographic spread, AWS is lonely at the top.
Recently, I was doing a brainstorm at a startup here in Berlin on the future of their infrastructure. They were ready to move on from their initial, almost 100% EC2 + Chef based setup. Everything was on the table. But we crossed out a lot quite quickly:
- Pure, uncut, self hosted Kubernetes — way too much complexity
- Managed Kubernetes in various flavors — still too much complexity
- Zeit — Maybe, but no Docker support
- Elastic Beanstalk — Maybe, bit old but does the job
- Heroku
- Lambda
It became clear a mix of PaaS and FaaS was the way to go. What a surprise! That is exactly what I use for Checkly! But when do you pick which model?
I chopped that question up into the following categories:
- Developer Experience / DX 🤓
- Ops Experience / OX 🐂 (?)
- Cost 💵
- Lock in 🔐
Read the full post linked below for all details
Pros of AWS Lambda
- No infrastructure (129)
- Cheap (83)
- Quick (70)
- Stateless (59)
- No deploy, no server, great sleep (47)
- AWS Lambda went down taking many sites with it (12)
- Event Driven Governance (6)
- Extensive API (6)
- Auto scale and cost effective (6)
- Easy to deploy (6)
- VPC Support (5)
- Integrated with various AWS services (3)
Pros of RabbitMQ
- It's fast and it works with good metrics/monitoring (235)
- Ease of configuration (80)
- I like the admin interface (60)
- Easy to set up and start with (52)
- Durable (22)
- Standard protocols (19)
- Intuitive work through Python (19)
- Written primarily in Erlang (11)
- Simply superb (9)
- Completeness of messaging patterns (7)
- Reliable (4)
- Scales to 1 million messages per second (4)
- Better than most traditional queue based message brokers (3)
- Distributed (3)
- Supports MQTT (3)
- Supports AMQP (3)
- Clear documentation with different scripting languages (2)
- Better routing system (2)
- Inubit Integration (2)
- Great UI (2)
- High performance (2)
- Reliability (2)
- Open-source (2)
- Runs on Open Telecom Platform (2)
- Clusterable (2)
- Delayed messages (2)
- Supports Streams (1)
- Supports STOMP (1)
- Supports JMS (1)
Cons of AWS Lambda
- Can't execute Ruby or Go (7)
- Compute time limited (3)
- Can't execute PHP w/o significant effort (1)
Cons of RabbitMQ
- Too complicated cluster/HA config and management (9)
- Needs Erlang runtime. Need ops good with Erlang runtime (6)
- Configuration must be done first, not by your code (5)
- Slow (4)