
Alternatives to Hangfire

RabbitMQ, NServiceBus, Azure Functions, Kafka, and JavaScript are the most popular alternatives and competitors to Hangfire.

What is Hangfire and what are its top alternatives?

Hangfire is a powerful tool that allows developers to perform background processing in .NET applications. Its key features include job scheduling, background task processing, recurring tasks, and monitoring capabilities. However, the limitations of Hangfire include lack of real-time processing and potential performance issues with large-scale applications.
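
To make those features concrete, here is a minimal, illustrative sketch of Hangfire's core API, assuming the Hangfire and Hangfire.SqlServer NuGet packages; the connection string and job bodies are placeholders rather than anything from the original article.

```csharp
using System;
using Hangfire;

class Program
{
    static void Main()
    {
        // Jobs are persisted in storage (SQL Server here), so they survive process restarts.
        GlobalConfiguration.Configuration
            .UseSqlServerStorage("Server=.;Database=HangfireDemo;Integrated Security=True");

        // Fire-and-forget: queued now, executed by a background server.
        BackgroundJob.Enqueue(() => Console.WriteLine("Processed in the background"));

        // Delayed: runs once after the given delay.
        BackgroundJob.Schedule(() => Console.WriteLine("Runs in 10 minutes"), TimeSpan.FromMinutes(10));

        // Recurring: CRON-based schedule identified by a stable id.
        RecurringJob.AddOrUpdate("nightly-cleanup", () => Console.WriteLine("Nightly cleanup"), Cron.Daily());

        // The server polls storage and executes queued jobs; the built-in dashboard provides monitoring.
        using (var server = new BackgroundJobServer())
        {
            Console.ReadKey();
        }
    }
}
```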

  1. Quartz.NET: Quartz.NET is a full-featured, open-source job scheduling system that can be integrated with .NET applications. Key features include job scheduling, support for cron-like expressions, clustering support, and persistent job storage. Pros of Quartz.NET include a robust feature set and active community support, but it may have a steeper learning curve compared to Hangfire (a minimal Quartz.NET sketch follows this list).
  2. Celery: Celery is a distributed task queue framework for Python applications. It supports scheduling and running periodic tasks, task prioritization, and distributed task processing. Pros of Celery include a wide range of integrations and support for real-time processing, but it may require additional setup and configuration compared to Hangfire.
  3. Sidekiq: Sidekiq is a popular background processing framework for Ruby applications. It provides features like job scheduling, retries, and monitoring capabilities. Pros of Sidekiq include high throughput and low latency, but unlike Hangfire it is not aimed at .NET applications.
  4. Resque: Resque is a Redis-backed Ruby library for creating background jobs. It offers job scheduling, task prioritization, and failure handling features. Pros of Resque include simplicity and reliability, but it may lack some advanced features compared to Hangfire.
  5. Apache Kafka: Apache Kafka is a distributed streaming platform that can be used for real-time data processing. It provides features for publishing and subscribing to streams of records, which can be utilized for background processing tasks. Pros of Apache Kafka include scalability and fault tolerance, but it may require more complex setup compared to Hangfire.
  6. Iron.io: Iron.io is a cloud-based message queue service that can be used for background processing tasks. It offers features like task queuing, worker scaling, and monitoring capabilities. Pros of Iron.io include ease of use and scalability, but it may involve additional costs compared to self-hosted solutions like Hangfire.
  7. Gearman: Gearman is an open-source distributed job queuing system that can be used for background task processing. It supports task scheduling, load balancing, and fault tolerance features. Pros of Gearman include cross-language support and high performance, but it may require more manual configuration compared to Hangfire.
  8. RabbitMQ: RabbitMQ is a popular message broker that can be integrated with .NET applications for background task processing. It provides features like message queuing, routing, and delivery acknowledgement. Pros of RabbitMQ include reliability and scalability, but it may have a steeper learning curve compared to Hangfire.
  9. Luigi: Luigi is a Python module that can be used for building complex pipelines of batch jobs. It offers features like dependency resolution, workflow visualization, and task prioritization. Pros of Luigi include flexibility and extensibility, but it may have a different workflow compared to Hangfire.
  10. Node.js Cluster Module: The Node.js Cluster Module allows developers to create background processing tasks in a Node.js application by leveraging the cluster module for parallel processing. Key features include task distribution, load balancing, and fault tolerance. Pros of the Node.js Cluster Module include simplicity and performance, but it may require additional setup and monitoring compared to Hangfire.
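
As referenced in item 1, here is a minimal Quartz.NET sketch, assuming the Quartz 3.x NuGet package and its async API; the job class, identities, and cron expression are illustrative.

```csharp
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

// A job is any class implementing IJob; Quartz instantiates and runs it each time a trigger fires.
public class CleanupJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        Console.WriteLine($"Cleanup ran at {DateTimeOffset.Now}");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        // The default factory gives an in-memory scheduler; persistent job stores are configured separately.
        ISchedulerFactory factory = new StdSchedulerFactory();
        IScheduler scheduler = await factory.GetScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<CleanupJob>()
            .WithIdentity("cleanup-job")
            .Build();

        // Cron-like expression: fire every five minutes.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("cleanup-trigger")
            .WithCronSchedule("0 0/5 * * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);

        Console.ReadKey();
        await scheduler.Shutdown();
    }
}
```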

Top Alternatives to Hangfire

  • RabbitMQ

    RabbitMQ gives your applications a common platform to send and receive messages, and your messages a safe place to live until received. ... (a minimal .NET publisher sketch follows this list)

  • NServiceBus

    Performance, scalability, pub/sub, reliable integration, workflow orchestration, and everything else you could possibly want in a service bus. ...

  • Azure Functions

    Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or 3rd party service as well as on-premises systems. ...

  • Kafka

    Kafka is a distributed, partitioned, replicated commit log service. It provides the functionality of a messaging system, but with a unique design. ...

  • Sidekiq

    Sidekiq uses threads to handle many jobs at the same time in the same process. It does not require Rails but will integrate tightly with Rails 3/4 to make background processing dead simple. ...

  • Resque

    Background jobs can be any Ruby class or module that responds to perform. Your existing classes can easily be converted to background jobs or you can create new classes specifically to do work. Or, you can do both. ...

  • Beanstalkd

    Beanstalkd's interface is generic, but it was originally designed for reducing the latency of page views in high-volume web applications by running time-consuming tasks asynchronously. ...

  • PHP-FPM

    It is an alternative PHP FastCGI implementation with some additional features useful for sites of any size, especially busier ones. It includes adaptive process spawning, advanced process management with graceful stop/start, emergency restart in case of accidental opcode cache destruction, and more. ...
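
As referenced in the RabbitMQ entry above, here is a minimal .NET publisher sketch, assuming the RabbitMQ.Client 6.x synchronous API; the host, queue name, and payload are placeholders. A consumer declares the same queue and processes messages as they arrive, which is how a broker like this can back background-job processing.

```csharp
using System.Text;
using RabbitMQ.Client;

class Publisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };

        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Declare a durable queue; consumers declare the same queue and read from it.
        channel.QueueDeclare(queue: "background-jobs",
                             durable: true,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);

        var body = Encoding.UTF8.GetBytes("{\"jobId\": 42}");

        // Publish to the default exchange; the routing key selects the queue.
        channel.BasicPublish(exchange: "",
                             routingKey: "background-jobs",
                             basicProperties: null,
                             body: body);
    }
}
```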

Hangfire alternatives & related posts


RabbitMQ

Open source multiprotocol messaging broker
PROS OF RABBITMQ
  • 234
    It's fast and it works with good metrics/monitoring
  • 79
    Ease of configuration
  • 59
    I like the admin interface
  • 50
    Easy to set-up and start with
  • 21
    Durable
  • 18
    Intuitive to work with through Python
  • 18
    Standard protocols
  • 10
    Written primarily in Erlang
  • 8
    Simply superb
  • 6
    Completeness of messaging patterns
  • 3
    Scales to 1 million messages per second
  • 3
    Reliable
  • 2
    Distributed
  • 2
    Supports MQTT
  • 2
    Better than most traditional queue based message broker
  • 2
    Supports AMQP
  • 1
    Clusterable
  • 1
    Clear documentation with different scripting languages
  • 1
    Great ui
  • 1
    Inubit Integration
  • 1
    Better routing system
  • 1
    High performance
  • 1
    Runs on Open Telecom Platform
  • 1
    Delayed messages
  • 1
    Reliability
  • 1
    Open-source
CONS OF RABBITMQ
  • 9
    Too complicated cluster/HA config and management
  • 6
    Needs the Erlang runtime, and ops staff comfortable with Erlang
  • 5
    Configuration must be done first, not by your code
  • 4
    Slow

related RabbitMQ posts

James Cunningham
Operations Engineer at Sentry · 18 upvotes · 1.7M views
Shared insights on Celery and RabbitMQ

As Sentry runs throughout the day, there are about 50 different offline tasks that we execute—anything from “process this event, pretty please” to “send all of these cool people some emails.” There are some that we execute once a day and some that execute thousands per second.

Managing this variety requires a reliably high-throughput message-passing technology. We use Celery's RabbitMQ implementation, and we stumbled upon a great feature called Federation that allows us to partition our task queue across any number of RabbitMQ servers and gives us the confidence that, if any single server gets backlogged, others will pitch in and distribute some of the backlogged tasks to their consumers.

#MessageQueue


Around the time of their Series A, Pinterest’s stack included Python and Django, with Tornado and Node.js as web servers. Memcached / Membase and Redis handled caching, with RabbitMQ handling queueing. Nginx, HAproxy and Varnish managed static-delivery and load-balancing, with persistent data storage handled by MySQL.


NServiceBus

Enterprise-grade scalability and reliability for your workflows and integrations
PROS OF NSERVICEBUS
  • 1
    Not as good as alternatives, good job security
  • 1
    Brings on-prem issues to the cloud
CONS OF NSERVICEBUS
    No cons listed yet.

    related NServiceBus posts


    Azure Functions

    Listen and react to events across your stack
    PROS OF AZURE FUNCTIONS
    • 14
      Pay only when invoked
    • 11
      Great developer experience for C#
    • 9
      Multiple languages supported
    • 7
      Great debugging support
    • 5
      Can be used as lightweight https service
    • 4
      Easy scalability
    • 3
      WebHooks
    • 3
      Cost
    • 2
      Event driven
    • 2
      Azure component events for Storage, services etc
    • 2
      Poor developer experience for C#
    CONS OF AZURE FUNCTIONS
    • 1
      No persistent (writable) file system available
    • 1
      Poor support for Linux environments
    • 1
      Sporadic server & language runtime issues
    • 1
      Not suited for long-running applications
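
For the recurring-job scenarios Hangfire is often used for, a timer-triggered function is the closest Azure Functions equivalent. Below is a minimal sketch assuming the in-process C# programming model (Microsoft.NET.Sdk.Functions); the function name and NCRONTAB expression are illustrative.

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class NightlyCleanup
{
    // NCRONTAB expression: runs every day at 01:00 (six fields, seconds first).
    [FunctionName("NightlyCleanup")]
    public static void Run([TimerTrigger("0 0 1 * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation($"Cleanup executed at {DateTime.UtcNow:O}");
    }
}
```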

    related Azure Functions posts

    Kestas Barzdaitis
    Entrepreneur & Engineer · 16 upvotes · 764.8K views

    CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

    CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon EC2, Microsoft Azure, and Google Compute Engine.

    AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half AWS's age, but also satisfied our technical requirements.

    It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive and complete when it comes to #Machinelearning and #AI.

    The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific - I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

    In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

    Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.

    Michal Nowak

    In a couple of recent projects we had an opportunity to try out the new serverless approach to building web applications. It wasn't necessarily a question of whether we should use any particular vendor, but rather whether we could consider serverless a viable option for building apps. Obviously our goal was also to get a feel for this technology and gain some hands-on experience.

    We did consider AWS Lambda, Firebase from Google as well as Azure Functions. Eventually we went with AWS Lambdas.

    PROS
    • No servers to manage (obviously!)
    • Limited fixed costs – you pay only for used time
    • Automated scaling and balancing
    • Automatic failover (or, at this level of abstraction, no failover problem at all)
    • Security easier to provide and audit
    • Low overhead at the start (with a certain level of knowledge)
    • Short time to market
    • Easy handover - deployment coupled with code
    • Perfect choice for lean startups with fast-paced iterations
    • Augmentation for the classic cloud, server(full) approach
    CONS
    • Not much know-how and best practices available about structuring the code and projects on the market
    • Not suitable for complex business logic due to the risk of producing highly coupled code
    • Cost difficult to estimate (helpful tools: serverlesscalc.com)
    • Difficulty in migration to other platforms (Vendor lock⚠️)
    • Few engineers with serverless experience on the job market
    • Steep learning curve for engineers without any cloud experience

    More details are on our blog: https://evojam.com/blog/2018/12/5/should-you-go-serverless-meet-the-benefits-and-flaws-of-new-wave-of-cloud-solutions I hope it helps 🙌 & I'm curious about your experiences.


    Kafka

    Distributed, fault tolerant, high throughput pub-sub messaging system
    PROS OF KAFKA
    • 126
      High-throughput
    • 119
      Distributed
    • 92
      Scalable
    • 86
      High-Performance
    • 66
      Durable
    • 38
      Publish-Subscribe
    • 19
      Simple-to-use
    • 18
      Open source
    • 12
      Written in Scala and Java; runs on the JVM
    • 9
      Message broker + Streaming system
    • 4
      KSQL
    • 4
      Avro schema integration
    • 4
      Robust
    • 3
      Supports multiple clients
    • 2
      Extremely good parallelism constructs
    • 2
      Partitioned, replayable log
    • 1
      Simple publisher / multi-subscriber model
    • 1
      Fun
    • 1
      Flexible
    CONS OF KAFKA
    • 32
      Non-Java clients are second-class citizens
    • 29
      Needs Zookeeper
    • 9
      Operational difficulties
    • 5
      Terrible Packaging
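
To ground the publish-subscribe model described in the pros above, here is a minimal producer sketch using the Confluent.Kafka .NET client; the broker address, topic name, and payload are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class KafkaProducerSketch
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        // The producer is thread-safe and intended to be long-lived.
        using var producer = new ProducerBuilder<Null, string>(config).Build();

        // Each record is appended to a partition of the "jobs" topic; consumers
        // subscribe to the topic and replay the log at their own pace.
        var result = await producer.ProduceAsync(
            "jobs",
            new Message<Null, string> { Value = "{\"jobId\": 42}" });

        Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
    }
}
```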

    related Kafka posts

    Eric Colson
    Chief Algorithms Officer at Stitch Fix · 21 upvotes · 6.1M views

    The algorithms and data infrastructure at Stitch Fix are housed in #AWS. Data acquisition is split between events flowing through Kafka and periodic snapshots of PostgreSQL DBs. We store data in an Amazon S3 based data warehouse. Apache Spark on Yarn is our tool of choice for data movement and #ETL. Because our storage layer (S3) is decoupled from our processing layer, we are able to scale our compute environment very elastically. We have several semi-permanent, autoscaling Yarn clusters running to serve our data processing needs. While the bulk of our compute infrastructure is dedicated to algorithmic processing, we also implemented Presto for ad hoc queries and dashboards.

    Beyond data movement and ETL, most #ML centric jobs (e.g. model training and execution) run in a similarly elastic environment as containers running Python and R code on Amazon EC2 Container Service clusters. The execution of batch jobs on top of ECS is managed by Flotilla, a service we built in house and open sourced (see https://github.com/stitchfix/flotilla-os).

    At Stitch Fix, algorithmic integrations are pervasive across the business. We have dozens of data products actively integrated into our systems. That requires a serving layer that is robust, agile, flexible, and allows for self-service. Models produced on Flotilla are packaged for deployment in production using Khan, another framework we've developed internally. Khan provides our data scientists the ability to quickly productionize those models they've developed with open source frameworks in Python 3 (e.g. PyTorch, sklearn), by automatically packaging them as Docker containers and deploying them to Amazon ECS. This gives our data scientists a one-click method of getting from their algorithms to production. We then integrate those deployments into a service mesh, which allows us to A/B test various implementations in our product.

    For more info:

    #DataScience #DataStack #Data

    John Kodumal

    As we've evolved or added additional infrastructure to our stack, we've biased towards managed services. Most new backing stores are Amazon RDS instances now. We do use self-managed PostgreSQL with TimescaleDB for time-series data—this is made HA with the use of Patroni and Consul.

    We also use managed Amazon ElastiCache instances instead of spinning up Amazon EC2 instances to run Redis workloads, as well as shifting to Amazon Kinesis instead of Kafka.


    Sidekiq

    Simple, efficient background processing for Ruby
    PROS OF SIDEKIQ
    • 124
      Simple
    • 99
      Efficient background processing
    • 60
      Scalability
    • 37
      Better than Resque
    • 26
      Great documentation
    • 15
      Admin tool
    • 14
      Great community
    • 8
      Integrates with redis automatically, with zero config
    • 7
      Stupidly simple to integrate and run on Rails/Heroku
    • 7
      Great support
    • 3
      Ruby
    • 3
      Freemium
    • 2
      Pro version
    • 1
      Dashboard w/live polling
    • 1
      Great ecosystem of addons
    • 1
      Fast
    CONS OF SIDEKIQ
      No cons listed yet.

      related Sidekiq posts

      Cyril Duchon-Doris

      We decided to use AWS Lambda for several serverless tasks such as

      • Managing AWS backups
      • Processing emails received on Amazon SES, stored to Amazon S3, and notified via Amazon SNS, so as to push a message onto our Redis so our Sidekiq Rails workers can process inbound emails
      • Pushing some relevant Amazon CloudWatch metrics and alarms to Slack
      Simon Bettison
      Managing Director at Bettison.org Limited · 8 upvotes · 766.1K views

      In 2012 we made the very difficult decision to entirely re-engineer our existing monolithic LAMP application from the ground up in order to address some growing concerns about its long-term viability as a platform.

      A full application rewrite is almost never the answer, because of the risks involved. However, the situation warranted drastic action, as it was clear that the existing product was going to face severe scaling issues. We felt it better to address these sooner rather than later, and also to take the opportunity to improve the international architecture and refactor the database in order that it better matched the changes in core functionality.

      PostgreSQL was chosen for its reputation as a solid, ACID-compliant database backend; it was available as an AWS RDS offering, which reduced the management overhead of having to configure it ourselves. In order to reduce read load on the primary database we implemented an Elasticsearch layer for fast and scalable search operations. Synchronisation of these indexes was to be achieved through the use of Sidekiq's Redis-based background workers on Amazon ElastiCache. Again, the AWS solution here looked to be an easy way to keep our involvement in managing this part of the platform to a minimum, allowing us to focus on our core business.

      Rails was chosen for its ability to quickly get core functionality up and running, its MVC architecture, and its focus on Test Driven Development using RSpec and Selenium, with Travis CI providing continuous integration. We also liked Ruby for its terse, clean and elegant syntax. Though YMMV on that one!

      Unicorn was chosen for its continual deployment and reputation as a reliable application server, nginx for its reputation as a fast and stable reverse-proxy. We also took advantage of the Amazon CloudFront CDN here to further improve performance by caching static assets globally.

      We tried to strike a balance between having control over management and configuration of our core application and the convenience of being able to leverage AWS hosted services for ancillary functions (Amazon SES, Amazon SQS, Amazon Route 53 - all hosted securely inside Amazon VPC of course!).

      Whilst there is some compromise here with potential vendor lock-in, the tasks being performed by these ancillary services are not particularly specialised, which should mitigate this risk. Furthermore, we have already containerised the stack in our development environment using Docker, and are looking at how best to bring this into production - potentially using Amazon EC2 Container Service.


      Resque

      A Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later
      PROS OF RESQUE
      • 5
        Free
      • 3
        Scalable
      • 1
        Easy to use on heroku
      CONS OF RESQUE
        No cons listed yet.

        related Resque posts


        Beanstalkd

        A simple, fast work queue
        PROS OF BEANSTALKD
        • 23
          Fast
        • 12
          Free
        • 12
          Does one thing well
        • 9
          Scalability
        • 8
          Simplicity
        • 3
          Developer-friendly external admin UI
        • 3
          Job delay
        • 2
          Job prioritization
        • 2
          External admin UI
        CONS OF BEANSTALKD
          No cons listed yet.

          related Beanstalkd posts

          Frédéric MARAND
          Core Developer at OSInet · 2 upvotes · 232.4K views

          I used Kafka originally because it was mandated as part of the top-level IT requirements at a Fortune 500 client. What I found was that it was orders of magnitude more complex ... and powerful than my daily Beanstalkd, and far more flexible, resilient, and manageable than RabbitMQ.

          So for any case where utmost flexibility and resilience are part of the deal, I would use Kafka again. But due to the complexities involved, for any time where this level of scalability is not required, I would probably just use Beanstalkd for its simplicity.

          I tend to find RabbitMQ to be in an uncomfortable middle place between these two extremities.


          PHP-FPM

          An alternative FastCGI daemon for PHP
          PROS OF PHP-FPM
            No pros listed yet.
          CONS OF PHP-FPM
            No cons listed yet.

              related PHP-FPM posts