
Alternatives to OpenFaaS

Serverless, Knative, Nuclio, Kubeless, and Fission are the most popular alternatives and competitors to OpenFaaS.

What is OpenFaaS and what are its top alternatives?

Serverless Functions Made Simple for Docker and Kubernetes
OpenFaaS is a tool in the Serverless / Task Processing category of a tech stack.
OpenFaaS is an open source tool with 24.6K GitHub stars and 1.9K GitHub forks. The project's open source repository is available on GitHub.

Top Alternatives to OpenFaaS

  • Serverless

    Build applications composed of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The Framework uses new event-driven compute services, like AWS Lambda, Google Cloud Functions, and more. ...

  • Knative

    Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center ...

  • Nuclio

    Nuclio is portable across IoT devices, laptops, on-premises datacenters, and cloud deployments, eliminating cloud lock-in and enabling hybrid solutions. ...

  • Kubeless

    Kubeless is a Kubernetes-native serverless framework. Kubeless supports both HTTP and event-based function triggers. It has a Serverless Framework plugin, a graphical user interface, and multiple runtimes, including Python and Node.js. ...

  • Fission

    Write short-lived functions in any language, and map them to HTTP requests (or other event triggers). Deploy functions instantly with one command. There are no containers to build, and no Docker registries to manage. ...

  • AWS Lambda

    AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security (a minimal handler sketch follows this list). ...

  • Azure Functions

    Azure Functions is an event-driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or third-party service, as well as on-premises systems. ...

  • Kubernetes

    Kubernetes is an open source orchestration system for Docker containers. It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches the user's declared intentions. ...
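
All of these platforms converge on the same basic unit of deployment: a small handler function invoked per event or HTTP request. As a point of reference, here is a minimal sketch of an AWS Lambda handler in Python; the event shape shown assumes an API Gateway proxy integration, and the function and field names are illustrative.

```python
import json


def handler(event, context):
    """Minimal AWS Lambda handler invoked once per event.

    With an API Gateway proxy integration, `event` carries the HTTP
    request; other triggers (S3, Kinesis, SQS) pass their own shapes.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```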

OpenFaaS alternatives & related posts


Serverless

The most widely-adopted toolkit for building serverless applications
PROS OF SERVERLESS
  • API integration (14)
  • Supports cloud functions for Google, Azure, and IBM (7)
  • Lower cost (3)
  • Auto scale (1)
  • OpenWhisk (1)

    related Serverless posts

    Praveen Mooli
Engineering Manager at Taylor and Francis · 18 upvotes · 3.8M views

We are in the process of building a modern content platform to deliver our content through various channels. We decided to go with a microservices architecture because we wanted scale. The microservice architecture style is an approach to developing an application as a suite of small, independently deployable services built around specific business capabilities. You gain modularity, extensive parallelism, and cost-effective scaling by deploying services across many distributed servers. Microservices' modularity facilitates independent updates and deployments and helps avoid single points of failure, which can prevent large-scale outages. We also decided to use the Event-Driven Architecture pattern, a popular distributed asynchronous architecture pattern used to produce highly scalable applications. The event-driven architecture is made up of highly decoupled, single-purpose event-processing components that asynchronously receive and process events.

To build our #Backend capabilities we decided to use the following:
1. #Microservices - Java with Spring Boot, Node.js with ExpressJS, and Python with Flask
2. #Eventsourcingframework - Amazon Kinesis, Amazon Kinesis Firehose, Amazon SNS, Amazon SQS, AWS Lambda
3. #Data - Amazon RDS, Amazon DynamoDB, Amazon S3, MongoDB Atlas

    To build #Webapps we decided to use Angular 2 with RxJS

#Devops - GitHub, Travis CI, Terraform, Docker, Serverless
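
As an illustration of the event-driven side of a stack like this, here is a hedged Python sketch of an AWS Lambda handler that consumes Kinesis records and writes them to DynamoDB; the table name, field names, and payload format are hypothetical, not taken from the post.

```python
import base64
import json

import boto3

# Hypothetical table; the post does not describe an actual schema.
table = boto3.resource("dynamodb").Table("content-events")


def handler(event, context):
    """Consume a batch of Kinesis records and persist them to DynamoDB."""
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(
            Item={
                "eventId": record["kinesis"]["sequenceNumber"],
                "type": payload.get("type", "unknown"),
                "body": json.dumps(payload),
            }
        )
    return {"processed": len(event["Records"])}
```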

    Nitzan Shapira

At Epsagon, we use hundreds of AWS Lambda functions, most of which are written in Python, and we use the Serverless Framework to package and deploy them. One of the issues we've encountered is the difficulty of packaging external libraries into the Lambda environment using the Serverless Framework. This limitation is probably by design, since the external code your Lambda needs can usually be included with a package manager.

To overcome this issue, we developed a tool, which we also published as open source (see the link below), that automatically packages these libraries using a simple npm package and a YAML configuration file. Support for Node.js, Go, and Java will be available soon.

The GitHub repository: https://github.com/epsagon/serverless-package-external
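
Their tool handles this inside the Serverless Framework; purely to illustrate the underlying problem, here is a hedged Python sketch of the manual workaround of vendoring third-party packages next to the handler (directory, library, and URL are illustrative): install dependencies with `pip install -r requirements.txt -t vendor/` and make them importable at runtime.

```python
import os
import sys

# Put the vendored packages (pip install -r requirements.txt -t vendor/)
# on the import path before importing anything from them.
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "vendor"))

import requests  # resolved from ./vendor inside the deployment package


def handler(event, context):
    """Minimal handler that uses a vendored third-party library."""
    resp = requests.get("https://example.com/health", timeout=5)
    return {"statusCode": resp.status_code}
```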


    Knative

Kubernetes-based platform for serverless workloads
PROS OF KNATIVE
  • Portability (5)
  • Autoscaling (4)
  • Open source (3)
  • Eventing (3)
  • Secure Eventing (3)
  • On top of Kubernetes (3)

      related Knative posts

We've currently been using an older version of OpenFaaS, but the new version now requires payment for things we did on the older version. I've been looking for alternatives to OpenFaaS that have Kafka integrations and scale-to-zero capabilities.

I looked at Apache OpenWhisk, but we run on RKE2, and my initial install of OpenWhisk appears to be too out of date to support RKE2 and is missing images from docker.io. So now I'm looking at Knative. What are your thoughts? We need to process about 10k function invocations a minute, with execution times that vary from milliseconds to minutes. So I'm looking for horizontal scaling that can be controlled by metrics other than just CPU and RAM utilization; for example, if the wait is over 5, scale out. The issue with the older OpenFaaS was that scaling on RKE2 was not working well: I could get it to scale from 5 to 20 pods, but only 12 of them would ever have data, while my backlog had hundreds of thousands of files waiting. So even though it scaled up, it was as if the distribution of work was tied to specific pods. If I killed the pods that had no work, they came back up with no work; if I killed one with work, another pod would scale up and another pod would start to get work. And on occasion, within hours, it would reset down to the original deployment allotment of pods and never scale up again until I went into Kubernetes and told it to add more pods.

So I'm hoping to find a solution that doesn't require as much triage to keep scaling working, since at some points in time we're at higher volume and at other points there may be no volume at all.
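
For reference, Knative's autoscaler can scale on request concurrency or requests per second rather than CPU/RAM, and it supports scale to zero. As a rough sketch of the kind of backlog-driven scaling described above, here is an illustrative Python loop that patches a Deployment's replica count with the official Kubernetes client when a measured wait exceeds a threshold; the deployment name, namespace, metric source, limits, and threshold are all hypothetical.

```python
import time

from kubernetes import client, config


def measure_backlog_wait() -> float:
    """Hypothetical hook: return the current queue wait (e.g. from Kafka lag)."""
    raise NotImplementedError


def main():
    config.load_kube_config()          # or config.load_incluster_config()
    apps = client.AppsV1Api()
    name, namespace = "file-processor", "functions"   # illustrative names

    while True:
        wait = measure_backlog_wait()
        scale = apps.read_namespaced_deployment_scale(name, namespace)
        replicas = scale.spec.replicas or 0
        if wait > 5 and replicas < 20:
            # Backlog is building up: add a worker pod.
            scale.spec.replicas = replicas + 1
            apps.patch_namespaced_deployment_scale(name, namespace, scale)
        elif wait == 0 and replicas > 1:
            # Nothing waiting: shrink back down.
            scale.spec.replicas = replicas - 1
            apps.patch_namespaced_deployment_scale(name, namespace, scale)
        time.sleep(30)


if __name__ == "__main__":
    main()
```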


      Nuclio

Real-time serverless platform
PROS OF NUCLIO
  • Enterprise grade (1)
  • Air gap friendly (1)
  • Actively maintained and supported (1)
  • Variety of runtimes (1)
  • Variety of triggers (1)
  • Secure image building (1)
  • Scale to zero (1)
  • Autoscaling (1)
  • Parallelism (1)
  • Performance (1)
  • Open source (1)


        Kubeless

Kubernetes Native Serverless Framework


            Fission

Serverless Functions as a Service for Kubernetes
PROS OF FISSION
  • Any language (1)
  • Portability (1)
  • Open source (1)


              AWS Lambda

Automatically run code in response to modifications to objects in Amazon S3 buckets, messages in Kinesis streams, or...
PROS OF AWS LAMBDA
  • No infrastructure (129)
  • Cheap (83)
  • Quick (70)
  • Stateless (59)
  • No deploy, no server, great sleep (47)
  • AWS Lambda went down taking many sites with it (12)
  • Event Driven Governance (6)
  • Extensive API (6)
  • Auto scale and cost effective (6)
  • Easy to deploy (6)
  • VPC Support (5)
  • Integrated with various AWS services (3)
CONS OF AWS LAMBDA
  • Can't execute Ruby or Go (7)
  • Compute time limited (3)
  • Can't execute PHP w/o significant effort (1)

              related AWS Lambda posts

              Jeyabalaji Subramanian

              Recently we were looking at a few robust and cost-effective ways of replicating the data that resides in our production MongoDB to a PostgreSQL database for data warehousing and business intelligence.

We set ourselves the following criteria for the optimal tool that would do this job:
  • The data replication must be near real-time, yet it should NOT impact the production database
  • The data replication must be horizontally scalable (based on the load), asynchronous & crash-resilient

              Based on the above criteria, we selected the following tools to perform the end to end data replication:

              We chose MongoDB Stitch for picking up the changes in the source database. It is the serverless platform from MongoDB. One of the services offered by MongoDB Stitch is Stitch Triggers. Using stitch triggers, you can execute a serverless function (in Node.js) in real time in response to changes in the database. When there are a lot of database changes, Stitch automatically "feeds forward" these changes through an asynchronous queue.

              We chose Amazon SQS as the pipe / message backbone for communicating the changes from MongoDB to our own replication service. Interestingly enough, MongoDB stitch offers integration with AWS services.

              In the Node.js function, we wrote minimal functionality to communicate the database changes (insert / update / delete / replace) to Amazon SQS.

Next we wrote a minimal micro-service in Python to listen to the message events on SQS, pick up the data payload, and mirror the DB changes onto the target data warehouse. We implemented source-to-target data translation by modelling target table structures through SQLAlchemy. We deployed this micro-service as an AWS Lambda with Zappa. With Zappa, deploying your services as an event-driven, horizontally scalable Lambda service is dumb-easy.

In the end, we got to implement a highly scalable, near-realtime Change Data Replication service that "works", and deployed it to production in a matter of a few days!
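
As an illustration of that Python micro-service, here is a hedged sketch of an SQS consumer that mirrors change events into a SQL target with SQLAlchemy. The queue URL, connection string, table layout, and event format are hypothetical, and the post deploys this as a Lambda via Zappa; for clarity this sketch shows it as a simple long-polling loop instead.

```python
import json

import boto3
from sqlalchemy import create_engine, text

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/mongo-changes"   # illustrative
engine = create_engine("postgresql://warehouse:secret@db.example.com/analytics")  # illustrative
sqs = boto3.client("sqs")


def apply_change(change: dict) -> None:
    """Mirror a single insert/update/delete/replace event into the warehouse."""
    with engine.begin() as conn:
        if change["op"] in ("insert", "update", "replace"):
            conn.execute(
                text(
                    "INSERT INTO documents (id, body) VALUES (:id, :body) "
                    "ON CONFLICT (id) DO UPDATE SET body = EXCLUDED.body"
                ),
                {"id": change["id"], "body": json.dumps(change["doc"])},
            )
        elif change["op"] == "delete":
            conn.execute(text("DELETE FROM documents WHERE id = :id"), {"id": change["id"]})


def main():
    while True:
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
        for msg in resp.get("Messages", []):
            apply_change(json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    main()
```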

              Tim Nolet

Heroku · Docker · GitHub · Node.js · hapi · Vue.js · AWS Lambda · Amazon S3 · PostgreSQL · Knex.js

Checkly is a fairly young company and we're still working hard to find the correct mix of product features, price and audience.

We are focused on tech B2B, but I always wanted to serve solo developers too. So I decided to make a $7 plan.

Why $7? Simply put, it seems to be a sweet spot for tech companies: Heroku, Docker, GitHub, AppOptics (Librato) all offer $7 plans. They must have done a ton of research into this, so why not piggyback on that and try it out.

              Enough biz talk, onto tech. The challenges were:

              • Slice of a portion of the functionality so a $7 plan is still profitable. We call this the "plan limits"
              • Update API and back end services to handle and enforce plan limits.
              • Update the UI to kindly state plan limits are in effect on some part of the UI.
              • Update the pricing page to reflect all changes.
              • Keep the actual processing backend, storage and API's as untouched as possible.

              In essence, we went from strictly volume based pricing to value based pricing. Here come the technical steps & decisions we made to get there.

              1. We updated our PostgreSQL schema so plans now have an array of "features". These are string constants that represent feature toggles.
              2. The Vue.js frontend reads these from the vuex store on login.
              3. Based on these values, the UI has simple v-if statements to either just show the feature or show a friendly "please upgrade" button.
              4. The hapi API has a hook on each relevant API endpoint that checks whether a user's plan has the feature enabled, or not.

              Side note: We offer 10 SMS messages per month on the developer plan. However, we were not actually counting how many people were sending. We had to update our alerting daemon (that runs on Heroku and triggers SMS messages via AWS SNS) to actually bump a counter.

What we built is basically feature-toggling based on plan features. It is very extensible for future additions. Our scheduling and storage backend that actually runs users' monitoring requests (AWS Lambda) and stores the results (S3 and Postgres) has no knowledge of all of this and remained unchanged.

Hope this helps anyone building out their SaaS who is in a similar situation.
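
Their implementation lives in hapi/Node.js and Vue.js; purely to illustrate the plan-feature toggle pattern described above, here is a hedged Python sketch of the same idea, with hypothetical plan data and feature names.

```python
from functools import wraps

# Hypothetical plan records; in the real system these come from the database.
PLANS = {
    "developer": {"features": ["api_checks", "sms_alerts"]},
    "business": {"features": ["api_checks", "sms_alerts", "team_members"]},
}


class PlanLimitError(Exception):
    """Raised when the current plan does not include the requested feature."""


def requires_feature(feature: str):
    """Decorator guarding an endpoint handler with a plan-feature check."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            if feature not in PLANS[user["plan"]]["features"]:
                raise PlanLimitError(f"plan '{user['plan']}' lacks '{feature}'")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator


@requires_feature("team_members")
def invite_team_member(user, email):
    return f"invited {email} for {user['name']}"
```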


              Azure Functions

Listen and react to events across your stack
PROS OF AZURE FUNCTIONS
  • Pay only when invoked (14)
  • Great developer experience for C# (11)
  • Multiple languages supported (9)
  • Great debugging support (7)
  • Can be used as lightweight https service (5)
  • Easy scalability (4)
  • WebHooks (3)
  • Costo (3)
  • Event driven (2)
  • Azure component events for Storage, services etc (2)
  • Poor developer experience for C# (2)
CONS OF AZURE FUNCTIONS
  • No persistent (writable) file system available (1)
  • Poor support for Linux environments (1)
  • Sporadic server & language runtime issues (1)
  • Not suited for long-running applications (1)

              related Azure Functions posts

              Kestas Barzdaitis
Entrepreneur & Engineer · 16 upvotes · 765.3K views

              CodeFactor being a #SAAS product, our goal was to run on a cloud-native infrastructure since day one. We wanted to stay product focused, rather than having to work on the infrastructure that supports the application. We needed a cloud-hosting provider that would be reliable, economical and most efficient for our product.

              CodeFactor.io aims to provide an automated and frictionless code review service for software developers. That requires agility, instant provisioning, autoscaling, security, availability and compliance management features. We looked at the top three #IAAS providers that take up the majority of market share: Amazon's Amazon EC2 , Microsoft's Microsoft Azure, and Google Compute Engine.

AWS has been available since 2006 and has developed the most extensive variety of services and tools at a massive scale. Azure and GCP are about half the AWS age, but they also satisfied our technical requirements.

It is worth noting that even though all three providers support Docker containerization services, GCP has the most robust offering due to their investments in Kubernetes. Also, if you are a Microsoft shop and develop in .NET with Visual Studio, Azure shines at integration there, and all your existing .NET code works seamlessly on Azure. All three providers have serverless computing offerings (AWS Lambda, Azure Functions, and Google Cloud Functions). Additionally, all three providers have machine learning tools, but GCP appears to be the most developer-friendly, intuitive, and complete when it comes to #Machinelearning and #AI.

The prices between providers are competitive across the board. For our requirements, AWS would have been the most expensive, GCP the least expensive, and Azure was in the middle. Plus, if you #Autoscale frequently with large deltas, note that Azure and GCP have per-minute billing, whereas AWS bills you per hour. We also applied for the #Startup programs with all three providers, and this is where Azure shined. While AWS and GCP for startups would have covered us for about one year of infrastructure costs, Azure Sponsorship would cover about two years of CodeFactor's hosting costs. Moreover, the Azure team was terrific: I felt that they wanted to work with us, whereas for AWS and GCP we were just another startup.

              In summary, we were leaning towards GCP. GCP's advantages in containerization, automation toolset, #Devops mindset, and pricing were the driving factors there. Nevertheless, we could not say no to Azure's financial incentives and a strong sense of partnership and support throughout the process.

              Bottom line is, IAAS offerings with AWS, Azure, and GCP are evolving fast. At CodeFactor, we aim to be platform agnostic where it is practical and retain the flexibility to cherry-pick the best products across providers.


              REST API for SaaS application

              I'm currently developing an Azure Functions REST API with TypeScript, tsoa, Mongoose, and Typegoose that contains simple CRUD activities. It does the job and has type-safety as well as the ability to generate OpenAPI specs for me.

However, as the app scales up, there is more duplicated code (for similar operations, like CRUD in each different model). It's also becoming more complex because I need to implement multi-tenancy SaaS for both the API and the database.

So I chose to implement a repository pattern, and I have a "feeling" that .NET and C# will make development easier because, unlike TypeScript, .NET includes native support for dependency injection and great things like LINQ.

It wouldn't take much effort to migrate because I can easily translate interfaces and basic CRUD operations to C#. So, I'm looking for advice on whether it's worth converting from TypeScript to .NET.
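
Regardless of whether the code ends up in TypeScript or C#, the duplication described above is usually factored out with a generic repository. Here is a hedged, language-agnostic sketch of that pattern in Python; the model, method names, and in-memory store are illustrative stand-ins, not the poster's codebase.

```python
from typing import Generic, Optional, Type, TypeVar

T = TypeVar("T")


class Repository(Generic[T]):
    """Generic CRUD repository: one implementation shared by every model."""

    def __init__(self, model: Type[T]):
        self.model = model
        self._store: dict[str, T] = {}   # stand-in for the real database

    def create(self, entity_id: str, entity: T) -> T:
        self._store[entity_id] = entity
        return entity

    def get(self, entity_id: str) -> Optional[T]:
        return self._store.get(entity_id)

    def delete(self, entity_id: str) -> None:
        self._store.pop(entity_id, None)


class Invoice:
    def __init__(self, amount: float):
        self.amount = amount


# Each model reuses the same repository instead of duplicating CRUD code.
invoices: Repository[Invoice] = Repository(Invoice)
invoices.create("inv-1", Invoice(42.0))
```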


              Kubernetes

Manage a cluster of Linux containers as a single system to accelerate Dev and simplify Ops
PROS OF KUBERNETES
  • Leading docker container management solution (164)
  • Simple and powerful (128)
  • Open source (106)
  • Backed by Google (76)
  • The right abstractions (58)
  • Scale services (25)
  • Replication controller (20)
  • Permission management (11)
  • Supports autoscaling (9)
  • Cheap (8)
  • Simple (8)
  • Self-healing (6)
  • No cloud platform lock-in (5)
  • Promotes modern/good infrastructure practice (5)
  • Open, powerful, stable (5)
  • Reliable (5)
  • Scalable (4)
  • Quick cloud setup (4)
  • Cloud agnostic (3)
  • Captain of Container Ship (3)
  • A self-healing environment with rich metadata (3)
  • Runs on Azure (3)
  • Backed by Red Hat (3)
  • Custom and extensibility (3)
  • Sfg (2)
  • GKE (2)
  • Everything of CaaS (2)
  • Golang (2)
  • Easy setup (2)
  • Expandable (2)
CONS OF KUBERNETES
  • Steep learning curve (16)
  • Poor workflow for development (15)
  • Orchestrates only infrastructure (8)
  • High resource requirements for on-prem clusters (4)
  • Too heavy for simple systems (2)
  • Additional vendor lock-in (Docker) (1)
  • More moving parts to secure (1)
  • Additional technology overhead (1)

              related Kubernetes posts

              Conor Myhrvold
Tech Brand Mgr, Office of CTO at Uber · 44 upvotes · 10M views

How Uber developed the open source, end-to-end distributed tracing system Jaeger, now a CNCF project:

              Distributed tracing is quickly becoming a must-have component in the tools that organizations use to monitor their complex, microservice-based architectures. At Uber, our open source distributed tracing system Jaeger saw large-scale internal adoption throughout 2016, integrated into hundreds of microservices and now recording thousands of traces every second.

              Here is the story of how we got here, from investigating off-the-shelf solutions like Zipkin, to why we switched from pull to push architecture, and how distributed tracing will continue to evolve:

              https://eng.uber.com/distributed-tracing/

(GitHub Pages: https://www.jaegertracing.io/, GitHub: https://github.com/jaegertracing/jaeger)

              Bindings/Operator: Python Java Node.js Go C++ Kubernetes JavaScript OpenShift C# Apache Spark
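
For a sense of what instrumenting a service looks like with the Python binding, here is a hedged sketch using the jaeger_client package; the service name, operation name, and tag are made up for illustration.

```python
from jaeger_client import Config


def init_tracer(service_name: str):
    """Create an OpenTracing-compatible tracer that reports to a local Jaeger agent."""
    cfg = Config(
        config={"sampler": {"type": "const", "param": 1}, "logging": True},
        service_name=service_name,
    )
    return cfg.initialize_tracer()


tracer = init_tracer("checkout")            # illustrative service name

with tracer.start_span("charge-card") as span:
    span.set_tag("order.id", "1234")
    # ... call the downstream service being traced here ...

tracer.close()
```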

              Ashish Singh
Tech Lead, Big Data Platform at Pinterest · 38 upvotes · 3M views

To provide employees with the critical need of interactive querying, we've worked with Presto, an open-source distributed SQL query engine, over the years. Operating Presto at Pinterest's scale has involved resolving quite a few challenges, such as supporting deeply nested and huge Thrift schemas, slow/bad worker detection and remediation, cluster auto-scaling, graceful cluster shutdown, and impersonation support for the LDAP authenticator.

              Our infrastructure is built on top of Amazon EC2 and we leverage Amazon S3 for storing our data. This separates compute and storage layers, and allows multiple compute clusters to share the S3 data.

We have hundreds of petabytes of data and tens of thousands of Apache Hive tables. Our Presto clusters consist of a fleet of 450 r4.8xl EC2 instances. Presto clusters together have over 100 TBs of memory and 14K vcpu cores. Within Pinterest, we have more than 1,000 monthly active users (out of 1,600+ total Pinterest employees) using Presto, who run about 400K queries on these clusters per month.

Each query submitted to a Presto cluster is logged to a Kafka topic via Singer. Singer is a logging agent built at Pinterest; we talked about it in a previous post. Each query is logged when it is submitted and when it finishes. When a Presto cluster crashes, we will have query-submitted events without corresponding query-finished events. These events enable us to capture the effect of cluster crashes over time.

Each Presto cluster at Pinterest has workers on a mix of dedicated AWS EC2 instances and Kubernetes pods. The Kubernetes platform provides us with the capability to add and remove workers from a Presto cluster very quickly. The best-case latency on bringing up a new worker on Kubernetes is less than a minute. However, when the Kubernetes cluster itself is out of resources and needs to scale up, it can take up to ten minutes. Another advantage of deploying on the Kubernetes platform is that our Presto deployment becomes agnostic of cloud vendor, instance types, OS, etc.

              #BigData #AWS #DataScience #DataEngineering
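
To ground what interactive querying looks like from a client's point of view, here is a hedged sketch using the presto-python-client (prestodb) package; the coordinator host, catalog, table, and query are illustrative, not Pinterest's actual setup.

```python
import prestodb

# Connection details are illustrative placeholders.
conn = prestodb.dbapi.connect(
    host="presto-coordinator.example.com",
    port=8080,
    user="analyst",
    catalog="hive",
    schema="default",
)

cur = conn.cursor()
cur.execute("SELECT board_id, COUNT(*) AS pins FROM pins GROUP BY board_id LIMIT 10")
for board_id, pins in cur.fetchall():
    print(board_id, pins)
```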
