
Apache OpenWhisk vs Knative


Overview

Apache OpenWhisk
  • Stacks: 58
  • Followers: 149
  • Votes: 7

Knative
  • Stacks: 86
  • Followers: 342
  • Votes: 21
  • GitHub Stars: 5.9K
  • GitHub Forks: 1.2K

Apache OpenWhisk vs Knative: What are the differences?

  1. Cost model: One key difference between Apache OpenWhisk and Knative is how costs are incurred. OpenWhisk's managed offerings (IBM Cloud Functions, for example) follow a traditional pay-per-use model, where users are billed for the compute their actions actually consume. Knative, by contrast, runs on a Kubernetes cluster you already operate, and its per-service autoscaling settings (concurrency targets and scale bounds) give granular control over how much of that cluster each workload may use, which is where its cost management happens.

  2. Language support: Both OpenWhisk and Knative support multiple programming languages, but at different levels. OpenWhisk ships runtimes for several languages out of the box, including Java, JavaScript, Python, Swift, and more, so functions can be deployed as plain source code (a minimal action sketch in Python follows this list). Knative primarily targets container-based workloads: it can run any language, provided the code is first packaged as a container image.

  3. Eventing model: Apache OpenWhisk and Knative take different approaches to eventing. OpenWhisk is fully event-driven: triggers fire in response to events, and rules bind those triggers to the actions that should run. Knative Eventing offers a more structured approach, letting users declare event sources, brokers and channels, and triggers with filters, which enables more sophisticated event management and routing (a minimal subscriber sketch appears in the detailed comparison below).

  4. Scaling technology: Another important difference lies in how the two platforms scale. OpenWhisk scales to zero automatically: action containers that are not receiving invocations are deallocated, so idle functions consume no compute. Knative Serving also scales to zero by default, but its autoscaler is configurable per service; setting a minimum scale of one or more keeps an instance warm and reduces cold-start delay, at the cost of consuming resources while idle (a deployment sketch showing this setting appears after the list).

  5. Deployment flexibility: OpenWhisk and Knative also differ in deployment flexibility. Apache OpenWhisk is available as a fully managed service (such as IBM Cloud Functions), so it can be consumed without worrying about infrastructure management, although the platform can also be self-hosted. Knative, on the other hand, is installed onto a Kubernetes cluster of your choosing, which provides more flexibility and control over the underlying infrastructure.

  6. Community Support: The level of community support and adoption also differs between Apache OpenWhisk and Knative. OpenWhisk has been around for a longer time and has established a larger community with a broader range of contributors, resulting in a more extensive ecosystem of tools, plugins, and integrations. Knative, being a relatively newer project, has a smaller community but is rapidly gaining popularity, especially within the Kubernetes community.
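To make the OpenWhisk programming model concrete (point 2 above), here is a minimal action sketch in Python. The action name, parameter, and greeting logic are illustrative; what matters is the shape: OpenWhisk calls a function named main with the invocation parameters as a dict and expects a JSON-serializable dict back.

    # hello.py -- a minimal Apache OpenWhisk action (illustrative sketch).
    # OpenWhisk invokes the function named "main" with the trigger/request
    # parameters merged into one dict and expects a JSON-serializable dict back.
    def main(params):
        name = params.get("name", "world")  # "name" is a hypothetical parameter
        return {"greeting": f"Hello, {name}!"}

Deploying it is a single CLI call along the lines of wsk action create hello hello.py --kind python:3 (the exact runtime kind depends on what your OpenWhisk deployment ships), and the action can then be wired to a trigger so it runs in response to events.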
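On the Knative side (points 4 and 5), a Service is just a Kubernetes custom resource, so it can be created with any Kubernetes client. The sketch below uses the official Python client; the image, service name, and namespace are placeholders, and the min-scale annotation should be checked against the Knative version you run: "0" permits scale-to-zero, while "1" keeps a warm replica to avoid cold starts.

    # deploy_knative_service.py -- sketch of creating a Knative Service on an
    # existing Kubernetes cluster (assumes Knative Serving is installed and
    # that kubectl credentials are available locally).
    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config() inside a pod

    service = {
        "apiVersion": "serving.knative.dev/v1",
        "kind": "Service",
        "metadata": {"name": "hello"},  # placeholder name
        "spec": {
            "template": {
                "metadata": {
                    "annotations": {
                        # "0" permits scale-to-zero; "1" keeps one warm replica.
                        "autoscaling.knative.dev/min-scale": "0",
                    }
                },
                "spec": {
                    "containers": [
                        {"image": "example.registry/hello:latest"}  # placeholder image
                    ]
                },
            }
        },
    }

    # Knative Services are Kubernetes custom resources, so the generic
    # custom-objects API is enough to create one.
    client.CustomObjectsApi().create_namespaced_custom_object(
        group="serving.knative.dev",
        version="v1",
        namespace="default",
        plural="services",
        body=service,
    )

Because the resource lives in your own cluster, the same manifest works on any Kubernetes installation that has Knative Serving installed.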

In summary, Apache OpenWhisk and Knative differ in their cost model, language support, eventing model, scaling technology, deployment flexibility, and community support.


Detailed Comparison

Apache OpenWhisk

OpenWhisk is an open source serverless platform. It is enterprise grade and accessible to all developers thanks to its superior programming model and tooling. It powers IBM Cloud Functions, Adobe I/O Runtime, Naver, and Nimbella, among others.

Key features: serverless functions (FaaS); fine-grained resource consumption; use any language; containers as functions; function composition and sequences (step functions); Docker and Kubernetes support; open source Apache community.

Knative

Knative provides a set of middleware components that are essential to build modern, source-centric, and container-based applications that can run anywhere: on premises, in the cloud, or even in a third-party data center.

Key features: Serving - scale-to-zero, request-driven compute model; Build - cloud-native source-to-container orchestration; Eventing - universal subscription, delivery, and management of events; serverless add-on for GKE - a GCP-managed serverless stack on Kubernetes.
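Knative Eventing delivers events to plain HTTP endpoints as CloudEvents, so a subscriber can be any container that speaks HTTP. Below is a minimal receiver sketch using only the Python standard library; the port handling, the header names it inspects (the standard binary-mode ce-* attributes), and the logging are illustrative.

    # event_receiver.py -- sketch of a Knative Eventing subscriber.
    # Knative delivers CloudEvents over HTTP POST; in binary content mode the
    # event attributes arrive as "ce-*" headers and the payload as the body.
    import os
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CloudEventHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            event_type = self.headers.get("ce-type", "<unknown>")
            source = self.headers.get("ce-source", "<unknown>")
            length = int(self.headers.get("content-length", 0))
            body = self.rfile.read(length)

            print(f"received event type={event_type} source={source} body={body!r}")

            # A 2xx response tells the channel/broker the event was accepted.
            self.send_response(202)
            self.end_headers()

    if __name__ == "__main__":
        # Knative injects the port to listen on via the PORT environment variable.
        port = int(os.environ.get("PORT", 8080))
        HTTPServer(("", port), CloudEventHandler).serve_forever()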
Statistics

  • GitHub Stars: Apache OpenWhisk - / Knative 5.9K
  • GitHub Forks: Apache OpenWhisk - / Knative 1.2K
  • Stacks: Apache OpenWhisk 58 / Knative 86
  • Followers: Apache OpenWhisk 149 / Knative 342
  • Votes: Apache OpenWhisk 7 / Knative 21
Pros & Cons

Apache OpenWhisk pros:
  • You are not tied to a provider (IBM is available, however) (4 votes)
  • Still exploring, but it's just interesting (3 votes)

Knative pros:
  • Portability (5 votes)
  • Autoscaling (4 votes)
  • Built on top of Kubernetes (3 votes)
  • Secure eventing (3 votes)
  • Open source (3 votes)
Integrations

Node.js, Visual Studio Code, JavaScript, Python, npm, Kubernetes, Docker, Swift, Java, Slack, Google Kubernetes Engine

What are some alternatives to Apache OpenWhisk and Knative?

AWS Lambda

AWS Lambda is a compute service that runs your code in response to events and automatically manages the underlying compute resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end services that operate at AWS scale, performance, and security.

Azure Functions

Azure Functions is an event driven, compute-on-demand experience that extends the existing Azure application platform with capabilities to implement code triggered by events occurring in virtually any Azure or 3rd party service as well as on-premises systems.

Google Cloud Run

A managed compute platform that enables you to run stateless containers that are invocable via HTTP requests. It is serverless in that it abstracts away all infrastructure management.

Serverless

Build applications composed of microservices that run in response to events, auto-scale for you, and only charge you when they run. This lowers the total cost of maintaining your apps, enabling you to build more logic, faster. The framework uses event-driven compute services such as AWS Lambda, Google Cloud Functions, and more.

Google Cloud Functions

Construct applications from bite-sized business logic billed to the nearest 100 milliseconds, only while your code is running.

OpenFaaS

Serverless Functions Made Simple for Docker and Kubernetes

Nuclio

nuclio is portable across IoT devices, laptops, on-premises datacenters and cloud deployments, eliminating cloud lock-ins and enabling hybrid solutions.

Cloud Functions for Firebase

Cloud Functions for Firebase lets you create functions that are triggered by Firebase products, such as changes to data in the Realtime Database, uploads to Cloud Storage, new user sign ups via Authentication, and conversion events in Analytics.

AWS Batch

It enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. It dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted.

Fission

Write short-lived functions in any language, and map them to HTTP requests (or other event triggers). Deploy functions instantly with one command. There are no containers to build, and no Docker registries to manage.

Related Comparisons

  • Bootstrap vs Materialize
  • Django vs Laravel vs Node.js
  • Bootstrap vs Foundation vs Material UI
  • Node.js vs Spring Boot
  • Flyway vs Liquibase