Serverless remains a hot topic: it removes the frustration and concern of managing hardware and lets you focus on individual code functions. Here are six serverless frameworks that run on Kubernetes that you need to know...
OpenFaaS is an independent project started by Alex Ellis, who was a Senior Engineer at VMware and now works full time on the project. It has a wide and active community of contributors.
It is described as a framework for Docker and Kubernetes with first-class support for metrics. Its architecture consists of:
API gateway: This handles routing requests to functions and the collection of metrics using Prometheus.
Watchdog: This is a lightweight Golang-based web server that acts as a generic entry point for functions. It receives an HTTP request and invokes a function by forwarding the request body via standard input and awaiting the response on standard output.
Queue Worker: Works with a NATS queue to provide asynchronous invocation of functions, with a configurable callback URL.
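The classic watchdog contract above can be sketched in a few lines of Python. This is a hedged illustration, not OpenFaaS source: the function names (`handle`, `run`) follow the convention used by the OpenFaaS Python templates, but the wiring shown here is a simplified stand-in for what the watchdog actually does.

```python
import sys

def handle(req):
    # Application code: receives the raw request body as a string
    # and returns the response body.
    return "Hello, " + (req.strip() or "world")

def run(stdin=sys.stdin, stdout=sys.stdout):
    # Classic watchdog contract (simplified): the request body arrives
    # on standard input and the response is written to standard output.
    stdout.write(handle(stdin.read()))
```

Because the watchdog only needs a process that reads stdin and writes stdout, almost any language or binary can be packaged as a function.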
OpenFaaS is managed using the faas-cli, which can be installed on macOS using Homebrew. Deployment of OpenFaaS to Kubernetes is done using either a Helm chart or raw resource YAML.
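The faas-cli drives deployments from a YAML stack file. A minimal sketch of one might look like the following; the gateway URL, function name and image are placeholder assumptions for illustration.

```yaml
# stack.yml — a minimal faas-cli stack file (illustrative sketch)
provider:
  name: openfaas
  gateway: http://127.0.0.1:8080   # assumed local gateway address

functions:
  hello:
    lang: python3          # template used to build the function
    handler: ./hello       # directory containing the handler code
    image: example/hello:latest   # hypothetical registry image
```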
Prometheus is used for exposing function metrics and the default autoscaling behaviour (which can be swapped out for HPA).
Lots of prebuilt triggers and runtimes available
Useful metrics available out of the box
Detailed performance test instructions
Popular with an active community
No serverless.com provider
The next framework, Apache OpenWhisk, is built around the following concepts:
Actions: This is the function, which contains application code in whatever language you choose.
Triggers: This refers to a group of events e.g. messages published to a topic or HTTP requests.
Feeds: This refers to a stream of events, e.g. inbound webhook calls. A feed can be implemented using one of three patterns (hooks, polling or connections) by creating an action (function) which accepts a set of defined parameters.
Alarms: Used to create periodic, time-based triggers.
Rules: A rule associates one trigger with one action and injects the trigger event as an input.
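An OpenWhisk action in Python is simply a module exposing a `main` function that receives a dict of parameters (the trigger event merged with any bound defaults) and returns a JSON-serialisable dict. A minimal sketch:

```python
# hello.py — a minimal OpenWhisk action (illustrative sketch)
def main(params):
    # params is a dict of invocation parameters; a rule injects
    # the trigger event here as input.
    name = params.get("name", "world")
    return {"greeting": "Hello, {}!".format(name)}
```

Deployed with the wsk CLI, this action could then be bound to a trigger via a rule, as described above.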
It supports deployment on Kubernetes, Mesos, OpenShift and Compose although recommends Kubernetes.
Option for self-managing or consuming a hosted version using IBM Bluemix
Feels very complex in terms of architecture and getting started
Written in Scala (all the others are Golang-based)
Kubeless is a Bitnami project which describes itself as “the most Kubernetes native of them all”. It has a very active community and high-quality documentation. Getting it installed and running on a Kubernetes cluster was super easy and the architecture is very simple to get your head around. On deployment, Kubeless creates three Custom Resource Definitions, called functions, httptriggers and cronjobtriggers, and a Deployment consisting of three containers: the function controller, HTTP trigger controller and cron job controller.
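A Kubeless function in Python is a plain handler taking an event and a context. This is a hedged sketch: the payload living under `event["data"]` follows the Kubeless runtime convention, and the function body is illustrative.

```python
# hello.py — a minimal Kubeless function (illustrative sketch)
def hello(event, context):
    # event is a dict describing the invocation; the request
    # payload is delivered under event["data"].
    return "Hello, {}!".format(event["data"])
```

The function is stored in the Function custom resource itself, which is why no separate database is needed.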
Very simple to provision
The kubectl CLI can be used to manage kubeless functions
AWS Lambda compatible interface
A serverless.com provider is available for managing deployment.
Uses CRDs to store state (in etcd), so no need for a separate database
At the time of writing, it does not provide scale to zero functionality.
Knative is a framework developed mainly by Google and Pivotal, with smaller contributions from Red Hat and IBM. It is made up of three high-level components:
Building: This provides tools to help build source code into containers.
Serving: This is the runtime element which takes care of routing requests or events to serverless functions.
Eventing: This is the component which deals with both producing and consuming events.
It is a Kubernetes-based platform which depends on Istio, a project built around the super-fast Envoy proxy developed at Lyft. Istio offers a range of features including network policy enforcement, monitoring of traffic between microservices and load balancing. Knative can be enabled on GKE using the Google form, or it can be deployed to any cluster using Kubernetes resource YAML. Knative pulls in a large number of open source tools, including Zipkin, StatsD, Fluentd, Elasticsearch and others.
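With Knative installed, a serverless workload is declared as a Serving Service resource. A rough sketch, assuming the v1alpha1 API that was current at the time of writing (the service name and image are illustrative placeholders):

```yaml
# service.yaml — a minimal Knative Serving Service (illustrative sketch)
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld        # hypothetical service name
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/example/helloworld   # hypothetical image
```

Knative's Serving component then handles routing, revisioning and scaling (including scale-to-zero) for the container.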
Google have recently released Cloud Run, a managed serverless service based on Knative.
Built with a focus on the CloudEvents specification
Backed by Google!
Provisioning Knative and its dependencies creates 110 CRDs, 24 deployments, 3 daemonsets and 51 containers in total, and that’s before deploying any functions!
The minimum recommended cluster size is four n1-standard-4 nodes in GKE. This has a cost of just under $400 per month, which could be seen as a significant cost overhead, depending on team size & budget.
Fission is a project built and maintained by Platform9 along with other contributors. It is described as being focussed on “developer productivity and high performance” and is specifically designed for running atop Kubernetes. It is written in Golang (shock) and, although the GitHub page states that the project is “in early alpha and not ready for production just yet”, it feels relatively mature given its list of features, including canary deployments and live-reload (very cool), as well as its active developer community and over 4,000 stars on GitHub.
It defines three main concepts:
Environment: This refers to a pre-built Docker image providing the runtime components such as a specific language installation along with a web server and dynamic loader used to wire a request or event into application code upon invocation.
Function: This is the application code, following Fission's structure.
Trigger: This is something that causes a function to be executed; at the time of writing this can be an HTTP request, a time-based trigger (think cron) or a message queue (NATS, Kafka or Azure Storage Queue).
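For Fission's Python environment, a function is a module whose `main` is called by the environment's dynamic loader on each invocation, with the return value becoming the HTTP response body. A hedged, minimal sketch:

```python
# hello.py — a minimal function for Fission's Python environment
# (illustrative sketch: the loader imports this module and calls main()
# for each request or event routed to the function)
def main():
    return "Hello, world!\n"
```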
Fission provides a CLI called… fission! It is distributed as a binary and is used to administer the Fission platform: creating, deleting and updating functions, environments and triggers, as well as viewing invocation logs.
Choice of executors allows for scale-to-zero, or for keeping pods warm to avoid cold starts
Most of Fission's components can't scale out (currently only the router can), leaving some uncertainty about performance at real scale and a risk of downtime due to the lack of resilience
Fn is a project which started life as IronFunctions and is now backed by Oracle. It is marketed as being 'container native' and not specific to any particular cloud provider or container orchestrator. When provisioned on Kubernetes it depends upon cert-manager, ingress-nginx, MySQL and Redis. Functions are built into a Docker image, pushed to a registry and deployed using the fn CLI. HTTP is the only supported trigger type.
It is composed of four components:
Fn Server: The core component of the platform which manages build, deployment and scaling for functions. Described as being multi-cloud and container-native.
Load balancer: The load balancer routes requests to functions and keeps track of “hot functions”, those that have their image pre-pulled to a node and are ready to receive requests.
Fn FDKs: Function Development Kits allow developers to bootstrap functions in their chosen language by providing a data binding model for function inputs.
Fn Flow: Allows developers to orchestrate workflows for functions, e.g. parallel, sequential and fan-out style execution.
Build and push of Docker images is abstracted away from the developer, meaning little knowledge of Docker is required
A serverless provider is available for deploying and managing Fn functions
None off the top of our heads!
To get started with any of the frameworks above, see this repository which contains a guide on writing and deploying a simple Python function to each.