
How to Get Started With Serverless Architecture

Category: DevOps
Published: February 19, 2024

Key Takeaways

Serverless executes code in short-lived containers in response to events, such as HTTP requests, schedules, pub/sub messages and streams, rather than as long-running daemon processes.

The main benefits are outsourcing deployment complexity, a natural fit for event-driven development, and a pay-per-invocation pricing model that keeps costs low while traffic is modest.

Cold starts can add latency of anywhere from a few hundred milliseconds to around 10 seconds, so serverless suits event-based and asynchronous workloads better than latency-sensitive APIs and websites.

Managed platforms trade flexibility for simplicity and low overhead; self-hosted frameworks running on Kubernetes offer more control at the cost of operational overhead and required domain knowledge.

Serverless refers to an architecture where code is executed in response to events using short-lived containers rather than being daemonised into a long-lived background process using something like gunicorn for Python or Puma for Ruby.

Events can be HTTP requests, time-based rules (think cron), a publish-subscribe mechanism (e.g. Kafka/SNS) or a stream (e.g. NATS/Kinesis). Custom event sources can also be created, and there is an open specification for describing them: CloudEvents. A common use case for serverless is receiving inbound webhooks from external systems, such as Slack custom commands or SaaS-based monitoring platforms, both of which may only be triggered very intermittently.
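To make the webhook use case concrete, here is a minimal sketch of an AWS-Lambda-style handler for a Slack slash command delivered via an HTTP event. The event shape assumes an API Gateway proxy integration, and the field names shown are illustrative rather than an exhaustive contract:

```python
import base64
from urllib.parse import parse_qs


def handler(event, context):
    """Lambda-style handler for an inbound Slack slash-command webhook.

    Slack delivers slash commands as form-encoded POST bodies; an API
    Gateway proxy integration wraps them in an event dict like the one
    unpacked below.
    """
    body = event.get("body", "")
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body).decode()
    params = parse_qs(body)  # form-encoded payload -> dict of lists
    command = params.get("command", ["?"])[0]
    text = params.get("text", [""])[0]
    return {
        "statusCode": 200,
        "body": f"Received {command} with text: {text!r}",
    }
```

Because the function only runs when Slack actually sends a command, nothing is consuming compute between invocations.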

Serverless applications are built into functions, which are usually packaged and shipped as either a Zip containing code and any dependencies or as a Docker image depending on the serverless framework/platform.
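The Zip packaging step can itself be scripted. The sketch below bundles a function directory (code plus any vendored dependencies) into a deployable archive; the function name and layout are illustrative, since each platform documents its own expected structure:

```python
import pathlib
import zipfile


def package_function(src_dir: str, out_zip: str) -> None:
    """Bundle a function directory (code + vendored deps) into a Zip,
    the artifact format many serverless platforms accept."""
    src = pathlib.Path(src_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # Store paths relative to the source dir so the platform
                # finds the handler at the root of the archive.
                zf.write(path, path.relative_to(src))
```

For platforms that take a Docker image instead, the equivalent step is a `docker build` and push to a registry.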

What isn’t it?

Serverless doesn’t refer to just any PaaS where the consumer isn’t required to manage servers, such as Amazon’s RDS or Elastic Beanstalk services; that is a common misconception. Although some PaaS providers may well implement serverless architectures under the hood…

What are the benefits of serverless?

There are three main reasons for adopting serverless technologies:

  • Outsourcing complexity: Deploying software can be complex, and platforms that solve this offer huge advantages, letting development teams focus on the features of the code rather than how to run it. Provisioning a set of instances that auto-scale to cope with the demand of the Docker containers running within them tends to involve significant complexity and domain knowledge.
  • Event-driven development: Some code only needs to execute in response to an external event, such as an inbound webhook or a message published to a queue. Serverless platforms aim to provide a framework for wiring functions to events without requiring processes running in the background 24/7 and consuming compute resources. Some events might only happen once a week/month/year, which means a long-lived daemon is a waste of compute (and electricity!).
  • Reducing cost: Serverless platforms such as AWS Lambda and Google Cloud Functions provide a free tier for a set number of invocations per month, and a pricing model that charges based on the number of requests (executions) performed by your application, with no overhead. Functions can therefore be developed with little to no hosting cost while traffic is relatively low.
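The pay-per-invocation model above can be sketched as a back-of-envelope calculation. The default rates and free-tier figures in this snippet are illustrative only (roughly in the shape of AWS Lambda's request + GB-second pricing); check your provider's current price list before relying on them:

```python
def estimate_monthly_cost(invocations, avg_duration_ms, memory_mb,
                          price_per_million_requests=0.20,
                          price_per_gb_second=0.0000166667,
                          free_requests=1_000_000,
                          free_gb_seconds=400_000):
    """Rough monthly cost for a pay-per-invocation platform.

    Billing is modelled as (requests beyond the free tier) plus
    (GB-seconds of compute beyond the free tier). All rates here are
    illustrative assumptions, not a provider's actual price list.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    billable_requests = max(0, invocations - free_requests)
    billable_gb_seconds = max(0.0, gb_seconds - free_gb_seconds)
    return ((billable_requests / 1_000_000) * price_per_million_requests
            + billable_gb_seconds * price_per_gb_second)
```

For example, 500,000 invocations a month at 100 ms and 128 MB sits entirely inside the modelled free tier, which is what makes serverless attractive for low-traffic workloads.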

Before you get started

There are a few things to be aware of before rushing into serverless. A common issue, and a topic of much discussion recently, is “cold starts”: the delay between a request being made and the function being executed. This delay can be caused by:

  • The function not being present on a server (yes, there are still servers involved), so it needs to be pulled from an image registry or downloaded from a storage endpoint before it can execute.
  • The application doesn’t yet have a network interface up and ready to receive connections.
  • The application runtime isn’t fully initialised e.g. JVM start up.

This latency can be anywhere from a few hundred milliseconds up to 10 seconds depending on a number of factors.

There are workarounds available to reduce the frequency and impact of cold starts, but they tend to be a little messy. Teams therefore tend to avoid serverless for applications that require a fast response, such as APIs and websites, and use it only for event-based or asynchronous applications where a delay of up to 10 seconds has no impact.
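One common mitigation is structuring the function so that expensive initialisation happens once per container rather than once per request. The sketch below assumes an AWS-Lambda-style runtime, where module-scope code runs during the cold start and is reused by every warm invocation that follows; the config value is a stand-in for real setup work such as creating SDK clients:

```python
import json
import time

# Work at module scope runs once, during the cold start, and is then
# reused by every warm invocation served from the same container.
_started = time.monotonic()
_config = json.loads('{"table": "example"}')  # stand-in for loading config/clients


def handler(event, context):
    # Only cheap, per-request work happens here; the expensive setup
    # above is already done on warm invocations.
    return {
        "statusCode": 200,
        "body": json.dumps({
            "container_age_s": round(time.monotonic() - _started, 3),
            "table": _config["table"],
        }),
    }
```

This doesn't eliminate the first cold invocation, but it stops the setup cost being paid again on every request that lands on a warm container.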

What to use to get started

Popular managed serverless platforms:

Google Cloud Run
AWS Lambda
Google Cloud Functions
Azure Functions
IBM Cloud Functions

Popular self hosted serverless frameworks:

Fission
Knative
Kubeless
OpenFaas
OpenWhisk
Fn

Note: all of the self-hosted options mentioned above can run on Kubernetes.

Self hosted vs managed serverless

Managed serverless platforms are popular with freelance developers and small startups due to their minimal cost overhead and out-of-the-box solutions for monitoring, logging and metrics. They also provide simplified CLIs for reliable deployments, and most offer high availability by default.

Managed serverless platforms can often make use of automated SSL certificate generation and renewal systems such as AWS ACM or Let's Encrypt. All this usually means sacrificing some flexibility due to constraints such as function runtimes not being fully customisable, static IP addresses not being configurable, or hard upper limits on things like function package size and execution time.

Self hosted serverless options such as those mentioned above are mainly designed to be run on top of Kubernetes using custom resource definitions.

They offer greater control over function runtimes, deployment and scaling behaviours, but come with the cost overhead of running the Kubernetes control plane and node instances 24/7. They also require Kubernetes administration expertise, which may not be available in house. In an enterprise scenario with existing cluster infrastructure and a larger budget, self-hosted may be the preferred choice in order to conform with network topology and operational tooling, and to make use of existing IAM and RBAC solutions.
