
Why Does Being Cloud-Native Matter?

Category: Cloud
Published: February 19, 2024


We're seeing an industry shift as organisations try to implement better ways of developing software, on the hunt for faster operations, lower costs, increased scalability and tightened security. There are terms to encompass how organisations can achieve those things: becoming cloud-native, developing 12-factor applications and adopting a microservice architecture are a few of the buzz terms floating around.

But what do those terms mean and how can they help companies create maintainable and secure applications? We'll dig into those below, as well as how Docker and Kubernetes come into the picture.

Being 'Cloud-Native' actually refers to how your application is built and run, rather than where it is run - a common misconception. Before getting any further, we put together a quick assessment to help you get an idea of where your organisation is in its journey to becoming more cloud-native:

Take the Cloud-Native Maturity Assessment

There are a few subtle differences between cloud-native and the other two terms we used above, with each of them holding their rightful place in this space:

  • Cloud-native: An approach to building and running applications with a view to removing or dramatically reducing the cost of change and technical debt.
  • 12-factor applications: A methodology that explicitly declares 12 important factors to take into consideration when writing your applications.
  • Microservice architecture: Describes how to build and run your application as a collection of services organised around your business capabilities.


So what does being cloud-native look like in practice? It comes down to a handful of key components.

1. Writing and building small discrete services

Developers being responsible for writing small discrete services has numerous benefits, most of which are about managing complexity. Smaller services mean developers can grasp the full functionality of the service quickly, allowing them to start iterating and providing business value far sooner than when maintaining large and usually more complex monolithic applications.

This approach also enables a team to deploy independently, automate testing, and change the underlying technology without affecting the overall service. Services can also fail independently, enabling much simpler root cause analysis.

Introducing containers to package these applications further reinforces the microservice principles of single-purpose, lightweight and portable applications.
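To make "small discrete service" concrete, here is an illustrative sketch using only Python's standard library: a single-purpose process exposing one health endpoint. The service name, version and `/healthz` path are hypothetical conventions, not anything prescribed by the article.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_payload(version="1.0.0"):
    """Build the JSON body for a liveness check."""
    return json.dumps({"status": "ok", "version": version})

class HealthHandler(BaseHTTPRequestHandler):
    """A deliberately tiny, single-purpose HTTP handler."""
    def do_GET(self):
        if self.path == "/healthz":
            body = health_payload().encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Serve on port 8080 until interrupted.
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```

A service this small is trivial to package into a container, test in isolation, and replace without touching the rest of the system.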

2. Running scalable applications

Scalability defines how well your service handles load. It's key to ensuring you aren't running under- or over-resourced applications, which would have a massive impact on how many users you're able to serve or how much you're spending in cloud infrastructure costs.

Broadly speaking, you can scale your service to handle increased load in one of two ways:

  • Up (vertical): giving an application more resources to work with, i.e. CPU/RAM/disk.
  • Out (horizontal): increasing the number of instances dealing with the load.

Both of these have their pros and cons, but scaling out, although harder to implement (it requires load balancing and/or clustering, etc.), is seen as more efficient, as it allows you to scale back down easily once your load has decreased, therefore saving some precious £££'s.

This is where orchestration tools such as Kubernetes shine. The Horizontal Pod Autoscaler (out) or the Vertical Pod Autoscaler (up), in combination with the Cluster Autoscaler to automatically scale the underlying infrastructure, gives you the elasticity you need to meet demand whilst keeping infrastructure costs to a minimum.
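To make the "out" case concrete, the Horizontal Pod Autoscaler's core scaling rule (desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), with a tolerance band to avoid flapping) can be sketched in a few lines. Treat this as an illustration of the documented rule, not the controller's exact implementation:

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric, tolerance=0.1):
    """Sketch of the HPA scaling rule: scale replicas in proportion to how far
    the observed metric is from its target, skipping changes within tolerance."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: don't flap
    return math.ceil(current_replicas * ratio)
```

For example, 4 replicas averaging 200m CPU against a 100m target would be scaled to 8, and back down towards 2 once the load halves.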

3. Resilience

Resilience is key to ensuring your services are able to operate normally in the face of faults and challenges. This normally refers to challenges caused by external factors, such as instances being removed from service or targeted attacks on your service, but could also be down to misconfiguration of components in the makeup of your system.

Again, the scheduling capabilities of Kubernetes excel in this area: they give you the ability to handle instances going down, to create highly available services across datacentres (affinity rules), and to route network traffic in sophisticated ways to minimise the effects of targeted attacks (service meshes).
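Kubernetes handles infrastructure-level faults, but resilience also applies inside your services: transient failures between microservices are normal and should be tolerated. A common application-level pattern is retrying with exponential backoff and jitter; the sketch below is a generic illustration, not tied to any particular library:

```python
import random
import time

def call_with_retries(operation, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry a flaky callable, doubling the backoff ceiling on each failure
    and sleeping a random ('full jitter') amount to avoid thundering herds."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

Combined with independent failure of small services (section 1), this keeps a single slow dependency from cascading into a full outage.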

4. Observability

Observability can help analyse your entire system for potential faults and changes in user behaviour through a combination of monitoring, alerting, tracing, log aggregation and analytics.

A highly observable system will not only allow you to mitigate potential faults, avoiding service downtime entirely, but also enable a new level of innovation as you understand user behaviour and are able to ask questions of your system in a consistent manner.

Observability can be an incredibly difficult thing to administer, especially if you have written your microservices in a range of languages. However, implementing it as a platform capability, with an understanding of traffic flow, makes it a lot easier.
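One building block that makes log aggregation tractable across services written in different languages is emitting structured, machine-parseable logs rather than free text. A minimal sketch using Python's standard logging module; the "checkout" service name and the field set are hypothetical:

```python
import json
import logging
import sys
import time

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so any aggregator can parse it."""
    def format(self, record):
        return json.dumps({
            "ts": time.time(),
            "level": record.levelname,
            "service": "checkout",  # hypothetical service name
            "message": record.getMessage(),
            # Trace IDs let you correlate one request across services.
            "trace_id": getattr(record, "trace_id", None),
        })

def make_logger():
    """Build a logger that writes JSON lines to stdout."""
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("checkout")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger
```

Because every service emits the same shape of record, the aggregation layer can answer questions ("show all errors for this trace ID") without per-language parsing rules.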

5. Robustness

Robustness touches upon most of the areas noted above, as well as introducing new principles to improve the overall security posture of your service. These are:

  • Paranoia: Developers assume users will be able to break out of their code, or that their code will fail.
  • Stupidity: Developers assume users will try incorrect or malformed inputs.
  • Dangerous implements: Users should not be able to gain access to libraries, data structures or pointers to data structures, avoiding the risk of users finding loopholes in the service.
  • Can't happen: Over time, when code is modified, developers may introduce cases that shouldn't be possible.

A developer or operations engineer should think about how each of the areas above can be handled and mitigated to guard against erroneous use and external threats, generally through testing, to build robust, well-automated services.
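The "stupidity" and "can't happen" principles translate directly into defensive input handling. The sketch below illustrates the idea with a hypothetical quantity field and an equally hypothetical valid range; the point is that every assumption about the input is checked rather than trusted:

```python
def parse_quantity(raw):
    """Defensively parse an order quantity: assume the input is malformed
    until proven otherwise ('stupidity'), and reject values that
    'can't happen' for this hypothetical field."""
    if not isinstance(raw, str):
        raise TypeError("quantity must arrive as a string")
    text = raw.strip()
    if not text.isdigit():           # rejects '', '-3', '12.5', '1e9', 'abc'
        raise ValueError(f"not a whole number: {raw!r}")
    value = int(text)
    if not 1 <= value <= 10_000:     # guard the 'can't happen' range
        raise ValueError(f"quantity out of range: {value}")
    return value
```

Rejecting impossible values at the boundary means later code can rely on its invariants, and a future modification that violates them fails loudly in testing rather than silently in production.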

In the case of Docker, having slimmed-down containers with only the binaries and dependencies absolutely necessary to run your service helps with the paranoia and dangerous implements principles, but there's no replacement for rigorous testing of your application code to ensure all four principles are adhered to.

Becoming more cloud-native

While those are key components and benefits of becoming a cloud-native organisation, writing 12-factor apps in a microservice architecture can be a huge undertaking. However, the efficiencies it brings to your organisation and teams are worth the effort. Overall, it will allow your services to be developed with a lower cost of change and technical debt, improve your speed of delivery, and provide a more reliable service to your users.
