Tennis Smith, October 29, 2021
Since it burst onto the scene in 2015, the industry seems to be constantly talking about Kubernetes: What it can do, the future of it, how it integrates with technology X and tool Y. But you’ll rarely hear concrete reasoning pointing to why organizations should look at using Kubernetes in the first place.
As the Kubernetes documentation defines it, Kubernetes is an open-source system for automating deployment, scaling and management of containerized applications. There are solid reasons for adopting Kubernetes as your container orchestration platform (and there are some equally solid reasons why you might not be ready to adopt Kubernetes).
Here are five of the biggest benefits that Kubernetes brings that might just convince you to make the move.
The Cloud Native model has four layers of security, commonly known as the '4 Cs of Cloud Native security': Cloud, Cluster, Container and Code.
Each layer has its own security hooks and recommendations. At the Cloud Provider level, there are literally hundreds of security options to thoroughly harden the environment. At the Cluster level, there's a very granular Role-Based Access Control (RBAC) facility to further ensure only authorized users have access to cloud resources. At the Container level, you can use facilities such as seccomp to limit a process's system calls, and restrict object access based on user or group ID. Finally, at the Code level, you can restrict access to specific ports and, if you use TCP, encrypt your traffic in transit.
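As a minimal sketch of the Cluster-level RBAC mentioned above, a namespaced Role and RoleBinding can grant a team read-only access to pods and nothing else. All of the names here (`pod-reader`, `dev-team`, the `default` namespace) are illustrative, not taken from any real environment:

```yaml
# Hypothetical RBAC sketch: read-only pod access in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" is the core API group (pods, services, ...)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: Group
  name: dev-team           # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because permissions default to deny, anything not explicitly listed under `verbs` and `resources` stays off-limits to that group.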
A tiered approach to security has the obvious benefit of lessening the chance of your environment being compromised. But less obvious is the ability to enforce compliance with industry-specific standards such as PCI-DSS, HIPAA and SOC 2.
Kubernetes facilities, such as logging and monitoring, are exported to the infrastructure instead of being required to be part of the application itself. In other words, the infrastructure takes on complexity so the application does not have to.
Because of this, applications can be written in a simplified and uniform way that’s easy to reproduce and quick to implement! With this simplified process, developers can stay focused on building the application instead of all the ancillary things they had to be responsible for before (like logging and load balancing). This has a hugely positive effect, greatly reducing the amount of code needed to create an application.
By running a microservice-based application, your individual components tend to be broken into small chunks, making it much easier and faster to iterate. Developers can go from a few releases a year in the monolithic model to several releases per day in the containerized microservices model.
In the old monolithic architecture, scaling was exceedingly difficult. Once an application was written, frequently the only option was to run larger VMs in order to handle additional traffic. Simply put, the answer to higher traffic volumes was to 'throw hardware at it'.
Kubernetes assumes applications will need to scale. If more than one copy of an application component is needed, Kubernetes can automatically create it. By using the Horizontal Pod Autoscaler, applications can be scaled to meet traffic volume demands based on metrics such as CPU utilization.
A big benefit: The application does not have to have any awareness of the number or location of the application instances. Kubernetes takes care of generating as many copies as needed to meet those demands. In addition, there are also rules governing the location of each application instance to avoid bunching instances on a single node (and therefore a single point of failure).
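To sketch what that looks like in practice, a HorizontalPodAutoscaler can target a Deployment and keep average CPU around a threshold. The names (`web-hpa`, `web`) and numbers below are illustrative, and the `autoscaling/v2` API version depends on your cluster version (older clusters use `autoscaling/v2beta2`):

```yaml
# Hypothetical HPA: scale the "web" Deployment between 2 and 10
# replicas, targeting ~70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The application itself needs no changes: Kubernetes adds or removes replicas, and the Service in front of them routes traffic to whatever instances exist at the moment.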
Previously, it was necessary to have elaborate (and generally custom) solutions for monitoring processes and ensuring that they were restarted. It was even more complex if multiple processes were involved, much less multiple VMs.
Kubernetes, however, is self-healing. If a node, pod or container fails, Kubernetes will automatically reallocate resources to heal the failure. Core control-plane components, such as etcd (the cluster state database) and the components on the master nodes, are designed to run replicated, with leader-election protocols ensuring one instance of each acts as the leader at any given time.
Should one of these components fail, a new leader is elected. Similarly, if a worker node fails, or anything running on a worker node fails, the affected workloads are automatically recreated. Components are never allowed to simply die without some kind of error-recovery process.
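For your own workloads, that self-healing behaviour falls out of declaring a desired state. As an illustrative sketch (the `api` name, image and port are all hypothetical), a Deployment asks for three replicas, and Kubernetes recreates any pod that dies to maintain that count:

```yaml
# Hypothetical Deployment: Kubernetes keeps 3 replicas running,
# rescheduling pods if a container, pod or node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: example.com/api:1.0   # hypothetical image
        livenessProbe:               # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
```

There is no recovery code in the application: the cluster continuously compares actual state to desired state and reconciles the difference.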
Every application needs to log various information for auditing and debugging purposes. In the old days, one of the first things that needed to be done when building an application was to create a logging subsystem of some kind, which had to be done manually each and every time an application was developed. Yikes.
Not anymore, because logging is included with Kubernetes. All major components of K8s generate logs which are available from the command line. You can retrieve logs from the current instance of the container or previous instances. Applications don’t have to have any special facility for logging. If they simply write to standard out (stdout) and standard error (stderr), their data will be captured in the Kubernetes logs.
There are some big benefits of Kubernetes if it's done the right way. We don't want to sugar-coat it: those big benefits can come at a cost, namely the sheer complexity of Kubernetes. As much as it's renowned for being widely used and widely loved, it's also widely understood that the complexity of Kubernetes can lead organizations into a downward spiral.
As early adopters, we’re big fans of Kubernetes. Throughout the long journey we’ve been on with implementing Kubernetes in highly-regulated, highly-secure environments, we’ve also experienced those complexities ourselves. That’s exactly why we set out to make Kubernetes less complicated in the first place: to help enable enterprises to harness the power of cloud and Kubernetes without the pains.
Our Professional Services team can work with you to refactor and containerize your applications. And our flagship product, Wayfinder, is a cost-effective solution that automates the complexities of Kubernetes cluster management, so that your teams don't need to climb the Mount Everest of endless Kubernetes upskilling and education.
If you’re going to get into Kubernetes, know exactly what you’re getting into and how to make it as easy as possible.
Already using Kubernetes? Take our free Kubernetes Risk Assessment to make sure your implementation is secure.