Go Beyond Kubernetes Management With Kore Operate

07 May 2020 by Jon Shanks

Before diving into the detail of what Kore Operate is, these are the core (pun intended) principles that drove us to create it:

  1. Make Kubernetes a commodity for teams, with repeatability, single tenancy and as much isolation as possible

  2. Integrate with an organisation's identity provider to make authentication central and easy

  3. Challenge the relationship between teams, cloud accounts, Kubernetes and cloud resources

  4. Support multiple cloud providers and leverage cloud services as much as possible

  5. Give developers the ability to self-serve

  6. Make sure that security best practices are built-in

  7. Make costs visible

  8. Make it easy to manage and reduce costs accordingly

So, why specifically these eight principles?

For companies, going from a business idea or a new feature and iterating through to Alpha, Beta and Live is a painful experience. Pulling at infrastructure in various ways, with different tools, systems and processes to enable development teams to deliver, involves a lot of varying roles, skills, time and money. On top of that is the complexity of understanding the security posture as things change at the infrastructure and tooling level around the applications.

The roles that play a big part in the success of what is delivered include developers, operations and platform engineers, security specialists, product owners and budget holders.

The developer

Container technologies have revolutionised application development, making it far easier for developers to test and iterate code locally by pulling down versioned artefacts as dependencies. That speed is often lost, however, when it comes to working with infrastructure and cloud. Not being able to self-serve the cloud resources your application depends on, or create environments when you need them, causes a lot of delays through dependencies on external teams or specialist resources.

Developers know which cloud technologies will help build business features, but often lack the autonomy to consume what they need, when they need it. Where tools are provided to let them provision cloud services, they tend to be complex, have a steep learning curve and are not particularly developer friendly.

Operations and security teams

For platform or operations teams, the need to gate is usually driven by tooling that is not developer friendly, by tasks that are not a good use of a developer's time, or by the need to enforce best practices and security principles that require specialist domain knowledge.

Meeting developer needs in a secure, best-practice way means upfront effort working with cloud tooling and varying technologies, and being a jack of all trades. This results in delays to features, as it isn't as simple as providing an environment or a cloud dependency like a database: it means providing an isolated, secure environment with the relevant role-based access controls for the team members, or a database with the right level of computational resources, backups, encryption and so forth.

Product owner / budget holder

Above the technical implementation sits the cost and time management of what is being delivered. Can we A/B test feature Z to understand its business value quickly? How much is the product really costing us, and can we reduce those costs? Can the service scale to the demand we need without additional engineering cost and time?

These are all valid questions and business requirements, but answering them without the work becoming backlog items in the sprints is challenging. When the focus has been on delivery rather than cost, cloud hosting is unlikely to be as efficient or as visible as it could be.

Modelling the relationships

Having come from running several multi-tenant clusters supporting a huge number of projects, teams and developers, we knew central platforms can cause operational nightmares and engineering challenges, as well as difficulties in providing granular cost transparency to projects.

We wanted to take a more single-tenant approach, but focus on repeatability with the right guard rails in place for teams. We also didn't want to burden developers with things that offer no real business value and divert them from being productive.

When it came to product objectives, there were several things we wanted to achieve with Kore Operate:

  1. Make Kubernetes a commodity for teams

  2. Integrate with an organisation's identity provider

  3. Map the relationship between teams, cloud accounts, Kubernetes and cloud resources

  4. Support multiple cloud providers and leverage cloud services as much as possible

  5. Give developers the ability to self-serve

  6. Make sure that security best practices are built-in

  7. Make costs visible

  8. Make it easy for teams to manage and reduce costs

The flow ended up looking like the diagram below:

This then translated into the aspects that we wanted Kore Operate to manage:

  • Isolated cloud project / account provisioning for teams (non-production and production), creating a totally isolated team area to run their services in

  • Identity configuration into the organisation's identity provider

  • Provisioning Kubernetes, leveraging the managed Kubernetes services

  • Managing access into the clusters for teams

  • Allowing teams to provision namespaces (environments) within the clusters, with the relevant team access proliferated across

The areas in green represent Kore responsibilities: things we want to manage and automate on behalf of developer users. How we went about this had to keep sight of the main objectives mentioned at the beginning of this article: self-service with security best practices. For both, we wanted to keep things as simple as possible and apply the KISS (Keep It Simple, Stupid) design principle throughout.

Plans, policies and settings

To avoid burdening developer teams with complex decisions around Kubernetes security or cloud account architecture best practices, we split the structure into three parts.


Plans

A predefined set of options for the managed Kubernetes service in the cloud, i.e. EKS, GKE, AKS etc. These cover things such as instance type, subnets, number of nodes and private endpoints / nodes. To developer teams, a plan simply appears as “Production GKE” or “Development EKS”.


Policies

A policy controls behaviour around a plan in a specific way. It lets an admin of Kore allow teams to override parts of a plan, such as instance types or the number of nodes, both at deployment time and at run time, while keeping the rest of the configuration enforced.
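As a minimal sketch of that idea, team overrides can be validated against the set of keys a policy marks as overridable before being merged into the plan. The function and field names here are illustrative, not Kore Operate's actual schema:

```python
# Illustrative sketch: merge team overrides into a plan, permitting only
# the keys a policy marks as overridable (hypothetical field names).
def apply_overrides(plan, overridable, overrides):
    for key in overrides:
        if key not in overridable:
            raise ValueError(f"policy forbids overriding {key!r}")
    return {**plan, **overrides}

plan = {"instanceType": "n1-standard-2", "nodeCount": 3, "privateEndpoint": True}
policy = {"instanceType", "nodeCount"}  # keys teams may change

cluster = apply_overrides(plan, policy, {"nodeCount": 5})
# nodeCount is overridden; privateEndpoint remains enforced by the plan
```

Anything outside the overridable set, such as disabling a private endpoint, is rejected rather than silently applied.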

Cloud automation settings

A way of controlling a desired outcome automatically. This could be through decisions an admin makes on what the ideal cloud account structure is for a team, i.e. every team gets a non-production and a production cloud account / project. Production plans may then be associated with the production accounts / projects, so if a developer team chooses a production plan, the Kubernetes cluster gets provisioned in the appropriate production account / project.
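Sketched out, such a setting maps a plan's environment to the team cloud account or project the cluster should land in. The account names here are made up for illustration:

```python
# Illustrative sketch: route a chosen plan to the appropriate team
# account / project based on its environment (hypothetical names).
automation_settings = {
    "production": "team-a-prod",
    "non-production": "team-a-nonprod",
}

def account_for_plan(plan_name):
    env = "production" if plan_name.startswith("Production") else "non-production"
    return automation_settings[env]

account_for_plan("Production GKE")    # -> "team-a-prod"
account_for_plan("Development EKS")   # -> "team-a-nonprod"
```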

The above is how it looks for an administrator of Kore Operate: they get to define ‘what good looks like’ for the clusters being provisioned by teams.

To avoid admins and developers going backwards and forwards with plan changes, parameters can be made overridable by teams where an admin sees fit. Things such as the instance type and number of nodes can be overridden, while settings such as locking the API endpoints down to known IP addresses can remain enforced, along with pod security policies and other best practices.

How an organisation wants the cloud account or project architecture to look for a team is managed in the same way, via the cloud automation settings. As organisations may have a defined structure they want to adopt, we want to keep this reasonably flexible and allow the admin to configure the flow and creation to meet their organisation's needs. There is, of course, the key best practice of separating production and non-production data and workloads, which we provide by default.

How it comes together for developers

The administrative configuration is all about keeping things simple and secure for developer teams, which means a lot of the configuration and setup is hidden away from the developer.

Their entry point, as mentioned, is the cloud provider (Google, Amazon or Azure) and the cluster plans, either ones set up by the admin or the ones we provide.

A developer's focus is deploying their applications into Kubernetes without needing any specialist knowledge of how to run them securely, reliably or at scale.

Environments, access and industry tooling

In Kubernetes, namespaces are a way of separating access and services, and they are often used as an equivalent to environments. As we know which team is responsible for a cluster, we can enable them to self-serve namespaces (environments) to deploy applications into, and populate all the relevant access controls.
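Because team membership is already known, per-namespace access controls can be generated rather than hand-written. As a sketch, this emits standard `rbac.authorization.k8s.io/v1` RoleBinding manifests as plain dicts; the users and the choice of the built-in `edit` ClusterRole are illustrative:

```python
# Illustrative sketch: generate one RoleBinding per team member in a
# namespace, using the standard Kubernetes RBAC manifest shape.
def team_rolebindings(namespace, members, role="edit"):
    return [
        {
            "apiVersion": "rbac.authorization.k8s.io/v1",
            "kind": "RoleBinding",
            "metadata": {"name": f"{user.split('@')[0]}-{role}",
                         "namespace": namespace},
            "roleRef": {"apiGroup": "rbac.authorization.k8s.io",
                        "kind": "ClusterRole", "name": role},
            "subjects": [{"apiGroup": "rbac.authorization.k8s.io",
                          "kind": "User", "name": user}],
        }
        for user in members
    ]

bindings = team_rolebindings("dev", ["alice@example.com", "bob@example.com"])
```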

Logging in to the environments is made simple with the Kore Operate CLI tool, which wraps the single sign-on request and populates the Kubernetes configuration files for the clusters the developer has access to.
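The configuration a CLI like this populates follows the standard kubeconfig shape (clusters, users, contexts). A rough sketch, with a placeholder in place of the real token obtained from the single sign-on exchange:

```python
# Illustrative sketch: build a minimal kubeconfig-shaped structure for a
# cluster after sign-in (standard kubeconfig fields; token is a placeholder).
def kubeconfig_for(cluster_name, server, id_token):
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": cluster_name, "cluster": {"server": server}}],
        "users": [{"name": cluster_name, "user": {"token": id_token}}],
        "contexts": [{"name": cluster_name,
                      "context": {"cluster": cluster_name,
                                  "user": cluster_name}}],
        "current-context": cluster_name,
    }

cfg = kubeconfig_for("dev-gke", "https://203.0.113.10", "ID_TOKEN_PLACEHOLDER")
```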


Promoting a true cloud-native experience across many cloud vendors is the main reason we wrote Kore Operate. Developers should be able to focus on what matters, and DevOps should be able to implement a structure around best-practice principles for teams in a product, guaranteeing a known, secure outcome. Where things need to deviate, understanding that deviation and the potential risk it introduces is also paramount.

Having teams working differently across many cloud providers, or consuming services in different ways with different technologies, means teams are not as effective as they could be, spending more time learning technologies and cloud platforms than writing business-driven product features. Holding specialist knowledge across so much technology also carries a huge cost overhead, either through custom engineering, meaning companies that are not platform businesses invest heavily in custom tooling, or through recruitment of hard-to-find skills.

Visibility and transparency are usually the most difficult aspects of any technology implementation. With a plethora of tools that don't integrate directly with one another, it is hard to understand the risk and cost to the business of the varying implementations. We plan to add more value by increasing visibility inside Kore for teams, to really allow them to stay lean, innovate quickly and elevate companies into new markets or a competitive advantage.


About the author


Jon Shanks


Jon is our executive lead, driving Appvia forward to make operations’ and developers’ lives easier.
