December 9, 2021
Duration: 58 min
Kubernetes out of the box is a big undertaking: you need to grow a priesthood of people to manage it, and that is complicated and expensive. The cloud providers offer one level of simplification with products that give your apps a pipeline into Kubernetes, which makes things a lot simpler for you. We offer yet more simplification with our two-part series, Kubernetes from 10,000 ft.
In Part 1, The Anatomy of Kubernetes, we explain how Kubernetes works and how to get started. If you are new to K8s, this webinar is ideal for understanding the basics: how to run your microservices resiliently and at scale, and how containers and K8s help with that. A live Q&A with our experts follows.
Webinar Summary: Kubernetes from 10,000 ft: Part One – The Anatomy of Kubernetes
Introduction
In this webinar, Kubernetes practitioners Salman and Tennis provide an introduction to Kubernetes, an open-source platform that automates the deployment, scaling, and management of application containers. The webinar aims to equip the audience with a basic understanding of Kubernetes and its architecture.
Understanding Kubernetes
The hosts start by explaining what Kubernetes is and why it is essential in the modern software landscape. They describe Kubernetes as a container orchestration platform that helps manage and scale applications across multiple hosts. Kubernetes also provides service discovery and load balancing, storage orchestration, automated rollouts and rollbacks, and more.
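The automated rollouts mentioned above are typically expressed through a Deployment object. As a minimal sketch (all names, image tags, and replica counts here are illustrative assumptions, not taken from the webinar), a rolling-update Deployment can be built as a plain structure and emitted as JSON, which `kubectl apply -f` accepts just as it accepts YAML:

```python
import json

# Hypothetical Deployment: three replicas of a web container, replaced one at
# a time during a rollout so the application stays up while it is updated.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "webinar-demo"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "webinar-demo"}},
        "strategy": {
            "type": "RollingUpdate",
            # At most one Pod down and one extra Pod up at any moment,
            # so at least 2 of 3 replicas keep serving during the rollout.
            "rollingUpdate": {"maxUnavailable": 1, "maxSurge": 1},
        },
        "template": {
            "metadata": {"labels": {"app": "webinar-demo"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "example.com/webinar-demo:1.0"}
                ]
            },
        },
    },
}

print(json.dumps(deployment, indent=2))
```

Changing the image tag in this spec and re-applying it is what triggers the rolling update; reverting the spec triggers the rollback.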
The Need for Kubernetes
The webinar discusses the need for Kubernetes in today’s cloud-native environment. The hosts explain that as applications become more distributed and complex, managing them becomes a challenge. Kubernetes addresses this by providing a framework to run distributed systems resiliently, handling scaling and failover for applications, providing deployment patterns, and more.
Kubernetes Architecture
The webinar delves into the architecture of Kubernetes, focusing on its two main components:
Control Plane: Sometimes called the master node, the control plane is the central control hub for Kubernetes. It maintains the desired state of the cluster, such as which applications are running and which container images they use.
Worker Nodes: These are the machines where the applications actually run. Each worker node runs a Kubelet, an agent that manages containers on the node and communicates with the Kubernetes control plane.
The Role of Containers in Kubernetes
The hosts discuss the role of containers in Kubernetes. They explain that containers are lightweight, standalone executable packages that include everything needed to run a piece of software. In Kubernetes, containers do not run on their own; they are grouped into Pods, the smallest deployable units of computing that can be created and managed.
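The control plane's job of maintaining the desired state can be pictured as a reconcile loop. This toy sketch (all names hypothetical; real controllers watch the API server and create or delete Pods through it) only closes the gap between a desired replica count and the Pods it can observe:

```python
# One reconcile pass: compare desired vs. observed and act on the difference.
def reconcile(desired, observed):
    """Return the pod list brought back to the desired count."""
    pods = list(observed)
    n = 0
    while len(pods) < desired:        # a Pod crashed or was never started
        pods.append(f"replacement-{n}")
        n += 1
    while len(pods) > desired:        # scaled down: remove surplus Pods
        pods.pop()
    return pods

# One of three Pods has died; the loop restores the desired state.
print(reconcile(3, ["pod-a", "pod-c"]))
```

Running this loop continuously, rather than once, is what makes the model self-correcting: whatever drifts, the next pass moves the cluster back toward the declared state.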
Understanding Pods in Kubernetes
The webinar provides an overview of pods in Kubernetes. Pods are the smallest and simplest units in the Kubernetes object model that you create or deploy. A Pod represents a single instance of a running process in a cluster and can contain one or more containers.
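As a concrete sketch of the Pod described above (every name and image tag is an illustrative assumption, not from the webinar), a manifest can be written as a plain structure and printed as JSON, which `kubectl apply -f` accepts alongside YAML. The second container shows that a Pod may hold more than one container; they share the Pod's network identity:

```python
import json

# Hypothetical Pod: a web container plus a sidecar in the same Pod.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-pod", "labels": {"app": "demo"}},
    "spec": {
        "containers": [
            {
                "name": "web",
                "image": "example.com/demo-web:1.0",
                "ports": [{"containerPort": 8080}],
            },
            # A sidecar running alongside the main container, e.g. for logs.
            {"name": "log-shipper", "image": "example.com/demo-logs:1.0"},
        ]
    },
}

print(json.dumps(pod, indent=2))
```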
Kubernetes Services
The hosts discuss Kubernetes Services, which are an abstract way to expose an application running on a set of Pods as a network service. Services enable communication between Pods and can expose them to the internet or to other parts of the cluster.
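A minimal Service sketch matching that description (names and ports are illustrative assumptions): it selects a set of Pods by label and exposes them under one stable name and port.

```python
import json

# Hypothetical Service: routes cluster traffic on port 80 to any Pod whose
# labels match the selector, on that Pod's port 8080.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "demo-svc"},
    "spec": {
        # Traffic is load-balanced across Pods matching these labels.
        "selector": {"app": "demo"},
        "ports": [{"port": 80, "targetPort": 8080}],
        # ClusterIP exposes the Service inside the cluster only; a
        # "LoadBalancer" type would expose it to the internet.
        "type": "ClusterIP",
    },
}

print(json.dumps(service, indent=2))
```

Here `port` is what clients inside the cluster connect to and `targetPort` is the container port behind it; because the Service matches Pods by label, Pods can come and go without clients noticing.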
0:08 hello and once again here we are with a webinar from appvia i’m i’m tennis smith at the apvia world 0:15 headquarters it looks like i’m in jail i’m not actually in jail i’m in a room with a nice 0:21 screen on it um and this is salman who’s in sunny cardiff 0:26 uh and you might want to introduce yourself some and talk about your background a little bit for those who are joining us for the first time and 0:33 don’t know much about either one of us of course 1:17 and i’m tennis i’m tennis smith i’m uh was a technical architect now i’m going to be pre-sales here at uh appiah 1:25 i’ve been in the business a long time too many years to count i’ve been working with kubernetes now for three or 1:31 four years and between the two of us we’ve got quite a bit of experience with the technology 1:38 and we’re here to talk about kubernetes from ten thousand feet uh in other words just a brief overview 1:44 of kubernetes and what it is and so uh simon do you wanna share your screen with the 1:55 oh yes the housekeeping 2:08 yes yes 3:35 nice just fine salmon 4:37 sure 4:46 yeah you’re just fine guys 5:06 yes sir 5:22 yeah we’ll want to make sure that the audience can certainly uh understand us before we 5:28 continue on yes yes 5:34 can can somebody ask i mean excuse me can somebody post a question uh whether 5:39 or not they can now hear salman i can hear him just fine naturally but 5:45 it’s more important that the audience can hear him 5:59 that’s right uh if you want some and i can go ahead and 6:04 yeah yeah uh what we’re going to talk about is um 6:10 pretty much three areas we’re going to talk about the the the old classical way of doing things which 6:16 is monoliths and the new paradigm which is microservices and that gives rise to 6:21 the concept of containers and uh oh they’re saying they’re still they’re saying they still can’t hear you 6:27 yet salmon all right well 6:39 okay 6:47 okay all right it’s a yes um what we’re going to talk about is 6:54 as i was 
saying is initially the concept between a monolithic application and how that’s done a pre 7:01 kubernetes then the concept of microservices and then after that the the the the 7:09 concept of a container and what it is and then we’ll talk about kubernetes and how that is a technology 7:16 that coordinates containers um and this is while salmon is rejoining 7:22 i’m afraid i don’t have any graphics to show you but if you think of it in in these terms that you have 7:28 in the old days an application that had to have all of the 7:34 uh various and sundry uh 7:40 reactions built into the application itself when i say reactions error recovery networking 7:48 security scalability all of those things had to be programmed inside the application itself 7:55 so what you ended up with is a monolithic app that tried to do everything for all time 8:02 uh and react to all situations that’s what i mean by reactive 8:07 along comes the concept of microservices which is where you break down the large 8:13 application into smaller components that would be networked together 8:20 and then you would be able to isolate your problems excuse me 8:25 and in such a way that it it’s no longer one giant uh code bed you would have lots of little 8:32 code bits um the the problem though 8:38 is that those little applications talking to each other through a network 8:44 makes certain things more complex like testing for example and we’re going to go over that 8:51 again i don’t have the graphics at the moment but hopefully you’re following along with me 8:57 that gives rise to the concept of containers and containers are a way to 9:04 shrink wrap so to speak a very small piece of 9:11 well shrink wrap only the things you need the libraries the file system 9:18 the utilities and the application itself 9:23 in one small entity that can be run 9:29 in its own namespace inside an operating system 9:34 and excuse me and it’s isolated from all the other containers running in the same operating 
system 9:40 and we’ll talk about this a little bit more um in 9:45 in much more detail with a graphic to give you a better idea but the idea of a container is as you 9:52 like i said you have the shrink wrapped components and only what’s needed to run so 9:59 if you think about it when you run classically when you run an application 10:04 inside of a virtual machine or on your laptop or wherever let’s say for example you’re running 10:11 microsoft word you are actually only leveraging a small 10:16 percentage of what’s inside the operating system at any given time you’re using word you’re using some 10:22 subservient libraries and you’re using keyboard interrupt handling and that sort of thing and that’s about 10:28 it the rest of the operating system is sitting there not exactly idle but it’s 10:34 using unleveraged facilities what a container does is this is a way to say okay i only want the application 10:43 i only want its subservient libraries and i want them encapsulated into a single entity and i want it saved off 10:49 into what’s called a container image and then you run that image and that image can 10:56 converse with other containers and they’re standalone entities so what you have is a collection of 11:03 containers that are talking to each other well here is the problem with that 11:08 the problem with that the problem with that is 11:14 the let’s see i’m just getting a message here 11:22 the problem with that is that you have to coordinate among those containers and coordination among 11:28 the containers is difficult you need some overarching up there’s iqbal because 11:34 salmon mr well sorry can we can we try to see if they can see 11:40 can you hear me can you ask people if they can hear me now yes um salman has rejoined can people um 11:47 indicate whether or not they can hear him now let’s see uh 11:53 let’s see if that works the slides i’ve sent you the links to the slides tennis oh thank you slack so 12:00 if you can if you can just open them 
up and share them yes sir i shall i shall 12:05 maybe kirsten can let’s see if people are messaging or not not yet let me 12:11 tell us there’s the slides for you so maybe you can share them yes sir i certainly will 12:18 all right people can hear me now oh okay i’m not 12:24 sure what happened there people apologies apologies but we’re good to go we are good to go so 12:31 um okay oh thank you very much uh not sure what happened there but you know thank you for joining us thank you very 12:37 much so last thought yes please go ahead yes sir let’s go let’s go sorry about that 12:43 no you know we still gotta have fun go tennis you continue i’m not sure 12:49 where you were oh i was just talking about uh um well first of all we haven’t talked Who is Appvia 12:55 about who appvia is and and uh we are a company that uh offers uh an 13:00 enabling technology for kubernetes without going into great detail just yet and we 13:08 we have technology that enables a lot of efficiencies we’re using things like 13:14 multi-cloud so go ahead to the next slide Monolithic vs Microservices 13:20 okay so monolith versus microservices this is what i was alluding to before 13:26 uh do you want to pick up on this one salmon absolutely yes so i mean i’m sure we’re 13:32 all aware or what we’re doing right now which is we have sometimes we just have one 13:38 massive application or one big solution everything goes in the application we’ve got the business logic the ui the api 13:45 the authentication every piece goes in that and all the testing is written in that case and then 13:52 over time you start to realize that that’s perhaps not the best thing to do because if i want to make changes to an 13:57 application one part of the application maybe the ui i have to ch release everything again 14:02 after regression test everything again becomes a bit of a problem and then what we 14:07 decided as uh as technology that maybe what we can do is probably break things up into smaller 14:13 
chunks and they’re independent of one another so we can scale them separately we can 14:18 release them separately the teams don’t really have to rely on one another to say oh um we’re releasing next week can 14:25 you make sure we haven’t broken anything so that’s what the the microservices architecture we’re 14:31 moving towards um anything you want to add attendance on that or will be good no it just 14:38 yeah the main thing is that you went from one big application trying to do everything to lots of little applications that are 14:45 cooperative amongst themselves and that leads to the martin fowler uh 14:51 quote that it’s an architectural style is developing a suite of small services 14:58 and presumably they would be lightweight in character the the idea is that you would not have 15:04 huge pro huge uh apps talking to each other you want lots of little apps you know just a few lines of code talking to 15:10 each other correct and they talk to each other using rest requests they’re perhaps sending some rest requests and monoliths vs microservices 15:17 responses uh i mean we put some some of this up by the way you you will all receive the 15:22 slides so you don’t have to frantically make notes about any of this stuff there’s just some discussion points between monoliths 15:29 and mic services as we were talking about previously really briefly if if i have a monolith i have to make sure if i 15:36 make a change in one part of the application we haven’t broken anything else in the other part of the application 15:43 but in terms of microservices pretty much every component is independent as long as we don’t 15:48 break any of the contracts between the services we’re doing okay so we can 15:54 independently deploy them update them you know move move a lot faster than we 15:59 have to because we don’t really have to sync with the other teams to try and figure out oh should we release this should we release that 16:05 and um should we release our features next 
week or whatever it might be and i think one 16:10 of the most important things um in my opinion is that uh back in the day when 16:16 i used to work in monoliths if there was an issue in the back end service let’s say the back end logic there was a 16:22 problem with that even the ui would stop working right because everything is part of the same application um 16:28 and then you can’t really serve a a reasonable error page to say oh please come back because we’re having an error here or 16:35 whatever it might be so you know failures can one 16:40 if one part of the application goes down it will definitely take out the other parts of the application because it is one bit 16:46 then in our case in microservices everything is self-contained right so if my back end fails that’s fine the front-end can still 16:52 uh you know give me a reasonable answer maybe maybe if it’s a bank application a loan service is failed but accounts are 16:59 still working so that’s fine we can at least work in in that way and i think one of the last bits i would really 17:06 say with with the microservices architecture we can have whatever 17:11 framework we want to use let’s say you say oh the front end is very good if i can write that and react and let’s say 17:19 you might say oh i need to write some business logic and that really works well with either go or if i’ve got 17:25 something just you know i need to do some bit of machine learning in some parts of you can write that in python and that’s 17:31 absolutely fine because everything is separate we would do that with single development stack for monoliths that was 17:38 kind of hard to do anything you’d like to add tennis yeah it’s just a really uh interesting aspect 17:44 what you just brought up which is let’s say that the ui was written with react like you said and you know next week we 17:51 decided to write it in a completely different package well we just have to modify that one 17:57 microservice or the back end for example 
we used to be talking to mysql now we’re going to be talking to 18:04 another database uh that’s one microservice that you change and if assuming that the the rest api 18:11 conversations are honored in their contracts the user doesn’t know that and you can just swap them out crosscutting 18:17 absolutely absolutely but it’s not everything is rosy right yeah 18:22 tennis because we’ve got we’ve gone from monoliths to 18:27 microservices and there are some use cases in which the model might make sense uh of course you know there’s a 18:33 tool for everything but now we have a distributed system which we didn’t 18:38 have before right we didn’t have all these components that perhaps deployed in different machines or working 18:44 differently so we’ve added a bit of extra complexity on top of what we were doing before because now i have to start to figure 18:50 out like how do these communicate with one another and when i’m doing some development how do i 18:56 test everything just to make sure everything works fine you know and also like one of the 19:02 things i put we put in here is cross-cutting this is all about i’ve got all these different micro services 19:07 how do i collect the log from all these services and more importantly these services could be written in different 19:14 programming languages so if i collect logs where do i get the libraries from and how do i make sure everything is exactly the same and also 19:21 you know if there’s a circuit breaking or you know fault tolerance how do i do all of that so we started to add 19:29 challenges on top of what we had before uh tennis would you like to add anything more 19:35 no sir that’s that’s uh that’s nothing for that yeah so i think patterns 19:41 so there’s patterns out there uh that help you with some of these challenges for 19:46 microservices um then one of the patterns is the 12 factor apps 19:51 i’m just we’re just going to put it out here we’re not going to go into too much detail about the 12 
factor acts but 19:57 basically this if we put six with it there are 12. but these are things we 20:03 can do to make our lives a bit easier when we’re working with microservices 20:08 for example number you know one codebase money deploys so you have one codebase you 20:13 check all the code in and you can from that single code base you can deploy multiple services multiple market 20:19 services you can deploy them in different environments uh you know such and 20:25 so on and so forth the second one which perhaps will become a bit more useful when we talk about containers about 20:30 declaring dependencies i mean there have been times where you’re working on an application 20:36 and one person has developed this application and they tell you oh can you try and test this on your laptop or on 20:42 your machine are you trying to run the application on your machine it could be written in java could be written in python could be 20:47 written in dot that could be written in any language and when you do that things don’t work you test it out and 20:54 like oh things things are not working what’s going on and then you realize they installed a dependency on the 20:59 machine which didn’t which wasn’t on your machine so this is this is a bit about declaring 21:05 dependency it doesn’t say how you declare the dependencies but it just says you make sure you declare all the dependencies 21:11 and i think i’ll i’ll add one more and i will get tennis to add his his points as well like one of the things that 21:18 all of us try to strive is to make sure all environments are similar 21:23 not the same but at least similar for example the dev has to be similar to what the uat should be or 21:29 what the production environment should be and the only reason why we say they should be similar this 12 factor apps 21:35 says it should be similar is because it’s easier to replicate any issues that you might come across in 21:41 production if the dev is so far away from what production 
is it’s going to be hard to figure out what 21:47 happens when the activation actually lands in production this should what we don’t want is surprises we want 21:52 surprises in dev i don’t know a set environment or uat environment we don’t want any surprises 21:58 in production environment so that’s why we want to keep them similar anything tell us 22:05 yeah exactly you you strive to keep the environmental simplicity as 22:12 as generic vanilla however you want to put it as possible 22:18 and to carry on with the list here you want also your application to be 22:24 stateless in character on the whole now there are some exceptions to this but for the most part what you want is an application that does not have to have a 22:32 lot of uh data retained to tell it what state it’s 22:38 supposed to be and when it comes up that it just comes up and it reads from a queue and that then and processes what it has 22:44 it doesn’t have to to read you know special databases and things like that 22:49 um also which reminds me the fifth point about configuration in 22:55 the environment what you want to do is to try to keep all your your configs in environment variables 23:01 that are passed at runtime to the app as it comes up so that way you can change let’s say you know how 23:08 many threads you’re running in a configuration statement without having to 23:13 uh to do stop and start and those kinds of things 23:18 and finally you want you want a microservice to have fast startup and 23:25 a graceful shutdown in other words you want it to be resilient in the sense that the application 23:32 doesn’t cause problems if you bring up new copies or if you shut down unneeded 23:38 copies um i notice i’m going to am i missing anything let’s see no no absolutely yeah so i think yeah i i 23:45 think it’s i mean there’s more parts but yeah the fast startup is important as well because let’s say your 23:50 application’s running normally and you want to scale up from i don’t know you got 
five replicas 23:55 running and you want to go up to 100 replicas uh yes and you want to you want to do that really quickly and that’s why 24:00 this part start this far startup is is useful and graceful shutdown it’s just we just don’t want to lose any traffic 24:07 if something something goes wrong right so that’s um that’s what it is um virtual machines 24:12 there’s we could we used to deploy our applications you know we still are deploying 24:18 sometimes our applications in virtual machines and that’s absolutely fine because we just need some resources to be able to 24:24 run them and the thing with virtual machines that i think the diagram will see on your left is 24:30 in the virtual machine you have this technology called the hypervisor which allows you to separate you know 24:35 which allows you to divide a server actual physical server into 24:42 multiple machines multiple virtual machines and that’s absolutely great right 24:47 because i have a machine i have one actual server and i can make it make into smaller virtual machines and i can 24:54 give them to different teams and they can do what they want to do with it and there’s good security isolation because 25:00 whatever happens in one virtual machine doesn’t affect what happens in another virtual machine it’s like as if they are 25:05 actual machines but the only problem or i mean we can’t see the problem the challenge is that 25:11 when i have this virtual machine each machine has to have its own operating system or we call it in in this case we 25:18 call it the guest operating system that you might say that’s fine that’s okay and it seems 25:25 like okay of course i need a bit of an operating system to be able to run it by but that takes about 10 of the resources 25:32 sometimes on that virtual machine to be able to run just the operating system 25:37 and if you add that imagine you have 100 virtual machines and 10 of each of those 25:43 if you add it that adds up to i don’t know however much 
cpu or memory resources it is that basically adds up 25:49 to some resources that you are just using to run this guest operating system on the other 25:56 hand we have the containers and the containers say you don’t have to have this guest operating system all you need 26:02 to have is container engine you know your dockers or your container d your run times 26:07 as long as you have that you don’t have to have the guest operating system the the container engine will take care of 26:13 it and whatever the host operating system is on that machine it basically uses that 26:18 in order to spin up your applications inside the container tennis would you like to add anything on 26:24 that yeah a couple of things um the the the diagram on the right 26:29 alludes to what i was talking about before which is you know shrink wrapping only those things you need 26:35 to run a particular application and you share actually the operating system kernel and 26:42 can have apps one through three sitting there and it’s much more 26:47 efficient than having an entire operating system as you have on the left just to run one application 26:53 so the the the benefits of the approach on the left 27:01 is a much greater efficiency of operating system leveraging 27:07 uh and a much more bounded collection of things that you have to 27:13 manage in the form of that container as i said that shrink wrapped piece so 27:26 no i think uh i think that’s uh that’s a good point uh as we as we just talked about there’s some limitations because limitations 27:32 you have to use some of the resources for virtual machines and i think one of the things that we haven’t talked about is 27:38 if i have a virtual machine in azure it doesn’t work the same virtual machine won’t work in 27:44 gcp google cloud platform or aws you kind of have to like recreate it 27:49 so basically there are some limitations there are some limitations for for the application but on containers i containerization 27:56 don’t know 
uh tennis we want to talk about this slide here real quick yeah this is this is um um the process 28:03 of containerization and this slide is meant to depict uh the fact that you are taking only 28:09 those little components that you need in order to run that one application and your your uh extracting it from a larger 28:18 context and saving it off as a container it says the the the graphic says container but 28:24 it’s technically container image and that’s only that’s the only thing that now you have 28:30 to uh manage so all the overhead of the operating system 28:35 and the libraries and the maintenance that what goes with the operating system all that goes away 28:40 yeah i mean you still have to update the base operating system system of 28:46 container when you need to this is another depiction of the same thing using the container you have some sort container image 28:51 of a file system and on that you you do use an operating system of one con 28:58 a piece of the operating system these are the operators not the full thing the piece of the operating system and then 29:03 you have a bit of storage and all you do on top of it you put all your files in and all your dependencies and you 29:10 package it all together into a container and that becomes like i know sometimes this term 29:16 container and image is used interchangeably image is basically a container that’s not running it’s just 29:22 there and a container is basically a running image but i think the takeaway for containers is a 29:29 a container is some sort of uh i think runtime is probably a better term to use 29:34 than an operating system some sort of a runtime let’s say you want to run a java application you have java virtual machine or let’s say you want to run 29:41 python and python runtime your application code so all your code and stuff in all your 29:47 configuration files all your application and any of the dependencies that you are using they’re all defined and they’re 29:53 all 
included and this goes back to the point that we’re tennis and i were making earlier on in this uh 12-factor 29:59 app one of the things you have to declare your dependencies in order to create a container you create it using 30:05 this file which is called a docker file imagine if you’re using docker and in there it’s just instructions of 30:11 how to install your applications and then basically we’ve got containers which are running and i had just some of container benefits 30:17 the benefits we put on here um depending on uh what we you know what 30:23 kind of container you’re using it’s quite lightweight you can get started really quickly you can stop it really quickly there’s no operating system we 30:29 have to boost boot it’s just the runtime and i think the most important bit is we 30:35 have standardized the way we package and run software so for example 30:42 let’s say if you want to run you know i don’t know net application or a go application on your laptop you need to 30:48 make sure all the frameworks are installed all the things are installed correctly but if i have a container it doesn’t 30:54 matter what the container is i can just do docker run name of the image and it will just run 31:00 so this this you know this standardization of packaging and it’ll run anyway the container will run exactly the way it is running on my 31:06 laptop will run on the our different server will run on uh 31:11 tennis’s machine will run everywhere it’ll run on azure it’ll run on aws 31:16 absolutely absolutely you want to add anything tennis no no that’s that’s that’s pretty much covers it i think 31:22 yeah and you know we can if we are running containers we can try and run multiple containers in a machine and container terms 31:28 basically uh try and save some resources to containers are useful for 31:33 reproducibility we can take them and run them there’s some of these terms the keywords that you might have come across 31:39 before uh like an image and a 
container we talked about it really briefly images 31:44 just contents of a docker container risk all it includes is a runtime it includes 31:50 your files all your files that you need to run your application and any other dependencies and then the 31:56 container is when you start running the images by the way if you have any questions please feel free to use that 32:02 ask a question to add them and throw in we’ll we’ll try and answer them apart from that you can’t hear me you know 32:09 which was where apologies for that but i think we’re good here and then a couple of 32:14 other uh keywords in terms of uh if someone talks about a container runtime or an engine this is the bit that 32:21 actually people use to start the container or stop the container or build it and then 32:26 if you have containers and images we need to be able to share with others how do we share them we use registries 32:32 think of uh you know like the registries for your dependencies where we store dependencies and stuff and usually we tag an image 32:39 with uh with a version of one sort of another so some of the terms 32:45 but tennis if you want to ask this next question yes um 32:50 that’s containers are wonderful but what happens when you have lots of containers and those containers are 32:56 talking to each other and some of them are coming up and some of them are dropping and some of them 33:02 need to talk to a container on another virtual machine what do you do and that’s where 33:09 excuse me that’s where the concept of uh 33:14 container coordination comes in what happens when you have this situation where you have lots of Kubernetes 33:21 containers and the need to coordinate among them and to scale them 33:27 and to load balance them this is where we introduce the kubernetes 33:33 and kubernetes is as it says here it’s an open source system for 33:39 among other things automating deployment scaling and management of containerized applications 33:45 and think of it as an 
overarching scaffolding for 33:50 coordination among containers and it’s referred to sometimes as kate’s 33:56 k followed by eight and followed by the letter s because it is k and eight 34:02 letters and s just as an abbreviation um anything you’d want to add uh salmon 34:08 yeah and i think it’s important to also mention that uh the these these things are called container 34:15 orchestrators and kubernetes is not the only one there’s there’s there’s a few more you know there’s uh 34:21 hashicorp nomad and uh um i can’t really think of others because i have them on Container Orchestrators 34:26 the slide here no there’s docker swarm there’s docker swarm and you know there’s some others but 34:32 uh we are focusing on on kubernetes because kubernetes has taken the market share a lot more than the others but 34:38 that’s what we’re going to focus on and yeah these container orchestrators for example kubernetes help with a with a 34:44 lot of things um yeah so you can see the market share for for communities is a 34:49 lot more than some of the other ones you know we’ve got docker compose we’ve got mesos and i mean we’ve got some some of 34:56 the other ones as well so this this is why we are you know kubernetes seems to be like the the 35:02 container orchestrator of choice and so what do you get with with kubernetes uh one of the things 35:08 that we we want to do with our applications is avoid downtime avoid downtime when something crashes or avoid 35:16 downtime even we’re doing updates and this is where kubernetes comes in it helps us with if you’re doing new 35:22 deployments it helps us uh to make sure that we can do rolling updates so we don’t have to take the old one down 35:28 until the next one ends up and that’s where kubernetes is kind of is very very powerful in that sense when you’re 35:34 rolling out new updates if there’s any breaking changes there’s there’s there there are techniques for 35:40 that but also if something if one of your container 
crashes, that's one of the things Tennis mentioned: what happens if the containers die? Well, I don't want to be woken up at 2 a.m. to log into servers and start and stop containers by hand, and that's what Kubernetes does for me. It will start things back up the way they're supposed to be running, because it knows the state of the cluster at all times and it knows what the desired state of our applications in the cluster should be. It stores that information, and it will always strive toward the desired state. We'll look at how it does that in more detail next week; that's how it does self-healing.

I'm going to talk about scalability, and then Tennis is going to talk about the items at the bottom. In terms of scalability, what Kubernetes allows us to do is automatically scale not just the applications but also the cluster. If our cluster is too small, if we haven't got enough resources to run an application, Kubernetes can spin up more resources, spin up more virtual machines, and attach them to the cluster.

Tennis, what about infrastructure abstraction and developer velocity? Yes, let me start with developer velocity. If you have a Kubernetes infrastructure configured, you can turn around your application changes very quickly. The developer doesn't have to have a lot of understanding of the infrastructure; they can just have a pipeline to install an app and update it as they need to, and they can do that very rapidly. So you can deploy quite quickly on Kubernetes without having to worry about the infrastructure underneath.
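That self-healing idea, compare the desired state with the actual state and act to close the gap, can be sketched in a few lines. This is only a toy illustration of the reconcile-loop concept, not real Kubernetes code (the real control plane stores desired state centrally and runs many controllers continuously):

```python
import itertools

_ids = itertools.count()  # unique suffixes for newly started replicas


def reconcile(desired_replicas: int, running: list) -> list:
    """One reconcile pass: start or stop containers until the actual
    state matches the desired state (a replica count, in this toy)."""
    actual = list(running)
    # Too few replicas, e.g. a container died at 2 a.m.: start more.
    while len(actual) < desired_replicas:
        actual.append(f"replica-{next(_ids)}")
    # Too many replicas, e.g. after a scale-down: stop the extras.
    while len(actual) > desired_replicas:
        actual.pop()
    return actual


# A replica has crashed; the next pass restores the desired count of 3.
print(len(reconcile(3, ["replica-a", "replica-b"])))  # 3
```

Kubernetes runs loops like this continuously, which is why the cluster converges back to the desired state without anyone logging in at 2 a.m.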
You don't have to ask whether you must do something a little differently because you're bringing it up today versus tomorrow, and there's no issue with trying to make changes to the infrastructure; the infrastructure is just there. The developer can concentrate on developing the application.

As far as infrastructure abstraction is concerned, what's interesting about Kubernetes is that, properly implemented, Kubernetes running on Azure versus AWS versus GCP is interchangeable, in the sense that applications developed for one are interchangeable with the others. The Kubernetes environment is abstracted away from the hardware and, for that matter, from the provider underneath, so as a consequence you have a much greater chance of portability. Now, you can always write applications to be provider-specific; you can always include things that tie them to a particular provider. But if your goal is true portability, you really don't have to worry about the infrastructure Kubernetes is running on. It's its own ecosystem: if you write an application for one Kubernetes, it will run on Kubernetes on AWS just as well as on Azure.

Finally, Kubernetes is set up so that it leverages the resources it has very efficiently. It will use the VMs associated with it to the fullest, so you don't have to constantly play catch-up resizing your environment. Long term, of course, you still have to watch the metrics, but my point is that the environment you do have is leveraged very efficiently. Anything you'd want to add, Salman?

Yes, absolutely. Tennis has covered it, and this infrastructure abstraction is something that's great, because a lot of times, you
know, people think: we don't want to be tied into one cloud provider or another. That's absolutely fine; it's something everybody considers when working on infrastructure, and it's the good thing about Kubernetes. If we write our applications for Kubernetes, we can run them anywhere. I can run them on my laptop, on AWS, on GCP, on any cloud provider that allows me to run Kubernetes, and those things don't change.

Kubernetes Clusters

So what does a Kubernetes cluster actually look like? It looks kind of like this. Imagine you have machines. They could be actual physical machines, they could be virtual machines, they could be Raspberry Pis, they could be anything, as long as they provide memory and compute resources; that's what a Kubernetes cluster requires, subject to some minimum requirements on how much memory they have. The way we set this up is we have a cluster, and the cluster includes basically two kinds of things. There's the control plane, which is the part that's in charge (we'll look at how it does that next week), and there are the worker nodes, which are where we actually deploy our containers. The point about a Kubernetes cluster is that I can have as many worker nodes as I want. There are some restrictions with cloud providers; depending on the network setup they might only allow you, say, 200 machines in a cluster. But the point is you can have as many machines as you like, of any size: they can have GPUs, they can be CPU-only, anything you like. And for resiliency, you can even have multiple
control planes. That's what AWS does by default: if you ask AWS for a Kubernetes cluster, it will actually spin up three control planes, each deployed in a different availability zone, so if one availability zone goes down you still have a control plane up and running. That's basically what a cluster looks like. Every time we deploy our resources, we're communicating with the control plane: we say, can you deploy this pod, or this application, or whatever it might be, and the control plane takes it and actually deploys it onto the worker nodes. We'll look into that next week. Anything else you want to add, Tennis?

A couple of small things, yes. When you see "control plane" on the left, that's also a separate machine, or machines, as Salman pointed out. And the graphic we're using, the profile of a whale with little boxes on top of it, is the Docker container logo, so if you're confused by that, the whale with the little containers on top is just a Docker container. I just wanted to add that to make sure it was clear.

Absolutely, thanks for clearing that up. Yes, it's just running containers.

Kubernetes Features

So, a couple of things: what are the features? This is one of the last things we'll discuss, what does Kubernetes help us with, and if you have any questions, feel free to ask. The other important thing is that the second part of this is next week, so definitely do dial in; we'll show you how to deploy your applications into your Kubernetes cluster and explain some of the other concepts. One of the things that we
have to do, if you don't have an automated container orchestrator, is log into machines and decide: I'm going to deploy this application on this machine, my database on that machine, and this other application on a third machine. With Kubernetes, you don't have to worry about that. You just give it the container and it decides where the container gets placed. It does automated scheduling, based on what your requirements for the application are, and it's quite smart. For example, if you have multiple replicas of an application that need to be deployed in the cluster, it won't take all of them and deploy them on the same worker node, because if that worker node went down we'd lose the application. Instead it distributes the workload: it deploys one of them on the first node, maybe another on the second node. At the same time, based on your CPU and memory requirements, it will try to pack workloads as tightly as possible, so your machine utilization is high and you're not losing out on much. That's automated scheduling; it's quite advanced, in the sense that you can customize it too.

Kubernetes also helps with the concept we talked about earlier: any time my application crashes, it can bring it back up without any issues. And before Tennis talks about auto scaling, I'm just going to mention automated deployments. In Kubernetes you can deploy your applications as part of what's known as a Deployment, which we'll talk about next week. A Deployment is just a recipe for what your application looks like: how many replicas it wants to run, what container it's running, and what environment variables it needs to use.
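That recipe looks roughly like this. The manifest below is hypothetical, the name, image, and environment variable are invented for illustration, but the fields shown (replicas, container image, environment variables) are the standard Deployment ones:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                # hypothetical application name
spec:
  replicas: 3                # how many copies to keep running at all times
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          # which container image (with its version tag) to run
          image: registry.example.com/myteam/myapp:1.0.2
          env:               # environment variables it needs
            - name: LOG_LEVEL
              value: "info"
```

Applying a file like this with `kubectl apply -f deployment.yaml` hands the recipe to the control plane, which then schedules the three replicas onto worker nodes for you.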
If I say I need to have three replicas running at all times and one of them goes down, it will bring it back up. I can also make releases in an automated fashion. I can do a rolling update: if I'm going from version one to version two, what Kubernetes does is bring up version two first, and once everything is looking good, some traffic starts going to version two and it starts killing off some of the version-one pods, so you don't have any downtime. There are other types of deployment as well: you can do a blue-green deployment, and you can do a canary deployment. In a canary deployment you send some traffic to the old application and some traffic to the new application, see how everybody's doing, check whether you're getting any errors, and if you're happy you switch all the traffic over to the new application. In a blue-green deployment you basically have two versions of the application running at the same time, and if there's a breaking change and you want to switch from one to the other, you can switch very quickly.

What about auto scaling, Tennis? Auto scaling is a really interesting aspect of Kubernetes. You can define the minimum and maximum number of copies of a given, well, I'll say container, it's actually a pod, but let's say container for the sake of argument, of a given application that can be running at any time, and when there's traffic to justify it, Kubernetes makes new copies to accommodate it. So let's say you're running with a minimum of two copies of an application, and suddenly there's a lot of traffic coming in. You can arrange for Kubernetes to scale automatically up to, say, ten copies, and then when the traffic volume drops it goes back down to two copies.
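The scaling rule just described, grow when each copy is overloaded, shrink when traffic drops, but never go below the minimum or above the maximum, can be sketched like this. It's a simplified version of the formula the Kubernetes horizontal autoscaler uses, and the numbers are invented for illustration:

```python
import math


def desired_replicas(current: int, load_per_replica: float,
                     target_load: float, minimum: int, maximum: int) -> int:
    """Scale so each replica carries roughly target_load, clamped to
    the configured minimum and maximum copies."""
    raw = math.ceil(current * load_per_replica / target_load)
    return max(minimum, min(maximum, raw))


# Traffic spike: 2 replicas each seeing 5x the target load -> scale to 10.
print(desired_replicas(2, 500.0, 100.0, minimum=2, maximum=10))  # 10

# Traffic drops right off -> shrink back to the minimum of 2.
print(desired_replicas(10, 10.0, 100.0, minimum=2, maximum=10))  # 2
```

That is exactly the two-copies-to-ten-and-back behavior from the example above: the observed load drives the count, and the configured minimum and maximum keep it bounded.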
That's a very powerful feature. Once again, as Salman was saying, you don't have to get up at two in the morning to bring up new copies just to accommodate new traffic; Kubernetes has the hooks to accommodate that for you.

And that leads to load balancing, which is a high-availability feature: you can take your traffic and judiciously balance it across different copies of an application, so you don't overrun any one of them. You can have three or four different instances accepting data at the same time, balancing the work across different worker nodes, if you like. And we talked about infrastructure abstraction before; I don't know that there's much to add other than to say you can have a generic environment, and as long as an application is designed to work in Kubernetes, it can work in Kubernetes anywhere. As Salman said, it could work just as easily on a Raspberry Pi as on AWS. Anything you would add, Salman?

Yes, on auto scaling: you mentioned scaling the application, and there's also cluster auto scaling, which we mentioned before. We can also define custom metrics, say, if a container takes longer than two seconds to respond, scale up, and that's another way we can drive the scaling. So those are the different features of Kubernetes.

Summary

Be sure to join us next week, when we'll dive into all of this in a bit more detail. A bit of a summary: we've been talking today about how containers can help with microservices. Microservices produce a lot of challenges; we want to be able to run all these things together, orchestrate them, start them and stop them, and we can stick them in containers to make sure we can run
them without any issues and run them in different environments. Then we talked about how, once you have containers, managing lots of them is a bit of a problem: if they all go down, how do I bring them back up? And what happens when a container running on one machine needs to talk to a container running on another machine, how do we do that communication? That's where Kubernetes helps, and we talked about a number of Kubernetes benefits: auto scaling, automated scheduling, self-recovery when something goes bad and it can restart the containers, and doing deployments without any downtime. That's what it helps with.

And we've got a question come in, actually a few questions. I'll read them out and we can take them live. How easy is it to implement this on a small scale, for learning purposes, they're asking, and what kind of applications would you recommend deploying to learn how to work with Kubernetes? That's not all of it, but we can start with that.

Minikube

You can deploy Kubernetes on your laptop, and you can also deploy Kubernetes on Raspberry Pis. Minikube runs on your laptop and is a wonderful learning tool. Yes, if you want to do it for learning purposes, I highly recommend Minikube, so make a note of that, or you can use kind. What Minikube does is basically spin up a virtual machine for you; that one node is both your control plane and your worker node, and
everything gets deployed on there. kind, which stands for Kubernetes in Docker, actually spins up Kubernetes inside Docker, so you can definitely use either of these on a small scale and run things there.

Now, for what kind of applications you should deploy if you want to learn, I'm just going to give a shout-out to Katacoda, so make a note of that too. It's a great platform if you're doing this for learning purposes: it spins up a Kubernetes cluster inside the browser (well, it actually spins up a virtual machine that you don't see), and there are Kubernetes introduction courses you can go through. It launches the Minikube cluster in the browser for you, so you don't have to launch it yourself, and there are examples in there that help you deploy things, so definitely give it a try. Tennis, do you want to add anything else?

K3s

Yes, the next question was: how do K8s and K3s differ? K3s comes from Rancher, and I think it's mostly meant for IoT; it's a lightweight Kubernetes. Is there anything you would add to that?

Yes. As Tennis says, they're basically supposed to do the same thing. The difference is that K3s is a more lightweight Kubernetes distribution shipped as a single binary, less than 40 megabytes, and it completely implements the Kubernetes API. What it doesn't have is a lot of the add-ons, which you may have to add yourself, so some things that exist in a full Kubernetes don't exist in K3s out of the box.
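Coming back to kind for a second: with kind you describe the shape of the cluster you want in a small config file. For example, a file like this (using kind's standard node roles) would give you a local three-node cluster, one control plane and two workers, all running as Docker containers:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane   # the node in charge
  - role: worker          # where your containers get deployed
  - role: worker
```

You would create it with `kind create cluster --config cluster.yaml`, which is a cheap way to practice the multi-node concepts from earlier on a laptop.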
But you're more than welcome to try things in K3s or K8s, whichever is easier for you. There's more information out there for K8s, of course, but whatever you're running in K3s will also run in K8s, so that's absolutely fine, no issues whatsoever.

I'm going to add one more thing: we also have some exercises for Kubernetes on GitHub, which I'll share a link to, that you can definitely try. Just give me a second while I look for it; I should have had it handy.

That's all right. One follow-up question was: how applicable are the skill sets learned by deploying one or the other? And the short answer, to echo what Salman was saying, is that they are very applicable. If you become very skilled with K3s, you shouldn't have any problems going to K8s. I would start with something like Minikube, or a Raspberry Pi, or something like that, especially Minikube, because it's so easy to do these days.

Yes, definitely start with Minikube; there are examples out there and so many things you can try and find out. We'll also release some material, so watch our blog: if I head over to appvia.io, there's a blog section there. If you want to learn a little more about Kubernetes we have tons of blog posts, and I think we're also going to put out another one about getting started with Kubernetes and what kinds of things you should look at, so keep an eye on that page; it will be out quite soon. So that's that. There are a couple of
things to mention if you're interested, if you are using Kubernetes. Hopefully we've answered the questions; by the way, if you have any more, those were good questions, please feel free to ask. One of the problems with Kubernetes is that it's kind of hard to manage and deploy. Getting it up and running is fairly easy, but doing it properly, making sure it's secure and all of that, is hard. We have a tool called Wayfinder; check it out if you're interested. Just head over to appvia.io/wayfinder-trial, or go to the website and you'll find it. It's a cool tool that helps ease a bit of the Kubernetes pain when you're getting started. Anything else you'd like to add on that, Tennis? Not really, just that it really eases the pain of implementing Kubernetes to begin with, so you can get in and start developing applications quickly.

Absolutely, that's right. I'm going to add a couple more things here. Please do join next week if you want to learn a little more about how we deploy applications; to the person who asked that question, we'll be deploying applications next week, doing it together, and we'll be sharing some resources. Tennis and I will actually be there in person, for the first time ever, sitting in one room together and talking to you about this, and hopefully you'll find that useful. So please do sign up for the next one, which is next Thursday, the 16th of December, at the same time as right now. If you have any questions, feel free to ask us. I'm going to stop sharing my screen so you can see us, just to show you that we're actually real human beings and not just
some clouds running inside a Kubernetes cluster. If you have any questions about anything, feel free to ask us, and do check out our website at appvia.io; we've got some useful blog posts, and we have a new release of a product out that helps with Kubernetes. Anything else, Tennis? No, sir, and so far no further questions. Thank you very much, everyone, for joining in; we really appreciate you taking the time, and please do join us for the next one. Thank you all.