Kubernetes from 10,000 ft: Part Two – Getting Started

December 16, 2021

Duration: 48 min

Following our webinar “Do You Even Need Kubernetes Anyway?”, you’ve decided that you do, and you’ve now started to wonder whether you can handle it.


Kubernetes out of the box is a big undertaking: it is complicated and expensive, and you would need to grow a priesthood of people to manage it. The cloud providers offer a level of simplification with products that provide a pipeline into Kubernetes for your apps, which makes things a lot simpler for you. We offer yet more simplification with our two-part series, Kubernetes from 10,000 ft.

Pt 2: Getting Started – get an overview of the K8s architecture and learn how to deploy applications using Kubernetes, followed by a live Q&A with our Kubernetes experts.

Webinar Summary: Kubernetes from 10,000 ft: Part Two – Getting Started

Introduction

In this webinar, experienced Kubernetes and DevOps engineers Val and Tennis introduce Kubernetes, describing it as an open-source system that automates the deployment, scaling, and management of containerized applications. This introduction sets the stage for a deeper dive into Kubernetes, providing a foundation for understanding the platform’s capabilities.

Understanding Kubernetes Architecture

The hosts delve into the architecture of Kubernetes, explaining its two main components:

  • Control Plane: Often referred to as the master node, the control plane is the operational brain of the system. It manages the worker nodes and the containers they run, making global decisions about the cluster, such as scheduling.
  • Worker Nodes: These are the workhorses of Kubernetes. They run the actual applications and workloads, making them the backbone of any Kubernetes deployment.

Deploying an Application with Kubernetes

Transitioning from theory to practice, the webinar demonstrates how to deploy an application using Kubernetes. The hosts use a YAML file, a human-readable data serialization format, to define the application and its settings: the type of resource to create, metadata such as the name and labels, and the container image to use. They then explain how to submit this YAML file to the Kubernetes cluster using the command-line tool kubectl, which acts as the bridge between the user and the cluster and is an essential part of any Kubernetes deployment.
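The flow described above can be sketched with a minimal pod manifest; the names and image below are illustrative placeholders, not taken from the webinar’s repository:

```yaml
# pod.yaml: a minimal Pod definition (illustrative names and image)
apiVersion: v1
kind: Pod
metadata:
  name: hello-web        # arbitrary resource name
  labels:
    app: hello-web       # labels are free-form key/value pairs
spec:
  containers:
    - name: web
      image: nginx:1.25  # any container image would do here
      ports:
        - containerPort: 80
```

Submitting it with `kubectl apply -f pod.yaml` hands the definition to the API server, which stores it in etcd.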

Components of the Kubernetes Control Plane

The webinar provides a detailed overview of the components of the Kubernetes control plane:

  • API Server: Acts as the front end for the control plane; it receives the YAML file and stores it in a database called etcd.
  • Scheduler: Determines the best node to run the pod based on various criteria.
  • Controller Manager: Ensures the desired number of pod replicas are running, providing a level of redundancy and reliability to the system.

Role of the Kubelet

The hosts also discuss the role of the kubelet, a component that runs on each node in the Kubernetes cluster. The kubelet communicates with the control plane to manage pods and their containers, ensuring that the containers are running as expected; this makes the kubelet an essential part of the Kubernetes ecosystem.

Understanding Kubernetes Deployments

The webinar concludes with a discussion of Kubernetes deployments. The hosts explain that a deployment defines the number of replicas of a pod, ensuring that the desired number of application instances is always running. This provides high availability and scalability, making deployments a key aspect of Kubernetes.
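As a hedged sketch of the deployment idea, a Deployment wraps a pod template and adds a replica count; all names and the image here are illustrative:

```yaml
# deployment.yaml: three replicas of one pod template (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 3            # desired number of pod instances
  selector:
    matchLabels:
      app: hello-web     # must match the template labels below
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If a pod crashes, the controller manager notices the replica count has dropped below three and a replacement is scheduled, which is the self-healing behavior the hosts describe.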



0:11 Tennis: Hello again, and welcome to our webinar. Val, do you want to introduce yourself, and then I can go?

0:17 Val: Absolutely. First of all, welcome everyone. I don’t know if you can hear me; we had a problem last time where people could not hear me. My name is Val, and I’m a DevOps engineer and solutions engineer at Appvia. I’ve been working with Kubernetes and containers for the last three years or so, and my job at Appvia is to work with our clients to take their machine learning workloads and run them in the cloud as efficiently as possible.

0:47 Tennis: And I’m Tennis Smith, a pre-sales engineer here at Appvia. I’ve been working with Kubernetes for the last three or four years, with DevOps things for the last ten years, and for the thirty years before that, all kinds of other stuff. That’s my background. We’re here to talk about Kubernetes from 10,000 feet; this is part two of our ongoing discussion. We’re going to touch a little on what we mentioned last week, but that’s pretty much what we have planned this time.

1:29 Val: So the plan for today, as Tennis says: we’ll do a bit of a recap from last time, go into a bit more Kubernetes terminology and explain it, and then we will all deploy an application together. And thank you, I just saw the comment confirming you can hear me. Awesome. Tennis, do you want to do a bit of housekeeping real quick?

2:00 Tennis: Sure, just a couple of things to point out. If you want to ask a question, it’s hopefully pretty intuitive because somebody else has already made a comment, but just put it in the “ask a question” area in the audience view and press enter. It will be passed to us, and we’ll be happy to answer any question we can. Next slide, please.

2:23 What we’re going to talk about is a quick review of Kubernetes, then the architecture of Kubernetes, and then a slightly deeper dive into the application deployment process in Kubernetes.

2:51 First off, who is Appvia? We’re a company that tries to simplify, and I think does a very good job of simplifying, the experience of Kubernetes. You can read this slide as well as I can, so I won’t read it to you, but the net result is that we simplify the Kubernetes experience and make it much simpler to implement.

3:18 Speaking of Kubernetes, for those who are just joining us or are unschooled in it, and there are a lot of folks out there like that: it’s an open-source system for containerized applications. Another way to put it is that it’s an orchestration platform for containerized applications.

3:41 The hallmarks of this ecosystem we call Kubernetes: it’s made to be available 24/7; in other words, it’s meant to be resilient, which leads to the next point of avoiding downtime. It is self-healing, in that if a component fails, it can be configured to reconstitute the failed pieces somewhere else. It is scalable, in that you can configure it so that if you need more copies of an application, it scales itself up or down as the case may be. It can be configured so that developers can turn around their changes inside the environment very easily and very quickly. The infrastructure can also be abstracted, in the sense that if you run Kubernetes on any cloud provider, be it AWS, Azure, or GCP, it’s pretty much the same thing everywhere: Kubernetes on AWS is going to be the same as Kubernetes on Azure in its internal workings. And finally, it is efficient enough that it will actually save you money on resources, because it can take very efficient advantage of the resources allocated to it.

5:13 Going into a little more detail on the features: you can automate scheduling inside Kubernetes; in other words, you can allocate things in the environment based on automation. As I said before, it can self-heal: if some component breaks, it can automatically be rebuilt on another node (a virtual machine, in common parlance). You can automate your deployments, so that when, for example, your traffic patterns change, you can automate how you react. You can autoscale, which is really part of the same thing. You can load-balance, in that you can have multiple ingress avenues for your data, so that if you lose one load-balancing connection, you can run data through another and never lose service to an end customer. And finally, we talked about infrastructure abstraction a little before: if you’ve got Kubernetes in one place, you’ve pretty much reconstituted Kubernetes anywhere, and the behaviors are going to be the same, so it’s agnostic as far as infrastructure platforms are concerned.

6:47 The heart of Kubernetes is the cluster. That’s the main construct in Kubernetes, and it really has two components: the control plane (the control node, or nodes if you want HA) and the worker nodes. This depiction just shows how a control plane interacts with worker nodes, and you can have any number of containers running on the different worker nodes. You can run this anywhere you like: on your personal computer, on the cloud providers as I said, on premises, or on edge devices like Raspberry Pis. Now we’re going to talk about our test application for the purposes of this demo.

7:48 Val: Cool, thank you very much, Tennis, for the introduction; that was a bit of a recap from last time. What we’ve got is a Kubernetes cluster, and we want to be able to deploy our applications onto it. If you’re new to Kubernetes, or you want to get into it and get your hands dirty, this is the session. What we’re going to do for the next half an hour or so is deploy an application together, and we’ll explain all the components you have to write in order to run an application. We have a GitHub repository, Kubernetes Hello World from Appvia; I’m going to share the link in the chat so you have it, and once the session is over you’ll get this information as well.

8:36 Everything we do in the demo today is in that repository, and you can follow along later on, whenever you have time, if you want to play with it. All you need is a laptop or a machine; it could be Windows, Linux, or Mac. There are a few options for running this on a local machine. The reason we’re running on a local machine is that it doesn’t cost any money; if we spin up a cluster in a cloud provider, it will cost us some. The repository takes you through everything you need to do to set up your own cluster, and once it’s set up, you’ll be deploying things.

9:23 What I’m going to do today is written out exactly in this repository, and you can try it in your own time. Now we’ll jump into explaining some of the basic concepts of Kubernetes, and how it works in terms of application deployment. At the heart of everything are containers; that’s what we’re working with. We want to deploy containers, but in Kubernetes we actually don’t deploy a container by itself. The smallest unit of work that Kubernetes deals with is what’s known as a pod, and a pod is simply either one container or a collection of containers. There are reasons you might want to run just one container, and reasons to run multiple containers: maybe two containers talk to each other quite often, or they scale together, or whatever it might be. But this is what we deploy in Kubernetes: any time people talk about pods, they’re talking about either one container or multiple containers inside that pod.

10:38 Next, we’ll look at how we actually create this pod in Kubernetes. The way we create things in Kubernetes is using resource definitions in a YAML file, and a YAML file is just a configuration file made up of key-value pairs. We’re jumping straight into creating resources; I’ll explain some concepts, and we’ll actually do a deployment ourselves and test it all out in a few minutes.

11:09 In a YAML file we define all the key-value pairs. As you can see at the top, I’ve got a key called apiVersion with the value v1, and a key called kind with the value Pod. Pretty much every YAML file you’ll see that’s related to Kubernetes will include this apiVersion and this kind. The kind is the type of resource you want to create: in our case we’re creating a Pod, but it could be a Deployment, an Ingress, a Service, anything, even a kind you defined yourself. As for apiVersion: in Kubernetes, everything is tied to an API that Kubernetes itself uses, and there are different APIs with different resources and different versions of those APIs; in this case we’ve just got apiVersion v1.

12:00 As with most Kubernetes resources, you’ll see a metadata section, and as the name suggests, it’s metadata: things like the name this pod is going to have, and some labels we attach. Labels are key-value pairs, and we’ll look at what labels are for. The most important bit is the actual container we want to run in the pod. You can see we’ve defined spec.containers, where I give the container a name, and the last line is image, which says servercore/iis. That’s the container image we want to run, and it could be any container you like: in this case it’s an IIS web server, but it could be a website you created, nginx, Ubuntu, any container you want to run. That’s how a pod YAML is created. Once you write this configuration file with these key-value pairs, we can go ahead and deploy it on our Kubernetes cluster. By the way, if you have any questions, please write them in the chat, and we’ll be happy to answer them.
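The pod manifest Val walks through here would look roughly like this; the full registry path for the IIS image is an assumption on my part, since the talk says only “servercore/iis”:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # the name this pod will have
  labels:
    app: web           # key/value labels, used later by the service
spec:
  containers:
    - name: web
      # assumed full path for the "servercore/iis" image mentioned in the talk
      image: mcr.microsoft.com/windows/servercore/iis
```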
13:27 Okay, so next: imagine this configuration file is written, and it’s a website. If it’s a website and I want to access it, I have to define what port the website runs on, because I need to be able to reach it. The way we do that is by defining a containerPort: if the website is running on port 80, I write port 80 here; if it’s running on port 8080, I write that. Whatever port the website is running on.

14:08 So how do I take this configuration file and convert it into actual resources in Kubernetes? Imagine on one side there’s the DevOps person, which is you or me, and we want to deploy this resource into a Kubernetes cluster. That cluster could be anywhere: on my local laptop, in a cloud provider, or on a machine running in a data center somewhere. I can connect to it using a command-line tool called kubectl (“cube control”), and submit my request with kubectl apply -f pod.yaml, where -f gives the name of the file; whatever the file is called, I can submit it.

15:22 What happens next is that the request gets sent to the Kubernetes cluster, and the cluster, as Tennis was saying before, has multiple components. It has what’s known as the control plane, which is basically the heart of everything, and then the node pool: the pool of worker machines we have available where we can deploy our containers. In the node pool we usually have some number of machines; in this case I’ve got a couple of nodes, but I could have two, three, four, five, as many as I like, and these nodes could be Linux nodes or Windows nodes; there’s an option to run either. The number of nodes you run depends on how much you’ll be deploying and what kind of resources, plus how much resilience you want: if one node goes down and there’s only one other node, there might not be enough space to run everything you want to run, so perhaps you run multiple worker nodes.

16:35 So, on the other side it’s you and me: we’ve taken this pod.yaml file and submitted it to the Kubernetes cluster with kubectl apply -f pod.yaml. kubectl is connected to the cluster using configuration files, or if you have credentials, you can log in with those.

17:05 Imagine I send that request, and it reaches the cluster’s control plane. The control plane is made up of a number of components, which themselves run inside containers, as pods, in the Kubernetes cluster. One of those components is the API server, and the API server is the heart of all communication within Kubernetes: every time you and I need to talk to any component inside Kubernetes, or any component needs to talk to any other component, it has to go via the API. It will become clear in a minute what actually happens.

17:36 So I take this YAML file and submit the request, and the API server receives it. The request says: I need to create a pod, and this is the image I want, as we were saying earlier. The API server takes that request and stores the information in a database called etcd. etcd is a high-availability database; it runs as a cluster, in three, five, however many instances you want, and the good thing about etcd is that it replicates all the information among its members. If something is written to one node, it gets replicated to the others, so if I lose one etcd node, the other nodes still have that information. It’s a key-value database, just like the YAML you saw, and it’s basically the information hub: this is where the state of the Kubernetes cluster is stored, because this is where we’ve submitted and stored our desired state. I said I want to create a new pod, I sent the request, and it gets saved there.

18:57 So that information gets saved; what happens next? By the way, if you have any questions, do jump in. Tennis, anything you’d like to add so far? I think your mic might be muted, so you might want to unmute yourself.

19:16 Tennis: For some reason my mic was muted. All I would add is that etcd is your ongoing state information about your living environment. It’s useful for backup as well, but basically it’s the living state of the whole system.

19:32 Val: Correct, yes, a living state: anything we want to create is all in there. Now imagine we want to create this pod, or this deployment, or whatever resource. There’s another component inside the control plane that listens for the changes made in the etcd database: say that you want to create this pod, or this deployment (we’ll explain what a deployment is). That component is called the scheduler, and it’s in charge of listening for any time a new pod (one container or multiple containers) needs to be created. The purpose of the scheduler is to scan your nodes and see which node is the best fit for that pod to run on. That matters because otherwise you and I would have to log into each node and figure out which one has enough space to run the pod; with the scheduler, this is all automated. The scheduler looks at the nodes and sees where there’s enough space to deploy, or whether there are other requirements: for example, the pod might need a GPU, and if it does, it should be deployed on a node that has a GPU. The scheduler also tries to spread out replicas, making sure not everything gets deployed on the same node, because if we then lose that node, we’re in trouble. It scans and figures out the best fit based on a long list of criteria.

21:16 Once that’s done, the scheduler doesn’t actually deploy the pod inside the node. It tells etcd to update: it picks a node, say node one or node two, says this is the best fit, and updates the information in etcd next to that pod: this pod should be deployed on node one.

21:42 Once that happens, another component kicks in, called the controller manager. The purpose of the controller manager is to take that information and figure out how many replicas need to be created: one replica, two, however many you want; it creates the right number of replicas of that pod. So what we’ve got so far is the number of replicas we want to run and which node each will go on. At this point nothing has actually been created inside any of the nodes; the information is just stored in etcd as the desired state: create this pod on this node. It says nothing more.
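The GPU case Val mentions can be sketched like this: a pod declares a GPU in its resource limits, and the scheduler will only place it on a node that advertises that resource. The resource name assumes the NVIDIA device plugin is installed, and the image is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
    - name: trainer
      image: nvidia/cuda:12.2.0-base-ubuntu22.04  # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1  # scheduler only picks nodes exposing this resource
```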
there’s another 22:37 component inside the worker nodes 22:43 and that component is called the cube load excuse me and cubelet is basically in charge of 22:48 talking to the api to figure out what pod needs to be deployed on its node 22:54 now that’s the purpose of the cubelet that’s all it does it just checks well there’s a few more things but the main 23:00 job is to talk to the control plane by the api and say is there any pod that 23:05 i need to run is there any part need to run it just basically keeps checking all the time and if there is and if there is 23:12 then cubelet is in charge of trying to create this pod cubelet directly it doesn’t create the 23:17 part it calls a runtime like you and i you know we run this docker desktop or docker engine or our machine 23:23 that basically it calls it delegates the secretion of the container but for all intents and purposes we say cubelet is 23:29 in charge of creating the pod and the pod could include perhaps one container or multiple containers and cubelet 23:36 basically brings a when the pot starts up in inside the node it lets the control plane know yeah 23:44 the pod is up and running gives it some more information about what the ip address of the pod is 23:49 and sends that information back to the api and api stores in xcd and say yo this is part number one is running at 23:56 node one and the status is ready or not ready or whatever it might be and this is this information gets stored 24:03 in xcd as tennis was saying the information is all about 24:08 um you know this is the living information so if if one of the parts pressures we can bring it back up now 24:16 imagine i had to run multiple replicas of that so instead of creating a pod we can use what’s called the deployment and 24:22 we’ll talk about deployments in a second and deployment is basically something that’s just similar to a pod but we can 24:28 in a deployment we can define how many replicas i want to run imagine i want to run two replicas of 
24:34 this of this part right imagine i want to run two replicas of the pod and if i want to run two replicas of the 24:40 pod in a deployment i can say run two replicas of this pod and once that happens that information 24:46 also gets stored in scd and ncd says at all times i need to have two replicas of 24:51 this pod running and if for whatever reason one of the pod crashes maybe there’s a uh you know 24:58 maybe there is a there’s a bug in the in the application or whatever it might be there’s not enough resources it crashes 25:06 then the the controller manager which is on the left of the controller is always watching to see if the things are still 25:14 up and running if they aren’t and basically brings it back up it tells the cubelet it basically figures out oh 25:21 this this part has gone down and bring it back and this is the self-healing uh feature of kubernetes that’s that tennis 25:28 was talking about earlier on tennis anything you’d like to add 25:34 uh no sir just um that the the one thing that that uh 25:40 i would uh remind people is that the the fact that we’ve created a pod and gone through the 25:47 process of creating a pod the the concept of resiliency 25:53 is actually taken care of at a higher level than the pod definition and we’re going to talk about that in a little bit 25:59 so just because my point is is that just because we’ve created a pod doesn’t mean that it’s automatically 26:04 resilient that requires other facilities we’re going to talk about later 26:10 yeah which is uh which is the deployment so these are the components of kubernetes 26:16 these are major components but this is a simplified super high level uh overview of what 26:22 kubernetes components look like for example if you if you want to go into a little bit more detail there’s not just 26:28 one controller there’s many controllers that look after different things um and the scheduler itself is made up of 26:33 multiple parts um in the node itself there’s not just 
cubelet there’s other components as well which do other things 26:40 but that can be a topic for another day we just like to give a high level overview of all the things 26:47 so what does my kubernetes application look like if i deploy an application in kubernetes what is the structure of the 26:54 application what does it actually look like and we will deploy it we will all have a look at it together imagine we 27:00 have multiple replicas of the pod that we want to run as tennis was saying and usually we don’t run pod in itself we 27:07 can it’s just one instance that’s absolutely fine but nothing is actually looking over it 27:13 what we usually use is what’s called the deployment and deployment is just a recipe that says how many pods of a 27:19 specific image you like to run how many replicas i want to run and the good thing about that is if you say i want to 27:25 run two replicas kubernetes will make sure there’s two replicas of that part running at all time not three not one 27:32 not zero it will try and always go to the desired state of two pods 27:38 and how do i create this deployment you saw this pod.yaml before earlier on and 27:43 you know we explained some of the bits the metadata and at the bottom we’ve got the containers and 27:48 and the image that uses if you look at the highlighted bit in the middle what we’ve got is this replicas three 27:55 so we’ve got three replicas we want to make sure if i can run like five replicas i can run six replicas and once 28:01 i’m ready i can deploy this into kubernetes as cube ctl apply myself uh you know uh whatever the name of the 28:08 file is dot yam and that will create me a deployment and deployment is basically 28:13 translates the number of pods that you want to run in our case we’re going to run three parts 28:19 there’s some other things as well in here there’s a bit more like we have some labels and selectors and we’ll look 28:25 at what labels and selectors are useful for but every part gets a 
label and a 28:30 label is again a key value pair and that’s useful for when you want to send the traffic to a given set of pods and 28:37 we’ll when we’ll look at that in a little while in a few minutes 28:42 so what are labels uh every part every resource that you create in kubernetes not just part it could be service could 28:48 be an ingress could be anything you can get this key value pair so in our case i’ve got this environment and it is the 28:54 key and the value is the value is qa now the this is entirely uh arbitrary you 28:59 could give any key or any value and you can assign it to any of those components 29:05 and what that allows us to do is um what we have is we want to be able to send 29:11 traffic to these pods that’s the point imagine if i have multiple replicas of a 29:16 pod running and i want to load balance how do i make sure i pick one pod to 29:21 send like some traffic to one of the parts and some traffic to the other part 29:27 and that’s done by what’s known as a service which we call which we can think of as an internal internal load balancer 29:34 and once we have that service uh we can send the the traffic to to those pods and that internal load and 29:42 the service uses those labels to be able to send that traffic and we’ll explain in in a little in a little while and how 29:48 do we create these resources in communities the reason the way we create these resources in kubernetes is again 29:54 using yaml files so we have a ton of yellow files we we had a deployment yellow file now we’re going to write a 30:00 service yaml file and all service is doing is doing this internal load balancing and this is a component that’s 30:06 required if you need to send the traffic to the kubernetes a2 to a pod and if you look at this service it’s 30:13 kind of kind of straightforward it’s quite quite simple we’ve got at the top it’s kind service we have some metadata 30:19 we give it a name whatever name you want to give web service and there’s a 
selector in the middle, app: web, and that's basically selecting the labels of the pods as we defined them earlier. The most important thing at the end of the day is which port it needs to target, meaning which port the container inside the pod is running on. The Service itself also runs on a port: port here is the Service's own port, which we just have to define, and targetPort is where the pod is listening. If that sounds confusing, it is — but bear with me for a minute, there's a diagram that explains exactly what I'm trying to say.

So imagine I have a Deployment on the right and a Service on the left — and we will do this demo in a few minutes. In the Deployment we have a container; it could be any container, and we'll look at nginx in a little while. It's running on port 80: it's a web server, so if somebody hits it, they'll get a page back. So in the Service I need to target port 80 — I need to make sure those two ports match up — and the selector app needs to match the pod's app label, because that's all we're matching on. The Service's own port could be any port you like. The reason we use Services is that I could have multiple replicas of the pods running — in our case, three replicas. Now, which pod do we pick? Do we write the logic ourselves to send and load-balance the traffic? This is all about internal communication, or routing traffic that comes from outside, and that's why the Service is used: the Service does the load balancing, it's got its own logic, and how it does that is a topic in itself.
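As a sketch, the Service just described might look like this; web-service and the 8080 value are illustrative (port is the Service's own port, targetPort is the container's):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service           # illustrative name
spec:
  selector:
    app: web                  # must match the labels on the pods
  ports:
    - port: 8080              # the Service's own port - can be anything
      targetPort: 80          # the containerPort the pods listen on
```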
One thing to know is that every pod gets an IP address — that's just the way Kubernetes networking works. So imagine I have this pod and I have a Service, but I need to be able to access it from outside the cluster. The Service is usually used for communication inside the cluster; you cannot send a request from outside the cluster to it. We usually send that request using an Ingress, and an Ingress is an external load balancer — we can think of it that way. An Ingress allows us to define rules such as: if somebody sends a request to this URL or this path, send it to this Service, and that Service forwards the traffic on to the right pod.

And how do we define an Ingress? Again, with a YAML file. In this YAML file we've defined kind: Ingress, and at the bottom we say which Service we want to send traffic to — web-service, or whichever Service we want to target — along with the port, which is the port of the Service itself, and the path; we'll look at those paths in a second. Basically, we define this Ingress YAML file, which allows us to send the request down, and I'll explain a couple more things about how it actually works.

So this is the Service here on the right and the Ingress on the left. Think of the Ingress as having two parts: one is the configuration you write for an application, and the other is the Ingress controller. An Ingress controller runs as a component inside the Kubernetes cluster, and it implements all the rules that you write — in our case, if somebody hits this path, forward slash, send them to this Service called web-service. The request then goes to the web-service Service, which forwards it onwards. So we'll do a demo in a second.
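A minimal Ingress of the kind described, in the current networking.k8s.io/v1 schema, might be sketched like this; the names and the 8080 port are illustrative, and an Ingress controller (for example ingress-nginx) has to be running in the cluster for the rule to do anything:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # illustrative name
spec:
  rules:
    - http:
        paths:
          - path: /              # requests to / go to web-service
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 8080   # the Service's own port
```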
But let me quickly show you what I was talking about here. We've got a pod — or multiple pods — we have a Service, and we have an Ingress: those are our three components, and we create the pods using a Deployment rather than creating a pod directly. What we need is for the Service's targetPort and the container's containerPort to match up, so we can route the traffic correctly. Then in the Ingress we have a service port, which needs to match the Service's own port, again so the traffic routes correctly. One more thing: between a Service and a pod, the selector and the label need to match up, and between an Ingress and a Service, the service name in the Ingress and the name of the Service need to match up.

Now, that's a lot of diagrams and a lot of chat — I'm sure you'd like to see whether it actually works. So let me quickly show you some things; let's bring a terminal out. This is all based on this repository here: everything I'm going to do is in the repository we shared in the announcements section, so you can try it out yourself. What I've got is a cluster set up on my local machine using kind. kind runs a Kubernetes cluster in Docker, so you need to make sure Docker is installed. You don't have to use kind — you can use minikube, or you can use MicroK8s — and all the information about how to create the cluster and use it is in that repository. Now, the cluster is there, but it's empty. I've installed the kubectl command-line tool on my machine, and it's set up to talk to that Kubernetes cluster — it's just
automatically set up. So if I do kubectl get nodes, this shows me how many nodes I've got running in that Kubernetes cluster. The cluster is running on my machine and has already been set up, and I've just got one node; that one node is the control plane and also the worker. In kind you can modify the configuration to run multiple worker nodes and a control plane — you can find all of that information in the kind documentation.

So far what I've got is an empty cluster. What we're going to deploy is this nginx website — just a plain nginx server that comes from a Docker image — but we'll deploy it using the Kubernetes pattern: we have a Docker image that we want to run, and that's what we'll do. If I do a kubectl get, I have basically nothing deployed in this Kubernetes cluster. Once that's confirmed, we can go ahead and deploy the things we need. So if I open my files here and bring them up so we can see: this is the deployment file I was talking about earlier. I'm running one replica, and it's just running this nginx image — nginx is a lightweight web server — and it runs on container port 80.
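The multi-node kind setup mentioned a moment ago is driven by a small config file. A sketch, following kind's documented config format:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

You pass it at creation time with kind create cluster --config and the file name; by default (as in the demo) kind creates a single node that acts as both control plane and worker.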
So what I can do is run kubectl apply -f on the deployment file in the manifests folder — number two, the hello-world deploy YAML — and it says a deployment has been created. If I do kubectl get deployments, you'll see it says: you've got a deployment, you wanted one pod, and you've got one pod up. And if I do kubectl get pods, I can see there's one pod running.

What I can do now is split the pane horizontally — that's probably better — and run kubectl get pods with the watch flag, so we can watch it. Then we can scale up our Deployment and see that Kubernetes will take the request, update the replicas, and increase them to however many we need. So let me cd to the right folder and increase the replicas to three, because that's what we want — we don't want just one pod running, we want multiple pods. If I apply the same file again — I haven't changed anything except the number of replicas — you can see at the top that things have kicked in: it's started to create more containers. That's just the running watch. So if I do kubectl get pods, I can see I've got three pods up and running.

Now let's be a bit cheeky and delete one of these pods, just to simulate the self-healing. I do kubectl delete pod with the name of the pod, and what happens is that it terminates the pod but brings another one up — you can see it creates another pod just to make sure things are always running. This will take a little while. So if I do kubectl get pods, you can see I have three pods running:
one is quite recent, at fifteen seconds; another has been running from before; and one started way before that. So this is the self-healing process.

Now I've got the Deployment, but I need to be able to access it, and we said that in order to access it we need a Service. So let's jump into another file. This is a Service — there are different types of Services that do slightly different things, but we can discuss those in a different session. The point is, we define what the selector should be: we select on app: example-1, which matches the selector we have on the Deployment — excellent, the Deployment's selectors match — and the container port is 80, so we need to make sure the Service's targetPort is also 80. That all sounds good, so we can go ahead and deploy it. Let me clear the screen — we can close the other pane, we don't really need it any more — and run kubectl apply -f on file number three in the manifests folder, the Service, and that creates the Service. Now kubectl get service shows that this example-1 Service has been created. There's also a kubernetes Service that already exists from before: that one is created when you create the cluster.

So now we're two-thirds of the way through: we've defined our pods, we've defined our Service. All we have to do now, in order to access it from outside the cluster — as in, from a browser — is create an Ingress, and we create the Ingress using this ingress file. All it says is that, when it's running locally, we access the page using localhost and a path; there's configuration inside that makes that work. But in an Ingress you can define multiple paths, and you can define multiple hosts, and what we say is: if
somebody types /test-path, send them to the Service called example-1 on port 80, which is what we just deployed. But imagine you had more microservices: imagine you had a checkout service, imagine you had a login service. People could come to, I don't know, appvia.com/test-path and be taken to this example Service; appvia.com/checkout would take them to a different Service; appvia.com/login would take them to yet another Service — each going to different pods. That's what the Ingress does. It does a lot more than just that, but you can route traffic based on the path that you set.

Anyway, what we're going to do is create this rule, this Ingress, so we can access the page using localhost/test-path. If I do that, we should see the nginx page that's running inside the pod. So let's quickly do that: kubectl apply -f on file number five in the manifests folder, the ingress YAML. It says the Ingress has been created, and kubectl get ingress confirms the Ingress is there. Now, if I open a browser and go to localhost/test-path, I can see that the pod running inside the Kubernetes cluster is being accessed via the Ingress: I send the request to /test-path, the Ingress receives the request and forwards it on to the Service, the Service says "okay, I need to send this to a pod", the request goes into the pod, and the response comes back.

Now, this was a pre-built image we were playing with. If you look at the instructions, we explain how you can build your own container, how you can deploy it, and how you can play around with it. So that's how you deploy an application. Any questions? Anything you'd like to add, Tennis? We like to keep this high level.
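The fan-out routing described a moment ago — one host, several paths, each mapped to its own Service — might be sketched like this; the host and the checkout/login Service names are hypothetical, taken straight from the spoken example:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress        # illustrative name
spec:
  rules:
    - host: appvia.com        # hypothetical host from the example
      http:
        paths:
          - path: /test-path
            pathType: Prefix
            backend:
              service:
                name: example-1   # the Service deployed in the demo
                port:
                  number: 80
          - path: /checkout
            pathType: Prefix
            backend:
              service:
                name: checkout    # hypothetical Service
                port:
                  number: 80
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: login       # hypothetical Service
                port:
                  number: 80
```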
No, nothing really to add. It's just that that's a really nice, succinct way of doing something that's fairly complex for most users: getting that first hello world running.

Yes — and once you have it up and running, you can play around with it. There's a lot more we can do with this, but that's basically how you get things up and running. Please have a look at the GitHub repository, github.com/appvia/kubernetes-hello-world, and if you have any questions you can always get in touch with us. By the way, if there are any questions, please feel free to ask; if not, no problem.

So let's just do a quick recap of what we've discussed and seen:
- A Pod is the smallest Kubernetes object; it represents a set of containers, or just one container.
- A Deployment is a recipe for deploying pods. We define how many replicas we want to run, and if one of the pods crashes — as we saw when I deleted one — it just recreates a pod.
- A Service is an internal load balancer: basically an abstraction that sits over the pods.
- An Ingress manages external communication, taking traffic from outside the cluster and sending it inside.

We used kubectl to communicate with Kubernetes via the Kubernetes API, which is a RESTful interface: it takes a request, sends it to Kubernetes, and Kubernetes deals with it from there.

One more thing: you don't have to have an Ingress. If things are just talking internally and nothing is exposed externally, you don't need one. But usually, when we deploy an application, we have a Deployment — and in the Deployment we have multiple pods — then we have a Service, which is an internal load balancer, and then an Ingress, which is an external load balancer, and we can submit
requests to the Ingress; the Ingress forwards them down to the Service, and the Service forwards them down to the pods themselves.

If you'd like more information about Kubernetes in general, we have tons of blogs on our website, appvia.io/blog — lots of material about getting started, and some really good posts that Tennis and others have written. Tennis, maybe you want to talk about Wayfinder?

Yes — Wayfinder is our application-management package for Kubernetes. A lot of the nuts-and-bolts problems that you have in implementing Kubernetes — it really takes care of those. Think of it this way: EKS, GKE, and AKS simplify the management of Kubernetes; well, we simplify the management of them. It's a very good package to look into if you want to implement Kubernetes with relatively rudimentary knowledge of the Kubernetes world, and not have to worry about a lot of the nuts and bolts that most people hit when they finally get around to doing Kubernetes implementations.

Yes, definitely check out the trial on appvia.io. If you have any questions at any time, you can always get in touch with us — my email is salman@appvia.io, and the same for Tennis at tennis.smith@appvia.io — so please feel free to ask any more questions there. And if you'd be kind enough to leave us some feedback, that would be great. We'll have more webinars coming in the new year as well, so keep an eye out for that announcement: I think in the first or second week of January we'll be starting back again with more topics on Kubernetes, GitOps, CI/CD, how you do all of this — a lot more of that discussion. That's what we'll be doing. Anything else, Tennis, you'd like to
add?

No, sir — just thank you for watching.

Thank you all for watching, and we will see you next time.
