Tutorial: How to Expose Kubernetes Services on EKS with DNS and TLS

February 28, 2024

Key Takeaways

A production-ready application needs to be discoverable and accessible over a secure endpoint, and it can be both complex and time-consuming to implement a solution that is easy to consume, maintain and scale. This tutorial brings a few tools together to publish your Kubernetes applications securely, and offers an easier way to reduce some of these complexities.

Browsers often mark non-HTTPS websites as insecure, which could have a negative impact on the reputation or trust of your website/application. That's just one very good reason why you should use HTTPS. Ultimately, it's more secure and more trustworthy.

What we're using in this tutorial…

You’ll find the accompanying code for this post at

Getting Started

We’ll start with a vanilla Amazon EKS cluster, which we can automate with a little Terraform magic to provide a three-node cluster.

Here's the Terraform configuration; afterwards, apply it:

// ./
provider "aws" {}

data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = data.aws_vpc.default.id
}

module "eks" {
  source           = "terraform-aws-modules/eks/aws"
  cluster_name     = "appvia-dns-tls-demo"
  cluster_version  = "1.19"
  subnets          = data.aws_subnet_ids.default.ids
  write_kubeconfig = true
  vpc_id           = data.aws_vpc.default.id
  enable_irsa      = true

  workers_group_defaults = {
    root_volume_type = "gp2"
  }

  worker_groups = [
    {
      name                 = "worker-group"
      instance_type        = "t3a.small"
      asg_desired_capacity = 3
    },
  ]
}

data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}

$ terraform init
Initializing modules...
Terraform has been successfully initialized!
$ terraform apply
Apply complete! Resources: 27 added, 0 changed, 0 destroyed.

Terraform will helpfully write a kubeconfig file for you to use with kubectl. You can set it for your terminal session without affecting any other configuration you've already set up. By default its name is based on the cluster name, so in the case of our example it's:


export KUBECONFIG=${PWD}/kubeconfig_appvia-dns-tls-demo

When you want to get back to your default Kubernetes config, just unset the variable:


unset KUBECONFIG

Test that everything is working:

$ kubectl get pods -A
NAMESPACE     NAME                       READY   STATUS    RESTARTS   AGE
kube-system   aws-node-qscpx             1/1     Running   0          18m
kube-system   aws-node-t5qp5             1/1     Running   0          17m
kube-system   aws-node-zk2gj             1/1     Running   0          18m
kube-system   coredns-6fd5c88bb9-5f72v   1/1     Running   0          21m
kube-system   coredns-6fd5c88bb9-zc48s   1/1     Running   0          21m
kube-system   kube-proxy-647rk           1/1     Running   0          18m
kube-system   kube-proxy-6gjvt           1/1     Running   0          18m
kube-system   kube-proxy-6lvnn           1/1     Running   0          17m

$ kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
...    Ready    <none>   17m   v1.19.6-eks-49a6c0
...    Ready    <none>   17m   v1.19.6-eks-49a6c0
...    Ready    <none>   17m   v1.19.6-eks-49a6c0

You should see a few pods running and three ready nodes.


Setting up DNS with external-dns

We're going to use external-dns to manage records in a Route 53 zone for you. external-dns assumes that you've already got a hosted zone in your account that you can use, with a domain you can publicly resolve.

To let external-dns make changes to the Route 53 zone, we can create an IAM role with the necessary permissions and attach it to a Kubernetes service account.
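As a sketch of what that Terraform might look like — assuming the role name externaldns_route53 used later in this tutorial, the oidc_provider_arn and cluster_oidc_issuer_url outputs of the EKS module, and an inline policy so a single resource is created:

```hcl
# Permissions external-dns needs to manage records in Route 53.
data "aws_iam_policy_document" "externaldns_route53" {
  statement {
    actions   = ["route53:ChangeResourceRecordSets"]
    resources = ["arn:aws:route53:::hostedzone/*"]
  }
  statement {
    actions   = ["route53:ListHostedZones", "route53:ListResourceRecordSets"]
    resources = ["*"]
  }
}

# Trust policy: only the external-dns service account in the
# external-dns namespace may assume this role via the cluster's
# OIDC provider (IRSA).
data "aws_iam_policy_document" "externaldns_assume" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    principals {
      type        = "Federated"
      identifiers = [module.eks.oidc_provider_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "${replace(module.eks.cluster_oidc_issuer_url, "https://", "")}:sub"
      values   = ["system:serviceaccount:external-dns:external-dns"]
    }
  }
}

resource "aws_iam_role" "externaldns_route53" {
  name               = "externaldns_route53"
  assume_role_policy = data.aws_iam_policy_document.externaldns_assume.json

  inline_policy {
    name   = "route53"
    policy = data.aws_iam_policy_document.externaldns_route53.json
  }
}
```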

Then apply that:

$ terraform apply
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

You'll see that it's bound to a service account called external-dns in the external-dns namespace.

Now, let's test that all of that works as it should. First, check the current state:

$ kubectl run -i --restart=Never --image amazon/aws-cli $(uuid) -- sts get-caller-identity
{
    "UserId": "AROARZYWN37USPQWOL5XC:i-0633eb78d38a31643",
    "Account": "123412341234",
    "Arn": "arn:aws:sts::123412341234:assumed-role/appvia-dns-tls-demo20210323123032764000000009/i-0633eb78d38a31643"
}

You'll see the UserId and Arn have an i-... in them, which is the node instance (this won't have access to much).

Now, add a Terraform output to provide an easy way to get the AWS account ID. Refresh the Terraform state and create the namespace and service account:
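A minimal sketch of that output, assuming the standard aws_caller_identity data source:

```hcl
# Look up the identity Terraform is running as, and expose the account ID
# so we can use `terraform output -raw aws_account_id` later.
data "aws_caller_identity" "current" {}

output "aws_account_id" {
  value = data.aws_caller_identity.current.account_id
}
```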

$ terraform refresh
aws_account_id = "123412341234"

$ kubectl create namespace external-dns
namespace/external-dns created

$ kubectl create -n external-dns serviceaccount external-dns
serviceaccount/external-dns created

$ kubectl annotate serviceaccount -n external-dns external-dns "eks.amazonaws.com/role-arn=arn:aws:iam::$(terraform output -raw aws_account_id):role/externaldns_route53"
serviceaccount/external-dns annotated

$ kubectl run -i -n external-dns --restart=Never --image amazon/aws-cli $(uuid) -- sts get-caller-identity
{
    "UserId": "AROARZYWN37USAHEEKT35:botocore-session-1123456767",
    "Account": "123412341234",
    "Arn": "arn:aws:sts::123412341234:assumed-role/externaldns_route53/botocore-session-1123456767"
}

Notice how the Arn has assumed-role/externaldns_route53 in it to show that you've successfully assumed the role.

Deploy external-dns:

$ kubectl -n external-dns apply -k ""
serviceaccount/external-dns configured
clusterrole.rbac.authorization.k8s.io/external-dns created
clusterrolebinding.rbac.authorization.k8s.io/external-dns-viewer created
deployment.apps/external-dns created

We need to patch the default configuration, so start by creating a k8s/external-dns/deployment.yaml:
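A sketch of what that patch file might contain, using external-dns's standard flags; the --domain-filter and --txt-owner-id values are placeholders to change for your own zone:

```yaml
# k8s/external-dns/deployment.yaml -- strategic merge patch; the container
# is matched by name, and the args list is replaced wholesale.
spec:
  template:
    spec:
      containers:
        - name: external-dns
          args:
            - --source=ingress                  # watch Ingress resources for hostnames
            - --provider=aws                    # manage records in Route 53
            - --registry=txt                    # track ownership with TXT records
            - --txt-owner-id=appvia-dns-tls-demo
            - --domain-filter=example.com       # change to your hosted zone
```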

Then apply the patch:

$ kubectl -n external-dns patch deployments.apps external-dns --patch-file k8s/external-dns/deployment.yaml
deployment.apps/external-dns patched

The configuration above will set external-dns to look for hostnames in the ingress configuration and create a record for them that points to the ingress controller's load balancer.


Deploying an ingress controller

We're going to use ingress-nginx to get us going. Other ingress controllers are available, and you might want to consider them depending on your specific needs; see this comparison of Kubernetes Ingress controllers.

Now, deploy ingress-nginx. This configuration is set up to request an AWS Network Load Balancer and attach it to the ingress controller:

$ kubectl apply -k ""
namespace/ingress-nginx created
serviceaccount/ingress-nginx-admission created
serviceaccount/ingress-nginx created
...
configmap/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
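The key part of the configuration applied above is an annotation on the controller's Service asking AWS for an NLB rather than the default Classic ELB; a sketch of what that looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    # Ask the AWS cloud provider for a Network Load Balancer.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
```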

This works totally out of the box, but you might need to scale the deployment if you want some resilience. Let's go with three replicas for now:

$ kubectl scale -n ingress-nginx --replicas=3 deployment ingress-nginx-controller
deployment.apps/ingress-nginx-controller scaled


Issuing certificates with cert-manager

cert-manager is going to populate a Secret, adjacent to our ingress configuration, with a valid TLS certificate. We're going to configure that certificate to come from Let's Encrypt using the ACME protocol through cert-manager, which supports a number of different issuer types. Now we need to deploy cert-manager:

$ kubectl apply -f
...
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
...
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
...

We need to create a couple of issuers; we're going to use Let's Encrypt with the HTTP-01 challenge.

Replace the email address in both issuers with your own; this allows Let's Encrypt to send you email notifications if your certificate is due to expire and hasn't been automatically renewed or removed:
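A sketch of ./k8s/cert-manager/issuers.yaml, assuming two ClusterIssuers named letsencrypt-staging and letsencrypt-prod (the names are placeholders) using the HTTP-01 solver via the nginx ingress class:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: use for testing, avoids production rate limits.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com        # replace with your email address
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Production endpoint: issues browser-trusted certificates.
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com        # replace with your email address
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            class: nginx
```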

Then apply that:

$ kubectl apply -f ./k8s/cert-manager/issuers.yaml
clusterissuer.cert-manager.io/letsencrypt-staging created
clusterissuer.cert-manager.io/letsencrypt-prod created

Bringing it all together

We're going to deploy a simple helloworld application with three replicas, a Service and an Ingress.

Change the hostname references to something within your zone, in line with what you did in the external-dns configuration.
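A sketch of what those manifests might look like; the image, the hostname (hello.example.com) and the issuer name are assumptions to replace with your own:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworld
spec:
  replicas: 3
  selector:
    matchLabels:
      app: helloworld
  template:
    metadata:
      labels:
        app: helloworld
    spec:
      containers:
        - name: helloworld
          image: nginxdemos/hello   # placeholder hello-world image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: helloworld
spec:
  selector:
    app: helloworld
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    kubernetes.io/ingress.class: nginx            # route via ingress-nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod  # assumed issuer name
spec:
  tls:
    - hosts:
        - hello.example.com        # change to a name in your zone
      secretName: helloworld-tls   # cert-manager populates this Secret
  rules:
    - host: hello.example.com      # change to a name in your zone
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloworld
                port:
                  number: 80
```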

This will cause external-dns to create a record in your Route 53 zone pointing to the ingress-nginx controller's network load balancer, and cert-manager to retrieve a valid certificate for it.

You can test this all with:

$ nslookup


$ curl
<!DOCTYPE html>
<title>Hello World</title>

Tearing it all down

The order in which you destroy things is REALLY IMPORTANT if you don't want to leave orphaned resources behind that could be costly. For example, if you run terraform destroy before removing the ingress configuration and ingress-nginx, you'll likely leave behind a Route 53 A record (if you're creating and destroying regularly, this can incur costs if you end up with over 10,000 records, which will happen faster than you think) and a load balancer, which could cost just under $30/month.

To do it in order:

$ kubectl delete ingress --all -A
ingress.extensions "helloworld" deleted
$ kubectl delete namespaces ingress-nginx
namespace "ingress-nginx" deleted
$ terraform state rm module.eks.kubernetes_config_map.aws_auth #workaround
Removed module.eks.kubernetes_config_map.aws_auth[0]
Successfully removed 1 resource instance(s).
$ terraform destroy -force
Destroy complete! Resources: 27 destroyed.

Minimising the effort

With the amount of effort involved in doing this, there are plenty of pitfalls and risks (specifically orphaned resource costs) that will likely mean most teams end up with a long-lived snowflake setup (aka a pet)… but you thought you were doing DevOps.

Kore Operate is an answer to this struggle - managing the complexity of the services that help in securing and exposing applications, in addition to providing a UI to create an ingress API object, making it easy for devs.
