Kubernetes Gauntlet

The Kubernetes Gauntlet is a series of challenges designed to get you comfortable exploring the world of Kubernetes.

Challenge One: An Application and a Proxy Walk into a Bar


0. Set up Kubernetes on your machine or find a cluster you can borrow

Hopefully this one is pretty obvious. You can't do anything with Kubernetes if you don't have access to a cluster.
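If you don't have a cluster to hand, a local option such as minikube, kind, or Docker Desktop's built-in Kubernetes works fine for this gauntlet. Whichever you pick, a quick sanity check that kubectl can reach the cluster might look like:

```shell
# Verify that kubectl is talking to a live cluster
kubectl cluster-info

# List the nodes; at least one should report STATUS Ready
kubectl get nodes
```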

1. Create an nginx pod

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

This will start up a single instance of nginx. You can read more about the Pod resource in the Kubernetes documentation. You will not be able to see the default nginx page yet, as nothing has been exposed outside the cluster. You will also notice that if you kubectl delete pod nginx, it's gone for good; nothing will recreate a bare Pod.
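Assuming you save the manifest above as something like pod.yaml (the filename is arbitrary), creating and inspecting the Pod might look like:

```shell
# Create the Pod from the manifest
kubectl apply -f pod.yaml

# Watch it move from ContainerCreating to Running
kubectl get pods

# Delete it and note that nothing brings it back
kubectl delete pod nginx
kubectl get pods
```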

2. Make your deployment more resilient to pod failure

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

A Deployment provides declarative updates for Pods. You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.

You can see that quite a lot of this config (particularly the spec under template) is identical to the Pod config.

As you may have gathered from the config, the Pods are labelled app: nginx, and the Deployment uses that label selector to ensure there are always two replicas running.

Read more about Deployments and what they can do for you.
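To see the self-healing behaviour for yourself, you can delete one of the Deployment's Pods and watch the controller replace it. The Pod name below is illustrative; yours will have a different generated suffix:

```shell
# Two replicas, each with a generated name like nginx-<hash>-<suffix>
kubectl get pods -l app=nginx

# Delete one of them (substitute a real Pod name from the output above)
kubectl delete pod nginx-5c7588df4c-abcde

# The Deployment notices the shortfall and spins up a replacement
kubectl get pods -l app=nginx
```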

3. Expose nginx outside of the cluster

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 1337
      targetPort: 80

A Service allows you to expose running Pods to the cluster or outside of it. Here we are using the LoadBalancer type, which makes the Service reachable from outside the cluster. On AWS this will create an ELB; on other clusters it can be made to interface with your load balancer of choice to create a VIP and pool.

On a local cluster, such as Docker Desktop on a Mac, this will simply make the Service available on that port at localhost.
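Once the Service is up, you can check what external address it was given and hit it; on a local cluster the external IP is often just localhost:

```shell
# EXTERNAL-IP will show the load balancer address, or localhost locally
kubectl get service nginx

# Hit the port defined in the Service spec
curl http://localhost:1337/
```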

4. Change the contents of the page displayed

        volumeMounts:
        - mountPath: /usr/share/nginx/html/
          name: nginx-index
      volumes:
        - name: nginx-index
          configMap:
            name: nginx-index

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-index
data:
  index.html: |
    HELLO!

The top section should be added to the Deployment config that you created earlier: volumeMounts goes under the container, and volumes sits at the Pod spec level. The bottom section is a new resource type called a ConfigMap. ConfigMaps can be used to define environment variables or, as we are doing here, to mount any number of files into a container.
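After applying the ConfigMap and the updated Deployment (the filenames below are placeholders for wherever you saved the manifests), you should see the new page instead of the nginx default. Note that the mount replaces the whole /usr/share/nginx/html/ directory, so only files defined in the ConfigMap will be present:

```shell
# Apply the ConfigMap and the updated Deployment
kubectl apply -f configmap.yaml -f deployment.yaml

# The response should now be HELLO! rather than the default nginx page
curl http://localhost:1337/
```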

5. Spin up an application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: application
  labels:
    app: application
spec:
  replicas: 2
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application
    spec:
      containers:
      - name: application
        image: paulbouwer/hello-kubernetes:1.5
        ports:
        - containerPort: 8080

This is a simple Hello, World! style application for Kubernetes. You can use any suitable application in its place, as long as you keep note of the ports it exposes so that you can update them in the following steps.
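The application doesn't have a Service yet, but you can still check it is healthy by port-forwarding straight to the Deployment (8080 being the port this particular image listens on):

```shell
# Forward local port 8080 to one of the application's Pods
kubectl port-forward deployment/application 8080:8080

# In another terminal, you should get the hello-kubernetes page
curl http://localhost:8080/
```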

6. Expose the application to the cluster

apiVersion: v1
kind: Service
metadata:
  name: application
spec:
  selector:
    app: application
  type: ClusterIP
  ports:
    - protocol: TCP
      port: 8080

We only want this application exposed inside the cluster, as nginx is going to proxy it. As such, we use type: ClusterIP rather than LoadBalancer as we did with the nginx Service.
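A ClusterIP Service is only reachable from inside the cluster, which you can confirm by running a throwaway Pod and hitting the Service by name:

```shell
# Run a one-off Pod inside the cluster and curl the Service's DNS name
kubectl run tmp --rm -it --image=busybox --restart=Never -- \
  wget -qO- http://application:8080/
```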

7. Add an endpoint in nginx to hit the application

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
        listen 1337;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_pass http://application:8080/;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }

Here we are using nginx's proxy_pass directive to forward requests to the name of the Service we created for the application. Kubernetes provides service discovery through cluster DNS, so the Service name resolves directly as a DNS record.

You will also need to change the targetPort in the nginx Service from step 3 from 80 to 1337, since nginx is now listening on 1337; otherwise the Service-to-Pod mapping for nginx will not work.
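One thing to watch: as with the nginx-index ConfigMap in step 4, the nginx-config ConfigMap only takes effect once it is mounted into the nginx container. A sketch of the extra mount to add to the nginx Deployment, alongside the existing nginx-index volume, might look like this (mounting over /etc/nginx/conf.d/ replaces the stock default.conf with ours):

```yaml
        volumeMounts:
        - mountPath: /etc/nginx/conf.d/
          name: nginx-config
      volumes:
        - name: nginx-config
          configMap:
            name: nginx-config
```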

Now if you hit your endpoint again, you should see that you are proxied through to the application rather than the HELLO! index page you configured in step 4.

If you made it this far, congratulations. You are well on your way to becoming a Kubernetes master. If you found this useful please let me know, and I can do further walkthroughs in the future.