Common Uses of Kubernetes

In this post, I will cover common uses of Kubernetes.

Running workloads in Pods

YAML manifest for a pod

Basic YAML manifest for a pod

apiVersion: v1
kind: Pod
metadata:
  # pod's name
  name: a-simple-web-application
spec:
  containers:
  # container's name
  - name: a-simple-web-application
    # container image
    image: tagnja/a-simple-web-application:native
    ports:
    - containerPort: 8080
  • spec.containers[].ports[].containerPort: Specifies the port number on which the application inside the container is listening.

Add resource limits

...
spec:
  containers:
  - ...
    resources:
      limits:
        memory: 300Mi
      requests:
        memory: 200Mi
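CPU can be constrained the same way as memory. A hedged sketch; the cpu values below are illustrative additions, not from the original manifest:

```yaml
...
spec:
  containers:
  - ...
    resources:
      limits:
        # illustrative values; tune them for your workload
        cpu: 500m
        memory: 300Mi
      requests:
        cpu: 250m
        memory: 200Mi
```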

Add startupProbe and livenessProbe

...
spec:
  containers:
  - ...
    startupProbe:
      httpGet:
        port: 8080
        path: /
      # delay before the first probe is executed
      initialDelaySeconds: 1
      periodSeconds: 10
      timeoutSeconds: 2
      failureThreshold: 3
    livenessProbe:
      httpGet:
        port: 8080
        path: /
      initialDelaySeconds: 60
      periodSeconds: 60
      timeoutSeconds: 2
      failureThreshold: 3
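A readinessProbe can be added alongside these probes to keep the pod out of Service endpoints until it responds. A minimal sketch, assuming the same / endpoint on port 8080:

```yaml
...
spec:
  containers:
  - ...
    readinessProbe:
      httpGet:
        port: 8080
        path: /
      periodSeconds: 10
      timeoutSeconds: 2
      failureThreshold: 3
```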

Creating, listing, editing, and deleting pods

Creating pods

1. Creating pods from YAML or JSON files

kubectl apply -f pod.<pod_name>.yaml

2. Creating pods by kubectl run

kubectl run <pod_name> --image=<DOCKER_IMAGE:TAG>
# for example
kubectl run a-simple-web-application --image=tagnja/a-simple-web-application:native

3. Creating pods by kubectl create deployment

kubectl create deployment <deploy_name> --image=<image_name>

Listing pods

# listing pods
kubectl get pods
# or
k get po

# listing with more information
kubectl get pods -o wide

# listing and watching for changes
kubectl get pods -w

Viewing pods

# get basic information about the pod
kubectl get pods <pod_name>

# get more information of the pod
kubectl get pods <pod_name> -o wide

# get the YAML file of the pod
kubectl get pods <pod_name> -o yaml

# get details
kubectl describe pods <pod_name>

Editing pods

1. Edit the YAML manifest file and re-apply it

kubectl apply -f pod.<pod_name>.yaml

Or delete and apply

kubectl delete -f pod.<pod_name>.yaml
kubectl apply -f pod.<pod_name>.yaml

2. Using kubectl edit

kubectl edit pods <pod_name>

Delete pods

Delete a pod

# delete by name
kubectl delete pods <pod_name>
# delete by YAML manifest file
kubectl delete -f pod.<pod_name>.yaml

Delete all pods

kubectl delete pods --all

Interacting with the application and the pod

Sending requests to the application in the pod

Getting the pod’s IP address

kubectl get po -o wide

1. Connecting to pods via kubectl port forwarding

kubectl port-forward <pod_name> <local_port>:<container_port>
kubectl port-forward my-pod 8080:8080
curl localhost:8080

2. Connecting to the pod from other pods or worker nodes

# SSH into a running pod
kubectl exec -it <pod-name> -- sh
curl <pod_ip>:<port>
# SSH into a K8S node in Google Kubernetes Engine
gcloud compute instances list
gcloud compute ssh <instance_name>
curl <pod_ip>:<port>

3. Connecting from a one-off client pod

# send requests
kubectl run --image=alpine/curl -it --restart=Never --rm client-pod -- curl <pod_ip>:<port>
# or with verbose output
kubectl run --image=alpine/curl -it --restart=Never --rm client-pod -- curl -v <pod_ip>:<port>

# connect postgres
kubectl run --image=alpine/psql -it --restart=Never --rm client-pod -- psql -h <pod_ip> -p <port> -U <user> -d <database>

Viewing application logs

Retrieving a pod’s log with kubectl logs

kubectl logs <pod_name>
kubectl logs <pod_name> -c <container_name>
kubectl logs <pod_name> -f
kubectl logs <pod_name> --since=2m
kubectl logs <pod_name> --since-time=2020-02-01T09:50:00Z
kubectl logs <pod_name> --tail=10

Copying files to and from containers

kubectl cp <pod_name>:<file_path> <local_file_path>
kubectl cp <local_file_path> <pod_name>:<file_path>

Executing commands in running containers

kubectl exec <pod_name> -- <command>
kubectl exec <pod_name> -c <container_name> -- <command>

For example

kubectl exec <pod_name> -- ps aux
kubectl exec <pod_name> -- curl -s localhost:8080

Running an interactive shell in the container

kubectl exec -it <pod_name> -- bash
# or
kubectl exec -it <pod_name> -- sh

Attaching to a running container

The kubectl attach command is another way to interact with a running container. It attaches itself to the standard input, output and error streams of the main process running in the container. Normally, you only use it to interact with applications that read from the standard input.

kubectl attach -it <pod_name>

Now, when you send new HTTP requests to the application using curl in another terminal, you’ll see the lines that the application logs to standard output also printed in the terminal where the kubectl attach command is executed.

Persisting data in PersistentVolumes

Persistent volumes and claims

Create a disk on Google cloud

$ gcloud compute disks create <disk_name> --size=10GB
# List disks
$ gcloud compute disks list
# Get Disk details
$ gcloud compute disks describe <disk_name>

Creating a PersistentVolume and PersistentVolumeClaim object

pvc.my-postgres.yaml

# Provision the existing persistent disk as a PersistentVolume.
# Using PersistentVolume objects to represent persistent storage
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-postgres-pv-gce
spec:
  capacity:
    # the size of your pre-existing persistent disk
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  csi:
    driver: pd.csi.storage.gke.io
    # The identifier of your pre-existing persistent disk. The format is
    # projects/{project_id}/zones/{zone_name}/disks/{disk_name} for zonal persistent disks, or
    # projects/{project_id}/regions/{region_name}/disks/{disk_name} for regional persistent disks.
    volumeHandle: projects/manifest-module-xxx/zones/asia-east2-c/disks/my-k8s-disk
    fsType: ext4
---
# Claiming persistent volumes with PersistentVolumeClaim objects
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: my-postgres-pvc-gce
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # the size of your pre-existing persistent disk
      storage: 10Gi
  # This field must be set to an empty string if you want Kubernetes to bind a
  # pre-provisioned persistent volume to this claim instead of provisioning a new one.
  storageClassName: ""
  volumeName: my-postgres-pv-gce

Create a secret:

kubectl create secret generic my-postgres-secret --from-literal=password=<YOUR_PASSWORD>

Using PersistentVolumeClaim in a pod

pod.my-postgres.yaml

apiVersion: v1
kind: Pod
metadata:
  name: postgres-gce-pv-static
spec:
  volumes:
  - name: postgres-data
    persistentVolumeClaim:
      claimName: my-postgres-pvc-gce
  containers:
  - name: postgres1
    image: postgres:17-alpine
    ports:
    - containerPort: 5432
    env:
    - name: TZ
      value: "Asia/Shanghai"
    - name: POSTGRES_PASSWORD
      valueFrom:
        # Create the secret first: `kubectl create secret generic my-postgres-secret --from-literal=password=xxx`
        secretKeyRef:
          name: my-postgres-secret
          key: password
    volumeMounts:
    - mountPath: "/var/lib/postgresql/data"
      name: postgres-data
      subPath: my-postgres-data

Connect to PostgreSQL from a one-off client pod

kubectl run --image=alpine/psql -it --restart=Never --rm client-pod -- psql -h <pod_ip> -p <port> -U <user> -d <database>
# for example
kubectl run --image=alpine/psql -it --restart=Never --rm client-pod -- psql -h 10.88.0.11 -p 5432 -U postgres -d postgres

Dynamic provisioning of persistent volumes

Dynamic provisioning creates the PersistentVolume objects and underlying disks automatically. No volumeName is specified in the manifest file.

pvc.my-postgres.yaml

# Claiming persistent volumes with PersistentVolumeClaim objects
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: default
  name: postgres-pvc-gce-dynamic
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      # the size of the persistent disk to provision
      storage: 10Gi
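If you want a specific StorageClass rather than the cluster default, set storageClassName explicitly. A sketch, assuming the standard-rwo class (GKE's default persistent-disk class) exists in your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc-gce-dynamic-explicit
spec:
  accessModes:
  - ReadWriteOnce
  # `kubectl get storageclass` lists the classes available in your cluster
  storageClassName: standard-rwo
  resources:
    requests:
      storage: 10Gi
```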

Configuration via ConfigMaps, Secrets

Setting the command, arguments, and environment variables

Setting the command to override the ENTRYPOINT of the Docker image

apiVersion: v1
kind: Pod
metadata:
  name: a-simple-web-application-2
spec:
  containers:
  - name: a-simple-web-application
    image: tagnja/a-simple-web-application:slim
    ports:
    - containerPort: 8081
    # Dockerfile: ENTRYPOINT ["java", "-jar", "app.jar"]
    command: [ "java", "-jar", "-Dserver.port=8081", "app.jar" ]

Add arguments

apiVersion: v1
kind: Pod
metadata:
  name: a-simple-web-application
spec:
  containers:
  - name: a-simple-web-application
    image: tagnja/a-simple-web-application:native
    ports:
    - containerPort: 8081
    # GraalVM native image Dockerfile: ENTRYPOINT ["/app/myapp"]
    # Args are appended after the entrypoint, so the container runs `/app/myapp -Dserver.port=8081`.
    args: [ "-Dserver.port=8081" ]

Using a config map to decouple configuration from the pod

An example of manifest file for ConfigMap

cm.my-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  greeting: Hi, K8S!
  name: Taogen

Using a configMap in a Pod with configMapKeyRef

apiVersion: v1
kind: Pod
metadata:
  name: a-simple-web-application-pod-env-configmap
spec:
  containers:
  - name: a-simple-web-application
    image: tagnja/a-simple-web-application:slim
    ports:
    - containerPort: 8080
    # Verify the added environment variable GREETING:
    # $ kubectl exec a-simple-web-application-pod-env-configmap -- env | grep GREETING
    env:
    - name: GREETING
      valueFrom:
        configMapKeyRef:
          name: my-config
          key: greeting
          optional: true

Using a configMap in a Pod with configMapRef

apiVersion: v1
kind: Pod
metadata:
  name: a-simple-web-application-pod-envfrom-configmap
spec:
  containers:
  - name: a-simple-web-application
    image: tagnja/a-simple-web-application:slim
    ports:
    - containerPort: 8080
    # Verify the added environment variable GREETING:
    # $ kubectl exec a-simple-web-application-pod-envfrom-configmap -- env | grep GREETING
    envFrom:
    - configMapRef:
        name: my-config
        optional: true
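ConfigMap entries can also be mounted as files instead of environment variables. A minimal sketch, reusing the my-config map above; each key becomes a file under the mount path:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: a-simple-web-application-pod-volume-configmap
spec:
  volumes:
  - name: config-volume
    configMap:
      name: my-config
  containers:
  - name: a-simple-web-application
    image: tagnja/a-simple-web-application:slim
    volumeMounts:
    # /etc/config/greeting and /etc/config/name will contain the ConfigMap values
    - name: config-volume
      mountPath: /etc/config
```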

Using Secrets to pass sensitive data to containers

Create secret

$ kubectl create secret generic my-postgres-secret --from-literal=password=<YOUR_PASSWORD>

Using secrets in Pod

apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-secret
spec:
  containers:
  - name: postgres1
    image: postgres:17-alpine
    ports:
    - containerPort: 5432
    env:
    - name: TZ
      value: "Asia/Shanghai"
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: my-postgres-secret
          key: password
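Secrets can likewise be mounted as files, which keeps values out of the container's environment. A sketch assuming the my-postgres-secret created above; the postgres image supports reading the password from a file via POSTGRES_PASSWORD_FILE:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-with-secret-volume
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: my-postgres-secret
  containers:
  - name: postgres1
    image: postgres:17-alpine
    env:
    # Postgres reads the password from the mounted file
    - name: POSTGRES_PASSWORD_FILE
      value: /etc/secrets/password
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secrets
      readOnly: true
```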

Organizing objects using Namespaces, Labels, Annotations

Organizing objects into Namespaces

Namespaced Kubernetes API Object types

  • Pods
  • Deployments
  • Services
  • PersistentVolumeClaims
  • ConfigMaps
  • Secrets
  • Events

Cluster-scoped Kubernetes API Object types

  • Nodes
  • PersistentVolumes
  • StorageClasses
  • Namespaces

To see if a resource is namespaced or cluster-scoped, check the NAMESPACED column when running kubectl api-resources.

Listing namespaces

kubectl get namespaces
k get ns

Listing objects in a specific namespace

kubectl get pods --namespace kube-system
k get po -n kube-system

Listing objects across all namespaces

kubectl get cm --all-namespaces
k get cm -A

Creating a namespace

kubectl create namespace my-namespace
k create ns my-namespace

Creating a namespace from a manifest file

apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace

Creating objects in a specific namespace

kubectl apply -f <filename>.yaml -n my-namespace

Specifying the namespace in the object manifest

apiVersion: v1
kind: Pod
metadata:
  name: xxx
  namespace: default
spec:
...

Making kubectl default to a different namespace

kubectl config set-context --current --namespace my-namespace

Deleting namespaces

kubectl delete ns my-namespace

Organizing pods with labels

Defining labels in object manifests

metadata:
  name: xxx
  labels:
    # app name
    app: xxx
    # release type: stable, canary
    rel: stable

Show pods’ labels

kubectl get pods --show-labels

Show specific labels with the --label-columns option

kubectl get pods -L <label_name_1>,<label_name_2>

Adding or changing labels to an existing object

$ kubectl label pod <pod_name> <label_name1>=<value1> <label_name2>=<value2>
$ kubectl label pod <pod_name> <label_name1>=<value1> <label_name2>=<value2> --overwrite

Labelling all objects of a kind

kubectl label pod --all <label_name>=<value1>

Removing a label from an object

kubectl label pod <pod_name> <label_name>-
kubectl label pod --all <label_name>-

Standard label keys

Kubernetes recommends a set of shared labels with the app.kubernetes.io prefix, for example:

  • app.kubernetes.io/name
  • app.kubernetes.io/instance
  • app.kubernetes.io/version
  • app.kubernetes.io/component
  • app.kubernetes.io/part-of
  • app.kubernetes.io/managed-by

Filtering objects with label selectors

Label selectors allow you to select a subset of pods or other objects that contain a particular label and perform an operation on those objects. A label selector is a criterion that filters objects based on whether they contain a particular label key with a particular value.

There are two types of label selectors:

  • equality-based selectors:
    • label_name=value1
    • label_name!=value1
  • set-based selectors:
    • label_name in (value1, value2)
    • label_name notin (value1, value2)
    • label_name (present)
    • !label_name (not present)
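In object manifests, set-based selectors are written as matchExpressions. A sketch of how a Deployment selector could use them; the label keys and values are illustrative:

```yaml
# Deployment spec fragment
selector:
  matchExpressions:
  - key: app
    operator: In
    values: [ crud-web-application ]
  - key: rel
    operator: NotIn
    values: [ canary ]
```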

Filtering the list of objects using label selectors

kubectl get pods --selector <label_name>=<value1>
k get po -l <label_name>=<value1>
k get po -l <label_name_1>=<value1>,<label_name_2>=<value2>
k get po -l 'label_name in (value1, value2)'
k get po -l <label_name>
k get po -l '!<label_name>'
  • The --selector argument (or the short equivalent -l)

Deleting objects using a label selector

kubectl delete pods -l <label_name>=<value1>

Annotating objects

You can’t store just anything in a label value. For example, the maximum length of a label value is only 63 characters, and the value can’t contain whitespace at all. For this reason, Kubernetes provides a feature similar to labels: object annotations.

Like labels, annotations are also key-value pairs, but they don’t store identifying information and can’t be used to filter objects. Unlike labels, an annotation value can be much longer (up to 256 KB at the time of this writing) and can contain any character.

Adding your own annotations

As with labels, you can add your own annotations to objects. A great use of annotations is to add a description to each pod or other object so that all users of the cluster can quickly see information about an object without having to look it up elsewhere.

For example, storing the name of the person who created the object and their contact information in the object’s annotations can greatly facilitate collaboration between cluster users.

Add or update an annotation to an existing object

# add
kubectl annotate pod <pod_name> <annotation_name>=<value>
kubectl annotate pod <pod_name> created-by='Taogen <taogen@xyz.com>'
# update
kubectl annotate pod <pod_name> created-by='Taogen <taogen@xyz.com>' --overwrite

Specifying annotations in the object manifest

apiVersion: v1
kind: Pod
metadata:
  name: xxx
  annotations:
    created-by: Taogen <taogen@xyz.com>
    contact-phone: +1 234 567 890
    managed: 'yes'
    revision: '3'

Remove annotations

kubectl annotate pod <pod_name> <annotation_name>-

Exposing Pods with Services

ClusterIP

Exposes the Service internally within the Kubernetes cluster.

An example of manifest file

svc.crud-web-application-clusterip.yaml

apiVersion: v1
kind: Service
metadata:
  name: crud-web-application
spec:
  type: ClusterIP
  selector:
    app: crud-web-application
  ports:
  - name: http
    port: 8080
    targetPort: 8080
    protocol: TCP

Accessing ClusterIP services

# Accessing the ClusterIP service from another pod in the cluster
kubectl exec -it <pod_name> -- sh
curl <cluster_ip>:<port>
# or using the service name
curl <service_name>:<port>

# or accessing the ClusterIP service from a node in the cluster
gcloud compute instances list
gcloud compute ssh <node_name>
curl <cluster_ip>:<port>

NodePort

Exposes the Service externally on each Node’s IP at a specific port (the NodePort).

An example of manifest file

svc.crud-web-application-node-port.yaml

apiVersion: v1
kind: Service
metadata:
  name: crud-web-application-svc-node-port
spec:
  type: NodePort
  selector:
    app: crud-web-application
  ports:
  - name: http
    port: 8080
    # The range of valid node ports is 30000-32767
    nodePort: 30000
    targetPort: 8080

Accessing NodePort services

# From anywhere
# To allow traffic to node ports on GKE, create a firewall rule that opens the node port range on the nodes' external IPs.
$ gcloud compute firewall-rules create gke-allow-nodeports --allow=tcp:30000-32767
$ curl <any_node_external_IP>:<node_port>

# In pod or node
# Get Node's internal IP: `k get no -o wide`
curl <any_node_internal_IP>:<node_port>
# Get Service Cluster IP: `k get svc`
curl <service_cluster_ip>:<port>

# In pod only
curl <service_name>:<port>

LoadBalancer

An example of manifest file

svc.crud-web-application-load-balancer.yaml

apiVersion: v1
kind: Service
metadata:
  name: crud-web-application-svc-load-balancer
spec:
  type: LoadBalancer
  selector:
    app: crud-web-application
  ports:
  - name: http
    port: 8080
    # The range of valid node ports is 30000-32767.
    # The node port is only specified so that it isn't randomly selected by Kubernetes.
    # If you don't care about the node port number, you can omit the nodePort field.
    nodePort: 30000
    targetPort: 8080

Accessing LoadBalancer services

# From anywhere
# Get Service external IP: `k get svc`
curl <service_external_ip>:<port>

Exposing Services with Ingress

Ingress

ing.crud-web-application.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: crud-web-example-com
spec:
  rules:
  - host: crud-web.tagnja.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: crud-web-application-svc-ing
            port:
              number: 8080
      - path: /app
        pathType: Exact
        backend:
          service:
            name: crud-web-application-svc-ing
            port:
              number: 8080
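Requests that match no rule can be routed to a catch-all service via defaultBackend. A minimal sketch; the service name reuses the one above and is illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: crud-web-default-backend
spec:
  # handles any request that no rule matches
  defaultBackend:
    service:
      name: crud-web-application-svc-ing
      port:
        number: 8080
```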

Ingress with TLS

1. Create a self-signed TLS certificate

openssl req -x509 -newkey rsa:2048 -keyout crud-web.key -out crud-web.crt \
-sha256 -days 365 -nodes -subj '/CN=crud-web.tagnja.com' \
-addext 'subjectAltName = DNS:crud-web.tagnja.com'

2. Create a secret for TLS certificate files

kubectl create secret tls tls-crud-web-tagnja-com \
  --cert=path/to/crud-web.crt \
  --key=path/to/crud-web.key

3. The manifest file

ing.crud-web-application-ing-tls.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: crud-web-example-com-ing-tls
spec:
  # Note: After applying this ingress, you need to add or update the ingress's new IP in your DNS records.
  tls:
  - secretName: tls-crud-web-tagnja-com
    hosts:
    - "crud-web.tagnja.com"
  rules:
  - host: crud-web.tagnja.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: crud-web-application-svc-ing
            port:
              number: 8080

4. Access the application

# https
curl https://crud-web.tagnja.com
# or
curl --insecure -v https://crud-web.tagnja.com

Managing Pods with Deployments

An example of manifest file for Deployments

deploy.a-simple-web-application.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: a-simple-web-application-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: a-simple-web-application
      rel: stable
  template:
    metadata:
      labels:
        app: a-simple-web-application
        rel: stable
    spec:
      containers:
      - name: a-simple-web-application
        image: tagnja/a-simple-web-application:native
        ports:
        - containerPort: 8080
  • spec.replicas: Number of desired pods. Defaults to 1.
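The rollout behavior can be tuned with a strategy block on the Deployment. A hedged sketch; the values shown are the Kubernetes defaults, written out explicitly:

```yaml
# Deployment spec fragment
strategy:
  type: RollingUpdate
  rollingUpdate:
    # both may be absolute numbers or percentages; 25% is the default for each
    maxSurge: 25%
    maxUnavailable: 25%
```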

References

[1] Kubernetes in Action, 2nd Edition