Craft your Apps for Kubernetes

Use Service Discovery

Say you have a Service mynode in the yourspace namespace and a Service myapp in the myspace namespace. If myapp wants to access the mynode Service, the URL is:

mynode.yourspace.svc.cluster.local:8000 # 8000 is the service port, not the node port.
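
As a quick check, you can call that FQDN from any pod in the myspace namespace (this assumes your image contains curl and that a Deployment named myapp exists, as in the example above):

$ kubectl exec -it deploy/myapp -n myspace -- curl mynode.yourspace.svc.cluster.local:8000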

Configure Liveness and Readiness Probes

Scale the Deployment to multiple replicas so a rolling update has more than one pod to work through:

kubectl scale --replicas=3 deployment xxx

The Deployment's rolling update strategy, as shown by kubectl describe deployment:

StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge
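
If you want to set those values explicitly rather than rely on whatever the manifest currently declares, they live under the Deployment's spec.strategy; a minimal sketch matching the output above:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1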
Add liveness and readiness probes to the Deployment's pod template:

template:
  metadata:
    labels:
      app: myboot
  spec:
    containers:
    - name: myboot
      image: myboot:v1
      ports:
      - containerPort: 8080
      livenessProbe:
        httpGet:
          port: 8080
          path: /
        initialDelaySeconds: 10
        periodSeconds: 5
        timeoutSeconds: 2
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 3
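
Once the probes are applied, you can watch the READY column flip from 0/1 to 1/1 as each pod's readiness probe starts passing:

$ kubectl get pods -w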

Once you understand the basics, you can try the advanced demonstration, where a stateful shopping cart is preserved across a rolling update by leveraging the readiness probe.

https://github.com/redhat-developer-demos/popular-movie-store

More information on liveness and readiness probes:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

Deploy Blue/Green

Description of Blue/Green Deployment
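
If you are following along, the green side is simply a second Deployment (mynodenew) created alongside the existing one; the manifest filename below is an assumption, not taken from this post:

$ kubectl create -f kubefiles/mynodenew-deployment.yml   # filename is an assumption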

Once the new Deployment is up, you have the new pods as well as the old ones:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
mynode-68b9b9ffcc-jv4fd      1/1     Running   0          23m
mynode-68b9b9ffcc-vq9k5      1/1     Running   0          23m
mynodenew-5fc946f544-q9ch2   1/1     Running   0          25s
mynodenew-6bddcb55b5-wctmd   1/1     Running   0          25s

Your client/user is still seeing only the old version:

$  curl $(minikube ip):$(kubectl get service/mynode -o jsonpath="{.spec.ports[*].nodePort}")
Node Hello on mynode-668959c78d-j66hl 102

Now update the single Service to point to the new pods and go GREEN:

$ kubectl patch svc/mynode -p '{"spec":{"selector":{"app":"mynodenew"}}}'

Note: our Deployment YAML did not have liveness and readiness probes; things worked out OK here only because we waited until long after mynodenew was up and running before flipping the Service selector.
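
Rolling back to BLUE is just the reverse patch; this assumes the original selector was app: mynode:

$ kubectl patch svc/mynode -p '{"spec":{"selector":{"app":"mynode"}}}'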

Built-In Canary

Description of Canary

There are at least two types of deployments that some folks consider “canary deployments” in Kubernetes. The first is simply the rolling update strategy combined with health checks (liveness and readiness probes): if the checks fail on the new pods, the rollout stalls and traffic stays with the old ones.

Switching back to myboot and the myspace namespace:

$ kubectl config set-context --current --namespace=myspace
$ kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
myboot-859cbbfb98-4rvl8   1/1     Running   0          55m
myboot-859cbbfb98-rwgp5   1/1     Running   0          55m

Make sure myboot has 2 replicas

$ kubectl scale deployment/myboot --replicas=2

Now let’s attempt to put some really bad code into production.

Go into hello/springboot/MyRESTController.java and add a System.exit(1) into the /health logic

@RequestMapping(method = RequestMethod.GET, value = "/health")
public ResponseEntity<String> health() {
    System.exit(1);
    return ResponseEntity.status(HttpStatus.OK)
        .body("I am fine, thank you\n");
}

Obviously this sort of thing would never pass through your robust code reviews and automated QA, but let’s assume it does.

Build the code

$ mvn clean package

Build the docker image for v3

$ docker build -t 9stepsawesome/myboot:v3 .

Terminal 1: Start a poller

while true
do
  curl $(minikube -p 9steps ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}" -n myspace)
  sleep .3
done

Terminal 2: Watch pods

$ kubectl get pods -w

Terminal 3: Watch events

$ kubectl get events --sort-by=.metadata.creationTimestamp

Terminal 4: roll out the v3 update

$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v3

and watch the fireworks

$ kubectl get pods -w
myboot-5d7fb559dd-qh6fl   0/1   Error              1   11m
myboot-859cbbfb98-rwgp5   0/1   Terminating        0   6h
myboot-859cbbfb98-rwgp5   0/1   Terminating        0   6h
myboot-5d7fb559dd-qh6fl   0/1   CrashLoopBackOff   1   11m
myboot-859cbbfb98-rwgp5   0/1   Terminating        0   6h

Look at your Events

$ kubectl get events -w
6s Warning Unhealthy pod/myboot-64db5994f6-s24j5 Readiness probe failed: Get http://172.17.0.6:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
6s Warning Unhealthy pod/myboot-64db5994f6-h8g2t Readiness probe failed: Get http://172.17.0.7:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
5s Warning Unhealthy

And yet your polling client stays with the old code and old pods:

Aloha from Spring Boot! 133 on myboot-859cbbfb98-4rvl8
Aloha from Spring Boot! 134 on myboot-859cbbfb98-4rvl8

If you watch a while, the CrashLoopBackOff will continue and the restart count will increment.
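
If you would rather not wait for a fixed image, you could also back out the bad rollout with the built-in rollback, shown here as an alternative to the fix below:

$ kubectl rollout undo deployment/myboot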

Now, go fix the MyRESTController and also change from Hello to Aloha

No more System.exit()

@RequestMapping(method = RequestMethod.GET, value = "/health")
public ResponseEntity<String> health() {
    return ResponseEntity.status(HttpStatus.OK)
        .body("I am fine, thank you\n");
}

And change the greeting response to something you recognize.

Save your changes, then rebuild:

$ mvn clean package

$ docker build -t 9stepsawesome/myboot:v3 .

Now just wait for the “control loop” to self-correct.
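
You can watch the rollout converge with:

$ kubectl rollout status deployment/myboot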

Manual Canary with multiple Deployments

Go back to v1

$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v1

Next, we will use a 2nd Deployment like we did with Blue/Green.

$ kubectl create -f kubefiles/myboot-deployment-canary.yml
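
The canary manifest itself is not listed in this post; a minimal sketch of what kubefiles/myboot-deployment-canary.yml likely contains, inferred from the labels and image tag used below (treat every detail here as an assumption):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mybootcanary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mybootcanary
  template:
    metadata:
      labels:
        app: mybootcanary
    spec:
      containers:
      - name: mybootcanary
        image: 9stepsawesome/myboot:v3   # image tag assumed from the preceding section
        ports:
        - containerPort: 8080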

And you can see a new pod being born

$ kubectl get pods

And this is the v3 one

$ kubectl get pods -l app=mybootcanary
$ kubectl exec -it mybootcanary-6ddc5d8d48-ptdjv -- curl localhost:8080/

Now we add a label to both the v1 and v3 Deployments’ pod templates, causing new pods to be rolled out:

$ kubectl patch deployment/myboot -p '{"spec":{"template":{"metadata":{"labels":{"newstuff":"withCanary"}}}}}'
$ kubectl patch deployment/mybootcanary -p '{"spec":{"template":{"metadata":{"labels":{"newstuff":"withCanary"}}}}}'

Tweak the Service selector for this new label

$ kubectl patch service/myboot -p '{"spec":{"selector":{"newstuff":"withCanary","app": null}}}'
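
You can confirm that the Service now selects on the new label only:

$ kubectl get service/myboot -o jsonpath="{.spec.selector}"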

You should see approximately 30% canary responses (one canary pod out of three total) mixed in with the previous deployment:

Hello from Spring Boot! 23 on myboot-d6c8464-ncpn8
Hello from Spring Boot! 22 on myboot-d6c8464-qnxd8
Aloha from Spring Boot! 83 on mybootcanary-74d99754f4-tx6pj
Hello from Spring Boot! 24 on myboot-d6c8464-ncpn8

You can then manipulate the percentages via the replica count associated with each Deployment. For example, 20% Aloha (canary), with 1 canary pod out of 5 total:

$ kubectl scale deployment/myboot --replicas=4
$ kubectl scale deployment/mybootcanary --replicas=1

The challenge with this model is that you have to have the right pod count to get the right mix. If you want a 1% canary, you need 99 of the non-canary pods.

Istio Cometh

The concept of the Canary rollout gets a lot smarter and more interesting with Istio. You also get the concept of dark launches, which allows you to push a change into the production environment and send traffic to the new pod(s), yet no responses are actually sent back to the end-user/client.

See bit.ly/istio-tutorial
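
For a flavor of what that looks like, percentage-based routing in Istio is expressed as route weights on a VirtualService; a minimal sketch (the host name and subsets here are assumptions, not from this post, and the subsets would need a matching DestinationRule):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myboot
spec:
  hosts:
  - myboot
  http:
  - route:
    - destination:
        host: myboot
        subset: stable
      weight: 99
    - destination:
        host: myboot
        subset: canary
      weight: 1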

Store data with PersistentVolume and PersistentVolumeClaim

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    type: local
spec:
  storageClassName: mystorage
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: "/data/mypostgresdata/"
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres-pvc
  labels:
    app: postgres
spec:
  storageClassName: mystorage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10.5
        imagePullPolicy: "IfNotPresent"
        env:
        - name: POSTGRES_DB
          value: postgresdb
        - name: POSTGRES_USER
          value: admin
        - name: POSTGRES_PASSWORD
          value: adminS3cret
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        # mountPath within the container
        - name: postgres-pvc
          mountPath: "/var/lib/postgresql/data"
      volumes:
      # mapped to the PVC
      - name: postgres-pvc
        persistentVolumeClaim:
          claimName: postgres-pvc
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
    visualize: "true"
spec:
  ports:
  # the port that this service should serve on
  - port: 5432
  selector:
    app: postgres
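
To try it out, apply the manifests and check that the claim binds to the volume; the filenames below are assumptions:

$ kubectl apply -f kubefiles/postgres-pv.yml
$ kubectl apply -f kubefiles/postgres-pvc.yml
$ kubectl apply -f kubefiles/postgres-deployment.yml
$ kubectl apply -f kubefiles/postgres-service.yml
$ kubectl get pv,pvc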