Deploying Applications the DevOps Way

[TOC]

1. Using the Helm Package Manager

  • Helm is used to streamline installing and managing Kubernetes applications.
  • Helm consists of the helm command-line tool, which needs to be installed, and charts.
  • A chart is a Helm package, which contains the following (see the example layout below):
    • A description of the package
    • One or more templates containing Kubernetes manifest files
  • Charts can be stored locally, or accessed from remote Helm repositories.
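
A chart is simply a directory with a fixed layout. As a sketch, a chart scaffolded with helm create looks roughly like this (the file names are the defaults that helm create generates; real charts usually contain more templates):

mychart/
├── Chart.yaml        # description of the package (name, version, appVersion)
├── values.yaml       # default values applied to the templates
├── charts/           # optional dependency charts
└── templates/        # templates containing Kubernetes manifest files
    ├── deployment.yaml
    └── service.yaml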

Demo: Installing the Helm Binary

  • Fetch the binary from https://github.com/helm/helm/releases ; check for the latest release!
  • tar xvf helm-xxx.tar.gz
  • sudo mv linux-amd64/helm /usr/local/bin/
  • helm version

Getting Access to Helm Charts

The main site for finding Helm charts is https://artifacthub.io

This is the main way to find repository names. We can search for specific software there and run the listed commands to install it; for instance, to run the Kubernetes Dashboard:

# helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard

Demo: Managing Helm Repositories

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo list
# show all the charts in the bitnami repository
helm search repo bitnami
helm repo update

2. Working with Helm Charts

Installing Helm Charts

After adding repositories, use helm repo update to ensure access to the most up-to-date charts.

Use helm install to install a chart with default parameters.

After installation, use helm list to list currently installed releases.

Use helm uninstall (or its alias helm delete) to remove an installed release.

Demo: Installing a Helm Chart

# install mysql and have Helm generate a release name
helm install bitnami/mysql --generate-name
kubectl get all
# show chart metadata (and all chart information) from the repository
helm show chart bitnami/mysql
helm show all bitnami/mysql
helm list
helm status mysql-xxx

Customizing Before Installing

  • A helm chart consists of templates to which specific values are applied.
  • The values are specified in the values.yaml file, within the helm chart.
  • The easiest way to customize a helm chart is by first using helm pull to fetch a local copy of the helm chart.
  • Next edit the chartname/values.yaml to change any values.

Demo: Customizing a Helm Chart Before Installing

helm show values bitnami/nginx
helm pull bitnami/nginx
tar xvf nginx-xxx.tgz
vim nginx/values.yaml
helm template --debug nginx
helm install -f nginx/values.yaml my-nginx nginx/
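
As an alternative to pulling the chart and editing values.yaml, individual values can also be overridden on the command line with --set. A short sketch (the replicaCount and service.type keys are taken from the bitnami/nginx chart's values and may change between chart versions):

# alternative: override individual values at install time
helm install my-nginx bitnami/nginx --set replicaCount=2 --set service.type=NodePort
# inspect the values the release was installed with
helm get values my-nginx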

3. Using Kustomize

Understanding Kustomize

  • Kustomize is a Kubernetes feature that uses a file named kustomization.yaml to apply changes to a set of resources.
  • This is convenient for applying changes to input files that the user does not control, and whose contents may change because new versions appear in Git.
  • Use kubectl apply -k ./ in the directory with the kustomization.yaml file to apply the changes.
  • Use kubectl delete -k ./ in the same directory to delete all that was created by the Kustomization.

Understanding a Sample Kustomization File

resources: # defines which resources (in YAML files) apply
- deployment.yaml
- service.yaml
namePrefix: test- # defines a prefix for all the resources
namespace: testing # defines the namespace for all the resources
nameSuffix: "-001"
commonLabels: # defines labels that are common to all the resources
  app: bingo
commonAnnotations:
  oncallPager: 800-555-1212
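
To preview what a kustomization renders without applying it, the transformed manifests can be printed with kubectl kustomize (run from the directory containing kustomization.yaml):

# print the transformed manifests without applying them
kubectl kustomize ./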

Using Kustomization Overlays

Kustomization can be used to define a base configuration, as well as multiple deployment scenarios (overlays), for instance dev, staging, and prod.

In such a configuration, the directory structure looks like this:

~/someApp
├── base
│   ├── deployment.yaml
│   ├── kustomization.yaml
│   └── service.yaml
└── overlays
    ├── development
    │   ├── cpu_count.yaml
    │   ├── kustomization.yaml
    │   └── replica_count.yaml
    └── production
        ├── cpu_count.yaml
        ├── kustomization.yaml
        └── replica_count.yaml

In each overlay's kustomization.yaml (for example overlays/development/kustomization.yaml), users reference the base configuration in the resources field and specify changes for that specific environment:

resources:
- ../../base
namePrefix: dev-
namespace: development
commonLabels:
  environment: development
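
Each overlay is then applied by pointing Kustomize at the overlay directory rather than at the base:

# apply the development overlay
kubectl apply -k overlays/development
# or the production overlay
kubectl apply -k overlays/production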

Demo: Using Kustomization

cat deployment.yaml
cat service.yaml
kubectl apply -f deployment.yaml -f service.yaml
cat kustomization.yaml
kubectl apply -k . # use kustomization.yaml to apply the changes

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: nginx-friday20
  name: nginx-friday20
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: nginx-friday20
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: nginx-friday20
      name: nginx-friday20
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx-friday20
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30

# service.yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: nginx-friday20
  name: nginx-friday20
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    k8s-app: nginx-friday20
status:
  loadBalancer: {}

# kustomization.yaml
resources:
- deployment.yaml
- service.yaml
namePrefix: test-
commonLabels:
  environment: testing

4. Implementing Blue-Green Deployments

Blue-green deployments are a way to deploy a new version of a service in a cluster while keeping the old version running, which makes a zero-downtime application upgrade possible.

The key requirement is the ability to test the new version of the application before taking it into production.

The blue Deployment is the current application version, and the green Deployment is the new version.

Once the green Deployment is ready, the blue Deployment is deleted.

Blue-green deployments can easily be implemented using Kubernetes Services.
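
The switch itself is only a change of the Service selector. A minimal sketch, using the blue-nginx/green-nginx names from the demo below (kubectl create deploy labels the Pods app=<deployment-name>):

# bgnginx Service -- sketch: cutting over means changing the selector from blue to green
apiVersion: v1
kind: Service
metadata:
  name: bgnginx
spec:
  selector:
    app: blue-nginx   # change to green-nginx to move traffic to the new version
  ports:
  - port: 80
    targetPort: 80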

Procedure Overview

  • Start with the already running application.
  • Create a new Deployment for the new version of the application, and test it with a temporary Service resource.
  • If all tests pass, remove the temporary Service resource.
  • Remove the old Service resource (pointing to the blue Deployment), and immediately create a new Service resource pointing to the green Deployment.
  • After successful transition, remove the blue Deployment.
  • It is essential to keep the Service name unchanged, so that front-end resources such as Ingress will automatically pick up the transition.

Demo: Blue-Green Deployments

  • kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
  • kubectl expose deploy blue-nginx --port=80 --target-port=80 --name=bgnginx
  • kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
    • Clean up dynamically generated fields
    • Change the image version
    • Change “blue” to “green” throughout
  • kubectl create -f green-nginx.yaml
  • kubectl get pods
  • kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --target-port=80 --name=bgnginx
  • kubectl delete deploy blue-nginx

5. Implement Canary Deployments

A canary deployment is an update strategy where you first push the update at a small scale to see whether it works well.

In Kubernetes terms, you could imagine a Deployment that runs 4 replicas.

Next, you add a new Deployment with 1 replica of the new version, using the same labels.

As the Service load balances across all Pods matching its selector, only 1 out of 5 requests would be serviced by the new version (see the sketch below).

If that doesn't seem to be working well, you can easily delete the new Deployment again.
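
A minimal sketch of such a canary setup (all names, labels, and image tags below are illustrative): both Deployments carry the shared app label that the Service selects on, so roughly 1 in 5 requests hits the canary.

# canary-sketch.yaml -- illustrative example
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp        # shared label, matched by the Service
        track: stable
    spec:
      containers:
      - name: myapp
        image: nginx:1.14
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1             # 1 of 5 Pods runs the new version
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp        # shared label, matched by the Service
        track: canary
    spec:
      containers:
      - name: myapp
        image: nginx:1.15
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp            # matches Pods from both Deployments
  ports:
  - port: 80
    targetPort: 80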

6. CRD: Custom Resource Definition

# crd-object.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              backupType:
                type: string
              image:
                type: string
              replicas:
                type: integer
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    shortNames:
    - bks
    kind: BackUp

kubectl apply -f crd-object.yaml

kubectl api-versions | grep stable.example.com

kubectl api-resources | grep backup

# crd-backup.yaml
apiVersion: "stable.example.com/v1"
kind: BackUp
metadata:
  name: mybackup
spec:
  backupType: full
  image: linux-backup-image
  replicas: 5
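
After creating the custom resource, it can be managed like any other resource type, using the plural name, the singular name, or the short name defined in the CRD above:

kubectl apply -f crd-backup.yaml
kubectl get backups
kubectl get bks
kubectl describe backup mybackup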

7. Using Operators

  • Operators are custom applications, based on Custom Resource Definitions.
  • Operators can be seen as a way of packaging, running, and managing applications in Kubernetes.
  • Operators are based on Controllers, which are Kubernetes components that continuously operate dynamic systems.
  • The controller loop is the essence of any Controller.
  • The Kubernetes controller manager runs a reconciliation loop, which continuously observes the current state, compares it to the desired state, and adjusts it when necessary.
  • Operators are application-specific Controllers.
  • Operators can be added to Kubernetes by developing them yourself.
  • Operators are also available from community websites.
  • A common registry for Operators is found at operatorhub.io (which is rather OpenShift-oriented).
  • Many solutions from the Kubernetes ecosystem are provided as Operators:
    • Prometheus: a monitoring and alerting solution
    • Tigera: the Operator that manages the Calico network plugin
    • Jaeger: a distributed tracing solution

Demo: Installing the Calico Network Plugin

minikube stop; minikube delete
minikube start --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.10.0.0/16
kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
kubectl api-resources | grep tigera
kubectl get pods -n tigera-operator tigera-operator-xxx-yyy
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
sed -i -e 's/192.168.0.0/10.10.0.0/g' custom-resources.yaml
kubectl create -f custom-resources.yaml
kubectl get installation -o yaml
kubectl get pods -n calico-system

8. Using StatefulSets

  • The main purpose of StatefulSets is to provide a persistent identity to Pods, as well as Pod-specific storage.
  • Each Pod in a StatefulSet has a persistent identifier that it keeps across rescheduling.
  • StatefulSets provide ordering as well.
  • Using a StatefulSet is valuable for applications that require any of the following:
    • Stable and unique network identifiers
    • Stable persistent storage
    • Ordered deployment and scaling
    • Ordered, automated rolling updates

Understanding StatefulSets Limitations

  • Storage Provisioning based on StorageClass must be available.
  • To ensure data safety, volumes created by the StatefulSet are not deleted when the StatefulSet is deleted.
  • A headless Service is required for StatefulSets.
  • To guarantee removal of StatefulSet Pods, scale down the number of Pods to 0 before removing the StatefulSet (see the sketch below).
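
A minimal sketch of that clean-up sequence, assuming the StatefulSet web with the volumeClaimTemplate www from the demo below:

# scale the StatefulSet down to 0 so its Pods are removed in order, then delete it
kubectl scale statefulset web --replicas=0
kubectl delete statefulset web
# PVCs created from the volumeClaimTemplate (named <template>-<pod>) are kept and must be removed separately
kubectl delete pvc www-web-0 www-web-1 www-web-2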

Demo: Using a StatefulSet

# sfs.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None # a headless Service is required for StatefulSets
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx" # this is required for StatefulSets
  replicas: 3 # default is 1
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates: # required when each Pod needs its own PersistentVolumeClaim
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi

kubectl get storageclass

kubectl apply -f sfs.yaml

kubectl get all

Note that StatefulSets don’t use a ReplicaSet: the StatefulSet controller manages its Pods directly.
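
The Pods therefore get stable, ordered names derived from the StatefulSet name (web-0, web-1, web-2), and each one gets its own PersistentVolumeClaim created from the volumeClaimTemplate (www-web-0, and so on), which can be verified as follows:

# Pod names are stable and ordered: web-0, web-1, web-2
kubectl get pods -l app=nginx
# one claim per Pod: www-web-0, www-web-1, www-web-2
kubectl get pvc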