What do you do after you learn Kubernetes, after you deploy your applications to a production cluster, and after you fully automate the continuous deployment pipeline? You work on making your cluster self-sufficient by adding monitoring, alerting, logging, and auto-scaling.
The fact that we can run (almost) anything in Kubernetes, and that it will do its best to make it fault tolerant and highly available, does not mean that our applications and clusters are bulletproof. We need to monitor the cluster, and we need alerts that will notify us of potential issues. When we do discover that there is a problem, we need to be able to query the metrics and logs of the whole system. We can fix an issue only once we know its root cause. In highly dynamic distributed systems like Kubernetes, that is not as easy as it looks.
Furthermore, we need to learn how to scale (and de-scale) everything. The number of Pods of an application should change over time to accommodate fluctuations in traffic and demand. Nodes should scale as well, to fulfill the needs of our applications.
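As a small preview of Pod auto-scaling, the snippet below sketches how a HorizontalPodAutoscaler can be created imperatively with `kubectl autoscale`. The Deployment name `go-demo-2` and the thresholds are hypothetical; substitute your own application and limits.

```shell
# Hypothetical example: assumes a Deployment named go-demo-2 already
# exists in the current namespace.
# Create a HorizontalPodAutoscaler that targets 80% CPU utilization,
# scaling the Deployment between 2 and 6 Pods.
kubectl autoscale deployment go-demo-2 \
    --cpu-percent=80 --min=2 --max=6

# Observe the autoscaler and the current number of replicas.
kubectl get hpa
```

In later chapters we'll define such rules declaratively, which is more appropriate for version-controlled, automated deployments than imperative commands like this one.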
Kubernetes already has tools that provide metrics and visibility into logs. It allows us to create auto-scaling rules. Yet, we might discover that Kubernetes alone is not enough and that we need to extend our system with additional processes and tools. We'll discuss how to make your clusters and applications truly dynamic and resilient so that they require minimal manual involvement. We'll try to make our system self-adaptive.
Kubernetes is becoming the de facto standard for container orchestration. Jenkins is already the standard for continuous deployment. How can we combine the two and get the most out of both worlds?
We'll discuss a continuous deployment pipeline that delivers new releases on every commit. We'll explore how to ensure that a release is safe to deploy, how to leverage Jenkins and Kubernetes features to speed up the process, and how to guarantee that new versions reach production without risk, with zero downtime, and without human intervention.
You will need a fully operational Kubernetes cluster with the NGINX Ingress controller and a default StorageClass. The following Kubernetes platforms were tested for this course. Gists are provided in case you need to create a cluster specifically for this course.
- [devops24-docker.sh](https://gist.github.com/vfarcic/3fbf532b1716d40ae60552baf83b8ed1): Docker for Mac with 4 CPUs, 4GB RAM, and with the nginx Ingress controller.
- [devops24-minikube.sh](https://gist.github.com/vfarcic/f5863c66867bbe87722998683ea20c41): minikube with 4 CPUs, 4GB RAM, and with the ingress, storage-provisioner, and default-storageclass addons enabled.
- [devops24-kops.sh](https://gist.github.com/vfarcic/0552be5ccbd5c8d7f87a9dfadb5e66dc): kops in AWS with 3 t2.medium masters and 3 t2.medium nodes spread across three availability zones, and with the nginx Ingress controller.
- [devops24-eks.sh](https://gist.github.com/vfarcic/b6ed77d257964fa2e19c2722739ddad6): Elastic Kubernetes Service (EKS) with 3 t2.medium nodes, and with the nginx Ingress controller.
- [gke.sh](https://gist.github.com/5c52c165bf9c5002fedb61f8a5d6a6d1): Google Kubernetes Engine (GKE) with 3 n1-standard-1 (1 CPU, 3.75GB RAM) nodes (one in each zone), with the Cluster Autoscaler, and with the nginx Ingress controller running on top of the "standard" one that comes with GKE.
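Whichever platform you choose, you can sketch a quick check of the two requirements (a default StorageClass and a running Ingress controller) with commands like the ones below. The `ingress-nginx` namespace is an assumption; it may differ depending on how the controller was installed on your platform.

```shell
# Confirm that a default StorageClass exists; one of the entries
# should be marked "(default)".
kubectl get storageclass

# Confirm that the NGINX Ingress controller Pods are running.
# NOTE: the namespace is an assumption; on minikube, for example,
# the addon runs in kube-system instead.
kubectl --namespace ingress-nginx get pods
```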
If you are a Windows user, please use GitBash as a terminal for running the commands.
If you are running a local Kubernetes cluster with Docker for Mac/Windows, please install Vagrant (if you do not have it already).
If you have problems fulfilling the requirements, please contact me through the DevOps20 Slack workspace (http://slack.devops20toolkit.com/) — my user is vfarcic — or send me an email to firstname.lastname@example.org.