Docker has been around for almost 10 years now, and most of us know how to build images and run containers. Thousands of blog posts, docs, and presentations tell us how to leverage containers. And yet, in the field, we still make interesting mistakes once in a while. Sometimes it's not a big deal and nobody notices; sometimes it ruins everyone's productivity and makes us wonder whether containers were a good idea in the first place. Here are some of these mistakes, along with ways to fix or work around them when needed.
We'll talk about:
- big images that take ages to build or pull
- small images that are hard to operate and debug
- challenges when building from (huge) monorepos
- builds or deployment pipelines that take ages even for tiny changes
- when and why to skip Dockerfiles altogether
- and much more!
So you learned how to run containerized applications in Kubernetes using Deployments, and expose them with Services? Congratulations! In this hands-on workshop, we will talk about the next steps.
Security: how do we isolate (firewall) applications? How do we delegate permissions so that each user or team can only control their own deployments and services?
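To give a taste of the first topic: isolating an application with a NetworkPolicy might look like the sketch below. The namespace, labels, and port are made up for illustration.

```yaml
# Hypothetical example: only pods labeled app=frontend may reach
# pods labeled app=api in the "shop" namespace, on port 8080.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: shop
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Note that this requires a network plugin that enforces NetworkPolicies; we'll cover that in the workshop.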
Capacity management: how do we allocate resources (CPU and RAM) to our applications? How do we monitor resource usage? What happens when resource usage is too high on a node, or on the whole cluster? How can we implement auto-scaling?
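As a preview of resource allocation, here is a minimal sketch of CPU and RAM requests and limits on a container (image and values are arbitrary examples):

```yaml
# Hypothetical example: reserving CPU and RAM for a container,
# and capping how much it can actually use.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      resources:
        requests:
          cpu: 100m      # the scheduler reserves 0.1 CPU core
          memory: 128Mi
        limits:
          cpu: 500m      # CPU is throttled above 0.5 core
          memory: 256Mi  # the container is OOM-killed above 256 MiB
```

Requests drive scheduling decisions; limits drive throttling and eviction. We'll see what happens when these are missing, too low, or too high.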
Extending Kubernetes: we'll review multiple methods like admission webhooks, custom resource definitions (CRD), the aggregation layer, and operators.
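For instance, a Custom Resource Definition lets us add our own resource types to the Kubernetes API. A minimal sketch (the `backups.example.com` group and `schedule` field are invented for illustration):

```yaml
# Hypothetical example: a minimal CRD adding a "Backup" resource type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string
```

Once this is applied, `kubectl get backups` works like any built-in resource; an operator would then watch these objects and act on them.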
Stateful applications: we'll explain and demo the usage of Stateful Sets, Persistent Volumes, and the other resources involved in hosting stateful apps on Kubernetes.
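To illustrate the last topic: a StatefulSet can give each replica its own persistent volume through `volumeClaimTemplates`. A minimal sketch (names, image, and sizes are arbitrary examples):

```yaml
# Hypothetical example: each replica gets a stable identity (db-0,
# db-1, db-2) and its own PersistentVolumeClaim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```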
To get the most out of this workshop, you need to be familiar with container and Kubernetes foundations. If you know how to run an application with a few Deployments connected together with Services, you're all set! You also need to be familiar with the Linux command line.
For the hands-on parts and exercises, you will need to bring your own computer. You will need a web browser and an SSH client, but you won't need to install anything else, as we're going to SSH into remote clusters provided by the instructor.
The workshop spans two days. Each of the following sections will take approximately half a day.
Security
- Isolating workloads with Network Policies
- Authentication and authorization in Kubernetes
- Authentication with tokens and certificates
- Authorization with RBAC (Role-Based Access Control)
- Restricting permissions with Service Accounts
- Working with Roles, Cluster Roles, Role Bindings, etc.
- Example: the Sealed Secrets Operator
Capacity management
- Setting compute resource limits
- Defining default policies for resource usage
- Managing cluster allocation and quotas
- Resource management in practice
- The resource metrics pipeline
- Installing metrics-server
- What happens when the cluster is at, or over, capacity
- Cluster sizing and scaling
- Auto-scaling resources
Extending the Kubernetes API
- Kubernetes API server internals
- The aggregation layer
- Overview of Kubernetes API extensions
- Custom Resource Definitions (CRDs)
- Kubernetes operators
- Dynamic admission control with webhooks
- Policy Management with Kyverno
Stateful applications
- Deploying apps with Stateful Sets
- Understanding Persistent Volume Claims and Storage Classes
- Scheduling pods together or separately
- Example: deploying a Consul cluster
- Storage provisioning
- PV, PVC, StorageClass
- Defining volumeClaimTemplates
- Using highly available persistent volumes
- Example: database failover
Jérôme was part of the team that built, scaled, and operated the dotCloud PaaS, before it became Docker. He worked seven years at the famous container company, wearing various hats. When he's not busy with computers, he collects musical instruments. He can arguably play the Zelda theme on a dozen of them.