Watch talk on YouTube
A talk by Google and Ivanti.
Background
- RBAC is there to limit information access and control
- RBAC can be used to avoid interference in shared envs
- RBAC, however, does not really apply to DNS
DNS in Kubernetes
- DNS Info is always public -> No auth
- Services are exposed to all clients
Isolation and Clusters
Just don’t share
- Especially for smaller, high-growth companies with infinite VC money
- Just give everyone their own cluster -> Problem solved
- Smaller companies (<1000) typically use many small clusters
Shared Clusters
- Sharing becomes important when cost is a concern and application engineers don’t have much platform knowledge
- A dedicated Kubernetes team can optimize hardware usage and deliver updates fast -> Increased productivity by utilizing specialists
- Problem: Noisy neighbors, e.g. through leaky DNS
Leaks (demo)
Base scenario
- Cluster with a bunch of deployments and services
- Creating a simple pod binds it to the default RBAC -> No access to anything via the API
- Querying DNS info (i.e. services) still leaks everything (namespaces, services)
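The leak is easy to reproduce: even from a pod whose service account has no RBAC permissions at all, plain DNS lookups against the cluster resolver still succeed. A minimal sketch, run from inside any pod; the service and namespace names are made up for illustration:

```python
# Minimal sketch: the service/namespace below are hypothetical; any existing
# service in a foreign namespace resolves, because cluster DNS performs no
# authentication or authorization.
import socket

# RBAC may deny this pod every API call, but DNS still answers:
ip = socket.gethostbyname("payments-db.team-payments.svc.cluster.local")
print("resolved foreign service to", ip)
```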
Leak mechanics
- Leaks are based on the `<service>.<namespace>.svc.cluster.local` naming pattern
- You can also just reverse lookup the entire service CIDR
- SRV records get created for each service including the service ports
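A sketch of what such enumeration can look like, using only the Python standard library. The service CIDR below is an assumption (clusters often use something like 10.96.0.0/12) and would need to be adjusted:

```python
# Sketch: enumerate services by sweeping the service CIDR with reverse (PTR)
# lookups from inside a pod. The CIDR is an assumption; adjust for your cluster.
import ipaddress
import socket

SERVICE_CIDR = ipaddress.ip_network("10.96.0.0/24")  # assumed, usually much larger

for ip in SERVICE_CIDR.hosts():
    try:
        # PTR answers follow <service>.<namespace>.svc.cluster.local,
        # so every hit reveals a service and its namespace.
        name, _, _ = socket.gethostbyaddr(str(ip))
        print(f"{ip} -> {name}")
    except OSError:
        pass  # no ClusterIP assigned to this address

# SRV records (_<port-name>._<protocol>.<service>.<namespace>.svc.cluster.local)
# additionally reveal the ports each service listens on.
```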
Fix the leak
CoreDNS Firewall Plugin
- External plugin provided by the CoreDNS team
- Expression engine built-in with support for external policy engines
flowchart LR
req-->metadata
metadata-->firewall
firewall-->kube
kube-->|Adds namespace/clientnamespace metadata|firewall
firewall-->|send nxdomain|metadata
metadata-->res
Demo
- Firewall rule that only allows queries from the same namespace, `kube-system`, or `default` (see the sketch after this list)
- Every other cross-namespace request gets blocked
- The same service requests from before now return `NXDOMAIN`
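The rule from the demo boils down to a simple policy. This is not the actual Corefile/plugin syntax, just a Python sketch of the decision logic, assuming the firewall can see the client's namespace and the queried namespace as metadata:

```python
# Sketch of the firewall decision from the demo: allow same-namespace queries
# plus queries for kube-system and default, answer NXDOMAIN for everything else.
# Names and values here are illustrative, not the plugin's exact labels.
ALLOWED_SHARED_NAMESPACES = {"kube-system", "default"}

def decide(client_namespace: str, query_namespace: str) -> str:
    if query_namespace == client_namespace:
        return "ALLOW"
    if query_namespace in ALLOWED_SHARED_NAMESPACES:
        return "ALLOW"
    return "NXDOMAIN"

assert decide("team-a", "team-a") == "ALLOW"
assert decide("team-a", "kube-system") == "ALLOW"
assert decide("team-a", "team-b") == "NXDOMAIN"
```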
Why is this a plugin and not default?
- Requires `pods verified` mode -> Puts a watch on pods and only returns a query result if the pod actually exists
- Puts a watch on all pods -> higher API load and CoreDNS memory usage
- Potential race conditions with initial lookups in larger clusters -> Alternative is to fail open (not really secure)
Per tenant DNS
- Just run a CoreDNS instance for each tenant
- Use a mutating webhook to inject the right DNS config into each pod (see the sketch after this list)
- Pro: No more `pods verified` -> No more constant pod watch
- Limitation: Platform services still need a central CoreDNS
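A rough sketch of the patch such a webhook could produce, assuming the tenant's CoreDNS is reachable via a known ClusterIP; the IP and search domains here are placeholders:

```python
# Sketch: JSONPatch a mutating admission webhook could return to point a pod
# at its tenant's dedicated CoreDNS instance. IP and search domains are placeholders.
import base64
import json

def build_dns_patch(tenant_dns_ip: str, namespace: str) -> str:
    patch = [
        # Stop using the cluster-wide resolver ...
        {"op": "add", "path": "/spec/dnsPolicy", "value": "None"},
        # ... and hand the pod the tenant's CoreDNS instead.
        {
            "op": "add",
            "path": "/spec/dnsConfig",
            "value": {
                "nameservers": [tenant_dns_ip],
                "searches": [f"{namespace}.svc.cluster.local", "svc.cluster.local"],
            },
        },
    ]
    # AdmissionReview responses carry the patch base64-encoded.
    return base64.b64encode(json.dumps(patch).encode()).decode()

print(build_dns_patch("10.96.100.53", "team-a"))
```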
Container Image Workflows at Scale with Buildpacks
Watch talk on YouTube
A talk by Broadcom and Bloomberg (both related to buildpacks.io).
And a very full talk at that.
Baseline
- Cloud Native Buildpacks (CNB) provides the spec for buildpacks, with a couple of different implementations
- Pack CLI with builder (collection of Buildpacks - for example Paketo or Heroku)
- Output images follow the OCI spec -> Just run them on Docker/Podman/Kubernetes
- Built images are production application images (small attack surface, SBOM, non-root, reproducible)
Scaling
Builds
- Use in CI (Jenkins, GitHub Actions, Tekton, …); see the sketch after this list
- kpack: Kubernetes operator -> Builds on new changes
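For the CI case, a minimal sketch of driving the pack CLI from a pipeline step; the image tag and builder are example values, not from the talk:

```python
# Sketch: invoke the pack CLI from a CI step. Image tag and builder are
# example values; any CNB builder (Paketo, Heroku, ...) works the same way.
import subprocess

subprocess.run(
    [
        "pack", "build", "registry.example.com/team-a/myapp:1.0.0",
        "--builder", "paketobuildpacks/builder-jammy-base",
        "--path", ".",    # build the current checkout
        "--publish",      # push straight to the registry from CI
    ],
    check=True,
)
```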
Multiarch support
flowchart LR
subgraph OCIImageIndex
lamd(linux/amd64)
larm(linux/arm64)
end
larm-->imageARM
lamd-->imageAMD
subgraph imageARM
layer1.1
layer2.1
layer3.1
end
subgraph imageAMD
layer1.2
layer2.2
layer3.2
end
- Goal: Just a simple `docker pull` that auto-detects the right architecture
- Needed: Pack, Lifecycle, Buildpacks, Build images, builders, registry
- Current state: There is an RFC to handle image index creation with changes to Buildpack creation
- New folder structure for binaries
- Update config files to include targets
- The user impact is minimal, because the builder abstracts everything away
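For reference, the image index from the diagram above is just a small JSON document listing one manifest per platform. A sketch of its shape, with placeholder digests and sizes:

```python
# Sketch of an OCI image index: one entry per platform, each pointing at a
# normal single-arch image manifest. Digests and sizes below are placeholders.
import json

image_index = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v1+json",
    "manifests": [
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:aaaa...",
            "size": 1234,
            "platform": {"os": "linux", "architecture": "amd64"},
        },
        {
            "mediaType": "application/vnd.oci.image.manifest.v1+json",
            "digest": "sha256:bbbb...",
            "size": 1234,
            "platform": {"os": "linux", "architecture": "arm64"},
        },
    ],
}
print(json.dumps(image_index, indent=2))
```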
Maturity
- kpack is SLSA (slsa.dev) level 3 compliant (party hard)
- 5 years of production
- scaling up to Tanzu/Heroku/GCP levels
- Multiarch is being worked on