
Getting started with Jsonnet: writing reusable Traefik Proxy library pt. I
As mentioned in the previous blog post, I would like to conquer customers’ stacks with good tools. Today I’m gonna show you an easy transition from a Helm chart to a Jsonnet library for one of my favorite infrastructure components: Traefik Proxy.
The result of this blog post will be available in my personal GitHub repository so you can treat it like proper Open Source Software. Fork it, enhance it, report back. Let’s get started!
Disclaimer: In this episode, I'll be using a really naive composition of objects. Once we get further, I'll dig deeper into some best practices regarding code organization.
Getting the initial shape
In the past, Traefik Labs supplied bare YAML resources as the basic installation method. But I can’t find them right now, so let’s generate all the resources with Helm instead.
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
helm template traefik traefik/traefik
This template command is gonna generate something like:
- ServiceAccount/traefik
- ClusterRole/traefik
- ClusterRoleBinding/traefik
- Deployment/traefik
- Service/traefik
- IngressRoute/traefik
But we also need to get the CRDs for the Traefik custom resources. These can be found in the Helm chart’s repository.
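One way to grab them is to clone the chart repository and apply the manifests it ships. Note that the `crds/` path below is an assumption based on the repository layout at the time of writing; verify it against the repository before applying:

```shell
# Clone the Traefik Helm chart repository and apply the CRD manifests
# bundled with it (path is an assumption -- check the repo layout first).
git clone https://github.com/traefik/traefik-helm-chart.git
kubectl apply -f ./traefik-helm-chart/traefik/crds/
```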
Installing Jsonnet tooling
All the tooling required for this session has been written in Go. So the installation is pretty straightforward:
go install github.com/google/go-jsonnet/cmd/jsonnetfmt@latest
go install github.com/google/go-jsonnet/cmd/jsonnet@latest
go install github.com/google/go-jsonnet/cmd/jsonnet-lint@latest
go install github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
And that’s it for the tooling, let’s bootstrap our new project!
Bootstrapping the new Jsonnet project
In theory, we could write all the resources from scratch. But that would be boring and messy. Let’s use a library that helps us with primitive Kubernetes objects. I’m gonna use jsonnet-libs, but the original Ksonnet libraries are good too.
mkdir ./traefik
cd ./traefik
jb init
jb install github.com/jsonnet-libs/k8s-alpha/1.18
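jb records the dependency in jsonnetfile.json (and pins the exact commit in jsonnetfile.lock.json). The file looks roughly like this; the exact fields depend on your jb version:

```json
{
  "version": 1,
  "dependencies": [
    {
      "source": {
        "git": {
          "remote": "https://github.com/jsonnet-libs/k8s-alpha.git",
          "subdir": "1.18"
        }
      },
      "version": "master"
    }
  ],
  "legacyImports": true
}
```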
Now we can quickly verify that jsonnet-libs works as expected together with the previously installed Jsonnet tooling. Let's just create main.jsonnet with the following contents:
local k = import "github.com/jsonnet-libs/k8s-alpha/1.18/main.libsonnet";

function()
  k.apps.v1.deployment.new(
    name="foo",
    containers=[
      k.core.v1.container.new(name="foo", image="foo/bar"),
    ],
  )
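With the library vendored by jb into ./vendor (its default directory), the program can be evaluated like this:

```shell
# -J adds the vendor directory to the library search path so the
# github.com/... import above resolves.
jsonnet -J vendor main.jsonnet
```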
When we run the Jsonnet program, we're gonna get the following output:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "foo"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "name": "foo"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "foo"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "foo/bar",
            "name": "foo"
          }
        ]
      }
    }
  }
}
Sweet, everything's alright. Let's code some real stuff.
Simple Traefik Proxy Deployment
Let’s check the manifest generated by Helm first.
# Source: traefik/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  labels:
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-9.12.3
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/instance: traefik
  annotations:
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
      app.kubernetes.io/instance: traefik
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      annotations:
      labels:
        app.kubernetes.io/name: traefik
        helm.sh/chart: traefik-9.12.3
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/instance: traefik
    spec:
      serviceAccountName: traefik
      terminationGracePeriodSeconds: 60
      hostNetwork: false
      containers:
      - image: "traefik:2.3.6"
        imagePullPolicy: IfNotPresent
        name: traefik
        resources:
        readinessProbe:
          httpGet:
            path: /ping
            port: 9000
          failureThreshold: 1
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        livenessProbe:
          httpGet:
            path: /ping
            port: 9000
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        ports:
        - name: "traefik"
          containerPort: 9000
          protocol: "TCP"
        - name: "web"
          containerPort: 8000
          protocol: "TCP"
        - name: "websecure"
          containerPort: 8443
          protocol: "TCP"
        securityContext:
          capabilities:
            drop:
            - ALL
          readOnlyRootFilesystem: true
          runAsGroup: 65532
          runAsNonRoot: true
          runAsUser: 65532
        volumeMounts:
        - name: data
          mountPath: /data
        - name: tmp
          mountPath: /tmp
        args:
        - "--global.checknewversion"
        - "--global.sendanonymoususage"
        - "--entryPoints.traefik.address=:9000/tcp"
        - "--entryPoints.web.address=:8000/tcp"
        - "--entryPoints.websecure.address=:8443/tcp"
        - "--api.dashboard=true"
        - "--ping=true"
        - "--providers.kubernetescrd"
        - "--providers.kubernetesingress"
      volumes:
      - name: data
        emptyDir: {}
      - name: tmp
        emptyDir: {}
      securityContext:
        fsGroup: 65532
So now we know that we need to build a Deployment object with the following attributes:
- Rolling update with a sane ratio
- Explicitly configured service account
- terminationGracePeriodSeconds set to 60 seconds
- Two emptyDir volumes for temporary data
- A single container with probes, ports, some security configuration, and startup arguments
We can actually reuse the example we wrote to validate the development environment. Let’s just change the container name to traefik and the image to traefik:2.3.6.
local k = import "github.com/jsonnet-libs/k8s-alpha/1.18/main.libsonnet";

function()
  k.apps.v1.deployment.new(
    name="traefik",
    containers=[
      k.core.v1.container.new(
        name="traefik",
        image="traefik:2.3.6",
      ),
    ],
  )
Next, I’d like to add the rolling update configuration. According to the documentation it can be done via the obj spec.strategy helpers, so let’s try to extend the Deployment object a bit:
local k = import "github.com/jsonnet-libs/k8s-alpha/1.18/main.libsonnet";

function()
  k.apps.v1.deployment.new(
    name="traefik",
    containers=[
      k.core.v1.container.new(
        name="traefik",
        image="traefik:2.3.6",
      ),
    ],
  ) +
  k.apps.v1.deployment.spec.strategy.rollingUpdate.withMaxUnavailable(
    maxUnavailable=1,
  )
Note the plus symbol there; that's how you merge two objects in Jsonnet. It is heavily used in all these libraries.
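If the merge semantics are new to you, here's a minimal standalone illustration in plain Jsonnet (no libraries needed): `+` combines two objects, and the `+:` field syntax, which the generated with* functions use internally, merges nested objects instead of replacing them.

```jsonnet
local deployment = { spec: { replicas: 1 } };
// spec+: merges into the existing spec instead of overwriting it
local patch = { spec+: { strategy: { type: "RollingUpdate" } } };

// Evaluates to { spec: { replicas: 1, strategy: { type: "RollingUpdate" } } }
deployment + patch
```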
Next, we'd like to add the two emptyDir volumes. First of all, we need to create the volume objects themselves. k8s-alpha has a dedicated namespace for objects like these; there you can also find container ports, taints, and sysctls.
// temporary volumes
local dataVolume = k.core.v1.volume.fromEmptyDir(
  name="data",
  emptyDir={},
);
local tmpVolume = k.core.v1.volume.fromEmptyDir(
  name="tmp",
  emptyDir={},
);
We follow the same method here: we just merge our Deployment with the object we've found in the documentation.
k.apps.v1.deployment.spec.template.spec.withVolumes(
  volumes=[
    dataVolume,
    tmpVolume,
  ]
)
This is the current output:
{
  "apiVersion": "apps/v1",
  "kind": "Deployment",
  "metadata": {
    "name": "traefik"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "name": "traefik"
      }
    },
    "strategy": {
      "rollingUpdate": {
        "maxUnavailable": 1
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "name": "traefik"
        }
      },
      "spec": {
        "containers": [
          {
            "image": "traefik:2.3.6",
            "name": "traefik"
          }
        ],
        "volumes": [
          {
            "emptyDir": {},
            "name": "data"
          },
          {
            "emptyDir": {},
            "name": "tmp"
          }
        ]
      }
    }
  }
}
Now I'm pretty happy with the Deployment itself, so I can tweak the container a bit. First of all, I'm gonna move the container to a separate local variable. It will make the code a bit cleaner.
local traefikContainer = k.core.v1.container.new(
  name="traefik",
  image="traefik:2.3.6",
);
Next, I'm gonna add volume mounts for the previously created volumes. A volume mount is actually a separate object too, so we need to create those first:
// volume mounts for temporary volumes
local dataVolumeMount = k.core.v1.volumeMount.new(
  name="data",
  mountPath="/data",
  readOnly=false,
);
local tmpVolumeMount = k.core.v1.volumeMount.new(
  name="tmp",
  mountPath="/tmp",
  readOnly=false,
);
And this is how we add them to the container:
// traefik container
local traefikContainer = k.core.v1.container.new(
  name="traefik",
  image="traefik:2.3.6",
) +
k.core.v1.container.withVolumeMounts(
  volumeMounts=[
    dataVolumeMount,
    tmpVolumeMount,
  ],
);
I guess you've got the idea of how to compose all the objects, right? Here's the full example that generates the complete Deployment object:
local k = import "github.com/jsonnet-libs/k8s-alpha/1.18/main.libsonnet";

function()
  // temporary volumes
  local dataVolume = k.core.v1.volume.fromEmptyDir(
    name="data",
    emptyDir={},
  );
  local tmpVolume = k.core.v1.volume.fromEmptyDir(
    name="tmp",
    emptyDir={},
  );

  // volume mounts for temporary volumes
  local dataVolumeMount = k.core.v1.volumeMount.new(
    name="data",
    mountPath="/data",
    readOnly=false,
  );
  local tmpVolumeMount = k.core.v1.volumeMount.new(
    name="tmp",
    mountPath="/tmp",
    readOnly=false,
  );

  // container ports (port numbers are integers, not strings)
  local traefikPort = k.core.v1.containerPort.newNamed(
    containerPort=9000,
    name="traefik",
  );
  local webPort = k.core.v1.containerPort.newNamed(
    containerPort=8000,
    name="web",
  );
  local websecurePort = k.core.v1.containerPort.newNamed(
    containerPort=8443,
    name="websecure",
  );

  // readiness probe; reference the numeric port so httpGet.port
  // renders as 9000 rather than the whole containerPort object
  local readinessProbe = k.core.v1.container.readinessProbe.httpGet.withPort(
    port=traefikPort.containerPort,
  ) +
  k.core.v1.container.readinessProbe.httpGet.withPath(
    path="/ping",
  ) +
  k.core.v1.container.readinessProbe.withFailureThreshold(
    failureThreshold=1,
  ) +
  k.core.v1.container.readinessProbe.withInitialDelaySeconds(
    initialDelaySeconds=10,
  ) +
  k.core.v1.container.readinessProbe.withPeriodSeconds(
    periodSeconds=10,
  ) +
  k.core.v1.container.readinessProbe.withSuccessThreshold(
    successThreshold=1,
  ) +
  k.core.v1.container.readinessProbe.withTimeoutSeconds(
    timeoutSeconds=2,
  );

  // liveness probe
  local livenessProbe = k.core.v1.container.livenessProbe.httpGet.withPort(
    port=traefikPort.containerPort,
  ) +
  k.core.v1.container.livenessProbe.httpGet.withPath(
    path="/ping",
  ) +
  k.core.v1.container.livenessProbe.withFailureThreshold(
    failureThreshold=3,
  ) +
  k.core.v1.container.livenessProbe.withInitialDelaySeconds(
    initialDelaySeconds=10,
  ) +
  k.core.v1.container.livenessProbe.withPeriodSeconds(
    periodSeconds=10,
  ) +
  k.core.v1.container.livenessProbe.withSuccessThreshold(
    successThreshold=1,
  ) +
  k.core.v1.container.livenessProbe.withTimeoutSeconds(
    timeoutSeconds=2,
  );

  // security context
  local securityContext = k.core.v1.container.securityContext.withRunAsUser(
    runAsUser=65532,
  ) +
  k.core.v1.container.securityContext.withRunAsGroup(
    runAsGroup=65532,
  ) +
  k.core.v1.container.securityContext.withRunAsNonRoot(
    runAsNonRoot=true,
  ) +
  k.core.v1.container.securityContext.withReadOnlyRootFilesystem(
    readOnlyRootFilesystem=true,
  ) +
  k.core.v1.container.securityContext.capabilities.withDrop(
    drop=[
      "ALL",
    ],
  );

  // traefik container
  local traefikContainer = k.core.v1.container.new(
    name="traefik",
    image="traefik:2.3.6",
  ) +
  k.core.v1.container.withVolumeMounts(
    volumeMounts=[
      dataVolumeMount,
      tmpVolumeMount,
    ],
  ) +
  k.core.v1.container.withArgs(
    args=[
      "--global.checknewversion",
      "--global.sendanonymoususage",
      "--entryPoints.traefik.address=:9000/tcp",
      "--entryPoints.web.address=:8000/tcp",
      "--entryPoints.websecure.address=:8443/tcp",
      "--api.dashboard=true",
      "--ping=true",
      "--providers.kubernetescrd",
      "--providers.kubernetesingress",
    ],
  ) +
  k.core.v1.container.withPorts(
    ports=[
      traefikPort,
      webPort,
      websecurePort,
    ],
  ) +
  readinessProbe +
  livenessProbe +
  securityContext;

  // deployment
  k.apps.v1.deployment.new(
    name="traefik",
    containers=[
      traefikContainer,
    ],
  ) +
  k.apps.v1.deployment.spec.strategy.rollingUpdate.withMaxUnavailable(1) +
  k.apps.v1.deployment.spec.template.spec.withVolumes(
    volumes=[
      dataVolume,
      tmpVolume,
    ]
  ) +
  k.apps.v1.deployment.spec.template.spec.securityContext.withFsGroup(
    fsGroup=65532,
  ) +
  k.apps.v1.deployment.spec.template.spec.withTerminationGracePeriodSeconds(
    terminationGracePeriodSeconds=60,
  ) +
  k.apps.v1.deployment.spec.template.spec.withServiceAccountName(
    serviceAccountName="traefik",
  )
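Before committing, it's worth running the formatter we installed earlier over the result:

```shell
# Rewrite main.jsonnet in place with canonical Jsonnet formatting.
jsonnetfmt -i main.jsonnet
```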
And … Next time
Today we created a simple Deployment object that does not support any parameterization. Next time we're gonna use parameterization to update some crucial attributes like resources, replica count, and optional startup arguments.
Once again, our ultimate goal is to create a reusable Traefik Proxy library as an alternative to the official Helm chart.