Hey Pixelers,

Today I thought it would be interesting to talk about three game-changing features in the new Kubernetes 1.26 release.


1. Validating Admission Policy

After enabling the ValidatingAdmissionPolicy feature gate, we can write powerful validation rules declaratively, evaluated by the API server itself rather than by an external validating webhook.

This means that we can easily roll out cluster-wide policies to ensure created resources meet compliance requirements. No third-party policy manager or custom golang code required!
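
Being an alpha feature, my understanding is that you need to switch on both the feature gate and the v1alpha1 API group on the API server before any of this works. Roughly (the exact wiring depends on how your control plane is deployed):

kube-apiserver \
  --feature-gates=ValidatingAdmissionPolicy=true \
  --runtime-config=admissionregistration.k8s.io/v1alpha1=true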

Here is an example policy declaration:

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicy
metadata:
  name: "demo-policy.example.com"
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups:   ["apps"]
      apiVersions: ["v1"]
      operations:  ["CREATE", "UPDATE"]
      resources:   ["deployments"]
  validations:
    - expression: "object.spec.replicas <= 5"

My favourite part is how easy it is to read and write, and not only that, it's really expressive! The purpose of the code is exactly what you think: whenever a deployment is created or updated, ensure its replica count is less than or equal to 5.

Expressions are written in the Common Expression Language (CEL), which I think is much simpler than its competitors.
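
As a rough illustration of what CEL can do (this rule and its message are something I made up for this post, not from the release notes), you could swap the validations stanza above for one that combines macros, string functions and a friendly failure message:

  validations:
    - expression: "object.spec.template.spec.containers.all(c, !c.image.endsWith(':latest'))"
      message: "container images must be pinned to something other than the :latest tag"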

After creating the policy, you choose where it applies by creating a policy binding (here, only namespaces labelled environment: test):

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: "demo-binding-test.example.com"
spec:
  policyName: "demo-policy.example.com"
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: test
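
To see the pair in action, a quick smoke test (namespace and deployment names are my own) could be to label a namespace so the binding selects it, then try to create an oversized deployment, which the API server should reject:

kubectl create namespace policy-demo
kubectl label namespace policy-demo environment=test
kubectl create deployment too-big --image=nginx --replicas=10 -n policy-demo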

This is still an alpha API in 1.26, so I am excited to see it eventually graduate to stable, and if you are too, be sure to check out the official documentation on this.


2. Dynamic Resource Allocation

Historically, Kubernetes has focused on managing the RAM and CPU of worker nodes when allocating workloads to hardware. No longer! With Dynamic Resource Allocation we can now create custom resource allocation drivers.

In 1.26, you can enable this by setting the DynamicResourceAllocation feature gate to true.
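
As with the admission policies above, this is alpha, so my understanding is you need the feature gate on the API server, scheduler, controller manager and kubelets, plus the new API group enabled, along the lines of:

kube-apiserver \
  --feature-gates=DynamicResourceAllocation=true \
  --runtime-config=resource.k8s.io/v1alpha1=true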

You can then write golang code to handle your custom resource allocation logic or, more likely, find a Helm chart from someone who has already done that for you.

You can imagine the possibilities. For example, if you wanted to carve out a slice of a GPU (here 2Gi of its memory) and share it between the containers of a pod, the definitions could look like this:

apiVersion: gpu.example.com/v1
kind: GPURequirements
metadata:
  name: device-consumer-gpu-parameters
memory: "2Gi"
---
apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClaimTemplate
metadata:
  name: device-consumer-gpu-template
spec:
  metadata:
    # Additional annotations or labels for the
    # ResourceClaim could be specified here.
  spec:
    resourceClassName: "acme-gpu"
    parametersRef:
      apiGroup: gpu.example.com
      kind: GPURequirements
      name: device-consumer-gpu-parameters
---
apiVersion: v1
kind: Pod
metadata:
  name: device-consumer
spec:
  resourceClaims:
  - name: "gpu" # this name gets referenced below under "claims"
    source:
      resourceClaimTemplateName: device-consumer-gpu-template
  containers:
  - name: workload
    image: my-app
    command: ["/bin/program"]
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
      claims:
        - "gpu"
  - name: monitor
    image: my-app
    command: ["/bin/other-program"]
    resources:
      requests:
        memory: "32Mi"
        cpu: "25m"
      limits:
        memory: "64Mi"
        cpu: "50m"
      claims:
      - "gpu"

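One thing the snippet above takes for granted is the "acme-gpu" resource class itself, which a real driver's installer would normally register for you. A minimal sketch (the driver name is made up) might look like this:

apiVersion: resource.k8s.io/v1alpha1
kind: ResourceClass
metadata:
  name: acme-gpu
# The DRA driver responsible for allocating claims of this class
driverName: gpu.example.com
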
So far there is no example implementation of a driver, however once one has been written it will be published here.

For more information, check out the blog post or the documentation page.


3. Windows HostProcess!

Microsoft has been working hard to implement the Windows equivalent of Linux privileged containers in Kubernetes, and in 1.26 HostProcess containers have graduated to stable.

You can now tell a pod to run its containers as host processes simply by configuring the pod's securityContext, like so (the container listed below is just an illustrative placeholder):

apiVersion: v1
kind: Pod
metadata:
  name: windows-pod
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: "NT AUTHORITY\\Local service"
  hostNetwork: true
  nodeSelector:
    "kubernetes.io/os": windows
  containers:
  - name: host-admin
    # Illustrative placeholder: any Windows image and command you want
    # to run against the host would go here.
    image: mcr.microsoft.com/windows/servercore:ltsc2022
    command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 86400"]

A detailed configuration guide can be found here.

This will completely change how we run Windows Kubernetes clusters, as administrative tasks can now be performed from within Kubernetes itself. For example: applying security patches, collecting event logs, installing Windows services, configuring keys and certificates, making networking changes, and much, much more!
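
To make that concrete, here is a purely illustrative example (using the placeholder pod above) of pulling recent system event log entries straight off the node:

kubectl exec windows-pod -- powershell.exe -Command "Get-WinEvent -LogName System -MaxEvents 5"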

For more information, check out the blog post or the documentation page.


Things to watch out for!

The legacy in-tree authentication plugins for Azure and GCP have finally been removed from kubectl and client-go! So if your package manager is being a bit proactive, you will be getting errors like this:

$ kubectl get pods
error: The gcp auth plugin has been removed.
Please use the "gke-gcloud-auth-plugin" kubectl/client-go credential plugin instead.
See https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke for further details
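
If you are on GKE, the fix described in the linked Google blog post is to install the external credential plugin and refresh your kubeconfig. From memory that looks roughly like this (cluster and region names are placeholders):

gcloud components install gke-gcloud-auth-plugin
gcloud container clusters get-credentials my-cluster --region europe-west1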

Also, remember that excitement, unfortunately, doesn't magically make alpha features stable in production!

Check out the other deprecations & removals before you start upgrading things.