Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Creating Reusable Kuma Installation YAML

Using CLI tools—instead of a “wall of YAML”—to install things onto Kubernetes is a growing trend, it seems. Istio and Cilium, for example, each have a CLI tool for installing their respective project. I get the reasons why; you can build logic into a CLI tool that you can’t build into a YAML file. Kuma, the open source service mesh maintained largely by Kong and a CNCF Sandbox project, takes a similar approach with its kumactl tool. In this post, however, I’d like to take a look at creating reusable YAML to install Kuma, instead of using the CLI tool every time you install.

You might be wondering, “Why?” That’s a fair question. Currently, the kumactl tool, unless configured otherwise, will generate a set of TLS assets to be used by Kuma (and embeds some of those assets in the YAML regardless of the configuration). Every time you run kumactl, it will generate a new set of TLS assets. This means that the command is not declarative, even if the output is. Unfortunately, you can’t reuse the output, as that would result in duplicate TLS assets across installations. That brings me to the point of this post: how can one create reusable YAML to install Kuma?

Fortunately, this is definitely possible. There are two parts to this process:

  1. Define replacement TLS assets using cert-manager.
  2. Modify the output of kumactl to reference the replacement TLS assets.

Defining TLS Assets

Instead of allowing kumactl to generate TLS assets every time the command is run, you need a way to be able to declaratively define what TLS assets are needed and what the properties of those assets should be. Fortunately, that’s exactly what the cert-manager project does!

Relying on cert-manager to handle TLS assets does mean that cert-manager becomes a dependency (or a prerequisite) for Kuma—it will have to be installed before Kuma can be installed.

To define the necessary TLS assets, you’ll use cert-manager to:

  1. Create a self-signed ClusterIssuer.
  2. Use the self-signed ClusterIssuer to issue a CA root certificate (and a corresponding Secret to store the private key).
  3. Configure the root CA certificate as an Issuer.
  4. Issue a TLS certificate and key that will be used by Kuma.

The root CA certificate definition could look something like this:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kuma-root-ca
  namespace: kuma-system
spec:
  isCA: true
  commonName: kuma-root-ca
  secretName: kuma-root-ca
  duration: 43800h # 5 years
  renewBefore: 720h # 30d
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - digital signature
    - key encipherment
    - cert sign
  issuerRef:
    name: selfsigned-issuer # References self-signed ClusterIssuer
    kind: ClusterIssuer
    group: cert-manager.io

Here’s an example of a cert-manager Certificate resource for the TLS certificate that Kuma would use:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kuma-tls-general
  namespace: kuma-system
spec:
  commonName: kuma-tls-general
  secretName: kuma-tls-general
  duration: 8760h # 1 year
  renewBefore: 360h # 15d
  subject:
    organizations:
      - kuma
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - digital signature
    - key encipherment
    - server auth
    - client auth
  dnsNames:
    - kuma-control-plane.kuma-system
    - kuma-control-plane.kuma-system.svc
  issuerRef:
    name: kuma-ca-issuer # References Issuer based on kuma-root-ca
    kind: Issuer
    group: cert-manager.io

Make note of the name and secretName values for both certificates; they will be needed later.

After all the TLS assets have been defined—they don’t need to be actually applied against the cluster, just defined—you’re ready to modify the installation YAML to make it reusable.

Creating Reusable Installation YAML

Once you’ve defined the TLS assets, then you can make the necessary changes to the YAML output of kumactl to make it reusable. Keep in mind, as described in the previous section, using cert-manager to manage TLS assets means that cert-manager becomes a dependency for Kuma (in other words, you’ll need to install cert-manager before you can install Kuma).

Begin by creating a starting point with kumactl and piping the output to a file:

kumactl install control-plane --tls-general-secret=kuma-tls-general \
--tls-general-ca-bundle=$(echo "blah") > kuma.yaml

For the --tls-general-secret parameter, you’re specifying the name of the Secret created by the general TLS certificate you defined earlier with cert-manager.

The file created by this command needs four changes made to it:

  1. The caBundle value supplied for all webhooks needs to be deleted (hence, the value you specify on the command line doesn’t matter).
  2. All webhooks need to be annotated for the cert-manager CA Injector to automatically inject the correct caBundle value.
  3. The “kuma-control-plane” Deployment needs to be modified to mount the root CA certificate’s Secret (created by cert-manager) as a volume.
  4. The “kuma-control-plane” Deployment needs to be changed to pass in a different value for the KUMA_RUNTIME_KUBERNETES_INJECTOR_CA_CERT_FILE environment variable (it should point to the ca.crt file on the volume added in step 3).

You could make these changes manually, but since we’re going for a declarative approach, why not use something like Kustomize?

To make the first change—removing the caBundle value embedded by kumactl—you could use this JSON 6902 patch:

[
    { "op": "remove", "path": "/webhooks/0/clientConfig/caBundle" },
    { "op": "remove", "path": "/webhooks/1/clientConfig/caBundle" },
    { "op": "remove", "path": "/webhooks/2/clientConfig/caBundle" }
]

To make the second change, you could use a JSON 6902 patch like this (the use of “kuma-root-ca” in the patch below refers to the name of the root CA Certificate resource defined earlier with cert-manager):

[
  { "op": "add",
    "path": "/metadata/annotations", 
    "value": 
      { "cert-manager.io/inject-ca-from": "kuma-system/kuma-root-ca" }
  }
]

These two changes enable you to remove the Base64-encoded copy of the CA certificate—referenced by Kuma’s webhooks—and have cert-manager’s CA Injector insert the correct value instead.

This JSON 6902 patch would handle the third change:

[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/0",
    "value": {
      "name": "general-ca-crt",
      "secret": {
        "secretName": "kuma-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/0",
    "value": {
      "name": "general-ca-crt",
      "mountPath": "/var/run/secrets/kuma.io/ca-cert",
      "readOnly": true
    }
  }
]

The Secret referenced in the first part of the patch above is the one created for the root CA Certificate resource, as specified in the secretName field of that Certificate’s manifest.

And, finally, the fourth change can be handled using this JSON 6902 patch:

[
  { "op": "replace",
    "path": "/spec/template/spec/containers/0/env/11/value",
    "value": "/var/run/secrets/kuma.io/ca-cert/ca.crt" }
]
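
To tie these four patches together declaratively, you could reference them from a kustomization.yaml along these lines. This is only a sketch: the patch file names are ones I made up, and the webhook configuration name is an assumption, so check the metadata.name values in the kuma.yaml generated by kumactl (and add an entry for each webhook configuration that carries a caBundle):

resources:
  - kuma.yaml

patchesJson6902:
  # Change 1: strip the embedded caBundle values from the webhooks
  - path: remove-cabundle.json
    target:
      group: admissionregistration.k8s.io
      version: v1
      kind: MutatingWebhookConfiguration
      name: kuma-admission-mutating-webhook-configuration # assumed name
  # Change 2: annotate the webhooks for the cert-manager CA Injector
  - path: add-cainjector-annotation.json
    target:
      group: admissionregistration.k8s.io
      version: v1
      kind: MutatingWebhookConfiguration
      name: kuma-admission-mutating-webhook-configuration # assumed name
  # Change 3: mount the root CA Secret as a volume in the Deployment
  - path: add-ca-volume.json
    target:
      group: apps
      version: v1
      kind: Deployment
      name: kuma-control-plane
      namespace: kuma-system
  # Change 4: point the CA certificate environment variable at the new volume mount
  - path: update-ca-env.json
    target:
      group: apps
      version: v1
      kind: Deployment
      name: kuma-control-plane
      namespace: kuma-system

Running kustomize build against the directory containing this kustomization.yaml then emits the fully modified manifest.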

I won’t walk through all of these changes in great detail, but I do want to take a moment to dive a bit deeper into the Secret mounted as an additional volume. Using the JSON 6902 patch above against the base YAML created using kumactl will result in a configuration that looks like this (focused only on volumes and volumeMounts in the Deployment, everything else is stripped away):

spec:
  template:
    spec:
      containers:
      - volumeMounts:
        - mountPath: /var/run/secrets/kuma.io/ca-cert
          name: general-ca-crt
          readOnly: true
        - mountPath: /var/run/secrets/kuma.io/tls-cert
          name: general-tls-cert
          readOnly: true
        - mountPath: /etc/kuma.io/kuma-control-plane
          name: kuma-control-plane-config
          readOnly: true
      volumes:
      - name: general-ca-crt
        secret:
          secretName: kuma-root-ca
      - name: general-tls-cert
        secret:
          secretName: kuma-tls-general
      - configMap:
          name: kuma-control-plane-config
        name: kuma-control-plane-config

The “general-ca-crt” volume (and the kuma-root-ca Secret it references) is what’s been added by the Kustomize patch. Why? When generating the YAML output, kumactl combines three resources—the TLS certificate, the TLS key, and the CA certificate—into a single Secret. However, cert-manager won’t create a Secret like that. So, to avoid an additional manual step, you mount the Secret created by cert-manager for the root CA certificate as a separate volume. This lets you point the environment variable that tells the control plane where the CA certificate is located at the new path where that volume is mounted.

After the four changes are complete, the resulting modified YAML is completely reusable.

Using the Reusable YAML

To actually use the reusable YAML you’ve just created with the above steps:

  1. Apply the cert-manager TLS definitions to the cluster where you want to install Kuma. This will create all the necessary Certificate resources and Secrets. (Obviously cert-manager will need to be installed already.)
  2. Apply the Kuma YAML created by the steps above. It’s configured to reference the cert-manager assets created in step 1.
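
In command form, with hypothetical file and directory names, that boils down to something like this:

kubectl apply -f kuma-tls-assets.yaml # cert-manager issuers and certificates
kustomize build kuma-install/ | kubectl apply -f - # the modified Kuma YAML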

That’s it.

Caveats/Limitations

The process and changes outlined in this post only apply to a standalone (single zone) installation. It’s absolutely possible to do this for multizone deployments, too, but I’ll leave that as an exercise for the reader.

Additional Resources

I’ve recently published two GitHub repositories with content that supports what’s described in this post:

  • The “kuma-cert-manager” repository outlines how to replace kumactl-generated TLS assets with resources from cert-manager. This supports the “Defining TLS Assets” section above. Full examples of all the related cert-manager resources are found in this repository.
  • The “kuma-declarative-install” repository builds on the previous repository by showing the additional changes that must be made to the YAML output generated by kumactl. This supports the “Creating Reusable Installation YAML” section above. This includes a Kustomize configuration that will automate all the changes necessary to make the YAML reusable.

I hope this information is useful. If you have questions, feel free to find me on the Kuma community Slack or contact me on Twitter (my DMs are open).

Using the External AWS Cloud Provider for Kubernetes

In 2018, after finding a dearth of information on setting up Kubernetes with AWS integration/support, I set out to try to establish some level of documentation on this topic. That effort resulted in a few different blog posts, but ultimately culminated in this post on setting up an AWS-integrated Kubernetes cluster using kubeadm. Although originally written for Kubernetes 1.15, the process described in that post is still accurate for newer versions of Kubernetes. With the release of Kubernetes 1.22, though, the in-tree AWS cloud provider—which is what is used/described in the post linked above—has been deprecated in favor of the external cloud provider. In this post, I’ll show how to set up an AWS-integrated Kubernetes cluster using the external AWS cloud provider.

In addition to the post I linked above, there were a number of other articles I published on this topic:

Most of the information in these posts, if not all of it, is found in the latest iteration, but I wanted to include these links here for some additional context. Also, all of these focus on the now-deprecated in-tree AWS cloud provider.

Although all of these prior posts focus on the in-tree provider, they are helpful because many of the same prerequisites/requirements for the in-tree provider are still—as far as I know—applicable for the external AWS cloud provider:

  1. The hostname of each node must match the EC2 Private DNS entry for the instance (by default, this is something like ip-10-11-12-13.us-west-2.compute.internal or similar). Note that I haven’t explicitly tested/verified this requirement in a while, so it’s possible that this has changed. As soon as I am able, I’ll conduct some additional testing and update this post.
  2. Each node needs to have an IAM instance profile that grants it access to an IAM role and policy with permissions to the AWS API.
  3. Specific resources used by the cluster must have certain AWS tags assigned to them. As with the hostname requirement, this is an area where I haven’t done extensive testing of the external cloud provider against the in-tree provider.
  4. Specific entries are needed in the kubeadm configuration file used to bootstrap the cluster, join control plane nodes, and join worker nodes.

The following sections describe each of these four areas in a bit more detail.

Setting Node Hostnames

Based on my testing—see my disclaimer in #1 above—the hostname for the OS needs to match the EC2 Private DNS entry for that particular instance. By default, this is typically something like ip-10-11-12-13.us-west-2.compute.internal (change the numbers and the region to appropriately reflect the private IP address and region of the instance, and be aware that the us-east-1 AWS region uses the ec2.internal DNS suffix). The fastest/easiest way I’ve found to make sure this is the case is with this command:

sudo hostnamectl set-hostname \
$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)

Be sure to set the hostname before starting the bootstrapping process. I have seen some references to putting this command in the user data for the instance so that it runs automatically. I have not, however, specifically tested this approach.
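
For example, a user data script along these lines should do the trick, though (as noted) I haven’t specifically tested it:

#!/usr/bin/env bash
# Untested sketch: set the hostname from EC2 instance metadata at boot
hostnamectl set-hostname \
"$(curl -s http://169.254.169.254/latest/meta-data/local-hostname)"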

Creating and Assigning the IAM Instance Profile

The nodes in the cluster need permissions to call the AWS APIs in order for the AWS cloud provider to function properly. The “Prerequisites” page on the Kubernetes AWS Cloud Provider site has a sample policy for both control plane nodes and worker nodes. Consider these sample policies to be starting points; test and modify them to make sure they work for your specific implementation. Once you create IAM instance profiles that reference the appropriate roles and policies, be sure to specify the IAM instance profile when launching your instances. All the major IaC tools (including both Pulumi and Terraform) have support for specifying the IAM instance profile in code.
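
If you’re launching instances with the AWS CLI instead, the --iam-instance-profile flag is the relevant piece; the values below are just placeholders:

aws ec2 run-instances \
--image-id ami-0123456789abcdef0 \
--instance-type t3.large \
--subnet-id subnet-0123456789abcdef0 \
--iam-instance-profile Name=k8s-node-instance-profile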

Tagging Cluster Resources

While the documentation for the cloud provider is improving, this is one area that could still use some additional work. The “Getting Started” page on the Kubernetes AWS Cloud Provider site only says this about tags:

Add the tag kubernetes.io/cluster/<your_cluster_id>: owned (if resources are owned and managed by the cluster) or kubernetes.io/cluster/<your_cluster_id>: shared (if resources are shared between clusters, and should not be destroyed if the cluster is destroyed) to your instances.

Based on my knowledge of the in-tree provider and the testing I’ve done with the external provider, this is correct. However, additional tags are typically needed:

  • Public (Internet-facing) subnets need a kubernetes.io/role/elb: 1 tag, while private subnets need a kubernetes.io/role/internal-elb: 1 tag.
  • All subnets need the kubernetes.io/cluster/<your_cluster_id>: owned|shared tag.
  • If the cloud controller manager isn’t started with --configure-cloud-routes=false, then the route tables also need to be tagged like the subnets.
  • At least one security group—one which the nodes should be a member of—needs the kubernetes.io/cluster/<your_cluster_id>: owned|shared tag.
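
You can apply these tags through your IaC tooling or directly with the AWS CLI; here’s a sketch of the latter, with placeholder resource IDs and a cluster ID of “foo”:

aws ec2 create-tags \
--resources subnet-0123456789abcdef0 sg-0123456789abcdef0 \
--tags Key=kubernetes.io/cluster/foo,Value=owned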

Failure to have things properly tagged results in odd failure modes: for example, ELBs being automatically created in response to the creation of a Service object of type LoadBalancer, but instances never being populated for the ELB. Another failure I’ve seen is the kubelet failing to start if the nodes aren’t properly tagged. Unfortunately, the failure modes of the external cloud provider aren’t any better documented than those of the in-tree provider, which can make troubleshooting a bit challenging.

Using Kubeadm Configuration Files

The final piece is adding the correct values to your kubeadm configuration files so that the cluster is bootstrapped properly. I tested the configurations shown below using Kubernetes 1.22.

Three different configuration files are needed:

  1. A configuration file to be used to bootstrap the first control plane node
  2. A configuration file used to join any additional control plane nodes
  3. A configuration file used to join worker nodes

I’ll begin with the natural starting point: the configuration file for bootstrapping the first/initial control plane node.

Bootstrapping the First Control Plane Node

A kubeadm configuration file you could use to bootstrap your first control plane node with the external AWS cloud provider might look something like this:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: external
clusterName: foo
controllerManager:
  extraArgs:
    cloud-provider: external
kubernetesVersion: v1.22.2 # can use "stable"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  name: ip-10-11-12-13.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external

Note that this does not take into account configuration settings related to setting up a highly-available control plane; refer to the kubeadm v1beta3 API docs for details on what additional settings are needed. (For sure the controlPlaneEndpoint field should be added, but there may be additional settings that are necessary for your specific environment.)

The big change from previous kubeadm configurations I’ve shared is that cloud-provider: aws is now cloud-provider: external. Otherwise, the configuration remains largely unchanged. Note the absence of the configure-cloud-routes setting; it has moved to the AWS cloud controller manager itself.

After you’ve bootstrapped the first control plane node (using kubeadm init --config <filename>.yaml) but before you add any other nodes—control plane or otherwise—you’ll need to install the AWS cloud controller manager. Manifests are available, but you’ll need to use kustomize to build them out:

kustomize build 'github.com/kubernetes/cloud-provider-aws/manifests/overlays/superset-role/?ref=master'

Review the output (to ensure the values supplied are correct for your environment), then send the results to your cluster by piping them into kubectl apply -f -.
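
Put together, that looks like this:

kustomize build \
'github.com/kubernetes/cloud-provider-aws/manifests/overlays/superset-role/?ref=master' \
| kubectl apply -f -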

You’ll also want to go ahead and install the CNI plugin of your choice.

Adding More Control Plane Nodes

If you are building a highly-available control plane, then a kubeadm configuration similar to the one shown below would work with the external AWS cloud provider:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: 123456.a4v4ii39rupz51j3
    apiServerEndpoint: "cp-lb.us-west-2.elb.amazonaws.com:6443"
    caCertHashes: ["sha256:193feed98fb5fd2b497472fb7d9553414e27ff7eeb7b919c82ff3a08fdf5782f"]
nodeRegistration:
  name: ip-10-14-18-22.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 10.14.18.22
  certificateKey: "f6fcb672782d6f0581a1060cf135920acde6736ef12562ddbdc4515d1315b518"

You’d want to adjust the values for token, apiServerEndpoint, caCertHashes, and certificateKey as appropriate based on the output of kubeadm init when bootstrapping the first control plane node. Also, refer to the “Adding More Control Plane Nodes” section of the previous post for a few notes regarding tokens, the SHA256 hash, and the certificate encryption key (there are ways to recover/recreate this information if you don’t have it).
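
If you need to recreate any of those values, here’s a sketch of the commands typically used (run them on an existing control plane node):

# Generate a new bootstrap token
kubeadm token create

# Re-upload control plane certificates and print a new certificate key
sudo kubeadm init phase upload-certs --upload-certs

# Compute the SHA256 hash of the cluster CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'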

Use your final configuration file with kubeadm join --config <filename>.yaml to join the cluster as an additional control plane node.

Adding Worker Nodes

The final step is to add worker nodes. You’d do this with kubeadm join --config <filename>.yaml, where the specified YAML file might look something like this:

---
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: 123456.a4v4ii39rupz51j3
    apiServerEndpoint: "cp-lb.us-west-2.elb.amazonaws.com:6443"
    caCertHashes:
      - "sha256:193feed98fb5fd2b497472fb7d9553414e27ff7eeb7b919c82ff3a08fdf5782f"
nodeRegistration:
  name: ip-10-12-14-16.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: external

As noted earlier, be sure to specify a correct and valid bootstrap token and the SHA256 hash of the CA certificate.

Wrapping Up

At this point, you should have a (mostly) functional Kubernetes cluster. You’ll probably still want some sort of storage solution; see here for more details on the AWS EBS CSI driver.

If you run into problems or issues getting this to work, please feel free to reach out to me. You can find me on the Kubernetes Slack community, or you can contact me on Twitter (DMs are open). Also, if you’re well-versed in this area and have corrections, clarifications, or suggestions for how I can improve this article, I welcome all constructive feedback. Thanks!

Kustomize Transformer Configurations for Cluster API v1beta1

Combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months. I first wrote about it in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make it easier to use kustomize with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).

In the v1alpha2 post (the first post on modifying kustomize transformer configurations), I mentioned that changes were needed to the NameReference and CommonLabels transformers. In the v1alpha3 post, I mentioned that the changes to the CommonLabels transformer became largely optional; if you are planning on adding additional labels to MachineDeployments, then the change to the CommonLabels transformer is required, but otherwise you could probably get by without it.

For v1beta1, the necessary changes are very similar to v1alpha3, and (for the most part) are focused on the NameReference transformer. The NameReference transformer tracks references between objects, so that if the name of an object changes—perhaps due to use of the namePrefix or nameSuffix directives in the kustomization.yaml file—references to that object are also appropriately renamed.

Here are the CAPI-related changes needed for the NameReference transformer:

- kind: Cluster
  group: cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/clusterName
    kind: MachineDeployment
  - path: spec/template/spec/clusterName
    kind: MachineDeployment

- kind: AWSCluster
  group: infrastructure.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/infrastructureRef/name
    kind: Cluster

- kind: KubeadmControlPlane
  group: controlplane.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/controlPlaneRef/name
    kind: Cluster

- kind: AWSMachine
  group: infrastructure.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/infrastructureRef/name
    kind: Machine

- kind: KubeadmConfig
  group: bootstrap.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/bootstrap/configRef/name
    kind: Machine

- kind: AWSMachineTemplate
  group: infrastructure.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/template/spec/infrastructureRef/name
    kind: MachineDeployment
  - path: spec/machineTemplate/infrastructureRef/name
    kind: KubeadmControlPlane

- kind: KubeadmConfigTemplate
  group: bootstrap.cluster.x-k8s.io
  version: v1beta1
  fieldSpecs:
  - path: spec/template/spec/bootstrap/configRef/name
    kind: MachineDeployment

Generally, you’d append this content to the default NameReference transformer configuration, which you’d obtain using kustomize config save. However, somewhere in the Kustomize 3.8.4 release timeframe, the kustomize config save command for extracting the default transformer configurations was removed, and I have yet to figure out another way of getting this information. In theory, when using kustomize with CAPI manifests, you wouldn’t need any of the default NameReference transformer configurations, but I haven’t conducted any thorough testing of that theory (yet).

Aside from replacing all instances of v1alpha3 with v1beta1, the only other difference in the YAML shown above compared to the YAML in the v1alpha3 post is a change to the fieldSpecs list for AWSMachineTemplate. Previously, the KubeadmControlPlane referenced an underlying AWSMachineTemplate at the path spec/infrastructureTemplate/name. In v1beta1, the KubeadmControlPlane object now references an AWSMachineTemplate at the path spec/machineTemplate/infrastructureRef/name.

As mentioned in both of the previous posts, you’ll need to put this content in a file (I use namereference.yaml) and then specify the path to this configuration in kustomization.yaml, like this:

configurations:
  - /path/to/customized/namereference.yaml
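
For context, the configurations directive typically sits alongside the namePrefix (or nameSuffix) directive that makes the NameReference transformer matter in the first place. A rough sketch of a fuller kustomization.yaml, with a hypothetical resource file name, might look like this:

namePrefix: dev-

resources:
  - capi-cluster.yaml # hypothetical base CAPI manifest

configurations:
  - namereference.yaml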

I hope this information is useful to readers. Feel free to find me on the Kubernetes Slack instance if you have questions, and I’ll do my best to help answer them. You’re also welcome to contact me on Twitter (DMs are open). Thanks!

Technology Short Take 146

Welcome to Technology Short Take #146! Over the last couple of weeks, I’ve gathered a few technology-related links for you all. There’s some networking stuff, a few security links, and even a hardware-related article. But enough with the introduction—let’s get into the content!

Networking

Servers/Hardware

  • Chris Mellor speculates that Cisco UCS may be on the way out; Kevin Houston responds with an “I don’t think so.” Who will be correct? I guess we will just have to wait and see.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Storage

  • Cloudflare recently introduced its own object storage offering, announced in this blog post. Cloudflare’s offering, called R2, offers an S3-compatible API and no egress fees, among other features.

Virtualization

Although this Tech Short Take is a tad shorter than usual, I still hope that you found something useful in here. Feel free to hit me up on Twitter if you have any feedback. Enjoy!

Installing Cilium via a ClusterResourceSet

In this post, I’m going to walk you through how to install Cilium onto a Cluster API-managed workload cluster using a ClusterResourceSet. It’s reasonable to consider this post a follow-up to my earlier post that walked you through using a ClusterResourceSet to install Calico. There’s no need to read the earlier post, though, as this post includes all the information (or links to the information) you need. Ready? Let’s jump in!

Prerequisites

If you aren’t already familiar with Cluster API—hereafter just referred to as CAPI—then I would encourage you to read my introduction to Cluster API post. Although it is a bit dated (it was written in the very early days of the project, which recently released version 1.0) and some of the commands referenced in that post have changed, the underlying concepts remain valid. If you’re not familiar with Cilium, check out their introduction to the project for more information. Finally, if you’re not familiar at all with the idea of ClusterResourceSets, you can read my earlier post or check out the ClusterResourceSet CAEP document.

Installing Cilium via a ClusterResourceSet

If you want to install Cilium via a ClusterResourceSet, the process looks something like this:

  1. Create a ConfigMap with the instructions for installing Cilium.
  2. Create a ClusterResourceSet that references the ConfigMap.
  3. Profit (when you deploy matching workload clusters).

Let’s look at these steps in a bit more detail.

Creating the Installation ConfigMap

The Cilium docs generally recommend the use of the cilium CLI tool to install Cilium. The reasoning behind this is, as I understand it, that the cilium CLI tool can interrogate the Kubernetes cluster to gather information and then attempt to pick the best configuration options for you. Using Helm is another option recommended by the docs. For our purposes, however, neither of those approaches will work—using a ClusterResourceSet means you need to be able to supply YAML manifests.

Fortunately, the fact that Cilium supports Helm gives us a path forward via the use of helm template to render the templates locally. As per the docs on helm template, there are some caveats/considerations, but this was the only way I found to create YAML manifests for installing Cilium.

So, the first step to creating the ConfigMap you need is to set up the Helm repository:

helm repo add cilium https://helm.cilium.io

Then render the templates locally:

helm template cilium cilium/cilium --version 1.10.4 \
--namespace kube-system > cilium-1.10.4.yaml

You may need to specify additional options/values in the above command to accommodate your specific environment or requirements, of course.
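
For instance, to override a chart value you might do something like this (the value shown is purely illustrative, not a requirement):

helm template cilium cilium/cilium --version 1.10.4 \
--namespace kube-system \
--set ipam.mode=kubernetes > cilium-1.10.4.yaml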

Once you have the templates rendered, then create the ConfigMap that the ClusterResourceSet needs:

kubectl create configmap cilium-crs-cm --from-file=cilium-1.10.4.yaml

This ConfigMap should be created on the appropriate CAPI management cluster, so ensure your Kubernetes context is set correctly.
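
Since the ConfigMap needs to land on the management cluster, it may be worth double-checking the active context first; the context name below is just a placeholder:

kubectl config current-context # verify you are pointed at the CAPI management cluster
kubectl config use-context capi-mgmt # switch if needed (placeholder name)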

Creating the ClusterResourceSet

Now you’re ready to create the ClusterResourceSet. Here’s an example you could use as a starting point:

---
apiVersion: addons.cluster.x-k8s.io/v1alpha4
kind: ClusterResourceSet
metadata:
  name: cilium-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: cilium 
  resources:
  - kind: ConfigMap
    name: cilium-crs-cm

You can see that the ClusterResourceSet references the ConfigMap, which in turn contains the YAML to install Cilium (and that’s the YAML applied against matching workload clusters).

Deploy a Matching Workload Cluster

What determines a matching workload cluster? The clusterSelector portion of the ClusterResourceSet. In the example above, the ClusterResourceSet’s clusterSelector specifies that workload clusters should have the cni: cilium label attached.

The label should be part of CAPI’s Cluster object, like this:

---
apiVersion: cluster.x-k8s.io/v1alpha4
kind: Cluster
metadata:
  name: cilium-cluster
  namespace: cilium-test
  labels:
    cni: cilium

When CAPI creates a workload cluster with that label and value, as shown in the example CAPI manifest above, then the ClusterResourceSet will automatically apply the contents of the ConfigMap against the cluster after it has been fully provisioned. The result: Cilium gets installed automatically on the new workload cluster!

I hope this post is useful. If you have any questions or any feedback, I’d love to hear from you! Feel free to find me on Twitter, or connect with me on the Kubernetes Slack instance. For more Cilium information and assistance, I’d encourage you to check out the Cilium Slack community.

Recent Posts

Technology Short Take 145

Welcome to Technology Short Take #145! What will you find in this Tech Short Take? Well, let’s see…stuff on Envoy, network automation, network designs, M1 chips (and potential open source variants!), a bevy of security articles (including a couple on very severe vulnerabilities), Kubernetes, AWS IAM, and so much more! I hope that you find something useful here. Enjoy!

Read more...

Technology Short Take 144

Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in topics like Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!

Read more...

Establishing VPC Peering with Pulumi and Go

I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this may pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.

Read more...

Using the AWS CLI to Tag Groups of AWS Resources

To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.

Read more...

Technology Short Take 143

Welcome to Technology Short Take #143! I have what I think is an interesting list of links to share with you this time around. Since taking my new job at Kong, I’ve been spending more time with Envoy, so you’ll see some Envoy-related content showing up in this Technology Short Take. I hope this collection of links has something useful for you!

Read more...

Starting WireGuard Interfaces Automatically with Launchd on macOS

In late June of this year, I wrote a piece on using WireGuard on macOS via the CLI, where I walked readers using macOS through how to configure and use the WireGuard VPN from the terminal (as opposed to using the GUI client, which I discussed here). In that post, I briefly mentioned that I was planning to explore how to have macOS' launchd automatically start WireGuard interfaces. In this post, I’ll show you how to do exactly that.

Read more...

An Alternate Approach to etcd Certificate Generation with Kubeadm

I’ve written a fair amount about kubeadm, which was my preferred way of bootstrapping Kubernetes clusters until Cluster API arrived. Along the way, I’ve also discussed using kubeadm to assist with setting up etcd, the distributed key-value store leveraged by the Kubernetes control plane (see here, here, and here). In this post, I’d like to revisit the topic of using kubeadm to set up an etcd cluster once again, this time taking a look at an alternate approach to generating the necessary TLS certificates than what the official documentation describes.

Read more...

Technology Short Take 142

Welcome to Technology Short Take #142! This time around, the Networking section is a bit light, but I’ve got plenty of cloud computing links and articles for you to enjoy, along with some stuff on OSes and applications, programming, and soft skills. Hopefully there’s something useful here for you!

Read more...

Adding Multiple Items Using Kustomize JSON 6902 Patches

Recently, I needed to deploy a Kubernetes cluster via Cluster API (CAPI) into a pre-existing AWS VPC. As I outlined in this post from September 2019, this entails modifying the CAPI manifest to include the VPC ID and any associated subnet IDs, as well as referencing existing security groups where needed. I knew that I could use the kustomize tool to make these changes in a declarative way, as I’d explored using kustomize with Cluster API manifests some time ago. This time, though, I needed to add a list of items, not just modify an existing value. In this post, I’ll show you how I used a JSON 6902 patch with kustomize to add a list of items to a CAPI manifest.

Read more...

Using WireGuard on macOS via the CLI

I’ve written a few different posts on WireGuard, the “simple yet fast and modern VPN” (as described by the WireGuard web site) that aims to supplant tools like IPSec and OpenVPN. My first post on WireGuard showed how to configure WireGuard on Linux, both on the client side as well as on the server side. After that, I followed it up with posts on using the GUI WireGuard app to configure WireGuard on macOS and—most recently—making WireGuard from Homebrew work on an M1-based Mac. In this post, I’m going to take a look at using WireGuard on macOS again, but this time via the CLI.

Read more...

Installing Older Versions of Kumactl on an M1 Mac

The Kuma community recently released version 1.2.0 of the open source Kuma service mesh, and along with it a corresponding version of kumactl, the command-line utility for interacting with Kuma. To make it easy for macOS users to get kumactl, the Kuma community maintains a Homebrew formula for the CLI utility. That includes providing M1-native (ARM64) macOS binaries for kumactl. Unfortunately, installing an earlier version of kumactl on an M1-based Mac using Homebrew is somewhat less than ideal. Here’s one way—probably not the only way—to work around some of the challenges.

Read more...

Making WireGuard from Homebrew Work on an M1 Mac

After writing the post on using WireGuard on macOS (using the official WireGuard GUI app from the Mac App Store), I found the GUI app’s behavior to be less than ideal. For example, tunnels marked as on-demand would later show up as no longer configured as an on-demand tunnel. When I decided to set up WireGuard on my M1-based MacBook Pro (see my review of the M1 MacBook Pro), I didn’t want to use the GUI app. Fortunately, Homebrew has formulas for WireGuard. Unfortunately, the WireGuard tools as installed by Homebrew on an M1-based Mac won’t work. Here’s how to fix that.

Read more...

Kubernetes Port Names and Terminating HTTPS Traffic on AWS

I recently came across something that wasn’t immediately intuitive with regard to terminating HTTPS traffic on an AWS Elastic Load Balancer (ELB) when using Kubernetes on AWS. At least, it wasn’t intuitive to me, and I’m guessing that it may not be intuitive to some other readers as well. Kudos to my teammates Hart Hoover and Brent Yarger for identifying the resolution, which I’m going to call out in this post.

Read more...

Technology Short Take 141

Welcome to Technology Short Take #141! This is the first Technology Short Take compiled, written, and published entirely on my M1-based MacBook Pro (see my review here). The collection of links shared below covers a fairly wide range of topics, from old Sun hardware to working with serverless frameworks in the public cloud. I hope that you find something useful here. Enjoy!

Read more...

Review: Logitech Ergo K860 Ergonomic Keyboard

As part of an ongoing effort to refine my work environment, several months ago I switched to a Logitech Ergo K860 ergonomic keyboard. While I’m not a “keyboard snob,” I am somewhat particular about the feel of my keyboard, so I wasn’t sure how I would like the K860. In this post, I’ll provide my feedback, and provide some information on how well the keyboard works with both Linux and macOS.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!