
Deploying a CNI Automatically with a ClusterResourceSet

Not too long ago I hosted an episode of TGIK8s, during which I explored some features of Cluster API. One of the things I covered on the show was ClusterResourceSet, an experimental feature that allows users to automatically install additional components onto workload clusters when those clusters are provisioned. In this post, I’ll show how to deploy a CNI plugin automatically using a ClusterResourceSet.

A lot of this post is inspired by a similar post on installing Calico using a ClusterResourceSet. Although that post is for vSphere and this one focuses on AWS, much of the infrastructure differences are abstracted away by Kubernetes and Cluster API.

At a high level, using ClusterResourceSet to install a CNI plugin automatically looks like this:

  1. Make sure experimental features are enabled on your CAPI management cluster.
  2. Create a ConfigMap that contains the information to deploy the CNI plugin.
  3. Create a ClusterResourceSet that references the ConfigMap.
  4. Deploy one or more workload clusters that match the cluster selector specified in the ClusterResourceSet.

The sections below describe each of these steps in more detail.

Enabling Experimental Features

The preferred way to enable experimental features on your management cluster is to use a setting in the clusterctl configuration file or its environment variable equivalent before you initialize the management cluster. Specifically, putting EXP_CLUSTER_RESOURCE_SET: "true" in the clusterctl configuration file or using export EXP_CLUSTER_RESOURCE_SET=true before initializing the management cluster with clusterctl init will enable the ClusterResourceSet functionality.
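For example, assuming clusterctl’s default configuration file location (~/.cluster-api/clusterctl.yaml), either approach might look like this (a quick sketch; adjust the infrastructure provider to match your environment):

# Option 1: add the setting to the clusterctl configuration file
echo 'EXP_CLUSTER_RESOURCE_SET: "true"' >> ~/.cluster-api/clusterctl.yaml

# Option 2: use the environment variable equivalent
export EXP_CLUSTER_RESOURCE_SET=true

# Then initialize the management cluster as usual
clusterctl init --infrastructure aws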

But what if your management cluster has already been initialized? It is possible to enable the functionality by editing a couple of the CAPI-related Deployments on your management cluster. Specifically, you’ll need to edit the following Deployments:

  • The “capi-controller-manager” Deployment in the “capi-system” namespace
  • The “capi-controller-manager” Deployment in the “capi-webhook-system” namespace

In both cases, the edit is the same: you’ll want to edit the --feature-gates flag to specify “true” for ClusterResourceSet. For example, before editing the “capi-controller-manager” Deployment in the “capi-system” namespace, running kubectl -n capi-system get deployment capi-controller-manager -o yaml would show this for the command-line parameters for the “manager” container:

- args:
  - --metrics-addr=127.0.0.1:8080
  - --enable-leader-election
  - --feature-gates=MachinePool=false,ClusterResourceSet=false
  command:
  - /manager

After editing, it should look like this:

- args:
  - --metrics-addr=127.0.0.1:8080
  - --enable-leader-election
  - --feature-gates=MachinePool=false,ClusterResourceSet=true
  command:
  - /manager

Editing the Deployments will cause Kubernetes to automatically roll out new Pods running with the updated command-line flags.
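For reference, here’s a sketch of making these edits interactively:

kubectl -n capi-system edit deployment capi-controller-manager
kubectl -n capi-webhook-system edit deployment capi-controller-manager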


Once the functionality is enabled, then you’re ready to start creating the various components needed to use ClusterResourceSets. The first component you’ll need to create is a ConfigMap.

Create the ConfigMap for the CNI Plugin

Most CNI plugins provide a YAML manifest that will define the CustomResourceDefinitions (CRDs), controllers, and Pods/DaemonSets/Deployments that are necessary for the CNI plugin to function correctly. To enable a ClusterResourceSet to install the CNI plugin for you when provisioning a workload cluster, you’ll need to take that installation manifest and place it into a ConfigMap on the management cluster. The ClusterResourceSet, which you’ll create in the next section, will then reference this ConfigMap.

To create a ConfigMap that contains the YAML manifest to install Calico, you’d first download the desired version of the Calico manifest (we’ll assume you call the downloaded manifest calico.yaml) and then run this command against the management cluster:

kubectl create configmap calico-crs-configmap --from-file=calico.yaml

Make a note of the name you use (this example calls the ConfigMap “calico-crs-configmap”), as you’ll need it in the next step when you create the ClusterResourceSet itself.
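Putting the whole step together, it might look something like this (the manifest URL is illustrative; download whatever version of the Calico manifest is appropriate for your environment):

# Download the Calico installation manifest (URL/version illustrative)
curl -LO https://docs.projectcalico.org/manifests/calico.yaml

# Wrap the manifest in a ConfigMap on the management cluster
kubectl create configmap calico-crs-configmap --from-file=calico.yaml

# Verify the ConfigMap was created
kubectl get configmap calico-crs-configmap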

Create the ClusterResourceSet

Next up is creating the ClusterResourceSet itself. Here’s an example ClusterResourceSet that could be used to install a CNI plugin onto (or into) a workload cluster:

---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: calico-crs
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: calico 
  resources:
  - kind: ConfigMap
    name: calico-crs-configmap

The two key things to note here are the clusterSelector and the resources fields. The clusterSelector field controls how Cluster API will match this ClusterResourceSet against one or more workload clusters. In this case, I’m using a matchLabels approach where the ClusterResourceSet will be applied to all workload clusters that have the cni: calico label present. If the label isn’t present, the ClusterResourceSet won’t apply to that workload cluster.
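As far as I know, the label doesn’t have to be present at creation time, either; adding it to an existing Cluster object on the management cluster should also cause the ClusterResourceSet to match that cluster (the cluster name here is illustrative):

kubectl label cluster workload-cluster-1 cni=calico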

The resources field references the ConfigMap created in the previous step, which in turn contains the manifest for installing the CNI plugin. Note that a ClusterResourceSet can reference multiple resources; only a single resource is specified in this example. If you do specify multiple resources, keep in mind that all of the specified resources will be applied to every workload cluster that matches the cluster selector. If you need more granularity or flexibility, use separate ClusterResourceSets for each resource.

The ClusterResourceSet is defined on the management cluster, so once you’ve created the YAML manifest you’d use kubectl apply to apply it against the management cluster:

kubectl apply -f calico-crs.yaml
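A quick check will confirm the ClusterResourceSet was created:

kubectl get clusterresourceset calico-crs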

All the setup is now completed, and you’re ready to use the ClusterResourceSet with your workload cluster(s).

Deploy a Workload Cluster

With the appropriate resources in place (in the form of ConfigMaps that encapsulate the desired YAML manifests) and the ClusterResourceSet defined, you’re ready to deploy a workload cluster and have the ClusterResourceSet automatically install the specified resources—in this case, the CNI plugin.

Use clusterctl config cluster to generate the YAML manifest for a workload cluster, then edit the resulting output to include the “cni: calico” label on the Cluster object, like this:

---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: workload-cluster-1
  namespace: default
  labels:
    cni: calico

Apply the workload cluster manifest using kubectl apply -f <filename.yaml>, then sit back and watch Cluster API go to work. After a few minutes (the exact timing depends on your provider and your specific configuration), you should see the workload cluster nodes reach “Ready” status with no further intervention on your part, meaning that the CNI plugin was installed successfully! (You can check node status after you grab the kubeconfig with clusterctl get kubeconfig.)
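The verification steps might look something like this (the cluster name is illustrative):

# Retrieve the kubeconfig for the new workload cluster
clusterctl get kubeconfig workload-cluster-1 > workload-cluster-1.kubeconfig

# Nodes should reach "Ready" once the CNI plugin is installed
kubectl --kubeconfig workload-cluster-1.kubeconfig get nodes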

Additional Resources

In learning about ClusterResourceSets, I found reading the ClusterResourceSet CAEP to be helpful.

I hope this post is useful. If you have any questions, comments, or suggestions for improvement, I’d love to hear from you. You can find me on the Kubernetes Slack instance, or contact me on Twitter. Thanks!

Setting up Wireguard for AWS VPC Access

Seeking more streamlined access to AWS EC2 instances on private subnets, I recently implemented Wireguard for VPN access. Wireguard, if you’re not familiar, is a relatively new solution that is baked into recent Linux kernels. (There is also support for other OSes.) In this post, I’ll share what I learned in setting up Wireguard for VPN access to my AWS environments.

Since the configuration of the clients and the servers is largely the same (especially since both client and server are Linux), I haven’t separated out the two configurations. At a high level, the process looks like this:

  1. Installing any necessary packages/software
  2. Generating Wireguard private and public keys
  3. Modifying the AWS environment to allow Wireguard traffic
  4. Setting up the Wireguard interface(s)
  5. Activating the VPN

The first thing to do, naturally, is install the necessary software.

Installing Packages/Software

On recent versions of Linux—I’m using Fedora (32 and 33) and Ubuntu 20.04—kernel support for Wireguard ships with the distribution. All that’s needed is to install the necessary userspace tools.

On Fedora, that’s done with dnf install wireguard-tools. On Ubuntu, the command is apt install wireguard-tools. (You can also install the wireguard meta-package, if you’d prefer.)

This page on the Wireguard site has full instructions for a variety of operating systems. macOS, for example, has an app in the App Store for Wireguard support.

Once the necessary Wireguard software is installed, then it’s time to start with the configuration of Wireguard. From here forward, I’ll focus only on Linux, as the instructions will vary fairly widely from OS to OS.

Generating Private and Public Keys

This step must be done on both sides of the connection. The installation of the “wireguard-tools” package provides a wg binary that you can use to generate the necessary keys. The steps below will generate a public and private key for you.

  1. Become root using sudo su -.
  2. Switch to the /etc/wireguard directory.
  3. Run wg genkey | tee privatekey | wg pubkey > publickey. This creates the public and private keys used by Wireguard.
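Put together, the whole sequence looks like this (the umask line is my addition; it keeps the key files from being readable by other users):

sudo su -
cd /etc/wireguard
umask 077   # restrict permissions on the key files about to be created
wg genkey | tee privatekey | wg pubkey > publickey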

With the keys now generated, you’re ready to move on to modifying the AWS environment to allow Wireguard traffic.

Modifying the AWS Environment

By default, Wireguard uses UDP port 51820 as the listening port for the Wireguard interface. If you want or need to use multiple Wireguard interfaces on the same instance, each interface will need its own listening port. Modify the security group(s) to allow UDP port 51820 to the instance(s) that will have Wireguard interfaces defined.
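Using the AWS CLI, the security group change might look something like this (the group ID and source CIDR are placeholders):

aws ec2 authorize-security-group-ingress --group-id <security-group-id> \
  --protocol udp --port 51820 --cidr <source-cidr>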

Additionally, if you are going to route traffic through the VPN instance instead of masquerading it (using network address translation), then you’ll need to disable the source/destination check for the VPN instance. This can be accomplished fairly easily using the AWS CLI:

aws ec2 modify-instance-attribute --no-source-dest-check --instance-id <instance-id>

Setting up the Wireguard Interfaces

There are a couple different ways (at least) to set up the Wireguard interfaces. I’ll show you how to do it from the terminal with a configuration file (suitable for a headless server running in AWS) and how to do it from the GNOME user interface (an approach well-suited for a workstation being used to access resources in AWS).

Using a Configuration File from the CLI

Most of the Wireguard tutorials I saw focused only on this approach, so you’re likely to find other articles out there that share similar (or the same) information.

To set up a Wireguard interface using a configuration file from the CLI, create a wg<X>.conf file in /etc/wireguard, where <X> is the number of the interface. Typically you’d start with wg0 for the first VPN interface, but I’m not aware of any requirement to start with wg0. In this file, place the following contents:

[Interface]
PrivateKey = <private key for this machine>
Address = <IP address for Wireguard interface>
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
ListenPort = 51820

[Peer]
PublicKey = <public key for peer machine>
AllowedIPs = <IP address for peer Wireguard interface>, <additional CIDRs>
PersistentKeepalive = 25

There are a few notes I want to make about this configuration file:

  • With regard to the IP address: you’ll have to decide whether all your Wireguard peers will share a common subnet, or whether you’ll have separate interfaces (and therefore separate subnets) for each peer. There are pros and cons to each approach. I decided to go with a common subnet among peers.
  • If you used an interface name other than wg0, be sure to adjust the PostUp and PostDown lines accordingly. Note that this configuration uses NAT to make the VPN traffic appear to the rest of the VPC as if it’s coming from the VPN instance; this avoids the need for disabling the source/destination check or updating routing tables.
  • Because my client devices are behind a NAT, I included the PersistentKeepalive setting. You may not need this (but I suspect many people will).
  • With regard to the <additional CIDRs> notation above: if you want other IP addresses from the peer’s network to be able to route through this connection, specify those addresses/networks here. This is perhaps more important on the “client” side configuration, where you’re funneling all traffic for a VPC (or group of VPCs) through a single Wireguard node. (A client-side sketch follows this list.)
  • You’ll need a separate [Peer] section for each VPN peer. In my case, I had three different systems from which I wanted VPN access, so there needed to be three separate [Peer] sections.

Once the interface is configured, then you can activate the interface using wg-quick up wg0 (or whatever interface name you’re using).
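On a systemd-based distribution, the wireguard-tools package also ships a systemd unit, so you can have the interface come up automatically at boot:

systemctl enable --now wg-quick@wg0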

Using the GNOME User Interface

Using the CLI to configure the Wireguard interface(s) on a server is acceptable, especially in a use case like mine (establishing connectivity to EC2 instances). From the desktop, however, users may prefer using a graphical tool instead of the CLI. In this section, I’ll show what it looks like to use the GNOME Network Connections applet to configure your Wireguard interface(s).

First, you’d run nm-connection-editor to launch the GNOME Network Connections applet, which would look something like this (you’d have different connections with different names, naturally):

GNOME Network Connections window

Clicking on the + symbol in the lower left corner of the window will bring up a dialog to select the type of connection to add:

New interface selection dialog

Selecting “Wireguard” from this dialog and clicking “Create…” brings up the Wireguard page for the new connection:

Wireguard properties page

On this page, you’ll need to supply the following bits of information:

  • An interface name (like wg0 or wg1)
  • The private key generated earlier
  • A check in the “Add peer routes” box, so that the routing table is updated with routes from this connection
  • Peer information

Click on “Add” under Peers to add a peer connection with this dialog box:

Wireguard peer properties

Here you’ll need to provide:

  • The public key for the peer that was generated earlier.
  • The IP addresses and address ranges that will be routed across this connection. As you can see in the screenshot above, for “Allowed IPs” you’ll want to not only specify the IP address of the peer Wireguard interface but also the IP range of the VPC behind the VPN gateway.
  • The endpoint (IP address and port) for the VPN gateway. As mentioned earlier, make sure this traffic is allowed through security groups, Network Access Control Lists, and other network traffic controls.
  • You can also set the persistent keepalive interval.

Click “Apply” to commit the changes to the peer configuration. You can use the “Add” button again to add additional peers.

If you want the VPN connection to come up automatically, flip over to the General page and check “Connect automatically with priority”:

General properties page

The last thing to do is assign an IP address to the interface, which is done via the “IPv4 Settings” and/or the “IPv6 Settings” pages. Here’s the “IPv4 Settings” page:

IPv4 properties page

The IP address you assign needs to be on the same subnet as the IP address given to/specified for the peer. As I mentioned earlier, this could be a common subnet (like a /29 or similar) among all the Wireguard peers, or it could be a separate subnet for each peer. Fill in the other sections as needed.

Click “Save” whenever you’re finished, and your new Wireguard VPN connection should be good to go!

Activating the VPN

After the interfaces on both ends have been activated, the VPN connection comes up automatically. No additional steps are necessary to establish the VPN connections; the peer interfaces defined on each end automatically negotiate a connection between themselves. You should be able to start accessing resources in the remote VPC almost immediately.
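To confirm the tunnel is actually up, you can check for a recent handshake with the wg tool and then test connectivity to something on the far side (the target address is illustrative):

# Show interface status, peers, transfer counters, and latest handshakes
sudo wg show

# Test reachability of an instance in the remote VPC
ping <private IP of an EC2 instance>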

I hope this write-up proves useful to someone out there. If you have any questions, or if you feel something I’ve written is incorrect or inaccurate, please contact me on Twitter. Thanks for reading!

Closing out the Tokyo Assignment

In late 2019, I announced that I would be temporarily relocating to Tokyo for a six-month assignment to build out a team focused on cloud-native services and offerings. A few months later, I was still in Colorado, and I explained what was happening in a status update on the Tokyo assignment. I’ve had a few folks ask me about it, so I thought I’d go ahead and share that the Tokyo assignment did not happen and will not happen.

So why didn’t it happen? In my March 2020 update, I mentioned that paperwork, approvals, and proper budget allocations had slowed down the assignment, but then the pandemic hit. Many folks, myself included, expected that the pandemic would work itself out, but—as we now clearly know—it did not. And as the pandemic dragged on (and continues to drag on), restrictions on travel and concerns over public health and safety continued to mean that the assignment was not going to happen. As many of you know all too well, travel restrictions still exist even today.

OK, but why won’t it happen in the future, when the pandemic is under control? At the time the Tokyo assignment was offered to me, there was a set of reasons it made sense. The in-country team had no strong Kubernetes and cloud-native expertise, and wanted someone from the former Heptio team to come in and help bootstrap folks. There were business opportunities the in-country team wanted to pursue that would have been possible with the team I had been charged with building out. In reality, though, this was a time-bounded window of opportunity. The longer the pandemic continued and delayed the assignment, the more that window shrank. In-country management lured away folks with the requisite experience from competitors, and the team started bootstrapping itself. Business opportunities shifted. Strong team members from other parts of the organization and other parts of the world ended up relocating to nearby centers of growth (Singapore, notably). Now, more than a year later, the assignment just doesn’t make sense. It’s no longer needed.

I won’t lie—I’m more than a little sad that the assignment didn’t and won’t happen. Such is life, though; we shift and adapt as the world shifts and changes around us. Perhaps at some point in the future a similar opportunity will arise.

Technology Short Take 137

Welcome to Technology Short Take #137! I’ve got a wide range of topics for you this time around—eBPF, Falco, Snort, Kyverno, etcd, VMware Code Stream, and more. Hopefully one of these links will prove useful to you. Enjoy!

Networking

Servers/Hardware

  • I recently mentioned on Twitter that I was considering building out a new Linux PC to replace my aging Mac Pro (it’s a 2012 model, so going on 9 years old). Joe Utter shared with me his new lab build information, and now I’m sharing it with all of you. Sharing is caring, you know.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

  • Turns out that the apt-key command on Debian and Debian derivatives (like Ubuntu and its derivatives) has been deprecated. This article walks users through how to work with OpenPGP repository signing keys without the use of apt-key.
  • I recently watched this YouTube video series on tmux in order to get more familiar with this very popular tool. I can definitely see the value, but it’s going to take me some time to adjust my habits and workflows to take advantage of tmux.
  • Red Hat continues its effort to commoditize Docker’s position with developers, this time by taking aim at Docker Compose.

Storage

Virtualization

Career/Soft Skills

  • Lee Briggs shares a great post on learning to code with infrastructure as code (using infrastructure as code is something I think is a good career move for pretty much everyone). I like how Lee shares some very specific recommendations on how folks can get started.

While I’d love to keep going, I’d better wrap it up here. If you have any feedback for me, feel free to hit me on Twitter. I’d love to hear from you.

Technology Short Take 136

Welcome to Technology Short Take #136, the first Short Take of 2021! The content this time around seems to be a bit more security-focused, but I’ve still managed to include a few links in other areas. Here’s hoping you find something useful!

Networking

Servers/Hardware

  • Thinking of buying an M1-powered Mac? You may find this list helpful.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Career/Soft Skills

That’s it this time around! If you have any questions, comments, or corrections, feel free to contact me. I’m a regular visitor to the Kubernetes Slack instance, or you can just hit me on Twitter. Thanks!

Recent Posts

Using Velero to Protect Cluster API

Cluster API (also known as CAPI) is, as you may already know, an effort within the upstream Kubernetes community to apply Kubernetes-style APIs to cluster lifecycle management—in short, to use Kubernetes to manage the lifecycle of Kubernetes clusters. If you’re unfamiliar with CAPI, I’d encourage you to check out my introduction to Cluster API before proceeding. In this post, I’m going to show you how to use Velero (formerly Heptio Ark) to back up and restore Cluster API objects so as to protect your organization against an unrecoverable issue on your Cluster API management cluster.

Read more...

Details on the New Desk Layout

Over the holiday break I made some time to work on my desk layout, something I’d been wanting to do for quite a while. I’d been wanting to “up my game,” so to speak, with regard to producing more content, including some video content. Inspired by—and heavily borrowing from—this YouTube video, I decided I wanted to create a similar arrangement for my desk. In this post, I’ll share more details on my setup.

Read more...

Technology Short Take 135

Welcome to Technology Short Take #135! This will likely be the last Technology Short Take of 2020, so it’s a tad longer than usual. Sorry about that! You know me—I just want to make sure everyone has plenty of technical content to read during the holidays. And speaking of holidays…whatever holidays you do (or don’t) celebrate, I hope that the rest of the year is a good one for you. Now, on to the content!

Read more...

Bootstrapping a Cluster API Management Cluster

Cluster API is, if you’re not already familiar, an effort to bring declarative Kubernetes-style APIs to Kubernetes cluster lifecycle management. (I encourage you to check out my introduction to Cluster API post if you’re new to Cluster API.) Given that it is using Kubernetes-style APIs to manage Kubernetes clusters, there must be a management cluster with the Cluster API components installed. But how does one establish that management cluster? This is a question I’ve seen pop up several times in the Kubernetes Slack community. In this post, I’ll walk you through one way of bootstrapping a Cluster API management cluster.

Read more...

Some Site Updates

For the last three years, the site had remained largely unchanged in structure and overall function, even as I continued working to provide quality technical content. However, time was beginning to take its toll, and some “under the hood” work was needed. Over the Thanksgiving holiday, I spent some time updating the site, and there are a few changes I wanted to mention.

Read more...

Assigning Node Labels During Kubernetes Cluster Bootstrapping

Given that Kubernetes is a primary focus of my day-to-day work, I spend a fair amount of time in the Kubernetes Slack community, trying to answer questions from users and generally be helpful. Recently, someone asked about assigning node labels while bootstrapping a cluster with kubeadm. I answered the question, but afterward started thinking that it might be a good idea to also share that same information via a blog post—my thinking being that others who also had the same question aren’t likely to be able to find my answer on Slack, but would be more likely to find a published blog post. So, in this post, I’ll show how to assign node labels while bootstrapping a Kubernetes cluster.

Read more...

Pausing Cluster API Reconciliation

Cluster API is a topic I’ve discussed here in a number of posts. If you’re not already familiar with Cluster API (also known as CAPI), I’d encourage you to check out my introductory post on Cluster API first; you can also visit the official Cluster API site for more details. In this short post, I’m going to show you how to pause the reconciliation of Cluster API cluster objects, a task that may be necessary for a variety of reasons (including backing up the Cluster API objects in your management cluster).

Read more...

Technology Short Take 134

Welcome to Technology Short Take #134! I’m publishing a bit early this time due to the Thanksgiving holiday in the US. So, for all my US readers, here’s some content to peruse while enjoying some turkey (or whatever you’re having this year). For my international readers, here’s some content to peruse while enjoying dramatically lower volumes of e-mail because the US is on holiday. See, something for everyone!

Read more...

Review: CPLAY2air Wireless CarPlay Adapter

In late September, I was given a CPLAY2air wireless CarPlay adapter as a gift. Neither of my vehicles support wireless CarPlay, and so I was looking forward to using the CPLAY2air device to enable the use of CarPlay without having to have my phone plugged into a cable. Here’s my feedback on the CPLAY2air device after about six weeks of use.

Read more...

Resizing Windows to a Specific Size on macOS

I recently had a need (OK, maybe more a desire than a need) to set my browser window(s) on macOS to a specific size, like 1920x1080. I initially started looking at one of the many macOS window managers, but after reading lots of reviews and descriptions and still being unclear if any of these products did what I wanted, I decided to step back to using AppleScript to accomplish what I was seeking. In this post, I’ll share the solution (and the articles that helped me arrive at the solution).

Read more...

Technology Short Take 133

Welcome to Technology Short Take #133! This time around, I have a collection of links featuring the new Raspberry Pi 400, some macOS security-related articles, information on AWS Nitro Enclaves and gVisor, and a few other topics. Enjoy!

Read more...

Technology Short Take 132

Welcome to Technology Short Take #132! My list of links and articles from around the web seems to be a bit heavy on security-related topics this time. Still, there’s a decent collection of networking, cloud computing, and virtualization articles as well as a smattering of other topics for you to peruse. I hope you find something useful!

Read more...

Considerations for using IaC with Cluster API

In other posts on this site, I’ve talked about both infrastructure-as-code (see my posts on Terraform or my posts on Pulumi) and somewhat separately I’ve talked about Cluster API (see my posts on Cluster API). And while I’ve discussed the idea of using existing AWS infrastructure with Cluster API, in this post I wanted to try to think about how these two technologies play together, and provide some considerations for using them together.

Read more...

Technology Short Take 131

Welcome to Technology Short Take #131! I’m back with another collection of articles on various data center technologies. This time around the content is a tad heavy on the security side, but I’ve still managed to pull in articles on networking, cloud computing, applications, and some programming-related content. Here’s hoping you find something useful here!

Read more...

Updating AWS Credentials in Cluster API

I’ve written a bit here and there about Cluster API (aka CAPI), mostly focusing on the Cluster API Provider for AWS (CAPA). If you’re not yet familiar with CAPI, have a look at my CAPI introduction or check the Introduction section of the CAPI site. Because CAPI interacts directly with infrastructure providers, it typically has to have some way of authenticating to those infrastructure providers. The AWS provider for Cluster API is no exception. In this post, I’ll show how to update the AWS credentials used by CAPA.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!