Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 148

Welcome to Technology Short Take #148, aka the Thanksgiving Edition (at least, for US readers). I’ve been scouring RSS feeds and various social media sites, collecting as many useful links and articles as I can find: from networking hardware and networking CI/CD pipelines to Kernel TLS and tricks for improving your working memory. That’s quite the range! I hope that you find something useful here.

Networking

Servers/Hardware

Security

  • Microsoft found a macOS vulnerability that could bypass System Integrity Protection (SIP). More details are in this Microsoft blog post.
  • It’s unfortunate that a security researcher is having such a hard time with Apple’s Security Bounty program; see this article. (Hat tip to Michael Tsai.)

Cloud Computing/Cloud Management

  • For all the Internet’s distributed nature, it’s funny how some aspects still end up being dependent on smaller, less distributed things. Dark Reading takes a look at one such instance: the dependence on Let’s Encrypt and what happened when one of their root certificates recently expired.
  • Dean Lewis runs through updating node resources when using TKG on vSphere (the same basic process applies to AWS and Azure as well). Since machine templates are immutable, the key, and what Dean illustrates in his article, is creating a new template and then pointing the MachineDeployment at the new template; there’s a brief sketch of that change after this list. This is all documented on the Cluster API site, but it’s helpful to see a worked example like Dean’s.
  • Want to do a deep dive into the inner workings of custom resource validation in Kubernetes? Daniel Mangum has you covered.
  • Thinking of using Envoy’s Lua filter? This post by Jean-Marie Joly has some great information.
  • Kief Morris tackles the “snowflakes as code” antipattern, where separate instances of infrastructure code are used to manage different environments.
  • Michael Marie-Julie shares some details on how The Fork structures their AWS presence (and the tools they use).
  • Joaquín Menchaca takes a closer look at using Istio with AKS.
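
To make the mechanics of that MachineDeployment change a bit more concrete, here’s a minimal sketch of the relevant portion of a MachineDeployment after such a change. All of the names here are hypothetical; the only field that matters for this particular change is spec.template.spec.infrastructureRef, which is repointed at the newly created machine template.

---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: workload-md-0              # hypothetical MachineDeployment name
  namespace: default
spec:
  clusterName: workload            # hypothetical cluster name
  replicas: 3
  selector:
    matchLabels: {}
  template:
    spec:
      clusterName: workload
      version: v1.22.2
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
          name: workload-md-0
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: VSphereMachineTemplate
        name: workload-md-0-v2     # the newly created (immutable) machine template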

Operating Systems/Applications

Virtualization

Career/Soft Skills/Other

  • Although this article is titled “What Kids Need to Know About Their Working Memory,” I’d say the article is equally applicable to adults.
  • Ever heard of the “tyranny of the urgent”? If not, this post is for you. Even if you have heard of it, you may find some useful tips!

I have more links and articles to share, but I’ll stop here…for now, anyway! I do hope I’ve managed to share something useful for readers. If you have any questions or any comments, please don’t hesitate to contact me. Reach out to me on Twitter, or find me on any one of several Slack communities (like the Kubernetes Slack community). Enjoy!

Using Kustomize Components with Cluster API

I’ve been using Kustomize with Cluster API (CAPI) to manage my AWS-based Kubernetes clusters for quite a while (along with Pulumi for managing the underlying AWS infrastructure). For all the time I’ve been using this approach, I’ve also been unhappy with the overlay-based approach that had evolved as a way of managing multiple workload clusters. With the recent release of CAPI 1.0 and the v1beta1 API, I took this opportunity to see if there was a better way. I found a different way—time will tell if it is a better way. In this post, I’ll share how I’m using Kustomize components to help streamline managing multiple CAPI workload clusters.

Before continuing, I feel it’s important to point out that while the bulk of the Kustomize API is reasonably stable at v1beta1, the components portion of the API is still in early days (v1alpha1). So, if you adopt this functionality, be aware that it may change (or even get dropped).

More information on Kustomize components can be found in the Kustomize components KEP or in this demo document. The documentation on Kustomize components is somewhat helpful as well. I won’t try to rehash information found in those sources here, but instead build on that information with a CAPI-focused use case. Finally, if you’re unfamiliar with using Kustomize with CAPI, start with this introduction and then read the post on transformer configurations for v1beta1.

So, what are Kustomize components? In the context of CAPI, let’s say that you have a discrete change or configuration you’d like to make to a base CAPI manifest. Perhaps it’s changing the imageLookupBaseOS setting, as described in this post, to influence AMI selection. You could use a JSON 6902 patch, similar to this, to make that change:

[
  { "op": "add",
    "path": "/spec/template/spec/imageLookupBaseOS",
    "value": "ubuntu-20.04"
  }
]

You could turn this into a Kustomize component using the following kustomization.yaml:

---
apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
patchesJson6902:
  - path: spec-template-spec-base-os.json
    target:
      group: infrastructure.cluster.x-k8s.io
      kind: AWSMachineTemplate
      name: ".*"
      version: v1beta1
  - path: spec-template-spec-base-os.json
    target:
      group: infrastructure.cluster.x-k8s.io
      kind: AWSMachine
      name: ".*"
      version: v1beta1

Note the kind: Component and the v1alpha1 API version, which distinguish this file from a typical kustomization.yaml. This file references the JSON 6902 patch shared earlier and applies that patch to all AWSMachine and AWSMachineTemplate objects.

Congratulations, you’ve defined your first component! Now, what do you do with it?

To use a component you’ve defined, simply include it in a kustomization.yaml like this:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../base
components:
  - ../components/ubuntu-20.04

This is a more “traditional” kustomization.yaml that specifies a base resource, but then instead of referencing a list of transformers or generators or patches, it references the component you defined earlier. You are, of course, not limited to using just one component. Here’s an example from my own lab configuration files:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../capi-base
components:
  - ../components/dev
  - ../components/us-west-2
  - ../components/calico-cni
  - ../components/aws-ccm
  - ../components/aws-ebs-csi
  - ../components/ubuntu-20.04

In this configuration, I’m referencing a number of different components:

  • The “dev” component controls the number of MachineDeployments and how many replicas each MachineDeployment manages. If I need more MachineDeployments with more replicas, I can simply swap out this component for the “prod” component.
  • The “calico-cni” component adds labels that cause the workload cluster to match against a ClusterResourceSet, thereby automatically installing the Calico CNI plugin; see the sketch after this list for what such a component might look like. (Curious about ClusterResourceSets? Read this or this.)
  • Similarly, the “aws-ccm” and “aws-ebs-csi” components add labels for ClusterResourceSets that install the external AWS cloud provider and EBS CSI driver, respectively. (See this post for more details on using the external AWS cloud provider.)
  • The “ubuntu-20.04” component sets the AMIs to Ubuntu 20.04.

The awesome thing about using components is that I can define the change I want just once—as a component—and then reference it from multiple overlays. Each overlay becomes a list of references to components, instead of slightly different copies of other overlays. This is a huge improvement over duplicating patches for multiple workload clusters.

Overall, I’m far more pleased with using components to describe Kustomize overlays for CAPI workload clusters than previous approaches.

I hope this is helpful to others. If you have any questions, want to talk about it in more detail, or have suggestions for how I can improve, please don’t hesitate to contact me. Reach out to me on Twitter, or find me on the Kubernetes Slack community.

Technology Short Take 147

Welcome to Technology Short Take #147! The list of articles is a bit shorter than usual this time around, but I’ve still got a good collection of articles and posts covering topics in networking, hardware (mostly focused on Apple’s processors), cloud computing, and virtualization. There’s bound to be something in here for most everyone! (At least, I hope so.) Enjoy your weekend reading!

Networking

Servers/Hardware

  • David Sparks shares the specs for his new 16" MacBook Pro as well as the rationale for how he arrived at that configuration. I must confess that I am intrigued by the M1 Pro and M1 Max, but having just bought a “regular” M1 a few months ago I’m not sure I can justify the purchase. And, to be honest, my “regular” M1 is plenty snappy for what I do.
  • AnandTech takes a closer look at Apple’s A15 SoC (system on a chip), which debuted in the iPhone 13.

Security

  • Senthil Raja Chermapandian asks the question, “Are Kubernetes Network Policies really useful?” I see them as less useful now, given the rising adoption of service meshes and L7-aware CNI plugins like Cilium, but I do still believe there are valid use cases for them.
  • Thomas Reed with Malwarebytes Labs has a lengthy post on how macOS attacks are evolving.

Cloud Computing/Cloud Management

  • Now that Cluster API has hit 1.0 (and v1beta1 status, indicating pretty stable APIs), the Cluster API community moves on to addressing how to manage the lifecycle of groups of Kubernetes clusters. See this blog post from Fabrizio Pandini for more details.
  • Want to use the Kong API Gateway with Knative? Fear not, the folks from Direktiv share their knowledge on running services with Knative and Kong.
  • William Lam shares some information on using kube-vip with VMware’s new Tanzu Community Edition (TCE).
  • Jacob Schulz brought this article on Terraform modules to my attention; if you aren’t already familiar with Terraform modules, you may find it helpful.
  • In this article, Alex Feiszli lays out the argument for a “wide” cluster over multiple clusters, and advocates for potentially spreading that “wide” cluster over multiple cloud providers. I have to say that, in general, I disagree with this architecture. There are a number of reasons (and I won’t go into great detail), but some of them include blast radius (if your one control plane goes down, you’re in for a world of hurt), lack of support for multiple cloud provider integrations (you’ll have to give up most, if not all, cloud provider integrations), and increased complexity (among other things, having to add and maintain additional node labels and node selectors is additional work).
  • Speaking of labels, Aviad Shikloshi wrote an article on best practices for Kubernetes labels and annotations.
  • Chris Evans attempts to define hybrid cloud and multi-cloud.

Operating Systems/Applications

Virtualization

Career/Soft Skills

  • I loved this article about the “follow your feet” saying. I’d never heard it before, but I like the idea of applying this to your IT career. Ask the questions you think are stupid questions. Chase down the ideas that seem like they have promise, even if they aren’t fully fleshed out yet. Try that new combination of technologies that you think may hold promise for your team or your company. In other words, follow your feet.
  • I really enjoyed this article on rent vs. buy in one’s career.

That’s all, folks! (Bonus points if you get this reference.) If you have any questions, comments, or feedback, I’d love to hear from you! Feel free to contact me on Twitter, or find me in any of a variety of Slack communities.

Influencing Cluster API AMI Selection

The Kubernetes Cluster API (CAPI) project—which recently released v1.0—can, if you wish, help manage the underlying infrastructure associated with a cluster. (You’re also fully able to have CAPI use existing infrastructure as well.) Speaking specifically of AWS, this means that the Cluster API Provider for AWS is able to manage VPCs, subnets, routes and route tables, gateways, and—of course—EC2 instances. These EC2 instances are booted from a set of AMIs (Amazon Machine Images, definitely pronounced “ay-em-eye” with three syllables) that are prepared and maintained by the CAPI project. In this short and simple post, I’ll show you how to influence the AMI selection process that CAPI’s AWS provider uses.

There are a few different ways to influence AMI selection, all of which involve settings in the AWSMachineSpec, which controls the configuration of an AWSMachine object. (An AWSMachine object is an infrastructure-specific implementation of a logical Machine object.) Specifically, these are the options for influencing AMI selection:

  1. You can instruct CAPI to use a specific AMI with the ami field. (If this field is set, the other options do not apply.)
  2. You can modify the lookup format used to find an AMI with the imageLookupFormat field. By default, the value for this field is capa-ami-{{.BaseOS}}-?{{.K8sVersion}}-*. These placeholders are controlled by the imageLookupBaseOS field (described below) and the Kubernetes version supplied by a Machine, MachineDeployment, or KubeadmControlPlane object.
  3. The imageLookupOrg field allows you to provide an AWS Organization ID to use in looking up an AMI. If I’m not mistaken, this value defaults to “258751437250”.
  4. You can provide parameters for the base OS lookup using the imageLookupBaseOS field. For example, to have the CAPI AWS provider use Ubuntu 20.04 instead of Ubuntu 18.04 (the default), add this field with a value of “ubuntu-20.04” to your CAPI manifests.

(For reference, the default values for imageLookupOrg, imageLookupBaseOS, and imageLookupFormat are found here in the code.)

Using these fields, you could instruct CAPI to use a custom Ubuntu 20.04-based AMI maintained by your own AWS account by specifying imageLookupOrg, imageLookupBaseOS, and imageLookupFormat. I personally have used the imageLookupBaseOS field to use Ubuntu 20.04 instead of Ubuntu 18.04 on several occasions.
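
For illustration, here’s a hypothetical AWSMachineTemplate showing the imageLookupBaseOS field in context (the name, instance type, and other values are placeholders, not anything prescribed by CAPI):

---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSMachineTemplate
metadata:
  name: workload-md-0                 # placeholder name
  namespace: default
spec:
  template:
    spec:
      instanceType: t3.large
      iamInstanceProfile: nodes.cluster-api-provider-aws.sigs.k8s.io
      sshKeyName: default
      imageLookupBaseOS: ubuntu-20.04   # look up Ubuntu 20.04 AMIs instead of the default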

Keep in mind these fields are part of the AWSMachineSpec struct, and may be used in AWSMachine objects as well as AWSMachineTemplate objects.

If you have any questions—or if I have misrepresented something or explained it incorrectly—feel free to contact me on Twitter or find me on the Kubernetes Slack community. All constructive feedback is welcome—I’d love to hear from you.

Creating Reusable Kuma Installation YAML

Using CLI tools—instead of a “wall of YAML”—to install things onto Kubernetes is a growing trend, it seems. Istio and Cilium, for example, each have a CLI tool for installing their respective project. I get the reasons why; you can build logic into a CLI tool that you can’t build into a YAML file. Kuma, the open source service mesh maintained largely by Kong and a CNCF Sandbox project, takes a similar approach with its kumactl tool. In this post, however, I’d like to take a look at creating reusable YAML to install Kuma, instead of using the CLI tool every time you install.

You might be wondering, “Why?” That’s a fair question. Currently, the kumactl tool, unless configured otherwise, will generate a set of TLS assets to be used by Kuma (and embeds some of those assets in the YAML regardless of the configuration). Every time you run kumactl, it will generate a new set of TLS assets. This means that the command is not declarative, even if the output is. Unfortunately, you can’t reuse the output, as that would result in duplicate TLS assets across installations. That brings me to the point of this post: how can one create reusable YAML to install Kuma?

Fortunately, this is definitely possible. There are two parts to this process:

  1. Define replacement TLS assets using cert-manager.
  2. Modify the output of kumactl to reference the replacement TLS assets.

Defining TLS Assets

Instead of allowing kumactl to generate TLS assets every time the command is run, you need a way to be able to declaratively define what TLS assets are needed and what the properties of those assets should be. Fortunately, that’s exactly what the cert-manager project does!

Relying on cert-manager to handle TLS assets does mean that cert-manager becomes a dependency (or a prerequisite) for Kuma—it will have to be installed before Kuma can be installed.

To define the necessary TLS assets, you’ll use cert-manager to:

  1. Create a self-signed ClusterIssuer.
  2. Use the self-signed ClusterIssuer to issue a CA root certificate (and a corresponding Secret to store the private key).
  3. Configure the root CA certificate as an Issuer.
  4. Issue a TLS certificate and key that will be used by Kuma.

The root CA certificate definition could look something like this:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kuma-root-ca
  namespace: kuma-system
spec:
  isCA: true
  commonName: kuma-root-ca
  secretName: kuma-root-ca
  duration: 43800h # 5 years
  renewBefore: 720h # 30d
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - digital signature
    - key encipherment
    - cert sign
  issuerRef:
    name: selfsigned-issuer # References self-signed ClusterIssuer
    kind: ClusterIssuer
    group: cert-manager.io

Here’s an example of a cert-manager Certificate resource for the TLS certificate that Kuma would use:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kuma-tls-general
  namespace: kuma-system
spec:
  commonName: kuma-tls-general
  secretName: kuma-tls-general
  duration: 8760h # 1 year
  renewBefore: 360h # 15d
  subject:
    organizations:
      - kuma
  isCA: false
  privateKey:
    algorithm: RSA
    encoding: PKCS1
    size: 2048
  usages:
    - digital signature
    - key encipherment
    - server auth
    - client auth
  dnsNames:
    - kuma-control-plane.kuma-system
    - kuma-control-plane.kuma-system.svc
  issuerRef:
    name: kuma-ca-issuer # References Issuer based on kuma-root-ca
    kind: Issuer
    group: cert-manager.io

Make note of the name and secretName values for both certificates; they will be needed later.

After all the TLS assets have been defined—they don’t need to actually be applied against the cluster, just defined—you’re ready to modify the installation YAML to make it reusable.

Creating Reusable Installation YAML

Once you’ve defined the TLS assets, then you can make the necessary changes to the YAML output of kumactl to make it reusable. Keep in mind, as described in the previous section, using cert-manager to manage TLS assets means that cert-manager becomes a dependency for Kuma (in other words, you’ll need to install cert-manager before you can install Kuma).

Begin by creating a starting point with kumactl and piping the output to a file:

kumactl install control-plane --tls-general-secret=kuma-tls-general \
  --tls-general-ca-bundle=$(echo "blah") > kuma.yaml

For the --tls-general-secret parameter, you’re specifying the name of the Secret created by the general TLS certificate you defined earlier with cert-manager.

The file created by this command needs four changes made to it:

  1. The caBundle value supplied for all webhooks needs to be deleted (hence, the value you specify on the command line doesn’t matter).
  2. All webhooks need to be annotated for the cert-manager CA Injector to automatically inject the correct caBundle value.
  3. The “kuma-control-plane” Deployment needs to be modified to mount the root CA certificate’s Secret (created by cert-manager) as a volume.
  4. The “kuma-control-plane” Deployment needs to be changed to pass in a different value for the KUMA_RUNTIME_KUBERNETES_INJECTOR_CA_CERT_FILE environment variable (it should point to the ca.crt file on the volume added in step 3).

You could make these changes manually, but since we’re going for declarative, why not use something like Kustomize?

To make the first change—removing the caBundle value embedded by kumactl—you could use this JSON 6902 patch:

[
    { "op": "remove", "path": "/webhooks/0/clientConfig/caBundle" },
    { "op": "remove", "path": "/webhooks/1/clientConfig/caBundle" },
    { "op": "remove", "path": "/webhooks/2/clientConfig/caBundle" }
]

To make the second change, you could use a JSON 6902 patch like this (the use of “kuma-root-ca” in the patch below refers to the name of the root CA Certificate resource defined earlier with cert-manager):

[
  { "op": "add",
    "path": "/metadata/annotations", 
    "value": 
      { "cert-manager.io/inject-ca-from": "kuma-system/kuma-root-ca" }
  }
]

These two changes enable you to remove the Base64-encoded copy of the CA certificate—referenced by Kuma’s webhooks—and have cert-manager’s CA Injector supply the correct value instead.

This JSON 6902 patch would handle the third change:

[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/0",
    "value": {
      "name": "general-ca-crt",
      "secret": {
        "secretName": "kuma-root-ca"
      }
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/0",
    "value": {
      "name": "general-ca-crt",
      "mountPath": "/var/run/secrets/kuma.io/ca-cert",
      "readOnly": true
    }
  }
]

The Secret referenced in the first part of the patch above references the Secret for the root CA Certificate resource, as noted in the secretName field on the Certificate’s manifest.

And, finally, the fourth change can be handled using this JSON 6902 patch:

[
  { "op": "replace",
    "path": "/spec/template/spec/containers/0/env/11/value",
    "value": "/var/run/secrets/kuma.io/ca-cert/ca.crt" }
]

I won’t walk through all of these changes in great detail, but I do want to take a moment to dive a bit deeper into the Secret mounted as an additional volume. Using the JSON 6902 patch above against the base YAML created using kumactl will result in a configuration that looks like this (focused only on volumes and volumeMounts in the Deployment, everything else is stripped away):

spec:
  template:
    spec:
      containers:
      - volumeMounts:
        - mountPath: /var/run/secrets/kuma.io/ca-cert
          name: general-ca-crt
          readOnly: true
        - mountPath: /var/run/secrets/kuma.io/tls-cert
          name: general-tls-cert
          readOnly: true
        - mountPath: /etc/kuma.io/kuma-control-plane
          name: kuma-control-plane-config
          readOnly: true
      volumes:
      - name: general-ca-crt
        secret:
          secretName: kuma-root-ca
      - name: general-tls-cert
        secret:
          secretName: kuma-tls-general
      - configMap:
          name: kuma-control-plane-config
        name: kuma-control-plane-config

The “general-ca-crt” volume and its corresponding volumeMount are what’s been added by the Kustomize patch. Why? When generating the YAML output, kumactl combines three resources—the TLS certificate, the TLS key, and the CA certificate—into a single Secret. However, cert-manager won’t create a Secret like that. So, to avoid an additional manual step, you mount the Secret created by cert-manager for the root CA certificate as a separate volume. You can then modify the environment variable that tells the control plane where the CA certificate is located so that it points to the path where this new volume is mounted.

After the four changes are complete, the resulting modified YAML is completely reusable.
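
To give a sense of how the pieces fit together, here’s one possible kustomization.yaml that wires the kumactl output and the four patches together. The patch file names are placeholders, and you may need a separate caBundle-removal patch for the validating webhook configuration if it embeds caBundle values as well (the indexes in that patch depend on how many webhooks each configuration defines). The “kuma-declarative-install” repository mentioned below contains a complete, working configuration.

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - kuma.yaml                          # output of the kumactl command above
patchesJson6902:
  # Change 1: remove the embedded caBundle values
  - path: remove-cabundle.json
    target:
      group: admissionregistration.k8s.io
      version: v1
      kind: MutatingWebhookConfiguration
      name: ".*"
  # Change 2: annotate the webhooks so the CA Injector supplies the caBundle
  - path: add-inject-ca-annotation.json
    target:
      group: admissionregistration.k8s.io
      version: v1
      kind: MutatingWebhookConfiguration
      name: ".*"
  - path: add-inject-ca-annotation.json
    target:
      group: admissionregistration.k8s.io
      version: v1
      kind: ValidatingWebhookConfiguration
      name: ".*"
  # Changes 3 and 4: mount the root CA Secret and repoint the CA certificate env var
  - path: add-ca-volume.json
    target:
      group: apps
      version: v1
      kind: Deployment
      name: kuma-control-plane
  - path: replace-ca-cert-env.json
    target:
      group: apps
      version: v1
      kind: Deployment
      name: kuma-control-plane

Running kustomize build against this directory then produces the reusable manifest.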

Using the Reusable YAML

To actually use the reusable YAML you’ve just created with the above steps:

  1. Apply the cert-manager TLS definitions to the cluster where you want to install Kuma. This will create all the necessary Certificate resources and Secrets. (Obviously cert-manager will need to be installed already.)
  2. Apply the Kuma YAML created by the steps above. It’s configured to reference the cert-manager assets created in step 1.

That’s it.

Caveats/Limitations

The process and changes outlined in this post only apply to a standalone (single zone) installation. It’s absolutely possible to do this for multizone deployments, too, but I’ll leave that as an exercise for the reader.

Additional Resources

I’ve recently published two GitHub repositories with content that supports what’s described in this post:

  • The “kuma-cert-manager” repository outlines how to replace kumactl-generated TLS assets with resources from cert-manager. This supports the “Defining TLS Assets” section above. Full examples of all the related cert-manager resources are found in this repository.
  • The “kuma-declarative-install” repository builds on the previous repository by showing the additional changes that must be made to the YAML output generated by kumactl. This supports the “Creating Reusable Installation YAML” section above. This includes a Kustomize configuration that will automate all the changes necessary to make the YAML reusable.

I hope this information is useful. If you have questions, feel free to find me on the Kuma community Slack or contact me on Twitter (my DMs are open).

Recent Posts

Using the External AWS Cloud Provider for Kubernetes

In 2018, after finding a dearth of information on setting up Kubernetes with AWS integration/support, I set out to try to establish some level of documentation on this topic. That effort resulted in a few different blog posts, but ultimately culminated in this post on setting up an AWS-integrated Kubernetes cluster using kubeadm. Although originally written for Kubernetes 1.15, the process described in that post is still accurate for newer versions of Kubernetes. With the release of Kubernetes 1.22, though, the in-tree AWS cloud provider—which is what is used/described in the post linked above—has been deprecated in favor of the external cloud provider. In this post, I’ll show how to set up an AWS-integrated Kubernetes cluster using the external AWS cloud provider.

Read more...

Kustomize Transformer Configurations for Cluster API v1beta1

The topic of combining kustomize with Cluster API (CAPI) is a topic I’ve touched on several times over the last 18-24 months. I first touched on this topic in November 2019 with a post on using kustomize with CAPI manifests. A short while later, I discovered a way to change the configurations for the kustomize transformers to make it easier to use it with CAPI. That resulted in two posts on changing the kustomize transformers: one for v1alpha2 and one for v1alpha3 (since there were changes to the API between versions). In this post, I’ll revisit kustomize transformer configurations again, this time for CAPI v1beta1 (the API version corresponding to the CAPI 1.0 release).

Read more...

Technology Short Take 146

Welcome to Technology Short Take #146! Over the last couple of weeks, I’ve gathered a few technology-related links for you all. There’s some networking stuff, a few security links, and even a hardware-related article. But enough with the introduction—let’s get into the content!

Read more...

Installing Cilium via a ClusterResourceSet

In this post, I’m going to walk you through how to install Cilium onto a Cluster API-managed workload cluster using a ClusterResourceSet. It’s reasonable to consider this post a follow-up to my earlier post that walked you through using a ClusterResourceSet to install Calico. There’s no need to read the earlier post, though, as this post includes all the information (or links to the information) you need. Ready? Let’s jump in!

Read more...

Technology Short Take 145

Welcome to Technology Short Take #145! What will you find in this Tech Short Take? Well, let’s see…stuff on Envoy, network automation, network designs, M1 chips (and potential open source variants!), a bevy of security articles (including a couple on very severe vulnerabilities), Kubernetes, AWS IAM, and so much more! I hope that you find something useful here. Enjoy!

Read more...

Technology Short Take 144

Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in topics like Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!

Read more...

Establishing VPC Peering with Pulumi and Go

I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this may pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.

Read more...

Using the AWS CLI to Tag Groups of AWS Resources

To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.

Read more...

Technology Short Take 143

Welcome to Technology Short Take #143! I have what I think is an interesting list of links to share with you this time around. Since taking my new job at Kong, I’ve been spending more time with Envoy, so you’ll see some Envoy-related content showing up in this Technology Short Take. I hope this collection of links has something useful for you!

Read more...

Starting WireGuard Interfaces Automatically with Launchd on macOS

In late June of this year, I wrote a piece on using WireGuard on macOS via the CLI, where I walked readers using macOS through how to configure and use the WireGuard VPN from the terminal (as opposed to using the GUI client, which I discussed here). In that post, I briefly mentioned that I was planning to explore how to have macOS' launchd automatically start WireGuard interfaces. In this post, I’ll show you how to do exactly that.

Read more...

An Alternate Approach to etcd Certificate Generation with Kubeadm

I’ve written a fair amount about kubeadm, which was my preferred way of bootstrapping Kubernetes clusters until Cluster API arrived. Along the way, I’ve also discussed using kubeadm to assist with setting up etcd, the distributed key-value store leveraged by the Kubernetes control plane (see here, here, and here). In this post, I’d like to revisit the topic of using kubeadm to set up an etcd cluster once again, this time taking a look at an alternate approach to generating the necessary TLS certificates than what the official documentation describes.

Read more...

Technology Short Take 142

Welcome to Technology Short Take #142! This time around, the Networking section is a bit light, but I’ve got plenty of cloud computing links and articles for you to enjoy, along with some stuff on OSes and applications, programming, and soft skills. Hopefully there’s something useful here for you!

Read more...

Adding Multiple Items Using Kustomize JSON 6902 Patches

Recently, I needed to deploy a Kubernetes cluster via Cluster API (CAPI) into a pre-existing AWS VPC. As I outlined in this post from September 2019, this entails modifying the CAPI manifest to include the VPC ID and any associated subnet IDs, as well as referencing existing security groups where needed. I knew that I could use the kustomize tool to make these changes in a declarative way, as I’d explored using kustomize with Cluster API manifests some time ago. This time, though, I needed to add a list of items, not just modify an existing value. In this post, I’ll show you how I used a JSON 6902 patch with kustomize to add a list of items to a CAPI manifest.

Read more...

Using WireGuard on macOS via the CLI

I’ve written a few different posts on WireGuard, the “simple yet fast and modern VPN” (as described by the WireGuard web site) that aims to supplant tools like IPSec and OpenVPN. My first post on WireGuard showed how to configure WireGuard on Linux, both on the client side as well as on the server side. After that, I followed it up with posts on using the GUI WireGuard app to configure WireGuard on macOS and—most recently—making WireGuard from Homebrew work on an M1-based Mac. In this post, I’m going to take a look at using WireGuard on macOS again, but this time via the CLI.

Read more...

Installing Older Versions of Kumactl on an M1 Mac

The Kuma community recently released version 1.2.0 of the open source Kuma service mesh, and along with it a corresponding version of kumactl, the command-line utility for interacting with Kuma. To make it easy for macOS users to get kumactl, the Kuma community maintains a Homebrew formula for the CLI utility. That includes providing M1-native (ARM64) macOS binaries for kumactl. Unfortunately, installing an earlier version of kumactl on an M1-based Mac using Homebrew is somewhat less than ideal. Here’s one way—probably not the only way—to work around some of the challenges.

Read more...
