Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Considerations for using IaC with Cluster API

In other posts on this site, I’ve talked about infrastructure-as-code (see my posts on Terraform or my posts on Pulumi) and, somewhat separately, about Cluster API (see my posts on Cluster API). And while I’ve discussed the idea of using existing AWS infrastructure with Cluster API, in this post I want to think through how these two technologies play together and provide some considerations for using them in combination.

I’ll focus here on AWS as the cloud provider/platform, but many of these considerations would also apply—in concept, at least—to other providers/platforms.

In no particular order, here are some considerations for using infrastructure-as-code and Cluster API (CAPI)—specifically, the Cluster API Provider for AWS (CAPA)—together:

  • If you’re going to need the CAPA workload clusters to have access to other AWS resources, like applications running on EC2 instances or managed services like RDS, you’ll need to use the additionalSecurityGroups functionality, as I described in this blog post. (There’s a brief sketch of this after this list.)
  • The AWS cloud provider requires certain tags to be assigned to resources (see this post for more details), and CAPI automatically provisions new workload clusters with the AWS cloud provider when running on AWS. Thus, you’ll want to make sure that the IaC tool you’re using is assigning the correct tags to the AWS resources.
  • Continuing on the tag theme, you’ll also need to make sure that the tags match the cluster name assigned in the workload cluster YAML manifest. So, for example, if your workload cluster YAML manifest defines a cluster name of “blue”, the AWS tag must be kubernetes.io/cluster/blue. Otherwise, the AWS cloud provider won’t function correctly.
  • When it comes to bastion hosts, both CAPA and your IaC tool of choice can create them. You’ll probably want to have them handled by the IaC tool (presumably you have other AWS resources you’re managing to which you may also need access), in which case the first bullet point above—about using the additionalSecurityGroups functionality to enable access to other AWS resources—applies.
  • CAPA will need access to information about the infrastructure it is consuming. Per the upstream docs, CAPA needs the VPC ID and the IDs of all the subnets. Ideally, you’ll want some sort of automated (or relatively automated) means of getting this information out of your IaC solution and into CAPA. For a few ideas of how this might be done with Pulumi, check out this repository that I created to accompany my Cloud Engineering Summit session.
  • Keep in mind that using IaC to manage infrastructure but using CAPI/CAPA to manage your Kubernetes clusters creates a “split management” scenario. One potential benefit to CAPI/CAPA is that it can handle the lifecycle of both Kubernetes clusters and the underlying infrastructure. Leveraging IaC with CAPI/CAPA means giving up that potential benefit. On the flip side, using IaC for infrastructure may provide greater flexibility and more options for customization. As with so many things in technology, making this decision is all about weighing the trade-offs.
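
To make the first bullet above a bit more concrete, here’s a minimal sketch of what the additionalSecurityGroups functionality looks like in an AWSMachineTemplate, using the “blue” cluster name from the tagging example. This is illustrative only: the apiVersion reflects the v1alpha3 API from the CAPA 0.5.x era (verify it against your installation), and the template name and security group ID are placeholders for values your IaC tool would supply.

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AWSMachineTemplate
metadata:
  name: blue-md-0
spec:
  template:
    spec:
      instanceType: t3.large
      # Security group created and managed by your IaC tool (Terraform, Pulumi, etc.);
      # the ID below is a placeholder you'd replace with the real value.
      additionalSecurityGroups:
      - id: sg-0123456789abcdef0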

No doubt there are more considerations worth discussing, but this short list should get you started. Feel free to contact me on Twitter or find me on the Kubernetes Slack if you’re interested in talking more about this topic.

Technology Short Take 131

Welcome to Technology Short Take #131! I’m back with another collection of articles on various data center technologies. This time around the content is a tad heavy on the security side, but I’ve still managed to pull in articles on networking, cloud computing, applications, and some programming-related content. Here’s hoping you find something useful here!

Networking

  • This recent Ars Technica article points out that a feature in Chromium—the open source project leveraged by Chrome and Edge, among others—is having a significant impact on root DNS traffic. More technical details can be found in an associated APNIC blog post.
  • Here are a few details around Open Service Mesh.
  • Quentin Machu outlines a series of problems his company experienced using Weave Net as the CNI for their Kubernetes clusters, as well as describes the migration process to a new CNI. His blog post is well worth a read, IMO.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Programming

  • Kyle Galbraith has two articles that I read over the last several weeks, one on the repository pattern and one on the adapter pattern. Since I’m still quite the programming newbie, both were a bit of a stretch of my knowledge, but I think I gleaned enough of the concepts to be able to use them later.

Virtualization

  • Laurens van Dujin brings to light a bug in vCenter 7.0.0c that causes high CPU usage; turns out this bug is related to the new Workload Control Plane features in vSphere 7. You can disable the service to bring the CPU usage down, but there are caveats. Be sure to read Laurens’ post for details.

Career/Soft Skills

OK, that’s all for now. Hopefully you’ve found something useful in this post. If so, I’d love to hear about it—feel free to reach out to me on Twitter. Similarly, if you have suggestions for how I might improve the content of these types of posts, I’m open to all constructive criticism. Thanks for reading!

Updating AWS Credentials in Cluster API

I’ve written a bit here and there about Cluster API (aka CAPI), mostly focusing on the Cluster API Provider for AWS (CAPA). If you’re not yet familiar with CAPI, have a look at my CAPI introduction or check the Introduction section of the CAPI site. Because CAPI interacts directly with infrastructure providers, it typically has to have some way of authenticating to those infrastructure providers. The AWS provider for Cluster API is no exception. In this post, I’ll show how to update the AWS credentials used by CAPA.

Why might you need to update the credentials being used by CAPA? Security professionals recommend that users rotate credentials on a regular basis, and when those credentials get rotated you’ll need to update what CAPA is using. There are other reasons, too; perhaps you started with one set of credentials but now want to move to a different set of credentials. Fortunately, the process for updating the CAPA credentials isn’t too terribly tedious.

CAPA stores the credentials it uses as a Secret in the “capa-system” namespace. Run kubectl -n capa-system get secrets and you’ll see the “capa-manager-bootstrap-credentials” Secret. The credentials themselves are stored under a key named credentials; you can use this command to retrieve and decode them (if you’re using macOS, change the -d to -D):

kubectl -n capa-system get secret capa-manager-bootstrap-credentials \
-o jsonpath="{.data.credentials}" | base64 -d

The command will return something like this (but with valid access key ID, secret access key, and region values, obviously):

[default]
aws_access_key_id = <access-key-id-value-here>
aws_secret_access_key = <secret-access-key-value-here>
region = <aws-region-here>

There are a couple of different ways to update this information. What I’ll describe below is one way to do it.

First, you’ll need to encode a correct/working set of credentials into a Base64-encoded string. Fortunately, the clusterawsadm command can do this for you. Before running clusterawsadm, be sure to set—as needed—the AWS_PROFILE, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION environment variables. If you’re using version 0.5.4 or earlier of clusterawsadm, you can use this command to generate the necessary Secret materials:

clusterawsadm alpha bootstrap encode-aws-credentials

If you’re using clusterawsadm 0.5.5 or later, the command changes to this:

clusterawsadm bootstrap credentials encode-as-profile

Keep the output of this command handy; you’ll need it shortly.

Next, use kubectl -n capa-system edit secret capa-manager-bootstrap-credentials to edit the Secret. Replace the existing value of the data.credentials field with the new value created above using clusterawsadm. Save your changes.
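
If you’d rather not edit the Secret interactively, the same change can be scripted. Here’s a rough sketch (this assumes clusterawsadm 0.5.5 or later, and assumes the encode command emits only the Base64-encoded string with no extra output):

# Capture the Base64-encoded credentials generated by clusterawsadm.
CREDS=$(clusterawsadm bootstrap credentials encode-as-profile)

# Patch the Secret in place rather than editing it by hand.
kubectl -n capa-system patch secret capa-manager-bootstrap-credentials \
  --type merge -p "{\"data\":{\"credentials\":\"${CREDS}\"}}"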

For the CAPA controller manager to pick up the new credentials in the Secret, restart it with this command:

kubectl -n capa-system rollout restart \
deployment capa-controller-manager

The AWS infrastructure provider in your CAPI management cluster should now be good to go with the updated credentials.

It also appears that upgrading the CAPI components on your management cluster (using clusterctl upgrade plan and clusterctl upgrade apply) will also ensure that updated credentials are embedded into the “capa-manager-bootstrap-credentials” Secret.

If you have any questions about this process, if I’ve explained something incorrectly, or if you have any suggestions for how I can improve this article, please feel free to reach out to me on Twitter or find me on the Kubernetes Slack community. All constructive comments and feedback are welcome!

Behavior Changes in clusterawsadm 0.5.5

Late last week I needed to test some Kubernetes functionality, so I thought I’d spin up a test cluster really quick using Cluster API (CAPI). As often happens with fast-moving projects like Kubernetes and CAPI, my existing CAPI environment had gotten a little out of date. So I updated my environment, and along the way picked up an important change in the default behavior of the clusterawsadm tool used by the Cluster API Provider for AWS (CAPA). In this post, I’ll share more information on this change in default behavior and the impacts of that change.

The clusterawsadm tool is part of CAPA and is used to help manage AWS-specific aspects, particularly around credentials and IAM (Identity and Access Management). As outlined in this doc, users run clusterawsadm to create a CloudFormation stack that prepares an AWS account for use with CAPA. This stack contains roles and policies that enable CAPA to function as expected.

Here’s the change in default behavior:

  • In clusterawsadm 0.5.4 and earlier, using clusterawsadm to create or update the CloudFormation stack would also create a bootstrap IAM user and group by default.
  • In clusterawsadm 0.5.5 and later, creating or updating the associated CloudFormation stack does not create a bootstrap IAM user or group.

This change in default behavior is briefly documented in the 0.5.5 release here. As mentioned in the release, the default behavior can be changed with a configuration file (API reference is available here).

In and of itself, this change in default behavior isn’t significant. What is significant is what happens if you use clusterawsadm 0.5.4 or earlier to create the necessary CAPA stack, and then use clusterawsadm 0.5.5 or later to update this stack. In such cases, if you haven’t taken steps to change the default behavior, the bootstrap IAM user and group are removed. When this happens, you’ll start to see error messages like this (or similar):

The user with name bootstrapper.cluster-api-provider-aws.sigs.k8s.io cannot be found

If your CAPI management cluster is using those credentials to interact with AWS, the CAPA controllers on that management cluster are now broken. You’ll have to update the CAPA controllers to use a new set of credentials (see this blog post for information on that process) before any CAPI-related operations will succeed.

One of the CAPA contributors (thanks, Naadir!) did point out that it is still possible to use the pre-0.5.5 clusterawsadm alpha commands in the 0.5.5 release. The CLI help text has been completely removed, but the command to run is clusterawsadm alpha bootstrap generate-cloudformation <aws-account-id> (this generates the CloudFormation template only; use clusterawsadm alpha bootstrap create-stack to actually create the stack). This command works with both the 0.5.4 and 0.5.5 releases of clusterawsadm, although the latter will generate a deprecation warning. However, the CloudFormation template generated by clusterawsadm 0.5.5 is not identical to the template generated by the 0.5.4 release; it lacks a name for the bootstrap IAM group. I have not tested what impact this has on existing CAPA stacks.

To get identical output (at least, with regard to the bootstrap user and group) between the two releases of clusterawsadm, you must generate a configuration file and make sure this section is present in the configuration file:

spec:
  bootstrapUser:
    enable: true
    userName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io
    groupName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io
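
For reference, that spec block doesn’t stand on its own; it lives inside a complete configuration document. A minimal sketch of the whole file might look like this (note that the apiVersion and kind shown are my assumption based on the 0.5.x bootstrap API reference linked above, so verify them against your clusterawsadm version):

# config.yaml (sketch; verify apiVersion/kind against the clusterawsadm API reference)
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
kind: AWSIAMConfiguration
spec:
  bootstrapUser:
    enable: true
    userName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io
    groupName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io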

Then specify the configuration file when running clusterawsadm:

clusterawsadm bootstrap iam create-cloudformation-stack --config config.yaml

Based on my testing, this should generate a CloudFormation stack that, with regard to the bootstrap IAM user and group, is identical to stacks created with clusterawsadm 0.5.4 and earlier. Thus, if you have existing CAPA environments prepared with clusterawsadm 0.5.4 and earlier, then—at least with regard to the bootstrap IAM user and group—it is safe to update these environments with clusterawsadm 0.5.5.

If anyone has questions, feel free to find me on the K8s Slack or hit me on Twitter. I’ll do my best to help.

Technology Short Take 130

Welcome to Technology Short Take #130! I’ve had this blog post sitting in my Drafts folder waiting to be published for almost a month, and I kept forgetting to actually make it live. Sorry! So, here it is—better late than never, right?

Networking

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I recently came across the jc utility, which converts “ordinary” command-line output from a number of different utilities into structured JSON output. Read about why the author created jc in this blog post.
  • If you’re new to the GNOME desktop environment, Ori Alvarez’s article on how to create a GNOME desktop entry may be useful.
  • According to The Verge, Windows 10 users aren’t very happy about Microsoft’s forced roll-out of its Chromium-based Edge browser.
  • Justin Garrison shared some shell functions for making it easier to switch AWS CLI profiles, or set AWS region (for example). They’re written for zsh, but should be adaptable to other shells without unreasonable effort.
  • James Pulec walks readers through Git Worktrees, the “best Git feature you’ve never heard of.” Indeed!
  • I missed the announcement about the release of the Debian 10 “Buster” handbook.
  • Via Ivan Pepelnjak (who in turn got it from Julia Evans), I learned about entr, a Linux CLI tool to run arbitrary commands when files change. Handy.
  • This site has more details about the X11 Window system than most people care to know.

Programming

  • Gergely Orosz shares some data structures and algorithms he actually used at a few different tech companies.
  • I also learned that the Go language server (gopls, used by Visual Studio Code and many other editors for Go language awareness) doesn’t work properly when go.mod isn’t in the root directory of whatever you’ve opened (see here). The workaround is to use a “multi-root workspace.”

Storage

Virtualization

  • With the announcement of a new version of macOS come new beta builds, and articles about running those beta builds in a VM. Here’s the latest. It’ll be interesting to see how virtualization continues (or maybe doesn’t) when Apple moves to its own custom ARM processors.
  • William Lam talks about options for evaluating vSphere with Kubernetes.

Career/Soft Skills

  • A co-worker (thanks Joe!) pointed out this article on the concept of inversion. I found the article quite interesting, and it has already affected my thinking and how I am approaching/will approach certain projects that I’d like to tackle.

And that’s a wrap! Feel free to contact me on Twitter if you have any questions or comments (constructive feedback is always welcome). Thanks for reading!

Recent Posts

Creating an AWS ELB using Pulumi and Go

In case you hadn’t noticed, I’ve been on a bit of a kick with Pulumi and Go recently. There are two reasons for this. First, I have a number of “learning projects” (things that I decide I’d like to try or test) that would benefit greatly from the use of infrastructure as code. Second, I’ve been working on getting more familiar with Go. The idea of combining both those reasons by using Pulumi with Go seemed natural. Unfortunately, examples of using Pulumi with Go seem to be more limited than examples of using Pulumi with other languages, so in this post I’d like to share how to create an AWS ELB using Pulumi and Go.

Read more...

Review: Anker PowerExpand Elite Thunderbolt 3 Dock

Over the last couple of weeks or so, I’ve been using my 2017 MacBook Pro (running macOS “Mojave” 10.14.6) more frequently as my daily driver/primary workstation. Along with it, I’ve been using the Anker PowerExpand Elite 13-in-1 Thunderbolt 3 Dock. In this post, I’d like to share my experience with this dock and provide a quick review of the Anker PowerExpand Elite.

Read more...

Technology Short Take 129

Welcome to Technology Short Take #129, where I’ve collected a bunch of links and references to technology-centric resources around the Internet. This collection is (mostly) data center- and cloud-focused, and hopefully I’ve managed to curate a list that has some useful information for readers. Sorry this got published so late; it was supposed to go live this morning!

Read more...

Working Around Docker Desktop's Outdated Kubernetes Version

As of the time that I published this blog post in early July 2020, Docker Desktop for macOS was at version 2.2.0.4 (for the “stable” channel). That version includes a relatively recent version of the Docker engine (19.03.8, compared to 19.03.12 on my Fedora 31 box), but a quite outdated version of Kubernetes (1.15.5, which isn’t supported by upstream). Now, this may not be a problem for users who only use Kubernetes via Docker Desktop. For me, however, the old version of Kubernetes—specifically the old version of kubectl—causes problems. Here’s how I worked around the old version that Docker Desktop supplies. (Also, see the update at the bottom for some additional details that emerged after this post was originally published.)

Read more...

Creating an AWS Security Group using Pulumi and Go

In this post, I’m going to share some examples of how to create an AWS security group using Pulumi and Go. I’m sharing these examples because—as of this writing—the Pulumi site does not provide any examples on how this is done using Go. There are examples for the other languages supported by Pulumi, but not for Go. The syntax is, to me at least, somewhat counterintuitive, although I freely admit this could be due to the fact that I am still pretty new to Go and its syntax.

Read more...

Adopting the Default Route Table of an AWS VPC using Pulumi and Go

Up until now, when I used Pulumi to create infrastructure on AWS, my code would create all-new infrastructure: a new VPC, new subnets, new route tables, new Internet gateway, etc. One thing bothered me, though: when I created a new VPC, that new VPC automatically came with a default route table. My code, however, would create a new route table and then explicitly associate the subnets with that new route table. This seemed less than ideal. (What can I say? I’m a stickler for details.) While building a Go-based replacement for my existing TypeScript code, I found a way to resolve this duplication of resources. In this post, I’ll show you how to “adopt” the default route table of an AWS VPC so that you can manage it in your Pulumi code.

Read more...

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Read more...

Fixes for Some Vagrant Issues on Fedora

Yesterday I needed to perform some testing of an updated version of some software that I use. (I was conducting the testing because this upgrade contained some breaking changes, and needed to understand how to mitigate the breaking changes.) So, I broke out Vagrant (with the Libvirt provider) on my Fedora laptop—and promptly ran into a couple issues. Fortunately, these issues were relatively easy to work around, but since the workarounds were non-intuitive I wanted to share them here for the benefit of others.

Read more...

Technology Short Take 128

Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!

Read more...

Using kubectl via an SSH Tunnel

In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy them in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

Read more...

Making it Easier to Get Started with Cluster API on AWS

I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and easy to follow, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.

Read more...

Creating a Multi-AZ NAT Gateway with Pulumi

I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.

Read more...

Review: Magic Mouse 2 and Magic Trackpad 2 on Fedora

I recently purchased a new Apple Magic Mouse 2 and an Apple Magic Trackpad 2—not to use with my MacBook Pro, but to use with my Fedora-powered laptop (a Lenovo 5th generation ThinkPad X1 Carbon; see my review). I know it seems odd to buy Apple accessories for a non-Apple laptop, and in this post I’d like to talk about why I bought these items as well as provide some (relatively early) feedback on how well they work with Fedora.

Read more...

Using Unison Across Linux, macOS, and Windows

I recently wrapped up an instance where I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues while making it work. Here’s some information on these interoperability issues, and how I worked around them. (Hopefully this information will help someone else.)

Read more...

Technology Short Take 127

Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!