Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Updating AWS Credentials in Cluster API

I’ve written a bit here and there about Cluster API (aka CAPI), mostly focusing on the Cluster API Provider for AWS (CAPA). If you’re not yet familiar with CAPI, have a look at my CAPI introduction or check the Introduction section of the CAPI site. Because CAPI interacts directly with infrastructure providers, it typically has to have some way of authenticating to those infrastructure providers. The AWS provider for Cluster API is no exception. In this post, I’ll show how to update the AWS credentials used by CAPA.

Why might you need to update the credentials being used by CAPA? Security professionals recommend rotating credentials on a regular basis, and when those credentials get rotated you’ll need to update what CAPA is using. There are other reasons, too; perhaps you started with one set of credentials but now want to move to a different set. Fortunately, the process for updating the CAPA credentials isn’t terribly tedious.

CAPA stores the credentials it uses as a Secret in the “capa-system” namespace. You can use kubectl -n capa-system get secrets and you’ll see the “capa-manager-bootstrap-credentials” Secret. The credentials themselves are stored as a key named credentials; you can use this command to retrieve the credentials and decode them (if you’re using macOS, change the -d to -D):

kubectl -n capa-system get secret capa-manager-bootstrap-credentials \
-o jsonpath="{.data.credentials}" | base64 -d

The command will return something like this (but with valid access key ID, secret access key, and region values, obviously):

[default]
aws_access_key_id = <access-key-id-value-here>
aws_secret_access_key = <secret-access-key-value-here>
region = <aws-region-here>

There are a couple of different ways to update this information; what I’ll describe below is one of them.

First, you’ll need to encode a correct/working set of credentials into a Base64-encoded string. Fortunately, the clusterawsadm command can do this for you. Before running clusterawsadm, be sure to set—as needed—the AWS_PROFILE, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION environment variables. If you’re using version 0.5.4 or earlier of clusterawsadm, you can use this command to generate the necessary Secret material:

clusterawsadm alpha bootstrap encode-aws-credentials

If you’re using clusterawsadm 0.5.5 or later, the command changes to this:

clusterawsadm bootstrap credentials encode-as-profile

Keep the output of this command handy; you’ll need it shortly.

Next, use kubectl -n capa-system edit secret capa-manager-bootstrap-credentials to edit the Secret. Replace the existing value of the data.credentials field with the new value created above using clusterawsadm. Save your changes.
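
If you’d rather not edit the Secret interactively, you can patch it instead. Here’s a rough sketch of that approach using clusterawsadm 0.5.5 or later (it assumes the encoded output is a single line, and that the environment variables mentioned earlier are set appropriately):

NEW_CREDS=$(clusterawsadm bootstrap credentials encode-as-profile)
kubectl -n capa-system patch secret capa-manager-bootstrap-credentials \
--type merge -p "{\"data\":{\"credentials\":\"${NEW_CREDS}\"}}"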

For the CAPA controller manager to pick up the new credentials in the Secret, restart it with this command:

kubectl -n capa-system rollout restart \
deployment capa-controller-manager
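
If you’d like to confirm the restart completed before moving on, you can have kubectl wait on the rollout:

kubectl -n capa-system rollout status \
deployment capa-controller-manager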

The AWS infrastructure provider in your CAPI management cluster should now be good to go with the updated credentials.

It also appears that upgrading the CAPI components on your management cluster (using clusterctl upgrade plan and clusterctl upgrade apply) will ensure that updated credentials are embedded into the “capa-manager-bootstrap-credentials” Secret.

If you have any questions about this process, if I’ve explained something incorrectly, or if you have any suggestions for how I can improve this article, please feel free to reach out to me on Twitter or find me on the Kubernetes Slack community. All constructive comments and feedback are welcome!

Behavior Changes in clusterawsadm 0.5.5

Late last week I needed to test some Kubernetes functionality, so I thought I’d spin up a test cluster really quick using Cluster API (CAPI). As often happens with fast-moving projects like Kubernetes and CAPI, my existing CAPI environment had gotten a little out of date. So I updated my environment, and along the way picked up an important change in the default behavior of the clusterawsadm tool used by the Cluster API Provider for AWS (CAPA). In this post, I’ll share more information on this change in default behavior and the impacts of that change.

The clusterawsadm tool is part of CAPA and is used to manage AWS-specific aspects of a CAPA deployment, particularly credentials and IAM (Identity and Access Management). As outlined in this doc, you use clusterawsadm to create a CloudFormation stack that prepares an AWS account for use with CAPA. This stack contains the IAM roles and policies that enable CAPA to function as expected.
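
For reference, the stack in question is created or updated with a clusterawsadm command along these lines (the exact form depends on the clusterawsadm version, as discussed below):

clusterawsadm alpha bootstrap create-stack                 # 0.5.4 and earlier
clusterawsadm bootstrap iam create-cloudformation-stack    # 0.5.5 and later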

Here’s the change in default behavior:

  • In clusterawsadm 0.5.4 and earlier, using clusterawsadm to create or update the CloudFormation stack would also create a bootstrap IAM user and group by default.
  • In clusterawsadm 0.5.5 and later, creating or updating the associated CloudFormation stack does not create a bootstrap IAM user or group.

This change in default behavior is briefly documented in the 0.5.5 release notes here. As mentioned there, the default behavior can be changed with a configuration file (the API reference is available here).

In and of itself, this change in default behavior isn’t significant. What is significant is what happens if you use clusterawsadm 0.5.4 or earlier to create the necessary CAPA stack, and then use clusterawsadm 0.5.5 or later to update that stack. In that case, unless you’ve taken steps to change the default behavior, the bootstrap IAM user and group are removed. When this happens, you’ll start to see error messages like this:

The user with name bootstrapper.cluster-api-provider-aws.sigs.k8s.io cannot be found

If your CAPI management cluster is using those credentials to interact with AWS, the CAPA controllers on that management cluster are now broken. You’ll have to update the CAPA controllers to use a new set of credentials (see this blog post for information on that process) before any CAPI-related operations will succeed.
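
If you’re not sure whether the bootstrap IAM user still exists in your AWS account, a quick check with the AWS CLI (run with credentials that have IAM read permissions) looks something like this; a NoSuchEntity error means the user is gone:

aws iam get-user \
--user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io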

One of the CAPA contributors (thanks, Naadir!) did point out that it is still possible to use the pre-0.5.5 clusterawsadm alpha commands in the 0.5.5 release. The CLI help text has been completely removed, but the command to run is clusterawsadm alpha bootstrap generate-cloudformation <aws-account-id> (this generates the CloudFormation template only; use clusterawsadm alpha bootstrap create-stack to actually create the stack). This command works with both the 0.5.4 and 0.5.5 releases of clusterawsadm, although the latter will generate a deprecation warning. However, the CloudFormation template generated by clusterawsadm 0.5.5 is not identical to the template generated by the 0.5.4 release; it lacks a name for the bootstrap IAM group. I have not tested what impact this has on existing CAPA stacks.

To get identical output (at least, with regard to the bootstrap user and group) between the two releases of clusterawsadm, you must generate a configuration file and make sure this section is present in the configuration file:

spec:
  bootstrapUser:
    enable: true
    userName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io
    groupName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io
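
For context, this spec section goes inside a full AWSIAMConfiguration document. A complete configuration file would look roughly like the following; the apiVersion and kind values are my best recollection of the v1alpha1 bootstrap API, so verify them against the API reference linked above:

# apiVersion/kind reflect the v1alpha1 bootstrap API as I recall it; verify against the API reference
apiVersion: bootstrap.aws.infrastructure.cluster.x-k8s.io/v1alpha1
kind: AWSIAMConfiguration
spec:
  bootstrapUser:
    enable: true
    userName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io
    groupName: bootstrapper.cluster-api-provider-aws.sigs.k8s.io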

Then specify the configuration file when running clusterawsadm:

clusterawsadm bootstrap iam create-cloudformation-stack --config config.yaml

Based on my testing, this should generate a CloudFormation stack that, with regard to the bootstrap IAM user and group, is identical to stacks created with clusterawsadm 0.5.4 and earlier. Thus, if you have existing CAPA environments prepared with clusterawsadm 0.5.4 and earlier, then—at least with regard to the bootstrap IAM user and group—it is safe to update these environments with clusterawsadm 0.5.5.

If anyone has questions, feel free to find me on the K8s Slack or hit me on Twitter. I’ll do my best to help.

Technology Short Take 130

Welcome to Technology Short Take #130! I’ve had this blog post sitting in my Drafts folder waiting to be published for almost a month, and I kept forgetting to actually make it live. Sorry! So, here it is—better late than never, right?

Networking

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

  • I recently came across the jc utility, which converts “ordinary” command-line output from a number of different utilities into structured JSON output. Read about why the author created jc in this blog post.
  • If you’re new to the GNOME desktop environment, Ori Alvarez’s article on how to create a GNOME desktop entry may be useful.
  • According to The Verge, Windows 10 users aren’t very happy about Microsoft’s forced roll-out of its Chromium-based Edge browser.
  • Justin Garrison shared some shell functions that make it easier to switch AWS CLI profiles or set the AWS region, for example. They’re written for zsh, but should be adaptable to other shells without unreasonable effort.
  • James Pulec walks readers through Git Worktrees, the “best Git feature you’ve never heard of.” Indeed!
  • I missed the announcement about the release of the Debian 10 “Buster” handbook.
  • Via Ivan Pepelnjak (who in turn got it from Julia Evans), I learned about entr, a Linux CLI tool to run arbitrary commands when files change. Handy.
  • This site has more details about the X Window System (X11) than most people care to know.

Programming

  • Gergely Orosz shares some data structures and algorithms he actually used at a few different tech companies.
  • I also learned that the Go language server (gopls, used by Visual Studio Code and many other editors for Go language awareness) doesn’t work properly when go.mod isn’t in the root directory of whatever you’ve opened (see here). The workaround is to use a “multi-root workspace.”

Storage

Virtualization

  • With the announcement of a new version of macOS come new beta builds, and articles about running those beta builds in a VM. Here’s the latest. It’ll be interesting to see how virtualization continues (or maybe doesn’t) when Apple moves to its own custom ARM processors.
  • William Lam talks about options for evaluating vSphere with Kubernetes.

Career/Soft Skills

  • A co-worker (thanks Joe!) pointed out this article on the concept of inversion. I found the article quite interesting, and it has already affected my thinking and how I approach (and will approach) certain projects that I’d like to tackle.

And that’s a wrap! Feel free to contact me on Twitter if you have any questions or comments (constructive feedback is always welcome). Thanks for reading!

Creating an AWS ELB using Pulumi and Go

In case you hadn’t noticed, I’ve been on a bit of a kick with Pulumi and Go recently. There are two reasons for this. First, I have a number of “learning projects” (things that I decide I’d like to try or test) that would benefit greatly from the use of infrastructure as code. Second, I’ve been working on getting more familiar with Go. The idea of combining both those reasons by using Pulumi with Go seemed natural. Unfortunately, examples of using Pulumi with Go seem to be more limited than examples of using Pulumi with other languages, so in this post I’d like to share how to create an AWS ELB using Pulumi and Go.

Here’s the example code:

elb, err := elb.NewLoadBalancer(ctx, "elb", &elb.LoadBalancerArgs{
	NamePrefix:             pulumi.String(baseName),
	CrossZoneLoadBalancing: pulumi.Bool(true),
	AvailabilityZones:      pulumi.StringArray(azNames),
	Instances:              pulumi.StringArray(cpNodeIds),
	SecurityGroups:         pulumi.StringArray{elbSecGrp.ID()},
	HealthCheck: &elb.LoadBalancerHealthCheckArgs{
		HealthyThreshold:   pulumi.Int(3),
		Interval:           pulumi.Int(30),
		Target:             pulumi.String("SSL:6443"),
		UnhealthyThreshold: pulumi.Int(3),
		Timeout:            pulumi.Int(15),
	},
	Listeners: &elb.LoadBalancerListenerArray{
		&elb.LoadBalancerListenerArgs{
			InstancePort:     pulumi.Int(6443),
			InstanceProtocol: pulumi.String("TCP"),
			LbPort:           pulumi.Int(6443),
			LbProtocol:       pulumi.String("TCP"),
		},
	},
	Tags: pulumi.StringMap{
		"Name": pulumi.String(fmt.Sprintf("cp-elb-%s", baseName)),
		k8sTag: pulumi.String("shared"),
	},
})

You can probably infer from the code above that this example creates an ELB that listens on TCP port 6443 and forwards to instances on TCP port 6443 (and is therefore most likely a load balancer for the control plane of a Kubernetes cluster). Not shown above is the error handling code that would check err for a returned error.

My use of pulumi.StringArray for the “AvailabilityZones” and “Instances” parameters is one way to do it; both azNames and cpNodeIds are slices of type []pulumi.StringInput. I took the approach above since I’m collecting information about the various AZs in the azNames slice using code described here, and I gathered the instance IDs for a group of instances in the cpNodeIds slice.
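
As a rough sketch (not the exact code from that earlier post, and the SDK import paths/versions are assumptions on my part), building azNames could look something like this:

// Sketch only: assumes imports of "github.com/pulumi/pulumi/sdk/v2/go/pulumi"
// and "github.com/pulumi/pulumi-aws/sdk/v2/go/aws"; adjust the versions for your project.
azs, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{})
if err != nil {
	return err
}
var azNames []pulumi.StringInput
for _, name := range azs.Names {
	azNames = append(azNames, pulumi.String(name))
}

The cpNodeIds slice can be assembled in a similar fashion, appending each instance’s ID() output (which satisfies pulumi.StringInput) to the slice.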

You could also do something like this, if you wanted to explicitly specify values:

AvailabilityZones: pulumi.StringArray{pulumi.String("us-east-1a")}
Instances:         pulumi.StringArray{pulumi.String("i-01234564789")}

Note that you should probably use the “Subnets” argument instead of the “AvailabilityZones” argument (you can only use one or the other) if you need to attach the ELB to subnets in a particular VPC.
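
In that case, the change is just a matter of swapping arguments; a sketch (the subnet variable names here are placeholders for whatever subnet resources you’ve created) would look like this:

Subnets: pulumi.StringArray{pubSubnetA.ID(), pubSubnetB.ID()},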

Getting the syntax for the health check and the listeners was a bit challenging until I reviewed the package documentation for each (see here for the health check arguments and here for the listener arguments). I quickly realized it followed a similar pattern as the ingress and egress rules for an AWS security group (as outlined in this post).

I hope this example helps. Although I’m not a Go expert (nor am I a Pulumi expert), if you have any questions feel free to reach out to me on Twitter. I also do hang out in the Pulumi community Slack, in case you’d like to contact me there.

Review: Anker PowerExpand Elite Thunderbolt 3 Dock

Over the last couple of weeks or so, I’ve been using my 2017 MacBook Pro (running macOS “Mojave” 10.14.6) more frequently as my daily driver/primary workstation. Along with it, I’ve been using the Anker PowerExpand Elite 13-in-1 Thunderbolt 3 Dock. In this post, I’d like to share my experience with this dock and provide a quick review of the Anker PowerExpand Elite.

Note that I’m posting this as a customer of Anker. I paid for the PowerExpand Elite out of my own pocket, and haven’t received any compensation of any kind from anyone in return for my review. This is just me sharing my experience in the event it will help others.

First Impressions

The dock is both smaller than I expected (it measures 5 inches by 3.5 inches by 1.5 inches) and yet heavier than I expected. It feels solid and well-built. It comes with a (rather large) power brick and a Thunderbolt 3 cable to connect to the MacBook Pro. Setup was insanely easy; plug it in, connect it to the laptop, and you’re off to the races. (I did need to reboot my MacBook Pro for macOS to recognize the network interface in the dock.)

More In-Depth Impressions

Connectivity

I’m able to connect all the peripherals I need easily:

  • My LG 34” ultrawide monitor (via HDMI)
  • My podcasting microphone (via USB)
  • My Logitech C920 webcam (via USB)
  • My external Bose speakers (via analog audio)
  • My “conference call” headset (via USB)

I still have two more USB-C connectors (one of which is a Power Delivery port for faster charging of devices), another USB-A connector, and a Thunderbolt 3 connector unused, as well as the SD/MMC card reader slots.

Reliability

In researching Thunderbolt 3-based docks, I saw a fair number of reviews mentioning issues with the docks “losing” the connection. I can honestly say that hasn’t happened to me a single time in the weeks I’ve been using the PowerExpand Elite. Now, to be fair, my configuration is fairly static—I’m not constantly plugging and unplugging peripherals. If that’s you, then your experience with the PowerExpand Elite may be different.

Other Notes

The PowerExpand Elite does generate a fair amount of heat. It’s definitely warm to the touch, so I’d be sure to keep it somewhere with sufficient ventilation. I keep mine behind the Rain Design mStand that props up my MacBook Pro when it is docked. I’ve also had it placed under my monitor (which is on an adjustable monitor arm), where the ports are a bit more accessible. In both locations, it had plenty of ventilation to keep the dock from getting too hot.

Summary

In summary, I’ve been pretty pleased with the Anker PowerExpand Elite. It works well, required very little effort to set up, and supports all the peripherals I have/need with room for a few more.

If you have any questions, feel free to contact me on Twitter and I’ll do my best to answer them. Thanks!

Recent Posts

Technology Short Take 129

Welcome to Technology Short Take #129, where I’ve collected a bunch of links and references to technology-centric resources around the Internet. This collection is (mostly) data center- and cloud-focused, and hopefully I’ve managed to curate a list that has some useful information for readers. Sorry this got published so late; it was supposed to go live this morning!

Read more...

Working Around Docker Desktop's Outdated Kubernetes Version

As of the time that I published this blog post in early July 2020, Docker Desktop for macOS was at version 2.2.0.4 (for the “stable” channel). That version includes a relatively recent version of the Docker engine (19.03.8, compared to 19.03.12 on my Fedora 31 box), but a quite outdated version of Kubernetes (1.15.5, which isn’t supported by upstream). Now, this may not be a problem for users who only use Kubernetes via Docker Desktop. For me, however, the old version of Kubernetes—specifically the old version of kubectl—causes problems. Here’s how I worked around the old version that Docker Desktop supplies. (Also, see the update at the bottom for some additional details that emerged after this post was originally published.)

Read more...

Creating an AWS Security Group using Pulumi and Go

In this post, I’m going to share some examples of how to create an AWS security group using Pulumi and Go. I’m sharing these examples because—as of this writing—the Pulumi site does not provide any examples on how this is done using Go. There are examples for the other languages supported by Pulumi, but not for Go. The syntax is, to me at least, somewhat counterintuitive, although I freely admit this could be due to the fact that I am still pretty new to Go and its syntax.

Read more...

Adopting the Default Route Table of an AWS VPC using Pulumi and Go

Up until now, when I used Pulumi to create infrastructure on AWS, my code would create all-new infrastructure: a new VPC, new subnets, new route tables, new Internet gateway, etc. One thing bothered me, though: when I created a new VPC, that new VPC automatically came with a default route table. My code, however, would create a new route table and then explicitly associate the subnets with that new route table. This seemed less than ideal. (What can I say? I’m a stickler for details.) While building a Go-based replacement for my existing TypeScript code, I found a way to resolve this duplication of resources. In this post, I’ll show you how to “adopt” the default route table of an AWS VPC so that you can manage it in your Pulumi code.

Read more...

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Read more...

Fixes for Some Vagrant Issues on Fedora

Yesterday I needed to perform some testing of an updated version of some software that I use. (I was conducting the testing because this upgrade contained some breaking changes, and needed to understand how to mitigate the breaking changes.) So, I broke out Vagrant (with the Libvirt provider) on my Fedora laptop—and promptly ran into a couple issues. Fortunately, these issues were relatively easy to work around, but since the workarounds were non-intuitive I wanted to share them here for the benefit of others.

Read more...

Technology Short Take 128

Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!

Read more...

Using kubectl via an SSH Tunnel

In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy them in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

Read more...

Making it Easier to Get Started with Cluster API on AWS

I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and follow, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.

Read more...

Creating a Multi-AZ NAT Gateway with Pulumi

I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.

Read more...

Review: Magic Mouse 2 and Magic Trackpad 2 on Fedora

I recently purchased a new Apple Magic Mouse 2 and an Apple Magic Trackpad 2—not to use with my MacBook Pro, but to use with my Fedora-powered laptop (a Lenovo 5th generation ThinkPad X1 Carbon; see my review). I know it seems odd to buy Apple accessories for a non-Apple laptop, and in this post I’d like to talk about why I bought these items as well as provide some (relatively early) feedback on how well they work with Fedora.

Read more...

Using Unison Across Linux, macOS, and Windows

I recently wrapped up a project in which I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues while making it work. Here’s some information on these interoperability issues, and how I worked around them. (Hopefully this information will help someone else.)

Read more...

Technology Short Take 127

Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…

Read more...

Technology Short Take 126

Welcome to Technology Short Take #126! I meant to get this published last Friday, but completely forgot. So, I added a couple more links and instead have it ready for you today. I don’t have any links for servers/hardware or security in today’s Short Take, but hopefully there’s enough linked content in the other sections that you’ll still find something useful. Enjoy!

Read more...

Setting up etcd with etcdadm

I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!