Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Working Around Docker Desktop's Outdated Kubernetes Version

As of the time that I published this blog post in early July 2020, Docker Desktop for macOS was at version 2.2.0.4 (for the “stable” channel). That version includes a relatively recent version of the Docker engine (19.03.8, compared to 19.03.12 on my Fedora 31 box), but a quite outdated version of Kubernetes (1.15.5, which isn’t supported by upstream). Now, this may not be a problem for users who only use Kubernetes via Docker Desktop. For me, however, the old version of Kubernetes—specifically the old version of kubectl—causes problems. Here’s how I worked around the old version that Docker Desktop supplies. (Also, see the update at the bottom for some additional details that emerged after this post was originally published.)

First, you’ll note that Docker Desktop automatically symlinks its version of kubectl into your system path at /usr/local/bin. You can verify the version of Docker Desktop’s kubectl by running this command:

/usr/local/bin/kubectl version --client=true

On my macOS 10.14.6-based system, this returned a version of 1.15.5. According to GitHub, v1.15.5 was released in October of 2019. Per the Kubernetes version skew policy, this version of kubectl would work with 1.14, 1.15, and 1.16 clusters. What if I need to work with a 1.17 or 1.18 cluster? Simple—it just won’t work. In my case, I regularly need to work with newer Kubernetes versions, hence the issue with this old bundled version of kubectl.

Unfortunately, you can’t just delete or rename the symlink that Docker Desktop creates; it will simply re-create the symlink the next time it launches.

So what’s the fix? Because /usr/local/bin is typically pretty early in the system search path (use echo $PATH to see), you’ll need to create a symlink earlier in the search path. Here’s one way to do that:

  1. Create a $HOME/bin or $HOME/.local/bin directory (I prefer the latter, since it mimics my Linux system).
  2. Modify your shell startup files to include this new directory in your search path.
  3. Create a symlink to a newer version of kubectl in that directory.

The steps above should be pretty straightforward, but I’ll expand just a bit on item #2. I’m using bash as my shell on macOS (again, for similarity across macOS and Linux; I’ve installed the latest bash version via homebrew), and so I added this snippet to my ~/.bash_profile:

# Set PATH so it includes user's personal bin directories
# if those directories exist
if [ -d "$HOME/.local/bin" ]; then
    PATH="$HOME/.local/bin:$PATH"
fi

if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi

With this in ~/.bash_profile, if a $HOME/bin or $HOME/.local/bin directory exists, it gets added to the start of the search path, ahead of /usr/local/bin. That means any executables or symlinks placed in those directories will be found before those in /usr/local/bin, so when I place a symlink to a newer version of kubectl in one of them, it gets found before Docker Desktop’s outdated version. Problem solved! (You can, of course, still access the older Docker Desktop version by using the full path.)
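For steps 1 and 3, the one-time setup from a terminal looks roughly like this; the path and version of the newer kubectl binary below are just examples, so point the symlink at whatever newer kubectl you’ve downloaded:

# Step 1: create the personal bin directory
mkdir -p "$HOME/.local/bin"

# Step 3: symlink a newer kubectl binary (the source path/version here are examples)
ln -s "$HOME/tools/kubectl-1.18.5" "$HOME/.local/bin/kubectl"

# After opening a new shell (so ~/.bash_profile is re-read), confirm which kubectl wins
which kubectl
kubectl version --client=true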

Folks who are familiar with UNIX/Linux are probably already very familiar with this sort of approach. However, I suspect there are a fair number of my readers who may be using macOS-based systems but aren’t well-versed in the intricacies of manipulating the $PATH environment variable. Hopefully, this article helps those folks. Feel free to contact me on Twitter if you have any questions or feedback on what I’ve shared here.

UPDATE 2020-07-09: Several folks contacted me on Twitter (thank you!) to point out that a slightly newer version of Docker Desktop may be available in the stable channel, although I have not been able to update my installation thus far. The new version, version 2.3.0.3, comes with Kubernetes 1.16.5, and therefore may not be as problematic for folks as the older version I have on my system. Regardless, the workaround remains valid. Thanks to the readers who responded!

Creating an AWS Security Group using Pulumi and Go

In this post, I’m going to share some examples of how to create an AWS security group using Pulumi and Go. I’m sharing these examples because—as of this writing—the Pulumi site does not provide any examples on how this is done using Go. There are examples for the other languages supported by Pulumi, but not for Go. The syntax is, to me at least, somewhat counterintuitive, although I freely admit that may simply be because I’m still pretty new to Go.

As a framework for providing these examples, I’ll use the scenario that I need to create two different security groups. The first security group will allow SSH traffic from the Internet to designated bastion hosts. The second security group will need to allow SSH from those bastion hosts, as well as allow all traffic between/among members of the security group. Between these two groups, I should be able to show enough examples to cover most of the different use cases you’ll run into.

Although no Go example was available when I wrote this article, readers may still find the API reference for the SecurityGroup resource useful.

First, let’s look at the security group to allow SSH traffic to the bastion hosts. Here’s a snippet of Go code that will create the desired group:

sshSecGrp, err := ec2.NewSecurityGroup(ctx, "ssh-sg", &ec2.SecurityGroupArgs{
	Name:        pulumi.String("ssh-sg"),
	VpcId:       vpc.ID(),
	Description: pulumi.String("Allows SSH traffic to bastion hosts"),
	Ingress: ec2.SecurityGroupIngressArray{
		ec2.SecurityGroupIngressArgs{
			Protocol:    pulumi.String("tcp"),
			ToPort:      pulumi.Int(22),
			FromPort:    pulumi.Int(22),
			Description: pulumi.String("Allow inbound TCP 22"),
			CidrBlocks:  pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
	Egress: ec2.SecurityGroupEgressArray{
		ec2.SecurityGroupEgressArgs{
			Protocol:    pulumi.String("-1"),
			ToPort:      pulumi.Int(0),
			FromPort:    pulumi.Int(0),
			Description: pulumi.String("Allow all outbound traffic"),
			CidrBlocks:  pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
})

The tricky part, for me, was the syntax around the use of ec2.SecurityGroupIngressArray followed by ec2.SecurityGroupIngressArgs (although, in retrospect, I shouldn’t have been thrown off by this since it follows the same pattern Pulumi uses elsewhere). The CidrBlocks parameter is an array (hence the use of pulumi.StringArray) with a single entry.
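The same pattern extends to multiple entries: for example, if you wanted to limit SSH to a couple of specific source ranges instead of 0.0.0.0/0, the CidrBlocks field in the ingress rule above could look like this (the ranges shown are placeholder documentation addresses):

CidrBlocks: pulumi.StringArray{
	pulumi.String("192.0.2.0/24"),
	pulumi.String("198.51.100.0/24"),
},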

Now let’s look at the second security group. This example will demonstrate multiple ingress rules, the use of the Self parameter, and referencing a separate security group as a source:

nodeSecGrp, err := ec2.NewSecurityGroup(ctx, "node-sg", &ec2.SecurityGroupArgs{
	Name:        pulumi.String("node-sg"),
	VpcId:       vpc.ID(),
	Description: pulumi.String("Allows traffic between and among nodes"),
	Ingress: ec2.SecurityGroupIngressArray{
		ec2.SecurityGroupIngressArgs{
			Protocol:       pulumi.String("tcp"),
			ToPort:         pulumi.Int(22),
			FromPort:       pulumi.Int(22),
			Description:    pulumi.String("Allow TCP 22 from bastion hosts"),
			SecurityGroups: pulumi.StringArray{sshSecGrp.ID()},
		},
		ec2.SecurityGroupIngressArgs{
			Protocol:    pulumi.String("-1"),
			ToPort:      pulumi.Int(0),
			FromPort:    pulumi.Int(0),
			Description: pulumi.String("Allow all from this security group"),
			Self:        pulumi.Bool(true),
		},
	},
	Egress: ec2.SecurityGroupEgressArray{
		ec2.SecurityGroupEgressArgs{
			Protocol:    pulumi.String("-1"),
			ToPort:      pulumi.Int(0),
			FromPort:    pulumi.Int(0),
			Description: pulumi.String("Allow all outbound traffic"),
			CidrBlocks:  pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
})

A few things stand out from this second example:

  1. To create multiple ingress rules, simply include an ec2.SecurityGroupIngressArgs entry for each ingress rule in the ec2.SecurityGroupIngressArray. I don’t know why I thought it wouldn’t be that simple, but it is.
  2. Note the use of Self: pulumi.Bool(true); this specifies that the source for this ingress rule is the security group being created.
  3. To reference another security group in an ingress rule, use SecurityGroups: pulumi.StringArray and reference the .ID() of the other security group. The example above uses this to allow SSH from the security group in the first example.
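One other note on mechanics: the snippets above omit error handling and outputs for brevity. Inside the function passed to pulumi.Run, you’d normally check err after each call, and you can export the group IDs so other stacks or tools can reference them. A minimal sketch (the output names are arbitrary):

if err != nil {
	return err
}

// Export the security group IDs as stack outputs (output names are arbitrary)
ctx.Export("sshSecGrpId", sshSecGrp.ID())
ctx.Export("nodeSecGrpId", nodeSecGrp.ID())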

This is all pretty straightforward once you’ve figured it out (isn’t everything?), but without any good examples to help guide the way for new users like myself, it can be challenging to get to the point where you’ve figured it out. Hopefully these examples will help in some small way.

If you have any questions, corrections, or comments, please feel free to contact me on Twitter, or hit me up in the Pulumi community Slack.

Adopting the Default Route Table of an AWS VPC using Pulumi and Go

Up until now, when I used Pulumi to create infrastructure on AWS, my code would create all-new infrastructure: a new VPC, new subnets, new route tables, new Internet gateway, etc. One thing bothered me, though: when I created a new VPC, that new VPC automatically came with a default route table. My code, however, would create a new route table and then explicitly associate the subnets with that new route table. This seemed less than ideal. (What can I say? I’m a stickler for details.) While building a Go-based replacement for my existing TypeScript code, I found a way to resolve this duplication of resources. In this post, I’ll show you how to “adopt” the default route table of an AWS VPC so that you can manage it in your Pulumi code.

Let’s assume you are creating a new VPC using code that looks something like this:

// k8sTag is a string variable assumed to be defined elsewhere in the program
// (for example, a "kubernetes.io/cluster/<cluster-name>" tag key)
vpc, err := ec2.NewVpc(ctx, "testvpc", &ec2.VpcArgs{
	CidrBlock: pulumi.String("10.100.0.0/16"),
	Tags: pulumi.StringMap{
		"Name": pulumi.String("testvpc"),
		k8sTag: pulumi.String("shared"),
	},
})

(Note that this snippet of code doesn’t show anything happening with the return values of the ec2.NewVpc function, which Go will complain about. Make sure you are either using those values or discarding them. The same goes for the other code snippets found below.)

If you’d like to now bring the default route table that automatically gets created with a new VPC into Pulumi, then you can do this with this snippet of code:

defrt, err := ec2.NewDefaultRouteTable(ctx, "defrt", &ec2.DefaultRouteTableArgs{
	DefaultRouteTableId: vpc.DefaultRouteTableId,
	Tags: pulumi.StringMap{
		"Name": pulumi.String("defrt"),
		k8sTag: pulumi.String("shared"),
	},
})

The Pulumi documentation for the ec2.DefaultRouteTable resource indicates that all defined routes will be removed when the default route table is adopted. I took this to mean even the route for the VPC’s own CIDR (the “10.100.0.0/16” shown in the ec2.NewVpc code above), but that wasn’t the behavior I observed. Perhaps they only meant separately defined routes? I don’t know. The documentation also indicates that you can add routes as part of the adoption. Unfortunately, no example Go code exists, and the syntax/structure for adding routes is completely undocumented. I tried using the syntax/structure from the ec2.NewRouteTable resource, but I never got it to work. Instead, I used a separate ec2.NewRoute resource and simply specified the newly-adopted default route table:

// "gw" refers to an Internet gateway created elsewhere in the program
route, err := ec2.NewRoute(ctx, "inet-route", &ec2.RouteArgs{
	RouteTableId:         defrt.ID(),
	DestinationCidrBlock: pulumi.String("0.0.0.0/0"),
	GatewayId:            gw.ID(),
})
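For context, here’s a minimal sketch of what the Internet gateway referenced as gw might look like; the resource name and tags are illustrative:

gw, err := ec2.NewInternetGateway(ctx, "inet-gw", &ec2.InternetGatewayArgs{
	VpcId: vpc.ID(),
	Tags: pulumi.StringMap{
		"Name": pulumi.String("inet-gw"),
	},
})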

The result? A new VPC with only a single route table, and routes for both the VPC’s local CIDR as well as to the Internet through an Internet gateway. No explicit subnet-route table associations were necessary (unlike the previous approach), which reduces code to maintain and simplifies the overall code base (as well as simplifies the resources to manage on AWS).

I hope this information is helpful to someone. I’m finding there to be quite a dearth of documentation on using Pulumi with Go, and I hope that my beginner-level posts help alleviate that in some way. If you have questions—or if you have comments on how I can improve my Go code!—feel free to find me on Twitter or on the Pulumi Slack community.

UPDATE 2020-07-01: I updated the code snippets to use pulumi.StringMap for the tags instead of pulumi.Map. The AWS provider used by Pulumi changed in version 2.11.0 to require string values in maps, hence the need to use pulumi.StringMap.

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Before I proceed, I feel like it is important to provide the disclaimer that I’m new to Go (and therefore still learning). There are probably better ways of doing what I’m doing here, and so I welcome all constructive feedback on how I can improve.

With that disclaimer out of the way, allow me to first provide a small bit of context around this code. When I’m using Pulumi to manage infrastructure on AWS, I like to try to keep things as region-independent as possible. Therefore, I try to avoid hard-coding things like the number of AZs or the AZ names, and prefer to gather that information dynamically—which is what this code does.

Here’s the Go code I concocted:

package main

import (
	"github.com/pulumi/pulumi-aws/sdk/v2/go/aws"
	"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)

func main() {
	pulumi.Run(func(ctx *pulumi.Context) error {
		// Look up Availability Zone (AZ) information for configured region
		desiredAzState := "available"

		rawAzInfo, err := aws.GetAvailabilityZones(ctx, &aws.GetAvailabilityZonesArgs{
			State: &desiredAzState,
		})
		if err != nil {
			return err
		}

		// Determine how many AZs are present
		numOfAZs := len(rawAzInfo.Names)

		// Build a list of AZ names
		azNames := []string{}
		for idx := 0; idx < numOfAZs; idx++ {
			azNames = append(azNames, rawAzInfo.Names[idx])
		}

		return nil
	})
}

The code above assumes that you’ve defined the desired AWS region using pulumi config set aws:region <region> before running it. I’m sure I could add some code to check for that (which I will very likely do in the near future). Otherwise, this code has no other dependencies; it gathers the total number of AZs in a region (stored in numOfAZs) as well as a list of AZ names (stored in the azNames slice). You could then use that information to create subnets in each AZ, distribute instances across AZs, or do something similar; a rough sketch follows below.
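Here’s that rough sketch of creating one subnet per AZ. It assumes the github.com/pulumi/pulumi-aws/sdk/v2/go/aws/ec2 and fmt packages are imported, and that vpc is a VPC resource like the testvpc created in the earlier post; the CIDR blocks and resource names are purely illustrative:

// Create one small subnet in each AZ (CIDRs and resource names are illustrative)
for i, az := range azNames {
	_, err := ec2.NewSubnet(ctx, fmt.Sprintf("subnet-%d", i), &ec2.SubnetArgs{
		VpcId:            vpc.ID(),
		AvailabilityZone: pulumi.String(az),
		CidrBlock:        pulumi.String(fmt.Sprintf("10.100.%d.0/24", i)),
	})
	if err != nil {
		return err
	}
}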

This code doesn’t define any outputs, but it would be reasonably straightforward to add ctx.Export statements (which would be useful, or even necessary, if you needed to consume information from this stack in another stack via a StackReference).
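For example, adding these lines just before the return nil statement would export the AZ count and names as stack outputs; the output names here are arbitrary:

// Export the AZ count and the list of AZ names as stack outputs
ctx.Export("numOfAZs", pulumi.Int(numOfAZs))
ctx.Export("azNames", pulumi.ToStringArray(azNames))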

How I Tested

I tested this Go code with Pulumi 2.4.0 on a Debian 10.4 VM. The VM had Go 1.14.4, Pulumi, and the AWS CLI installed and configured.

There aren’t a lot of examples of using Go with Pulumi (my colleague Leon Stigter is one of the few folks I’ve seen writing about the combination; check out his website), so I hope that sharing this will help others. If you have any questions, feel free to contact me on Twitter or find me in the Pulumi Slack community.

Fixes for Some Vagrant Issues on Fedora

Yesterday I needed to perform some testing of an updated version of some software that I use. (I was conducting the testing because this upgrade contained some breaking changes, and needed to understand how to mitigate the breaking changes.) So, I broke out Vagrant (with the Libvirt provider) on my Fedora laptop—and promptly ran into a couple issues. Fortunately, these issues were relatively easy to work around, but since the workarounds were non-intuitive I wanted to share them here for the benefit of others.

If you’re unfamiliar with Vagrant, have a look at my quick introduction to Vagrant. The “TL;DR” is that Vagrant offers users a consistent workflow for creating and destroying VMs across a fairly wide range of platforms, including both local providers (like VirtualBox or VMware Fusion/VMware Workstation) and cloud providers (such as AWS and Azure). I’ve written a fair amount on Vagrant, so feel free to browse all the “Vagrant”-tagged posts on the site for more information.

Likewise, if you’re unfamiliar with the Libvirt provider, check out this post from 2017 on using Vagrant with Libvirt on Fedora 27.

In my testing yesterday, I ran into two networking-related issues. The first was an error saying the correct Libvirt network could not be found, even though virsh net-list and virsh net-list --all showed the “missing” network was present. The error is described in this GitHub issue, which also contains a fix at the very end of the discussion. If you are using Vagrant 2.2 on Fedora (which I was), some changes were made to Vagrant’s default configuration; this Fedora wiki page outlines the changes. It turns out these changes did (in my case, at least) lead to the “couldn’t find network” behavior. The fix, as outlined on the wiki, was to add libvirt.qemu_use_session = false to my Vagrantfile, and the problem went away. I hadn’t seen this issue in my earlier use of Vagrant with Libvirt because these changes didn’t land until the most recent release of Vagrant (the 2.2.x series).

The second issue was also networking-related; the guest domain (VM) would boot up, but hang while waiting for an IP address. My first thought was this was related to firewalld, but I quickly verified via firewall-cmd that the Libvirt provider was placing the bridge interfaces into the correct zone to allow the necessary traffic. The ultimate fix is described in this Red Hat Bugzilla bug; I had to specify host-passthrough as the CPU mode setting for the Libvirt provider. I have no idea why this works, but it does. I also find it strange that the bug dates back to Fedora 23, yet this is the first time I’ve run into this behavior. Regardless, it solved the problem, enabling me to move forward with the testing.
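Both fixes end up as provider-level settings in the Vagrantfile. Here’s a minimal sketch showing where they go; the box name is just an example:

Vagrant.configure("2") do |config|
  config.vm.box = "fedora/31-cloud-base"  # example box name

  config.vm.provider :libvirt do |libvirt|
    libvirt.qemu_use_session = false       # avoids the "couldn't find network" error
    libvirt.cpu_mode = "host-passthrough"  # avoids the hang waiting for an IP address
  end
end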

(The testing is still underway, by the way. I haven’t figured out a workaround for the breaking changes introduced by the new software version.)

As always, feel free to contact me if you have any questions, comments, or suggestions for improvement. It’s probably easiest to find me on Twitter and engage with me there. Thanks!

Recent Posts

Technology Short Take 128

Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!

Read more...

Using kubectl via an SSH Tunnel

In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy them in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

Read more...

Making it Easier to Get Started with Cluster API on AWS

I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and easy to follow, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.

Read more...

Creating a Multi-AZ NAT Gateway with Pulumi

I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.

Read more...

Review: Magic Mouse 2 and Magic Trackpad 2 on Fedora

I recently purchased a new Apple Magic Mouse 2 and an Apple Magic Trackpad 2—not to use with my MacBook Pro, but to use with my Fedora-powered laptop (a Lenovo 5th generation ThinkPad X1 Carbon; see my review). I know it seems odd to buy Apple accessories for a non-Apple laptop, and in this post I’d like to talk about why I bought these items as well as provide some (relatively early) feedback on how well they work with Fedora.

Read more...

Using Unison Across Linux, macOS, and Windows

I recently wrapped up an instance where I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues while making it work. Here’s some information on these interoperability issues, and how I worked around them. (Hopefully this information will help someone else.)

Read more...

Technology Short Take 127

Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…

Read more...

Technology Short Take 126

Welcome to Technology Short Take #126! I meant to get this published last Friday, but completely forgot. So, I added a couple more links and instead have it ready for you today. I don’t have any links for servers/hardware or security in today’s Short Take, but hopefully there’s enough linked content in the other sections that you’ll still find something useful. Enjoy!

Read more...

Setting up etcd with etcdadm

I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.

Read more...

Using External Etcd with Cluster API on AWS

If you’ve used Cluster API (CAPI), you may have noticed that workload clusters created by CAPI use, by default, a “stacked master” configuration—that is, the etcd cluster is running co-located on the control plane node(s) alongside the Kubernetes control plane components. This is a very common configuration and is well-suited for most deployments, so it makes perfect sense that this is the default. There may be cases, however, where you’ll want to use a dedicated, external etcd cluster for your Kubernetes clusters. In this post, I’ll show you how to use an external etcd cluster with CAPI on AWS.

Read more...

Using Existing AWS Security Groups with Cluster API

I’ve written before about how to use existing AWS infrastructure with Cluster API (CAPI), and I was recently able to help update the upstream documentation on this topic (the upstream documentation should now be considered the authoritative source). These instructions are perfect for placing a Kubernetes cluster into an existing VPC and associated subnets, but there’s one scenario that they don’t yet address: what if you need your CAPI workload cluster to be able to communicate with other EC2 instances or other AWS services in the same VPC? In this post, I’ll show you the CAPI functionality that makes this possible.

Read more...

Using Paw to Launch an EC2 Instance via API Calls

Last week I wrote a post on using Postman to launch an EC2 instance via API calls. Postman is a cross-platform application, so while my post was centered around Postman on Linux (Ubuntu, specifically) the steps should be very similar—if not exactly the same—when using Postman on other platforms. Users of macOS, however, have another option: a macOS-specific peer to Postman named Paw. In this post, I’ll walk through using Paw to issue API requests to AWS to launch an EC2 instance.

Read more...

Using Postman to Launch an EC2 Instance via API Calls

As I mentioned in this post on region and endpoint match in AWS API requests, exploring the AWS APIs is something I’ve been doing off and on for several months. There are a couple of reasons for this; I’ll go into those in a bit more detail shortly. In any case, I’ve been exploring the APIs using Postman (when on Linux) and Paw (when on macOS), and in this post I’ll share how to use Postman to launch an EC2 instance via API calls.

Read more...

Making File URLs Work Again in Firefox

At some point in the last year or so—I don’t know exactly when it happened—Firefox, along with most of the other major browsers, stopped working with file:// URLs. This is a shame, because I like using Markdown for presentations (at least, when it’s a presentation where I don’t need to collaborate with others). However, using this sort of approach generally requires support for file:// URLs (or requires running a local web server). In this post, I’ll show you how to make file:// URLs work again in Firefox.

Read more...

Installing MultiMarkdown 6 on Ubuntu 19.10

Markdown is a core part of many of my workflows. For quite a while, I’ve used Fletcher Penny’s MultiMarkdown processor (available on GitHub) on my various systems. Fletcher offers binary builds for Windows and macOS, but not a Linux binary. Three years ago, I wrote a post on how to compile MultiMarkdown 6 for a Fedora-based system. In this post, I’ll share how to compile it on an Ubuntu-based system.

Read more...
