Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 144

Welcome to Technology Short Take #144! I have a fairly diverse set of links for readers this time around, covering topics from microchips to improving your writing, with stops along the way in topics like Kubernetes, virtualization, Linux, and the popular JSON-parsing tool jq. I hope you find something useful!

Networking

  • A short while ago I was helping someone (an acquaintance of a friend) with some odd DNS issues. I never found the root cause, but we did find a workaround; however, along the way, someone shared this article with me. I thought it was useful, so now I’m sharing it with you.
  • Michael Kashin shares the journey of containerizing NVIDIA Cumulus Linux.

Servers/Hardware

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Storage

Virtualization

Career/Soft Skills

I guess I’d better wrap up now! I hope you found something useful here. If you have any questions, comments, suggestions for improvement, or just want to say hello, feel free to reach out to me. You can find me on Twitter, and I’m also active in a number of different Slack communities (Kubernetes, Kuma, Envoy, and Pulumi, to name a few). I’d love to hear from you!

Establishing VPC Peering with Pulumi and Go

I use Pulumi to manage my lab infrastructure on AWS (I shared some of the details in this April 2020 blog post published on the Pulumi site). Originally I started with TypeScript, but later switched to Go. Recently I had a need to add some VPC peering relationships to my lab configuration. I was concerned that this might pose some problems—due entirely to the way I structure my Pulumi projects and stacks—but as it turned out, it was more straightforward than I expected. In this post, I’ll share some example code and explain what I learned in the process of writing it.

Some Background

First, let me share some background on how I structure my Pulumi projects and stacks.

It all starts with a Pulumi project that manages my base AWS infrastructure—VPC, subnets, route tables and routes, Internet gateways, NAT gateways, etc. I use a separate stack in this project for each region where I need base infrastructure.

All other projects build on “top” of this base project, referencing the resources created by the base project in order to create their own resources. Referencing the resources created by the base project is accomplished via a Pulumi StackReference.
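As a quick sketch of what that looks like in Go, a StackReference is created inside the program’s pulumi.Run callback and then queried for the outputs it needs; note that the project/stack name org/base-infra/us-west-2 below is purely a hypothetical example, not the actual name of my stack:

// Create a reference to one stack of the base infrastructure project.
// The fully qualified stack name here is a made-up example.
srcStackRef, err := pulumi.NewStackReference(ctx, "org/base-infra/us-west-2", nil)
if err != nil {
	return err
}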

In my particular instance, I wanted to create a VPC peering relationship between two VPCs in different regions, i.e., between two stacks of the base infrastructure project. However, I had some questions/concerns about how to do this:

  • I potentially could have added VPC peering into the base infrastructure project, since the VPC peering requester and the VPC peering accepter are separate resources.
  • However, I wanted the flexibility to optionally create a peering relationship, which would not have been possible if I bundled it into the base infrastructure project (without building in some branching logic to make it optional).
  • And yet, if I used a separate project (which affords me the flexibility of optionally adding a peering relationship), then how would that separate project add things to the base project that were necessary for the peering relationship to work, like routes and security group rules?

Although the Pulumi documentation has improved and continues to improve, I couldn’t find any documentation or articles that really addressed these questions/concerns. I will provide a shout-out to Itay from the Pulumi community Slack, who took some time to share their experience with VPC peering (it was very useful).

Establishing VPC Peering

To establish a VPC peering relationship, a few different resources are needed (note that each of these is considered its own independent Pulumi resource, not a property of another resource):

  1. The VPC peering connection, which references the VPC IDs on both sides
  2. The VPC peering connection accepter, which references the VPC peering connection
  3. New routes to direct traffic between the two VPC CIDRs (these wouldn’t already exist because these routes need to reference the VPC peering connection in order to direct traffic appropriately)
  4. New security group rules to allow traffic from the peer VPC CIDR (unless this traffic is already allowed)

Let’s look at some code. Before I can create any of these resources, I need to pull some information from the base infrastructure project stacks via StackReferences. Assuming the StackReferences are named srcStackRef and dstStackRef, then I could pull the corresponding (exported) information like this:

srcPrivateRouteTbl := srcStackRef.GetIDOutput(pulumi.String("privRouteTableId"))
srcVpcId := srcStackRef.GetIDOutput(pulumi.String("vpcId"))
srcNodeSecGrpId := srcStackRef.GetIDOutput(pulumi.String("nodeSecGrpId"))
dstPrivateRouteTbl := dstStackRef.GetIDOutput(pulumi.String("privRouteTableId"))
dstVpcId := dstStackRef.GetIDOutput(pulumi.String("vpcId"))
dstNodeSecGrpId := dstStackRef.GetIDOutput(pulumi.String("nodeSecGrpId"))

That parenthetical “exported” is important: the information I want to pull using a StackReference must be exported via ctx.Export() in the base infrastructure project. Fortunately, I’d exported just about everything, so no changes to the base infrastructure project were needed.
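For illustration, here’s a minimal sketch of what those exports look like in the base infrastructure project; the vpc, privRouteTable, and nodeSecGrp variable names are assumptions for the sake of the example, not necessarily the names used in my actual code:

// In the base infrastructure project, after the resources have been created,
// export the IDs so that other projects can consume them via a StackReference.
ctx.Export("vpcId", vpc.ID())
ctx.Export("privRouteTableId", privRouteTable.ID())
ctx.Export("nodeSecGrpId", nodeSecGrp.ID())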

Next, I needed to set up a new AWS provider. Since creating a VPC peering relationship across regions means creating resources in two different regions (in my use case, at least), a new (additional) AWS provider to handle the second region is needed:

dstProvider, err := aws.NewProvider(ctx, "dstProvider", &aws.ProviderArgs{
	Region: pulumi.String(dstVpcRegion),
})

Armed with the second provider and the information from the base infrastructure project stacks (the names of which are parameterized to make the code more reusable), I can proceed with creating the VPC peering connection and VPC peering connection accepter:

peerConn, err := ec2.NewVpcPeeringConnection(ctx, "peering-connection", &ec2.VpcPeeringConnectionArgs{
	PeerRegion: pulumi.String(dstVpcRegion),
	PeerVpcId:  dstVpcId,
	VpcId:      srcVpcId,
})

_, err = ec2.NewVpcPeeringConnectionAccepter(ctx, "peering-acceptor", &ec2.VpcPeeringConnectionAccepterArgs{
	VpcPeeringConnectionId: peerConn.ID(),
	AutoAccept:             pulumi.Bool(true),
}, pulumi.Provider(dstProvider))

At this point, the relationship is created, but no traffic will pass between the VPCs (there’s no route, and the traffic wouldn’t be allowed by my security groups anyway). Now we start to get into the area where most of my questions/concerns were centered: how was this separate peering project going to be able to modify things that sat inside the base infrastructure project, like route tables and security groups? Using a separate project—as opposed to building the peering into the base infrastructure project—seemed like the best/right approach. Would it work?

As it turns out, yes, it does work! I had been thinking too “atomically,” treating the route table and the security group as singular entities. In reality, they are not: routes are separate resources that merely reference the ID of the route table to which they belong (just as route table associations, which link tables to subnets, are separate resources). Similarly, security group rules can exist as an independent resource, referencing only the ID of the security group in which those rules should be included. This was a key expansion of my understanding.

Here’s the code to create the new routes (the VPC CIDRs are parameterized):

_, err = ec2.NewRoute(ctx, "src-peer-route", &ec2.RouteArgs{
	RouteTableId:           srcPrivateRouteTbl,
	DestinationCidrBlock:   pulumi.String(netAddrMap[dstVpcRegion]),
	VpcPeeringConnectionId: peerConn.ID(),
})

_, err = ec2.NewRoute(ctx, "dst-peer-route", &ec2.RouteArgs{
	RouteTableId:           dstPrivateRouteTbl,
	DestinationCidrBlock:   pulumi.String(netAddrMap[srcVpcRegion]),
	VpcPeeringConnectionId: peerConn.ID(),
}, pulumi.Provider(dstProvider))

You can see that I needed only to reference the route table ID in order to create the route (and the peering connection ID, of course, but that was created in this same project).

Similarly, referencing the security group ID gained via a StackReference to the base infrastructure project stacks allowed me to insert a security group rule to allow the traffic:

_, err = ec2.NewSecurityGroupRule(ctx, "src-peer-cidr", &ec2.SecurityGroupRuleArgs{
	Type:            pulumi.String("ingress"),
	FromPort:        pulumi.Int(0),
	ToPort:          pulumi.Int(65535),
	Protocol:        pulumi.String("all"),
	CidrBlocks:      pulumi.StringArray{pulumi.String(dstVpcCidr)},
	SecurityGroupId: srcNodeSecGrpId,
})

_, err = ec2.NewSecurityGroupRule(ctx, "dst-peer-cidr", &ec2.SecurityGroupRuleArgs{
	Type:            pulumi.String("ingress"),
	FromPort:        pulumi.Int(0),
	ToPort:          pulumi.Int(65535),
	Protocol:        pulumi.String("all"),
	CidrBlocks:      pulumi.StringArray{pulumi.String(srcVpcCidr)},
	SecurityGroupId: dstNodeSecGrpId,
}, pulumi.Provider(dstProvider))

In all of the above examples, please note that I’ve omitted code to handle the value of err and to return errors; you’d want to add that yourself before you can use the code.
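As a minimal sketch of that pattern, each call would be followed by a check along these lines (using the provider creation from earlier as the example, and assuming fmt is imported):

dstProvider, err := aws.NewProvider(ctx, "dstProvider", &aws.ProviderArgs{
	Region: pulumi.String(dstVpcRegion),
})
if err != nil {
	// Returning the error from the pulumi.Run callback aborts the update.
	return fmt.Errorf("creating provider for %s: %w", dstVpcRegion, err)
}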

Running pulumi up was successful (no errors, first try!), and a quick check of connectivity showed that my workloads were able to communicate across the VPC peering relationship. Success!

Lesson Learned

The key thing I gained from working on this was a better understanding of the relationship between things like route tables and routes, or between security group rules and security groups. Being able to separate the management of routes in a table or rules in a security group into separate projects is very useful, and makes it much easier to “layer” projects and stacks.

I hope this post is helpful. If you have any questions, or if you have corrections or suggestions for improving the post, feel free to reach out to me. You can easily find me on Twitter, and I also hang out in the Pulumi Slack community.

Using the AWS CLI to Tag Groups of AWS Resources

To conduct some testing, I recently needed to spin up a group of Kubernetes clusters on AWS. Generally speaking, my “weapon of choice” for something like this is Cluster API (CAPI) with the AWS provider. Normally this would be enormously simple. In this particular case—for reasons that I won’t bother going into here—I needed to spin up all these clusters in a single VPC. This presents a problem for the Cluster API Provider for AWS (CAPA), as it currently doesn’t add some required tags to existing AWS infrastructure (see this issue). The fix is to add the tags manually, so in this post I’ll share how I used the AWS CLI to add the necessary tags.

Without the necessary tags, the AWS cloud provider—which is responsible for the integration that creates Elastic Load Balancers (ELBs) in response to the creation of a Service of type LoadBalancer, for example—won’t work properly. Specifically, the following tags are needed:

kubernetes.io/cluster/<cluster-name>
kubernetes.io/role/elb
kubernetes.io/role/internal-elb

The latter two tags are mutually exclusive: kubernetes.io/role/elb should be assigned to public subnets to tell the AWS cloud provider where to place public-facing ELBs, while kubernetes.io/role/internal-elb should be assigned to private subnets to manage the placement of internal ELBs.

When CAPA is managing the infrastructure, this isn’t a problem because CAPA will add the necessary tags when it creates the infrastructure. Therefore, had I been able to use a separate VPC for each cluster, I could have let CAPA manage the infrastructure and avoided any issues entirely. In this case I was using a separate infrastructure-as-code tool (Pulumi) to manage the underlying AWS infrastructure and had the requirement to use a single VPC for multiple clusters.

Now, I could have logged into the AWS console and used “point-and-click” to work my way through tagging the VPC and the subnets, but I preferred to use the AWS CLI. I quickly found the aws ec2 create-tags command, which would do exactly what I needed; all I had to do was provide a list of the resources to tag and the tags to add.

To find the resources, I just had to make use of the tags that I’d made sure to assign to all the resources I created. So, to find all my public subnets, I used this AWS CLI command:

aws ec2 describe-subnets --filters Name=tag:Owner,Values="Scott Lowe" Name=tag:Name,Values="*pub*" --query 'Subnets[*].SubnetId' --output text

Similarly, I could pull up my private subnets like this:

aws ec2 describe-subnets --filters Name=tag:Owner,Values="Scott Lowe" Name=tag:Name,Values="*priv*" --query 'Subnets[*].SubnetId' --output text

Next, I had to prepare the tags I wanted added to each resource. For this, the --generate-cli-skeleton input parameter was very helpful; it generated the following skeleton:

{
    "DryRun": true,
    "Resources": [
        ""
    ],
    "Tags": [
        {
            "Key": "",
            "Value": ""
        }
    ]
}

Using this skeleton as the foundation, I created two JSON input files—one for the tags to be assigned to public subnets, and one for the tags to be assigned to private subnets.
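To give you an idea of what I mean (this is an illustrative sketch, not my actual file), the public subnet input file referenced as pub-subnet-tags.json in the commands below could look something like this; the <cluster-name> placeholder would be replaced with the actual cluster name, and the empty Resources entry is simply overridden by the --resources parameter on the command line:

{
    "DryRun": false,
    "Resources": [
        ""
    ],
    "Tags": [
        {
            "Key": "kubernetes.io/cluster/<cluster-name>",
            "Value": "shared"
        },
        {
            "Key": "kubernetes.io/role/elb",
            "Value": "1"
        }
    ]
}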

The documentation for the aws ec2 create-tags command indicated that it would take a space-delimited list of resource IDs, and the output from the aws ec2 describe-subnets command appeared to be space-delimited. At this point, I thought I’d be able to do something like this:

aws ec2 describe-subnets --filters Name=tag:Owner,Values="Scott Lowe" \
Name=tag:Name,Values="*priv*" --query 'Subnets[*].SubnetId' \
--output text | xargs -I {} aws ec2 create-tags --resources {} \
--cli-input-json file://priv-subnet-tags.json

Alas, this did not work. Further, no amount of messing around with the output of the aws ec2 describe-subnets command could get it into a format that aws ec2 create-tags liked. I’m sure it was an error of some sort on my part, but I couldn’t figure it out.

I’d been spending a fair amount of time with jq recently (parsing a lot of Envoy configurations), so I thought, “Why not drop back to JSON output and use jq?”

Switching from --output text to JSON output and piping it through jq finally got me to a working command. Here’s the command for the public subnets:

aws ec2 describe-subnets --filters Name=tag:Owner,Values="Scott Lowe" \
Name=tag:Name,Values="*pub*" --query 'Subnets[*].SubnetId' \
--output json | jq -r '.[]' | xargs -p -I {} -n 1 aws ec2 create-tags \
--resources {} --cli-input-json file://pub-subnet-tags.json

There was a corresponding version of this command for the private subnets as well.

Along the way, I did end up accidentally applying some private tags to public subnets; fortunately, the aws ec2 delete-tags command was there to save me.
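If you find yourself in a similar situation, removing an errant tag looks something like this (the subnet ID below is just a placeholder):

aws ec2 delete-tags --resources subnet-0123456789abcdef0 \
--tags Key=kubernetes.io/role/internal-elb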

Once the (correct) tags were applied, I was able to create a series of workload clusters via CAPI, and everything worked just as expected.

What did I learn from this whole process?

  • I had not been previously aware of the aws ec2 create-tags and aws ec2 delete-tags commands; these are pretty handy.
  • It seems that working with structured data, like JSON, can sometimes be easier than working with freeform text. jq is your ally here.
  • Using the --generate-cli-skeleton input parameter is very useful for generating JSON input documents. I’ll definitely be using that one again.

I hope this information is useful to you in some way. Thanks for reading! Feedback is always welcome, so feel free to reach out to me on Twitter if you have any questions or comments.

Technology Short Take 143

Welcome to Technology Short Take #143! I have what I think is an interesting list of links to share with you this time around. Since taking my new job at Kong, I’ve been spending more time with Envoy, so you’ll see some Envoy-related content showing up in this Technology Short Take. I hope this collection of links has something useful for you!

Networking

Servers/Hardware

Security

  • I saw this blog post about Curiefense, an open source extension that adds WAF (web application firewall) functionality to Envoy.
  • This post on using SPIFFE/SPIRE, Kubernetes, and Envoy together shows how to implement mutual TLS (mTLS) for a simple application. As a learning resource, I thought this post was helpful. However, I wouldn’t recommend trying to cobble together something like this for a production environment. If you need mTLS in production, use a service mesh that supports this sort of functionality.

Cloud Computing/Cloud Management

Operating Systems/Applications

Storage

Virtualization

Career/Soft Skills

And with that, I’ll wrap this up. As always, I love to hear from readers, so feel free to engage with me on Twitter or find me on any one of a number of different Slack communities. Have a great weekend!

Starting WireGuard Interfaces Automatically with Launchd on macOS

In late June of this year, I wrote a piece on using WireGuard on macOS via the CLI, where I walked macOS users through configuring and using the WireGuard VPN from the terminal (as opposed to using the GUI client, which I discussed here). In that post, I briefly mentioned that I was planning to explore how to have macOS’ launchd automatically start WireGuard interfaces. In this post, I’ll show you how to do exactly that.

These instructions borrow heavily from this post showing how to use macOS as a WireGuard VPN server. These instructions also assume that you’ve already walked through installing the necessary WireGuard components, and that you’ve already created the configuration file(s) for your WireGuard interface(s). Finally, I wrote this using my M1-based MacBook Pro, so my example files and instructions will be referencing the default Homebrew prefix of /opt/homebrew. If you’re on an Intel-based Mac, change this to /usr/local instead.

The first step is to create a launchd job definition. This file should be named <label>.plist, and it will need to be placed in a specific location. The <label> value is taken from the name given to the job itself, which you’ll see in the example job definition below. Since this job modifies the networking configuration of your macOS system, it must be treated as a “global daemon” and placed in the /Library/LaunchDaemons directory.

Here’s an example of a job definition:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>Label</key>
        <string>com.wireguard.wg0</string>
        <key>ProgramArguments</key>
        <array>
            <!-- Points to local version of wg-quick that
                 fixes path issues with the script -->
            <string>/Users/slowe/.local/bin/wg-quick</string>
            <string>up</string>
            <string>wg0</string>
        </array>
        <key>KeepAlive</key>
            <dict>
                <key>NetworkState</key>
                <true/>
            </dict>
        <key>RunAtLoad</key>
        <true/>
        <key>StandardErrorPath</key>
        <string>/opt/homebrew/var/log/wireguard.err</string>
        <key>EnvironmentVariables</key>
        <dict>
            <key>PATH</key>
            <!-- Adds in user-specific and Homebrew bin directories to start of PATH -->
            <string>/Users/slowe/.local/bin:/opt/homebrew/bin:/usr/local/sbin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin</string>
        </dict>
    </dict>
</plist>

For the most part, the job definition is pretty easy to figure out, but here are a few notes:

  • The <label> reference in naming the file pertains to the value of the Label key in the job definition. In my example, I’m calling the job “com.wireguard.wg0”, so the filename should be com.wireguard.wg0.plist. Note that the extension is necessary.
  • Note in the ProgramArguments key that I’m referencing my personally modified copy of the Homebrew wg-quick script, which works around some path- and shell-related issues on M1-based Macs (as described here).
  • As mentioned earlier, users of Intel-based Macs would want to change references to /opt/homebrew over to /usr/local (and the reference to the local copy of wg-quick may not be necessary).

Make sure this job definition is saved to an appropriately named file (<label>.plist) in the /Library/LaunchDaemons directory. Then run these commands to inform launchd of the new job definition you just created:

sudo launchctl enable system/com.wireguard.wg0
sudo launchctl bootstrap system /Library/LaunchDaemons/com.wireguard.wg0.plist

After running these two commands, you should be able to run wg show and see your WireGuard interface up and running. The interface should also come up automatically every time you restart your computer, as long as a network connection is present and active.
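If you want to verify that launchd actually loaded the job, something along these lines should work (the label matches the example job definition above):

sudo launchctl print system/com.wireguard.wg0
wg show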

Additional Resources

In addition to the Barrowclift post, I also found the launchd.info site to be quite helpful.

Although the above instructions work without any issues on my Mac, I can’t guarantee they’ll work on every system out there. If you run into issues, or if you find that I’ve provided incorrect or incomplete information, please let me know. You can interact with me on Twitter, or drop me an e-mail (my address isn’t too hard to find).

Recent Posts

An Alternate Approach to etcd Certificate Generation with Kubeadm

I’ve written a fair amount about kubeadm, which was my preferred way of bootstrapping Kubernetes clusters until Cluster API arrived. Along the way, I’ve also discussed using kubeadm to assist with setting up etcd, the distributed key-value store leveraged by the Kubernetes control plane (see here, here, and here). In this post, I’d like to revisit the topic of using kubeadm to set up an etcd cluster, this time taking a look at an alternative to the approach for generating the necessary TLS certificates that the official documentation describes.

Read more...

Technology Short Take 142

Welcome to Technology Short Take #142! This time around, the Networking section is a bit light, but I’ve got plenty of cloud computing links and articles for you to enjoy, along with some stuff on OSes and applications, programming, and soft skills. Hopefully there’s something useful here for you!

Read more...

Adding Multiple Items Using Kustomize JSON 6902 Patches

Recently, I needed to deploy a Kubernetes cluster via Cluster API (CAPI) into a pre-existing AWS VPC. As I outlined in this post from September 2019, this entails modifying the CAPI manifest to include the VPC ID and any associated subnet IDs, as well as referencing existing security groups where needed. I knew that I could use the kustomize tool to make these changes in a declarative way, as I’d explored using kustomize with Cluster API manifests some time ago. This time, though, I needed to add a list of items, not just modify an existing value. In this post, I’ll show you how I used a JSON 6902 patch with kustomize to add a list of items to a CAPI manifest.

Read more...

Using WireGuard on macOS via the CLI

I’ve written a few different posts on WireGuard, the “simple yet fast and modern VPN” (as described by the WireGuard web site) that aims to supplant tools like IPSec and OpenVPN. My first post on WireGuard showed how to configure WireGuard on Linux, both on the client side as well as on the server side. After that, I followed it up with posts on using the GUI WireGuard app to configure WireGuard on macOS and—most recently—making WireGuard from Homebrew work on an M1-based Mac. In this post, I’m going to take a look at using WireGuard on macOS again, but this time via the CLI.

Read more...

Installing Older Versions of Kumactl on an M1 Mac

The Kuma community recently released version 1.2.0 of the open source Kuma service mesh, and along with it a corresponding version of kumactl, the command-line utility for interacting with Kuma. To make it easy for macOS users to get kumactl, the Kuma community maintains a Homebrew formula for the CLI utility. That includes providing M1-native (ARM64) macOS binaries for kumactl. Unfortunately, installing an earlier version of kumactl on an M1-based Mac using Homebrew is somewhat less than ideal. Here’s one way—probably not the only way—to work around some of the challenges.

Read more...

Making WireGuard from Homebrew Work on an M1 Mac

After writing the post on using WireGuard on macOS (using the official WireGuard GUI app from the Mac App Store), I found the GUI app’s behavior to be less than ideal. For example, tunnels marked as on-demand would later show up as no longer configured as an on-demand tunnel. When I decided to set up WireGuard on my M1-based MacBook Pro (see my review of the M1 MacBook Pro), I didn’t want to use the GUI app. Fortunately, Homebrew has formulas for WireGuard. Unfortunately, the WireGuard tools as installed by Homebrew on an M1-based Mac won’t work. Here’s how to fix that.

Read more...

Kubernetes Port Names and Terminating HTTPS Traffic on AWS

I recently came across something that wasn’t immediately intuitive with regard to terminating HTTPS traffic on an AWS Elastic Load Balancer (ELB) when using Kubernetes on AWS. At least, it wasn’t intuitive to me, and I’m guessing that it may not be intuitive to some other readers as well. Kudos to my teammates Hart Hoover and Brent Yarger for identifying the resolution, which I’m going to call out in this post.

Read more...

Technology Short Take 141

Welcome to Technology Short Take #141! This is the first Technology Short Take compiled, written, and published entirely on my M1-based MacBook Pro (see my review here). The collection of links shared below covers a fairly wide range of topics, from old Sun hardware to working with serverless frameworks in the public cloud. I hope that you find something useful here. Enjoy!

Read more...

Review: Logitech Ergo K860 Ergonomic Keyboard

As part of an ongoing effort to refine my work environment, several months ago I switched to a Logitech Ergo K860 ergonomic keyboard. While I’m not a “keyboard snob,” I am somewhat particular about the feel of my keyboard, so I wasn’t sure how I would like the K860. In this post, I’ll provide my feedback, and provide some information on how well the keyboard works with both Linux and macOS.

Read more...

Review: 2020 M1-Based MacBook Pro

I hadn’t done a personal hardware refresh in a while; my laptop was a 2017-era MacBook Pro (with the much-disliked butterfly keyboard) and my tablet was a 2014-era iPad Air 2. Both were serviceable but starting to show their age, especially with regard to battery life. So, a little under a month ago, I placed an order for some new Apple equipment. Included in that order was a new 2020 13" MacBook Pro with the Apple-designed M1 CPU. In this post, I’d like to provide a brief review of the 2020 M1-based MacBook Pro based on the past month of usage.

Read more...

The Next Step

The Greek philosopher Heraclitus is typically attributed as the creator of the well-known phrase “Change is the only constant.” Since I left VMware in 2018 to join Heptio, change has been my companion. First, there was the change of focus, moving to a focus on Kubernetes and related technologies. Then there was the acquisition of Heptio by VMware, and all the change that comes with an acquisition. Just when things were starting to settle down, along came the acquisition of Pivotal by VMware and several more rounds of changes as a result. Today, I mark the start of another change, as I begin a new role and take the next step in my career journey.

Read more...

Technology Short Take 140

Welcome to Technology Short Take #140! It’s hard to believe it’s already the start of May 2021—my how time flies! In this Technology Short Take, I’ve gathered some links for you covering topics like Azure and AWS networking, moving from macOS to Linux (and back again), and more. Let’s jump right into the content!

Read more...

Making Firefox on Linux use Private Browsing by Default

While there are a couple different methods to make Firefox use private browsing by default (see this page for a couple methods), these methods essentially force private browsing and disable the ability to use “regular” (non-private) browsing. In this post, I’ll describe what I consider to be a better way of achieving this, at least on Linux.

Read more...

Technology Short Take 139

Welcome to Technology Short Take #139! This Technology Short Take is a bit heavy on cloud, OS, and programming topics, but there should be enough other interesting links to be useful to plenty of folks. (At least, I hope that’s the case!) Now, let’s get on to the content!

Read more...

Using WireGuard on macOS

A short while ago I published a post on setting up WireGuard for AWS VPC access. In that post, I focused on the use of Linux on both the server side (on an EC2 instance in your AWS VPC) as well as on the client side (using the GNOME Network Manager interface). However, WireGuard is not limited to Linux, and I recently configured one of my macOS systems to take advantage of this WireGuard infrastructure for access to the private subnets in my AWS VPC. In this post, I’ll walk readers through configuring macOS to use WireGuard.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!