Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 136

Welcome to Technology Short Take #136, the first Short Take of 2021! The content this time around seems to be a bit more security-focused, but I’ve still managed to include a few links in other areas. Here’s hoping you find something useful!

Servers/Hardware

  • Thinking of buying an M1-powered Mac? You may find this list helpful.

That’s it this time around! If you have any questions, comments, or corrections, feel free to contact me. I’m a regular visitor to the Kubernetes Slack instance, or you can just hit me on Twitter. Thanks!

Using Velero to Protect Cluster API

Cluster API (also known as CAPI) is, as you may already know, an effort within the upstream Kubernetes community to apply Kubernetes-style APIs to cluster lifecycle management—in short, to use Kubernetes to manage the lifecycle of Kubernetes clusters. If you’re unfamiliar with CAPI, I’d encourage you to check out my introduction to Cluster API before proceeding. In this post, I’m going to show you how to use Velero (formerly Heptio Ark) to back up and restore Cluster API objects so as to protect your organization against an unrecoverable issue on your Cluster API management cluster.

To be honest, this process is so straightforward it almost doesn’t need to be explained. In general, the process for backing up the CAPI management cluster looks like this:

  1. Pause CAPI reconciliation on the management cluster.
  2. Back up the CAPI resources.
  3. Resume CAPI reconciliation.

In the event of catastrophic failure, the recovery process looks like this:

  1. Restore from backup onto another management cluster.
  2. Resume CAPI reconciliation.

Let’s look at these steps in a bit more detail.

Pausing and Resuming Reconciliation

The process for pausing and resuming reconciliation of CAPI resources is outlined in this separate blog post. To summarize that post here for convenience, the Cluster API spec includes a paused field that causes the Cluster API controllers to stop reconciliation when the field is set to true (and resume reconciliation when the field is false or absent). Setting this field allows you, the cluster operator, to pause or resume reconciliation.
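To illustrate, assuming a workload cluster named “demo” in the default namespace (the name here is hypothetical), pausing reconciliation is a single kubectl patch command:

kubectl patch cluster demo --type merge -p '{"spec":{"paused":true}}'

To resume reconciliation once the backup is complete, set the field back to false:

kubectl patch cluster demo --type merge -p '{"spec":{"paused":false}}'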

Backing up CAPI Resources

Once you’ve paused reconciliation for Cluster API, you can then run a backup using Velero. Based on my testing, I didn’t see anything unusual or odd about running a backup; generally speaking, it looks to be as simple as velero backup create (with appropriate flags). Given the large number of custom resources used by Cluster API (Clusters, Machines, MachineDeployments, KubeadmConfigs, etc.), it may be challenging to include only Cluster API resources using Velero’s --include-resources functionality. It’s probably easier to either a) skip Velero’s filtering functionality entirely and catch everything, or b) make sure you are using namespaces and/or labels comprehensively for CAPI objects and then use Velero’s --include-namespaces and/or --selector filtering options to select what gets included in the backup. Refer to Velero’s resource filtering documentation for more details.
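As a rough sketch, assuming all your CAPI objects live in a namespace named “capi-clusters” (a hypothetical name), the backup might look something like this:

velero backup create capi-backup --include-namespaces capi-clusters

Alternatively, if you’ve labeled your CAPI objects comprehensively, you could filter by label instead (the label here is purely illustrative):

velero backup create capi-backup --selector capi-backup=true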

Restoring from Backup

As with creating the backup using Velero, restoring from the Velero backup follows the standard Velero procedures (i.e., run velero restore create with appropriate flags/options). Naturally, the cluster to which you are restoring should be a properly configured Cluster API management cluster with the necessary Cluster API components already installed.
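For example, restoring the hypothetical “capi-backup” from the previous section would look something like this:

velero restore create --from-backup capi-backup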

Since this article is more focused on the “Oh no, my management cluster is dead” scenario, all of the information on disaster recovery in the Velero docs applies here.

After the restore is complete, you’ll then want to resume reconciliation on the target/destination cluster, as outlined above.

Backup and Restore Versus Moving

The clusterctl utility used by CAPI for initializing management clusters (among other things) also has a move subcommand that can be used to move CAPI resources from one cluster to another. Some readers may be wondering why they should bother with Velero when they could just use clusterctl move instead.

clusterctl move is a viable option for moving CAPI objects between two clusters as long as both the source and target clusters are up and running. Using Velero, on the other hand, only requires that the source cluster is up and running when a backup needs to be taken; users can then restore this backup to another cluster even if the source cluster has completely failed. I’m also of the opinion that Velero will provide more fine-grained control over what can be backed up and restored, although I have yet to test that directly.

Additional Resources

Readers may find the following resources useful as well:

Disaster recovery use case with Velero

Cluster migration use case with Velero

I hope that readers find this article helpful. If there’s anything I’ve discussed here that you’d like to see examined/explained in greater detail, feel free to let me know. You can find me on the Kubernetes Slack instance, or find me on Twitter. I’d love to hear from you!

Details on the New Desk Layout

Over the holiday break I made some time to work on my desk layout, something I’d been wanting to do for quite a while. I’d been wanting to “up my game,” so to speak, with regard to producing more content, including some video content. Inspired by—and heavily borrowing from—this YouTube video, I decided I wanted to create a similar arrangement for my desk. In this post, I’ll share more details on my setup.

I’ll start with the parts list, which contains links to everything I’m using in this new arrangement.

Parts List

When I shared a picture of the desk layout on Twitter, a number of folks expressed interest in the various components that I used. To make it easier for others who may be interested in replicating their own variation of this setup, here are Amazon links for all the parts I used to build this setup (these are not affiliate links):

  1. WALI Extra Tall Single LCD Monitor Fully Adjustable Desk Mount (qty 2)
  2. WALI Single Fully Adjustable Arm (qty 2)
  3. Pergear TH5 Camera Swivel Mini Tripod Ball Head (qty 2)
  4. Pergear TH3 Pro DSLR Camera Tripod Ball Head
  5. PULUZ Metal Handheld Adjustable 3/8” Tripod/Monopod Extension
  6. Camera Quick Release Screws, 3/8” and 1/4”
  7. Neewer 160 LED Dimmable Light Panel, 2 pack

The Setup

This photo shows the finished setup:

The new desk setup

Breaking it down a bit, here’s how the parts list above is represented in this photo:

  • On each side of the monitor are the extra tall desk mounts (item #1 above). From a setup/installation perspective, there’s nothing special here—these bolt onto the desk. They do offer a grommet installation method, which I didn’t use because the placement of the grommets on my desk is less than ideal.
  • On the desk mount on the right of the photo are four arms. Three of these arms are #2 from the parts list above (the WALI fully adjustable arm); the fourth is a repurposed arm from an Amazon Basics monitor arm (this one). I didn’t include the Amazon Basics monitor arm above because…well, it’s pricey when all you’re using is the arm itself and not the base. You’re probably better off using a WALI monitor arm (I re-used the Amazon Basics arm because I already had one from a previous configuration).
  • The lowest arm on the right desk mount is a WALI arm (item #2 above; more on that in a moment) and is used for a camera. The next arm up is the re-used Amazon Basics arm (for my LG 34” ultrawide monitor). The third arm is used for the microphone boom, and the last arm is for the Neewer LED panel (item #7 on the list above).
  • The left desk mount has only a single arm (currently). It is a WALI arm and is used for a second Neewer LED panel.

Attaching the Camera

Of note is the means by which I attach the camera to the monitor arm and desk mount. I take no credit for this; I got the idea from this YouTube video (the same one linked in the first paragraph). The camera setup has four components:

  1. The WALI monitor arm (item #2 on the parts list)
  2. The PULUZ tripod/monopod extension (item #5 on the parts list)
  3. The Pergear TH3 Pro ball head (item #4 on the parts list)
  4. A camera quick release screw (the 3/8” screw from item #6 on the parts list)

Here’s a picture of the assembled product:

The assembled camera arm

All of this is explained and demonstrated in the YouTube video I’ve referenced a few times, but the assembly goes something like this. First, you’ll need to remove the VESA mount at the end of the monitor arm.

Here’s the arm with the VESA mount attached (as it comes):

Arm with VESA mount

And here’s without the VESA mount:

Arm with VESA mount removed

Once the VESA mount is removed, enlarge the top hole in the monitor arm so that one of the 3/8” camera quick release screws will fit through the hole. Using the 3/8” camera quick release screw, attach the tripod/monopod extension, then the Pergear ball head, and then your camera. This gives you the ability to move the camera position (using the arm) as well as adjust the camera height (using the tripod/monopod extension) for perfect placement.

Attaching the LED Panels

The LED panels are attached in much the same way as the camera. I’m using a lower-end ball head for the LED panels, and they attach using 1/4” quick release screws. Unlike the 3/8” quick release screw used for the tripod/monopod extension, the 1/4” quick release screw will fit through the hole at the end of the monitor arm without any modification. Attaching the LED panels is just a matter of removing the VESA mount, using a 1/4” quick release screw to attach the Pergear TH5 ball head (item #3 on the parts list), and then attaching the LED panel to the ball head.

Here’s a close-up shot of how this looks:

LED panel attachment

Attaching the Microphone

The microphone is attached using a pretty ordinary two-section boom arm that simply clamps to the end of the monitor arm. All that’s required is to remove the VESA mount, then use the clamp for the boom arm to clamp to the monitor arm. You’ll want a decent microphone that you can (ideally) position out of frame of the camera but that will still produce reasonable sound quality. The linked YouTube video has a recommendation for one (I’m not using that microphone; I’m using a Heil PR40 that was given to me as a gift).

Additional Thoughts

I’m still adjusting things, tweaking the lighting and positioning of the various elements, but I do have a few additional (miscellaneous) thoughts so far:

  • The Neewer 160 LED lights aren’t ideal, in my opinion. They are (fairly) inexpensive, but they have drawbacks. Most of the purchasing options don’t include a battery or AC power adapter, which is an additional expense. The color isn’t adjustable, and the light they produce seems fairly harsh even when the brightness is turned way down. Further, I haven’t been able to find any diffusers or softboxes that are designed to work with these lights. I’ll probably end up switching to LED panels where both brightness and color are adjustable and that have some means of diffusing the light.
  • I’m not pleased with the current microphone boom arm and its attachment to the monitor arm, so I’ll probably look for some sort of alternative in the near future. It just doesn’t feel as sturdy/robust as I’d prefer.
  • Upon seeing a picture of the arrangement, several folks have commented on the need for a camera upgrade; that’s in the works. So, even though the current camera arm holds a webcam, I have plans to upgrade that to a DSLR soon. The nice thing about the setup is that swapping the webcam for a DSLR won’t require any changes to the monitor arm, the tripod/monopod extension, or the ball head—it will just be a matter of removing the webcam and replacing it with a DSLR (and adjusting the cabling to account for an HDMI capture device).
  • Both the desk mounts (vertical poles) and the monitor arms come with some cable management pieces; when supplemented with some Velcro ties, they make it easy to keep everything neat and organized.

So, there you have it—a full breakdown of the parts involved, some notes on how everything comes together, and some early thoughts on some of the components. I’d love to hear from other folks who have perhaps done something similar, so feel free to contact me on Twitter and share any feedback on your setup, any thoughts about mine, or just to say hi. Thanks!

Technology Short Take 135

Welcome to Technology Short Take #135! This will likely be the last Technology Short Take of 2020, so it’s a tad longer than usual. Sorry about that! You know me—I just want to make sure everyone has plenty of technical content to read during the holidays. And speaking of holidays…whatever holidays you do (or don’t) celebrate, I hope that the rest of the year is a good one for you. Now, on to the content!

Networking

  • Arthur Chiao cracks open kube-proxy, a key part of Kubernetes networking, to expose the internals, and along the way exposes readers to a few different technologies. This is a good read if you’re trying to better understand some aspects of Kubernetes networking.
  • Gian Paolo takes a look at using tools like curl and jq when working with networking-related APIs.
  • It’s not unusual to see “networking professionals need to learn developer tools,” but how often do you see “developers need to learn these networking tools”? Martin Heinz discusses that very topic in this post.

Programming

  • Although I do not (yet) consider myself a developer, I found John Arundel’s Rust versus Go article to be very informative.
  • Ben Kehoe provides readers with a hygienic Python setup for Linux, macOS, and WSL.
  • The folks at SemaphoreCI shared with me an e-book on CI/CD with Docker and Kubernetes. Although some parts of it do focus on SemaphoreCI (as would be expected), I do believe it may still be a useful resource for some readers. It’s available behind a regwall (you have to supply an e-mail address) here.

That’s all for this time around. I’ll be back in early 2021 with the next Technology Short Take and more interesting—and hopefully useful—technical content. Until then, feel free to hit me on Twitter; I’d love to hear from you!

Bootstrapping a Cluster API Management Cluster

Cluster API is, if you’re not already familiar, an effort to bring declarative Kubernetes-style APIs to Kubernetes cluster lifecycle management. (I encourage you to check out my introduction to Cluster API post if you’re new to Cluster API.) Given that it is using Kubernetes-style APIs to manage Kubernetes clusters, there must be a management cluster with the Cluster API components installed. But how does one establish that management cluster? This is a question I’ve seen pop up several times in the Kubernetes Slack community. In this post, I’ll walk you through one way of bootstrapping a Cluster API management cluster.

The process I’ll describe in this post is also described in the upstream Cluster API documentation (see the “Bootstrap & Pivot” section of this page).

At a high level, the process looks like this:

  1. Create a temporary bootstrap cluster.
  2. Make the bootstrap cluster into a temporary management cluster.
  3. Use the temporary management cluster to establish a workload cluster (through Cluster API).
  4. Convert the workload cluster into a permanent management cluster.
  5. Remove the temporary bootstrap cluster.

The following sections describe each of these steps in a bit more detail.

Create a Temporary Bootstrap Cluster

The first step is to create a temporary bootstrap cluster. This will be a short-lived cluster whose only purpose is to get you to the point of having a more permanent management cluster. There are a few different ways to do this, but probably the easiest way (in my opinion) is to use kind. To use kind, you’ll need to either a) be running on Linux with a container runtime (typically Docker, but Podman is experimentally supported), or b) use something like Docker Desktop for macOS or Windows. The kind website also has instructions for using Windows Subsystem for Linux 2 (WSL2) on Windows 10.

In order for your kind cluster to be able to function as a temporary management cluster, you’ll need at least one worker node. The default for kind is to only provision a single control plane node, so we’ll have to use a custom configuration file (this is described pretty well on the kind website). Here’s an example configuration file you could use:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker

Save this to a file and then reference that file when creating the cluster by running kind create cluster --config bootstrap.yaml (or whatever you named the file). kind will create the cluster and generate a Kubeconfig file for accessing it. Use kubectl get nodes to check the status of the nodes, proceeding to the next step once both nodes report “Ready”.

Establish the Temporary Management Cluster

For this step you’ll need clusterctl, a Cluster API command-line tool that you’ll use extensively through the rest of this process. This tool is available from the Cluster API GitHub repository. Once you’ve downloaded the tool, marked it as executable (if necessary for your OS), and placed it in your path (again, if necessary for your OS), then you’re ready to proceed.

I’ll assume you’re going to create a management cluster with the AWS provider, so the instructions below will reflect that. If you’re going to use a different provider—say, the Azure provider or the vSphere provider—the instructions will look a bit different. Check the Cluster API Quick Start page for examples of how to initialize a management cluster with a particular provider.

For the AWS provider, you’ll also need clusterawsadm (available from the Cluster API Provider for AWS GitHub repository). I won’t walk through all of the steps necessary to prep your AWS environment; the Quick Start page provides a great summary of what’s needed.

Once you’ve prepared your AWS environment (or whatever platform you’re going to use), run clusterctl init --infrastructure aws to initialize the bootstrap cluster as a Cluster API management cluster. (Replace “aws” with the provider you’re using, naturally.) This process normally only takes a few minutes.
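As a rough sketch, the AWS preparation and initialization together might look something like this (the exact clusterawsadm subcommands have changed across releases, so treat this as illustrative rather than authoritative):

# Create the IAM resources CAPA requires (via CloudFormation)
clusterawsadm bootstrap iam create-cloudformation-stack

# Encode the AWS credentials for the provider to consume
export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm bootstrap credentials encode-as-profile)

# Initialize the bootstrap cluster as a CAPI management cluster
clusterctl init --infrastructure aws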

Once the process has completed, run kubectl get clusters to verify the Cluster API components have been installed. You should get “No resources found in default namespace”; if you see that, you’re good to go. If you get an error, then it’s time to start troubleshooting.

Once things are working, you’re ready to proceed to the next step.

Create a Workload Cluster

In this step, you’ll use the Cluster API components in the bootstrap cluster to create a more long-lived cluster on the infrastructure provider of your choice (whatever provider you installed in the previous section). We refer to this first cluster as a “workload cluster,” because it will (initially) be managed by a separate management cluster—the temporary bootstrap cluster you just initialized in the previous section.

To do this, you’ll use clusterctl config cluster to create a YAML manifest for the workload cluster. Because each infrastructure provider needs different values in order to create the YAML manifest, I’d recommend you use clusterctl config cluster --list-variables <provider>, replacing <provider> with the name of the infrastructure provider you’re using (like “aws” or “vsphere”). This will give you a list of the values that clusterctl expects you to supply in order to generate a complete manifest. For the AWS provider, for example, that command produces this output:

Variables:
  - AWS_CONTROL_PLANE_MACHINE_TYPE
  - AWS_NODE_MACHINE_TYPE
  - AWS_REGION
  - AWS_SSH_KEY_NAME
  - CLUSTER_NAME
  - CONTROL_PLANE_MACHINE_COUNT
  - KUBERNETES_VERSION
  - WORKER_MACHINE_COUNT

This tells me exactly what values I need to supply to clusterctl—either as environment variables (what’s listed above), as values in a configuration file that I’ll then specify with the --config <file> parameter, or as command-line parameters to clusterctl directly (like the --kubernetes-version parameter). I’m a fan of using the configuration file, which would need to look like this (values changed to protect sensitive information):

AWS_CONTROL_PLANE_MACHINE_TYPE: m5a.large
AWS_NODE_MACHINE_TYPE: m5a.large
AWS_REGION: us-east-1
AWS_SSH_KEY_NAME: blog_rsa_key
CLUSTER_NAME: demo
CONTROL_PLANE_MACHINE_COUNT: 1
KUBERNETES_VERSION: v1.18.2
WORKER_MACHINE_COUNT: 1

With the configuration file in place, you can create the YAML manifest for the workload cluster with clusterctl config cluster <cluster-name> --config <config-file.yaml>. This will write the manifest to the screen (standard output). To actually create this cluster using Cluster API, you have a couple of options (both are illustrated after this list):

  1. Pipe the output directly to kubectl by appending | kubectl apply -f - to the command above.
  2. Redirect the output to a file by adding > manifest.yaml to the command above, and then apply separately with kubectl apply -f manifest.yaml.
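As a concrete example using the “demo” cluster name from the configuration file above (the file name capi-vars.yaml is hypothetical), the second option would look like this:

clusterctl config cluster demo --config capi-vars.yaml > manifest.yaml
kubectl apply -f manifest.yaml

The first option simply collapses this into a single pipeline:

clusterctl config cluster demo --config capi-vars.yaml | kubectl apply -f -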

It will take some time for Cluster API to realize (create) the new cluster. You can use kubectl get clusters to check the status of the new cluster. Once the cluster starts reporting as “Provisioned”, then you can retrieve the Kubeconfig for the new cluster. Using the Kubeconfig for the new cluster, install a CNI plugin of your choice (I’m partial to Calico and Antrea). Once kubectl get nodes for the new cluster shows the nodes in the newly-created cluster as “Ready”, you can proceed to the next step.
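At the time of writing, Cluster API stores the workload cluster’s Kubeconfig in a Secret named <cluster-name>-kubeconfig on the management cluster, so retrieving it for the “demo” cluster looks something like this (the Calico manifest URL is just one example of installing a CNI plugin):

# Pull the Kubeconfig for the workload cluster out of the Secret
kubectl get secret demo-kubeconfig -o jsonpath='{.data.value}' | base64 --decode > demo.kubeconfig

# Install a CNI plugin on the new cluster (Calico shown here)
kubectl --kubeconfig demo.kubeconfig apply -f https://docs.projectcalico.org/manifests/calico.yaml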

Convert the Workload Cluster to a Management Cluster

Now you’re finally ready to move the Cluster API components from the temporary kind cluster to the more permanent cluster on the infrastructure provider of your choice (whatever provider you used when you initialized the bootstrap cluster). This process is, in my opinion, pretty straightforward:

  1. Set your active Kubeconfig to the temporary management cluster.
  2. Run clusterctl move --to-kubeconfig=<path-to-kubeconfig-for-new-cluster>. The Kubeconfig being referenced here is the Kubeconfig for the workload cluster you created in the previous section.

Note that this operates on the default namespace; add the --namespace flag if you used a different namespace.
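Assuming the Kubeconfig for the new cluster was saved as demo.kubeconfig (as in the earlier sketch), and assuming the default kind cluster name, the whole move looks something like this:

# Point kubectl at the temporary bootstrap cluster
kubectl config use-context kind-kind

# Move the CAPI objects to the new management cluster
clusterctl move --to-kubeconfig=demo.kubeconfig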

After this process completes, you should be able to run kubectl get clusters against the new cluster (not the temporary bootstrap cluster) and see the cluster object you defined earlier.

Decommission the Bootstrap Cluster

At this point, the kind bootstrap cluster is no longer needed, so delete it with kind delete cluster.

And that’s it—you’ve just bootstrapped a management cluster on the infrastructure provider of your choice! You can now use clusterctl to create new workload clusters that will be managed by this management cluster.

Wrapping Up

I hope this walk-through is helpful. To help make this process even easier, check out this post on creating a pre-configured machine image (currently only focusing on AWS) to eliminate the need to install kind, clusterctl, or any of the other utilities.

If you have any questions, please feel free to contact me on Twitter, or reach out to me on the Kubernetes Slack community. I’d love to hear from you!

Recent Posts

Some Site Updates

For the last three years, the site has been largely unchanged with regard to the structure and overall function even while I continue to work to provide quality technical content. However, time was beginning to take its toll, and some “under the hood” work was needed. Over the Thanksgiving holiday, I spent some time updating the site, and there are a few changes I wanted to mention.

Read more...

Assigning Node Labels During Kubernetes Cluster Bootstrapping

Given that Kubernetes is a primary focus of my day-to-day work, I spend a fair amount of time in the Kubernetes Slack community, trying to answer questions from users and generally be helpful. Recently, someone asked about assigning node labels while bootstrapping a cluster with kubeadm. I answered the question, but afterward started thinking that it might be a good idea to also share that same information via a blog post—my thinking being that others who also had the same question aren’t likely to be able to find my answer on Slack, but would be more likely to find a published blog post. So, in this post, I’ll show how to assign node labels while bootstrapping a Kubernetes cluster.

Read more...

Pausing Cluster API Reconciliation

Cluster API is a topic I’ve discussed here in a number of posts. If you’re not already familiar with Cluster API (also known as CAPI), I’d encourage you to check out my introductory post on Cluster API first; you can also visit the official Cluster API site for more details. In this short post, I’m going to show you how to pause the reconciliation of Cluster API cluster objects, a task that may be necessary for a variety of reasons (including backing up the Cluster API objects in your management cluster).

Read more...

Technology Short Take 134

Welcome to Technology Short Take #134! I’m publishing a bit early this time due to the Thanksgiving holiday in the US. So, for all my US readers, here’s some content to peruse while enjoying some turkey (or whatever you’re having this year). For my international readers, here’s some content to peruse while enjoying dramatically lower volumes of e-mail because the US is on holiday. See, something for everyone!

Read more...

Review: CPLAY2air Wireless CarPlay Adapter

In late September, I was given a CPLAY2air wireless CarPlay adapter as a gift. Neither of my vehicles support wireless CarPlay, and so I was looking forward to using the CPLAY2air device to enable the use of CarPlay without having to have my phone plugged into a cable. Here’s my feedback on the CPLAY2air device after about six weeks of use.

Read more...

Resizing Windows to a Specific Size on macOS

I recently had a need (OK, maybe more a desire than a need) to set my browser window(s) on macOS to a specific size, like 1920x1080. I initially started looking at one of the many macOS window managers, but after reading lots of reviews and descriptions and still being unclear if any of these products did what I wanted, I decided to step back to using AppleScript to accomplish what I was seeking. In this post, I’ll share the solution (and the articles that helped me arrive at the solution).

Read more...

Technology Short Take 133

Welcome to Technology Short Take #133! This time around, I have a collection of links featuring the new Raspberry Pi 400, some macOS security-related articles, information on AWS Nitro Enclaves and gVisor, and a few other topics. Enjoy!

Read more...

Technology Short Take 132

Welcome to Technology Short Take #132! My list of links and articles from around the web seems to be a bit heavy on security-related topics this time. Still, there’s a decent collection of networking, cloud computing, and virtualization articles as well as a smattering of other topics for you to peruse. I hope you find something useful!

Read more...

Considerations for using IaC with Cluster API

In other posts on this site, I’ve talked about both infrastructure-as-code (see my posts on Terraform or my posts on Pulumi) and somewhat separately I’ve talked about Cluster API (see my posts on Cluster API). And while I’ve discussed the idea of using existing AWS infrastructure with Cluster API, in this post I wanted to try to think about how these two technologies play together, and provide some considerations for using them together.

Read more...

Technology Short Take 131

Welcome to Technology Short Take #131! I’m back with another collection of articles on various data center technologies. This time around the content is a tad heavy on the security side, but I’ve still managed to pull in articles on networking, cloud computing, applications, and some programming-related content. Here’s hoping you find something useful here!

Read more...

Updating AWS Credentials in Cluster API

I’ve written a bit here and there about Cluster API (aka CAPI), mostly focusing on the Cluster API Provider for AWS (CAPA). If you’re not yet familiar with CAPI, have a look at my CAPI introduction or check the Introduction section of the CAPI site. Because CAPI interacts directly with infrastructure providers, it typically has to have some way of authenticating to those infrastructure providers. The AWS provider for Cluster API is no exception. In this post, I’ll show how to update the AWS credentials used by CAPA.

Read more...

Behavior Changes in clusterawsadm 0.5.5

Late last week I needed to test some Kubernetes functionality, so I thought I’d spin up a test cluster really quick using Cluster API (CAPI). As often happens with fast-moving projects like Kubernetes and CAPI, my existing CAPI environment had gotten a little out of date. So I updated my environment, and along the way picked up an important change in the default behavior of the clusterawsadm tool used by the Cluster API Provider for AWS (CAPA). In this post, I’ll share more information on this change in default behavior and the impacts of that change.

Read more...

Technology Short Take 130

Welcome to Technology Short Take #130! I’ve had this blog post sitting in my Drafts folder waiting to be published for almost a month, and I kept forgetting to actually make it live. Sorry! So, here it is—better late than never, right?

Read more...

Creating an AWS ELB using Pulumi and Go

In case you hadn’t noticed, I’ve been on a bit of a kick with Pulumi and Go recently. There are two reasons for this. First, I have a number of “learning projects” (things that I decide I’d like to try or test) that would benefit greatly from the use of infrastructure as code. Second, I’ve been working on getting more familiar with Go. The idea of combining both those reasons by using Pulumi with Go seemed natural. Unfortunately, examples of using Pulumi with Go seem to be more limited than examples of using Pulumi with other languages, so in this post I’d like to share how to create an AWS ELB using Pulumi and Go.

Read more...

Review: Anker PowerExpand Elite Thunderbolt 3 Dock

Over the last couple of weeks or so, I’ve been using my 2017 MacBook Pro (running macOS “Mojave” 10.14.6) more frequently as my daily driver/primary workstation. Along with it, I’ve been using the Anker PowerExpand Elite 13-in-1 Thunderbolt 3 Dock. In this post, I’d like to share my experience with this dock and provide a quick review of the Anker PowerExpand Elite.

Read more...

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!