Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Using Unison Across Linux, macOS, and Windows

I recently wrapped up a project in which I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues while making it work. Here’s some information on those issues and how I worked around them. (Hopefully this information will help someone else.)

The use case here is to keep a subset of directories in sync between a MacBook Air running macOS “Catalina” 10.15.5 and a Surface Pro 6 running Windows 10. A system running Ubuntu 18.04.4 acted as the “server”; each “client” system (the MacBook Air and the Surface Pro) would synchronize with the Ubuntu system. I’ve used a nearly-identical setup for several years to keep my systems synchronized.

One thing to know about Unison before I continue is that you need compatible versions of Unison on both systems in order for it to work. As I understand it, compatibility is based not just on version numbers, but also on the OCaml version with which each binary was compiled.

With that in mind, I already had a working setup using Unison 2.48 so I started there. Unison 2.48.4 was installed and running on the Ubuntu system, and I installed Unison 2.48.15 on the new MacBook Air. I’d used Unison 2.48.15 on macOS for quite a while, so I didn’t test the installation right away, instead moving on to the Surface Pro. From this page hosting Windows binaries for Unison, I downloaded a Unison 2.48.4 binary for Windows. Should be all set, right?

Unfortunately, I ran into a couple of problems:

  • The Windows binary has an issue where it won’t recognize the preinstalled OpenSSH binary on Windows 10. So, I had to copy ssh.exe to the same directory as the Unison binary.
  • The Windows binary didn’t like paths with spaces in the name; no style of quoting seemed to help. The only workaround I could find was to use dir /X to get the auto-generated short name, and use that in the Unison profile.
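To illustrate the second workaround, here’s a sketch of what the Windows-side Unison profile might look like (the paths, hostname, and short name here are hypothetical; run dir /X in the parent directory to see the actual auto-generated 8.3 name):

```
# Hypothetical Windows Unison profile (%USERPROFILE%\.unison\default.prf)
# "dir /X C:\Users\slowe" showed "My Documents" listed as MYDOCU~1
root = C:/Users/slowe/MYDOCU~1
root = ssh://ubuntu-server//home/slowe/documents
```

The short name needs no quoting, which sidesteps the issue entirely.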

Once past those two issues, I successfully got Unison to synchronize files between the Surface Pro and the Ubuntu system. Moving on to the MacBook Air—which I honestly suspected would be the easy part—I found that Unison 2.48 crashed on macOS 10.15. Nothing I tried would make it run without crashing.

Some continued research led me to find Windows and macOS builds of a newer version of Unison, version 2.51. The changelog referenced APFS support, which is what was being used on the MacBook Air. That should do it, right?

Not quite. I ran into two more issues:

  • Unison 2.51 wouldn’t interoperate with the existing Unison 2.48 binary on the Ubuntu system. (Recall that I mentioned earlier that compatible versions of Unison are needed on both systems.)
  • There were no packaged versions of Unison 2.51 for Ubuntu.

Fortunately, cloning the GitHub repository and building from source was pretty straightforward. I renamed the new binary (I used unison-2.51.2) and updated the “servercmd” setting in the Unison preferences on both the Surface Pro and the MacBook Air. Success! I was able to run Unison to synchronize files between the Surface Pro, the Ubuntu system, and the MacBook Air running macOS 10.15.
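The profile change itself is small. Here’s a sketch of a client-side profile (paths and hostname are hypothetical; only the servercmd line is the change described above):

```
# Hypothetical Unison profile on a client (~/.unison/default.prf)
root = /Users/slowe/Documents
root = ssh://ubuntu-server//home/slowe/documents
# Point at the renamed 2.51 binary on the Ubuntu "server"
servercmd = /usr/local/bin/unison-2.51.2
```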

Lessons learned?

  1. Unison’s support on Windows for paths with spaces is tricky. Use dir /X to get the auto-generated short name and use that instead.
  2. On Windows 10, you’ll need to copy ssh.exe from C:\Windows\System32\OpenSSH to the directory where the Unison executable resides. Otherwise, Unison will be unable to initiate the SSH connection to the remote system.
  3. If you’re running macOS 10.15, be sure to use Unison 2.51. (This may apply to all systems running APFS, but I haven’t verified that yet.)
  4. Pre-compiled binaries for Unison 2.51 are available for both Windows and macOS, so that’s probably the best version to use. For Linux, it’s very likely you’ll need to build from source in order to get a version 2.51 binary.

I hope this information is helpful to others who may need to get Unison working across multiple systems. If you have corrections, suggestions for improvement, or feedback on how I can improve this post, please contact me on Twitter. Thanks!

Technology Short Take 127

Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…



Nothing this time around, but I’ll stay alert for items to include next time!


Cloud Computing/Cloud Management

Operating Systems/Applications


I don’t have anything to share this time, but maybe check here for something that strikes your interest?


Career/Soft Skills

That’s all for now! I hope I have included something that is useful to you. If you have feedback or suggestions for improvement, I’d love to hear from you (reaching me on Twitter is probably easiest). Thanks for reading!

Technology Short Take 126

Welcome to Technology Short Take #126! I meant to get this published last Friday, but completely forgot. So, I added a couple more links and instead have it ready for you today. I don’t have any links for servers/hardware or security in today’s Short Take, but hopefully there’s enough linked content in the other sections that you’ll still find something useful. Enjoy!



Nothing this time around!


I don’t have anything to include this time, but I’ll stay alert for content I can include next time.

Cloud Computing/Cloud Management

Operating Systems/Applications


Storage

  • Cody Hosterman, of Pure Storage, takes a look at Pure Storage’s support of the recently-released vSphere 7.
  • Chris Mellor reports on something of a “scandal” wherein Western Digital is selling drives that use shingled magnetic recording (SMR) in use cases where that technology can cause noticeable performance problems.


Virtualization

  • Frank Denneman follows up his post on initial placement of vSphere Pods with an article on scheduling vSphere Pods (the link to the initial placement article was in Technology Short Take 125).
  • Stijn Vermoesen describes what’s involved in installing the vRealize Build Tools fling (part 1 and part 2).

Career/Soft Skills

  • I recently learned about the Johari window, an aid for understanding the self. It got me thinking about what my blind spots might be.

That’s all for now. If you have questions, comments, or suggestions for how I can improve the Technology Short Take series (or this site in general), I welcome all feedback. Feel free to contact me on Twitter, hit me up in any one of a number of Slack channels (Kubernetes, CNCF, Pulumi, to name a few), or drop me an e-mail (my address isn’t too hard to find).

Setting up etcd with etcdadm

I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.

etcdadm is an open source project, originally started by Platform9 (here’s the blog post announcing the project being open sourced). As the README in the GitHub repository mentions, the user experience for etcdadm “is inspired by kubeadm.”

Getting etcdadm

The instructions in the repository indicate that you can use go get -u, but I ran into problems with that approach (using Go 1.14). At the suggestion of one of the maintainers, I also tried Go 1.12, but it failed both on my main Ubuntu laptop and on a clean Ubuntu VM. However, running make etcdadm in a clone of the repository worked, and one of the maintainers indicated the documentation will be updated to reflect this approach. So, for now, cloning the GitHub repository and running make etcdadm appears to be the best way to obtain a binary build of the tool.
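A sketch of that build process looks like this (the repository URL is an assumption based on where the project is hosted; you’ll need git, make, and a Go toolchain installed):

```shell
# Clone the etcdadm repository and build the binary with make
git clone https://github.com/kubernetes-sigs/etcdadm.git
cd etcdadm
make etcdadm

# The resulting etcdadm binary lands in the repository root
./etcdadm --help
```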

Setting up an Etcd Cluster

Once you have a binary build of etcdadm, the process for setting up an etcd cluster is—as the documentation in the repository indicates—pretty straightforward.

  1. First, copy the etcdadm binary to the system(s) that will comprise the cluster.
  2. On the system that will be the first node in the cluster, run etcdadm init. This will create a certificate authority (CA) for the TLS certs, then use that CA to create TLS certificates for the etcd node. Finally, it will bootstrap the etcd cluster with only one member (itself) and then output a command to run on subsequent nodes. The command will look something like etcdadm join

    As part of the bootstrapping process, etcdadm will install the etcd and etcdctl binaries (the default location is /opt/bin) and will create a systemd unit to run etcd as a service. You can use systemctl to check the status of the etcd unit, and you can use etcdctl to check the health of the cluster (among other things).

  3. Copy the CA certificate and key from the first node to the next node you’re going to add to the cluster. The CA certificate and key are found as /etc/etcd/pki/ca.crt and /etc/etcd/pki/ca.key, respectively. Place these files in the same location on the next node, then run the command output by etcdadm init in the previous step.

  4. Repeat step 3 for each additional node you add to the cluster, keeping in mind you should use odd numbers of nodes in the cluster. Generally speaking, a cluster of 3 nodes will work just fine in the vast majority of cases.
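Sketching steps 2 through 4 as commands (the hostname in the join URL is hypothetical; etcdadm init prints the exact join command to use on subsequent nodes):

```shell
# On the first node: bootstrap the CA, the TLS certs, and a
# one-member etcd cluster, and note the "etcdadm join" command
# printed at the end
./etcdadm init

# Still on the first node: confirm the systemd unit is running
systemctl status etcd

# On each additional node, after copying /etc/etcd/pki/ca.crt and
# /etc/etcd/pki/ca.key from the first node into the same location:
./etcdadm join https://etcd-0.example.com:2379
```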

Once you’ve added the desired number of nodes to the etcd cluster, you can use this etcdctl command to check the health of the cluster:

ETCDCTL_API=3 /opt/bin/etcdctl --cert /etc/etcd/pki/peer.crt \
--key /etc/etcd/pki/peer.key --cacert /etc/etcd/pki/ca.crt \
endpoint health --cluster

Assuming you get healthy responses to the above command, you’re good to go. Pretty easy, right? This is why I tweeted earlier today that I think there is real promise in this tool.

How I Tested

To test etcdadm, I first used Vagrant to spin up an Ubuntu 18.04 VM for building the etcdadm binary. After I had the binary, I used Pulumi to launch three AWS EC2 instances in their own VPC. I then used these instances to test the process of bootstrapping the etcd cluster using etcdadm.

While etcdadm is still an early project, I would recommend keeping an eye on its development. In the meantime, I’d love to hear any feedback from you, so feel free to find me on Twitter or hit me up on the Kubernetes Slack instance.

Using External Etcd with Cluster API on AWS

If you’ve used Cluster API (CAPI), you may have noticed that workload clusters created by CAPI use, by default, a “stacked master” configuration—that is, the etcd cluster is running co-located on the control plane node(s) alongside the Kubernetes control plane components. This is a very common configuration and is well-suited for most deployments, so it makes perfect sense that this is the default. There may be cases, however, where you’ll want to use a dedicated, external etcd cluster for your Kubernetes clusters. In this post, I’ll show you how to use an external etcd cluster with CAPI on AWS.

The information in this blog post is based on this upstream document. I’ll be adding a little bit of AWS-specific information, since I primarily use the AWS provider for CAPI. This post is written with CAPI v1alpha3 in mind.

The key to this solution is building upon the fact that CAPI leverages kubeadm for bootstrapping cluster nodes. This puts the full power of the kubeadm API at your fingertips—which in turn means you have a great deal of flexibility. This is the mechanism whereby you can tell CAPI to use an external etcd cluster instead of creating a co-located etcd cluster.

I should note that I am referring to using CAPI to create a workload cluster with an external etcd environment. If you’re not familiar with some of the CAPI terminology, check out my introductory post.

At a high level, the steps involved are:

  1. Create the required Secrets in the management cluster.
  2. Modify the CAPI manifests to reference the external etcd cluster.
  3. Profit!

Let’s take a look at these two steps in more detail. (I’ll omit step 3.) Note that I’m not going to cover the process of establishing the etcd cluster, as that’s something I’ve covered sufficiently elsewhere (like here, here, or here).

Creating the Required Secrets

When bootstrapping a typical workload cluster with a stacked master configuration, kubeadm generates all the necessary public key infrastructure (the certificate authority and associated certificates). In the case of an external etcd cluster, though, some of these certificates already exist, and you need a mechanism to provide them to CAPI.

This page gave me the first hint at how it should be handled, and this upstream document fills in the rest: using Kubernetes Secrets, you can provide the necessary certificates to CAPI.

To create the required Secrets, you’ll need four files from the etcd cluster:

  1. The etcd certificate authority (CA) certificate
  2. The etcd CA’s private key
  3. The API server etcd client certificate
  4. The API server etcd client private key

Files #1 and #2 are, in a typical kubeadm-bootstrapped etcd cluster, found at /etc/kubernetes/pki/etcd/ca.{crt,key}. Files #3 and #4 are typically found at /etc/kubernetes/pki/apiserver-etcd-client.{crt,key}. (Keep in mind these paths may vary depending on how the etcd cluster was created.)

Once you have these files, create the first Secret using this command:

kubectl create secret tls <cluster-name>-apiserver-etcd-client \
--cert /path/to/apiserver-etcd-client.crt \
--key /path/to/apiserver-etcd-client.key

Next, create the second Secret with this command:

kubectl create secret tls <cluster-name>-etcd \
--cert /path/to/etcd/ca.crt --key /path/to/etcd/ca.key

In both of these commands, you’ll need to replace <cluster-name> with the name of the workload cluster you’re going to create with CAPI.

If you are going to create your workload cluster in a specific namespace on the management cluster, you’ll want to be sure you create these Secrets in the same namespace.
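For example, if the workload cluster will be created in a specific namespace (the cluster name and namespace here are hypothetical), both commands would carry the same -n flag:

```shell
# Hypothetical cluster name "workload-1" in namespace "capi-clusters"
kubectl create secret tls workload-1-apiserver-etcd-client \
  --cert apiserver-etcd-client.crt \
  --key apiserver-etcd-client.key \
  -n capi-clusters

kubectl create secret tls workload-1-etcd \
  --cert ca.crt --key ca.key \
  -n capi-clusters
```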

Once the Secrets are in place, the next step is to modify the CAPI manifests.

Updating the CAPI Manifests

The first change is configuring the CAPI manifests to use an external etcd cluster. CAPI v1alpha3 introduces the KubeadmControlPlane object, whose KubeadmConfigSpec contains the equivalent of a valid kubeadm configuration file. This is the section you’ll need to modify to instruct CAPI to create a workload cluster with an external etcd cluster.

Here’s a snippet (unrelated entries have been removed for the sake of brevity) of YAML to add to the KubeadmControlPlane object in order to use an external etcd cluster:

      etcd:
        external:
          endpoints:
          - https://etcd-0.example.com:2379
          - https://etcd-1.example.com:2379
          - https://etcd-2.example.com:2379
          caFile: /etc/kubernetes/pki/etcd/ca.crt
          certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
          keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key

Obviously, you’d want to use the correct hostnames for the nodes in your external etcd cluster (any names shown above are placeholders). The certificate files referenced here will be created by CAPI using the Secrets created in the previous section.

The second change is to instruct CAPI to use the existing VPC where the etcd cluster resides. (Technically, this isn’t required, but it sidesteps the need for VPC peering and similar configurations.) For that, you can refer to the upstream documentation for using existing AWS infrastructure, which indicates you need to add the following to your CAPI workload manifest (this is in the spec field for the AWSCluster object):

      networkSpec:
        vpc:
          id: vpc-0123456789abcdef0

You will want to ensure that your existing AWS infrastructure is appropriately configured for use with CAPI; again, refer to the upstream documentation for full details.

The third and final step is to ensure the new CAPI workload cluster is able to communicate with the existing external etcd cluster. On AWS, that means configuring security groups appropriately to ensure access to the etcd cluster. As I explained in this post on using existing AWS security groups with CAPI, you’ll want to configure your CAPI manifest to reference the existing AWS security group that permits access to your etcd cluster. That would look something like this in the AWSMachineTemplate that’s referenced by the KubeadmControlPlane object:

      additionalSecurityGroups:
        - id: <etcd_security_group_id>
        - id: <any_other_needed_security_group_id>

Once you’ve modified the manifests for the CAPI workload cluster—adding the etcd section to the KubeadmConfigSpec, specifying the existing VPC, and adding any necessary security group IDs in the AWSMachineTemplate for the KubeadmControlPlane object—then you can create the CAPI cluster by applying the modified manifest with kubectl apply -f manifest.yaml when your kubectl context is set to your CAPI management cluster.

Everything should work fine, but in the event you run into issues, be sure to check the CAPI and CAPI provider logs (using kubectl logs) in the management cluster for entries that will point you in the right direction.
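A sketch of checking those logs (the namespace and deployment names shown are typical for CAPI and its AWS provider, but verify them for your installation with kubectl get deployments --all-namespaces):

```shell
# Core Cluster API controller logs
kubectl logs -n capi-system deployment/capi-controller-manager

# AWS provider (CAPA) controller logs
kubectl logs -n capa-system deployment/capa-controller-manager
```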


Although this is a supported configuration with CAPI, be aware that you are giving up etcd management and upgrades via CAPI with this arrangement. It will fall on you, the cluster operator, to manage upgrades and the lifecycle of the etcd cluster. Because you’re leveraging existing AWS infrastructure, you’re also giving up CAPI management of that infrastructure. These may be acceptable trade-offs for your specific use case, but be sure to take them into consideration.

How I Tested

To test this procedure, I used Pulumi to create some existing AWS infrastructure, in accordance with the upstream project’s guidelines for using existing AWS infrastructure. In that infrastructure, the Pulumi code created three instances that I then set up as an etcd cluster using kubeadm (using a variation of these instructions). Once the external etcd cluster had been established, I proceeded with testing the configuration and procedure outlined above.

Please don’t hesitate to contact me if you have any questions, or if you spot an error in this post. All constructive feedback is welcome! You can contact me on Twitter, or feel free to contact me via the Kubernetes Slack instance.

Recent Posts

Using Existing AWS Security Groups with Cluster API

I’ve written before about how to use existing AWS infrastructure with Cluster API (CAPI), and I was recently able to help update the upstream documentation on this topic (the upstream documentation should now be considered the authoritative source). These instructions are perfect for placing a Kubernetes cluster into an existing VPC and associated subnets, but there’s one scenario that they don’t yet address: what if you need your CAPI workload cluster to be able to communicate with other EC2 instances or other AWS services in the same VPC? In this post, I’ll show you the CAPI functionality that makes this possible.


Using Paw to Launch an EC2 Instance via API Calls

Last week I wrote a post on using Postman to launch an EC2 instance via API calls. Postman is a cross-platform application, so while my post was centered around Postman on Linux (Ubuntu, specifically) the steps should be very similar—if not exactly the same—when using Postman on other platforms. Users of macOS, however, have another option: a macOS-specific peer to Postman named Paw. In this post, I’ll walk through using Paw to issue API requests to AWS to launch an EC2 instance.


Using Postman to Launch an EC2 Instance via API Calls

As I mentioned in this post on region and endpoint match in AWS API requests, exploring the AWS APIs is something I’ve been doing off and on for several months. There’s a couple reasons for this; I’ll go into those in a bit more detail shortly. In any case, I’ve been exploring the APIs using Postman (when on Linux) and Paw (when on macOS), and in this post I’ll share how to use Postman to launch an EC2 instance via API calls.


Making File URLs Work Again in Firefox

At some point in the last year or so—I don’t know exactly when it happened—Firefox, along with most of the other major browsers, stopped working with file:// URLs. This is a shame, because I like using Markdown for presentations (at least, when it’s a presentation where I don’t need to collaborate with others). However, using this sort of approach generally requires support for file:// URLs (or requires running a local web server). In this post, I’ll show you how to make file:// URLs work again in Firefox.


Installing MultiMarkdown 6 on Ubuntu 19.10

Markdown is a core part of many of my workflows. For quite a while, I’ve used Fletcher Penny’s MultiMarkdown processor (available on GitHub) on my various systems. Fletcher offers binary builds for Windows and macOS, but not a Linux binary. Three years ago, I wrote a post on how to compile MultiMarkdown 6 for a Fedora-based system. In this post, I’ll share how to compile it on an Ubuntu-based system.


Setting up etcd with Kubeadm, containerd Edition

In late 2018, I wrote a couple of blog posts on using kubeadm to set up an etcd cluster. The first one was this post, which used kubeadm only to generate the TLS certs but ran etcd as a systemd service. I followed up that up a couple months later with this post, which used kubeadm to run etcd as a static Pod on each system. It’s that latter post—running etcd as a static Pod on each system in the cluster—that I’ll be revisiting in this post, only this time using containerd as the container runtime instead of Docker.


HA Kubernetes Clusters on AWS with Cluster API v1alpha3

A few weeks ago, I published a post on HA Kubernetes clusters on AWS with Cluster API v1alpha2. That post was itself a follow-up to a post I wrote in September 2019 on setting up HA clusters using Cluster API v1alpha1. In this post, I’ll follow up on both of those posts with a look at setting up HA Kubernetes clusters on AWS using Cluster API v1alpha3. Although this post is similar to the v1alpha2 post, be aware there are some notable changes in v1alpha3, particularly with regard to the control plane.


Technology Short Take 125

Welcome to Technology Short Take #125, where I have a collection of articles about various data center and cloud technologies collected from around the Internet. I hope I have managed to find a few useful things for you! (If not, contact me on Twitter and tell me how I can make this more helpful for you.)


Using KinD with Docker Machine on macOS

I’ll admit right up front that this post is more “science experiment” than practical, everyday use case. It all started when I was trying some Cluster API-related stuff that leveraged KinD (Kubernetes in Docker). Obviously, given the name, KinD relies on Docker, and when running Docker on macOS you generally would use Docker Desktop. At the time, though, I was using Docker Machine, and as it turns out KinD doesn’t like Docker Machine. In this post, I’ll show you how to make KinD work with Docker Machine.


Kustomize Transformer Configurations for Cluster API v1alpha3

A few days ago I wrote an article on configuring kustomize transformers for use with Cluster API (CAPI), in which I explored how users could configure the kustomize transformers—the parts of kustomize that actually modify objects—to be a bit more CAPI-aware. By doing so, using kustomize with CAPI manifests becomes much easier. Since that post, the CAPI team released v1alpha3. In working with v1alpha3, I realized my kustomize transformer configurations were incorrect. In this post, I will share CAPI v1alpha3 configurations for kustomize transformers.


Configuring Kustomize Transformers for Cluster API

In November 2019 I wrote an article on using kustomize with Cluster API (CAPI) manifests. The idea was to use kustomize to simplify the management of CAPI manifests for clusters that are generally similar but have minor differences (like the AWS region in which they are running, or the number of Machines in a MachineDeployment). In this post, I’d like to show a slightly different way of using kustomize with Cluster API that involves configuring the kustomize transformers.


Updating Visual Studio Code's Kubernetes API Awareness

After attempting (and failing) to get Sublime Text to have some of the same “intelligence” that Visual Studio Code has with certain languages, I finally stopped trying to make Sublime Text work for me and just went back to using Code full-time. As I mentioned in this earlier post, now that I’ve finally solved how Code handles wrapping text in brackets and braces and the like I’m much happier. (It’s the small things in life.) Now I’ve moved on to tackling how to update Code’s Kubernetes API awareness.


An Update on the Tokyo Assignment

Right at the end of 2019 I announced that in early 2020 I was temporarily relocating to Tokyo, Japan, for a six month work assignment. It’s now March, and I’m still in Colorado. So what’s up with that Tokyo assignment, anyway? Since I’ve had several folks ask, I figured it’s probably best to post something here.


Modifying Visual Studio Code's Bracketing Behavior

There are two things I’ve missed since I switched from Sublime Text to Visual Studio Code (I switched in 2018). First, the speed. Sublime Text is so much faster than Visual Studio Code; it’s insane. But, the team behind Visual Studio Code is working hard to improve performance, so I’ve mostly resigned myself to it. The second thing, though, was the behavior of wrapping selected text in brackets (or parentheses, curly braces, quotes, etc.). That part has annoyed me for two years, until this past weekend I’d finally had enough. Here’s how I modified Visual Studio Code’s bracketing behaviors.


HA Kubernetes Clusters on AWS with Cluster API v1alpha2

About six months ago, I wrote a post on how to use Cluster API (specifically, the Cluster API Provider for AWS) to establish highly available Kubernetes clusters on AWS. That post was written with Cluster API (CAPI) v1alpha1 in mind. Although the concepts I presented there worked with v1alpha2 (released shortly after that post was written), I thought it might be helpful to revisit the topic with CAPI v1alpha2 specifically in mind. So, with that, here’s how to establish highly available Kubernetes clusters on AWS using CAPI v1alpha2.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!