Scott's Weblog
The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Making File URLs Work Again in Firefox

At some point in the last year or so—I don’t know exactly when it happened—Firefox, along with most of the other major browsers, stopped working with file:// URLs. This is a shame, because I like using Markdown for presentations (at least, when it’s a presentation where I don’t need to collaborate with others). However, using this sort of approach generally requires support for file:// URLs (or requires running a local web server). In this post, I’ll show you how to make file:// URLs work again in Firefox.

I tested this procedure using Firefox 74 on Ubuntu, but it should work on any platform on which Firefox is supported. Note that the location of the user.js file varies from OS to OS; see this MozillaZine Knowledge Base entry for more details.

Here’s the process I followed:

  1. Create the user.js file (it doesn’t exist by default) in the correct location for your Firefox profile. (Refer to the MozillaZine KB article linked above for exactly where that is on your OS.)

  2. In the user.js, add these entries:

    // Allow file:// links
    user_pref("capability.policy.policynames", "localfilelinks");
    user_pref("capability.policy.localfilelinks.sites", "file://");
    user_pref("capability.policy.localfilelinks.checkloaduri.enabled", "allAccess");
    
  3. In your Firefox configuration (accessible using about:config in a Firefox tab), change the value of privacy.file_unique_origin from true to false. (See the note after this list for a user.js-based alternative.)

  4. Restart Firefox.
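
As a side note, the privacy.file_unique_origin preference can also be set from user.js rather than through about:config. I made the change via about:config, so treat this snippet as an untested alternative:

    // Untested alternative: set privacy.file_unique_origin from user.js
    user_pref("privacy.file_unique_origin", false);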

After you restart Firefox, you should be able to use file:// URLs, but only from local HTML files on your system (as specified by the capability.policy.localfilelinks.sites entry you added in step 2). It’s possible this may expose an unknown security flaw or weakness that I haven’t foreseen, so keep that in mind.

If you’re a fan of Markdown-based presentations displayed using your browser, they should work again.

Hit me on Twitter if you have questions. Thanks!

Installing MultiMarkdown 6 on Ubuntu 19.10

Markdown is a core part of many of my workflows. For quite a while, I’ve used Fletcher Penney’s MultiMarkdown processor (available on GitHub) on my various systems. Fletcher offers binary builds for Windows and macOS, but not a Linux binary. Three years ago, I wrote a post on how to compile MultiMarkdown 6 for a Fedora-based system. In this post, I’ll share how to compile it on an Ubuntu-based system.

Just as in the Fedora post, I used Vagrant with the Libvirt provider to spin up a temporary build VM.

In this clean build VM, I perform the following steps to build a multimarkdown binary:

  1. Install the necessary packages with this command:

    sudo apt install gcc make cmake git build-essential
    
  2. Clone the source code repository:

    git clone https://github.com/fletcher/MultiMarkdown-6
    
  3. Switch into the directory where the repository was cloned and run these commands to build the binary:

    make
    cd build
    make
    
  4. Once the second make command is done, you’re left with a multimarkdown binary. Copy that to the host system (scp works fine). Use vagrant destroy to clean up the temporary build VM once you’ve copied the binary to your host system.
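
For instance, the copy-and-cleanup from the host system might look something like this (the machine name, clone path, and destination directory are all illustrative and will depend on your Vagrant setup):

    # on the host: grab the SSH details for the build VM, then copy the binary out
    vagrant ssh-config default > /tmp/build-vm-ssh-config
    scp -F /tmp/build-vm-ssh-config default:MultiMarkdown-6/build/multimarkdown ~/bin/
    # tear down the temporary build VM
    vagrant destroy -f default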

And with that, you’re good to go!

Setting up etcd with Kubeadm, containerd Edition

In late 2018, I wrote a couple of blog posts on using kubeadm to set up an etcd cluster. The first one was this post, which used kubeadm only to generate the TLS certs but ran etcd as a systemd service. I followed that up a couple of months later with this post, which used kubeadm to run etcd as a static Pod on each system. It’s that latter post—running etcd as a static Pod on each system in the cluster—that I’ll be revisiting in this post, only this time using containerd as the container runtime instead of Docker.

This post assumes you’ve already created the VMs/instances on which etcd will run, that an appropriate version of Linux is installed (I’ll be using Ubuntu 18.04.4 LTS), and that the appropriate packages have been installed. This post also assumes that you’ve already made sure that the correct etcd ports have been opened between the VMs/instances, so that etcd can communicate properly.

Finally, this post builds upon the official Kubernetes documentation on setting up an etcd cluster using kubeadm. The official guide assumes the use of Docker, whereas this post will focus on using containerd as the container runtime instead of Docker. The sections below outline the changes required to the official documentation in order to make it work with containerd.

Configuring Kubelet

The official documentation provides a systemd drop-in to configure the Kubelet to operate in a “stand-alone” mode. Unfortunately, this drop-in won’t work with containerd. Here is a replacement drop-in for containerd:

[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests --cgroup-driver=systemd --container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock
Restart=always

The changes here are the addition of the --container-runtime, --runtime-request-timeout, and --container-runtime-endpoint parameters. These parameters configure the Kubelet to talk to containerd instead of Docker.

As instructed in the official documentation, put this into a systemd drop-in (like the suggested 20-etcd-service-manager.conf) and copy it into the /etc/systemd/system/kubelet.service.d directory. Once the file is in place, run systemctl daemon-reload so that systemd will pick up the changes.
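
If it’s helpful, a minimal sketch of those steps might look like this (assuming the drop-in was saved locally using the suggested filename):

    # create the drop-in directory if it doesn't already exist
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    # copy the drop-in shown above into place
    sudo cp 20-etcd-service-manager.conf /etc/systemd/system/kubelet.service.d/
    # have systemd pick up the change, then restart the Kubelet
    sudo systemctl daemon-reload
    sudo systemctl restart kubelet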

Create the Manifests Directory

Although this is not included in the official documentation, I saw problems with the Kubelet starting up if the manifests directory doesn’t exist. I suggest manually creating the /etc/kubernetes/manifests directory to avoid any such issues.
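
Creating the directory is a one-liner:

    sudo mkdir -p /etc/kubernetes/manifests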

Bootstrap the etcd Cluster

Aside from the changes/differences described above, the rest of the process is as outlined in the official documentation. At a high level, that means the following (a rough command-level sketch follows the list):

  1. Use kubeadm init phase certs etcd-ca to generate the etcd CA certificate and key.
  2. Use kubeadm init phase certs to generate the etcd server, peer, health check, and API server client certificates.
  3. Distribute the certificates to the etcd nodes.
  4. Use kubeadm init phase etcd local to generate the Pod manifests for the etcd static Pods.
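
For reference, here’s a rough sketch of those kubeadm commands as run on the first etcd node; the kubeadmcfg.yaml paths refer to the per-host kubeadm configuration files described in the official guide and are purely illustrative here:

    # generate the etcd CA certificate and key
    kubeadm init phase certs etcd-ca
    # generate the server, peer, health check, and API server client certificates
    kubeadm init phase certs etcd-server --config=/tmp/host0/kubeadmcfg.yaml
    kubeadm init phase certs etcd-peer --config=/tmp/host0/kubeadmcfg.yaml
    kubeadm init phase certs etcd-healthcheck-client --config=/tmp/host0/kubeadmcfg.yaml
    kubeadm init phase certs apiserver-etcd-client --config=/tmp/host0/kubeadmcfg.yaml
    # after distributing the certificates and per-host configs, generate the
    # static Pod manifest on each etcd node
    kubeadm init phase etcd local --config=/tmp/host0/kubeadmcfg.yaml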

One final note: the docker command at the end of the official documentation won’t work in this case, since containerd is the container runtime instead of Docker. I’m still working on the correct containerd equivalent command to test the health of the cluster.

How I Tested

I used Pulumi to create a test environment in AWS for testing the instructions in this article. The TypeScript code that I wrote for use with Pulumi creates an environment suitable for use with Kubernetes Cluster API, including a VPC, both public and private subnets, an Internet gateway, NAT gateways for the private subnets, all associated route tables and route table associations, and the necessary security groups. I hope to publish this code for others to use soon; look for an update here.

If you have any questions, concerns, or corrections, please contact me. You can reach me on Twitter, or contact me on the Kubernetes Slack instance. I’d love to hear from you, and all constructive feedback is welcome.

HA Kubernetes Clusters on AWS with Cluster API v1alpha3

A few weeks ago, I published a post on HA Kubernetes clusters on AWS with Cluster API v1alpha2. That post was itself a follow-up to a post I wrote in September 2019 on setting up HA clusters using Cluster API v1alpha1. In this post, I’ll follow up on both of those posts with a look at setting up HA Kubernetes clusters on AWS using Cluster API v1alpha3. Although this post is similar to the v1alpha2 post, be aware there are some notable changes in v1alpha3, particularly with regard to the control plane.

If you’re not yet familiar with Cluster API, take a look at this high-level overview I wrote in August 2019. That post will provide an explanation of the project’s goals as well as provide some terminology.

In this post, I won’t discuss the process of establishing a management cluster; I’m assuming your Cluster API management cluster is already up and running. (I do have some articles in the content pipeline that discuss creating a management cluster.) Instead, this post will focus on creating a highly available workload cluster. By “highly available,” I mean a cluster with multiple control plane nodes that are distributed across multiple availability zones (AZs). Please read the “Disclaimer” section at the bottom for some caveats with regard to availability.

Prerequisites

As mentioned above, this post assumes you already have a functional Cluster API management cluster. This post also assumes you have already installed the kubectl binary, and that you’ve configured kubectl to access your management cluster.

As a side note, I highly recommend some sort of solution that shows you the current Kubernetes context in your shell prompt. I prefer powerline-go, but choose/use whatever works best for you.

Crafting Manifests for High Availability

The Cluster API manifests require two basic changes in order to deploy a highly available cluster:

  1. The AWSCluster object needs to be modified to include information on how to create subnets across multiple AZs.
  2. The manifests for worker nodes need to be modified to include AZ information.

You’ll note that I didn’t mention anything about the control plane above—that’s because v1alpha3 introduces a new mechanism for managing the control plane of workload clusters. This new mechanism, the KubeadmControlPlane object, is smart enough in this release to automatically distribute control plane nodes across multiple AZs when they are present. (Nice, right?)
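
For context, here’s a heavily abbreviated sketch of what a v1alpha3 KubeadmControlPlane object looks like; the names and version are illustrative, and the kubeadmConfigSpec is omitted entirely:

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: workload-control-plane
spec:
  replicas: 3
  version: v1.17.4
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: AWSMachineTemplate
    name: workload-control-plane
  # kubeadmConfigSpec omitted for brevity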

The next two sections will provide more details on the changes listed above.

Creating Subnets Across Multiple AZs

By default, Cluster API will only create public and private subnets in the first AZ it finds in a region. As a result, all control plane nodes and worker nodes will end up in the same AZ. To change that, you’ll need to modify the AWSCluster object to tell Cluster API to create subnets across multiple AZs.

This change is accomplished by adding a networkSpec to the AWSCluster specification. Here’s an example networkSpec:

spec:
  networkSpec:
    vpc:
      cidrBlock: 10.10.0.0/16
    subnets:
    - availabilityZone: us-west-2a
      cidrBlock: 10.10.0.0/20
      isPublic: true
    - availabilityZone: us-west-2a
      cidrBlock: 10.10.16.0/20
    - availabilityZone: us-west-2b
      cidrBlock: 10.10.32.0/20
      isPublic: true
    - availabilityZone: us-west-2b
      cidrBlock: 10.10.48.0/20
    - availabilityZone: us-west-2c
      cidrBlock: 10.10.64.0/20
      isPublic: true
    - availabilityZone: us-west-2c
      cidrBlock: 10.10.80.0/20

This YAML is fairly straightforward, and is largely (completely?) unchanged from previous versions of Cluster API. One key takeaway is that the user is responsible for “manually” breaking down the VPC CIDR appropriately for the subnets in each AZ.

With this change, Cluster API will now create multiple subnets within a VPC, distributing those subnets across AZs as directed. Since multiple AZs are now accessible by Cluster API, the control plane nodes (managed by the KubeadmControlPlane object) will automatically get distributed across AZs. This leaves only the worker nodes to distribute, which I’ll discuss in the next section.

Distributing Worker Nodes Across AZs

To distribute worker nodes across AZs, users can add a failureDomain field to their Machine or MachineDeployment manifests. This field specifies the name of an AZ where a usable subnet exists. Using the example AWSCluster specification listed above, this means I could tell Cluster API to use us-west-2a, us-west-2b, or us-west-2c. I could not tell Cluster API to use us-west-2d, because there are no Cluster API-usable subnets in that AZ.

For a Machine object, the failureDomain field goes in the Machine’s specification:

spec:
  failureDomain: "us-west-2a"

For a MachineDeployment, the failureDomain field goes in the template specification:

spec:
  template:
    spec:
      failureDomain: "us-west-2b"

Given the nature of a MachineDeployment, it’s not possible to distribute Machines from a single MachineDeployment across AZs. To use MachineDeployments with multiple AZs, you’d need to use a separate MachineDeployment for each AZ.
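
As a rough sketch, that might look something like the following pair of heavily abbreviated MachineDeployment fragments; the names and replica counts are illustrative, and required fields (clusterName, selector, and the bootstrap/infrastructure references, among others) are omitted:

apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: workload-md-us-west-2a
spec:
  replicas: 2
  template:
    spec:
      failureDomain: "us-west-2a"
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  name: workload-md-us-west-2b
spec:
  replicas: 2
  template:
    spec:
      failureDomain: "us-west-2b"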

With these two changes—adding a networkSpec to the AWSCluster object and adding a failureDomain field to your Machine or MachineDeployment objects—Cluster API will instantiate a Kubernetes cluster whose control plane nodes and worker nodes are distributed across multiple AWS AZs.

Disclaimer

Readers should note that deploying across multiple AZs is not a panacea to cure all availability ills. Although the loss of a single AZ will not (generally) render the cluster unavailable—etcd will maintain a quorum so the API server will continue to function—the control plane may be flooded with the demands of rescheduling Pods, and remaining active nodes may not be able to support the resource requirements of the Pods being rescheduled. The sizing and overall utilization of the cluster will greatly affect the behavior of the cluster and the workloads hosted there in the event of an AZ failure. Careful planning is needed to maximize the availability of the cluster even in the face of an AZ failure. There are also other considerations, like cross-AZ traffic charges, that should be taken into account. There is no “one size fits all” solution.

I hope this post is helpful. If you have questions, please do reach out to me on the Kubernetes Slack instance (I hang out a lot in the #kubeadm and #cluster-api-aws channels), or reach out to me on Twitter. I’d love to hear from you, and possibly help you if I can.

Technology Short Take 125

Welcome to Technology Short Take #125, where I have a collection of articles about various data center and cloud technologies collected from around the Internet. I hope I have managed to find a few useful things for you! (If not, contact me on Twitter and tell me how I can make this more helpful for you.)

Networking

Servers/Hardware

Nothing this time around. I’ll try hard to find something useful for the next Technology Short Take.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Storage

Virtualization

Career/Soft Skills

  • Jessica Dean shares her home office setup, which is timely, since so many folks are (or will be) working from home for a while. Her setup is a bit pricey for my budget, but still provides some useful ideas.

That’s all for now! I hope that you’ve found something useful. I welcome all feedback from readers, so I invite you to contact me on Twitter if you have corrections, feedback, suggestions for improvement, or just want to say hello.

Recent Posts

Using KinD with Docker Machine on macOS

I’ll admit right up front that this post is more “science experiment” than practical, everyday use case. It all started when I was trying some Cluster API-related stuff that leveraged KinD (Kubernetes in Docker). Obviously, given the name, KinD relies on Docker, and when running Docker on macOS you generally would use Docker Desktop. At the time, though, I was using Docker Machine, and as it turns out KinD doesn’t like Docker Machine. In this post, I’ll show you how to make KinD work with Docker Machine.

Read more...

Kustomize Transformer Configurations for Cluster API v1alpha3

A few days ago I wrote an article on configuring kustomize transformers for use with Cluster API (CAPI), in which I explored how users could configure the kustomize transformers—the parts of kustomize that actually modify objects—to be a bit more CAPI-aware. By doing so, using kustomize with CAPI manifests becomes much easier. Since that post, the CAPI team released v1alpha3. In working with v1alpha3, I realized my kustomize transformer configurations were incorrect. In this post, I will share CAPI v1alpha3 configurations for kustomize transformers.

Read more...

Configuring Kustomize Transformers for Cluster API

In November 2019 I wrote an article on using kustomize with Cluster API (CAPI) manifests. The idea was to use kustomize to simplify the management of CAPI manifests for clusters that are generally similar but have minor differences (like the AWS region in which they are running, or the number of Machines in a MachineDeployment). In this post, I’d like to show a slightly different way of using kustomize with Cluster API that involves configuring the kustomize transformers.

Read more...

Updating Visual Studio Code's Kubernetes API Awareness

After attempting (and failing) to get Sublime Text to have some of the same “intelligence” that Visual Studio Code has with certain languages, I finally stopped trying to make Sublime Text work for me and just went back to using Code full-time. As I mentioned in this earlier post, now that I’ve finally sorted out how Code handles wrapping text in brackets, braces, and the like, I’m much happier. (It’s the small things in life.) Now I’ve moved on to tackling how to update Code’s Kubernetes API awareness.

Read more...

An Update on the Tokyo Assignment

Right at the end of 2019 I announced that in early 2020 I was temporarily relocating to Tokyo, Japan, for a six month work assignment. It’s now March, and I’m still in Colorado. So what’s up with that Tokyo assignment, anyway? Since I’ve had several folks ask, I figured it’s probably best to post something here.

Read more...

Modifying Visual Studio Code's Bracketing Behavior

There are two things I’ve missed since I switched from Sublime Text to Visual Studio Code (I switched in 2018). First, the speed. Sublime Text is so much faster than Visual Studio Code; it’s insane. But, the team behind Visual Studio Code is working hard to improve performance, so I’ve mostly resigned myself to it. The second thing, though, was the behavior of wrapping selected text in brackets (or parentheses, curly braces, quotes, etc.). That part annoyed me for two years, until this past weekend, when I finally had enough. Here’s how I modified Visual Studio Code’s bracketing behaviors.

Read more...

HA Kubernetes Clusters on AWS with Cluster API v1alpha2

About six months ago, I wrote a post on how to use Cluster API (specifically, the Cluster API Provider for AWS) to establish highly available Kubernetes clusters on AWS. That post was written with Cluster API (CAPI) v1alpha1 in mind. Although the concepts I presented there worked with v1alpha2 (released shortly after that post was written), I thought it might be helpful to revisit the topic with CAPI v1alpha2 specifically in mind. So, with that, here’s how to establish highly available Kubernetes clusters on AWS using CAPI v1alpha2.

Read more...

Technology Short Take 124

Welcome to Technology Short Take #124! It seems like the natural progression of the Tech Short Takes is moving toward monthly articles, since it’s been about a month since my last one. In any case, here’s hoping that I’ve found something useful for you. Enjoy! (And yes, normally I’d publish this on a Friday, but I messed up and forgot. So, I decided to publish on Monday instead of waiting for Friday.)

Read more...

Region and Endpoint Match in AWS API Requests

Interacting directly with the AWS APIs—using a tool like Postman (or, since I switched back to macOS, an application named Paw)—is something I’ve been doing off and on for a little while as a way of gaining a slightly deeper understanding of the APIs that tools like Terraform, Pulumi, and others are calling when automating AWS. For a while, I struggled with AWS authentication, and after seeing Mark Brookfield’s post on using Postman to authenticate to AWS I thought it might be helpful to share what I learned as well.

Read more...

Retrieving the Kubeconfig for a Cluster API Workload Cluster

Using Cluster API allows users to create new Kubernetes clusters easily using manifests that define the desired state of the new cluster (also referred to as a workload cluster; see here for more terminology). But how does one go about accessing this new workload cluster once it’s up and running? In this post, I’ll show you how to retrieve the Kubeconfig file for a new workload cluster created by Cluster API.

Read more...

Setting up K8s on AWS with Kubeadm and Manual Certificate Distribution

Credit for this post goes to Christian Del Pino, who created this content and was willing to let me publish it here.

Setting up Kubernetes on AWS (including the use of the AWS cloud provider) is a topic I’ve tackled a few different times here on this site (see here, here, and here for other posts on this subject). In this post, I’ll share information provided to me by a reader, Christian Del Pino, about setting up Kubernetes on AWS with kubeadm but using manual certificate distribution (in other words, not allowing kubeadm to distribute certificates among multiple control plane nodes). As I pointed out above, all this content came from Christian Del Pino; I’m merely sharing it here with his permission.

Read more...

Building an Isolated Kubernetes Cluster on AWS

In this post, I’m going to explore what’s required in order to build an isolated—or Internet-restricted—Kubernetes cluster on AWS with full AWS cloud provider integration. Here the term “isolated” means “no Internet access.” I initially was using the term “air-gapped,” but these aren’t technically air-gapped so I thought isolated (or Internet-restricted) may be a better descriptor. Either way, the intent of this post is to help guide readers through the process of setting up a Kubernetes cluster on AWS—with full AWS cloud provider integration—using systems that have no Internet access.

Read more...

Creating an AWS VPC Endpoint with Pulumi

In this post, I’d like to show readers how to use Pulumi to create a VPC endpoint on AWS. Until recently, I’d heard of VPC endpoints but hadn’t really taken the time to fully understand what they were or how they might be used. That changed when I was presented with a requirement for the AWS EC2 APIs to be available within a VPC that did not have Internet access. As it turns out—and as many readers are probably already aware—this is one of the key use cases for a VPC endpoint (see the VPC endpoint docs). The sample code I’ll share below shows how to programmatically create a VPC endpoint for use in infrastructure-as-code use cases.

Read more...

Manually Loading Container Images with containerd

I recently had a need to manually load some container images into a Linux system running containerd (instead of Docker) as the container runtime. I say “manually load some images” because this system was isolated from the Internet, and so simply running a container and having containerd automatically pull the image from an image registry wasn’t going to work. The process for working around the lack of Internet access isn’t difficult, but didn’t seem to be documented anywhere that I could readily find using a general web search. I thought publishing it here may help individuals seeking this information in the future.

Read more...

Thinking and Learning About API Design

In July of 2018 I talked about Polyglot, a very simple project I’d launched whose sole purpose was to bolster my software development skills. Work on Polyglot has been sporadic at best, coming in fits and spurts, and thus far focused on building a model for the APIs that would be found in the project. Since I am not a software engineer by training (I have no formal training in software development), all of this is new to me, and I’ve found myself encountering lots of questions about API design along the way. In the interest of helping others who may be in a similar situation, I thought I’d share a bit here.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!