Scott’s Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Additive Loops with Ansible and Jinja2

I don’t know if “additive” is the right word, but it was the best word I could come up with to describe the sort of configuration I recently needed to address in Ansible. In retrospect, the solution seems pretty straightforward, but I’ll include it here just in case it proves useful to someone else. If nothing else, it will at least show some interesting things that can be done with Ansible and Jinja2 templates.

First, allow me to explain the problem I was trying to solve. As you may know, Kubernetes 1.11 was recently released, and along with it a new version of kubeadm, the tool for bootstrapping Kubernetes clusters. Alongside the release, the Kubernetes community published a new setup guide for using kubeadm to create a highly available cluster. This setup guide uses new functionality in kubeadm to allow you to create “stacked masters” (control plane nodes running both the Kubernetes control plane components and the etcd key-value store). Because of the way etcd clusters work, and because of the way you create HA control plane members, the process requires that you start with a single etcd node, then add the second node, and finally add the third node. If you start out with all three, the cluster won’t establish quorum, and bootstrapping a functional Kubernetes control plane will fail.

As a result, this means the kubeadm configuration file used on the first Kubernetes control plane node must be written to boot a new single-node cluster. However, the kubeadm configuration file on the second control plane node specifies the first node and adds the second node. Likewise, the kubeadm configuration file for the third control plane node has the first and second nodes and adds the third node (hence my use of “additive loops” in the blog post title).

So, when I set out to automate this process using Ansible, I knew this wasn’t going to be your standard “run-of-the-mill” inventory loop in a Jinja2 template—at least, that wasn’t all it was going to be.

I arrived at a solution with three templates (one for each of the stacked masters). In the first template, the section for configuring etcd specifies the “initial-cluster” in the following way:

initial-cluster: "{%- for host in groups['masters'] -%}
{%- if loop.first -%}
{{ hostvars[host]['ansible_fqdn'] }}=https://{{ hostvars[host]['ansible_' + primary_interface]['ipv4']['address'] }}:2380{%- endif -%}
{%- endfor -%}"

The result, as you can probably determine, is that the first item in the loop—the first server in that inventory group—is the only item rendered in the template.
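For example, given a “masters” inventory group containing three hosts with the (purely hypothetical) FQDNs and addresses kube-master-1 through kube-master-3 at 172.16.1.11 through 172.16.1.13, this first template would render as:

initial-cluster: "kube-master-1=https://172.16.1.11:2380"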

The second server was a bit more challenging, mostly because of a stupid error on my part. After accounting for my stupid error (if you must know, I was using hostvars[inventory_hostname] instead of hostvars[host]), this is where I ended up:

initial-cluster: "{%- for host in groups['masters'] -%}
{%- if loop.first -%}
{{ hostvars[host]['ansible_fqdn'] }}=https://{{ hostvars[host]['ansible_' + primary_interface]['ipv4']['address'] }}:2380,{%- endif -%}
{%- if (not loop.last and not loop.first) -%}
{{ hostvars[host]['ansible_fqdn'] }}=https://{{ hostvars[host]['ansible_' + primary_interface]['ipv4']['address'] }}:2380{%- endif %}
{%- endfor -%}"

This adds to the previous template by including the if (not loop.last and not loop.first) conditional, which—for a group of three—ends up meaning the second host in the group. Great; now we have a template for the first host which has only the first host, and a template for the second host which has both the first and second hosts listed.
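Rendered against the same hypothetical group, this second template produces a two-member list:

initial-cluster: "kube-master-1=https://172.16.1.11:2380,kube-master-2=https://172.16.1.12:2380"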

The third and final template is, after all, a standard “run-of-the-mill” inventory loop:

initial-cluster: "{%- for host in groups['masters'] -%}
{%- if loop.last -%}
{{ hostvars[host]['ansible_fqdn'] }}=https://{{ hostvars[host]['ansible_' + primary_interface]['ipv4']['address'] }}:2380
{%- else -%}
{{ hostvars[host]['ansible_fqdn'] }}=https://{{ hostvars[host]['ansible_' + primary_interface]['ipv4']['address'] }}:2380,
{%- endif -%}{%- endfor -%}"

This one probably requires no explanation; it simply iterates through the inventory group. The only “special” thing about it is knowing whether it should include a comma after the rendered text or not by testing for loop.last.
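For the hypothetical group used above, it renders all three members:

initial-cluster: "kube-master-1=https://172.16.1.11:2380,kube-master-2=https://172.16.1.12:2380,kube-master-3=https://172.16.1.13:2380"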

I should probably explain, though, the use of the ['ansible_' + primary_interface] syntax. I needed a way to get the IP address of an interface on the target system, but the names of interfaces change depending on platform. For example, on one platform the first Ethernet interface may be called “eth0”; on another, it may be called “enp0s3” or similar. Using the “primary_interface” variable allows me to use the same Ansible playbooks across platforms, adjusting only the variable when interface names change. (This was an Ansible trick I picked up from my colleague Craig.) Ansible substitutes the value of the variable when pulling the host variable to render the template. If I specify “eth0” as the value for “primary_interface”, Ansible will treat that as ansible_eth0 and thus pull the IPv4 address for eth0 when rendering the template. Handy! (Thanks Craig!)
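If you want to verify what this lookup resolves to on a particular host, an ad-hoc run of Ansible’s setup module will show the fact directly. This is just a quick sketch; it assumes an inventory file named hosts, the “masters” group used above, and “eth0” as the value of primary_interface:

ansible -i hosts masters -m setup -a 'filter=ansible_eth0'

The ipv4.address value in the output is exactly what the template lookup returns.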

As I said, looking back on the solution now it seems simple and straightforward. There’s probably a more elegant solution out there somewhere, but for now this will suffice. Oh, and if you’re interested in seeing the full template or the Ansible playbook used to render the template, have a look at the ansible/kubeadm-etcd-template directory in my GitHub “learning-tools” repository. Enjoy!

Technology Short Take 102

Welcome to Technology Short Take 102! I normally try to get these things published biweekly (every other Friday), but this one has taken quite a bit longer to get published. It’s no one’s fault but my own! In any event, I hope that you’re able to find something useful among the links below.

Networking

Security

Cloud Computing/Cloud Management

  • Joe Duffy takes the wraps off Pulumi, which looks very interesting. This is an area where I’d love to spend some time really digging in.
  • Interesting article here on Chick-fil-A’s use of Kubernetes in their restaurants, and the tools/process they follow for establishing those clusters.
  • Jeff Geerling—well-known in the Ansible and Drupal communities—discusses the (perceived) complexity of Kubernetes. He spent some time actually working with Kubernetes to solve a problem, and he came away from that time recognizing (in his words) “most of the complexity is necessary,” and seeing value in using Kubernetes for appropriate use cases. This article is not (in my opinion) a knee-jerk reaction to some of the debate around Kubernetes’ complexity, but rather a measured and honest evaluation of a tool to solve problems.
  • Maish Saidel-Keesing has a two-part (so far?) series on comparing CloudFormation, Terraform, and Ansible (part 1, part 2). This is worth a read if you’re trying to determine which of these tools may be right for your use case.
  • Here’s a three-part series on Helm (part 1, part 2, and part 3).

Operating Systems/Applications

  • This is probably well-known, but I found this little tidbit about read-only containers handy.
  • This article on another reason your Docker containers may be slow was an excellent reminder that containerization is not equal to virtualization (that doesn’t make it better or worse, just different), and therefore can’t be treated the same. Different design and architecture considerations apply in each instance.
  • Lightroom is one of only a few applications that I keep around for macOS; this article gives me some alternatives.
  • Here’s an article on getting started with Buildah, part of a suite of tools being built as alternatives to Docker.
  • Tom Sweeney also talks about Buildah and how it can be used to create small container images. My takeaway from this article is that building really small container images requires either a) arcane knowledge of Docker layers and the Dockerfile, or b) arcane knowledge of little-used yum parameters.
  • Shahidh Muhammad has a review/comparison of various tools aimed at helping developers build and deploy their apps on Kubernetes.
  • William Lam shares his list of Visual Studio Code extensions.
  • Finally, a reasonably usable CLI client for Slack—written in Bash, no less!

Storage

Virtualization

  • Mike Foley walks you through configuring TPM 2.0 on a vSphere 6.7 host. (This must be the “future TPM content” that Mike mentioned.)
  • Ed Haletky documents the approach he uses to produce segregated virtual-in-virtual test environments that live within his production environment.
  • It seems as if there’s quite a bit of complexity in this article on using KubeVirt with GlusterFS, but I can’t tell if that is a byproduct of KubeVirt, GlusterFS, or the combination of the two.

Career/Soft Skills

A couple of useful career-related articles popped up on my radar over the last couple of weeks:

  • First, there’s this article by Eric Lee on IT burnout. If you find yourself experiencing some of the same issues that Eric describes, I’d encourage you to take a deeper look and see if changes are necessary.
  • Next, Cody De Arkland tackles the topic of imposter syndrome. This is an area where I personally have struggled in the past, and I had to learn to humbly accept praise and stop negative self-talk. Talk to someone if you’re wrestling with this.

That’s all for now. Look for the next Technology Short Take in 2-3 weeks, and feel free to contact me via Twitter if you have any links or articles you think I should share. Thanks!

More Handy CLI Tools for JSON

In late 2015 I wrote a post about a command-line tool named jq, which is used for parsing JSON data. Since that time I’ve referenced jq in a number of different blog posts (like this one). However, jq is not the only game in town for parsing JSON data at the command line. In this post, I’ll share a couple more handy CLI tools for working with JSON data.

(By the way, if you’re new to JSON, check out this post for a gentle introduction.)

JMESPath and jp

JMESPath is used both by Amazon Web Services (AWS) in the AWS CLI and by Microsoft in the Azure CLI. For examples of JMESPath in action, see the AWS CLI documentation on the --query functionality, which uses JMESPath queries to pare the output of an AWS CLI command down to just the data you need. (Note that --query is applied client-side by the CLI itself; server-side filtering is handled separately, via parameters like --filters.)

You can also use JMESPath with arbitrary JSON data through the jp command-line utility. As a general-purpose parsing tool, jp is similar in behavior to jq, but I find the JMESPath query language a bit easier to use than jq’s syntax in some situations.

Let’s assume we are working with the output of the command aws ec2 describe-security-groups, which returns—as JSON—a list of security groups and their properties. Naturally, parsing this data down to find only the specific information you need is a prime use case for jp. Perhaps you know that a security group named “internal-only” exists, but you don’t know anything else—only the name. Using jp, we could get more details on that specific group with this command:

aws ec2 describe-security-groups | jp "SecurityGroups[?GroupName == 'internal-only']"

The AWS CLI command will return all the security groups, and jp will filter through the data to return only the properties of the security group whose name (as defined in the “GroupName” field/property) is equal to “internal-only.” Compare that syntax to the equivalent jq syntax:

aws ec2 describe-security-groups | jq '.SecurityGroups[] | select (.GroupName == "internal-only")'

That’s handy, but what if we needed only a particular property of that security group? No problem, we’d just append the property name to the end of the query:

aws ec2 describe-security-groups | jp "SecurityGroups[?GroupName == 'internal-only'].GroupId"

This is fundamentally equivalent to this jq command:

aws ec2 describe-security-groups | jq '.SecurityGroups[] | select (.GroupName == "internal-only").GroupId'

However, if you try both jp and jq as described above, you’ll note a significant difference in the output. First, here’s the output of a jq command like the one above:

"sg-7f7efe02"

Now compare that to the output of the equivalent jp command:

[
  "sg-7f7efe02"
]

As you can see, jq returns a specific value, whereas jp returns an array of values (with only a single item in the array). In order to get all the way down to a single value with jp, you have to extend the query:

aws ec2 describe-security-groups | jp "SecurityGroups[?GroupName == 'internal-only'].GroupId | [0]"

This command will return only the value of the “GroupId” field. jp does support a -u command-line option that is equivalent to the -r option to jq in order to return unquoted (raw) strings. This is helpful when storing the command output into a variable for use later.
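As a quick sketch combining -u with the extended query (reusing the “internal-only” group from the earlier examples), here’s how you might capture a bare group ID in a shell variable:

SG_ID=$(aws ec2 describe-security-groups | \
jp -u "SecurityGroups[?GroupName == 'internal-only'].GroupId | [0]")
echo $SG_ID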

If you need more than a single property, you can build a JSON object from multiple properties by listing them in curly braces at the end of the query, like this:

aws ec2 describe-security-groups | jp "SecurityGroups[?GroupName == 'internal-only'].{ Name: GroupName, ID: GroupId, VPC: VpcId }"

This will return a JSON object with the properties listed.
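For the “internal-only” group from earlier, the output would look something like this (the VPC ID is made up for illustration):

[
  {
    "Name": "internal-only",
    "ID": "sg-7f7efe02",
    "VPC": "vpc-1a2b3c4d"
  }
]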

I may explore writing a more in-depth article on jp in the future; if that’s something in which you’d be interested, please hit me up on Twitter and let me know.

JSON Incremental Digger (jid)

What if you’re not all that well-versed in the JMESPath syntax? Well, there are online simulators and parsers. Another approach would be to use jid, the JSON incremental digger. It doesn’t necessarily follow the JMESPath syntax, but it does allow you to interactively query some JSON data to find what you’re seeking.

To use jid, simply pipe in some JSON data. For example, you could direct the output of the AWS CLI into jid:

aws ec2 describe-instances | jid

This puts you into an interactive screen where you can explore and parse the data to find exactly what you need. jid will supply “suggested” queries at the top of the screen in green; just press Tab to accept the suggestion. Adjust the query at the top until you have the data you need, then press Enter. The specific data you’ve selected in jid will be output to the shell. This is a super-handy way, in my opinion, of exploring JSON data structures with which you aren’t already familiar.

Have any other handy CLI tools for working with JSON? Hit me on Twitter with other tool suggestions, and I’ll update the post with feedback from readers. Thanks!

A Quick Intro to the AWS CLI

This post provides a (very) basic introduction to the AWS CLI (command-line interface) tool. It’s not intended to be a deep dive, nor is it intended to serve as a comprehensive reference guide (the AWS CLI docs nicely fill that need). I also assume that you already have a basic understanding of the key AWS concepts and terminology, so I won’t bore you with defining an instance, VPC, subnet, or security group.

For the purposes of this introduction, I’ll structure it around launching an EC2 instance. As it turns out, there’s a fair amount of information you need before you can launch an AWS instance using the AWS CLI. So, let’s look at how you would use the AWS CLI to help get the information you need in order to launch an instance using the AWS CLI. (Tool inception!)

To launch an instance, you need five pieces of information:

  1. The ID of an Amazon Machine Image (AMI)
  2. The type of instance you’re going to launch
  3. The name of the SSH keypair you’d like to inject into the instance
  4. The ID of the security group to which this instance should be added
  5. The ID of the subnet on which this instance should be placed

Some (most?) of this information is easily located via the AWS console, but I’ll use the CLI nevertheless.

Let’s start by determining the name of the SSH keypair you’d like to inject into the instance (this is assuming you’re launching a Linux-based instance). The basic format for AWS CLI commands looks something like aws <service> <command>. In this case, we’re dealing with the EC2 service, and we want to get a list of—or describe—the SSH keypairs. So the command looks like this:

aws ec2 describe-key-pairs

What you’ll get back is JSON (see this article if you need a quick introduction to JSON) that looks something like this (some of the data has been randomized to protect the innocent):

{
  "KeyPairs": [
    {
      "KeyName": "key_pair_name",
      "KeyFingerprint": "57:ca:27:99:fe:2a:24:60:8e:7f:b4:de:ad:be:ef:f1"
    }
  ]
}

Because it’s JSON, you can use handy tools like jq to manipulate and format this data. (If you aren’t familiar with jq, see this article.) So, let’s say you wanted to extract the keypair name into a variable so you can re-use it later. That command looks something like this:

KEYPAIR=$(aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName')

Subsequently running echo $KEYPAIR would produce the output key_pair_name, showing you that you’ve successfully extracted the name of the keypair into the variable named KEYPAIR. This technique—assigning the output of a command to a variable—is a feature of the Bash shell called command substitution, and I’ll use it extensively in this article to store pieces of information we’ll need later.

Next, you’ll need to know what type of instance you want to launch. Rodney “Rodos” Haywood has a great article on determining which instances are available in your region. For now, we’ll just assume you want to create a “t2.micro” instance.

Next, let’s track down security group and subnet information. To retrieve security group details, you’d use this command:

aws ec2 describe-security-groups

This will return a list of security groups and the rules in the security groups, so the output will be lengthy and/or complex. Once again we’ll turn to jq to help parse the information down to show us only the group ID of the default security group:

aws ec2 describe-security-groups | jq '.SecurityGroups[] | select (.GroupName == "default") | .GroupId'

This finds the group whose GroupName key has the value “default” (i.e., the security group named “default”) and returns the group ID. You could modify the value for which you’re searching as needed, of course.

You can use Bash command substitution to place the output of this command into a variable we’ll use later:

SG_ID=$(aws ec2 describe-security-groups | jq -r '.SecurityGroups[] | select (.GroupName == "default") | .GroupId')

In reviewing the output of the aws ec2 describe-security-groups command, perhaps you notice the security group doesn’t allow SSH access. That will be problematic, as SSH is the nearly-universal means by which you gain access to Linux- and UNIX-based instances. We can fix that pretty easily with the oh-so-intuitively-named aws ec2 authorize-security-group-ingress command:

aws ec2 authorize-security-group-ingress --group-id <value> --protocol <tcp|udp|icmp> --port <value> --cidr <value>

To allow SSH access, then, we’ll use the $SG_ID value we obtained earlier and plug in the correct values for SSH:

aws ec2 authorize-security-group-ingress --group-id $SG_ID \
--protocol tcp --port 22 --cidr 0.0.0.0/0

And now our soon-to-be-launched instance will be available via SSH! (If you decide you don’t want SSH access, just use the aws ec2 revoke-security-group-ingress command, which has the same syntax as the aws ec2 authorize-security-group-ingress command.)
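For instance, to undo the rule we just added:

aws ec2 revoke-security-group-ingress --group-id $SG_ID \
--protocol tcp --port 22 --cidr 0.0.0.0/0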

So far, we’ve gathered the SSH key pair, the instance type, and the security group ID. Determining the next piece of information we need—the subnet ID—is pretty straightforward as well. The basic command is aws ec2 describe-subnets; when it’s combined with a jq filter we can get the subnet ID for the subnet in a particular availability zone. Let’s say we want the subnet from the “us-west-2b” availability zone:

aws ec2 describe-subnets | jq '.Subnets[] | select(.AvailabilityZone == "us-west-2b") | .SubnetId'

This uses the same jq syntax we used with the security groups: finds the subnet object whose AvailabilityZone key is equal to “us-west-2b”, then returns the ID for that subnet.

Naturally, we’ll store this value in a variable for use later (adding the -r flag to jq to return the value in plain text, not as JSON):

SUBNET_ID=$(aws ec2 describe-subnets | jq -r '.Subnets[] | select(.AvailabilityZone == "us-west-2b") | .SubnetId')

We’re now left with only the AMI (Amazon Machine Image). I’ve saved this for last as some might consider it the most complex of the tasks. As you’ve likely guessed, the basic command is aws ec2 describe-images. However, you can’t just run that command; it will return way too much information. (Go ahead and try it, I’ll wait here.)

To limit the amount of information returned by the aws ec2 describe-images command, we need to filter and query the results. There are a few different tricks we can use here:

  • First, we can add the --owners flag to limit the command to show AMIs belonging to a specific account. Let’s suppose that we’re looking for an Ubuntu AMI; in that case, we’d use the owner ID of “099720109477” (the account from which Canonical publishes its official Ubuntu AMIs), so that the command would look like this:

    aws ec2 describe-images --owners 099720109477
    
  • However, that’s not enough, because we still get far too many values returned. To whittle down the list even further, the AWS CLI supports the --filters parameter, which allows us to add more constraints to the list. The most useful filter, in my opinion, is filtering on the AMI name. This allows you to do “wildcard” searches for AMIs whose names match a pattern. Here’s an example:

    aws ec2 describe-images --owners 099720109477 \
    --filters Name=name,Values='*ubuntu-xenial-16.04*'
    

    This will still return too much information, but we can tack on additional filters as needed:

    aws ec2 describe-images --owners 099720109477 \
    --filters Name=root-device-type,Values=ebs \
    Name=architecture,Values=x86_64 \
    Name=name,Values='*ubuntu-xenial-16.04*' \
    Name=virtualization-type,Values=hvm
    

    Here you can see I’ve added filters to show only AMIs that use EBS as the root volume type, are hardware-virtualized images, and are 64-bit images. The full list of available filters for the describe-images command is available here.

  • Our final trick is to add a JMESPath query to the command via the --query parameter. (Unlike the filters above, which are processed server-side, the query is applied client-side by the CLI. I won’t cover JMESPath syntax here because that’s enough for a separate post on its own.) Here’s an example of using a query to show only the most recent AMI that matches the rest of the criteria:

    aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-xenial-16.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId'
    

    The final step would be to store the value this command returns in a variable for use later:

    IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-xenial-16.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId')
    

At this point, we’ve gathered all the information we need to launch an AWS instance. The command to use is (drum roll, please) aws ec2 run-instances, and we’ll plug in the variables we’ve created along the way to supply all the necessary information. Here’s what the final command would look like (I’ve wrapped it with backslashes for improved readability):

aws ec2 run-instances --image-id $IMAGE_ID \
--count 1 --instance-type t2.micro \
--key-name $KEYPAIR \
--security-group-ids $SG_ID \
--subnet-id $SUBNET_ID

This command will also return some JSON, which will include the instance IDs of the instances we just launched. We’ll ignore that output for now for the sake of being able to show a few more ways to use the AWS CLI.

With an instance now running, let’s retrieve the details of the instance with another AWS CLI command:

aws ec2 describe-instances

This command also returns a pretty fair amount of information, and again we can use jq to filter down the information to show only the details you need. Suppose you need to see the private and public IP addresses assigned to the instance you just launched. With this set of jq filters, that’s exactly what you’ll see:

aws ec2 describe-instances | jq '.Reservations[].Instances[] | { instance: .InstanceId, publicip: .PublicIpAddress, privateip: .PrivateIpAddress }'

Handy! I could run through a dozen more examples, but I think you get the point now.

Let’s move on to terminating an instance. In order to terminate an instance—and, you guessed it, the appropriate command is aws ec2 terminate-instances—we’ll need the instance IDs. Turning back to jq again:

aws ec2 describe-instances | jq '.Reservations[].Instances[] | .InstanceId'

And once again we can use command substitution to store that in a variable:

INSTANCE_ID=$(aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | .InstanceId')

Then we can plug that value into this command to terminate instances:

aws ec2 terminate-instances --instance-ids $INSTANCE_ID
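If you want to confirm the instance is actually going away, check its state; it should show “shutting-down” and then “terminated”:

aws ec2 describe-instances --instance-ids $INSTANCE_ID | \
jq -r '.Reservations[].Instances[].State.Name'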

Hopefully this quick introduction to basic tasks you can perform with the AWS CLI has been useful. Feel free to hit me on Twitter if you have questions. If you have suggestions for improving this article, I invite you to open an issue or file a PR on the GitHub repository for this site.

Examining X.509 Certificates Embedded in Kubeconfig Files

While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.

First, you’ll want to extract the certificate data from the kubeconfig file. For the purposes of this post, I’ll use a kubeconfig file named config and found in the .kube subdirectory of your home directory. Assuming there’s only a single certificate embedded in the file, you can use a simple grep statement to isolate this information:

grep 'client-certificate-data' $HOME/.kube/config

Combine that with awk to isolate only the certificate data:

grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}'
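At this point the output is the encoded certificate by itself. Because the blob encodes a PEM certificate, it will always begin with the same recognizable prefix (truncated here):

LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0t...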

This data is Base64-encoded, so we decode it (I’ll wrap the command using backslashes for readability now that it has grown a bit longer):

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d

You could, at this stage, redirect the output into a file (like certificate.crt) if so desired; the data you have is a valid X.509v3 certificate. It lacks the private key, of course.
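If you do want that file, just tack on the redirection:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d > certificate.crt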

However, if you’re only interested in viewing the properties of the certificate, as I was, there’s no need to redirect the output to a file. Instead, just pipe the output into openssl:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | openssl x509 -text

The output of this command should be a decoded breakdown of the data in the X.509 certificate. One notable piece of information in this context is the Subject, which identifies the user being authenticated to Kubernetes with this certificate:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8264125584782928183 (0x72b0126f24342937)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 13 01:52:46 2018 GMT
            Not After : Jun 13 01:53:17 2019 GMT
        Subject: CN=system:kube-controller-manager

Also of interest is the X509v3 Extended Key Usage (which indicates the certificate is used for client authentication, i.e., “TLS Web Client Authentication”):

        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Client Authentication

Note that the certificate is not configured for encryption, meaning this certificate doesn’t ensure the connection to Kubernetes is encrypted. That function is handled by a different certificate; this one is only used for client authentication.
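One final tip: if you only need a field or two rather than the full decoded dump, openssl can print just those. For example, to show the subject and expiration date:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | \
openssl x509 -noout -subject -enddate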

This is probably nothing new for experienced Kubernetes folks, but I thought it might prove useful to a few people out there. Feel free to hit me up on Twitter with any corrections, clarifications, or questions. Have fun examining certificate data!

Recent Posts

Using Variables in AWS Tags with Terraform

I’ve been working to deepen my Terraform skills recently, and one avenue I’ve been using to help in this area is expanding my use of Terraform modules. If you’re unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable abstraction/function that is heavily parameterized and can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can’t use variable interpolation in the normal way for AWS tags. Here’s a workaround I found in my research and testing.

Read more...

A Quadruple-Provider Vagrant Environment

In October 2016 I wrote about a triple-provider Vagrant environment I’d created that worked with VirtualBox, AWS, and the VMware provider (tested with VMware Fusion). Since that time, I’ve incorporated Linux (Fedora, specifically) into my computing landscape, and I started using the Libvirt provider for Vagrant (see my write-up here). With that in mind, I updated the triple-provider environment to add support for Libvirt and make it a quadruple-provider environment.

Read more...

Technology Short Take 101

Welcome to Technology Short Take #101! I have (hopefully) crafted an interesting and varied collection of links for you today, spanning all the major areas of modern data center technology. Now you have some reading material for this weekend!

Read more...

Exploring Kubernetes with Kubeadm, Part 1: Introduction

I recently started using kubeadm more extensively than I had in the past to serve as the primary tool by which I stand up Kubernetes clusters. As part of this process, I also discovered the kubeadm alpha phase subcommand, which exposes different sections (phases) of the process that kubeadm init follows when bootstrapping a cluster. In this blog post, I’d like to kick off a series of posts that explore how one could use the kubeadm alpha phase command to better understand the different components within Kubernetes, the relationships between components, and some of the configuration items involved.

Read more...

Book Review: Infrastructure as Code

As part of my 2018 projects, I committed to reading and reviewing more technical books this year. As part of that effort, I recently finished reading Infrastructure as Code, authored by Kief Morris and published in September 2015 by O’Reilly (more details here). Infrastructure as code is very relevant to my current job function and is an area of great personal interest, and I’d been half-heartedly working my way through the book for some time. Now that I’ve completed it, here are my thoughts.

Read more...

Technology Short Take 100

Wow! This marks 100 posts in the Technology Short Take series! For almost eight years (Technology Short Take #1 was published in August 2010), I’ve been collecting and sharing links and articles from around the web related to major data center technologies. Time really flies when you’re having fun! Anyway, here is Technology Short Take 100…I hope you enjoy!

Read more...

Quick Post: Parsing AWS Instance Data with JQ

I recently had a need to get a specific subset of information about some AWS instances. Naturally, I turned to the CLI and some CLI tools to help. In this post, I’ll share the command I used to parse the AWS instance data down using the ever-so-handy jq tool.

Read more...

Posts from the Past, May 2018

This month—May 2018—marks thirteen years that I’ve been generating content here on this site. It’s been a phenomenal 13 years, and I’ve enjoyed the opportunity to share information with readers around the world. To celebrate, I thought I’d do a quick “Posts from the Past” and highlight some content from previous years. Enjoy!

Read more...

DockerCon SF 18 and Spousetivities

DockerCon SF 18 is set to kick off in San Francisco at the Moscone Center from June 12 to June 15. This marks the return of DockerCon to San Francisco after being held in other venues for the last couple of years. Also returning to San Francisco is Spousetivities, which has organized activities for spouses, significant others/domestic partners, friends, and family members traveling with conference attendees!

Read more...

Manually Installing Firefox 60 on Fedora 27

Mozilla recently released version 60 of Firefox, which contains a number of pretty important enhancements (as outlined here). However, the Fedora repositories don’t (yet) contain Firefox 60 (at least not for Fedora 27), so you can’t just do a dnf update to get the latest release. With that in mind, here are some instructions for manually installing Firefox 60 on Fedora 27.

Read more...

One Week Until Spousetivities in Vancouver

Only one week remains until Spousetivities kicks off in Vancouver at the OpenStack Summit! If you are traveling to the Summit with a spouse, significant other, family member, or friend, I’d encourage you to take a look at the great activities Crystal has arranged during the Summit.

Read more...

Technology Short Take 99

Welcome to Technology Short Take 99! What follows below is a collection of various links and articles about (mostly) data center-related technologies. Hopefully something I’ve included will be useful. Here goes!

Read more...

Installing GitKraken on Fedora 27

GitKraken is a full-featured graphical Git client with support for multiple platforms. Given that I’m trying to live a multi-platform life, it made sense for me to give this a try and see whether it is worth making part of my (evolving and updated) multi-platform toolbelt. Along the way, though, I found that GitKraken doesn’t provide an RPM package for Fedora, and that the installation isn’t as straightforward as one might hope. I’m documenting the procedure here in the hope of helping others.

Read more...

An Updated Look at My Multi-Platform Toolbelt

In early 2017 I posted about my (evolving) multi-platform toolbelt, describing some of the applications, standards, and services that I use across my Linux and macOS systems. In this post, I’d like to provide an updated review of that toolbelt.

Read more...

Technology Short Take 98

Welcome to Technology Short Take #98! Now that I’m starting to get settled into my new role at Heptio, I’ve managed to find some time to pull together another collection of links and articles pertaining to various data center technologies. Feedback is always welcome!

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!