Scott’s Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

A Quick Intro to the AWS CLI

This post provides a (very) basic introduction to the AWS CLI (command-line interface) tool. It’s not intended to be a deep dive, nor is it intended to serve as a comprehensive reference guide (the AWS CLI docs nicely fill that need). I also assume that you already have a basic understanding of the key AWS concepts and terminology, so I won’t bore you with defining an instance, VPC, subnet, or security group.

For the purposes of this introduction, I’ll structure it around launching an EC2 instance. As it turns out, there’s a fair amount of information you need before you can launch an AWS instance using the AWS CLI. So, let’s look at how you would use the AWS CLI to help get the information you need in order to launch an instance using the AWS CLI. (Tool inception!)

To launch an instance, you need five pieces of information:

  1. The ID of an Amazon Machine Image (AMI)
  2. The type of instance you’re going to launch
  3. The name of the SSH keypair you’d like to inject into the instance
  4. The ID of the security group to which this instance should be added
  5. The ID of the subnet on which this instance should be placed

Some (most?) of this information is easily located via the AWS console, but I’ll use the CLI nevertheless.

Let’s start by determining the name of the SSH keypair you’d like to inject into the instance (this is assuming you’re launching a Linux-based instance). The basic format for AWS CLI commands looks something like aws <service> <command>. In this case, we’re dealing with the EC2 service, and we want to get a list of—or describe—the SSH keypairs. So the command looks like this:

aws ec2 describe-key-pairs

What you’ll get back is JSON (see this article if you need a quick introduction to JSON) that looks something like this (some of the data has been randomized to protect the innocent):

{
  "KeyPairs": [
    {
      "KeyName": "key_pair_name",
      "KeyFingerprint": "57:ca:27:99:fe:2a:24:60:8e:7f:b4:de:ad:be:ef:f1"
    }
  ]
}

Because it’s JSON, you can use handy tools like jq to manipulate and format this data. (If you aren’t familiar with jq, see this article.) So, let’s say you wanted to extract the keypair name into a variable so you can re-use it later. That command looks something like this:

KEYPAIR=$(aws ec2 describe-key-pairs | jq -r '.KeyPairs[0].KeyName')

Subsequently running echo $KEYPAIR would produce the output key_pair_name, showing that you’ve successfully extracted the name of the keypair into the variable named KEYPAIR. This technique (assigning the output of a command to a variable) is a feature of the Bash shell called command substitution, and I’ll use it extensively in this article to store pieces of information we’ll need later.

Next, you’ll need to know what type of instance you want to launch. Rodney “Rodos” Haywood has a great article on determining which instances are available in your region. For now, we’ll just assume you want to create a “t2.micro” instance.

Next, let’s track down security group and subnet information. To retrieve security group details, you’d use this command:

aws ec2 describe-security-groups

This will return a list of security groups and the rules in those groups, so the output will be lengthy and/or complex. Once again we’ll turn to jq to help pare the information down to only the group ID of the default security group:

aws ec2 describe-security-groups | jq '.SecurityGroups[] | select (.GroupName == "default") | .GroupId'

This finds the group whose GroupName key has the value “default” (i.e., the security group named “default”) and returns the group ID. You could modify the value for which you’re searching as needed, of course.

You can use Bash command substitution to place the output of this command into a variable we’ll use later:

SG_ID=$(aws ec2 describe-security-groups | jq -r '.SecurityGroups[] | select (.GroupName == "default") | .GroupId')

In reviewing the output of the aws ec2 describe-security-groups command, perhaps you notice the security group doesn’t allow SSH access. That will be problematic, as SSH is the nearly-universal means by which you gain access to Linux- and UNIX-based instances. We can fix that pretty easily with the oh-so-intuitively-named aws ec2 authorize-security-group-ingress command:

aws ec2 authorize-security-group-ingress --group-id <value> --protocol <tcp|udp|icmp> --port <value> --cidr <value>

To allow SSH access, then, we’ll use the $SG_ID value we obtained earlier and plug in the correct values for SSH:

aws ec2 authorize-security-group-ingress --group-id $SG_ID \
--protocol tcp --port 22 --cidr 0.0.0.0/0

And now our soon-to-be-launched instance will be available via SSH! (If you decide you don’t want SSH access, just use the aws ec2 revoke-security-group-ingress command, which has the same syntax as the aws ec2 authorize-security-group-ingress command.)
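
For example, revoking the SSH rule we just added would look like this (same values, different verb):

aws ec2 revoke-security-group-ingress --group-id $SG_ID \
--protocol tcp --port 22 --cidr 0.0.0.0/0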

So far, we’ve gathered the SSH key pair, the instance type, and the security group ID. Determining the next piece of information we need—the subnet ID—is pretty straightforward as well. The basic command is aws ec2 describe-subnets; when it’s combined with a jq filter we can get the subnet ID for the subnet in a particular availability zone. Let’s say we want the subnet from the “us-west-2b” availability zone:

aws ec2 describe-subnets | jq '.Subnets[] | select(.AvailabilityZone == "us-west-2b") | .SubnetId'

This uses the same jq syntax we used with the security groups: it finds the subnet object whose AvailabilityZone key is equal to “us-west-2b”, then returns the ID for that subnet.

Naturally, we’ll store this value in a variable for use later (adding the -r flag to jq to return the value in plain text, not as JSON):

SUBNET_ID=$(aws ec2 describe-subnets | jq -r '.Subnets[] | select(.AvailabilityZone == "us-west-2b") | .SubnetId')

We’re now left with only the AMI (Amazon Machine Image). I’ve saved this for last as some might consider it the most complex of the tasks. As you’ve likely guessed, the basic command is aws ec2 describe-images. However, you can’t just run that command; it will return way too much information. (Go ahead and try it, I’ll wait here.)

To limit the amount of information returned by the aws ec2 describe-images command, we need to use some filtering and querying functionality. There are a few different tricks we can use here:

  • First, we can add the --owners flag to limit the command to show AMIs belonging to a specific account. Let’s suppose that we’re looking for an Ubuntu AMI; in that case, we’d use Canonical’s owner ID of “099720109477”, so that the command would look like this:

    aws ec2 describe-images --owners 099720109477
    
  • However, that’s not enough, because we still get far too many values returned. To whittle down the list even further, the AWS CLI supports the --filters parameter, which allows us to add more constraints to the list. The most useful filter, in my opinion, is filtering on the Name value. This allows you to do “wildcard” searches for AMIs whose names match a pattern. Here’s an example:

    aws ec2 describe-images --owners 099720109477 \
    --filters Name=name,Values='*ubuntu-xenial-16.04*'
    

    This will still return too much information, but we can tack on additional filters as needed:

    aws ec2 describe-images --owners 099720109477 \
    --filters Name=root-device-type,Values=ebs \
    Name=architecture,Values=x86_64 \
    Name=name,Values='*ubuntu-xenial-16.04*' \
    Name=virtualization-type,Values=hvm
    

    Here you can see I’ve added filters to show only AMIs that use EBS as the root volume type, are hardware-virtualized images, and are 64-bit images. The full list of available filters for the describe-images command is available here.

  • Our final trick is to add a client-side query to the command. This query uses JMESPath syntax (which I won’t cover here because that’s enough for a separate post on its own). Here’s an example of using a query to show only the most recent AMI that matches the rest of the criteria:

    aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-xenial-16.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId'
    

    The final step would be to store the value this command returns in a variable for use later:

    IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-xenial-16.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId')
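
    One caveat: depending on the output format configured for your AWS CLI (JSON is the default), the value returned by this query may come back wrapped in double quotes, which will trip up the run-instances command later. If you run into that, adding the global --output text flag returns the bare AMI ID instead:

    IMAGE_ID=$(aws ec2 describe-images --owners 099720109477 --filters Name=root-device-type,Values=ebs Name=architecture,Values=x86_64 Name=name,Values='*ubuntu-xenial-16.04*' Name=virtualization-type,Values=hvm --query 'sort_by(Images, &Name)[-1].ImageId' --output text)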
    

At this point, we’ve gathered all the information we need to launch an AWS instance. The command to use is (drum roll, please) aws ec2 run-instances, and we’ll plug in the variables we’ve created along the way to supply all the necessary information. Here’s what the final command would look like (I’ve wrapped it with backslashes for improved readability):

aws ec2 run-instances --image-id $IMAGE_ID \
--count 1 --instance-type t2.micro \
--key-name $KEYPAIR \
--security-group-ids $SG_ID \
--subnet-id $SUBNET_ID

This command will also return some JSON, which will include the instance IDs of the instances we just launched. We’ll ignore that output for now for the sake of being able to show a few more ways to use the AWS CLI.
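
If you did want to grab the new instance’s ID right away, rather than looking it up afterward as we’ll do below, you could pipe the run-instances output straight through jq. This is just a sketch, and it assumes a single instance was launched:

aws ec2 run-instances --image-id $IMAGE_ID \
--count 1 --instance-type t2.micro \
--key-name $KEYPAIR \
--security-group-ids $SG_ID \
--subnet-id $SUBNET_ID | jq -r '.Instances[0].InstanceId'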

With an instance now running, let’s retrieve the details of the instance with another AWS CLI command:

aws ec2 describe-instances

This command also returns a pretty fair amount of information, and again we can use jq to filter down the information to show only the details you need. Suppose you need to see the private and public IP addresses assigned to the instance you just launched. With this set of jq filters, that’s exactly what you’ll see:

aws ec2 describe-instances | jq '.Reservations[].Instances[] | { instance: .InstanceId, publicip: .PublicIpAddress, privateip: .PrivateIpAddress }'
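
If you only care about one particular instance and already know its ID, you can also pass --instance-ids to describe-instances so that only that instance comes back (the ID below is just a made-up placeholder):

aws ec2 describe-instances --instance-ids i-0123456789abcdef0 | \
jq '.Reservations[].Instances[] | { instance: .InstanceId, publicip: .PublicIpAddress, privateip: .PrivateIpAddress }'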

Handy! I could run through a dozen more examples, but I think you get the point now.

Let’s move on to terminating an instance. To terminate an instance (and you guessed it, the appropriate command is aws ec2 terminate-instances), we’ll need the instance IDs. Turning back to jq again:

aws ec2 describe-instances | jq '.Reservations[].Instances[] | .InstanceId'

And once again we use command substitution to store that in a variable:

INSTANCE_ID=$(aws ec2 describe-instances | jq -r '.Reservations[].Instances[] | .InstanceId')

Then we can plug that value into this command to terminate instances:

aws ec2 terminate-instances --instance-ids $INSTANCE_ID
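
Termination isn’t instantaneous; if you need to block until the instances are actually gone (before tearing down other resources, say), the AWS CLI’s wait subcommands come in handy:

aws ec2 wait instance-terminated --instance-ids $INSTANCE_ID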

Hopefully this quick introduction to basic tasks you can perform with the AWS CLI has been useful. Feel free to hit me on Twitter if you have questions. If you have suggestions for improving this article, I invite you to open an issue or file a PR on the GitHub repository for this site.

Examining X.509 Certificates Embedded in Kubeconfig Files

While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.

First, you’ll want to extract the certificate data from the kubeconfig file. For the purposes of this post, I’ll use a kubeconfig file named config, found in the .kube subdirectory of your home directory. Assuming there’s only a single certificate embedded in the file, you can use a simple grep statement to isolate this information:

grep 'client-certificate-data' $HOME/.kube/config

Combine that with awk to isolate only the certificate data:

grep 'client-certificate-data' $HOME/.kube/config | awk '{print $2}'
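
As an aside, if you have kubectl and jq handy, you can pull the same Base64-encoded value without grep and awk; this sketch assumes the kubeconfig has only a single user entry:

kubectl config view --raw -o json | \
jq -r '.users[0].user["client-certificate-data"]'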

This data is Base64-encoded, so we decode it (I’ll wrap the command using backslashes for readability now that it has grown a bit longer):

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d

You could, at this stage, redirect the output into a file (like certificate.crt) if so desired; the data you have is a valid X.509v3 certificate. It lacks the private key, of course.

However, if you’re only interested in viewing the properties of the certificate, as I was, there’s no need to redirect the output to a file. Instead, just pipe the output into openssl:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | openssl x509 -text

The output of this command should be a decoded breakdown of the data in the X.509 certificate. One notable piece of information in this context is the Subject, which identifies the user being authenticated to Kubernetes with this certificate:

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 8264125584782928183 (0x72b0126f24342937)
    Signature Algorithm: sha256WithRSAEncryption
        Issuer: CN=kubernetes
        Validity
            Not Before: Jun 13 01:52:46 2018 GMT
            Not After : Jun 13 01:53:17 2019 GMT
        Subject: CN=system:kube-controller-manager

Also of interest is the X509v3 Extended Key Usage (which indicates the certificate is used for client authentication, i.e., “TLS Web Client Authentication”):

        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature, Key Encipherment
            X509v3 Extended Key Usage: 
                TLS Web Client Authentication

Note that the certificate is not configured for encryption, meaning this certificate doesn’t ensure the connection to Kubernetes is encrypted. That function is handled by a different certificate; this one is only used for client authentication.
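
If you only need a field or two rather than the full breakdown, openssl can print just those; for example, to show only the subject and the expiration date:

grep 'client-certificate-data' $HOME/.kube/config | \
awk '{print $2}' | base64 -d | \
openssl x509 -noout -subject -enddate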

This is probably nothing new for experienced Kubernetes folks, but I thought it might prove useful to a few people out there. Feel free to hit me up on Twitter with any corrections, clarifications, or questions. Have fun examining certificate data!

Using Variables in AWS Tags with Terraform

I’ve been working to deepen my Terraform skills recently, and one avenue I’ve been using to help in this area is expanding my use of Terraform modules. If you’re unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable abstraction/function that is heavily parameterized and can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can’t use variable interpolation in the normal way for AWS tags. Here’s a workaround I found in my research and testing.

Normally, variable interpolation in Terraform would allow one to do something like this (this is taken from the aws_instance resource):

tags {
    Name = "${var.name}-${count.index}"
    role = "${var.role}"
}

This approach works, creating tags whose keys are “Name” and “role” and whose values are the interpolated variables. (I am, in fact, using this exact snippet of code in some of my Terraform modules.) Given that this works, I decided to extend it in a way that would allow the code calling the module to supply both the key as well as the value, thus providing more flexibility in the module. I arrived at this snippet:

tags {
    Name = "${var.name}-${count.index}"
    role = "${var.role}"
    "${var.opt_tag_name}" = "${var.opt_tag_value}"
}

The idea here is that the opt_tag_name variable contains a tag key, and the opt_tag_value contains the associated tag value.

Unfortunately, this doesn’t work: instead of being interpolated, the opt_tag_name reference is applied literally as the key, and the value isn’t interpolated at all (it comes through blank). I’m not really sure why this is the case, but after some searching I came across this GitHub issue that provides a workaround.

The workaround has two parts:

  1. A local definition
  2. Mapping the tags into the aws_instance resource

The local definition looks like this:

locals {
    common_tags = "${map(
        "${var.opt_tag_name}", "${var.opt_tag_value}",
        "role", "${var.role}"
    )}"
}

This sets up a “local variable” that is scoped to the module. In this case, it’s a map of keys and values. One of the keys is determined by interpolating the opt_tag_name variable, and the value for that key is determined by the opt_tag_value variable. The second key is “role”, and its value is taken by interpolating the role variable.

The second step is to use this local definition. In the aws_instance resource itself, you’ll reference it along with any additional tags like this:

tags = "${merge(
    local.common_tags,
    map(
        "Name", "${var.name}-${count.index}"
    )
)}"

This snippet of code sets up a new map that defines a tag with the “Name” key, whose value is taken from the name variable and suffixed with a count (derived from the number of instances being created). It then uses the merge function to create a union of the two maps, which the AWS provider uses to set the tags on the AWS instance.

Taken together, this allows me to provide both the tag name and the tag value to the module, which will then pass those along to the AWS instances created by the module. Now I just need to figure out how to make this truly optional, so that the module doesn’t try to create the tag at all if no values are passed to the module. I haven’t figured that part out (yet).

More Resources

Here’s the Terraform documentation on modules, though—to be honest—I haven’t found it to be as helpful as I’d like.

A Quadruple-Provider Vagrant Environment

In October 2016 I wrote about a triple-provider Vagrant environment I’d created that worked with VirtualBox, AWS, and the VMware provider (tested with VMware Fusion). Since that time, I’ve incorporated Linux (Fedora, specifically) into my computing landscape, and I started using the Libvirt provider for Vagrant (see my write-up here). With that in mind, I updated the triple-provider environment to add support for Libvirt and make it a quadruple-provider environment.

To set expectations, I’ll start out by saying there isn’t a whole lot here that is dramatically different than the triple-provider setup that I shared back in October 2016. Obviously, it supports more providers, and I’ve improved the setup so that no changes to the Vagrantfile are needed (everything is parameterized).

With that in mind, let’s take a closer look. First, let’s look at the Vagrantfile itself:

# Specify minimum Vagrant version and Vagrant API version
Vagrant.require_version '>= 1.6.0'
VAGRANTFILE_API_VERSION = '2'

# Require 'yaml' module
require 'yaml'

# Read YAML file with VM details (box, CPU, and RAM)
machines = YAML.load_file(File.join(File.dirname(__FILE__), 'machines.yml'))

# Create and configure the VMs
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|

  # Always use Vagrant's default insecure key
  config.ssh.insert_key = false

  # Iterate through entries in YAML file to create VMs
  machines.each do |machine|

    # Configure the AWS provider
    config.vm.provider 'aws' do |aws|

      # Specify default AWS key pair
      aws.keypair_name = machine['aws']['keypair']

      # Specify default region
      aws.region = machine['aws']['region']
    end # config.vm.provider 'aws'

    config.vm.define machine['name'] do |srv|

      # Don't check for box updates
      srv.vm.box_check_update = false

      # Set machine's hostname
      srv.vm.hostname = machine['name']

      # Use dummy AWS box by default (override per-provider)
      srv.vm.box = 'aws-dummy'

      # Configure default synced folder (disable by default)
      if machine['sync_disabled'] != nil
        srv.vm.synced_folder '.', '/vagrant', disabled: machine['sync_disabled']
      else
        srv.vm.synced_folder '.', '/vagrant', disabled: true
      end #if machine['sync_disabled']

      # Iterate through networks as per settings in machines.yml
      machine['nics'].each do |net|
        if net['ip_addr'] == 'dhcp'
          srv.vm.network net['type'], type: net['ip_addr']
        else
          srv.vm.network net['type'], ip: net['ip_addr']
        end # if net['ip_addr']
      end # machine['nics'].each

      # Configure CPU & RAM per settings in machines.yml (Fusion)
      srv.vm.provider 'vmware_fusion' do |vmw, override|
        vmw.vmx['memsize'] = machine['ram']
        vmw.vmx['numvcpus'] = machine['vcpu']
        override.vm.box = machine['box']['vmw']
        if machine['nested'] == true
          vmw.vmx['vhv.enable'] = 'TRUE'
        end #if machine['nested']
      end # srv.vm.provider 'vmware_fusion'

      # Configure CPU & RAM per settings in machines.yml (VirtualBox)
      srv.vm.provider 'virtualbox' do |vb, override|
        vb.memory = machine['ram']
        vb.cpus = machine['vcpu']
        override.vm.box = machine['box']['vb']
        vb.customize ['modifyvm', :id, '--nictype1', 'virtio']
        vb.customize ['modifyvm', :id, '--nictype2', 'virtio']
      end # srv.vm.provider 'virtualbox'

      # Configure CPU & RAM per settings in machines.yml (Libvirt)
      srv.vm.provider 'libvirt' do |lv,override|
        lv.memory = machine['ram']
        lv.cpus = machine['vcpu']
        override.vm.box = machine['box']['lv']
        if machine['nested'] == true
          lv.nested = true
        end # if machine['nested']
      end # srv.vm.provider 'libvirt'

      # Configure per-machine AWS provider/instance overrides
      srv.vm.provider 'aws' do |aws, override|
        override.ssh.private_key_path = machine['aws']['key_path']
        override.ssh.username = machine['aws']['user']
        aws.instance_type = machine['aws']['type']
        aws.ami = machine['box']['aws']
        aws.security_groups = machine['aws']['security_groups']
      end # srv.vm.provider 'aws'
    end # config.vm.define
  end # machines.each
end # Vagrant.configure

A couple of notes about the above Vagrantfile:

  • All the data is pulled from an external YAML file named machines.yml; more information on that shortly.
  • The “magic,” if you will, is in the provider overrides. HashiCorp recommends against provider overrides, but in my experience they’re a necessity when working with multi-provider setups. Within each provider override block, we set provider-specific details and adjust the box needed (because finding boxes that support multiple platforms is downright impossible in many cases).
  • The machine['nics'].each do |net| section works for the local virtualization providers (VirtualBox, VMware, and Libvirt), but is silently ignored for AWS. That made writing the Vagrantfile much easier, in my opinion. Note that the last time I really tested the Libvirt provider there was some weirdness with the network configuration; the configuration shown above works as expected. Other configurations may not.

Now, let’s look at the external YAML data file that feeds Vagrant the information it needs:

- aws:
    type: "t2.medium"
    user: "ubuntu"
    key_path: "~/.ssh/id_rsa"
    security_groups:
      - "default"
      - "test"
    keypair: "ssh_keypair"
    region: "us-west-2"
  box:
    aws: "ami-db710fa3"
    lv: "generic/ubuntu1604"
    vb: "ubuntu/xenial64"
    vmw: "bento/ubuntu-16.04"
  name: "xenial-01"
  nested: false
  nics:
    - type: "private_network"
      ip_addr: "dhcp"
  ram: "512"
  sync_disabled: true
  vcpu: "1"

This is pretty straightforward YAML. This configuration does support multiple VMs/instances, with one interesting twist. When working with multiple AWS instances, you only need to specify the AWS keypair and AWS region on the last instance defined in the YAML file. You can include it for all instances, if you like, but only the values on the last instance will actually apply. I may toy around with supporting multi-region configurations, but that is kind of far down my priority list. The other AWS-specific values (type, user, and path to private key) need to be specified for all instances in the YAML file.

To use this environment, you only need to edit the external YAML file with the appropriate values and make sure authentication against AWS is working as expected. Installing and configuring the AWS CLI is an easy way to verify this; alternately, you could use something like aws-vault.
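
A quick way to confirm that the AWS CLI can authenticate is to ask AWS who you are; this should return the account ID, user ID, and ARN for the credentials in use:

aws sts get-caller-identity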

Then it’s just a matter of running the appropriate command for your particular environment:

vagrant up --provider=aws (to spin up instances on AWS)
vagrant up --provider=virtualbox (to spin up VirtualBox VMs locally)
vagrant up --provider=vmware_fusion (to use Fusion to create local VMs)
vagrant up --provider=libvirt (to create Libvirt guest domains locally)
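
If you get tired of typing the --provider flag, Vagrant also honors the VAGRANT_DEFAULT_PROVIDER environment variable; for example, to make Libvirt the default for subsequent commands:

export VAGRANT_DEFAULT_PROVIDER=libvirt
vagrant up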

Using this sort of technique to support multiple providers in a single Vagrant environment provides a clean, consistent workflow regardless of backend provider. Naturally, this could be extended to include other providers using the same basic techniques I’ve used here. I’ll leave that as an exercise to the readers.

My Use Case

You might be wondering, “Why did you put effort into this?” It’s pretty simple, really. I’m working on a project where I needed to be able to quickly and easily spin up a few instances on AWS. I felt like Terraform was a bit too “heavy” for this, as all I really needed was the ability to launch an instance or two, interact with the instances, then tear them down. Yes, I could have done this with the AWS CLI, but…really? I knew that Vagrant worked with AWS, and I already use Vagrant for other purposes. It seemed pretty natural to incorporate the AWS support in Vagrant into my existing environments, and this quadruple-provider environment was the result. Enjoy!

Technology Short Take 101

Welcome to Technology Short Take #101! I have (hopefully) crafted an interesting and varied collection of links for you today, spanning all the major areas of modern data center technology. Now you have some reading material for this weekend!

Networking

Servers/Hardware

  • AWS adds local NVMe storage to the M5 instance family; more details here. What I found interesting is that the local NVMe storage is also hardware encrypted. AWS also mentions that these M5d instances are powered by (in their words) “Custom Intel Xeon Platinum” processors, which just goes to confirm the long-known fact that AWS is leveraging custom Intel CPUs in their stuff (as are all the major cloud providers, I’m sure).

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

  • Speaking of EKS, here’s a new command-line interface for EKS, courtesy of Weaveworks.
  • Along with the GA of EKS, HashiCorp has a release of the Terraform AWS provider that has EKS support. More details are available here.
  • Google recently announced kustomize, a tool that provides a new approach to customizing Kubernetes object configuration.
  • Following after a recent post involving parsing AWS instance data with jq, a Twitter follower pointed out jid (the JSON incremental digger). Handy tool!
  • I’m seeing a fair amount of attention on podman, a tool primarily backed by Red Hat that aims to replace Docker as the client-side tool of choice. The latest was this post. Anyone else spent any quality time with this tool and have some feedback?
  • Nick Janetakis has a collection of quick “Docker tips” that you may find useful; the latest one shows how to see all your container’s environment variables.

Storage

  • This is an interesting announcement from a few weeks ago that I missed—Dell EMC will offer Isilon on Google Cloud Platform. See Chris Evans’ article here. (I’ll leave the analysis and pontificating to Chris, who’s much better at it than I am.)

Virtualization

Career/Soft Skills

  • I found this two-part series (part 1, part 2) on understanding how to process information to help you get organized to be an interesting (and quick) read. It’s been my experience that improving your skills at being organized often reaps benefits in other areas.

OK, that’s all this time around. I hope you found something useful in this post. As always, your feedback is welcome; feel free to hit me up on Twitter.

Recent Posts

Exploring Kubernetes with Kubeadm, Part 1: Introduction

I recently started using kubeadm more extensively than I had in the past to serve as the primary tool by which I stand up Kubernetes clusters. As part of this process, I also discovered the kubeadm alpha phase subcommand, which exposes different sections (phases) of the process that kubeadm init follows when bootstrapping a cluster. In this blog post, I’d like to kick off a series of posts that explore how one could use the kubeadm alpha phase command to better understand the different components within Kubernetes, the relationships between components, and some of the configuration items involved.

Read more...

Book Review: Infrastructure as Code

As part of my 2018 projects, I committed to reading and reviewing more technical books this year. As part of that effort, I recently finished reading Infrastructure as Code, authored by Kief Morris and published in September 2015 by O’Reilly (more details here). Infrastructure as code is very relevant to my current job function and is an area of great personal interest, and I’d been half-heartedly working my way through the book for some time. Now that I’ve completed it, here are my thoughts.

Read more...

Technology Short Take 100

Wow! This marks 100 posts in the Technology Short Take series! For almost eight years (Technology Short Take #1 was published in August 2010), I’ve been collecting and sharing links and articles from around the web related to major data center technologies. Time really flies when you’re having fun! Anyway, here is Technology Short Take 100…I hope you enjoy!

Read more...

Quick Post: Parsing AWS Instance Data with JQ

I recently had a need to get a specific subset of information about some AWS instances. Naturally, I turned to the CLI and some CLI tools to help. In this post, I’ll share the command I used to parse the AWS instance data down using the ever-so-handy jq tool.

Read more...

Posts from the Past, May 2018

This month—May 2018—marks thirteen years that I’ve been generating content here on this site. It’s been a phenomenal 13 years, and I’ve enjoyed the opportunity to share information with readers around the world. To celebrate, I thought I’d do a quick “Posts from the Past” and highlight some content from previous years. Enjoy!

Read more...

DockerCon SF 18 and Spousetivities

DockerCon SF 18 is set to kick off in San Francisco at the Moscone Center from June 12 to June 15. This marks the return of DockerCon to San Francisco after being held in other venues for the last couple of years. Also returning to San Francisco is Spousetivities, which has organized activities for spouses, significant others/domestic partners, friends, and family members traveling with conference attendees!

Read more...

Manually Installing Firefox 60 on Fedora 27

Mozilla recently released version 60 of Firefox, which contains a number of pretty important enhancements (as outlined here). However, the Fedora repositories don’t (yet) contain Firefox 60 (at least not for Fedora 27), so you can’t just do a dnf update to get the latest release. With that in mind, here are some instructions for manually installing Firefox 60 on Fedora 27.

Read more...

One Week Until Spousetivities in Vancouver

Only one week remains until Spousetivities kicks off in Vancouver at the OpenStack Summit! If you are traveling to the Summit with a spouse, significant other, family member, or friend, I’d encourage you to take a look at the great activities Crystal has arranged during the Summit.

Read more...

Technology Short Take 99

Welcome to Technology Short Take 99! What follows below is a collection of various links and articles about (mostly) data center-related technologies. Hopefully something I’ve included will be useful. Here goes!

Read more...

Installing GitKraken on Fedora 27

GitKraken is a full-featured graphical Git client with support for multiple platforms. Given that I’m trying to live a multi-platform life, it made sense for me to give this a try and see whether it is worth making part of my (evolving and updated) multi-platform toolbelt. Along the way, though, I found that GitKraken doesn’t provide an RPM package for Fedora, and that the installation isn’t as straightforward as one might hope. I’m documenting the procedure here in the hope of helping others.

Read more...

An Updated Look at My Multi-Platform Toolbelt

In early 2017 I posted about my (evolving) multi-platform toolbelt, describing some of the applications, standards, and services that I use across my Linux and macOS systems. In this post, I’d like to provide an updated review of that toolbelt.

Read more...

Technology Short Take 98

Welcome to Technology Short Take #98! Now that I’m starting to get settled into my new role at Heptio, I’ve managed to find some time to pull together another collection of links and articles pertaining to various data center technologies. Feedback is always welcome!

Read more...

List of Kubernetes Folks on Twitter

Earlier this morning, I asked on Twitter about good individuals to follow for Kubernetes information. I received quite a few good responses (thank you!), and I thought it might be useful to share the list of the folks that were recommended across all those responses.

Read more...

Review: Lenovo ThinkPad X1 Carbon

As part of the transition into my new role at Heptio (see here for more information), I had to select a new corporate laptop. Given that my last attempt at running Linux full-time was thwarted due primarily to work-specific collaboration issues that would no longer apply (see here), and given that other members of my team (the Field Engineering team) are also running Linux full-time, I thought I’d give it another go. Accordingly, I’ve started working on a Lenovo ThinkPad X1 Carbon (5th generation). Here are my thoughts on this laptop.

Read more...

The Future is Containerized

Last week I announced my departure from VMware, and my intention to step away from VMware’s products and platforms to focus on a new technology area moving forward. Today marks the “official” start of a journey that’s been building for a couple years, a journey that will take me into a future that’s containerized. That journey starts in Seattle, Washington.

Read more...
