Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Programmatically Creating Kubernetes Manifests

A while ago I came across a utility named jk, which purported to be able to create structured text files—in JSON, YAML, or HCL—using JavaScript (or TypeScript that has been transpiled into JavaScript). One of the use cases was creating Kubernetes manifests. The GitHub repository for jk describes it as “a data templating tool”, and that’s accurate for simple use cases. In more complex use cases, the use of a general-purpose programming language like JavaScript in jk reveals that the tool has the potential to be much more than just a data templating tool—if you have the JavaScript expertise to unlock that potential.

The basic idea behind jk is that you could write some relatively simple JavaScript, and jk will take that JavaScript and use it to create some type of structured data output. I’ll focus on Kubernetes manifests here, but as you read keep in mind you could use this for other purposes as well. (I explore a couple other use cases at the end of this post.)

Here’s a very simple example:

// Note: this snippet assumes the Kubernetes object constructors have already
// been imported from jk's Kubernetes library (something along the lines of
// import * as api from '@jkcfg/kubernetes/api';) and that the resulting
// object is exported so jk can render it.
const service = new api.core.v1.Service('appService', {
    metadata: {
        namespace: 'appName',
        labels: {
            app: 'appName',
            team: 'blue',
        },
    },
    spec: {
        selector: {
            app: 'appName',
        },
        ports: [{
            port: 80,
        }],
    },
});

As you might guess, this would generate a manifest for a Kubernetes Service object. In this example, you can see why jk is described as a data templating tool—the user essentially has to recreate a Service object as a JavaScript object, and then jk just does the rendering into YAML. In my opinion, using jk in this way doesn’t really offer a great deal of value, although developers familiar with JavaScript may prefer this format over YAML, and teams could leverage a test framework to test the code. In this instance, one could make a comparison to Pulumi; the same thing could be accomplished using a domain-specific language, but developers may prefer, and be more productive with, a general-purpose programming language like JavaScript.
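To make the comparison concrete, the YAML that jk renders from the example above would look something like this (a sketch of the expected output, assuming the first constructor argument ends up as the object’s metadata.name):

apiVersion: v1
kind: Service
metadata:
  name: appService
  namespace: appName
  labels:
    app: appName
    team: blue
spec:
  selector:
    app: appName
  ports:
  - port: 80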

Given that jk can produce multiple artifacts from the same JavaScript code (if the code is written correctly—the example above is not written to support multiple outputs), a more useful way to leverage jk would be to combine all (or as much as possible) of the information for an application in one place and then generate multiple outputs from it. I won’t post all the code here, but will instead point you to the micro-service example in the jk repository as an example of this approach. (This example is also described in this blog post introducing jk.)

The basic gist of the micro-service example from the jk repository is this:

  • One JavaScript module (kubernetes.js) has a series of functions that generate JavaScript objects that correspond to the major Kubernetes objects (Namespace, Service, Deployment, Ingress, and ConfigMap). This module also imports a couple other JavaScript modules that create Prometheus rules and Grafana dashboards.
  • Other JavaScript modules (like billing.js) import the module with the function definitions and use data from a generic JavaScript object to call the functions and generate multiple YAML manifests. The generic JavaScript object contains all the information needed by all the functions, although each individual function will generally only use a subset of the information in the generic object.

The end result is that a developer can combine all the information needed to deploy an application on Kubernetes in one place, and then programmatically generate the YAML manifests for the Namespace, Deployment, Service, and Ingress from that one source. Need to change a value? Change it in one place, re-generate the manifests, and the value will be changed in all places where it was referenced. This, in my opinion, is a far more useful example of how jk could be leveraged. It does require JavaScript knowledge in order to unlock this functionality, as you have to write actual JavaScript code to define functions, import modules, etc., as opposed to just creating a JavaScript object whose structure mirrors that of the desired output.

I could also envision use cases for jk involving creating both Kubernetes manifests and Terraform HCL (maybe you want to create a Route 53 entry that corresponds to an Ingress object, or maybe you need to update a CDN when creating a new Ingress object). The Quick Start in the project’s documentation briefly discusses using jk to generate Terraform for the GitHub provider as well as generating JSON output, so there’s another example of creating multiple outputs from the same code. I won’t say the possibilities are endless, but hopefully you can see there is quite a bit of flexibility here. The tradeoff, as I mentioned earlier, is that you’ll need JavaScript expertise in order to really leverage this flexibility.

Are you using jk? What sort of use cases can you envision for a tool that enables you to programmatically create configuration files? I’d love to hear from you, so hit me up on Twitter and let me know what you think!

Spousetivities in Barcelona at VMworld EMEA 2019

Barcelona is probably my favorite city in Europe—which works out well, since VMware seems to have settled on Barcelona as the destination for VMworld EMEA. VMworld is back in Barcelona again this year, and I’m fortunate enough to be able to attend. VMworld in Barcelona wouldn’t be the same without Spousetivities, though, and I’m happy to report that Spousetivities will be in Barcelona. In fact, registration is already open!

If you’re bringing along a spouse, significant other, boyfriend/girlfriend, or just some family members, you owe it to them to look into Spousetivities. You’ll be able to focus at the conference knowing that your loved one(s) are not only safe, but enjoying some amazing activities in and around Barcelona. Here’s a quick peek at what Crystal and her team have lined up this year:

  • A wine tour of the Penedes region (southwest of Barcelona)—attendees will get to see some amazing wineries not frequented by tourists!
  • A walking tour of Barcelona
  • A tapas cooking class
  • A fantastic walking tour of Costa Brava, Pals, and Girona
  • A sailing tour (it’s a 3 hour tour, but it won’t end up like Gilligan’s)

Lunch and private transportation are included for all activities, and all activities will depart from the conference center. Times are listed on the registration site.

It’s worth noting—even though I’ve said it before—that these activities are not your run-of-the-mill tourist activities. These are custom activities not available to the general public, specially arranged for Spousetivities participants.

Prices for all these activities are reduced thanks to Veeam’s sponsorship, and to help make things even more affordable there is a Full Week Pass that gives you access to all the activities at an additional discount. I’d like to personally thank Veeam for their continued support—I believe work/life balance is an important defense against burnout, and it’s great to see a company letting their actions demonstrate their support of work/life balance (instead of just empty corporate statements).

These activities will almost certainly sell out, so register today!

(BTW, for all things Spousetivities-related, be sure to check out the newly-updated Spousetivities web site.)

Using Kustomize with Kubeadm Configuration Files

Last week I had a crazy idea: if kustomize can be used to modify YAML files like Kubernetes manifests, then could one use kustomize to modify a kubeadm configuration file, which is also a YAML manifest? So I asked about it in one of the Kubernetes-related channels in Slack at work, and as it turns out it’s not such a crazy idea after all! So, in this post, I’ll show you how to use kustomize to modify kubeadm configuration files.

If you aren’t already familiar with kustomize, I recommend having a look at this blog post, which provides an overview of this tool. For the base kubeadm configuration files to modify, I’ll use kubeadm configuration files from this post on setting up a Kubernetes 1.15 cluster with the AWS cloud provider.

While the blog post linked above provides an overview of kustomize, it certainly doesn’t cover all the functionality kustomize provides. In this particular use case—modifying kubeadm configuration files—the functionality described in the linked blog post doesn’t get you where you need to go. Instead, you’ll have to use the patching functionality of kustomize, which allows you to overwrite specific fields within the YAML definition for an object with new values.

Doing this requires that you provide kustomize with enough information to find the specific object you’d like to modify in the YAML resource files. You do this by telling kustomize the API group and version of the object, the kind of object to modify, and a name that uniquely identifies the object. For example, in the kustomization.yaml file, you’d specify a patch as follows:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- kubeadm.yaml
patches:
- path: cluster-details.yaml
  target:
    group: kubeadm.k8s.io
    version: v1beta2
    kind: ClusterConfiguration
    name: kubeadm-init-clusterconfig

Let’s break this down just a bit:

  • The resources field lists all the base YAML files that can potentially be modified by kustomize. In this case, it points to a single file: kubeadm.yaml (which, as you can guess, is a configuration file for kubeadm).
  • The path field for the patch points to a YAML file that contains the fields and values that will be used to modify the base YAML file.
  • The target fields tell kustomize which object in the base YAML file will be modified, by providing kustomize with the API group, API version, the kind of object, and the name (as provided by the metadata.name field) of the object to be modified.

Put another way, the resources field provides the base YAML file(s), the target says what object will be changed in the resources, and the path specifies a file that contains how the target in the resources will be changed.

(None of this is specific to modifying kubeadm configuration files with kustomize, by the way—the same fields are needed if you want to patch a Kubernetes object, like a Deployment, Service, or Ingress).
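For instance, a patch aimed at a Deployment (the names here are purely hypothetical) would use exactly the same structure in kustomization.yaml:

patches:
- path: replica-count.yaml
  target:
    group: apps
    version: v1
    kind: Deployment
    name: my-app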

In this case, since you want to modify a kubeadm configuration file, the target specifies an API group of “kubeadm.k8s.io”, an API version of “v1beta2”, an object kind of “ClusterConfiguration”, and then a name to uniquely identify the object (because there may be more than one object of the specified API group, version, and kind in the base YAML files). Wait…a name?

“Scott,” you say, “kubeadm configuration files don’t have a name in them.”

Not by default, no—but you can add it, and this is the key to making it possible to use kustomize with a kubeadm configuration file. For example, here’s the kubeadm configuration file from the article on setting up Kubernetes with the AWS cloud provider:

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
clusterName: blogsample
controlPlaneEndpoint: cp-lb.us-west-2.elb.amazonaws.com
controllerManager:
  extraArgs:
    cloud-provider: aws
    configure-cloud-routes: "false"
kubernetesVersion: stable
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws

No metadata.name field anywhere there, and yet this kubeadm configuration file works just fine. Just because the field isn’t required, though, doesn’t mean you can’t add it. To make it possible to use kustomize with this configuration file, you only need to add a metadata.name field to each of the two objects in this file (you do understand there are two objects here, right?), resulting in something like this:

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: kubeadm-init-clusterconfig
apiServer:
  extraArgs:
    cloud-provider: aws
clusterName: blogsample
controlPlaneEndpoint: cp-lb.us-west-2.elb.amazonaws.com
controllerManager:
  extraArgs:
    cloud-provider: aws
    configure-cloud-routes: "false"
kubernetesVersion: stable
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: kubeadm-init-initconfig
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws

Now each object in this file can be uniquely identified by a target field in kustomization.yaml, which allows you to modify it using a patch. Handy, right!?

The final piece is the actual patch itself, which is simply a list of the fields to overwrite. As an example, here’s a patch that modifies the clusterName and controlPlaneEndpoint fields:

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
metadata:
  name: kubeadm-init-clusterconfig
clusterName: oregon-cluster-1
controlPlaneEndpoint: different-dns-name.route53-domain.com

With all three pieces that are needed—the base YAML file (in this case a kubeadm configuration file with the metadata.name field added to each object), the kustomization.yaml which specifies the patch, and the patch YAML itself—in place, running kustomize build . results in this output:

apiServer:
  extraArgs:
    cloud-provider: aws
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: oregon-cluster-1
controlPlaneEndpoint: different-dns-name.route53-domain.com
controllerManager:
  extraArgs:
    cloud-provider: aws
    configure-cloud-routes: "false"
kind: ClusterConfiguration
kubernetesVersion: stable
metadata:
  name: kubeadm-init-clusterconfig
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
metadata:
  name: kubeadm-init-initconfig
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: aws

You’ll note that kustomize alphabetizes the fields in the base YAML, which is why they are in a different order in this sample output compared to the base YAML. However, you’ll also note that the clusterName and controlPlaneEndpoint fields were modified as instructed. Success!

This is interesting, but why is it important? By using kustomize to modify kubeadm configuration files, you can declaratively describe the configuration of the cluster you wish to bootstrap with kubeadm easily for multiple clusters. In the same way you’d use base resources with overlays for other Kubernetes objects, you can use the same base kubeadm files with overlays for different kubeadm-bootstrapped clusters. This brings some consistency to how you manage the manifests applied to Kubernetes clusters and the kubeadm configuration files used to establish those same clusters. (Oh, and spoiler alert: you can also use kustomize on Cluster API manifests! I have a blog post on that coming soon.)
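As a quick sketch of what that might look like (the directory layout and names here are hypothetical), you could keep kubeadm.yaml in a base directory and give each cluster its own overlay directory containing a kustomization.yaml and a cluster-specific patch file like the cluster-details.yaml shown earlier. The overlay’s kustomization.yaml would look something like this:

---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../base
patches:
- path: cluster-details.yaml
  target:
    group: kubeadm.k8s.io
    version: v1beta2
    kind: ClusterConfiguration
    name: kubeadm-init-clusterconfig

(Newer versions of kustomize prefer listing the base directory under resources instead of bases.) Running kustomize build against an overlay directory then produces a cluster-specific kubeadm configuration, which you could redirect to a file and pass to kubeadm init via the --config flag.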

If you have any feedback, questions, corrections (I am human and make mistakes—more frequently than I’d like!), or suggestions for improving this blog post, please don’t hesitate to reach out to me on Twitter. I’d be happy to hear from you!

Technology Short Take 119

Welcome to Technology Short Take #119! As usual, I’ve collected some articles and links from around the Internet pertaining to various data center- and cloud-related topics. This installment in the Tech Short Takes series is much shorter than usual, but hopefully I’ve managed to find something that proves to be helpful or informative! Now, on to the content!

Networking

  • Chip Zoller has a write-up on doing HTTPS ingress with Enterprise PKS. Normally I’d put something like this in a different section, but this is as much a write-up on how to configure NSX-T correctly as it is about configuring Ingress objects in Kubernetes.
  • I saw this headline, and immediately thought it was just “cloud native”-washing (i.e., tagging everything as “cloud native”). Fortunately, the diagrams illustrate that there is something substantive behind the headline. The “TL;DR” for those who are interested is that this solution bypasses the normal iptables layer involved in most Kubernetes implementations to load balance traffic directly to Pods in the cluster. Unfortunately, this appears to be GKE-specific.

Servers/Hardware

Nothing this time around. I’ll stay tuned for content to include next time!

Security

  • The Kubernetes project recently underwent a security audit; more information on the audit, along with links to the findings and other details, is available here.
  • Daniel Sagi of Aqua Security explains the mechanics behind a Pod escape using file system mounts.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • Containous, the folks behind the Traefik ingress controller, recently introduced Yaegi, a Go interpreter. I haven’t yet had time to take a closer look at this, but based on what I’ve read so far this might be a useful tool to help accelerate learning Golang. Yaegi is hosted on GitHub.
  • From the same folks, we have Maesh, a “simpler service mesh.” I’m not sure “simple” and “service mesh” belong in the same sentence, but given that I haven’t yet had the time to look into this more deeply I’ll let it slide.
  • Luc Dekens shares how to customize PowerShell to show things like the connected vSphere server, the Git repository or branch, the PowerCLI version, and more. Of course, Linux folks have been doing things like this with Bash for quite a while…
  • Puja Abbassi, a developer advocate at Giant Swarm, discusses the future of container image building by looking at some of the concerns with the existing “Docker way” of building images.
  • Jeff Geerling explains how to test your Ansible roles with Molecule. This is a slightly older post, but considering I found it useful I thought other readers might find it useful as well.

Storage

I don’t have any links to share this time, sorry!

Virtualization

Nope, nothing here either. I’ll stay alert for more content to include in the future.

Career/Soft Skills

  • This checklist is described as a “senior engineer’s checklist,” but it seems to be pretty applicable to most technology jobs these days.

See, I told you it was actually a short take this time! I should have more content to share next time. Until then, feel free to hit me up on Twitter and share any feedback or comments you may have. Thanks, and have a great weekend!

Exploring Cluster API v1alpha2 Manifests

The Kubernetes community recently released v1alpha2 of Cluster API (a monumental effort, congrats to everyone involved!), and with it comes a number of fairly significant changes. Aside from the new Quick Start, there isn’t (yet) a great deal of documentation on Cluster API (hereafter just called CAPI) v1alpha2, so in this post I’d like to explore the structure of the CAPI v1alpha2 YAML manifests, along with links back to the files that define the fields for the manifests. I’ll focus on the CAPI provider for AWS (affectionately known as CAPA).

As a general note, any links back to the source code on GitHub will reference the v0.2.1 release for CAPI and the v0.4.0 release for CAPA, which are the first v1alpha2 releases for these projects.

Let’s start with looking at a YAML manifest to define a Cluster in CAPA (this is taken directly from the Quick Start):

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: capi-quickstart
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    name: capi-quickstart
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default

Right off the bat, I’ll draw your attention to the separate Cluster and AWSCluster objects. CAPI v1alpha2 begins to more cleanly separate CAPI from the various CAPI providers (CAPA, in this case) by providing “generic” CAPI objects that map to provider-specific objects (like AWSCluster for CAPA). The link between the two is found in the infrastructureRef field, which references the AWSCluster object by name.

Astute readers may also note that the API group has changed from “cluster.k8s.io” in v1alpha1 to “cluster.x-k8s.io” in v1alpha2.

More details on these objects and the fields users can use to define them can be found in cluster_types.go (for the CAPI Cluster object) and in awscluster_types.go (for the AWSCluster object in CAPA). In particular, look for the definitions of the ClusterSpec and AWSClusterSpec structs (data structures).

The AWSClusterSpec struct still supports a NetworkSpec struct that users can use to influence how CAPA instantiates infrastructure, so the techniques I outlined here (for creating highly available clusters) and here (for consuming pre-existing AWS infrastructure) should still work with v1alpha2. (I’ll update this post once I’ve had a chance to fully test.)
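For example, a sketch of an AWSCluster object that consumes a pre-existing VPC might look something like this (the field names are based on my reading of the AWSClusterSpec and NetworkSpec structs, and the VPC ID is just a placeholder, so verify against awscluster_types.go before relying on this):

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSCluster
metadata:
  name: capi-quickstart
spec:
  region: us-east-1
  sshKeyName: default
  networkSpec:
    vpc:
      id: vpc-0123456789abcdef0  # placeholder ID for a pre-existing VPC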

Now, let’s look at some v1alpha2 YAML for creating a node in a cluster (in CAPI parlance, a node in a cluster is a Machine; as before, this example is taken from the Quick Start):

apiVersion: cluster.x-k8s.io/v1alpha2
kind: Machine
metadata:
  name: capi-quickstart-controlplane-0
  labels:
    cluster.x-k8s.io/control-plane: "true"
    cluster.x-k8s.io/cluster-name: "capi-quickstart"
spec:
  version: v1.15.3
  bootstrap:
    configRef:
      apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
      kind: KubeadmConfig
      name: capi-quickstart-controlplane-0
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    name: capi-quickstart-controlplane-0
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachine
metadata:
  name: capi-quickstart-controlplane-0
spec:
  instanceType: t3.large
  iamInstanceProfile: "controllers.cluster-api-provider-aws.sigs.k8s.io"
  sshKeyName: default
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfig
metadata:
  name: capi-quickstart-controlplane-0
spec:
  initConfiguration:
    nodeRegistration:
      name: '{{ ds.meta_data.hostname }}'
      kubeletExtraArgs:
        cloud-provider: aws
  clusterConfiguration:
    apiServer:
      extraArgs:
        cloud-provider: aws
    controllerManager:
      extraArgs:
        cloud-provider: aws

In addition to the split into a “generic” CAPI object (a Machine) and a provider-specific object (an AWSMachine), as shown above with Cluster and AWSCluster, the CAPI v1alpha2 release also brings a KubeadmConfig object for nodes. This new object allows users to customize the kubeadm configuration used by CAPI to configure nodes when bringing up the cluster. In this particular example, the KubeadmConfig object is enabling the AWS cloud provider (see this post for more details and links to other related posts regarding the AWS cloud provider).

Readers interested in perusing the Golang code that defines these objects can find them in machine_types.go (for the CAPI Machine object) and in awsmachine_types.go (for the CAPA AWSMachine object). For the KubeadmConfig object, the kubeadm bootstrap provider has its own repository here, and the struct is defined in kubeadmbootstrapconfig_types.go.

Finally, CAPI v1alpha2 still supports the MachineDeployment object (which does for Machines what a Deployment does for Pods), but the underlying provider-specific objects are a bit different. Here’s the example from the Quick Start:

apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: capi-quickstart-worker
  labels:
    cluster.x-k8s.io/cluster-name: capi-quickstart
    # Labels beyond this point are for example purposes,
    # feel free to add more or change with something more meaningful.
    # Sync these values with spec.selector.matchLabels and spec.template.metadata.labels.
    nodepool: nodepool-0
spec:
  replicas: 1
  selector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: capi-quickstart
      nodepool: nodepool-0
  template:
    metadata:
      labels:
        cluster.x-k8s.io/cluster-name: capi-quickstart
        nodepool: nodepool-0
    spec:
      version: v1.15.3
      bootstrap:
        configRef:
          name: capi-quickstart-worker
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: capi-quickstart-worker
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: AWSMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
kind: AWSMachineTemplate
metadata:
  name: capi-quickstart-worker
spec:
  template:
    spec:
      instanceType: t3.large
      iamInstanceProfile: "nodes.cluster-api-provider-aws.sigs.k8s.io"
      sshKeyName: default
---
apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
kind: KubeadmConfigTemplate
metadata:
  name: capi-quickstart-worker
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          name: '{{ ds.meta_data.hostname }}'
          kubeletExtraArgs:
            cloud-provider: aws

What’s new here that wasn’t shown in previous examples are the AWSMachineTemplate and KubeadmConfigTemplate objects, which—as you might suspect—serve as templates for AWSMachine and KubeadmConfig objects for the individual Machines in the MachineDeployment. These template structures are defined in machinedeployment_types.go (for the CAPI MachineDeployment object), in kubeadmconfigtemplate_types.go (for the KubeadmConfigTemplate object), and in awsmachinetemplate_types.go (for the CAPA AWSMachineTemplate object).

I hope this information is useful. There’s definitely more CAPI v1alpha2 content planned, and in the meantime feel free to browse all CAPI-tagged articles. If you have questions, comments, or corrections—we’re all human and make mistakes from time to time—feel free to contact me on Twitter. Thanks!

Recent Posts

An Introduction to Kustomize

kustomize is a tool designed to let users “customize raw, template-free YAML files for multiple purposes, leaving the original YAML untouched and usable as is” (wording taken directly from the kustomize GitHub repository). Users can run kustomize directly, or—starting with Kubernetes 1.14—use the -k flag with kubectl (as in kubectl apply -k) to access the functionality (although the standalone binary is newer than the functionality built into kubectl as of the Kubernetes 1.15 release). In this post, I’d like to provide an introduction to kustomize.

Read more...

Consuming Pre-Existing AWS Infrastructure with Cluster API

All the posts I’ve published so far about Kubernetes Cluster API (CAPI) assume that the underlying infrastructure needs to be created. This is fine, because generally speaking that’s part of the value of CAPI—it will create new cloud infrastructure for every Kubernetes cluster it instantiates. In the case of AWS, this includes VPCs, subnets, route tables, Internet gateways, NAT gateways, Elastic IPs, security groups, load balancers, and (of course) EC2 instances. But what if you didn’t want CAPA to create AWS infrastructure? In this post, I’ll show you how to consume pre-existing AWS infrastructure with Cluster API for AWS (CAPA).

Read more...

Highly Available Kubernetes Clusters on AWS with Cluster API

In my previous post on Kubernetes Cluster API, I showed readers how to use the Cluster API provider for AWS (referred to as CAPA) to instantiate a Kubernetes cluster on AWS. Readers who followed through the instructions in that post may note CAPA places all the nodes for a given cluster in a single AWS availability zone (AZ) by default. While multi-AZ Kubernetes deployments are not without their own considerations, it’s generally considered beneficial to deploy across multiple AZs for higher availability. In this post, I’ll share how to deploy highly-available Kubernetes clusters—defined as having multiple control plane nodes distributed across multiple AZs—using Cluster API for AWS (CAPA).

Read more...

VMworld 2019 Vendor Meeting: Lightbits Labs

Last week at VMworld, I had the opportunity to meet with Lightbits Labs, a relatively new startup working on what they called “disaggregated storage.” As it turns out, their product is actually quite interesting, and has relevance not only in “traditional” VMware vSphere environments but also in environments more focused on cloud-native technologies like Kubernetes.

Read more...

Bootstrapping a Kubernetes Cluster on AWS with Cluster API

Yesterday I published a high-level overview of Cluster API (CAPI) that provides an introduction to some of the concepts and terminology in CAPI. In this post, I’d like to walk readers through actually using CAPI to bootstrap a Kubernetes cluster on AWS. This walkthrough is for the v1alpha1 release of CAPI (a walkthrough for CAPI v1alpha2 is coming).

Read more...

An Introduction to Kubernetes Cluster API

In this post, I’d like to provide a high-level introduction to the Kubernetes Cluster API. The aim of Cluster API (CAPI, for short) is, as outlined in the project’s GitHub repository, “a Kubernetes project to bring declarative, Kubernetes-style APIs to cluster creation, configuration, and management”. This high-level introduction serves to establish some core terminology and concepts upon which I’ll build in future posts about CAPI.

Read more...

Liveblog: VMworld 2019 Day 1 General Session

This is the liveblog from the day 1 general session at VMworld 2019. This year the event is back at Moscone Center in San Francisco, and VMware has already released some juicy news (see here, here, and here) in advance of the keynote this morning, foreshadowing what Pat is expected to talk about.

Read more...

Technology Short Take 118

Welcome to Technology Short Take #118! Next week is VMworld US in San Francisco, CA, and I’ll be there live-blogging and meeting up with folks to discuss all things Kubernetes. If you’re going to be there, look me up! Otherwise, I leave you with this list of links and articles from around the Internet to keep you busy. Enjoy!

Read more...

Creating Tagged Subnets Across AWS AZs Using Pulumi

As I mentioned back in May in this post on creating a sandbox for learning Pulumi, I’ve started using Pulumi more and more for my infrastructure-as-code needs. I did switch from JavaScript to TypeScript (which I know compiles to JavaScript on the back-end, but the strong typing helps a new programmer like me). Recently I had a need to create some resources in AWS using Pulumi, and—for reasons I’ll explain shortly—many of the “canned” Pulumi examples didn’t cut it for my use case. In this post, I’ll share how I created tagged subnets across AWS availability zones (AZs) using Pulumi.

Read more...

Reconstructing the Join Command for Kubeadm

If you’ve used kubeadm to bootstrap a Kubernetes cluster, you probably know that at the end of the kubeadm init command to bootstrap the first node in the cluster, kubeadm prints out a bunch of information: how to copy over the admin Kubeconfig file, and how to join both control plane nodes and worker nodes to the cluster you just created. But what if you didn’t write these values down after the first kubeadm init command? How does one go about reconstructing the proper kubeadm join command?

Read more...

Setting up an AWS-Integrated Kubernetes 1.15 Cluster with Kubeadm

In this post, I’d like to walk through setting up an AWS-integrated Kubernetes 1.15 cluster using kubeadm. Over the last year or so, the power and utility of kubeadm has vastly improved (thank you to all the contributors who have spent countless hours!), and it is now—in my opinion, at least—at a point where setting up a well-configured, highly available Kubernetes cluster is pretty straightforward.

Read more...

Converting Kubernetes to an HA Control Plane

While hanging out in the Kubernetes Slack community, one question I’ve seen asked multiple times involves switching a Kubernetes cluster from a non-HA control plane (single control plane node) to an HA control plane (multiple control plane nodes). As far as I am aware, this isn’t documented upstream, so I thought I’d walk readers through what this process looks like.

Read more...

Technology Short Take 117

Welcome to Technology Short Take #117! Here’s my latest gathering of links and articles from around the World Wide Web (an “old school” reference for you right there). I’ve got a little bit of something for most everyone, except for the storage nerds (I’m leaving that to my friend J Metz this time around). Here’s hoping you find something useful!

Read more...

Accessing the Docker Daemon via an SSH Bastion Host

Today I came across this article, which informed me that (as of the 18.09 release) you can use SSH to connect to a Docker daemon remotely. That’s handy! The article uses docker-machine (a useful but underrated tool, I think) to demonstrate, but the first question in my mind was this: can I do this through an SSH bastion host? Read on for the answer.

Read more...

Decoding a Kubernetes Service Account Token

Recently, while troubleshooting a separate issue, I had a need to get more information about the token used by Kubernetes Service Accounts. In this post, I’ll share a quick command-line that can fully decode a Service Account token.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!