Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 109

Welcome to Technology Short Take #109! This is the first Technology Short Take of 2019. It may be confirmation bias, but I’ve noticed a number of sites adding “Short Take”-type posts to their content lineup. I’ll take that as flattery, even if it wasn’t necessarily intended that way. Enjoy!

Networking

  • Niran Even-Chen says service mesh is a form of virtualization. While I get what Niran is trying to say here, I’m not so sure I agree with the analogy. Sometimes analogies such as this are helpful, but sometimes the analogy brings unnecessary connotations that make understanding new concepts more difficult. One area where I do strongly agree with Niran is in switching your perspective: looking at service mesh from a developer’s perspective gives one quite a different viewpoint than viewing service mesh in an infrastructure light.
  • Jim Palmer has a detailed write-up on DHCP Option 51 and different behaviors from different DHCP clients.
  • Niels Hagoort talks about some network troubleshooting tools in a vSphere/ESXi environment.

Servers/Hardware

Nothing this time around, but I’ll stay alert for items to include next time.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

  • Jorge Salamero Sanz describes how to use the Sysdig Terraform provider to do “container security as code.” I’m a fan of Terraform (despite some of its limitations), so it’s kind of cool to see new providers coming online.
  • OS/2 lives on. ‘Nuff said.
  • This project purports to help you generate an AWS IAM policy with exactly the permissions needed. It’s a bit of a brute force tool, so be sure to read the caveats, warnings, and disclaimers in the documentation!
  • I do manage most of my “dotfiles” in a Git repository, but I’d never heard of rcm before reading this Fedora Magazine article. It might be something worth exploring to supplant/replace my existing system.
  • I found this article by Forrest Brazeal on a step-by-step exploration of moving from a relational database to a single DynamoDB table to be very helpful and very informative. DynamoDB—along with other key-value store solutions—has been something I’ve been really interested in understanding better, but I never could quite grasp how these solutions fit with traditional RDBMSes. I still have tons to learn, but at least now I have a bit of a framework by which to learn more. Thanks Forrest!
  • Steve Flanders provides an introduction to Ambassador, an open source API gateway. This looks interesting, but embedding YAML configuration in annotations seems…odd.
  • Mark Hinkle, a co-founder at TriggerMesh, announces TriggerMesh KLR—the Knative Lambda Runtime that allows users to run AWS Lambda functions in a Knative-enabled Kubernetes cluster. This seems very powerful to me, but I’m no serverless expert so maybe I’m missing something. Would the serverless experts care to weigh in?
  • Via Jeremy Daly’s Off-by-None newsletter, I found Jerry Hargrove’s Cloud Diagrams & Notes site. I haven’t dug in terribly deep yet, but at first glance Jerry’s site looks to be enormously helpful. (I have a suspicion that I’ve probably seen references to Jerry’s site via Corey Quinn’s Last Week in AWS newsletter, too.)
  • AWS users who prefer Visual Studio Code may want to track the development of the AWS Toolkit for Visual Studio Code. It’s early days yet, so keep that in mind.
  • And while we’re talking about Visual Studio Code, Julien Oudot highlights why users should choose Code for their Kubernetes/Docker work.

Storage

Virtualization

  • Marc Weisel shares how to use Cisco IOSv in a Vagrant box with VMware Fusion.
  • Paul Czarkowski talks about how the future of Kubernetes is virtual machines. The title is a bit of linkbait; what Paul is really addressing here is how to solve the multi-tenancy challenges that currently exist with Kubernetes (which wasn’t really designed for multi-tenant deployments). VMs provide good isolation, so VMs could be the method whereby operators can provide the sort of strong isolation that multi-tenant environments need. One small clarification to Paul’s otherwise excellent post: by its own admission on its web page, gVisor is not a VM container technology, but rather uses a different means of providing additional security.

Career/Soft Skills

In the infamous words of Porky Pig, that’s all folks! Feel free to engage with me on Twitter if you have any comments, questions, suggestions, corrections, or clarifications (or if you just want to chat!). I also welcome suggestions for content to include in future instances of Technology Short Take. Thank you for reading!

On Thinking About Infrastructure as Code

I just finished reading Cindy Sridharan’s excellent post titled “Effective Mental Models for Code and Systems,” and some of the points Sridharan makes immediately jumped out to me—not for “traditional” code development, but for the development of infrastructure as code. Take a few minutes to go read the post—seriously, it’s really good. Done reading it? Good, now we can proceed.

Some of these thoughts I was going to share in a planned presentation at Interop ITX in May 2019, but since I’m unable to speak at the conference this year due to schedule conflicts (my son’s graduation from college and a major anniversary trip for me and Crystal), I figured now was as good a time as any, especially given the timing of Sridharan’s post. Also, a lot of these thoughts stem from a discussion with a colleague at work, which in turn led to this Full Stack Journey podcast on practical infrastructure as code.

Anyway, let me get back to Sridharan’s post. One of the things that jumped out to me right away was Sridharan’s proposed hierarchy of needs for code:

Sridharan's hierarchy of needs for code

As you can see in the image (full credit for which belongs to Sridharan, as far as I know), making code understandable lies at the bottom of the hierarchy of needs, meaning it is the most basic need. Until that need is satisfied, you can’t move on to the other needs. Sridharan puts it this way:

Optimizing for understandability can result in optimizing for everything else on the hierarchy depicted above.

Many readers have probably heard of the DRY principle when it comes to writing code. (DRY stands for Don’t Repeat Yourself.) In many of the infrastructure as code examples I see online, the authors tend to use control structures such as Terraform’s count construct when creating multiple infrastructure objects. I’ll use some code that I wrote as an example: consider the use of a module to create a group of AWS instances as illustrated here. Yes, there is very little repetition in this code. The code is modular and re-usable. But is it understandable? Have I optimized for understandability, and (by extension) all the other needs listed in Sridharan’s hierarchy of needs for code?
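For illustration, here’s a minimal sketch of what that count-based pattern typically looks like. (The resource names, variables, and values below are hypothetical and are not taken from the module linked above.)

```hcl
# Hypothetical sketch: three instances created via Terraform's count construct.
variable "ami_id" {}
variable "subnet_id" {}

resource "aws_instance" "node" {
  count         = 3
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
  subnet_id     = "${var.subnet_id}"

  tags = {
    Name = "node-${count.index}"
  }
}
```

Compact, certainly, but a reader has to mentally unroll the count to work out what actually gets created.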

Consider this as well: would I really violate the DRY principle if I were to explicitly spell out, with proper parameterization, the creation of each infrastructure object instead of using a count control structure or a module as a layer of abstraction? Is it not still true that there remains only “a single, unambiguous, authoritative representation” of each infrastructure object?

It seems to me that the latter approach—explicitly spelling out the creation of infrastructure objects in your infrastructure as code—may be a bit more verbose, but it is eminently more understandable and does not violate the DRY principle. It may not be as elegant, but as individuals creating infrastructure as code artifacts, should we be optimizing for elegance, or optimizing for understandability?

Sridharan also talks about being explicit:

…it is worth reiterating this again that implicit assumptions and dependencies are one of the worst offenders when it comes to contributing to the obscurity of code.

Again, it seems to me that—for infrastructure as code especially—being explicit about the creation of infrastructure objects not only contributes to greater understandability, but also helps eliminate implicit assumptions and dependencies. Instead of using a loop or control structure to manage the creation of multiple objects, spell out the creation of those objects explicitly. It may seem like a violation of the DRY principle to have three (nearly) identical snippets of code creating three (nearly) identical compute instances, but applying the DRY principle here means ensuring that each instance is authoritatively represented in the code only once, not that we are minimizing lines of code.
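To continue the hypothetical sketch from above, the explicit version of those same three instances might look something like this:

```hcl
# The same three instances, spelled out explicitly (still hypothetical names
# and values). Each object has exactly one authoritative representation, and
# nothing is hidden behind an index or a layer of abstraction.
resource "aws_instance" "node_a" {
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
  subnet_id     = "${var.subnet_id}"

  tags = {
    Name = "node-a"
  }
}

resource "aws_instance" "node_b" {
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
  subnet_id     = "${var.subnet_id}"

  tags = {
    Name = "node-b"
  }
}

resource "aws_instance" "node_c" {
  ami           = "${var.ami_id}"
  instance_type = "t2.micro"
  subnet_id     = "${var.subnet_id}"

  tags = {
    Name = "node-c"
  }
}
```

It’s more lines of code, yes, but a reader can see at a glance exactly what will be created without mentally unrolling a loop.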

“Now wait,” you say. “It’s not my fault if someone can’t read my Terraform code. They need to learn more about Terraform, and then they’ll better understand how the code works.”

Well, Sridharan talks about that as well in a discussion of properly identifying the target audience of your artifacts:

In general, when identifying the target audience and deciding what narrative they need to be exposed to in order to allow for them to get up and running quickly, it becomes necessary to consider the audience’s background, level of domain expertise and experience.

Sridharan goes on to point out that in situations where both novices and veterans may be present in the target audience, the experience of the novice is key to determining the understandability of the code. So, if we are optimizing for understandability, can we afford to take a “hands off” approach to the maintenance of the code by our successors? Can we guarantee that a successor tasked with maintaining our code will have the same level of knowledge and experience we have?

I’ll stop here; there are more good points that Sridharan makes, but for now this post suffices to capture most of the thinking generated by the article when it comes to infrastructure as code. After I’ve had some time to continue to parse Sridharan’s article, I may come back with some additional thoughts. In the meantime, feel free to engage with me on Twitter if you have some thoughts or perspectives you’d like to share on this matter.

The Linux Migration: December 2018 Progress Report

In December 2016, I kicked off a migration from macOS to Linux as my primary laptop OS. Throughout 2017, I chronicled my progress and challenges along the way; links to all those posts are found here. Although I stopped the migration in August 2017, I restarted it in April 2018 when I left VMware to join Heptio. In this post, I’d like to recap where things stand as of December 2018, after 8 months of full-time use of Linux as my primary laptop OS.

I’ll structure this post roughly as a blend of the formats I used in my April 2017 and July 2017 progress reports.

Hardware

Readers may recall that I was using a Dell Latitude E7370 (see my E7370 hardware review) up until August 2017, when I put the Linux migration on hold indefinitely due to productivity concerns. Upon moving to Heptio, I switched to a Lenovo ThinkPad X1 Carbon (see here for my review of the X1 Carbon—the “TL;DR” is that I love it). In my home office, the X1 Carbon connects to a USB-C expansion hub that provides connectivity to a 34” 21:9 ultrawide curved monitor, external HD webcam, and a USB headset for Zoom meetings. I also recently converted my Mac Pro to Linux as well (see this post for details); that’s a workstation with dual quad-core Xeon CPUs, 24GB of RAM, a 512GB SSD, a 1TB hard drive, and a 700GB PCIe SSD from Micron, also connected to a 34” 21:9 ultrawide curved monitor.

Linux Distribution

Early (very early) in the migration I thought I would end up using Ubuntu 16.04, but I switched to Fedora and haven’t looked back. I started with Fedora 25 on the E7370, switching to Fedora 27 on the X1 Carbon and later upgrading to Fedora 28. The Mac Pro started out with Fedora 27 (this post outlines the reasons why) and was upgraded to Fedora 28.

Applications

By and large, application usage remains mostly unchanged from earlier:

  • Markdown: I switched from Sublime Text to Visual Studio Code, but otherwise my Markdown workflows remain largely the same. I still use Markdown for the vast majority of my content creation.
  • Browsing: While I have Google Chrome installed, I use Firefox for the vast majority of my browsing (I don’t care for the changes Chrome made with regard to signing you into the browser when you sign into Google). I make regular use of the Firefox Multi-Account Containers add-on to streamline some browser-based workflows while also protecting my privacy.
  • Chat/Instant messaging: I continue to use the Slack Linux client and Pidgin. For IRC—though I rarely use it these days—I have HexChat, as before.
  • Cloud storage/sync: GNOME offers built-in integration for Google Drive, and I’m still using Dropbox for non-confidential information. I’ll probably have to start using ODrive again for access to OneDrive.
  • Basic office productivity: I’m still using LibreOffice, though it’s a newer version.
  • Graphics: No changes here, except some occasional use of LibreOffice Draw.
  • Mind mapping: I’m still using XMind, and still evaluating whether a Pro license is needed or not. I haven’t had a strong need for the extra features yet.
  • E-mail: I continue to use Thunderbird, and I resolved the nagging performance concerns that I noted during the July 2017 progress report (these appear to have been caused by some TCP timeouts related to the NAT configuration on my ASA 5505). I’ve settled on IMAP/SMTP for all e-mail access. Having recently re-joined VMware via the Heptio acquisition, I’m currently using TbSync with the Active Sync add-on for calendar and contacts access (it also provides access to the Global Address List). The TbSync CalDAV/CardDAV provider enables access to the existing CalDAV- and CardDAV-based sync service I’m using on my mobile devices, providing a smoother user experience.
  • Task management: Aside from switching from Sublime Text to Visual Studio Code, this solution remains the same (plain text-based files on Dropbox, changes reconciled via a graphical diff program).
  • Calendaring/time management: I gave up on GNOME Calendar and settled for leveraging Lightning inside Thunderbird. Via the TbSync add-on, I have access to corporate and personal calendars. Access to Google Calendar (which is what we were using at Heptio) was and is problematic; I managed to get read-only access from Thunderbird, but read/write access was only through the web interface.
  • Password management: I switched back to 1Password, leveraging the 1Password X add-on for Firefox for access from my Linux systems.
  • Corporate connectivity: This wasn’t needed at Heptio, but now that I’m back at VMware I’m back to using the Linux client for VMware Horizon and vpnc (with integration into GNOME) for VPN connectivity.
  • Social media: I’m just using the Twitter web site; with the API changes, there aren’t really any other options on Linux.

So what’s working well with this configuration?

  • Calendar access via TbSync seems to work really well; I’m really pleased that the CalDAV/CardDAV provider for TbSync works as seamlessly as it does.
  • As I mentioned above, the performance issues I was seeing with e-mail seem to have been resolved. I’m not seeing the same lockups and delays that I was seeing in 2017.
  • Switching to the Materia theme for GNOME/GTK has relieved the eyestrain issues I reported in my July 2017 update.

What’s not working well?

  • Now that I’m back at VMware via the Heptio acquisition, I’m moving back into a Microsoft-heavy environment. I anticipate I’ll run into compatibility issues between LibreOffice and Office 365, as I did before, though there is a chance that the newer version of LibreOffice will work better. It’s still too early to tell.

Where I Still Use macOS

With the migration of my Mac Pro to Fedora 28, that leaves my 2017 MacBook Pro as the only remaining macOS-based system. I will still use it for podcast recording, viewing archived e-mail, and viewing documents in formats that I couldn’t/didn’t translate to a Linux equivalent (OmniGraffle, OmniOutliner, and MindNode, primarily). I’ll also use it to transcode media into industry-standard formats (MP3 and MP4/H.264) that my Linux systems can use. Aside from that, it sees very little regular use.

Summary

Through a combination of technology-related improvements in various applications plus personal growth on my part, I’ve been able to make Linux my full-time primary laptop OS since April 2018. Hopefully, moving back to VMware and its Microsoft-centric environment won’t present the same kind of insurmountable hurdles I saw last time. Time will tell whether my hopes are valid, but I do want to do a better job of keeping readers informed about how things are going overall. As always, if you have questions, comments, or suggestions, I encourage you to contact me on Twitter.

Looking Back: 2018 Project Report Card

Over the last five years or so, I’ve shared with my readers an annual list of projects along with—at the year’s end—a “project report card” on how I fared against the projects I’d set for myself. (For example, here’s my project report card for 2017.) Following that same pattern, then, here is my project report card for 2018.

Here’s the list of projects I established for myself in 2018 (you can also read the associated blog post for more context):

  1. Become extremely fluent in Kubernetes. (Stretch goal: Pass the CKA exam.)
  2. Learn to code/develop in Go.
  3. Make three contributions to open source projects. (Stretch goal: Make five contributions.)
  4. Read and review three technical books. (Stretch goal: Read and review five technical books.)
  5. Complete a “wildcard” project.

So, how did I do? Let’s take a look.

  1. Become extremely fluent in Kubernetes: This is, in my opinion, a hard one to accurately gauge. Why? Well, Kubernetes is a pretty massive project. I saw a tweet recently saying the project was now at a point where no one person can understand all of it. The other factor making it difficult for me to accurately gauge this is the caliber of folks with whom I’m working. I mean, the people on my team are amazing and extremely talented. So, while I made tremendous gains in my Kubernetes knowledge over the course of 2018, can I consider myself fluent, much less extremely fluent? I think I did reasonably well, but I don’t think I can consider myself extremely fluent. Grade: C

  2. Learn to code/develop in Go: In the post announcing my Polyglot project (aimed to help bolster my development/programming skills), I admitted that this was a tall task to take on and that I’d most likely need to, in my own words, “learn to walk (or even crawl!) before I can run.” Although I have improved in my ability to read Golang code and (sometimes) understand what it is doing, my ability to write useful Golang code is still severely lacking. Grade: D

  3. Make three contributions to open source projects: I’m very pleased to report that my contributions to open source projects increased significantly during 2018. This past year, I’ve made contributions—in the form of actual commits as well as opened issues—to Kubernetes, Sonobuoy, Wardroom, and Ark. Looking only at committed changes, I submitted more than five pull requests to Wardroom and Ark, meaning that I achieved my “stretch goal” of making at least five contributions to open source projects. Grade: A

  4. Read and review three technical books: I got close on this one, but only completed 2 technical books (see here and here for the reviews). I’m still working on Sam Newman’s Building Microservices book, but have just been too busy to finish it out and write a review. Had I gotten a more timely start on my 2018 projects, I might have been able to finish out all three. Grade: B

  5. Complete a “wildcard” project: In looking back over 2018, I don’t really see anything that would qualify as a “wildcard project.” That’s not to say I didn’t make progress in areas other than those listed above: I grew tremendously in my knowledge and application of cloud-native projects other than just Kubernetes; I migrated to Linux full-time in April when I joined Heptio (and recently converted my Mac Pro workstation to Fedora, taking another big step forward); I expanded my knowledge and use of Linux containers for various use cases; I learned about RAML and API specifications; and I deepened my skills with various infrastructure-as-code tools. These are all valuable things, but none of them rise (in my opinion) to the level of a “wildcard project.” Polyglot would qualify as one, had I spent more time working on it and gotten more done. However, I did say that I wouldn’t penalize myself for not completing one. Grade: N/A

In summary: not too bad, but there’s still room to improve!

I do have a couple takeaways—lessons learned, if you will—from the pursuit of these projects over the course of 2018:

  • Setting concrete, specific, measurable goals for projects is still an area in which I can improve. I performed better in those areas where I had specific goals associated with each project.
  • Bolstering development skills is harder and more time-consuming than I anticipated. This is an area where I do want to continue to invest, but it’s also an area where following the lesson learned in the previous bullet (e.g., setting specific and measurable goals) is really difficult. What is an effective goal with something like this–lines of code written?

Over the next week or two I’ll be evaluating potential projects—and associated goals that are specific and measurable—for 2019. Look for my list of 2019 projects in mid-January 2019.

Thanks for reading!

The Linux Migration Series

In early 2017 I kicked off an effort to start using Linux as my primary desktop OS, and I blogged about the journey. That particular effort ended in late October 2017. I restarted the migration in April 2018 (when I left VMware to join Heptio), and since that time I’ve been using Linux (Fedora, specifically) full-time. However, I thought it might be helpful to collect the articles I wrote about the experience together for easy reference. Without further ado, here they are.

Initial Progress Report

Final Linux Distro Selection

Virtualization Provider

Other Users’ Stories: Part 1, Part 2, Part 3, Part 4

Creating Presentations

Corporate Collaboration: Part 1, Part 2, Part 3

April 2017 Progress Report

July 2017 Progress Report

Wrap-Up

These are only the articles directly related to the migration efforts, but many more articles were spawned as a result of the project. Browse through all the Fedora-tagged articles to see some related articles.

If you have any questions about migrating to Linux or about any of these articles (or related articles), you’re welcome to contact me on Twitter. I look forward to hearing from you!

Recent Posts

Technology Short Take 108

Welcome to Technology Short Take #108! This will be the last Technology Short Take of 2018, so here’s hoping I can provide something useful for you. Enjoy!

Running Fedora on my Mac Pro

I’ve been working on migrating off macOS for a couple of years (10+ years on a single OS isn’t undone quickly or easily). I won’t go into all the gory details here; see this post for some background and then see this update from last October that summarized my previous efforts to migrate to Linux (Fedora, specifically) as my primary desktop operating system. (What I haven’t blogged about is the success I had switching to Fedora full-time when I joined Heptio.) I took another big step forward in my efforts this past week, when I rebuilt my 2011-era Mac Pro workstation to run Fedora.

KubeCon 2018 Day 2 Keynote

This is a liveblog of the day 2 (Wednesday) keynotes at KubeCon/CloudNativeCon 2018 in Seattle, WA. For additional KubeCon 2018 coverage, check out other articles tagged KubeCon2018.

Liveblog: Hardening Kubernetes Setups

This is a liveblog of the KubeCon NA 2018 session titled “Hardening Kubernetes Setup: War Stories from the Trenches of Production.” The speaker is Puja Abbassi (@puja108 on Twitter) from Giant Swarm. It’s a pretty popular session, held in one of the larger ballrooms up on level 6 of the convention center, and nearly every seat was full.

Liveblog: Linkerd 2.0, Now with Extra Prometheus

This is a liveblog of the KubeCon NA 2018 session titled “Linkerd 2.0, Now with Extra Prometheus.” The speakers are Frederic Branczyk from Red Hat and Andrew Seigner with Buoyant.

KubeCon 2018 Day 1 Keynote

This is a liveblog from the day 1 (Tuesday, December 11) keynote of KubeCon/CloudNativeCon 2018 in Seattle, WA. This will be my first (and last!) KubeCon as a Heptio employee, and I’m looking forward to the event.

Technology Short Take 107

Welcome to Technology Short Take #107! In response to my request for feedback in the last Technology Short Take, a few readers responded in favor of a more regular publication schedule even if that means the articles are shorter in length. Thus, this Tech Short Take may be a bit shorter than usual, but hopefully you’ll still find something useful.

Supercharging my CLI

I spend a lot of time in the terminal. I can’t really explain why; for many things it just feels faster and more comfortable to do them via the command line interface (CLI) instead of via a graphical point-and-click interface. (I’m not totally against GUIs; for some tasks they’re far easier.) As a result, when I find tools that make my CLI experience faster/easier/more powerful, that’s a big boon. Over the last few months, I’ve added some tools to my Fedora laptop that have really added some power and flexibility to my CLI environment. In this post, I want to share some details on these tools and how I’m using them.

Technology Short Take 106

Welcome to Technology Short Take #106! It’s been quite a while (over a month) since the last Tech Short Take, as this one kept getting pushed back. Sorry about that, folks! Hopefully I’ve still managed to find useful and helpful links to include below. Enjoy!

Spousetivities at DockerCon EU 18

DockerCon EU 18 is set to kick off in early December (December 3-5, to be precise!) in Barcelona, Spain. Thanks to Docker’s commitment to attendee families—something for which I have and will continue to commend them—DockerCon will offer both childcare (as they have in years past) and spouse/partner activities via Spousetivities. Let me just say: Spousetivities in Barcelona rocks. Crystal lines up a great set of activities that really cannot be beat.

More on Setting up etcd with Kubeadm

A while ago I wrote about using kubeadm to bootstrap an etcd cluster with TLS. In that post, I talked about one way to establish a secure etcd cluster using kubeadm and running etcd as systemd units. In this post, I want to focus on a slightly different approach: running etcd as static pods. The information in this post is intended to build upon the information already available in the Kubernetes official documentation, not serve as a replacement.

Validating RAML Files Using Docker

Back in July of this year I introduced Polyglot, a project whose only purpose is to provide a means for me to learn more about software development and programming (areas where I am sorely lacking real knowledge). In the limited spare time I’ve had to work on Polyglot in the ensuing months, I’ve been building out an API specification using RAML, and in this post I’ll share how I use Docker and a Docker image to validate my RAML files.

Technology Short Take 105

Welcome to Technology Short Take #105! Here’s another collection of articles and blog posts about some of the common technologies that modern IT professionals will typically encounter. I hope that something I’ve included here proves to be useful for you.

VMworld EMEA 2018 and Spousetivities

Registration is now open for Spousetivities at VMworld EMEA 2018 in Barcelona! Crystal just opened registration in the last day or so, and I wanted to help get the message out about these activities.

Setting up the Kubernetes AWS Cloud Provider

The AWS cloud provider for Kubernetes enables a couple of key integration points for Kubernetes running on AWS; namely, dynamic provisioning of Elastic Block Store (EBS) volumes and dynamic provisioning/configuration of Elastic Load Balancers (ELBs) for exposing Kubernetes Service objects. Unfortunately, the documentation surrounding how to set up the AWS cloud provider with Kubernetes is woefully inadequate. This article is an attempt to help address that shortcoming.
