Scott's Weblog: The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

KubeCon 2018 Day 2 Keynote

This is a liveblog of the day 2 (Wednesday) keynotes at KubeCon/CloudNativeCon 2018 in Seattle, WA. For additional KubeCon 2018 coverage, check out other articles tagged KubeCon2018.

Kicking off the day 2 keynotes, Liz Rice takes the stage at 9:02am (same time as yesterday, making me wonder if my clock is off by 2 minutes). Rice immediately brings out Janet Kuo, Software Engineer at Google and co-chair with Rice of the KubeCon/CloudNativeCon event program. Kuo will be delivering a Kubernetes project update.

Kuo starts off by reiterating the announcement of the Kubernetes 1.13 release, and looking back on her very first commit to Kubernetes in 2015 (just prior to the 1.0 release and the formation of the CNCF). Kuo talks about how Kubernetes, as a software project, has matured through a cycle of first focusing on innovation, then expanding to include scale, and finally expanding again to include stability (critical for enterprise adopters).

Reviewing usage details, Kuo states that she believes Kubernetes has moved—in the context of the technology adoption curve—from early adopters to early majority, the first phase in the mainstream market (and, for those who think in these terms, has crossed the chasm). However, this also means that Kubernetes has gotten boring.

In looking at Kubernetes, Kuo draws out two key facets of Kubernetes: open standards and extensibility. Open standards include, according to Kuo, built-in APIs and conformance; these provide consistent behavior and expectations for developers deploying applications or developing applications to be deployed onto Kubernetes. Extensibility has two parts: infrastructure extensibility and API extensibility.

Infrastructure extensibility is all about how Kubernetes consumes the underlying infrastructure; this would include, for example, cloud provider integrations, storage plugins (CSI), networking plugins (CNI), and the container runtime (CRI). API extensibility is mostly encompassed by Custom Resource Definitions (CRDs) and controllers (automation engines). Kuo uses Istio as an example of using API extensibility to create Istio-specific resources. This allows you to, according to Kuo, build “everything the Kubernetes way.”

Recapping her presentation, Kuo reminds attendees that boring is essential for building a platform upon which other solutions can be built.

At this point, Rice returns to the stage to introduce Jason McGee, who is CTO for IBM Cloud and an IBM Fellow. McGee emphasizes that good application design is always a tradeoff of attributes; it’s not just about stateless applications. McGee states that the cloud-native efforts have, thus far, focused only on 12-factor applications, functions, and serverless. However, applications are more than just these areas. This leads McGee to review developments in the Cloud Foundry space and the landscape around functions. The functions landscape, McGee stresses, is terribly fragmented and is holding the entire industry and community back from moving forward. What’s the answer? McGee moves into a discussion of Knative, and how Knative provides a single, unified platform that supports containers, functions, and applications—all on top of Kubernetes.

McGee shifts into a product pitch talking about IBM Cloud and its use of Kubernetes before he wraps up his portion of this morning’s keynotes.

Rice returns to the stage again, this time not to introduce someone else but to provide her own keynote focused—naturally, given her role at Aqua Security—on Kubernetes security. Asking the question, “Don’t we have security people for that?”, Rice says she believes that everyone can do something to help improve security; it’s not just for security-dedicated people. Reiterating that every system has vulnerabilities, Rice reminds attendees that they should not be surprised or afraid that Kubernetes will have vulnerabilities discovered from time to time. (And she reminds everyone to update their clusters.) So what can attendees do to help improve security?

This leads Rice into a discussion of other security weak points that might exist in a Kubernetes deployment. She shows the example of just copying some YAML and pasting it into a kubectl command without any knowledge of what that YAML actually does. She expands that example into showing how a single compromised Pod in turn exposes the Kubernetes API (via the default service account) and thus the entire cluster. Naturally, the answer here is to not just blindly apply YAML to our clusters. Rice next shows how she wrote a Validating Admission Webhook and associated controller/service that checks for and blocks the creation of service accounts.
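Under the hood, such a webhook is simple: the Kubernetes API server sends the webhook an AdmissionReview object, and the webhook responds with an allow/deny decision. The Python below is only a sketch of that request/response logic, not Rice’s actual code; a real webhook would also need to run as an HTTPS service registered via a ValidatingWebhookConfiguration.

```python
# Sketch of the decision logic behind a Validating Admission Webhook that
# blocks ServiceAccount creation. The AdmissionReview shapes follow the
# admission.k8s.io API (v1beta1 was current at the time of this talk).

def review(admission_review: dict) -> dict:
    """Return an AdmissionReview response; deny ServiceAccount creation."""
    request = admission_review["request"]
    is_sa_create = (
        request["kind"]["kind"] == "ServiceAccount"
        and request["operation"] == "CREATE"
    )
    response = {"uid": request["uid"], "allowed": not is_sa_create}
    if is_sa_create:
        response["status"] = {"message": "ServiceAccount creation is blocked"}
    return {
        "apiVersion": "admission.k8s.io/v1beta1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

The API server applies the `allowed` field of the response: the object is persisted only if every validating webhook allows it.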

This leads Rice to a discussion of Open Policy Agent (OPA), a CNCF Sandbox project that can act as the controller/service behind a Validating Admission Webhook to perform the same types of functions that Rice just showed. Users can load rulesets into OPA as ConfigMaps to enforce policy decisions, such as blocking the creation of service accounts. Rice reminds attendees that OPA is still a relatively young project, but says she believes it’s pretty important to watch.

So what sorts of things could users do with OPA to help improve Kubernetes security?

  • Ensuring that only allowed registries are leveraged
  • Checking to ensure that only scanned (and safe) images can be deployed
  • Verifying software provenance to ensure that images are actually the images you want/expect
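In OPA, rules like these would be written in the Rego policy language; to make the first item concrete, here is the decision being made, sketched as a plain Python predicate over a Pod spec. The registry names are invented for illustration.

```python
# Illustrative allowed-registries check, as an admission policy might
# apply it to a Pod spec. (OPA would express this in Rego, not Python.)
ALLOWED_REGISTRIES = ("registry.example.com/", "gcr.io/my-project/")

def images_allowed(pod_spec: dict) -> bool:
    """True only if every container image comes from an allowed registry.

    A spec with no containers trivially passes.
    """
    return all(
        container["image"].startswith(ALLOWED_REGISTRIES)
        for container in pod_spec.get("containers", [])
    )
```

An admission controller backed by a policy like this would reject any Pod whose images fall outside the allowlist before the Pod is ever scheduled.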

However, trusted code can still be bad, Rice reminds the audience. Open source libraries may change governance and “bad” code may be introduced (Rice uses the example of an NPM library compromised to include crypto-mining code). This underscores the importance of open source governance and the role of foundations such as the Linux Foundation and the CNCF in that governance.

As she wraps up her portion of the keynotes, Rice drops in a quick plug for the security book she recently wrote with Michael Hausenblas.

Rice now brings out Melanie Cebula from AirBnB to talk about AirBnB’s use of Kubernetes (which now hosts about 40% of all of AirBnB’s services). Cebula reviews how AirBnB has been moving away from a monolithic architecture to Kubernetes, and how the AirBnB team is now handling 125,000 deployments annually. More helpful—in my opinion—is the portion of Cebula’s presentation on addressing key challenges with Kubernetes. For example, AirBnB leveraged Go templating to reduce YAML boilerplate. (Other tools in this space include Helm, kustomize, and kapitan.) Cebula also reviews how AirBnB generates boilerplate for new services, and how the team used that to help enforce/encourage the use of documentation, testing, CI/CD, etc. Finally, Cebula talks about how AirBnB wrapped the use of kubectl to make it easier for developers and to standardize (again) things like namespaces and environments. Cebula does something lots of presenters fail to do, and that’s clearly summarize the recommendations from the various portions of her presentation. All in all, this was an excellent presentation, full of very practical information learned from real-world use of Kubernetes.
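The boilerplate-reduction idea is easy to illustrate: generate a full manifest from a handful of per-service values. AirBnB used Go templates for this; the Python below just demonstrates the pattern, and every name and value in it is made up.

```python
# Generate a Deployment manifest from per-service parameters, so teams
# supply only a few values instead of copying whole YAML files around.
from string import Template

DEPLOYMENT = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
  namespace: $namespace
spec:
  replicas: $replicas
  template:
    spec:
      containers:
      - name: $name
        image: $image
""")

manifest = DEPLOYMENT.substitute(
    name="listing-service",
    namespace="production",
    replicas=3,
    image="registry.example.com/listing-service:1.0",
)
```

The generated `manifest` string can then be fed to `kubectl apply -f -`; the point is that the service owner never touches the boilerplate itself.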

Rice returns to the stage to wrap up the keynotes and remind folks of the evening events, and then closes the keynotes.

Liveblog: Hardening Kubernetes Setups

This is a liveblog of the KubeCon NA 2018 session titled “Hardening Kubernetes Setup: War Stories from the Trenches of Production.” The speaker is Puja Abbassi (@puja108 on Twitter) from Giant Swarm. It’s a pretty popular session, held in one of the larger ballrooms up on level 6 of the convention center, and nearly every seat was full.

Abbassi starts by talking about Giant Swarm’s environment, in which they run more than 100 clusters across different clouds and different regions. These clusters are running for different companies and different industries, and they serve different use cases for different groups of users. Abbassi says that Giant Swarm opts to give users more freedom in how they use (and potentially misuse) the clusters.

Obviously, this can lead to problems, and that’s where the postmortems come into play. Abbassi explains the idea behind postmortems by quoting a definition from the Google SRE book, and then provides some context about the process that Giant Swarm follows when conducting postmortems. That leads into a discussion of various postmortems conducted at Giant Swarm.

The first one mentioned by Abbassi concerns a memory leak that was fixed in Kubernetes 1.11.4 and 1.12.0. Prior to the fix, the memory leak could cause downtime due to failures of the control plane components.

(Side note: I was thinking that this session was security-related, but apparently it is not.)

The next one mentioned was an issue with Calico that was causing IP-in-IP tunnels to go down. Giant Swarm fixed this by writing a small ICMP pinger that generated enough traffic to keep the tunnels up.

After sharing a couple examples from Giant Swarm postmortems, Abbassi moves into a review of some hardening steps and best practices. Some of the hotspots for issues included flaws in older (non-upgraded) Kubernetes clusters, ingress, networking and DNS, resource (CPU or memory) pressure, and multi-tenancy. Next, Abbassi delves into a bit more detail on these hotspots:

  • Older clusters typically have issues that have been solved in newer versions. Customers should test upgrades extensively, and automate the upgrade process in order to reduce the “cost” of upgrading so that the organization is more likely to upgrade on a regular basis.
  • Ingress: When it comes to the Nginx ingress controller, newer versions are less prone to misconfiguration. Users may also want to consider running multiple classes of ingress controllers, to help separate traffic types and protect against failures. Performing load testing and failover testing is also strongly recommended.
  • Networking and DNS: Be sure to monitor and alert on network health and connectivity. Monitor DNS latency (via Prometheus?). Check for known issues, and apply best practices. (Abbassi stops short of actually discussing some best practices.)
  • Resource pressure: Be sure to use resource management (use quotas, limits, and requests). Be aware that many Java-based apps in containers don’t behave well. Include buffers (extra headroom) in resource capacity. Be sure to protect Kubernetes itself and critical add-ons, so that the Kubernetes infrastructure isn’t crippled.
  • Multi-tenancy: Namespaces and proper RBAC configuration can help here. Avoid the use of the cluster-admin role! Use separate clusters where possible, and automate application deployment with CI/CD. Minimize manual operations as much as you can.

Finally, Abbassi provides a few “best practices”:

  • Use a monitoring/alerting solution, like Prometheus with Alertmanager.
  • Logging—and sometimes tracing—can help with debugging.
  • Be sure to fix issues fast; don’t let them escalate and cause even bigger/more significant issues.
  • Educate users on the correct/preferred way to interact with and/or use Kubernetes.
  • Have a postmortem process (and learn from it!).
  • Train on recovery, so that staff are able to get back to an operational state more quickly. (Heptio Ark gets a mention here as a solution.)

Finally, Abbassi mentions that many of the attendees are “standing on the shoulders of giants”—be sure to leverage the lessons that many others have already learned and shared with the broader community. There’s no need to reinvent the wheel, and don’t be afraid to ask for help!

Abbassi closes out the session with thanks to his team at Giant Swarm, and then opens up for questions.

Liveblog: Linkerd 2.0, Now with Extra Prometheus

This is a liveblog of the KubeCon NA 2018 session titled “Linkerd 2.0, Now with Extra Prometheus.” The speakers are Frederic Branczyk from Red Hat and Andrew Seigner with Buoyant.

Seigner kicks off the session with a quick introduction before handing off to Branczyk. Prometheus, for folks who didn’t know, originated at SoundCloud with a couple of ex-Googlers. Prometheus is one of the graduated CNCF projects and—judging by a show of hands in response to a speaker question—lots of folks here at KubeCon know about Prometheus and are using Prometheus in production.

Branczyk provides an overview of Prometheus, explaining that it pulls metrics from a target at a regular interval (every 15 seconds, for example). Prometheus stores those metrics in a time-series database, so each pull appends new samples to a time series. As a monitoring solution, it also has to provide alerting, to notify cluster operators/administrators that some metric is outside of some predefined threshold.
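The pull model Branczyk describes can be sketched in a few lines of Python. This is only an illustration of the data flow (fetch on an interval, append timestamped samples per metric), not how Prometheus or its TSDB actually work; the metric name is invented.

```python
# Minimal model of a pull-based scrape loop: on each tick, fetch the
# current metric values from a target and append them, timestamped, to a
# per-metric time series.
import time
from collections import defaultdict

def scrape_loop(fetch_metrics, store, iterations, interval_seconds=15):
    """fetch_metrics() -> {name: value}; store maps name -> [(ts, value)]."""
    for _ in range(iterations):
        now = time.time()
        for name, value in fetch_metrics().items():
            store[name].append((now, value))  # one new sample per scrape
        time.sleep(interval_seconds)

# A fake target; the real thing is an HTTP /metrics endpoint that
# Prometheus discovers and scrapes on its configured interval.
samples = defaultdict(list)
scrape_loop(lambda: {"http_requests_total": 42}, samples,
            iterations=3, interval_seconds=0)
```

After three scrapes, `samples["http_requests_total"]` holds three timestamped samples, which is exactly the shape a time-series database ingests.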

With regards to Kubernetes, Prometheus has built-in support to perform service discovery in Kubernetes by querying the Kubernetes API. This enables it to discover Pods backing a Service and scrape (pull) the metrics from those discovered Pods. Branczyk also spends some time talking about time series churn and a series of changes that went into Prometheus 2.0 to help address this and other challenges around gathering time-series data from ephemeral data sources (Pods).

A single Prometheus server has pretty amazing scale; Branczyk indicates he’s seen a single server ingesting 700K samples per second (which translates, roughly, to a Kubernetes cluster with 350 nodes).

At this point, Branczyk hands it over to Seigner, who shifts the focus to Linkerd. Seigner provides an overview of Linkerd. Linkerd originated at Twitter, as part of their effort to decompose the Twitter application into microservices. As part of that decomposition process, the people at Twitter realized there was a common set of functionality that all services needed, and this led to the creation of Linkerd to provide common functionality via a service mesh and a sidecar proxy.

Linkerd 1.x was written in Scala, which proved a bit heavyweight. As part of the rewrite from Linkerd 1.x to Linkerd 2.0, the control plane was rewritten in Go and the proxy (the data plane) in Rust. Linkerd 2.0 also incorporated native Kubernetes support and integrated support for Prometheus. This integrated support means Prometheus can scrape metrics from every Linkerd proxy running as a sidecar, which gives access to lots of very useful metrics.

This leads into a quick demo of Linkerd to show its usage with a simple microservices-based application. The demo shows the use of the linkerd CLI tool to install Linkerd onto a Kubernetes cluster, and then to inject Linkerd into the existing microservices-based application. The linkerd command also includes a ton of sub-commands that can show statistics, display summary information, and expose a graphical web-based dashboard.

At this point, Seigner wraps up the session and opens up for questions.

KubeCon 2018 Day 1 Keynote

This is a liveblog from the day 1 (Tuesday, December 11) keynote of KubeCon/CloudNativeCon 2018 in Seattle, WA. This will be my first (and last!) KubeCon as a Heptio employee, and I’m looking forward to the event.

The keynote kicks off at 9:02am with Liz Rice, Technology Evangelist at Aqua Security. Rice welcomes attendees (back) to Seattle, and she shares that this year’s event in Seattle is 8x the size of the same event in Seattle just two years ago. Rice also shares some statistics from other CNCF events around the world, stressing the growth of these events both in size and in the number of events happening worldwide.

Rice next shares some entertaining statistics comparing web site visits for Kubernetes versus some other popular brands. (TL;DR: Kubernetes gets more web site visits than the Seahawks and Manchester United, but not as many as Starbucks.)

Moving on, Rice talks for a few minutes about the strategy or purpose behind the collection of projects that fall under the CNCF umbrella (to provide some of the important building blocks in the full stack of technologies to support cloud-native environments). At this point, Rice turns it over to Michelle Noorali, a key maintainer of Helm.

Noorali starts out with a quick introduction/overview of Helm, likening it to yum or apt for Linux packages. Noorali then reviews the history of Helm, provides a high-level description of the Helm governance model, and introduces the Helm Hub, a new, centralized way to search for and deploy Helm charts. Helm v2.12.0 (the “Egg Nog” edition) was also recently released. This leads Noorali into a high-level description of some of the features planned for Helm v3, which will move to an entirely client-side architecture (no more Tiller, yay!). ChartMuseum, an open source Helm chart repository server, also recently released an update. Noorali ends her section with a call to action to participate in the Helm community.

Rice returns to the stage, this time discussing the second of the three graduated projects in the CNCF, Prometheus. After a brief discussion of Prometheus, Rice shifts her focus to discuss recent updates on Fluentd and Fluent Bit. OpenTracing and Jaeger (Jaeger is an open source implementation of the specifications created by OpenTracing) are next up for Rice to review, and she shares that deploying Jaeger has gotten easier via the Jaeger Operator.

Rice introduces Matt Klein, Constance Caramanolis, and Jose Nino to talk about the third graduated project, Envoy. All three of these individuals are (apparently) with Lyft, where Envoy originated. Using the Lyft architecture as a framework, Caramanolis provides an overview of the need for Envoy before turning it over to Nino to provide an update on where Lyft stands now with Envoy deployed almost everywhere. Nino turns it over to Matt Klein, who provides context on Envoy in the larger ecosystem, and who speculates on why Envoy has become so popular in such a short period of time. Klein attributes Envoy’s success to performance, reliability, a modern codebase, extensibility, best-in-class observability, and a strong configuration API.

Following the discussion of Envoy, Rice returns to the stage to continue to review the CNCF projects. She starts with CoreDNS, which in the 1.13 release becomes the default DNS project for Kubernetes deployments. Rice moves on to Linkerd 2.0, and shows a quick video demo of deploying Linkerd into Kubernetes and seeing some of the detailed statistics that Linkerd enables.

Moving into storage, Rice mentions Rook, which as of September moved from the Sandbox into the CNCF Incubator. Next up is Vitess, which is now at version 3.0 and sports a number of important new features. gRPC, a high-performance RPC framework based on HTTP/2, is the next project that Rice mentions. With regards to messaging, Rice mentions NATS, the next CNCF project in the list to be reviewed.

The last project Rice mentions is Harbor, a container registry that also recently moved from Sandbox to the CNCF Incubator, and which integrates with Notary for providing content trust/signing for container images.

Following a quick review of the sponsors, Rice introduces Brandon Philips and Xiang Li, who take the stage to talk about 5 years of etcd. (Etcd is the distributed key-value store underneath Kubernetes.) Philips provides a quick overview of etcd and the origins that drove the creation of etcd at CoreOS. Philips announces that etcd is being moved to the CNCF today. This is a good move, in my opinion, and one that was much-needed. Li takes over to review the development of etcd over the last five years, including etcd’s own implementation of the Raft consensus protocol. Etcd itself leverages or integrates with other cloud-native technologies, like Prometheus, gRPC, and the Operator framework (not necessarily a CNCF project, but a common design pattern emerging in Kubernetes).

Li turns it back over to Philips to talk about the future of etcd in the CNCF. Key items the etcd community is looking forward to include more robust testing, support for etcd discovery service(s), and third-party security and correctness audits. Philips ends with a review of all the etcd-related sessions coming up at the conference.

Next, Janet Kuo, co-chair of the KubeCon track, takes the stage to introduce Aparna Sinha, the Group Product Manager for Kubernetes at Google. Sinha spends a few minutes re-iterating the success and growth of the Kubernetes community before moving on to new projects like Istio and Knative. Istio is an Envoy-based service mesh to provide observability, security, and control. Knative is a portable serverless framework that runs on top of Kubernetes (naturally). Sinha shows a recorded demo that shows GKE running with one-click installs of both Istio and Knative. (Sinha does point out that these projects are not limited to GKE, of course.)

Kuo comes back to the stage to introduce Wendy Cartee, Senior Director of Cloud Native Advocacy at VMware. Cartee reviews the lessons learned at VMware in reaching enterprises. Cartee also reviews some of VMware’s new and expanded open source efforts.

Kuo returns to the stage to introduce Matt Butcher and Karen Chu, both with Microsoft. Butcher and Chu are presenting “Phippy Goes to the Zoo,” an illustrated children’s guide to Kubernetes. After reading the book and going over the process for creating the book, Butcher and Chu announce that Phippy and all characters are being donated to the CNCF. Rice adds that the CNCF is licensing Phippy and the characters under Creative Commons, which allows people to re-use these characters in their own materials.

Another set of keynotes is happening this afternoon; if my schedule permits, I’ll try to liveblog those keynotes as well.

At this point, Rice and Kuo wrap up the morning keynotes.

Technology Short Take 107

Welcome to Technology Short Take #107! In response to my request for feedback in the last Technology Short Take, a few readers responded in favor of a more regular publication schedule even if that means the articles are shorter in length. Thus, this Tech Short Take may be a bit shorter than usual, but hopefully you’ll still find something useful.



Servers/Hardware

  • Christian Kellner provides a brief reminder that not all USB-C ports are Thunderbolt ports, and updates everyone on the status of bolt (a Linux utility for working with Thunderbolt ports and peripherals).


Security

  • Troy Hunt has a good article on security measures other than just passwords, explaining some of the differences between multi-factor authentication and multi-step authentication (for example). Highly recommended reading.

Cloud Computing/Cloud Management

  • Another post from Matt Oswalt, also related to the NRE Labs post I mentioned above, discusses troubleshooting NGINX Ingress rewrites in Kubernetes.
  • Arush Salil reviews using ingress on Kubernetes with cert-manager.
  • Michael Hausenblas has an informative article with information on Kubernetes RBAC defaults.
  • This post is a couple months old (a lifetime in the cloud-native world): the AWS Service Operator enables users/developers to consume a select set of AWS services via Kubernetes YAML manifests.
  • And speaking of YAML manifests, I saw this tool mentioned somewhere among my various feeds. It’s described as a “layering tool” that allows users to extend officially published YAML documents with local extensions/additions.
  • There’s some good stuff in Michael Hausenblas’ AppOps Reloaded #102 (as always).
  • AWS Outposts is Amazon’s move into the hybrid cloud market. It will be available next year, and will come in a “native AWS” flavor as well as a VMware flavor (see William Lam’s post for more details on the VMware flavor). This is a pretty significant market move, and I believe it will impact the technology industry in a variety of ways. For those of us in the IT field, we are definitely living in interesting times.

Operating Systems/Applications


Storage

  • J Metz has launched his own, storage-focused “Short Takes” series; the first of these is found here.


Career/Soft Skills

  • This article by Phil Estes on 4 tips for learning Golang may prove useful to folks who, like myself, are interested in becoming more fluent in Go.

In the immortal words of Porky Pig, th-th-that’s all folks! As always, feel free to hit me up on Twitter with your feedback, suggestions, or corrections. Thanks for reading!

Recent Posts

Supercharging my CLI

I spend a lot of time in the terminal. I can’t really explain why; for many things it just feels faster and more comfortable to do them via the command line interface (CLI) instead of via a graphical point-and-click interface. (I’m not totally against GUIs; for some tasks they’re far easier.) As a result, when I find tools that make my CLI experience faster/easier/more powerful, that’s a big boon. Over the last few months, I’ve added some tools to my Fedora laptop that have really added some power and flexibility to my CLI environment. In this post, I want to share some details on these tools and how I’m using them.


Technology Short Take 106

Welcome to Technology Short Take #106! It’s been quite a while (over a month) since the last Tech Short Take, as this one kept getting pushed back. Sorry about that, folks! Hopefully I’ve still managed to find useful and helpful links to include below. Enjoy!


Spousetivities at DockerCon EU 18

DockerCon EU 18 is set to kick off in early December (December 3-5, to be precise!) in Barcelona, Spain. Thanks to Docker’s commitment to attendee families—something for which I have and will continue to commend them—DockerCon will offer both childcare (as they have in years past) and spouse/partner activities via Spousetivities. Let me just say: Spousetivities in Barcelona rocks. Crystal lines up a great set of activities that really cannot be beat.


More on Setting up etcd with Kubeadm

A while ago I wrote about using kubeadm to bootstrap an etcd cluster with TLS. In that post, I talked about one way to establish a secure etcd cluster using kubeadm and running etcd as systemd units. In this post, I want to focus on a slightly different approach: running etcd as static pods. The information in this post is intended to build upon the information already available in the official Kubernetes documentation, not serve as a replacement.


Validating RAML Files Using Docker

Back in July of this year I introduced Polyglot, a project whose only purpose is to provide a means for me to learn more about software development and programming (areas where I am sorely lacking real knowledge). In the limited spare time I’ve had to work on Polyglot in the ensuing months, I’ve been building out an API specification using RAML, and in this post I’ll share how I use Docker and a Docker image to validate my RAML files.


Technology Short Take 105

Welcome to Technology Short Take #105! Here’s another collection of articles and blog posts about some of the common technologies that modern IT professionals will typically encounter. I hope that something I’ve included here proves to be useful for you.


VMworld EMEA 2018 and Spousetivities

Registration is now open for Spousetivities at VMworld EMEA 2018 in Barcelona! Crystal just opened registration in the last day or so, and I wanted to help get the message out about these activities.


Setting up the Kubernetes AWS Cloud Provider

The AWS cloud provider for Kubernetes enables a couple of key integration points for Kubernetes running on AWS; namely, dynamic provisioning of Elastic Block Store (EBS) volumes and dynamic provisioning/configuration of Elastic Load Balancers (ELBs) for exposing Kubernetes Service objects. Unfortunately, the documentation surrounding how to set up the AWS cloud provider with Kubernetes is woefully inadequate. This article is an attempt to help address that shortcoming.


A Markdown-to-PDF Workflow on Linux

In May of last year I wrote about using a Makefile with Markdown documents, in which I described how I use make and a Makefile along with CLI tools like multimarkdown (the binary, not the format) and Pandoc. At that time, I’d figured out how to use combinations of the various CLI tools to create various formats from the source Markdown document. The one format I hadn’t gotten right at that time was PDF. Pandoc can create PDFs, but only if LaTeX is installed. This article describes a method I found that allows me to create PDFs from my Markdown documents without using LaTeX.


Running the gcloud CLI in a Docker Container

A few times over the last week or two I’ve had a need to use the gcloud command-line tool to access or interact with Google Cloud Platform (GCP). Because working with GCP is something I don’t do very often, I prefer to not install the Google Cloud SDK; instead, I run it in a Docker container. However, there is a trick to doing this, and so to make it easier for others I’m documenting it here.


Technology Short Take 104

Welcome to Technology Short Take 104! For many of my readers, VMworld 2018 in Las Vegas was “front and center” for them since the last Tech Short Take. Since I wasn’t attending the conference, I won’t try to aggregate information from the event; instead, I’ll focus on including some nuggets you may have missed amidst all the noise.


Kubernetes with Cilium and Containerd using Kubeadm

Now, if that isn’t a title jam-packed with buzzwords, I don’t know what is! In seriousness, though, I wanted to share how to use kubeadm to turn up a Kubernetes cluster using containerd (instead of Docker) and Cilium as the CNI plugin. I’m posting this because I wasn’t able to find a reasonable article that combined all the different threads—some posts talked about using containerd, others talked about using Cilium, and the official Kubernetes docs have examples for using kubeadm. The purpose of this post is to try to pull those threads together.


Book Review: REST API Design Rulebook

REST API Design Rulebook (written by Mark Masse and published by O’Reilly Media; more details here) is an older book, published in late 2011. However, having never attempted to design a REST API before, I found lots of useful information inside that really helped shape my understanding of REST APIs and REST API design.


Better XMind-GNOME Integration

In December of 2017 I wrote about how to install XMind 8 on Fedora 27, and at the time of that writing I hadn’t quite figured out how to define a MIME type for XMind files that would allow users to double-click on an XMind file in Nautilus and open that file in XMind. After doing a bit of additional research and testing, I’ve found a solution and would like to share it here.


Populating New Namespaces Using Heptio Ark

Heptio Ark is a tool designed to backup and restore Kubernetes cluster resources and persistent volumes. As such, it enables users to do a bunch of very useful things like copy cluster resources across cloud providers or replicate environments for development, staging, testing, QA, etc. In this post, I’ll share a slightly different use case for Ark: populating resources into new Kubernetes namespaces.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!