Scott's Weblog: The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Liveblog: Terraform Abstractions for Safety and Power

This is a liveblog for the HashiConf 2017 session titled “Terraform Abstractions for Safety and Power.” The speaker is Calvin French-Owen, co-founder and CTO of Segment.

French-Owen starts by describing Segment, and providing a quick overview of Segment’s use of Terraform. Segment is all on AWS, and is leveraging ECS (Elastic Container Service) to schedule containers. Segment’s journey with Terraform started about 2.5 years ago. They now have 30-50 developers interacting with Terraform weekly, and Terraform is managing tens of thousands of AWS resources.

Digging into the meat of the presentation, French-Owen starts by answering the question, “Why is safety such a big deal?” There’s more to the puzzle than just preventing downtime. To illustrate that point, French-Owen shares some conclusions from an academic paper that explores why developers choose software programs. It turns out that to scale adoption, you must reduce the risk of adoption (developers avoid programs they perceive as risky).

Naturally, French-Owen talks about how Terraform can “feel scary,” since it’s so easy to destroy a bunch of infrastructure with a single terraform destroy.

Before moving into a discussion of how to make Terraform feel less scary, French-Owen first covers some “Terraform nouns” (HCL, the HashiCorp Configuration Language; variables; resources; inputs and outputs; and modules). Along the way, he shows examples of these nouns so that attendees who aren’t familiar with Terraform can follow what he’s discussing.
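(For reference, here’s a minimal sketch of those nouns in HCL; the resources, names, and values below are mine for illustration, not taken from the session.)

```hcl
# variables.tf: an input variable, with a sane default
variable "instance_type" {
  description = "EC2 instance type to launch"
  default     = "t2.micro"
}

# main.tf: a resource, consuming the input variable
resource "aws_instance" "web" {
  ami           = "ami-12345678"          # placeholder AMI ID
  instance_type = "${var.instance_type}"
}

# outputs.tf: an output that other configurations (or modules) can consume
output "web_private_ip" {
  value = "${aws_instance.web.private_ip}"
}

# A module bundles variables, resources, and outputs behind a simple interface
module "service" {
  source        = "./modules/service"     # hypothetical local module path
  instance_type = "t2.small"
}
```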

French-Owen shows how Terraform uses state to compute the diff between your infrastructure’s current state and the desired state (which is what happens when you run terraform plan). Boiling it all down, French-Owen explains that Terraform applies diffs to your infrastructure to achieve the desired configuration.

With this foundational information in mind, French-Owen goes on to explain how to provide safety with Terraform; specifically, how do you protect existing infrastructure and prevent Terraform from destroying it? One common approach is to use separate AWS accounts (i.e., one account for dev, one for staging, and one for production).

Basic state management is important, says French-Owen. One key rule is to use remote state/remote backends. It matters less which remote state backend you use (Terraform Enterprise, S3, Consul, etc.); it’s more important that you actually use remote state. French-Owen quickly runs through some advantages and disadvantages of the various remote state backends (S3, Consul, and Terraform Enterprise).
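(As a concrete example of that rule, a minimal S3 remote backend configuration looks something like the sketch below; the bucket and key names are hypothetical, and the key path shows one way to scope state per environment and per service.)

```hcl
terraform {
  backend "s3" {
    bucket = "example-terraform-state"             # hypothetical bucket name
    key    = "prod/networking/terraform.tfstate"   # scope state per environment/service
    region = "us-east-1"

    # Optional: use a DynamoDB table for state locking
    # dynamodb_table = "terraform-locks"
  }
}
```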

While remote state is important, there’s more to state management, according to French-Owen. One approach is breaking state down into “smaller” components, such as using different states for different services. Another approach is to use shared states grouped by team. Mastering the use of read-only remote state is also important. You can use the terraform_remote_state data source, you can use Terraform’s other data sources (although at first glance I didn’t see how remote state and data sources fit together—they seemed somewhat unrelated), or you can use shared outputs from other Terraform modules.
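(For what it’s worth, terraform_remote_state is itself a data source; it reads another configuration’s outputs in a read-only fashion, which is presumably why the two get lumped together. Here’s a minimal sketch in the Terraform 0.10-era syntax, with hypothetical bucket, key, and output names; newer Terraform versions use config = { ... } and expose values under an outputs attribute.)

```hcl
# Read another team's state (read-only) via the terraform_remote_state data source
data "terraform_remote_state" "networking" {
  backend = "s3"

  config {
    bucket = "example-terraform-state"
    key    = "prod/networking/terraform.tfstate"
    region = "us-east-1"
  }
}

# Consume an output exported by that state
resource "aws_instance" "app" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.micro"
  subnet_id     = "${data.terraform_remote_state.networking.private_subnet_id}"
}
```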

Summarizing state safety:

  • Use separate cloud accounts.
  • Use states per environment.
  • Consider states per service or per team.
  • Use a remote state manager (like Terraform Enterprise or S3).
  • Limit your blast radius.
  • Use some sort of read-only state.

The second part of safety, according to French-Owen, pertains to Terraform modules. French-Owen believes that establishing standards/guidelines for writing Terraform modules can make Terraform “feel safe”. These can include defining inputs and outputs separately (in separate files), using sane defaults for inputs (to make modules easier to consume), leveraging simplified templates, and bundling related resources (like IAM) along with resource definitions/modules. Along the way, French-Owen shows examples of these standards/guidelines as they are being used at Segment. Using these guidelines allows you to abstract away some of the complexity, which makes it easier—and safer—for developers to consume. Reducing complexity and risk means developers are more likely to use Terraform, according to French-Owen.
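(To make that a bit more concrete, here’s a rough sketch of what such a module convention might look like; the file layout, names, and the bundled IAM role are my own illustration, not Segment’s actual modules.)

```hcl
# modules/service/inputs.tf: inputs defined in their own file, with sane defaults
variable "name" {
  description = "Name of the service"
}

variable "desired_count" {
  description = "Number of containers to run"
  default     = 2
}

# modules/service/main.tf: the resources, with related IAM bundled alongside
resource "aws_iam_role" "task" {
  name               = "${var.name}-task-role"
  assume_role_policy = "${file("${path.module}/policies/assume-role.json")}"
}

# (The ECS task and service definitions for the service would live here as well.)

# modules/service/outputs.tf: outputs defined in their own file
output "task_role_arn" {
  value = "${aws_iam_role.task.arn}"
}
```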

French-Owen provided a summary slide on providing safety for modules, but I wasn’t able to capture the contents of the slide before he advanced to the next one.

In closing, French-Owen shows how Segment has leveraged Terraform to address operational issues, like ensuring that alerts are created for every service (using Datadog) or optimizing cloud provider costs by tagging resources with cost center/billing information. He finishes by laying out a vision of Terraform as the “package manager for the cloud” and what this might mean in the future.
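(Both of those operational examples map naturally onto Terraform resources. The sketch below uses the Datadog provider’s datadog_monitor resource and AWS tags; the query, thresholds, and tag values are hypothetical, not Segment’s.)

```hcl
# Ensure a baseline alert exists for a service (illustrative query and threshold)
resource "datadog_monitor" "service_error_rate" {
  name    = "High error rate on example-service"
  type    = "metric alert"
  query   = "avg(last_5m):avg:example.service.errors{*} > 10"
  message = "Error rate is elevated on example-service. Notify @pagerduty"
}

# Tag resources with cost center/billing information for cost attribution
resource "aws_instance" "worker" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.micro"

  tags {
    Name       = "example-worker"
    CostCenter = "platform-engineering"
  }
}
```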

At this point, French-Owen wraps up the session.

Liveblog: Journey to the Cloud with Packer and Terraform

This is a liveblog of the HashiConf 2017 breakout session titled “Journey to the Cloud with Packer and Terraform,” presented by Nadeem Ahmad, a senior software developer at Box.

Ahmad starts with a quick review of Box, but (thankfully) transitions quickly to his particular team at Box (the Productivity Engineering team). His team’s customers are the software developers at Box, and it’s his team’s job to help make them more productive and efficient. One of the tools Ahmad’s team built is Cluster Runner, which is intended to streamline running unit and integration tests on the code the developers write.

This brings Ahmad to the crux of this presentation, which is telling the story of how Box went from a bare-metal environment to a cloud-based architecture. The purpose of this migration was to address some of the limitations of their bare-metal environment (inelastic, divergent host configurations over time, etc.). Box leveraged Platform9 to build an OpenStack-based private cloud, with the intent of switching to AWS, GCP, or Azure in the future as private cloud resources aged out.

Ahmad next goes into why Box selected the process they did; they wanted to move away from configuration synchronization (making changes to a server over time, either manually or in some automated fashion) to immutable servers (a change to the configuration triggers a new image build and new server deployment). A blend of these two approaches (“phoenix servers”) means you still run configuration management tools against your servers, but deployments of fresh images occur much more frequently.

With this in mind, Ahmad moves into a description of Box’s specific project to go down this path. The new system needed to meet the following requirements:

  • Easy-to-use
  • Self-service
  • Fully automated
  • With built-in validation

To address “Phase 1” (building and verifying images), Box selected the HashiCorp tool Packer. Ahmad points out that Packer’s multi-cloud support was very important for his team. Packer also supports multiple provisioners; for Box, this meant Packer supported their existing Puppet manifests, which allowed them to leverage their investment and expertise in Puppet.

However, Ahmad and his team felt that Packer’s JSON-based configuration was a bit too complex. To address this complexity, Box built a tool on top of Packer. An additional reason for building this tool was to add an “out-of-band” verification process to verify that the final image meets the specifications provided. This tool—called Image Pipeline—takes a simple YAML file that, in turn, incorporates various Packer JSON instructions in an easy-to-assemble way. Image Pipeline is executed via Jenkins jobs.

Now that “Phase 1” (building and verifying images) is complete, Ahmad moves into discussing “Phase 2” (deploying instances). This is where Terraform comes into play. Terraform’s ability to launch instances allows Box to quickly and easily take the images built by Image Pipeline (their tool on top of Packer) and deploy them onto OpenStack (their private cloud) or a public cloud.

As with Packer, Ahmad and his team felt that the Terraform configurations were a bit too complex, so they wrote a tool (the Terraform Configuration Generator). This tool takes simple YAML files and spits out HCL-formatted Terraform configurations that reflect the instructions found in the source YAML files.
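(I don’t have details on the generator itself, but the generated output is presumably just ordinary Terraform HCL for the OpenStack provider; something along the lines of the sketch below, with all names and values invented for illustration.)

```hcl
# Hypothetical example of generated Terraform output: launch an instance
# from an image produced by the Packer-based image pipeline.
resource "openstack_compute_instance_v2" "ci_worker" {
  name        = "ci-worker-01"
  image_name  = "ci-worker-20170919"   # image built and verified by the pipeline
  flavor_name = "m1.large"
  key_pair    = "ci"

  network {
    name = "internal"
  }
}
```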

Ahmad reminds session attendees that it’s important to iterate continuously over the processes found in building, verifying, and deploying workloads. Box built yet another tool, called Project Recycle, that automates the process of re-deploying fresh infrastructure (using the tools and procedures described earlier).

As Ahmad starts to wrap up, he shares some lessons learned:

  • Interact with infrastructure via API.
  • Make it easy to build and verify images.
  • Automate on top of simple, composable tools (like Packer and Terraform).

Where is Ahmad’s team headed next? They’re closely evaluating auto-scaling technologies (to scale down at night and scale back up during the day). Ahmad’s team is also evaluating the use of Docker containers to help with their CI pipeline needs (his team is focused quite heavily on optimizing CI infrastructure).

At this point, Ahmad wraps up and opens up for questions from the audience.

HashiConf 2017 Day 1 Keynote

This is a liveblog from the day 1 keynote (general session) at HashiConf 2017 in Austin, TX. I’m attending HashiConf this year as an “ordinary attendee” (not working or speaking), and so I’m looking forward to being able to actually sit in on sessions for a change.

At 9:43am, the keynote kicks off with someone (I don’t know who, he doesn’t identify himself) who provides some logistics about the event, the Wi-Fi, asking attendees to tweet, etc. After a couple minutes, he brings out Mitchell Hashimoto, Founder and co-CTO of HashiCorp, onto the stage.

Hashimoto starts his talk by reviewing a bit of the history and growth of both HashiConf and, indirectly, HashiCorp. Over the last year, HashiCorp has grown from about 50 employees to over 130 employees. HashiCorp has also seen significant community growth, Hashimoto says, and he reviews the growth in the use of HashiCorp’s products (Vagrant, Packer, Terraform, Vault, Consul, and Nomad) as well as in their commercial products (Consul Enterprise, Vault Enterprise, and Terraform Enterprise). Hashimoto also discusses HashiCorp’s commitment to open source software and the desire to properly balance commercial (paid) products against free (open source) projects.

Hashimoto now transitions his discussion into a look at how HashiCorp (from here on referred to just as HC) products are deeply involved in companies’ transition to cloud-based operational models.

Starting his review of product updates within the four major categories, Hashimoto starts with Terraform (a favorite project of mine). Terraform has seen tremendous growth, both in use and in functionality, and Hashimoto says he believes Terraform is emerging as the “standard” for provisioning infrastructure. This leads to the announcement of the Terraform Registry, a place to find Terraform modules. Terraform Registry will include partner modules (from HashiCorp partners), as well as Verified modules (modules that HashiCorp has reviewed, vetted, and approved). He next shows a recorded demo of how easy it is to publish a Terraform module in the Registry. The Registry is available immediately at https://registry.terraform.io.
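(Consuming a module from the Registry is just a matter of pointing a module block’s source at a registry path and, optionally, pinning a version; the module and version below are purely illustrative.)

```hcl
# Pull a module directly from the public Terraform Registry
module "consul" {
  source  = "hashicorp/consul/aws"   # registry path: <namespace>/<name>/<provider>
  version = "0.1.0"                  # illustrative version pin
}
```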

Hashimoto now brings out Corey Sanders from Microsoft Azure. Sanders talks about the eight different Terraform modules that Microsoft is actively working on/maintaining and that will be available via the Terraform Registry. Sanders also announces that moving forward Terraform will be pre-installed in the Linux containers that power Microsoft’s Azure Cloud Shell (a CLI environment for Azure). This is really powerful, in my humble opinion. Finally, Sanders talks about updates to Azure’s Terraform provider, such as adding support for Azure EventGrid and Azure Container Instances (ACI).

Hashimoto now returns to the stage to discuss the role of collaboration within the area of deploying infrastructure. Examples of things that HC has done to improve collaboration include remote state backends and state locking. This leads into a discussion of HC’s rearchitecture of Terraform Enterprise (TFE). Workspaces—available in both Terraform and TFE—are a key part of collaboration. Hashimoto reviews improvements in the “Plan” and “Apply” stages within TFE, including integrations with GitHub, BitBucket, and GitLab. TFE was also rebuilt on top of an API-based architecture for more flexibility. The new version of TFE is in beta today, and is expected to be available by the end of the year.

Hashimoto now invites Armon Dadgar (HC Founder and co-CTO) to the stage to talk about more product updates. Dadgar starts his presentation with a focus on Vault, a project for solving secrets management. Adoption of Vault has been faster and broader than expected, according to Dadgar, who believes this is due in part to the evolution of security away from “perimeter only” toward “defense in depth”. Dadgar says there are three separate axes involved: secrets management, encryption as a service, and privileged access management. Dadgar invites Dan McTeer from Adobe to the stage to talk about how they’re using Vault.

McTeer reviews the role of Adobe Digital Marketing, which builds and maintains over 23 different digital marketing tools across a number of different teams. These tools are distributed across multiple geographical regions and multiple public cloud providers, and McTeer reviews some of the scale that he and his team have to manage. In reviewing his team’s goals and requirements, McTeer points out that any tool they select must have a REST API, and explains how this requirement led his group to Vault Enterprise. McTeer’s team maintains multiple Vault clusters in different geographic regions and on different cloud providers, which has greatly streamlined some of their security processes (like reducing the time taken to rotate privileged account passwords across thousands of hosts to less than a minute).

Dadgar now returns to the stage, and quickly reviews the Google Auth backend that was released earlier in the year. This leads to a discussion of Kubernetes support within Vault, and how it works with Vault (leverages JWT for integration). This Kubernetes integration will be available in Vault 0.8.3, which should be available today (or later today).

Moving on, Dadgar now transitions to Consul, HC’s service discovery tool (among other things). Dadgar reviews the many features that HC is adding to Consul, but also reviews HC’s commitment to Consul’s strong foundation in academic research (with regard to Consul’s underlying subsystems and protocols). Within the last year, HC launched HashiCorp Research as a way of “giving back” information and research on protocols, design, algorithms, and the like. The result of this work was a paper on “Lifeguard,” a set of extensions to Consul’s protocols that reduce false positives in failure detection and reduce the latency of failure detection.

After reviewing Consul’s (long and storied) history, Dadgar announces Consul 1.0, which will be available today in beta form.

Next, Dadgar shifts his attention to Nomad, a batch/cluster scheduler. HC has been busy adding lots of features to Nomad (as driven by customer demand), and Dadgar introduces Mohit Arora from Jet.com to the stage to talk about how Jet is using Nomad.

Arora briefly reviews Jet’s value proposition, but quickly transitions into a discussion of their use of cluster schedulers and Nomad. According to Arora, they chose Nomad because it was cross-platform, flexible, easy to use, and offered integration with Consul and Vault.

Dadgar returns to the stage following Arora’s presentation, and he announces a native user interface (UI) for Nomad. The UI is enabled by a single flag and served out of the same set of processes that Nomad already runs. The Nomad UI is available now.

Following his discussion of the Nomad UI, Dadgar turns to how Nomad addresses the concerns of both developers and operators, asking the question, “How can we do this at scale?” To answer that question, Dadgar announces Nomad’s ACL (access control list) system, which will help organizations define roles/permissions on a fine-grained basis. Dadgar mentions that this is related to the ACL systems in Consul and Vault, but subtly different (perhaps due to the different roles of the various tools). The ACL system is available in the Nomad UI. ACLs and the Nomad UI are present in the Nomad 0.7 beta, which is currently available.
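(Nomad’s ACL policies are themselves written in HCL. Below is a hedged sketch of what a fine-grained policy might look like, assuming the rule names from the Nomad 0.7 ACL system; policies like this get attached to ACL tokens.)

```hcl
# Hypothetical Nomad ACL policy: read-only access to jobs in the default
# namespace, plus read access to node information.
namespace "default" {
  policy = "read"
}

node {
  policy = "read"
}
```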

Dadgar talks about the “Million Container Challenge”, and that leads him into a discussion of Nomad Enterprise (joining Vault Enterprise, Consul Enterprise, and Terraform Enterprise as commercial products). One feature of Nomad Enterprise is native support for namespacing, which helps with the multi-tenancy needed by many larger organizations. In conjunction with namespaces, users can attach quotas to those namespaces to preserve the overall quality of service (QoS) of the entire cluster. All this is supported without sacrificing Nomad’s multi-region support.

Dadgar now transitions back to Hashimoto. Hashimoto reviews the evolution of HC’s tools as the industry has evolved (VMs powered by Vagrant moving to cloud infrastructure orchestrated by Terraform moving to containers and microservices executed by cluster schedulers). Hashimoto talks about adding some “safety guards” to automated environments, like forbidding changes outside working hours, or ensuring that all services have associated health checks. Other examples include ensuring proper key size for TLS certificates or making sure AWS instances are tagged with a billing entity.

This leads Hashimoto to announce Sentinel, a tool for defining “policy as code.” Sentinel is an enterprise feature that is being integrated into HC’s various Enterprise products (so if I’m understanding correctly Sentinel will not work with/be available for the non-Enterprise versions). Hashimoto walks through examples of how Sentinel’s “policy as code” functionality addresses each of the examples he outlined earlier (walking through examples in Consul, Nomad, Vault, and Terraform).

So how does Sentinel work? It’s built upon the idea of “infrastructure as code,” but this time applied to policy (hence “policy as code”). Sentinel leverages an “easy-to-use” policy language for writing policies. Sentinel is described by Hashimoto as an “embedded framework” that enables active enforcement and doesn’t require you to run a separate application/process to verify policy. (It is capable of running in a passive enforcement mode.) Sentinel supports multiple enforcement levels (advisory, soft mandatory, and hard mandatory), and offers components to help test policies and clearly identify failures. These components were explicitly designed to run in continuous integration (CI) environments. Sentinel also supports external data sources via plugins called “imports”; this allows Sentinel to consult external systems (such as ServiceNow, for example) when evaluating policy. An SDK (software development kit) is available to help develop Sentinel imports.

Hashimoto reiterates that Sentinel is integrated into the Enterprise products, and will be available in the next version of each Enterprise product. Hashimoto shows some examples of the integration:

  • Integration into Terraform Enterprise to prevent terraform apply if policy is violated
  • Integration into Consul Enterprise to determine which service registration is allowed to happen (based on policy definition)
  • With Vault Enterprise, Sentinel policies could control access to requests, tokens, identities, or multi-factor authentication (MFA)
  • When used with Nomad Enterprise, Sentinel can ensure that all jobs that are deployed to a cluster are sourced from an internal artifact repository

Hashimoto next reads a quick blurb about how Barclays will be using Sentinel (Barclays was a design customer who helped shape Sentinel).

At this point, Hashimoto brings Dave McJannet, CEO of HashiCorp, to the stage. McJannet talks about the company’s investments in products and engineering, support structures, and a greater regional presence in more geographies. McJannet also reviews the keynote’s announcements:

  • Terraform Enterprise updates and Terraform Module Registry
  • Vault support for Kubernetes
  • Nomad Enterprise and the Nomad 0.7 release
  • Consul 1.0 announcement
  • Sentinel “policy as code” framework

McJannet talks about how the company focuses on workflows, not technologies; automation through codification; and being open and extensible.

McJannet wraps up the keynote by pointing out the breakout sessions, the Hub (sponsor area), and a preview of tomorrow’s keynote speakers.

New Website Features

One of the reasons I migrated this site to Hugo a little over a month ago was that Hugo offered the ability to do things with the site that I couldn’t (easily) do with Jekyll (via GitHub Pages). Over the last few days, I’ve taken advantage of Hugo’s flexibility to add a couple new features to the site.

New functionality that I’ve added includes:

  1. Category- and tag-specific RSS feeds: Hugo can easily generate category- and tag-specific RSS feeds, enabling readers to subscribe to the RSS feed for a particular category or tag. On the taxonomy list pages—these are the pages that list all the posts found in a particular category or tag—there’s now a small link to the RSS feed for that specific category or tag. (As an example, check out the list of posts in the “General” category.)

  2. (Truly) Related posts: The “Related Posts” section at the bottom of posts has returned, thanks to new functionality found in Hugo 0.27 (functionality that was, apparently, inspired in part by my experiences—see the docs page). This section lists 3 posts that are considered by Hugo to be related, based on the category and tags assigned to the posts.

It’s not much, I know, but I’m hoping this new functionality makes it easier for readers to find (and stay in touch with) content that is relevant to their specific interests and needs.

If you have feedback for me on what additional functionality you’d find useful on the site, feel free to hit me up on Twitter. Thanks for reading!

Some Q&A About the Migration to Hugo

As you may already know, I recently completed the migration of this site from GitHub Pages (generated using Jekyll) to S3/CloudFront and Hugo for static site generation. Since then, I’ve talked with a few readers who had additional questions about the site migration. I thought others might have the same questions, so I decided to gather the most common questions here and share the answers with everyone.

(For those who need a quick primer on how the site is set up/served, refer to this post.)

I’ll structure the rest of this post in a “question-and-answer” format.

Q: Why migrate away from Jekyll?

A: Some of this is tied up with GitHub Pages (see the next question), but the key things that drove me away were very slow build times (in excess of five minutes), limited troubleshooting options, dealing with Ruby dependencies in order to run local Jekyll builds (needed to help with troubleshooting), and limited functionality (due in part to GitHub Pages’ restrictive support for plugins).

Q: Why migrate away from GitHub Pages?

A: If you’re happy with Jekyll (and it’s a fine static site generator for lots of folks), having it integrated on the backend with GitHub Pages is a super-sweet setup. A simple git push and the site automatically rebuilds—what more could you want? However, the GitHub Pages version of Jekyll is limited in the plugins you can use, and that in turn limits the functionality of the site. Now, I could’ve stuck with GitHub Pages as a hosting solution and not used the Jekyll functionality, as it is true that GitHub Pages offers the ability to host static sites without using Jekyll. The workflow for such a setup just didn’t feel as straightforward as I would’ve liked (it involved multiple branches, including the required “gh-pages” branch, and Git submodules), so it didn’t seem like there was really any point. From my perspective, GitHub Pages and Jekyll had a shared fate—there wasn’t much point in using one without the other.

Q: Why choose Hugo?

A: A lot of this is covered in my previous post, but I’ll quickly reiterate the reasons. First, Hugo is vastly faster at site generation (think 5 minutes for Jekyll down to single-digit seconds for Hugo). Second, Hugo is a single binary that’s easily installed on macOS or Linux. Third, and finally, I wanted to stretch my skills, and this seemed like a reasonable way to do it.

Q: Why choose Amazon S3 for hosting your site?

A: As I mentioned above, I wanted to stretch my skills a bit. Hosting the site on Amazon S3 gave me the ability to gain additional experience with using and managing services on Amazon Web Services. Amazon S3 is perfectly capable of hosting static sites, so why not?

Q: Why use CloudFront in addition to S3?

A: Using S3 alone is perfectly fine for some lower-traffic sites, but I don’t really consider this site to be a lower-traffic site (not trying to make myself sound more important/influential than I am, just being realistic). I felt it would be better to leverage CloudFront as a content distribution network (CDN) and be able to offer (hopefully) lower-latency service to more readers around the world. Plus, this is another way to stretch my skills and my experience.

Q: Is there a reason you’re not using a continuous integration (CI) service (like TravisCI, CircleCI, or AWS CodePipeline) in your setup?

A: This is something that I really wanted to do, and I may yet work this into my workflow. The general way this works is that you connect the CI service to GitHub via a webhook so that a commit to the repository automatically triggers the site generation and a subsequent upload to S3. I avoided adding this step in the first iteration in order to keep the workflow simple and easy to understand. There are a couple of things I need to tackle in order to add a CI service. First, because many of these services are Docker-based, I need to build an appropriate Docker image that contains the tools I need (Hugo, the AWS CLI, and s3deploy). Second, I need to test the Docker image to ensure that I’m not unnecessarily uploading files that don’t need to be uploaded (s3deploy is supposed to help with this, but I haven’t tested it in an ephemeral container yet). Third, I need to understand the configuration of the overall workflow and pipeline itself. Fourth, I need to do some thinking on what “continuous integration” means for a static site; is this just verifying that the site builds without any errors?

These are all achievable things; it’s just a matter of finding the time to work on them. The good news is that all these things are also really good blog post topics.

Q: Are you adding comments back to the site?

A: Hugo doesn’t have a built-in commenting system, since it’s made for static sites. There are, though, a few ways of adding comments to a static site. I won’t do Disqus (for a variety of reasons), but there is a solution that leverages GitHub pull requests for comments, and there’s also a solution based on AWS Lambda. This topic is something I’ll need to explore in greater detail, especially if I’m going to use it in conjunction with a CI service.

There you have it—feel free to hit me up on Twitter if you have additional questions.

Recent Posts

Using Keybase with GPG on macOS

During my too-brief stint using Fedora Linux as my primary laptop OS (see here for some details), I became attached to using GPG (GNU Privacy Guard)—in conjunction with Keybase—for signing Git commits and signing e-mail messages. Upon moving back to macOS, I found that I needed to set this configuration back up again, and so I thought I’d document it here in case others find it useful.

Read more...

A Brief Look at VMware's Three Cloud Approaches

I’m at VMworld 2017 this week (obviously, based on my tweets and blog posts), and in the general sessions Monday and yesterday VMware made a big deal about how VMware is approaching cloud computing and cloud services. However, as I’ve been talking to other attendees, it’s become clear to me that many people don’t understand the three-pronged approach VMware is taking.

Read more...

Liveblog: VMworld 2017 Day 2 Keynote

This is a liveblog of the day 2 keynote at VMworld 2017 in Las Vegas, NV. Unlike yesterday, I wasn’t accosted by the local facilities team while trying to get a seat at a table in the bloggers/press/analyst area, so that’s an improvement. While I’m aware of (most, if not all, of) the announcements that will be made today, I’m still looking forward to the keynote.

Read more...

Liveblog: VMworld 2017 Day 1 Keynote

This is a liveblog of the day 1 keynote at VMworld 2017 in Las Vegas, NV. There was a bit of a kerfuffle regarding seating (the local facilities staff didn’t want to let me sit in the bloggers’ area because “you’re not a blogger”), but I managed to snag a seat anyway.

Read more...

Technology Short Take #86

Welcome to Technology Short Take #86, the latest collection of links, articles, and posts from around the web, focused on major data center technology areas. Enjoy!

Read more...

Quick Reference to Common AWS CLI Commands

This post provides an extremely basic “quick reference” to some commonly-used AWS CLI commands. It’s not intended to be a deep dive, nor is it intended to serve as any sort of comprehensive reference (the AWS CLI docs nicely fill that need).

Read more...

Using ODrive for Cloud Storage on Linux

A few months ago, I stumbled across a service called ODrive (“Oh” Drive) that allows you to combine multiple cloud storage services together. Since that time, I’ve been experimenting with ODrive, testing it to see how well it works, if at all, with my Fedora Linux environment. In spite of very limited documentation, I think I’ve finally come to a point where I can share what I’ve learned.

Read more...

Manually Installing Azure CLI on Fedora 25

For various reasons that we don’t need to get into just yet, I’ve started exploring Microsoft Azure. Given that I’m a command-line interface (CLI) fan, and given that I use Fedora as my primary laptop operating system, this led me to installing the Azure CLI on my Fedora 25 system—and that, in turn, led to this blog post.

Read more...

Technology Short Take #85

Welcome to Technology Short Take #85! This is my irregularly-published collection of links and articles from around the Internet related to the major data center technologies: networking, hardware, security, cloud computing, applications/OSes, storage, and virtualization. Plus, just for fun, I usually try to include a couple career-related links as well. Enjoy!

Read more...

Information on the Recent Site Migration

Earlier this week, I completed the migration of this site to an entirely new platform, marking the third or fourth platform migration for this site in its 12-year history. Prior to the migration, the site was generated using Jekyll and GitHub Pages following a previous migration in late 2014. Prior to that, I ran WordPress for about 9 years. So what is it running now?

Read more...

VMworld 2017 Prayer Time

At VMworld 2017 in Las Vegas, I’m organizing—as I have in previous years—a gathering of Christians for a brief time of prayer while at the conference. If you’re interested in joining us, here are the details.

Read more...

Ten Years of Spousetivities

A long time ago in a galaxy far, far away (OK, so it was 2008 and it was here in this galaxy—on this very planet, in fact), I posted an article about bringing your spouse to VMworld. That one post sparked a fire that, kindled by my wife’s passion and creativity, culminates this year in ten years of Spousetivities! Yes, Spousetivities is back at VMworld (both US and Europe) this year, and Crystal has some pretty nice events planned for this year’s participants.

Read more...

The Linux Migration: July 2017 Progress Report

I’m now roughly six months into using Linux as my primary laptop OS, and it’s been a few months since my last progress report. If you’re just now picking up this thread, I encourage you to go back and read my initial progress report, see which Linux distribution I selected, or check how I chose to handle corporate collaboration (see here, here, and here). In this post, I’ll share where things currently stand.

Read more...

Technology Short Take #84

Welcome to Technology Short Take #84! This episode is a bit late (sorry about that!), but I figured better late than never, right? OK, bring on the links!

Read more...

CentOS Atomic Host Customization Using cloud-init

Back in early March of this year, I wrote a post on customizing the Docker Engine on CentOS Atomic Host. In that post, I showed how you could use systemd constructs like drop-in units to customize the behavior of the Docker Engine when running on CentOS Atomic Host. In this post, I’m going to build on that information to show how this can be done using cloud-init on a public cloud provider (AWS, in this case).

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!