Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Troubleshooting TLS Certificates

I was recently working on a blog post involving the use of TLS certificates for encryption and authentication, and was running into errors. I’d checked all the “usual suspects”—AWS security groups, host-level firewall rules (via iptables), and the application configuration itself—but still couldn’t get it to work. When I did finally find the error, I figured it was probably worth sharing the commands I used in the event others might find it helpful.

The error manifested itself in an odd way: I was able to successfully connect to the application (with TLS) on the loopback address, but not on the IP address assigned to the network adapter. Using ss -lnt, I verified that the application was listening on all IP addresses (not just loopback), and as I mentioned earlier I had also verified that AWS security groups and the host-level firewall weren’t in play. This led me to believe that there was something wrong with my TLS configuration.

Since the application’s error message was extremely vague (and not even remotely TLS-related), I decided to try using curl to verify that TLS was working correctly. First I ran this command:

curl --cacert /path/to/CA/certificate -v https://localhost:<port>/

After some output, curl reported this error:

curl: (35) gnutls_handshake() failed: certificate is bad

At first, I took this to mean that something was actually wrong with the certificate, but I quickly realized this was an expected error—TLS authentication was enabled, and I hadn’t given curl the correct parameters. So I tried it again, this time with the correct parameters:

curl --cacert /path/to/CA/certificate \
--cert /path/to/client/certificate \
--key /path/to/client/certificate/key \
-v https://localhost:<port>/

This time the connection succeeded, and the output of the curl command showed that TLS encryption and authentication were in place and successful. The next step was to try it against the IP address assigned to the network adapter (where it had been failing):

curl --cacert /path/to/CA/certificate \
--cert /path/to/client/certificate \
--key /path/to/client/certificate/key \
-v https://<nic-ip>:<port>/

And the output tells me, clearly, what’s wrong:

curl: (51) SSL: certificate subject name (ip-10-200-10-1) does not match target host name ''

Ah, now that’s much more informative, but unexpected—was the certificate, in fact, not configured correctly? Fortunately, it was relatively easy to double-check:

openssl x509 -in /path/to/server/certificate -text

Clearly noted in the output under the “X509v3 Extensions” section is the answer:

X509v3 Subject Alternative Name:
    DNS:ip-10-200-10-1, DNS:localhost, IP Address:, IP Address:0:0:0:0:0:0:0:1

Sure enough, the certificate was missing a Subject Alternative Name (SAN) for the network adapter’s IP address, and that was enough to cause the application (rightfully so) to fail.
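For completeness, here’s a minimal sketch of how a certificate with the correct SAN list could be generated and then double-checked with OpenSSL. The IP address 10.200.10.1, the hostnames, and the file names are all placeholders I’ve made up for illustration, and the `-addext` and `-ext` options require OpenSSL 1.1.1 or later:

```shell
# Generate a self-signed server certificate whose SAN list includes the
# NIC's IP address (10.200.10.1 is a placeholder), then inspect the SANs.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout server.key -out server.crt \
  -subj "/CN=ip-10-200-10-1" \
  -addext "subjectAltName=DNS:ip-10-200-10-1,DNS:localhost,IP:127.0.0.1,IP:10.200.10.1"

# Print just the subjectAltName extension to confirm the IP is present.
openssl x509 -in server.crt -noout -ext subjectAltName
```

The second command prints only the subjectAltName extension, which makes it easy to spot a missing entry like the one that tripped me up here.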

So what are the takeaways? Well, they might be simplistic, but here’s what I learned:

  • Use curl -v to get more verbose information on SSL/TLS connections, especially when the application’s error messages aren’t very informative.
  • Use openssl x509 to view certificate details so that you can verify the certificate is configured correctly and has the right properties. SANs aren’t the only thing to check; Extended Key Usage is another area (is the certificate actually enabled for client authentication, for example?).

These takeaways may be “common knowledge” for a fair number of folks, but I suspect there are a few readers who will find this information handy.

Technology Short Take 103

Welcome to Technology Short Take 103, where I’m back yet again with a collection of links and articles from around the World Wide Web (Ha! Bet you haven’t seen that term used in a while!) on various technology areas. Here’s hoping I’ve managed to include something useful to you!



Nothing this time around, sorry!


Cloud Computing/Cloud Management

Operating Systems/Applications

  • Kushal Das (briefly) talks about using Podman for containers. Looks like Podman is still evolving pretty rapidly, but it may be worth giving a try.
  • Nick Janetakis decides he wants to benchmark Debian versus Alpine as a base Docker image and shares what he found. (The reasons behind his decision are in the article.)
  • YACIS (Yet Another Container Isolation Scheme) has arrived: Nabla containers. Like Google’s gVisor, I suspect this will see limited uptake (at least initially), since it requires a specific base image in order to work.
  • I got bitten by this issue recently when doing some testing using Ansible and Jinja2; fortunately, using -e ansible_python_interpreter=/usr/bin/python on the Ansible command line fixed it for me.
  • Prometheus is the second project to “graduate” within the CNCF; more details are here.



Sorry, I didn’t find anything to include this time. I’m sure there’s a ton of great content out there, but none of it passed across my “desk.” (If you were needing a sign of a shift in my focus, this should be it!)

Career/Soft Skills

  • I’ll put this here since it kind of applies to lots of different technology areas, even though it is sort of networking-focused: Ben Cotton recently shared 6 RFCs you should read. This is helpful information for folks new to IT, as well as for developers or systems-oriented folks who need a better understanding of some networking fundamentals.
  • In recounting his journey to the cloud, Stephen Manley shares a tidbit that I think is useful for folks in a similar situation: regardless of how technology changes, there’s a piece of your experience that remains valuable, and you can build on that piece to create a bridge into a new area. For Stephen, it was data management. For you, it may be something else. (I love his closing statement!)
  • Finally, I found this article from Scott Young on increasing your focus to be helpful. For “information workers” like us, focus is most definitely one of our most valuable resources.

That’s all for now, folks! Have a great weekend!

VMworld 2018 Prayer Time

For the last several years, I’ve organized a brief morning prayer time at VMworld. This year, I won’t be at the conference, but I’d like to help coordinate a time for believers to meet nevertheless. So, if you’re a Christian interested in gathering together with other Christians for a brief time of prayer, here are the details.

What: A brief time of prayer

Where: Mandalay Bay Convention Center, level 1 (same level as the food court), at the bottom of the escalators heading upstairs (over near the business center)

When: Monday 8/27 through Thursday 8/30 at 7:45am (this should give everyone enough time to grab breakfast before the keynotes start at 9am)

Who: All courteous attendees are welcome, but please note this will be a distinctly Christian-focused and Christ-centric activity (I encourage believers of other faiths/religions to organize equivalent activities)

Why: To spend a few minutes in prayer over the day, the conference, the attendees, and each other

You don’t need to RSVP or anything like that, although you’re welcome to if you’d like (just hit me up on Twitter). As I mentioned, I won’t be at the conference, so I’ll ask folks who have attended prayer time in previous years to help take the lead in my absence. (There’s a growing chance I’ll be in town but not attending the conference; if that works out, I’ll be sure to join for prayer in the morning.)

There’s no need to bring anything other than an open heart, your faith, and your willingness to display that faith in front of others. The gathering is very casual—we’ll gather together, share a few prayer requests and needs, and then give folks the opportunity to pray as they feel led. If you don’t like praying out loud in public, that’s cool; we’re not going to force anyone. We just want to give believers the opportunity to strengthen one another in the faith.

I hope that plenty of believers at the conference get the opportunity to join for prayer. Reach out to other Christians you may know and tell them about morning prayer at VMworld!

Bolstering my Software Development Skills

I recently tweeted that I was about to undertake a new pet project where I was, in my words, “probably going to fall flat on my face”. Later, I asked on Twitter if I should share some of the learning that will occur (is occurring) as a result of this new project, and a number of folks indicated that I should. So, with that in mind, here’s the announcement: the project I’ve undertaken is a software development project aimed at helping me bolster my software development skills, and I’ll be blogging about it along the way so that others can benefit from my mistakes…er, learning.

Readers may recall that my 2018 project list included a project to learn to write code in Golang. At the time, I indicated I’d use Kubernetes and related projects, along with my goal of making more open source contributions, as a vehicle for helping to accomplish that goal. In retrospect, that was quite ambitious, and I’ve since come to the realization that there are a number of “baby steps” that I need to take before I am ready to use a large software project like Kubernetes as a means to help improve my coding skills. In other words, I need to learn to walk (or even crawl!) before I can run.

At the same time, I’ve grown increasingly interested in Ballerina, the new “cloud native programming language” that’s recently sprung up. Given that it’s difficult (perhaps even impossible?) to learn a programming language without some sort of project, I figured: why not build a simple, microservices-based application using multiple languages (specifically, Golang and Ballerina)?

Enter Polyglot, a simple microservices-based application whose only purpose is to serve as a framework for bolstering my software development skills. Polyglot is laughably simple, at least right now: it has two services, both API driven, that will allow users or other services to interact with data from a back-end database. One will be written in Golang, and one will be written in Ballerina. That’s it. Will it grow into something more? Probably. Will I realize later on that I did lots of things wrong along the way? Almost certainly. That, however, is kind of the point—without trying to tackle something like this, I can’t grow my knowledge and skills beyond where they are right now. Part of learning is failing and making mistakes, and Polyglot gives me a place where I can fail, make mistakes, and (most importantly) learn from those failures and mistakes.

Along the way, I’ll be blogging about what I have learned/am learning as I work on Polyglot. Feel free to hit me up on Twitter if you have questions or suggestions, and I welcome comments/PRs on the Polyglot repository.

Cloning All Repositories in a GitHub Organization

I’ve recently started playing around with Ballerina, and upon the suggestion of some folks on Twitter I wanted to clone down some of the “official” Ballerina GitHub repositories, which provide code examples and guides that would assist in my learning. Upon attempting to do so, however, I found myself needing to clone down 39 different repositories (all under a single organization), so I asked on Twitter if there was an easy way to do this. Here’s what I found.

Fairly quickly after I posted my tweet asking about a solution, a follower responded indicating that I should be able to get the list of repositories via the GitHub API. He was, of course, correct:

curl -s "https://api.github.com/orgs/<org-name>/repos"

This returns a list of the repositories in JSON format. Now, if you’ve been paying attention to my site, you know there’s a really handy way of parsing JSON data at the CLI (namely, the jq utility). However, to use jq, you need to know the overall structure of the data. What if you don’t know the structure?

No worries, this post outlines another tool—jid—that allows us to interactively explore the data. So, I ran:

curl -s "https://api.github.com/orgs/<org-name>/repos" | jid

This let me explore the data being returned by the GitHub API call, and I quickly determined that I needed the clone_url property for each repository. With this information in hand, I can now construct this command:

curl -s "https://api.github.com/orgs/<org-name>/repos" |
jq -r '.[].clone_url'

Now I have a list of all the clone URLs for all the repositories, right? Not quite—the GitHub API paginates its results (returning 30 items per page by default), so a minor adjustment is needed to request more results per page:

curl -s "https://api.github.com/orgs/<org-name>/repos?per_page=100" | jq -r '.[].clone_url'

From here it’s a simple matter of piping the results to xargs, like this:

curl -s "https://api.github.com/orgs/<org-name>/repos?per_page=100" | jq -r '.[].clone_url' | xargs -n 1 git clone

Boom! Problem solved. As fate would have it, I’m not the only one thinking along these lines; here’s another example. Several others also suggested solutions involving Ruby; here’s one such example (this is written for GitHub Enterprise but should work for “ordinary” GitHub).

Naturally, further tweaks to the API URL may be necessary; if you needed private repos, for example, then you’ll have to add &type=private on the URL. Of course, that also means you’ll need to supply authentication details…but you get the idea.
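To illustrate the jq portion of the pipeline without hitting the GitHub API, here’s a small offline sketch; the JSON below is made-up sample data standing in for the API’s response (the org and repository names are fictitious):

```shell
# Sample data standing in for the GitHub API's JSON response
# (the org and repo names are made up for illustration).
cat <<'EOF' > /tmp/repos.json
[
  {"name": "repo-one", "clone_url": "https://github.com/example-org/repo-one.git"},
  {"name": "repo-two", "clone_url": "https://github.com/example-org/repo-two.git"}
]
EOF

# Extract the clone URLs exactly as in the pipeline above; piping this
# to 'xargs -n 1 git clone' would then clone each repository in turn.
jq -r '.[].clone_url' /tmp/repos.json
```

The jq filter `.[].clone_url` simply walks the top-level array and pulls out the clone_url property of each element, one URL per line, which is exactly the shape xargs expects.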

I hope others find this useful! (And thanks to those who took the time to respond on Twitter, I appreciate it!)

Recent Posts

Spousevitivities at VMworld 2018

In case there was any question whether Spousetivities would be present at VMworld 2018, let this settle it for you: Spousetivities will be there! In fact, registration for Spousetivities at VMworld 2018 is already open. If previous years are any indication, there’s a really good possibility these activities will sell out. Better get your tickets sooner rather than later!


Additive Loops with Ansible and Jinja2

I don’t know if “additive” is the right word, but it was the best word I could come up with to describe the sort of configuration I recently needed to address in Ansible. In retrospect, the solution seems pretty straightforward, but I’ll include it here just in case it proves useful to someone else. If nothing else, it will at least show some interesting things that can be done with Ansible and Jinja2 templates.


Technology Short Take 102

Welcome to Technology Short Take 102! I normally try to get these things published biweekly (every other Friday), but this one has taken quite a bit longer to get published. It’s no one’s fault but my own! In any event, I hope that you’re able to find something useful among the links below.


More Handy CLI Tools for JSON

In late 2015 I wrote a post about a command-line tool named jq, which is used for parsing JSON data. Since that time I’ve referenced jq in a number of different blog posts (like this one). However, jq is not the only game in town for parsing JSON data at the command line. In this post, I’ll share a couple more handy CLI tools for working with JSON data.


A Quick Intro to the AWS CLI

This post provides a (very) basic introduction to the AWS CLI (command-line interface) tool. It’s not intended to be a deep dive, nor is it intended to serve as a comprehensive reference guide (the AWS CLI docs nicely fill that need). I also assume that you already have a basic understanding of the key AWS concepts and terminology, so I won’t bore you with defining an instance, VPC, subnet, or security group.


Examining X.509 Certificates Embedded in Kubeconfig Files

While exploring some of the intricacies around the use of X.509v3 certificates in Kubernetes, I found myself wanting to be able to view the details of a certificate embedded in a kubeconfig file. (See this page if you’re unfamiliar with what a kubeconfig file is.) In this post, I’ll share with you the commands I used to accomplish this task.


Using Variables in AWS Tags with Terraform

I’ve been working to deepen my Terraform skills recently, and one avenue I’ve been using to help in this area is expanding my use of Terraform modules. If you’re unfamiliar with the idea of Terraform modules, you can liken them to Ansible roles: a re-usable abstraction/function that is heavily parameterized and can be called/invoked as needed. Recently I wanted to add support for tagging AWS instances in a module I was building, and I found out that you can’t use variable interpolation in the normal way for AWS tags. Here’s a workaround I found in my research and testing.


A Quadruple-Provider Vagrant Environment

In October 2016 I wrote about a triple-provider Vagrant environment I’d created that worked with VirtualBox, AWS, and the VMware provider (tested with VMware Fusion). Since that time, I’ve incorporated Linux (Fedora, specifically) into my computing landscape, and I started using the Libvirt provider for Vagrant (see my write-up here). With that in mind, I updated the triple-provider environment to add support for Libvirt and make it a quadruple-provider environment.


Technology Short Take 101

Welcome to Technology Short Take #101! I have (hopefully) crafted an interesting and varied collection of links for you today, spanning all the major areas of modern data center technology. Now you have some reading material for this weekend!


Exploring Kubernetes with Kubeadm, Part 1: Introduction

I recently started using kubeadm more extensively than I had in the past to serve as the primary tool by which I stand up Kubernetes clusters. As part of this process, I also discovered the kubeadm alpha phase subcommand, which exposes different sections (phases) of the process that kubeadm init follows when bootstrapping a cluster. In this blog post, I’d like to kick off a series of posts that explore how one could use the kubeadm alpha phase command to better understand the different components within Kubernetes, the relationships between components, and some of the configuration items involved.


Book Review: Infrastructure as Code

As part of my 2018 projects, I committed to reading and reviewing more technical books this year. As part of that effort, I recently finished reading Infrastructure as Code, authored by Kief Morris and published in September 2015 by O’Reilly (more details here). Infrastructure as code is very relevant to my current job function and is an area of great personal interest, and I’d been half-heartedly working my way through the book for some time. Now that I’ve completed it, here are my thoughts.


Technology Short Take 100

Wow! This marks 100 posts in the Technology Short Take series! For almost eight years (Technology Short Take #1 was published in August 2010), I’ve been collecting and sharing links and articles from around the web related to major data center technologies. Time really flies when you’re having fun! Anyway, here is Technology Short Take 100…I hope you enjoy!


Quick Post: Parsing AWS Instance Data with JQ

I recently had a need to get a specific subset of information about some AWS instances. Naturally, I turned to the CLI and some CLI tools to help. In this post, I’ll share the command I used to parse the AWS instance data down using the ever-so-handy jq tool.


Posts from the Past, May 2018

This month—May 2018—marks thirteen years that I’ve been generating content here on this site. It’s been a phenomenal 13 years, and I’ve enjoyed the opportunity to share information with readers around the world. To celebrate, I thought I’d do a quick “Posts from the Past” and highlight some content from previous years. Enjoy!


DockerCon SF 18 and Spousetivities

DockerCon SF 18 is set to kick off in San Francisco at the Moscone Center from June 12 to June 15. This marks the return of DockerCon to San Francisco after being held in other venues for the last couple of years. Also returning to San Francisco is Spousetivities, which has organized activities for spouses, significant others/domestic partners, friends, and family members traveling with conference attendees!


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!