Scott's Weblog
The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Technology Short Take 178

Welcome to Technology Short Take #178! This one is notably shorter than many of the Technology Short Takes I publish; I’m still trying to fine-tune my collection of RSS feeds (such a useful technology that seems to have fallen out of favor), removing inactive feeds and looking for new feeds to replace them. Regardless, I have managed to collect a few links for your reading pleasure this weekend. Enjoy!

Networking

Security

  • Matt Moore, CTO of Chainguard, goes into some detail on how Chainguard intends to honor the principles behind CISA’s Secure by Design pledge.
  • Ars Technica examines TunnelVision, a vulnerability that has existed since 2002 and has the potential to render VPN apps useless. From my reading of the article, the greatest concern lies with untrusted networks where an attacker could manipulate things in their favor. Join that Wi-Fi network at the coffee shop at your own risk!
  • Here’s a slightly older post (March 2023) on using AppArmor to restrict app permissions, with a particular focus on containers (including Kubernetes). It’s a bit basic, but it does (in my opinion) provide some useful information.
  • Nick Frichette shares some research around using non-production AWS API endpoints as a potential attack surface.

Cloud Computing/Cloud Management

  • Yan Cui lays out how to manage Route 53 hosted zones in multi-account environments. Yan notes that dealing with multiple accounts is a challenge for IaC products. For Pulumi, at least, this isn’t typically an issue—although I haven’t tried the specific scenario that Yan mentions in the article (an ACM certificate request originating in a different account than the one where the domain is hosted).
  • This trick for “de-Googling Google” has been making the rounds on social media for the last few days. I haven’t personally tried it yet—have you?
  • If you’re interested in blocking the bots that various companies use to scrape your site for LLM training data, there’s some good information here.
  • Muhammad Bhatti shares some code and information for “bootstrapping” Pulumi (that is, using Pulumi to create the necessary AWS infrastructure for a self-managed backend).
  • Jacob Gillespie of Depot shares some neat tricks for making EC2 boot time 8x faster.

Operating Systems/Applications

Storage

Career/Soft Skills

That’s all for now! I sincerely hope you found something useful among these links. As always, I welcome any and all feedback; find me online and let me know what you think of this post or the site in general. I’m on Twitter, in the Fediverse, it’s not hard to locate me in any of the various Slack communities I frequent, and I even take the time to respond to legitimate e-mail messages from readers. Don’t be shy, reach out and say hi!

Endpoint Selectors and Kubernetes Namespaces in CiliumNetworkPolicies

While performing some testing with CiliumNetworkPolicies, I came across a behavior that was unintuitive and unexpected to me. The behavior centers around how an endpoint selector behaves in a CiliumNetworkPolicy when Kubernetes namespaces are involved. (If you didn’t understand a bit of what I just said, I’ll provide some additional explanation shortly—stay with me!) After chatting through the behavior with a few folks, I realized the behavior is essentially “correct” and expected. However, if I was confused by the behavior, there’s a good chance others might be confused by it as well, so I thought a quick blog post might be a good idea. Keep reading to get more details on the interaction between endpoint selectors and Kubernetes namespaces in CiliumNetworkPolicies.

Before digging into the behavior, let me first provide some definitions or explanations of the various things involved here:

  • A CiliumNetworkPolicy is a Cilium-specific custom resource that extends the standard Kubernetes NetworkPolicy, describing what traffic is allowed to and from a set of endpoints.
  • In Cilium’s terminology, an endpoint corresponds (roughly) to a pod. The endpointSelector in a policy uses labels to determine which endpoints the policy applies to, while fromEndpoints and toEndpoints rules use labels to select the peers allowed to communicate with those endpoints.
  • A Kubernetes namespace is a mechanism for dividing the resources in a cluster into separate logical groups.

With these high-level definitions in mind, let’s dig into the behavior. I’ll start with this short CiliumNetworkPolicy (taken directly from the Cilium docs):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "allow-all-to-victim"
spec:
  endpointSelector:
    matchLabels:
      role: victim
  ingress:
  - fromEndpoints:
    - {}

This policy is described as a rule that allows all inbound traffic to the endpoints matching the endpointSelector. Indeed, the rule has an empty fromEndpoints rule (the - {} under fromEndpoints in the example above), which selects all endpoints.

Now look at this policy (also taken directly from the Cilium documentation):

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "isolate-ns1"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        {}

Note the similarity in the ingress section of this example policy versus the first one—this policy has an empty matchLabels set on the fromEndpoints, meaning it will match any set of labels on an endpoint (essentially, matching all endpoints). Because the fromEndpoints rule matches all endpoints, just as in the first policy, you might look at this policy and think it is also intended to allow all inbound traffic.

However, this policy is described as a policy to “lock down ingress of the pods” in the specified namespace. Huh? How can two policies, both of which have empty fromEndpoints rules, allow all inbound traffic in one case and lock down inbound traffic in the other?

The key here is understanding the interaction between CiliumNetworkPolicies and Kubernetes namespaces. The Cilium docs have a page discussing the use of Kubernetes constructs in policies, and that page reminds you that CiliumNetworkPolicies are namespace-scoped (limited or constrained to a single namespace). What this means is that the empty fromEndpoints rule in the policy example above does select all endpoints—but only all endpoints in the namespace in which the policy is applied.

When you add that missing piece—that an empty endpoint selector only selects endpoints in the same/specified namespace—then these two example policies make more sense. Both policies accomplish the same thing: they both allow all traffic from the current namespace while denying traffic from other namespaces. (The first policy is, in my opinion, described a bit inaccurately. It does allow all inbound traffic, but only from the current namespace.)
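
If you want to see this behavior for yourself, here’s a rough sketch of a test, assuming the “isolate-ns1” policy above is applied in namespace ns1 on a Cilium-managed cluster (the pod names and images below are just illustrative placeholders):

# Spin up a target pod and a client pod in ns1, plus a client pod in ns2
kubectl -n ns1 run web --image=nginx
kubectl -n ns1 run client --image=busybox --restart=Never -- sleep 3600
kubectl -n ns2 run client --image=busybox --restart=Never -- sleep 3600

# Grab the target pod's IP address (wait for the pods to be Running first)
WEB_IP=$(kubectl -n ns1 get pod web -o jsonpath='{.status.podIP}')

# This should succeed: same-namespace traffic matches the empty fromEndpoints rule
kubectl -n ns1 exec client -- wget -qO- -T 5 "http://$WEB_IP"

# This should time out: traffic from ns2 is not selected by the policy in ns1
kubectl -n ns2 exec client -- wget -qO- -T 5 "http://$WEB_IP"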

So what’s the key takeaway here? If you need to write a CiliumNetworkPolicy that targets endpoints in another namespace, you can’t use an empty fromEndpoints rule—instead, you need to explicitly call out the namespace of the source endpoint, like this:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: "k8s-expose-across-namespace"
  namespace: ns1
spec:
  endpointSelector:
    matchLabels:
      {}
  ingress:
  - fromEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: ns2

I hope this explanation is helpful. As I mentioned at the start of the post, if I was confused by this then there’s a reasonable chance that others are (or will be) confused by it as well. I do intend to submit one or more PRs to the Cilium documentation to see if I can help clarify this in some way. In the meantime, if you have any questions—or any feedback for me—feel free to reach out. You can find me on Twitter, in the Fediverse, or in a number of different Slack communities. Thanks for reading!

Getting Barrier Working Between Arch Linux and Ubuntu

I recently had a need to get Barrier—an open source project aimed at enabling mouse/keyboard sharing across multiple computers, aka a “software KVM”—running between Arch Linux and Ubuntu 22.04. Unfortunately, the process for getting Barrier working isn’t as intuitive as it should be, so I’m posting this information in the hopes it will prove useful to others who find themselves in a similar situation. Below, I’ll share how I got Barrier working between an Arch Linux system and an Ubuntu system.

Although this post specifically mentions Arch Linux and Ubuntu, the process for getting Barrier running should be pretty similar (if not identical) for other Linux distributions and for macOS. I don’t have any Windows-based systems on which to test these instructions, but they should be adaptable to Windows as well. Note that there may be slight differences in the flags for the commands listed here when they are run on platforms other than Linux.

Installing Barrier

Both Arch and Ubuntu 22.04 have the latest release of Barrier, version 2.4.0, available in their repositories, so the installation is straightforward.

For Arch, just install with pacman:

pacman -S barrier

There’s also a “barrier-headless” package in Arch that has only the binaries and not the GUI component.

For Ubuntu, just install with apt:

apt install barrier

There does not appear to be a non-GUI package available for Ubuntu.

Fixing Barrier’s SSL/TLS Configuration

The core of the issue with Barrier is in its SSL/TLS configuration. Barrier expects certain SSL/TLS assets to exist, but doesn’t create those assets automatically. To make matters worse, the documentation doesn’t indicate that these assets need to be manually created. Fortunately, once you do create the assets, Barrier seems to work as expected.

On the server (this is the system that will be sharing the mouse and keyboard), you’ll want to create an SSL/TLS certificate. Here’s the openssl command you need to use:

openssl req -x509 -nodes -days 365 -subj /CN=Barrier -newkey rsa:4096 -keyout Barrier.pem -out Barrier.pem

As far as I am aware, the subject must match what’s described above, but the duration of the certificate (365 days) and the key length (4096 bits) can be varied if desired. A quick review of the project’s source code indicates that the file must be named Barrier.pem, and that the file must be stored (on Linux, at least) in the $HOME/.local/share/barrier/SSL directory. You’ll likely need to create the SSL directory yourself.
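
Putting those pieces together, the whole step might look something like this on a Linux system (a sketch, assuming the default location described above):

# Create the SSL directory Barrier expects (it is not created automatically)
mkdir -p "$HOME/.local/share/barrier/SSL"
cd "$HOME/.local/share/barrier/SSL"

# Generate the self-signed certificate and key into a single Barrier.pem file
openssl req -x509 -nodes -days 365 -subj /CN=Barrier \
    -newkey rsa:4096 -keyout Barrier.pem -out Barrier.pem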

Repeat this step on each client (a client is a system that will be controlled by the keyboard and mouse connected to the server) you’ll be using with Barrier. You can use the same command as above, and the same restrictions/limitations apply.

At this point, all systems should have an SSL/TLS certificate.

Next, on both the server and all clients, create a subdirectory in the $HOME/.local/share/barrier/SSL directory called Fingerprints. You’ll use this directory to store a file used by Barrier that contains the SHA256 fingerprint of the SSL/TLS certificate.

To generate the SHA256 fingerprint of the SSL/TLS certificate, you can use this command (the below command assumes you are running it from the $HOME/.local/share/barrier/SSL directory where Barrier.pem is found):

openssl x509 -fingerprint -sha256 -noout -in Barrier.pem | cut -d"=" -f2

On every system (the server and all clients), store this SHA256 fingerprint in the Fingerprints directory in a file named Local.txt. You can use shell redirection like this:

openssl x509 -fingerprint -sha256 -noout -in Barrier.pem | cut -d"=" -f2 > Fingerprints/Local.txt

Alternately, you can store the fingerprint as a shell variable and then use the shell variable later. Given that there’s one more small, mostly undocumented detail, you might prefer this second approach:

FINGERPRINT=$(openssl x509 -fingerprint -sha256 -noout -in Barrier.pem | cut -d"=" -f2)

The “small, mostly undocumented” detail is described in the Arch Linux wiki; that’s fortunate because it didn’t appear to be documented anywhere else. Barrier expects the text “v2:sha256:” to precede the SHA256 fingerprint in Local.txt (and some other files you’ll create shortly).

With that in mind, using a shell variable to store the SHA256 fingerprint is useful because you can just do this after running the previous command:

echo "v2:sha256:$FINGERPRINT" > Fingerprints/Local.txt

If you chose the first approach, just edit the file to add “v2:sha256:” before the SHA256 fingerprint in the file.

At this point, all systems have an SSL/TLS certificate and all systems have the SHA256 fingerprint of that certificate in a file, prepended with “v2:sha256:”. Make sure both of these steps are complete before proceeding!
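
If you’d like a quick sanity check before moving on, something like this on each system (again assuming the Linux paths used above) will show the certificate and confirm that Local.txt starts with the required prefix:

cd "$HOME/.local/share/barrier/SSL"

# The certificate should exist...
ls -l Barrier.pem

# ...and the fingerprint file should begin with "v2:sha256:"
grep '^v2:sha256:' Fingerprints/Local.txt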

The next steps are different for the server and for the clients.

On the Barrier Server

The Barrier server needs to be informed of the SHA256 fingerprints for every client that will connect to the server. Fortunately, each client has a Local.txt file that has the fingerprint along with the required “v2:sha256:” text.

Take the contents of each client’s Local.txt file and append it to a file on the server named TrustedClients.txt. This file should reside in the Fingerprints subdirectory alongside the server’s Local.txt. When you’re finished, the TrustedClients.txt file will contain a line, starting with “v2:sha256:” and followed by the SHA256 fingerprint, for each and every client that will connect to the server. If you have two clients, then you’ll have two fingerprints in this file. If you have five clients, then you’ll have five fingerprints in this file. If you have only a single client, then this file will have only a single fingerprint. There are numerous ways to get the client fingerprint files over to the server; choose whichever method you prefer.
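
For example, if you’ve already copied each client’s Local.txt file over to the server, building TrustedClients.txt could look something like this (the client filenames below are purely illustrative placeholders):

cd "$HOME/.local/share/barrier/SSL"

# client1-Local.txt and client2-Local.txt are copies of the clients' Local.txt files
cat client1-Local.txt client2-Local.txt >> Fingerprints/TrustedClients.txt

# Each line should be "v2:sha256:" followed by a client's fingerprint
cat Fingerprints/TrustedClients.txt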

On the Barrier Client

Just as the Barrier server needs to know about the SHA256 fingerprints of the clients, the clients need to know about the SHA256 fingerprint of the server. This information should be placed in a file named TrustedServers.txt in the Fingerprints subdirectory, alongside the client’s Local.txt file.

You have (at least) two options for making this happen. First, you can copy the Fingerprints/Local.txt file from the server to each client. To be honest, this is probably the easiest way.
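
If you go the copy route, a single scp command run on each client might do it; here’s a sketch, assuming SSH access to the server and the default Linux paths (replace <server> with the Barrier server’s hostname):

# The remote path is relative to the server user's home directory
scp "<server>:.local/share/barrier/SSL/Fingerprints/Local.txt" \
    "$HOME/.local/share/barrier/SSL/Fingerprints/TrustedServers.txt"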

The second way is to use openssl s_client to retrieve the certificate from the server and generate the fingerprint. (The Barrier server needs to be running for this to work.) The command looks something like this (the command below assumes it is being run from the $HOME/.local/share/barrier/SSL directory):

echo -n | openssl s_client -connect <hostname>:24800 2>/dev/null | \
    openssl x509 -fingerprint -sha256 -noout | cut -f2 -d'=' | \
    sed 's/^/v2:sha256:/' > Fingerprints/TrustedServers.txt

Whichever approach you choose, the end result is the same: the fingerprint of the server’s SSL/TLS certificate, preceded by the “v2:sha256:” text, should be in a file named TrustedServers.txt in the Fingerprints subdirectory. Repeat this process on all clients.

At this point:

  1. Each system should have an SSL/TLS certificate, named Barrier.pem, found in the correct location (on Linux systems that’s $HOME/.local/share/barrier/SSL).
  2. Each system should have a file named Local.txt that contains the SHA256 fingerprint of this certificate, preceded by the text “v2:sha256:”. This file should reside in the Fingerprints subdirectory (so the full path on a Linux system should be $HOME/.local/share/barrier/SSL/Fingerprints).
  3. The Barrier server should have a TrustedClients.txt file in the Fingerprints subdirectory that contains the contents of each and every client’s Local.txt file (a separate line for each client).
  4. All Barrier clients should have a TrustedServers.txt file in the Fingerprints subdirectory that contains the contents of the server’s Local.txt file.

If this is not the case, then go back and repeat the necessary step(s). Once all of the above statements are true, you’re ready to run Barrier on the server and on all the clients, and the connections between the server and the clients will be authenticated (only trusted clients will be able to connect, and only to trusted servers) and encrypted.

Summary

Let’s summarize the steps required to make Barrier work—all the stuff described above:

  1. Generate a Barrier.pem SSL/TLS certificate on the server and on each client. Store that certificate in the $HOME/.local/share/barrier/SSL directory (which you will likely need to create). Note that the path may be different on non-Linux systems!
  2. On all systems involved (server and each client), put the SHA256 fingerprint of the certificate—prepended by the text “v2:sha256:”—into the Fingerprints subdirectory of the SSL directory where the certificate is found. Use the filename Local.txt.
  3. On each client, add the server’s SHA256 fingerprint into a file named TrustedServers.txt in the Fingerprints subdirectory (alongside Local.txt). Every fingerprint needs to have the text “v2:sha256:” preceding the fingerprint, one line per trusted server.
  4. On the server, add all the clients’ SHA256 fingerprints into a file named TrustedClients.txt in the Fingerprints subdirectory (alongside Local.txt). Every fingerprint needs to have the text “v2:sha256:” preceding the fingerprint, one line per trusted client.

Once these steps are complete, you should be able to launch Barrier on the server and on the clients and proceed without further issues. (You’ll still need to handle things like the Barrier configuration file on the server, though—that isn’t addressed by these steps.)

Closing Notes

It’s worth noting that Barrier appears to no longer be maintained, and the replacement project (known as Input Leap) isn’t quite ready for regular use yet. The following quote is from the Input Leap repository’s README:

But for now, we advise sticking with Barrier v2.4.0/v2.3.4…

Keep this in mind if you decide you want to try/use Barrier yourself. (UPDATE: It’s also worth noting that after having this up and running for a few days with no problems, Barrier just suddenly stopped working with “No route to host” in the Ubuntu client logs—this despite the fact that both ping and ssh worked perfectly between the two systems. I’m not sure I can recommend using Barrier, given the state of the project.)

I hope this information is useful for folks. I had to spend time combing GitHub issues and spelunking through the code to assemble the information above, but there’s still no guarantee that I have it all correct. If you see an error or a mistake, let me know so I can fix it! Feel free to reach out to me on Twitter, in the Fediverse, or via one of the many Slack communities I frequent. All constructive feedback is welcomed.

Technology Short Take 177

Welcome to Technology Short Take #177! Wow, is it the middle of May already? The year seems to be flying by—much in the same way that all these technical articles keep flying by my Inbox, occasionally getting caught and included here! In this Technology Short Take, I have links on things ranging from physical network designs to running retro operating systems as virtual machines. Surely there will be something useful in here for you!

Networking

  • Blogger Evert has a two-part series (here and here) on managing NSX ALBs with Terraform.
  • Ivan launches a series of blog posts exploring routing protocol designs that can be used to implement EVPN-with-VXLAN L2VPNs in a leaf-and-spine fabric. The first one is here. What’s really cool is that Ivan also includes a netlab topology readers can use to create a lab and see how it works.
  • Eduard Tolosa discusses binding wireless network adapters to systemd-nspawn containers.
  • Ioannis Theodoridis has a three-part series on how he and his team used tools like Nautobot, Nornir, and Python to help with some extensive network migrations. Check out the series (part 1, part 2, and part 3); I think you’ll find some useful information in there.

Servers/Hardware

  • While in many respects Apple’s M series CPUs are amazing, all is not perfect: security researchers have discovered a flaw that would allow attackers to steal cryptographic keys. More details are available in this Zero Day article.

Security

Cloud Computing/Cloud Management

Operating Systems/Applications

Programming/Development

Storage

Virtualization

  • Talk about a blast from the past! William Lam discusses running a prerelease version of OS/2 2.0—an operating system I myself ran in the mid-1990s before switching to Windows NT—as a virtual machine on VMware ESXi. For what it’s worth, I remain convinced that OS/2 version 2 was technologically superior to its Windows peers (including Windows NT). It’s another example of how the best technology doesn’t always win.

Career/Soft Skills

OK, that’s all for this time around. Did you like this post, or another post on the site? Or maybe you have a question? Feel free to reach out! I always enjoy hearing from readers, so I invite you to find me on Twitter, on the Fediverse, or in one of the various Slack communities I frequent. (You can drop me an e-mail, if you’d prefer—my address isn’t too hard to find.) Thanks for reading!

Tracking EC2 Instances used by EKS with AWS CLI

As a sort of follow-up to my previous post on using the AWS CLI to track the specific Elastic Network Interfaces (ENIs) used by Amazon Elastic Kubernetes Service (EKS) cluster nodes, this post focuses on the EC2 instances themselves. I feel this is less of a “problem” than tracking ENIs, but I wanted to share this information nevertheless. In this post, I’ll show you which AWS CLI command to use to list all the EC2 instances associated with a particular EKS cluster.

If you read the previous post on tracking ENIs used by EKS, you might think that you could use a very similar AWS CLI command (aws ec2 describe-instances instead of aws ec2 describe-network-interfaces) to track the EC2 instances in a cluster—and you’d be mostly correct. Like the ENIs, EKS does add a cluster-specific tag to all EC2 instances in the cluster. However, just to make life interesting, the tag used for EC2 instances is not the same as the tag used for ENIs. (If someone at AWS knows of a technical reason why these tags are different, I’d love to hear it.)

Instead of the cluster.k8s.amazonaws.com/name tag that is used on the ENIs, you’ll need to use the aws:eks:cluster-name tag, like this:

aws ec2 describe-instances --filters Name=tag:aws:eks:cluster-name,\
Values=<name-of-cluster>

Just replace <name-of-cluster> in the above command with the name of your EKS cluster, and you’re good to go. As I mentioned in the previous post, if you’re using an automation tool such as Pulumi or Terraform, you may need to explicitly specify the name of the cluster in your code (or look it up after the cluster is created).
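
If you want just the instance IDs rather than the full JSON output, you can combine the filter with a --query expression, something like this:

# Lists only the instance IDs of the nodes in the named EKS cluster
aws ec2 describe-instances \
    --filters Name=tag:aws:eks:cluster-name,Values=<name-of-cluster> \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text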

I hope this information is useful to folks. If you have questions (or corrections, in the event I have something incorrect here!), please feel free to reach out. You can find me on Twitter, on the Fediverse, or in a number of different Slack communities. Thanks for reading!

Recent Posts

Tracking ENIs used by EKS with AWS CLI

I’ve recently been spinning up lots of Amazon Elastic Kubernetes Service (EKS) clusters (using Pulumi, of course) in order to test various Cilium configurations. Along the way, I’ve wanted to verify the association and configuration of Elastic Network Interfaces (ENIs) being used by the EKS cluster. In this post, I’ll share a couple of AWS CLI commands that will help you track the ENIs used by an EKS cluster.

Read more...

Technology Short Take 176

Welcome to Technology Short Take #176! This Tech Short Take is a bit heavy on security-related links, but there’s still some additional content in a number of other areas, so you should be able to find something useful—or at least interesting—in here. Thanks for reading!

Read more...

Linting your Markdown Files

It’s no secret I’m a fan of Markdown. The earliest mention of Markdown on this site is all the way back in 2011, and it was only a couple years after that when I migrated this site from WordPress to Markdown. Back then, the site was generated from Markdown using Jekyll (via GitHub Pages); today it is generated from Markdown sources using Hugo. One thing I’ve not done, though, is perform linting (checking for errors or potential errors) of the Markdown source files. That’s all about to change! In this post, I’ll share with you how I started linting my Markdown files.

Read more...

Technology Short Take 175

Welcome to Technology Short Take #175! Here’s your weekend reading—a collection of links and articles from around the internet on a variety of data center- and cloud-related topics. I hope you find something useful here!

Read more...

Technology Short Take 174

Welcome to Technology Short Take #174! For your reading pleasure, I’ve collected links on topics ranging from Kubernetes Gateway API to recent AWS attack techniques to some geeky Linux and Git topics. There’s something here for most everyone, I’d say! But enough of my rambling, let’s get on to the good stuff. Enjoy!

Read more...

Using NAT Instances on AWS with Pulumi

For folks using AWS in their day-to-day jobs, it comes as no secret that AWS’ Managed NAT Gateway—responsible for providing outbound Internet connectivity to otherwise private subnets—is an expensive proposition. While the primary concern for large organizations is the data processing fee, the concern for smaller organizations or folks like me who run a cloud-based lab instead of a hardware-based home lab is the per-hour cost. In this post, I’ll show you how to use Pulumi to use a NAT instance for outbound Internet connectivity instead of a Managed NAT Gateway.

Read more...

Using SSH with the Pulumi Docker Provider

In August 2023, Pulumi released a version of the Docker provider that supported SSH-based connections to a Docker daemon. I’ve written about using SSH with Docker before (see here), and I sometimes use AWS-based “Docker build hosts” with my M-series Macs to make it easier/simpler (and sometimes faster) to build x86_64-based Docker images. Naturally, I’m using an SSH connection in those cases. Until this past weekend, however, I hadn’t really made the time to look deeper into how to use SSH with the Pulumi Docker provider. In this post, I’ll share some details that (unfortunately) haven’t yet made it into the documentation about using SSH with the Pulumi Docker provider.

Read more...

Technology Short Take 173

Welcome to Technology Short Take #173! After a lull in links to share last time around, it looks like things have rebounded and folks are in full swing writing new content for me to share with you. I think I have a decent round-up of links for you; hopefully you can find something useful here. Enjoy!

Read more...

Technology Short Take 172

Welcome to Technology Short Take #172, the first Technology Short Take of 2024! This one is really short, which I’m assuming reflects a lack of blogging activity over the 2023 holiday season. Nevertheless, I have managed to scrape together a few links to share with readers. As usual, I hope you find something useful. Enjoy!

Read more...

Selectively Replacing Resources with Pulumi

Because Pulumi operates declaratively, you can write a Pulumi program that you can safely run (via pulumi up) multiple times. If no changes are needed—meaning that the current state of the infrastructure matches what you’ve defined in your Pulumi program—then nothing happens. If only one resource needs to be updated, then it will update only that one resource (and any dependencies, if there are any). There may be times, however, when you want to force the replacement of specific resources. In this post, I’ll show you how to target specific resources for replacement when using Pulumi.

Read more...

Dynamically Enabling the Azure CLI with Direnv

I’m a big fan of direnv, the tool that lets you load and unload environment variables depending on the current directory. It’s so very useful! Not too terribly long ago, I wanted to find a way to “dynamically activate” the Azure CLI using direnv. Basically, I wanted to be able to have the Azure CLI disabled (no configuration information) unless I was in a directory where I needed or wanted it to be active, and be able to make it active using direnv. I finally found a way to make it work, and in this blog post I’ll share how you can do this, too.

Read more...

Conditional Git Configuration

Building on the earlier article on automatically transforming Git URLs, I’m back with another article on a (potentially powerful) feature of Git—the ability to conditionally include Git configuration files. This means Git can be configured (and can behave) differently based on certain conditions, simply by including or not including Git configuration files. Let’s look at a pretty straightforward example taken from my own workflow.

Read more...

Automatically Transforming Git URLs

Git is one of those tools that lots of people use, but few people truly master. I’m still on my own journey of Git mastery, and still have so very far to go. However, I did take one small step forward recently with the discovery of the ability for Git to automatically rewrite remote URLs. In this post, I’ll show you how to configure Git to automatically transform the URLs of Git remotes.

Read more...

Technology Short Take 171

Welcome to Technology Short Take #171! This is the next installment in my semi-regular series that shares links and articles from around the interwebs on various technology areas of interest. Let the linking begin!

Read more...

Saying Goodbye to the Full Stack Journey

In January 2016, I published the first-ever episode of the Full Stack Journey podcast. In October 2023, the last-ever episode of the Full Stack Journey podcast was published. After almost eight years and 83 episodes, it was time to end my quirky, eclectic, and unusual podcast that explored career journeys alongside various technologies, products, and open source projects. In this post, I wanted to share a few thoughts about saying goodbye to the Full Stack Journey.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!