Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Kubernetes, Kubeadm, and the AWS Cloud Provider

Over the last few weeks, I’ve noticed quite a few questions appearing in the Kubernetes Slack channels about how to use kubeadm to configure Kubernetes with the AWS cloud provider. You may recall that I wrote a post about setting up Kubernetes with the AWS cloud provider last September, and that post included a few snippets of YAML for kubeadm config files. Since I wrote that post, the kubeadm API has gone from v1alpha2 (Kubernetes 1.11) to v1alpha3 (Kubernetes 1.12) and now v1beta1 (Kubernetes 1.13). The changes in the kubeadm API result in changes in the configuration files, and so I wanted to write this post to explain how to use kubeadm 1.13 to set up a Kubernetes cluster with the AWS cloud provider.

I’d recommend reading the previous post from last September first. In that post, I listed four key configuration items that are necessary to make the AWS cloud provider work:

  1. Correct hostname (must match the EC2 Private DNS entry for the instance)
  2. Proper IAM role and policy for Kubernetes control plane nodes and worker nodes
  3. Kubernetes-specific tags on resources needed by the cluster
  4. Correct command-line flags added to the Kubernetes API server, controller manager, and the Kubelet

Items 1-3 remain the same, so I won’t discuss them here (refer back to last September’s post, where I do provide details). Instead, I’ll focus on item 4; in particular, how to use kubeadm configuration files to make sure that Kubernetes components are properly configured.
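As a quick refresher on item 3, the AWS cloud provider discovers cluster resources (instances, subnets, security groups, and so on) by looking for a cluster ownership tag. Here's a minimal sketch using the AWS CLI; the subnet ID and cluster name are placeholders for illustration:

```shell
# Tag a subnet for the cluster named "test"; use Value=owned for resources
# dedicated to this cluster, or Value=shared for resources used by
# multiple clusters
aws ec2 create-tags --resources subnet-0abc123def456789 \
  --tags Key=kubernetes.io/cluster/test,Value=shared
```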

Let’s start with the control plane.

Control Plane Configuration via Kubeadm

I’ll assume a highly available (HA) control plane (aka multi-master) configuration here. In this context, “highly available” means multiple control plane nodes and a separate, dedicated etcd cluster.

First, I’ll show you a kubeadm configuration file that will work, and then I’ll walk through some of the key points about the configuration:

---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
clusterName: test
controlPlaneEndpoint: cp-lb.us-west-2.elb.amazonaws.com
controllerManager:
  extraArgs:
    cloud-provider: aws
    configure-cloud-routes: "false"
    address: 0.0.0.0
etcd:
  external:
    endpoints:
    - https://10.0.1.10:2379
    - https://10.0.1.11:2379
    - https://10.0.1.12:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
scheduler:
  extraArgs:
    address: 0.0.0.0
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  name: ip-10-0-1-20.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: aws

A few notes about this configuration:

  • It should (hopefully) be obvious that you’ll need to edit many of these values so they are correct for your specific configuration. Notably, clusterName, controlPlaneEndpoint, and etcd.external.endpoints will all need to be changed to the correct values for your environment.
  • Whatever DNS name you supply for controlPlaneEndpoint—and it should be a DNS name and not an IP address, since in an HA configuration this value should point to a load balancer, and IP addresses assigned to AWS ELBs can change—will also be added as a Subject Alternative Name (SAN) to the API server’s certificate. If you have other SANs you want added (like maybe a CNAME alias for the ELB’s DNS name), you’ll need to manually add those via the apiServer.certSANs value (not shown above since it’s not needed).
  • kubeadm normally binds the controller manager and scheduler to localhost, which may cause problems with Prometheus (it won’t be able to connect and will think these components are “missing”). The controllerManager.extraArgs.address and scheduler.extraArgs.address values fix this issue.

You’d use this configuration file as outlined in this tutorial from the v1.11 version of the Kubernetes docs. Why the v1.11 version? In versions 1.12 and 1.13, kubeadm introduced a new --experimental-control-plane flag that allows you to more easily join new control plane nodes to the cluster. The YAML configuration file outlined above won’t work with the --experimental-control-plane flag. I’m still working through the details and performing some testing, and will update this article with more information once I’ve worked it out. (The “TL;DR” from the linked v1.11 tutorial is that you’ll use kubeadm init on each control plane node.)

At the end of the kubeadm init commands you’ll run to establish the control plane, kubeadm will display a command that includes a token and a SHA256 hash. Make note of these values, as you’ll need them shortly.
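To make the overall flow concrete, the sequence on each control plane node looks something like this (the file name and the output shown are illustrative):

```shell
kubeadm init --config=kubeadm-config.yaml

# Near the end of its output, kubeadm prints a join command along these lines:
#   kubeadm join <endpoint>:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
# Record the token and hash; they go into the worker node configuration.
```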

Worker Node Configuration via Kubeadm

As I did in the control plane section, I’ll first present a working kubeadm configuration file, then walk through any important points or notes about it:

---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: y6yaev.8dvwxx5ny3ef7vlq
    apiServerEndpoint: ""
    caCertHashes:
      - "sha256:7c9b814812fcba3745744dd45825f61318f7f6ca80018dee453e4b9bc7a5c814"
nodeRegistration:
  name: ip-10-0-1-30.us-west-2.compute.internal
  kubeletExtraArgs:
    cloud-provider: aws

As with the configuration file for the control plane, there are a few values you must modify for your particular environment:

  • The token, apiServerEndpoint, and caCertHashes values must be set. These values are displayed by kubeadm init when establishing the control plane; just plug in those values here.
  • The nodeRegistration.name value should reflect the correct hostname of the instance being joined to the cluster (which should match the EC2 Private DNS entry for the instance).

Once you’ve set these values, you’d use this file by running kubeadm join --config <filename>.yaml after your control plane is up and running.

Once the process completes, you can run kubectl get nodes to verify that the worker node has been added to the cluster. You can verify the operation of the AWS cloud provider by running kubectl get node <new-nodename> -o yaml and looking for the presence of the “providerID” field. If that field is there, the cloud provider is working. If the field is not present, the cloud provider isn’t working as expected, and it’s time to do some troubleshooting.
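If you’d rather not read through the full YAML output, a jsonpath query gets straight to the field (the node name here is a placeholder):

```shell
kubectl get node ip-10-0-1-30.us-west-2.compute.internal \
  -o jsonpath='{.spec.providerID}'
# With a working cloud provider, this prints something like:
#   aws:///us-west-2a/i-0abc123def456789
```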

More Resources

I gathered the information found in this article from a variety of sources, including my own direct personal experience in testing various configuration parameters and scenarios. Some of the information sources I used include:

I’d also like to include a quick shout-out to the members of the VMware Kubernetes Architecture team (née Heptio Field Engineering), who provided valuable feedback on the information found in this post.

For a vSphere version of this article, I recommend Myles Gray’s post on setting up Kubernetes and the vSphere cloud provider (I consulted with Myles on some of the kubeadm configurations in his post). Note that Myles’ article is still using the “v1alpha3” API version for joining the nodes to the cluster. I’d recommend migrating to the “v1beta1” API if at all possible.

Thanks for reading! If you have any questions, comments, suggestions, or corrections, feel free to reach out to me on Twitter. I’d love to hear from you.

Scraping Envoy Metrics Using the Prometheus Operator

On a recent customer project, I recommended the use of Heptio Contour for ingress on their Kubernetes cluster. For this particular customer, Contour’s support of the IngressRoute CRD and the ability to delegate paths via IngressRoutes made a lot of sense. Of course, the customer wanted to be able to scrape metrics using Prometheus, which meant I not only needed to scrape metrics from Contour but also from Envoy (which provides the data plane for Contour). In this post, I’ll show you how to scrape metrics from Envoy using the Prometheus Operator.

First, I’ll assume that you’ve already installed and configured Prometheus using the Prometheus Operator, a task which is already fairly well-documented and well-understood. If this is something you think would be helpful for me to write a blog post on, please contact me on Twitter and let me know.

The overall process looks something like this:

  1. Modify the Envoy DaemonSet (or Deployment, depending on your preference) to add a sidecar container and expose additional ports.
  2. Modify the Service for the Envoy DaemonSet/Deployment to expose the ports you added in step 1.
  3. Add a ServiceMonitor object (a CRD added by the Prometheus Operator) to tell Prometheus to scrape the metrics from Envoy.

Let’s take a closer look at each of these steps.

Modifying the Envoy DaemonSet/Deployment

The first step is to modify the Envoy DaemonSet (or Deployment) to include a sidecar container and expose some additional ports. I found it helpful to use this example DaemonSet configuration from the Heptio Gimbal repository as a guide. There are three sets of changes you need to make:

  1. Add port 8002 to the Envoy container.
  2. Add a sidecar container for the statsd-exporter.
  3. Modify the Contour initContainer to add the --statsd-enabled flag.

The first set of changes—adding port 8002—isn’t reflected in the linked Gimbal example because that configuration assumes Prometheus has a scrape configuration that finds the annotations. From everything I’ve been able to find so far, the Prometheus Operator doesn’t use that sort of configuration, so you’ll have to manually add the necessary ports. So add port 8002 to the list of ports for the Envoy container using something like this:

- containerPort: 8002
  hostPort: 8002
  name: metrics
  protocol: TCP

Next, add a sidecar container for the statsd-exporter. You can see this in lines 52 through 64 in the example configuration from Gimbal. You’ll also need to add a reference to a ConfigMap (see lines 88 through 90); the ConfigMap itself can be found here. Note that the Gimbal example uses the name “metrics” for port 9102 on the sidecar container; I chose to differentiate this using the name “statsd-exporter”.
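For reference, the sidecar looks roughly like the snippet below. The image tag, mapping-config path, and volume name are assumptions for illustration; consult the linked Gimbal example and ConfigMap for the exact values:

```yaml
# Sidecar container added alongside the Envoy container
- name: statsd-exporter
  image: prom/statsd-exporter:v0.8.0   # illustrative version
  args:
  - --statsd.mapping-config=/config/statsd-exporter.yaml
  ports:
  - containerPort: 9102
    name: statsd-exporter
    protocol: TCP
  volumeMounts:
  - name: statsd-exporter-config
    mountPath: /config
```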

The final change is reflected on line 75 of the Gimbal example.
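In other words, the contour bootstrap command in the initContainer picks up the extra flag, something like this (the image version and other arguments are illustrative):

```yaml
initContainers:
- name: envoy-initconfig
  image: gcr.io/heptio-images/contour:v0.8.1   # illustrative version
  command: ["contour"]
  args:
  - bootstrap
  - --statsd-enabled
  - /config/contour.json
```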

Once all these changes have been made, you’re ready to proceed to the next step.

Modifying the Envoy Service

The changes here are pretty straightforward; you’re just going to add a couple additional ports on the service:

- port: 8002
  name: metrics
  protocol: TCP
- port: 9102
  name: statsd-exporter
  protocol: TCP

These should be the only changes to the Service that are needed.

Creating a ServiceMonitor

Finally, we can create the Prometheus ServiceMonitor to tell Prometheus to scrape the newly-exposed ports to gather metrics. Here’s an example ServiceMonitor definition you could use:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    prometheus: kube-prometheus
  name: envoy
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: envoy
  namespaceSelector:
    matchNames:
      - ingress
  endpoints:
  - targetPort: 8002
    interval: 30s
    path: /stats/prometheus
  - targetPort: 9102
    interval: 30s

There are a few changes you might need to make to your ServiceMonitor definition:

  • Under metadata.labels, you’ll want to change the value of the prometheus label to point to the Prometheus Operator instance in your cluster.
  • You may need to specify a different namespace.
  • Under spec.selector.matchLabels and spec.namespaceSelector, you’ll most likely need to specify different values. Make sure the labels listed under spec.selector.matchLabels match the label(s) on the Envoy Service, not the Envoy Pods.
  • Note that the path of “/stats/prometheus” under spec.endpoints for port 8002 is required.

If you are using NetworkPolicies in your Kubernetes cluster, you’ll want to be sure that your policies allow the appropriate traffic from Prometheus to the Envoy Pods.
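As a sketch, a NetworkPolicy along these lines would admit the scrape traffic; the namespaces and label selectors are assumptions, so adjust them to match your Prometheus and Envoy deployments:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-prometheus-scrape
  namespace: ingress          # namespace where the Envoy Pods run
spec:
  podSelector:
    matchLabels:
      app: envoy
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: monitoring    # namespace where Prometheus runs
    ports:
    - port: 8002
      protocol: TCP
    - port: 9102
      protocol: TCP
```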

Applying the Changes

With all the changes in place, you’re ready to use kubectl to apply the changes to the DaemonSet/Deployment and the Service, and then to create the ServiceMonitor. Assuming you don’t have any typos (I almost always do!), you should be able to browse the Prometheus web interface and see your Envoy instances show up as targets for Prometheus. Now you’re good to go!

If you have any questions or run into issues, feel free to contact me on Twitter and I’ll do my best to help you out.

Technology Short Take 110

Welcome to Technology Short Take #110! Here’s a look at a few of the articles and posts that have caught my attention over the last few weeks. I hope something I’ve included here is useful for you also!


  • Via Kirk Byers (who is himself a fantastic resource), I read a couple of articles on network automation that I think readers may find helpful. First up is a treatise from Mircea Ulinic on whether network automation is needed. Next is an older article from Patrick Ogenstad that provides an introduction to ZTP (Zero Touch Provisioning).
  • The folks over at Cilium took a look at a recent CNI benchmark comparison and unpacked it a bit. There’s some good information in their article.
  • I first ran into Forward Networks a few years ago at Fall ONUG in New York. At the time, I was recommending that they explore integration with NSX. Fast-forward to this year, and the company announces support for NSX and (more recently) support for Cisco ACI. The recent announcement of their GraphQL-based Network Query Engine (NQE)—more information is available in this blog post—is also pretty interesting to me.



Cloud Computing/Cloud Management

Operating Systems/Applications


  • Jeff Hunter launches what appears to be a series on vSAN capacity management and monitoring with part 1, found here.


  • Myles Gray explains how to create an Ubuntu 18.04 LTS cloud image that will work with Guest OS Customization in vSphere.

Career/Soft Skills

OK, that’s all for now. I’ll have another Tech Short Take in a few weeks, and hopefully I can finally publish some of the original content I’ve been working on. Feel free to contact me on Twitter if you’d like to chat, if you have questions, or if you have suggestions for content I should include in the future. Thanks!

Technology Short Take 109

Welcome to Technology Short Take #109! This is the first Technology Short Take of 2019. It may be confirmation bias, but I’ve noticed a number of sites adding “Short Take”-type posts to their content lineup. I’ll take that as flattery, even if it wasn’t necessarily intended that way. Enjoy!


  • Niran Even-Chen says service mesh is a form of virtualization. While I get what Niran is trying to say here, I’m not so sure I agree with the analogy. Sometimes analogies such as this are helpful, but sometimes the analogy brings unnecessary connotations that make understanding new concepts more difficult. One area where I do strongly agree with Niran is in switching your perspective: looking at service mesh from a developer’s perspective gives one quite a different viewpoint than viewing service mesh in an infrastructure light.
  • Jim Palmer has a detailed write-up on DHCP Option 51 and different behaviors from different DHCP clients.
  • Niels Hagoort talks about some network troubleshooting tools in a vSphere/ESXi environment.


Nothing this time around, but I’ll stay alert for items to include next time.


Cloud Computing/Cloud Management

Operating Systems/Applications

  • Jorge Salamero Sanz describes how to use the Sysdig Terraform provider to do “container security as code.” I’m a fan of Terraform (despite some of its limitations), so it’s kind of cool to see new providers coming online.
  • OS/2 lives on. ‘Nuff said.
  • This project purports to help you generate an AWS IAM policy with exactly the permissions needed. It’s a bit of a brute force tool, so be sure to read the caveats, warnings, and disclaimers in the documentation!
  • I do manage most of my “dotfiles” in a Git repository, but I’d never heard of rcm before reading this Fedora Magazine article. It might be something worth exploring to supplant/replace my existing system.
  • I found this article by Forrest Brazeal on a step-by-step exploration of moving from a relational database to a single DynamoDB table to be very helpful and very informative. DynamoDB—along with other key-value store solutions—has been something I’ve been really interested in better understanding, but never could quite understand how they fit with traditional RDBMSes. I still have tons to learn, but at least now I have a bit of a framework by which to learn more. Thanks Forrest!
  • Steve Flanders provides an introduction to Ambassador, an open source API gateway. This looks interesting, but embedding YAML configuration in annotations seems…odd.
  • Mark Hinkle, a co-founder at TriggerMesh, announces TriggerMesh KLR—the Knative Lambda Runtime that allows users to run AWS Lambda functions in a Knative-enabled Kubernetes cluster. This seems very powerful to me, but I’m no serverless expert so maybe I’m missing something. Would the serverless experts care to weigh in?
  • Via Jeremy Daly’s Off-by-None newsletter, I found Jerry Hargrove’s Cloud Diagrams & Notes site. I haven’t dug in terribly deep yet, but at first glance Jerry’s site looks to be enormously helpful. (I have a suspicion that I’ve probably seen references to Jerry’s site via Corey Quinn’s Last Week in AWS newsletter, too.)
  • AWS users who prefer Visual Studio Code may want to track the development of the AWS Toolkit for Visual Studio Code. It’s early days yet, so keep that in mind.
  • And while we’re talking about Visual Studio Code, Julien Oudot highlights why users should choose Code for their Kubernetes/Docker work.



  • Marc Weisel shares how to use Cisco IOSv in a Vagrant box with VMware Fusion.
  • Paul Czarkowski talks about how the future of Kubernetes is virtual machines. The title is a bit of linkbait; what Paul is really addressing here is how to solve the multi-tenancy challenges that currently exist with Kubernetes (which wasn’t really designed for multi-tenant deployments). VMs provide good isolation, so VMs could be the method whereby operators can provide the sort of strong isolation that multi-tenant environments need. One small clarification to Paul’s otherwise excellent post: by admission on their own web page, gVisor is not a VM container technology, but rather uses a different means of providing additional security.

Career/Soft Skills

In the infamous words of Porky Pig, that’s all folks! Feel free to engage with me on Twitter if you have any comments, questions, suggestions, corrections, or clarifications (or if you just want to chat!). I also welcome suggestions for content to include in future instances of Technology Short Take. Thank you for reading!

On Thinking About Infrastructure as Code

I just finished reading Cindy Sridharan’s excellent post titled “Effective Mental Models for Code and Systems,” and some of the points Sridharan makes immediately jumped out to me—not for “traditional” code development, but for the development of infrastructure as code. Take a few minutes to go read the post—seriously, it’s really good. Done reading it? Good, now we can proceed.

Some of these thoughts I was going to share in a planned presentation at Interop ITX in May 2019, but since I’m unable to speak at the conference this year due to schedule conflicts (my son’s graduation from college and a major anniversary trip for me and Crystal), I figured now was as good a time as any, especially given the timing of Sridharan’s post. Also, a lot of these thoughts stem from a discussion with a colleague at work, which in turn led to this Full Stack Journey podcast on practical infrastructure as code.

Anyway, let me get back to Sridharan’s post. One of the things that jumped out to me right away was Sridharan’s proposed hierarchy of needs for code:

Sridharan's hierarchy of needs for code

As you can see in the image (full credit for which belongs to Sridharan, as far as I know), making code understandable lies at the bottom of the hierarchy of needs, meaning it is the most basic need of all. Until this need is satisfied, you can’t move on to the other needs. Sridharan puts it this way:

Optimizing for understandability can result in optimizing for everything else on the hierarchy depicted above.

Many readers have probably heard of the DRY principle when it comes to writing code. (DRY stands for Don’t Repeat Yourself.) In many of the examples of infrastructure as code that I see online, the authors of these examples tend to use control structures such as Terraform’s count construct when creating multiple infrastructure objects. I’ll use some code that I wrote as an example: consider the use of a module to create a group of AWS instances as illustrated here. Yes, there is very little repetition in this code. The code is modular and re-usable. But is it understandable? Have I optimized for understandability, and (by extension) all the other needs listed in Sridharan’s hierarchy of needs for code?

Consider this as well: have I really violated the DRY principle if I were to explicitly spell out, with proper parameterization, the creation of each infrastructure object instead of using a count control structure or a module as a layer of abstraction? Is it not still true that there remains only “a single, unambiguous, authoritative representation” of each infrastructure object?

It seems to me that the latter approach—explicitly spelling out the creation of infrastructure objects in your infrastructure as code—may be a bit more verbose, but it is eminently more understandable and does not violate the DRY principle. It may not be as elegant, but as individuals creating infrastructure as code artifacts, should we be optimizing for elegance, or optimizing for understandability?
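To make the contrast concrete, here’s a sketch of both styles; the resource names and values are invented for illustration:

```hcl
# Looping style: compact, but the reader must mentally unroll the count
resource "aws_instance" "node" {
  count         = 3
  ami           = var.ami_id
  instance_type = "t3.medium"

  tags = {
    Name = "node-${count.index}"
  }
}

# Explicit style: more verbose, but each instance still has exactly one
# authoritative representation in the code
resource "aws_instance" "node_0" {
  ami           = var.ami_id
  instance_type = "t3.medium"

  tags = {
    Name = "node-0"
  }
}
```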

Sridharan also talks about being explicit:

…it is worth reiterating this again that implicit assumptions and dependencies are one of the worst offenders when it comes to contributing to the obscurity of code.

Again, it seems to me that—for infrastructure as code especially—being explicit about the creation of infrastructure objects not only contributes to greater understandability, but also helps eliminate implicit assumptions and dependencies. Instead of using a loop or control structure to manage the creation of multiple objects, spell out the creation of those objects explicitly. It may seem like a violation of the DRY principle to have three (nearly) identical snippets of code creating three (nearly) identical compute instances, but applying the DRY principle here means ensuring that each instance is authoritatively represented in the code only once, not that we are minimizing lines of code.

“Now wait,” you say. “It’s not my fault if someone can’t read my Terraform code. They need to learn more about Terraform, and then they’ll better understand how the code works.”

Well, Sridharan talks about that as well in a discussion of properly identifying the target audience of your artifacts:

In general, when identifying the target audience and deciding what narrative they need to be exposed to in order to allow for them to get up and running quickly, it becomes necessary to consider the audience’s background, level of domain expertise and experience.

Sridharan goes on to point out that in situations where both novices and veterans may be present in the target audience, the experience of the novice is key to determining the understandability of the code. So, if we are optimizing for understandability, can we afford to take a “hands off” approach to maintenance of the code by our successors? Can we be guaranteed that a successor tasked with maintaining our code will have the same level of knowledge and experience we have?

I’ll stop here; there are more good points that Sridharan makes, but for now this post suffices to capture most of the thinking generated by the article when it comes to infrastructure as code. After I’ve had some time to continue to parse Sridharan’s article, I may come back with some additional thoughts. In the meantime, feel free to engage with me on Twitter if you have some thoughts or perspectives you’d like to share on this matter.

Recent Posts

The Linux Migration: December 2018 Progress Report

In December 2016, I kicked off a migration from macOS to Linux as my primary laptop OS. Throughout 2017, I chronicled my progress and challenges along the way; links to all those posts are found here. Although I stopped the migration in August 2017, I restarted it in April 2018 when I left VMware to join Heptio. In this post, I’d like to recap where things stand as of December 2018, after 8 months of full-time use of Linux as my primary laptop OS.


Looking Back: 2018 Project Report Card

Over the last five years or so, I’ve shared with my readers an annual list of projects along with—at the year’s end—a “project report card” on how I fared against the projects I’d set for myself. (For example, here’s my project report card for 2017.) Following that same pattern, then, here is my project report card for 2018.


The Linux Migration Series

In early 2017 I kicked off an effort to start using Linux as my primary desktop OS, and I blogged about the journey. That particular effort ended in late October 2017. I restarted the migration in April 2018 (when I left VMware to join Heptio), and since that time I’ve been using Linux (Fedora, specifically) full-time. However, I thought it might be helpful to collect the articles I wrote about the experience together for easy reference. Without further ado, here they are.


Technology Short Take 108

Welcome to Technology Short Take #108! This will be the last Technology Short Take of 2018, so here’s hoping I can provide something useful for you. Enjoy!


Running Fedora on my Mac Pro

I’ve been working on migrating off macOS for a couple of years (10+ years on a single OS isn’t undone quickly or easily). I won’t go into all the gory details here; see this post for some background and then see this update from last October that summarized my previous efforts to migrate to Linux (Fedora, specifically) as my primary desktop operating system. (What I haven’t blogged about is the success I had switching to Fedora full-time when I joined Heptio.) I took another big step forward in my efforts this past week, when I rebuilt my 2011-era Mac Pro workstation to run Fedora.


KubeCon 2018 Day 2 Keynote

This is a liveblog of the day 2 (Wednesday) keynotes at KubeCon/CloudNativeCon 2018 in Seattle, WA. For additional KubeCon 2018 coverage, check out other articles tagged KubeCon2018.


Liveblog: Hardening Kubernetes Setups

This is a liveblog of the KubeCon NA 2018 session titled “Hardening Kubernetes Setup: War Stories from the Trenches of Production.” The speaker is Puja Abbassi (@puja108 on Twitter) from Giant Swarm. It’s a pretty popular session, held in one of the larger ballrooms up on level 6 of the convention center, and nearly every seat was full.


Liveblog: Linkerd 2.0, Now with Extra Prometheus

This is a liveblog of the KubeCon NA 2018 session titled “Linkerd 2.0, Now with Extra Prometheus.” The speakers are Frederic Branczyk from Red Hat and Andrew Seigner with Buoyant.


KubeCon 2018 Day 1 Keynote

This is a liveblog from the day 1 (Tuesday, December 11) keynote of KubeCon/CloudNativeCon 2018 in Seattle, WA. This will be my first (and last!) KubeCon as a Heptio employee, and I’m looking forward to the event.


Technology Short Take 107

Welcome to Technology Short Take #107! In response to my request for feedback in the last Technology Short Take, a few readers responded in favor of a more regular publication schedule even if that means the articles are shorter in length. Thus, this Tech Short Take may be a bit shorter than usual, but hopefully you’ll still find something useful.


Supercharging my CLI

I spend a lot of time in the terminal. I can’t really explain why; for many things it just feels faster and more comfortable to do them via the command line interface (CLI) instead of via a graphical point-and-click interface. (I’m not totally against GUIs; for some tasks they’re far easier.) As a result, when I find tools that make my CLI experience faster/easier/more powerful, that’s a big boon. Over the last few months, I’ve added some tools to my Fedora laptop that have really added some power and flexibility to my CLI environment. In this post, I want to share some details on these tools and how I’m using them.


Technology Short Take 106

Welcome to Technology Short Take #106! It’s been quite a while (over a month) since the last Tech Short Take, as this one kept getting pushed back. Sorry about that, folks! Hopefully I’ve still managed to find useful and helpful links to include below. Enjoy!


Spousetivities at DockerCon EU 18

DockerCon EU 18 is set to kick off in early December (December 3-5, to be precise!) in Barcelona, Spain. Thanks to Docker’s commitment to attendee families—something for which I have and will continue to commend them—DockerCon will offer both childcare (as they have in years past) and spouse/partner activities via Spousetivities. Let me just say: Spousetivities in Barcelona rocks. Crystal lines up a great set of activities that really cannot be beat.


More on Setting up etcd with Kubeadm

A while ago I wrote about using kubeadm to bootstrap an etcd cluster with TLS. In that post, I talked about one way to establish a secure etcd cluster using kubeadm and running etcd as systemd units. In this post, I want to focus on a slightly different approach: running etcd as static pods. The information in this post is intended to build upon the information already available in the Kubernetes official documentation, not serve as a replacement.


Validating RAML Files Using Docker

Back in July of this year I introduced Polyglot, a project whose only purpose is to provide a means for me to learn more about software development and programming (areas where I am sorely lacking real knowledge). In the limited spare time I’ve had to work on Polyglot in the ensuing months, I’ve been building out an API specification using RAML, and in this post I’ll share how I use Docker and a Docker image to validate my RAML files.


Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!