Scott's Weblog The weblog of an IT pro focusing on cloud computing, Kubernetes, Linux, containers, and networking

Creating an AWS ELB using Pulumi and Go

In case you hadn’t noticed, I’ve been on a bit of a kick with Pulumi and Go recently. There are two reasons for this. First, I have a number of “learning projects” (things that I decide I’d like to try or test) that would benefit greatly from the use of infrastructure as code. Second, I’ve been working on getting more familiar with Go. The idea of combining both those reasons by using Pulumi with Go seemed natural. Unfortunately, examples of using Pulumi with Go seem to be more limited than examples of using Pulumi with other languages, so in this post I’d like to share how to create an AWS ELB using Pulumi and Go.

Here’s the example code:

lb, err := elb.NewLoadBalancer(ctx, "elb", &elb.LoadBalancerArgs{
	NamePrefix:             pulumi.String(baseName),
	CrossZoneLoadBalancing: pulumi.Bool(true),
	AvailabilityZones:      pulumi.StringArray(azNames),
	Instances:              pulumi.StringArray(cpNodeIds),
	SecurityGroups:         pulumi.StringArray{elbSecGrp.ID()},
	HealthCheck: &elb.LoadBalancerHealthCheckArgs{
		HealthyThreshold:   pulumi.Int(3),
		Interval:           pulumi.Int(30),
		Target:             pulumi.String("SSL:6443"),
		UnhealthyThreshold: pulumi.Int(3),
		Timeout:            pulumi.Int(15),
	},
	Listeners: &elb.LoadBalancerListenerArray{
		&elb.LoadBalancerListenerArgs{
			InstancePort:     pulumi.Int(6443),
			InstanceProtocol: pulumi.String("TCP"),
			LbPort:           pulumi.Int(6443),
			LbProtocol:       pulumi.String("TCP"),
		},
	},
	Tags: pulumi.StringMap{
		"Name": pulumi.String(fmt.Sprintf("cp-elb-%s", baseName)),
		k8sTag: pulumi.String("shared"),
	},
})

You can probably infer from the code above that this example creates an ELB that listens on TCP port 6443 and forwards to instances on TCP 6443 (and is therefore most likely a load balancer for the control plane of a Kubernetes cluster). Not shown above is the error-handling code that would check the returned err value.
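That check is a standard Go pattern; inside the function passed to pulumi.Run, a minimal sketch would look like this:

```go
// Halt the Pulumi program if the ELB could not be created; returning
// the error from the function passed to pulumi.Run surfaces it to the CLI.
if err != nil {
	return err
}
```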

My use of pulumi.StringArray for the “AvailabilityZones” and “Instances” parameters is one way to do it; both azNames and cpNodeIds are slices of type []pulumi.StringInput. I took the approach above since I’m collecting information about the various AZs in the azNames slice using code described here, and I gathered the instance IDs for a group of instances in the cpNodeIds slice.
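As a sketch of how such a slice might be built (assuming the pulumi-aws SDK’s aws.GetAvailabilityZones function; details may differ from the code linked above):

```go
// Look up the AZs in the current region and build a []pulumi.StringInput
// suitable for the AvailabilityZones parameter shown earlier.
azs, err := aws.GetAvailabilityZones(ctx, nil)
if err != nil {
	return err
}
var azNames []pulumi.StringInput
for _, name := range azs.Names {
	azNames = append(azNames, pulumi.String(name))
}
```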

You could also do something like this, if you wanted to explicitly specify values:

AvailabilityZones: pulumi.StringArray{pulumi.String("us-east-1a")}
Instances:         pulumi.StringArray{pulumi.String("i-01234564789")}

Note that you should probably use the “Subnets” argument instead of the “AvailabilityZones” argument (you can only use one or the other) if you need to attach the ELB to subnets in a particular VPC.
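For instance, assuming subnet resources created elsewhere in the same program, the relevant argument would look something like this (replacing AvailabilityZones entirely; pubSubnet1 and pubSubnet2 are hypothetical names):

```go
// Attach the ELB to specific VPC subnets instead of AZs; pubSubnet1 and
// pubSubnet2 are hypothetical ec2.Subnet resources created elsewhere.
Subnets: pulumi.StringArray{pubSubnet1.ID(), pubSubnet2.ID()},
```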

Getting the syntax for the health check and the listeners was a bit challenging until I reviewed the package documentation for each (see here for the health check arguments and here for the listener arguments). I quickly realized that it followed a pattern similar to the ingress and egress rules for an AWS security group (as outlined in this post).

I hope this example helps. Although I’m not a Go expert (nor am I a Pulumi expert), if you have any questions feel free to reach out to me on Twitter. I also hang out in the Pulumi community Slack, in case you’d like to contact me there.

Review: Anker PowerExpand Elite Thunderbolt 3 Dock

Over the last couple of weeks or so, I’ve been using my 2017 MacBook Pro (running macOS “Mojave” 10.14.6) more frequently as my daily driver/primary workstation. Along with it, I’ve been using the Anker PowerExpand Elite 13-in-1 Thunderbolt 3 Dock. In this post, I’d like to share my experience with this dock and provide a quick review of the Anker PowerExpand Elite.

Note that I’m posting this as a customer of Anker. I paid for the PowerExpand Elite out of my own pocket, and haven’t received any compensation of any kind from anyone in return for my review. This is just me sharing my experience in the event it will help others.

First Impressions

The dock is both smaller than I expected (it measures 5 inches by 3.5 inches by 1.5 inches) and yet heavier than I expected. It feels solid and well-built. It comes with a (rather large) power brick and a Thunderbolt 3 cable to connect to the MacBook Pro. Setup was insanely easy; plug it in, connect it to the laptop, and you’re off to the races. (I did need to reboot my MacBook Pro for macOS to recognize the network interface in the dock.)

More In-Depth Impressions

Connectivity

I’m able to connect all the peripherals I need easily:

  • My LG 34” ultrawide monitor (via HDMI)
  • My podcasting microphone (via USB)
  • My Logitech C920 webcam (via USB)
  • My external Bose speakers (via analog audio)
  • My “conference call” headset (via USB)

I still have two more USB-C connectors (one of which is a Power Delivery port for faster charging of devices), another USB-A connector, and a Thunderbolt 3 connector unused, as well as the SD/MMC card reader slots.

Reliability

In researching Thunderbolt 3-based docks, I saw a fair number of reviews mentioning issues with the docks “losing” the connection. I can honestly say that I haven’t had that happen a single time over the time I’ve been using the PowerExpand Elite. Now, to be fair, my configuration is fairly static—I’m not constantly plugging and unplugging peripherals. If that’s you, then your experience with the PowerExpand Elite may be different.

Other Notes

The PowerExpand Elite does generate a fair amount of heat. It’s definitely warm to the touch, so I’d be sure to keep it somewhere with sufficient ventilation. I keep mine behind the Rain Design mStand that props up my MacBook Pro when it is docked. I’ve also had it placed under my monitor (which is on an adjustable monitor arm), where the ports are a bit more accessible. In both cases, it has/had plenty of ventilation to help keep the dock from getting too hot.

Summary

In summary, I’ve been pretty pleased with the Anker PowerExpand Elite. It works well, required very little effort to set up, and supports all the peripherals I have/need with room for a few more.

If you have any questions, feel free to contact me on Twitter and I’ll do my best to answer them. Thanks!

Technology Short Take 129

Welcome to Technology Short Take #129, where I’ve collected a bunch of links and references to technology-centric resources around the Internet. This collection is (mostly) data center- and cloud-focused, and hopefully I’ve managed to curate a list that has some useful information for readers. Sorry this got published so late; it was supposed to go live this morning!

Note there is a slight format change debuting in this Tech Short Take. Moving forward, I won’t include sections where I have no content to share, and I’ll add sections for content that may not typically appear. This will make the list of sections a bit more dynamic between Tech Short Takes. Let me know if you like this new approach—feel free to contact me on Twitter and provide your feedback.

Now, on to the good stuff!

Networking

  • Chip Zoller takes a look at Antrea, a new Kubernetes CNI (Container Networking Interface) plugin that leverages Open vSwitch (OVS).
  • Ivan Pepelnjak has a fantastic article on adapting network design to accommodate automation. This is a great read overall, but one sentence in particular really caught my eye: “30 years ago I was able to write a complete multitasking operating system in Z80 assembly code. I can no longer do that, and nobody is willing to pay me for that skill set. The world has moved on, and so did I.” Great quote!
  • Jon Langemak is blogging again, and he jumps back into the “blogging saddle” with a post on working with tc on Linux systems.

Servers/Hardware

  • Since Apple’s announcement at WWDC 2020 about the transition to ARM-based CPUs, there’s been a lot of analysis and discussion of what this means. This article captures a lot of the same thoughts I’ve been having, and it’s why I wonder whether Apple’s foothold in the enterprise will survive the CPU transition.

Security

  • SecurityWeek discusses a recent bypass of some of macOS’ privacy protections.
  • With the recent discovery that many iOS apps are “snooping” on the clipboard (as exposed by the iOS 14 beta), some security professionals are advocating that users turn off Handoff, a mechanism that—among other things—provides a shared clipboard between macOS and iOS devices. This article by Quincy Larson provides both some background for the recommendation and instructions on how to turn off Handoff.

Cloud Computing/Cloud Management

Operating Systems/Applications

  • Team colleague Matt Bagnara writes about NixOS and Enlightenment after a short test drive.
  • Kevin Campusano discusses Linux development in Windows 10 with Docker and WSL 2. Although I am not a Windows fan (I’ve been a macOS and/or Linux user since 2003, before Apple switched to Intel processors), I must say that the work that Microsoft is doing with WSL 2 appears—from the perspective of an outside user—to be pretty impressive.
  • Alex Ellis takes a look at building containers without Docker.
  • Former colleague (he recently moved into a new role) Jamie Duncan talks about using buildx and container manifests to build container images for multiple CPU architectures in this blog post. As Jamie points out, this sort of workflow could be useful when Apple moves to ARM-based CPUs (as I mentioned above, I’m not convinced that Apple’s foothold in the enterprise will survive the architecture transition).
  • This is an older post, but may still be useful: here’s how to hide mounted volumes from showing up in macOS Finder and the desktop.
  • I’ve written about jq before. If you’re interested in mastering this very useful tool, this may come in handy.
  • Hans Kruse writes about “throw-away WSL2 environments.”
  • This article opened my eyes to some interesting possibilities on macOS. I’ve already added Karabiner Elements and Hammerspoon to my system, and I’m exploring how best to leverage them in my daily workflow.

Programming

Recognizing that many formerly infrastructure-focused folks are being pulled into software development and programming, this is another section that will sometimes make its way into my Tech Short Takes.

  • I’ve been working with Go a fair amount over the last few weeks, and one of the things I didn’t understand (and perhaps still don’t fully grasp) is when the appropriate time is to use a pointer. This article by Dylan Meeus helps provide some guidelines.

Virtualization

Serverless

I am by no means a serverless expert (not even close!), but I have been finding an increasing number of serverless-related articles popping up in my timelines, RSS feeds, etc. As I mentioned above, this will be one of those sections that will appear from time to time as I find content.

Career/Soft Skills

  • Snir David writes about how written communication is a remote work superpower. The “TL;DR” from his article is that asynchronous communication is better suited for remote work than instant messaging/chat and audio/video calls. This echoes a sentiment that I’ve had for quite a while (and that I believe I’ve shared here before).

OK, that’s enough for this time around! I hope that you have found something useful or educational, and I hope that some of the new types of content that I’ll include moving forward are also useful. Your feedback as a reader is always welcome, so contact me on Twitter and let me know what you think. Thanks!

Working Around Docker Desktop's Outdated Kubernetes Version

As of the time that I published this blog post in early July 2020, Docker Desktop for macOS was at version 2.2.0.4 (for the “stable” channel). That version includes a relatively recent version of the Docker engine (19.03.8, compared to 19.03.12 on my Fedora 31 box), but a quite outdated version of Kubernetes (1.15.5, which isn’t supported by upstream). Now, this may not be a problem for users who only use Kubernetes via Docker Desktop. For me, however, the old version of Kubernetes—specifically the old version of kubectl—causes problems. Here’s how I worked around the old version that Docker Desktop supplies. (Also, see the update at the bottom for some additional details that emerged after this post was originally published.)

First, you’ll note that Docker Desktop automatically symlinks its version of kubectl into your system path at /usr/local/bin. You can verify the version of Docker Desktop’s kubectl by running this command:

/usr/local/bin/kubectl version --client=true

On my macOS 10.14.6-based system, this returned a version of 1.15.5. According to GitHub, v1.15.5 was released in October of 2019. Per the Kubernetes version skew policy, this version of kubectl would work with 1.14, 1.15, and 1.16 clusters. What if I need to work with a 1.17 or 1.18 cluster? Simple—it just won’t work. In my case, I regularly need to work with newer Kubernetes versions, hence the issue with this old bundled version of kubectl.

Unfortunately, you can’t just delete or rename the symlink that Docker Desktop creates; it will simply re-create the symlink the next time it launches.

So what’s the fix? Because /usr/local/bin is typically pretty early in the system search path (use echo $PATH to see), you’ll need to create a symlink earlier in the search path. Here’s one way to do that:

  1. Create a $HOME/bin or $HOME/.local/bin directory (I prefer the latter, since it mimics my Linux system).
  2. Modify your shell startup files to include this new directory in your search path.
  3. Create a symlink to a newer version of kubectl in that directory.
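As a sketch of those steps in shell form (the kubectl source path here is a placeholder—point the symlink at wherever you’ve placed a newer kubectl binary):

```shell
# Create the directory and symlink a newer kubectl into it.
# The source path below is hypothetical; adjust it to your download location.
mkdir -p "$HOME/.local/bin"
ln -sf "$HOME/Downloads/kubectl" "$HOME/.local/bin/kubectl"
```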

The steps above should be pretty straightforward, but I’ll expand just a bit on item #2. I’m using bash as my shell on macOS (again, for similarity across macOS and Linux; I’ve installed the latest bash version via Homebrew), and so I added this snippet to my ~/.bash_profile:

# Set PATH so it includes user's personal bin directories
# if those directories exist
if [ -d "$HOME/.local/bin" ]; then
    PATH="$HOME/.local/bin:$PATH"
fi

if [ -d "$HOME/bin" ]; then
    PATH="$HOME/bin:$PATH"
fi

With this in ~/.bash_profile, if a $HOME/bin or $HOME/.local/bin directory exists, it gets added to the start of the search path—thus placing it before /usr/local/bin, and thus making sure that any executables or symlinks placed there will be found before those in /usr/local/bin. Therefore, when I place a symlink to a newer version of kubectl in one of these directories, it will get found before Docker Desktop’s outdated version. Problem solved! (You can, of course, still access the older Docker Desktop version by using the full path.)
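If you want to sanity-check the precedence without touching a real kubectl binary, here’s a small sketch using a stub executable (the stub is purely illustrative):

```shell
# Place a stub "kubectl" in $HOME/.local/bin and prepend that directory to
# $PATH; command -v should then resolve to the stub rather than any copy
# in /usr/local/bin.
mkdir -p "$HOME/.local/bin"
printf '#!/bin/sh\necho stub-kubectl\n' > "$HOME/.local/bin/kubectl"
chmod +x "$HOME/.local/bin/kubectl"
export PATH="$HOME/.local/bin:$PATH"
command -v kubectl
```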

Folks who are familiar with UNIX/Linux are probably already very familiar with this sort of approach. However, I suspect there are a fair number of my readers who may be using macOS-based systems but aren’t well-versed in the intricacies of manipulating the $PATH environment variable. Hopefully, this article helps those folks. Feel free to contact me on Twitter if you have any questions or feedback on what I’ve shared here.

UPDATE 2020-07-09: Several folks contacted me on Twitter (thank you!) to point out that a slightly newer version of Docker Desktop may be available in the stable channel, although I have not been able to update my installation thus far. The new version, version 2.3.0.3, comes with Kubernetes 1.16.5, and therefore may not be as problematic for folks as the older version I have on my system. Regardless, the workaround remains valid. Thanks to the readers who responded!

Creating an AWS Security Group using Pulumi and Go

In this post, I’m going to share some examples of how to create an AWS security group using Pulumi and Go. I’m sharing these examples because—as of this writing—the Pulumi site does not provide any examples on how this is done using Go. There are examples for the other languages supported by Pulumi, but not for Go. The syntax is, to me at least, somewhat counterintuitive, although I freely admit this could be due to the fact that I am still pretty new to Go and its syntax.

As a framework for providing these examples, I’ll use the scenario that I need to create two different security groups. The first security group will allow SSH traffic from the Internet to designated bastion hosts. The second security group will need to allow SSH from those bastion hosts, as well as allow all traffic between/among members of the security group. Between these two groups, I should be able to show enough examples to cover most of the different use cases you’ll run into.

Although no example was present for Go when I wrote this article, readers may nevertheless find the API reference for the SecurityGroup resource useful.

First, let’s look at the security group to allow SSH traffic to the bastion hosts. Here’s a snippet of Go code that will create the desired group:

sshSecGrp, err := ec2.NewSecurityGroup(ctx, "ssh-sg", &ec2.SecurityGroupArgs{
	Name:        pulumi.String("ssh-sg"),
	VpcId:       vpc.ID(),
	Description: pulumi.String("Allows SSH traffic to bastion hosts"),
	Ingress: ec2.SecurityGroupIngressArray{
		ec2.SecurityGroupIngressArgs{
			Protocol:    pulumi.String("tcp"),
			ToPort:      pulumi.Int(22),
			FromPort:    pulumi.Int(22),
			Description: pulumi.String("Allow inbound TCP 22"),
			CidrBlocks:  pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
	Egress: ec2.SecurityGroupEgressArray{
		ec2.SecurityGroupEgressArgs{
			Protocol:    pulumi.String("-1"),
			ToPort:      pulumi.Int(0),
			FromPort:    pulumi.Int(0),
			Description: pulumi.String("Allow all outbound traffic"),
			CidrBlocks:  pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
})

The tricky part, for me, was the syntax around the use of ec2.SecurityGroupIngressArray followed by ec2.SecurityGroupIngressArgs (although, in retrospect, I shouldn’t have been thrown off by this since it follows the same pattern Pulumi uses elsewhere). The CidrBlocks parameter is an array (hence the use of pulumi.StringArray) with a single entry.

Now let’s look at the second security group. This example will demonstrate multiple ingress rules, the use of the Self parameter, and referencing a separate security group as a source:

nodeSecGrp, err := ec2.NewSecurityGroup(ctx, "node-sg", &ec2.SecurityGroupArgs{
	Name:        pulumi.String("node-sg"),
	VpcId:       vpc.ID(),
	Description: pulumi.String("Allows traffic between and among nodes"),
	Ingress: ec2.SecurityGroupIngressArray{
		ec2.SecurityGroupIngressArgs{
			Protocol:       pulumi.String("tcp"),
			ToPort:         pulumi.Int(22),
			FromPort:       pulumi.Int(22),
			Description:    pulumi.String("Allow TCP 22 from bastion hosts"),
			SecurityGroups: pulumi.StringArray{sshSecGrp.ID()},
		},
		ec2.SecurityGroupIngressArgs{
			Protocol:    pulumi.String("-1"),
			ToPort:      pulumi.Int(0),
			FromPort:    pulumi.Int(0),
			Description: pulumi.String("Allow all from this security group"),
			Self:        pulumi.Bool(true),
		},
	},
	Egress: ec2.SecurityGroupEgressArray{
		ec2.SecurityGroupEgressArgs{
			Protocol:    pulumi.String("-1"),
			ToPort:      pulumi.Int(0),
			FromPort:    pulumi.Int(0),
			Description: pulumi.String("Allow all outbound traffic"),
			CidrBlocks:  pulumi.StringArray{pulumi.String("0.0.0.0/0")},
		},
	},
})

A few things stand out from this second example:

  1. To create multiple ingress rules, simply include an ec2.SecurityGroupIngressArgs entry for each ingress rule in the ec2.SecurityGroupIngressArray. I don’t know why I thought it wouldn’t be that simple, but it is.
  2. Note the use of Self: pulumi.Bool(true); this is what says the source for this ingress rule should be the security group being created.
  3. To reference another security group in an ingress rule, use SecurityGroups: pulumi.StringArray and reference the .ID() of the other security group. The example above uses this to allow SSH from the security group in the first example.
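As with the ELB example, the err returned by each call should be checked; once created, the groups’ IDs can also be exported as stack outputs, along these lines (the output names are illustrative):

```go
// Export the IDs of both security groups as stack outputs so they can be
// referenced from other stacks or displayed by the Pulumi CLI.
ctx.Export("sshSecGrpId", sshSecGrp.ID())
ctx.Export("nodeSecGrpId", nodeSecGrp.ID())
```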

This is all pretty straightforward once you’ve figured it out (isn’t everything?), but without any good examples to help guide the way for new users like myself it can be challenging to get to the point where you’ve figured it out. Hopefully these examples will help in some small way.

If you have any questions, corrections, or comments, please feel free to contact me on Twitter, or hit me up in the Pulumi community Slack.

Recent Posts

Adopting the Default Route Table of an AWS VPC using Pulumi and Go

Up until now, when I used Pulumi to create infrastructure on AWS, my code would create all-new infrastructure: a new VPC, new subnets, new route tables, new Internet gateway, etc. One thing bothered me, though: when I created a new VPC, that new VPC automatically came with a default route table. My code, however, would create a new route table and then explicitly associate the subnets with that new route table. This seemed less than ideal. (What can I say? I’m a stickler for details.) While building a Go-based replacement for my existing TypeScript code, I found a way to resolve this duplication of resources. In this post, I’ll show you how to “adopt” the default route table of an AWS VPC so that you can manage it in your Pulumi code.

Read more...

Getting AWS Availability Zones using Pulumi and Go

I’ve written several different articles on Pulumi (take a look at all articles tagged “Pulumi”), the infrastructure-as-code tool that allows users to define their infrastructure using a general-purpose programming language instead of a domain-specific language (DSL). Thus far, my work with Pulumi has leveraged TypeScript, but moving forward I’m going to start sharing more Pulumi code written using Go. In this post, I’ll share how to use Pulumi and Go to get a list of Availability Zones (AZs) from a particular region in AWS.

Read more...

Fixes for Some Vagrant Issues on Fedora

Yesterday I needed to perform some testing of an updated version of some software that I use. (I was conducting the testing because this upgrade contained some breaking changes, and needed to understand how to mitigate the breaking changes.) So, I broke out Vagrant (with the Libvirt provider) on my Fedora laptop—and promptly ran into a couple issues. Fortunately, these issues were relatively easy to work around, but since the workarounds were non-intuitive I wanted to share them here for the benefit of others.

Read more...

Technology Short Take 128

Welcome to Technology Short Take #128! It looks like I’m settling into a roughly monthly cadence with the Technology Short Takes. This time around, I’ve got a (hopefully) interesting collection of links. The collection seems a tad heavier than normal in the hardware and security sections, probably due to new exploits discovered in Intel’s speculative execution functionality. In any case, here’s what I’ve gathered for you. Enjoy!

Read more...

Using kubectl via an SSH Tunnel

In this post, I’d like to share one way (not the only way!) to use kubectl to access your Kubernetes cluster via an SSH tunnel. In the future, I may explore some other ways (hit me on Twitter if you’re interested). I’m sharing this information because I suspect it is not uncommon for folks deploying Kubernetes on the public cloud to want to deploy them in a way that does not expose them to the Internet. Given that the use of SSH bastion hosts is not uncommon, it seemed reasonable to show how one could use an SSH tunnel to reach a Kubernetes cluster behind an SSH bastion host.

Read more...

Making it Easier to Get Started with Cluster API on AWS

I’ve written a few articles about Cluster API (you can see a list of the articles here), but even though I strive to make my articles easy to understand and easy to follow along, many of those articles make an implicit assumption: that readers are perhaps already somewhat familiar with Linux, Docker, tools like kind, and perhaps even Kubernetes. Today I was thinking, “What about folks who are new to this? What can I do to make it easier?” In this post, I’ll talk about the first idea I had: creating a “bootstrapper” AMI that enables new users to quickly and easily jump into the Cluster API Quick Start.

Read more...

Creating a Multi-AZ NAT Gateway with Pulumi

I recently had a need to test a configuration involving the use of a single NAT Gateway servicing multiple private subnets across multiple availability zones (AZs) within a single VPC. While there are notable caveats with such a design (see the “Caveats” section at the bottom of this article), it could make sense in some use cases. In this post, I’ll show you how I used TypeScript with Pulumi to automate the creation of this design.

Read more...

Review: Magic Mouse 2 and Magic Trackpad 2 on Fedora

I recently purchased a new Apple Magic Mouse 2 and an Apple Magic Trackpad 2—not to use with my MacBook Pro, but to use with my Fedora-powered laptop (a Lenovo 5th generation ThinkPad X1 Carbon; see my review). I know it seems odd to buy Apple accessories for a non-Apple laptop, and in this post I’d like to talk about why I bought these items as well as provide some (relatively early) feedback on how well they work with Fedora.

Read more...

Using Unison Across Linux, macOS, and Windows

I recently wrapped up an instance where I needed to use the Unison file synchronization application across Linux, macOS, and Windows. While Unison is available for all three platforms and does work across (and among) systems running all three operating systems, I did encounter a few interoperability issues while making it work. Here’s some information on these interoperability issues, and how I worked around them. (Hopefully this information will help someone else.)

Read more...

Technology Short Take 127

Welcome to Technology Short Take #127! Let’s see what I’ve managed to collect for you this time around…

Read more...

Technology Short Take 126

Welcome to Technology Short Take #126! I meant to get this published last Friday, but completely forgot. So, I added a couple more links and instead have it ready for you today. I don’t have any links for servers/hardware or security in today’s Short Take, but hopefully there’s enough linked content in the other sections that you’ll still find something useful. Enjoy!

Read more...

Setting up etcd with etcdadm

I’ve written a few different posts on setting up etcd. There’s this one on bootstrapping a TLS-secured etcd cluster with kubeadm, and there’s this one about using kubeadm to run an etcd cluster as static Pods. There’s also this one about using kubeadm to run etcd with containerd. In this article, I’ll provide yet another way of setting up a “best practices” etcd cluster, this time using a tool named etcdadm.

Read more...

Using External Etcd with Cluster API on AWS

If you’ve used Cluster API (CAPI), you may have noticed that workload clusters created by CAPI use, by default, a “stacked master” configuration—that is, the etcd cluster is running co-located on the control plane node(s) alongside the Kubernetes control plane components. This is a very common configuration and is well-suited for most deployments, so it makes perfect sense that this is the default. There may be cases, however, where you’ll want to use a dedicated, external etcd cluster for your Kubernetes clusters. In this post, I’ll show you how to use an external etcd cluster with CAPI on AWS.

Read more...

Using Existing AWS Security Groups with Cluster API

I’ve written before about how to use existing AWS infrastructure with Cluster API (CAPI), and I was recently able to help update the upstream documentation on this topic (the upstream documentation should now be considered the authoritative source). These instructions are perfect for placing a Kubernetes cluster into an existing VPC and associated subnets, but there’s one scenario that they don’t yet address: what if you need your CAPI workload cluster to be able to communicate with other EC2 instances or other AWS services in the same VPC? In this post, I’ll show you the CAPI functionality that makes this possible.

Read more...

Using Paw to Launch an EC2 Instance via API Calls

Last week I wrote a post on using Postman to launch an EC2 instance via API calls. Postman is a cross-platform application, so while my post was centered around Postman on Linux (Ubuntu, specifically) the steps should be very similar—if not exactly the same—when using Postman on other platforms. Users of macOS, however, have another option: a macOS-specific peer to Postman named Paw. In this post, I’ll walk through using Paw to issue API requests to AWS to launch an EC2 instance.

Read more...

Older Posts

Find more posts by browsing the post categories, content tags, or site archives pages. Thanks for visiting!