Scott's Weblog: The weblog of an IT pro specializing in cloud computing, virtualization, and networking, all with an open source view

Technology Short Take 91

Welcome to Technology Short Take 91! It’s been a bit longer than usual since the last Tech Short Take (partly due to the US Thanksgiving holiday, partly due to vacation time, and partly due to business travel), so apologies for that. Still, there’s a great collection of links and articles here for you, so dig in and enjoy.

Networking

  • Amanpreet Singh has a two-part series on Kubernetes networking (part 1, part 2).
  • Anthony Spiteri has a brief look at NSX-T 2.1, which recently launched with support for Pivotal Container Service (PKS) and Pivotal Cloud Foundry, further extending the reach of NSX into new areas.
  • Jon Benedict has a brief article on OVN and its integration into Red Hat Virtualization; if you’re unfamiliar with OVN, it might be worth having a look.
  • sFlow is a networking technology that I find quite interesting, but I never seem to have the time to really dig into it. For example, I was recently browsing the sFlow blog and came across two really neat articles. The first was on RESTful control of Cumulus Linux ACLs (this one isn’t actually sFlow-related); the second was on combining sFlow telemetry and RESTful APIs for visibility and control in campus networks.
  • David Gee’s “network automation engineer persona” content continues; this time he tackles some thoughts on proofs of concept (PoCs).

Servers/Hardware

  • Frank Denneman (with an admittedly vSphere-focused lens) takes a look at the Intel Xeon Scalable Family in a two-part (so far) series. Part 1 covers the CPUs themselves; part 2 discusses the memory subsystem. Both articles are worth reviewing if hardware selection is an important aspect of your role.
  • Kevin Houston provides some details on blade server options for VMware vSAN Ready Nodes.

Security

Cloud Computing/Cloud Management

  • The Cloud Native Computing Foundation (CNCF) and the Kubernetes community introduced the Certified Kubernetes Conformance Program, and the first announcements of certification have started rolling in. First, here’s Google’s announcement of renaming Google Container Engine to Google Kubernetes Engine (making the GKE acronym much more applicable) as a result of its certification. Next, here’s an announcement on the certification of PKS (Pivotal Container Service).
  • Henrik Schmidt writes about the kube-node project, an effort to allow Kubernetes to manage worker nodes in a cluster.
  • Helm is a great way to deploy applications onto (into?) a Kubernetes cluster, but there are some ways you can improve Helm’s security. Check out this article from Matt Butcher on securing Helm.
  • This site is a good collection of “lessons learned from the trenches” on running Kubernetes on AWS in production.
  • I have to be honest: this blog post on using OpenStack Helm to install OpenStack on Kubernetes with Rook sounds like a massive science experiment. That’s a lot of moving pieces!
  • User “sysadmin1138” (I couldn’t find a mapping to a real name, perhaps that’s intentional) has a great write-up on her/his experience with Terraform in production. There’s some great information here for those of you thinking of (or currently) using Terraform to manage production workloads/configurations.

Operating Systems/Applications

  • Michael Crosby outlines multi-client support in containerd.
  • Speaking of containerd, it recently hit 1.0.
  • This is a slightly older post by Alex Ellis on attachable networks, which (as I understand it) enable interoperability between declarative workloads (deployed via docker stack deploy) and imperative workloads (launched via docker run).
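
As a quick sketch of the attachable-network idea (the network name here is hypothetical): you create a swarm-scoped overlay network with the --attachable flag, reference it from your stack, and standalone containers launched with docker run can then join the same network:

# requires swarm mode (docker swarm init)
docker network create --driver overlay --attachable demo-net
# an imperative workload can attach directly to the same network
# that declarative (stack-deployed) services use:
docker run --rm --network demo-net alpine ip addr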

Storage

Virtualization

Career/Soft Skills

  • Pat Bowden discusses the idea of learning styles, and how combining learning styles (or multiple senses) can typically contribute to more successful learning.
  • I also found some useful tidbits on learning over at The Art of Learning project website.

That’s all for now (but I think that should be enough to keep you busy for a little while, at least!). I’ll have another Tech Short Take in 2 weeks, though given the holiday season is nigh upon us it might be a bit light on content. Until then!

Installing the Azure CLI on Fedora 27

This post is a follow-up to a post from earlier this year on manually installing the Azure CLI on Fedora 25. I encourage you to refer back to that post for a bit of background. I’m writing this post because the procedure for manually installing the Azure CLI on Fedora 27 is slightly different than the procedure for Fedora 25.

Here are the steps to install the Azure CLI into a Python virtual environment on Fedora 27. Even though they are almost identical to the Fedora 25 instructions (one additional package is required), I’m including all the information here for the sake of completeness.

  1. Make sure that the “gcc”, “libffi-devel”, “python-devel”, “openssl-devel”, “python-pip”, and “redhat-rpm-config” packages are installed (you can use dnf to take care of this). Some of these packages may already be installed; during my testing with a Fedora 27 Cloud Base Vagrant image, these needed to be installed. (The change from Fedora 25 is the addition of the “redhat-rpm-config” package.)

  2. Install virtualenv either with pip install virtualenv or dnf install python2-virtualenv. I used dnf, but I don’t think the method you use here will have any material effect.

  3. Create a new Python virtual environment with virtualenv azure-cli (feel free to use a different name).

  4. Activate the new virtual environment (typically accomplished by sourcing the azure-cli/bin/activate script; substitute the name you used when creating the virtual environment if you didn’t name it azure-cli).

  5. Install the Azure CLI with pip install azure-cli. Once this command completes, you should be ready to roll.
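
For convenience, here’s the entire sequence as shell commands (assuming you named the virtual environment azure-cli, as above):

# install the build dependencies (redhat-rpm-config is the addition for Fedora 27)
sudo dnf install gcc libffi-devel python-devel openssl-devel python-pip redhat-rpm-config
sudo dnf install python2-virtualenv
# create and activate the virtual environment, then install the CLI
virtualenv azure-cli
source azure-cli/bin/activate
pip install azure-cli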

That’s it!

Using Vagrant with Libvirt on Fedora 27

In this post, I’m going to show you how to use Vagrant with Libvirt via the vagrant-libvirt provider when running on Fedora 27. Both Vagrant and Libvirt are topics I’ve covered more than a few times here on this site, but this is the first time I’ve discussed combining the two projects.

If you’re unfamiliar with Vagrant, I recommend you start first with my quick introduction to Vagrant, after which you can browse all the “Vagrant”-tagged articles on my site for a bit more information. If you’re unfamiliar with Libvirt, you can browse all my “Libvirt”-tagged articles; I don’t have an introductory post for Libvirt.

Background

I first experimented with the Libvirt provider for Vagrant quite some time ago, but at that time I was using the Libvirt provider to communicate with a remote Libvirt daemon (the use case was using Vagrant to create and destroy KVM guest domains via Libvirt on a remote Linux host). I found this setup to be problematic and error-prone, and discarded it after only a short while.

Recently, I revisited using the Libvirt provider for Vagrant on my Fedora laptop (which I rebuilt with Fedora 27). As I mentioned in this post, installing VirtualBox on Fedora isn’t exactly straightforward. Further, what I didn’t mention in that post is that the VirtualBox kernel modules aren’t signed; this means you must turn off Secure Boot in order to run VirtualBox on Fedora. I was loath to turn off Secure Boot, so I thought I’d try the Vagrant+Libvirt combination again—this time using Libvirt to talk to the local Libvirt daemon (which is installed by default on Fedora in order to support the GNOME Boxes application, a GUI virtual machine tool). Hence, this blog post.

Prerequisites

Obviously, you’ll need Vagrant installed; I chose to install Vagrant from the Fedora repositories using dnf install vagrant. At the time of this writing, that installed version 1.9.8 of Vagrant. You’ll also need the Libvirt plugin, which is available via dnf:

dnf install vagrant-libvirt vagrant-libvirt-doc

At the time of writing, this installed version 0.40.0 of the Libvirt plugin, which is the latest version. You could also install the plugin via vagrant plugin install vagrant-libvirt, though I didn’t test this approach. (In theory, it should work fine.)

As with most other providers (the AWS and OpenStack providers being the exceptions), you’ll also need one or more Vagrant boxes formatted for the Libvirt provider. I found a number of Libvirt-formatted boxes on Vagrant Cloud, easily installable via vagrant box add. For the purposes of this post, I’ll use the “fedora/27-cloud-base” Vagrant box with the Libvirt provider.

Finally, because Vagrant is orchestrating Libvirt on the back-end, I also found it helpful to have the Libvirt client tools (like virsh) installed. This lets you see what Vagrant is doing behind the scenes, which can be helpful at times. Just run dnf install libvirt-client.
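
For example, once Vagrant has created a guest domain, a few read-only virsh commands will show you what it did (these are standard Libvirt client commands):

virsh list --all         # guest domains, including those Vagrant created
virsh net-list --all     # Libvirt networks, including any Vagrant created
virsh vol-list default   # volumes in the "default" pool, such as uploaded boxes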

Using Libvirt with Vagrant

Once all the necessary prerequisites are satisfied, you’re ready to start managing Libvirt guest domains (VMs) with Vagrant. For a really quick start:

  1. cd into a directory of your choice
  2. Run vagrant init fedora/27-cloud-base to create a sample Vagrantfile
  3. Boot the VM with vagrant up
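
In shell form, that quick start looks something like this (the directory name is arbitrary; the --provider flag is unnecessary if Libvirt is already your default provider):

mkdir vagrant-fedora && cd vagrant-fedora
vagrant init fedora/27-cloud-base
vagrant up --provider=libvirt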

For more fine-grained control over the VM and its settings, you’ll want to customize the Vagrantfile with some additional settings. Here’s a sample Vagrantfile that shows a few (there are many!) of the ways you could customize the VM Vagrant creates:

Vagrant.configure("2") do |config|
  # Define the Vagrant box to use
  config.vm.box = "fedora/27-cloud-base"

  # Disable automatic box update checking
  config.vm.box_check_update = false

  # Set the VM hostname
  config.vm.hostname = "fedora27"

  # Attach to an additional private network
  config.vm.network "private_network", ip: "192.168.100.101"

  # Modify some provider settings
  config.vm.provider "libvirt" do |lv|
    lv.memory = "1024"
  end # config.vm.provider
end # Vagrant.configure

For a more complete reference, see the GitHub repository for the vagrant-libvirt provider. Note, however, that I did run into a few oddities, particularly around networking. For example, I wasn’t able to create a new private Libvirt network using the libvirt__network_address setting; it always reverted to the default network address. However, using the syntax shown above, I was able to create a new private Libvirt network with the desired network address. I was also able to manually create a new Libvirt network (using virsh net-create) and then attach the VM to that network using the libvirt__network_name setting in the Vagrantfile. Some experimentation may be necessary to get precisely the results you’re seeking.
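
As a rough sketch of that manual approach (the network name and address range here are hypothetical), you’d define the network in XML, create it with virsh, and then reference it from the Vagrantfile:

# define and create a transient Libvirt network
cat > vagrant-net.xml <<EOF
<network>
  <name>vagrant-net</name>
  <ip address="192.168.200.1" netmask="255.255.255.0"/>
</network>
EOF
virsh net-create vagrant-net.xml

In the Vagrantfile, you’d then use config.vm.network "private_network", libvirt__network_name: "vagrant-net" to attach the VM to that network.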

Once you’ve instantiated the VM using vagrant up, the standard Vagrant workflow applies:

  • Use vagrant ssh <name> to log into the VM via SSH.
  • Use vagrant provision to apply any provisioning instructions, such as running a shell script, copying files into the VM, or applying an Ansible playbook.
  • Use vagrant destroy to terminate and delete the VM.

There is one potential “gotcha” of which to be aware: when you use vagrant box remove to remove a Vagrant box and you’ve created at least one VM from that box, then there is an additional step required to fully remove the box. When you run vagrant up with a particular box for the very first time, the Libvirt provider uploads the box into a Libvirt storage pool (the pool named “default”, by default). Running vagrant box remove only removes the files from the ~/.vagrant.d directory, and does not remove any files from the Libvirt storage pool.

To remove the files from the Libvirt storage pool, run virsh pool-edit default to get the filesystem path where the storage pool is found (if no changes have been made, the “default” pool should be located at /var/lib/libvirt/images). Navigate to that directory and remove the appropriate files in order to complete the removal of a particular box (or a specific version of a box).
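
A minimal sketch of that cleanup using virsh alone (the volume name below is only an example; check the vol-list output for the actual name, which is derived from the box name):

virsh pool-edit default    # note the <path> element for the pool
virsh vol-list default     # find the volume(s) belonging to the removed box
virsh vol-delete --pool default fedora-VAGRANTSLASH-27-cloud-base_vagrant_box_image_0.img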

So far—though my testing has been fairly limited—I’m reasonably pleased with the Libvirt provider when running against a local Libvirt daemon. The performance is good, and I haven’t had to “jump through hoops” to make the virtualization provider work (as I did with VirtualBox on Fedora).

If you have any questions or feedback, hit me up on Twitter. Thanks!

AWS re:Invent 2017 Keynote with Andy Jassy

This is a liveblog of the re:Invent 2017 keynote with Andy Jassy, taking place on Wednesday at the Venetian. As fully expected given the long queues and massive crowds, even arriving an hour early to the keynote isn’t soon enough; there’s already a huge crowd gathered to make it into the venue. Fortunately, I did make it in and scored a reasonable seat from which to write this liveblog.

The pre-keynote time is filled with catchy dance music arranged by a live DJ (same live DJ as last year, if I’m not mistaken). There have already been quite a few announcements this year even before today’s keynote: Amazon Sumerian (AR/VR service), new regions and availability zones (AZs), and new bare metal instances, just to name a few of the big ones. There’s been a great deal of speculation regarding what will be announced in today’s keynote, but there’s no doubt there will be a ton of announcements around service enhancements and new services. Rumors are flying about a managed Kubernetes offering; we shall see.

Promptly at 8am, the keynote starts with a brief video, and Andy Jassy, CEO of AWS, takes the stage. Jassy welcomes attendees to the sixth annual conference, and confirms that the attendance at the event is over 43,000 people—wow!

Jassy starts with a quick update on the AWS business:

  • $18B revenue run rate
  • 42% growth rate (if I captured that correctly)
  • Millions of customers with a pretty varied customer base (lots of technology startups, enterprise customers from pretty much every vertical, and public sector users)
  • Thousands of system integrators who’ve built their business on AWS consulting

Jassy reviews the latest “Magic Quadrant,” showing AWS with a strong lead over all other competitors, and shows a study that gives AWS 44% of the public cloud market share (more than all other competitors combined).

Moving out of the business update, Jassy begins to lay the framework for the rest of the keynote. He compares people building technology solutions (“builders”) to musicians, who want the freedom to choose the technology building blocks (the “instruments”) to create the solution (the “song”). According to Jassy, AWS radically changes what’s possible for builders by giving them unprecedented choice and flexibility. To help with the keynote, a band is going to play five different songs, each of which captures some aspect of how AWS enables builders to build incredibly new and powerful solutions.

The first song is “Everything is Everything,” by Lauryn Hill. Jassy explains that “everything is everything” applies to technology because the choice of platform/provider is incredibly important, and builders shouldn’t have to settle for less than everything. AWS has more than any other provider, says Jassy, meaning they have the “everything” that builders need/want, leading him into a lengthy rant (in a good way) outlining the breadth of AWS’ services (including, notably, a mention of VMware Cloud on AWS).

Jassy mentions that the pace of innovation is also continuing to expand, with an expected 1,300+ service announcements over the course of 2017.

At this point, Jassy brings out Mark Okerstrom, President and CEO of Expedia. Okerstrom talks about the technology challenges that a company operating at Expedia’s scale (600M+ site visits monthly, greater than 750 million searches per day) experiences. Expedia has committed to move 80% of mission critical applications to AWS within the next 2-3 years. Why? Resiliency, optimization, and performance, says Okerstrom. Okerstrom wraps up his portion with a quote by Mark Twain (on how travel is fatal to bigotry), and Jassy returns to the stage.

Jassy turns his attention to AWS’ compute offerings. Jassy outlines the range of compute instance types (such as the new M5, H1, and I3m [bare metal] instances), and then moves to talk about containers. He positions ECS (Elastic Container Service) as something that AWS built “back when there was no predominant orchestration system,” and outlines some of the advantages that ECS offers (deep integration with other AWS services, better scale, and service integrations at the container level).

All that being said, Jassy recognizes that Kubernetes has emerged as a leading container orchestration platform, and that customers who want to run Kubernetes on AWS have some complexities to manage. This leads Jassy to announce Amazon Elastic Container Service for Kubernetes (EKS), a managed Kubernetes service running on top of AWS. EKS has a number of features that Jassy outlines:

  • Hybrid cloud compatible
  • Highly available (masters deployed across multiple AZs, for example)
  • Automated upgrades and patches

This gives AWS two different managed container offerings: ECS and EKS. However, Jassy says that customers want more—they want to run containers without having to manage servers and clusters. This leads to an announcement of AWS Fargate, which allows customers to run containers without managing servers, clusters, or instances. Just package your application into a container, upload it to Fargate, and AWS takes care of the rest (says Jassy). Fargate will support ECS immediately, and will support EKS in 2018. (Although at this point it’s unclear exactly what “supporting” ECS or EKS means.)

Next, Jassy moves on to discussing serverless (Functions as a Service, or FaaS). AWS Lambda has already gathered hundreds of thousands of customers. Jassy points out that FaaS really needs to be more than just code execution; you also need event-driven services (like Lambda and Step Functions), lots of event sources (all the various triggers from AWS services), and the ability to execute functions at the edge as well as in the cloud (like Lambda@Edge and Greengrass).

This brings Jassy back to the “everything is everything” mantra, and how the broad range of compute offerings that AWS supplies satisfies customers’ demands for “everything is everything.”

Changing direction slightly, Jassy talks about what “freedom” means to him and to AWS. This leads him back to the house band, who plays “Freedom” by George Michael.

The “freedom” discussion leads Jassy to a discussion about databases, and a number of not-very-subtle attacks against Oracle. Customers want open database engines, and this demand is what led AWS to create Amazon Aurora. Aurora is MySQL- and PostgreSQL-compatible but offers the scale and performance that users demand from commercial databases. Jassy states that Aurora is the fastest-growing service in the history of AWS.

Aurora offers the ability to scale out for reads, but customers wanted scale-out write support. Jassy announces a preview of Aurora Multi-Master, which supports multiple instances of Aurora for both read/write support across multiple AZs (with multi-region support coming in 2018). The preview for single region/multi-master is open today.

Next, Jassy announces Aurora Serverless—on-demand, auto-scaling Amazon Aurora. This service eliminates the need to provision instances, automatically scales up/down, and starts up and shuts down automatically.

However, relational databases aren’t the only solution out there; sometimes a different type of solution is needed. Sometimes a key-value datastore is a better solution, leading Jassy to talk about DynamoDB and ElastiCache (which currently supports Redis and Memcached). To expand the functionality and utility of DynamoDB, Jassy announces DynamoDB Global Tables. DynamoDB Global Tables is the first fully-managed, multi-master, multi-region database. DynamoDB Global Tables enables low-latency reads and writes to locally available tables. It’s generally available today.

Jassy next announces DynamoDB Backup and Restore, to simplify the process of backing up and restoring data from/to DynamoDB databases. This new offering will enable customers to back up hundreds of terabytes of data with no performance interruption or performance impact. This offering is generally available today, with point-in-time restore coming in 2018.

To better enable using data across multiple databases, Jassy announces the launch of Amazon Neptune, a fully-managed graph database. Neptune supports multiple graph models, is fast and scalable, enables greater reliability with multiple replicas across AZs, and is easy to use with support for multiple graph query languages.

This leads to the third song, “Congregation” by the Foo Fighters. Jassy calls out the apparent contradiction in the song about having blind faith but not false hope, and compares that to the conviction that builders have when building out great ideas even when they’re not sure it will work. Getting feedback from customers is one way to help with this, and Jassy says that analytics are the answer here. Naturally, AWS has great analytics, so Jassy talks about the various solutions that AWS has to offer.

In the realm of data lakes, Jassy calls out S3 as the most popular choice for data lakes today, and takes a few minutes to talk about the advantages of S3 (he again refers to the Gartner Magic Quadrant to show S3 in a strong leadership position). S3’s position is further strengthened by ties to things like Amazon Athena, Amazon EMR, Amazon Redshift, Amazon Elasticsearch Service, Amazon Kinesis, Amazon QuickSight, and AWS Glue.

At this point, Jassy brings out Roy Joseph, Managing Director at Goldman Sachs, to talk about how Goldman uses analytics on AWS. Joseph stresses Goldman’s position as a source of innovation; 25% of Goldman’s employees are engineers who have written 1.5B lines of code across more than 7K applications. In order to compete effectively, Joseph says that Goldman Sachs needs strong engineering, risk management, and distribution. Three examples shared by Joseph include Marcus (consumer retail loans), Marquee (access to risk and pricing platform), and Symphony (secure messaging and collaboration; originally internal-only but now seeing growth as an inter-bank platform). So why public cloud? According to Joseph, a greater demand for risk management drives a need for more calculations, which in turn means more compute capacity—and the public cloud was the best way to satisfy that need. That being said, Joseph outlined some concerns that Goldman had to overcome: extending an internally-built management framework and ensuring data privacy. To help ensure data privacy, Goldman worked with AWS to create a “BYOK” (Bring Your Own Key) solution for key management.

Jassy returns to the stage to continue the discussion around analytics. To help customers perform analytics on the correct subset of data that might be stored in S3, Jassy announces S3 Select, the ability to use standard SQL statements to “filter” out or select the correct subset of S3 data. Jassy shares some TPC-DS benchmarks of Presto queries (8 seconds without S3 Select, 1.8 seconds [4.5x faster] with S3 Select).
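
As a hedged sketch of what an S3 Select query might look like from the AWS CLI (the bucket, key, and column references are hypothetical, and the feature was announced as a preview, so the exact interface may change):

aws s3api select-object-content \
    --bucket my-data-bucket \
    --key logs/2017/requests.csv \
    --expression "SELECT s._1, s._2 FROM S3Object s WHERE s._3 = 'error'" \
    --expression-type SQL \
    --input-serialization '{"CSV": {}}' \
    --output-serialization '{"CSV": {}}' \
    results.csv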

Jassy next announces Glacier Select, which allows you to run queries directly against data stored in Amazon Glacier. This is generally available today.

Shifting focus slightly, Jassy takes the conversation toward machine learning, and asks the house band to play another song. This time it’s “Let it Rain” by Eric Clapton, and Jassy says the lyrics of the song reflect the desire of builders for machine learning to be easier to use and embrace than it is right now.

Jassy says that Amazon has been doing machine learning for 20 years, and points to things like Amazon’s personalized recommendations, or Alexa’s natural language understanding, or the pick paths Amazon uses for the robots in the warehouse. This makes AWS well-positioned to make machine learning easier to use and consume.

According to Jassy, there are three layers to machine learning.

The bottom layer is for expert ML practitioners who deeply understand learning models and frameworks, and Jassy reiterates AWS’ support for all the various major frameworks and interfaces customers want to use.

The middle layer is for everyday developers who aren’t experts in ML, but it’s still too complicated for most users. To help with the challenges in this layer, Jassy introduces Amazon SageMaker (which leverages the open source Jupyter project). SageMaker provides built-in, high-performance algorithms, but doesn’t prevent users from bringing their own algorithms and frameworks. SageMaker also greatly simplifies training and tuning, and helps automate the deployment/operation of machine learning in production.

To further help get machine learning into the hands of developers, Jassy announces DeepLens, the world’s first HD video camera with built-in machine learning support. Jassy brings out Dr. Matt Wood to talk more about DeepLens and SageMaker. After talking for a few minutes, Wood does a demo of DeepLens performing album identification and facial expression recognition.

The top layer, according to Jassy, is a set of application services that leverage machine learning. Examples here are Lex, Polly, and Rekognition. Jassy announces Rekognition Video, which is real-time and batch video analysis (like what Rekognition does for photos). To help get video/audio data into AWS, Jassy announces Amazon Kinesis Video Streams. Rekognition Video is deeply integrated with Kinesis Video Streams.

On the language side (as opposed to video), Jassy announces Amazon Transcribe to convert speech into accurate, grammatically correct text (initially available with English and Spanish). In the near future, Transcribe will support multiple speakers and custom dictionaries.

Jassy also announces Amazon Translate, which does real-time language translation as well as batch translation. It will support automatic language detection in the near future.

Next, Jassy announces Amazon Comprehend, a fully-managed natural language processing service. It analyzes information in text and identifies things like entities (people, places, things), key phrases, sentiment, and the language of the content. Comprehend can not only identify information in a single document, but can also be used to perform topic modeling across large numbers of documents.

To talk a bit about how the NFL is using Amazon and machine learning, Jassy brings out Michelle McKenna-Doyle, SVP and CIO of the NFL. McKenna-Doyle shares some details on Next Gen Stats (NGS), which spans AWS services like Lambda, CloudFront, DynamoDB, EC2, S3, EMR, and the Amazon API Gateway (among others). NGS generates 3TB of data for every week of NFL games. McKenna-Doyle also talks briefly about future plans for incorporating machine learning and artificial intelligence into the NFL’s NGS plans (to do things like formation detection, route detection, and key event identification).

As McKenna-Doyle leaves the stage, the house band kicks up again with another song (remember there are five songs, as outlined by Jassy). This one is “The Waiting” by Tom Petty, and Jassy connects the lyrics to IoT and edge devices.

In order to get out of the keynote in a timely fashion, I’m wrapping up the liveblog here (sorry for the abbreviated coverage).

Liveblog: Deep Dive on Amazon Elastic File System

This is a liveblog of the AWS re:Invent 2017 session titled “Deep Dive on Amazon Elastic File System (EFS).” The presenters are Edward Naim and Darryl Osborne, both with AWS. This is my last session of day 2 of re:Invent; thus far, most of my time has been spent in hands-on workshops with only a few breakout sessions today. EFS is a topic I’ve watched, but haven’t had time to really dig into, so I’m looking forward to this session.

Naim kicks off the session by looking at the four phases users go through when they are choosing/adopting a storage solution:

  1. Choosing the right storage solution
  2. Testing and optimizing
  3. Ingest (loading data)
  4. Running it (operating it in production)

Starting with Phase 1, Naim outlines the three main things that people think about. The first item is storage type. The second is features and performance, and the third item is economics (how much does it cost). Diving into each of these items in a bit more detail, Naim talks about file storage, block storage, and object storage, and the characteristics of each of these approaches. Having covered these approaches, Naim returns to file storage (naturally) and talks about why file storage is popular:

  • Works natively with operating systems
  • Provides shared access while providing consistency guarantees and locking functionality
  • Provides a hierarchical namespace

Generally speaking, file storage hits the “sweet spot” between latency and throughput compared to block and object storage.

According to Naim, the key features of EFS are:

  • Simple (easy to use, operate, consume)
  • Elastic (no problem growing to accommodate capacity)
  • Scalable (consistent low latencies, thousands of concurrent connections)
  • Highly available and durable (all file system objects stored in multiple AZs)

Next, Naim shows the typical “customer logo” slide to show how widely adopted EFS is by the AWS customer base and some of the use cases seen. Although the presenter said he wasn’t going to go through all the logos, he spends more than a few minutes going through almost all of them.

With regards to security, EFS offers a number of security-related features. Network access to EFS is controlled via security groups and NACLs. File and directory access is controlled via POSIX permissions; administrative access is managed via IAM. Encryption is also supported, with key storage in KMS.

Naim next goes through some pricing comparisons showing how EFS is much cheaper than DIY storage solutions using EC2 instances and EBS volumes.

EFS completes the “trifecta” of storage solutions that AWS offers, which cover the whole range of storage types (EFS for file, EBS for block, S3 for object).

A new feature announced last week is EFS File Sync, which is designed to get data from on-premises file systems into EFS and to operate up to 5x faster than traditional Linux copy tools. Naim indicates that Osborne will discuss this in more detail later in the session.

Next Naim discusses some architectural aspects of EFS, and how NFS clients on EC2 instances across various AZs might access EFS. EFS is POSIX-compliant and supports NFS v4.0 and v4.1. Of course, Naim points out that you should always test to verify that everything works as expected.

Now Osborne steps up to talk specifically about performance. When creating an EFS instance, you can select either “general purpose” (the default) or “max I/O” (designed for very large scale-out workloads, at the cost of slightly higher latencies). GP may have lower latencies, but has a ceiling of 7K operations/second. For GP file systems, AWS does expose a CloudWatch metric that allows users to see where they fall within the 7K ops/sec limit.

Osborne next compares EFS and EBS PIOPS (Provisioned IOPS). Again, there are trade-offs (there are always trade-offs with technology decisions). As with any file system, throughput is a function of I/O size, meaning that larger I/O sizes yield greater throughput. To really take advantage of EFS, Osborne indicates that parallelism is the key.

Changing gears slightly, Osborne reviews the mount options for using EFS with Linux instances on EC2. Linux kernel 4.0 or higher is recommended, as is NFS v4.1. More details are available in the documentation, according to Osborne.
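
For reference, a typical EFS mount command looks something like this (the file system ID and region are placeholders; the options reflect AWS’s recommended NFS v4.1 settings):

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs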

Overall throughput is tied to file system capacity: sustained throughput is 50 MBps per TB of storage, with bursts up to 100 MBps. At that baseline rate, a 2 TB file system, for example, would sustain roughly 100 MBps.

Osborne shifts focus now to talk about ingest, i.e., getting data into EFS. He reviews a couple different options (connecting on-premises servers to EFS via Direct Connect or using a third-party VPN solution). Both of these options can, according to Osborne, be used not only for migration but also for bursting or backup/disaster recovery. In order to optimize data copy/ingest, parallelism is again the key. Osborne reviews a few standard Linux copy tools (like rsync, cp, fpsync, or mcp). According to Osborne, rsync offers relatively poor performance. fpsync is essentially multi-threaded rsync; mcp is a drop-in replacement for cp developed by NASA; both of these tools offer better performance than their single-threaded counterparts. The best performance comes from combining tools with GNU Parallel.
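
A minimal sketch of the GNU Parallel approach (source and destination paths are hypothetical; -j controls the number of parallel jobs):

# copy a directory tree into an EFS mount with 16 parallel cp processes
cd /data/src && find . -type f | parallel -j 16 cp --parents {} /mnt/efs/dest/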

This discussion of ingest performance and tools brings Osborne back to the topic of EFS File Sync, which is a multi-threaded tool that uses parallelism to maximize throughput, and offers encrypted transfers to EFS for security. EFS File Sync can be used to transfer from on-premises to EFS, between EFS instances in different regions, or from DIY shared storage solutions to EFS.

Next, Osborne shows a recorded video of using EFS File Sync to copy data between two different EFS instances. The demo shows copying roughly 20GB of data in about 4 minutes.

Naturally, you can move objects from Amazon S3 into EFS; this would involve using an EC2 instance that accesses S3 (perhaps via the AWS CLI) and an NFS mount backed by EFS. Osborne recommends maximizing parallelism to get the best possible ingest performance. GNU Parallel comes up here again.
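
A rough sketch of that approach (the bucket name and paths are hypothetical), run from an EC2 instance with the EFS file system mounted:

aws s3 sync s3://my-bucket/dataset/ /mnt/efs/dataset/

Running multiple sync or cp commands against different prefixes in parallel (again, GNU Parallel works here) improves ingest throughput.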

Naim steps up to take over to talk about operations. EFS exposes a number of CloudWatch metrics, and all EFS API calls can be logged to CloudTrail.

Osborne comes back up to talk about some reference architectures using EFS. The first example is a highly-available WordPress architecture, followed by similar architectures for Drupal and Magento. Another example architecture is an EFS backup solution implemented via a CloudFormation template.

Wrapping up the session, Naim reviews some additional resources available via the Amazon web site, and announces a new series of storage-focused training classes that provide more in-depth training on S3, EFS, and EBS. At this point, Naim closes out the session.

Recent Posts

Liveblog: IPv6 in the Cloud - Protocol and Service Overview

This is a liveblog of an AWS re:Invent 2017 breakout session titled “IPv6 in the Cloud: Protocol and Service Overview.” The presenter’s name is Alan Halachmi, who is a Senior Manager of Solutions Architecture at AWS. As with so many of the other breakout sessions and workshops here at re:Invent this year, the queues to get into the session are long and it’s expected that the session will be completely full.


A Sample Makefile for Creating Blog Articles

In October of this year, I published a blog post talking about a sample Makefile for publishing blog articles. That post focused on the use of make and a Makefile for automating the process of publishing a blog post. This post is a companion to that post, and focuses on the use of a Makefile for automating the creation of blog posts.


Installing MultiMarkdown 6 on Fedora 27

Long-time readers are probably aware that I’m a big fan of Markdown. Specifically, I prefer the MultiMarkdown variant that adds some additional extensions beyond “standard” Markdown. As such, I’ve long used Fletcher Penney’s MultiMarkdown processor (the latest version, version 6, is available on GitHub). While Fletcher offers binary builds for Windows and macOS, the Linux binary has to be compiled from source. In this post, I’ll provide the steps I followed to compile a MultiMarkdown binary for Fedora 27.


Using Docker Machine with KVM and Libvirt

Docker Machine is, in my opinion, a useful and underrated tool. I’ve written before about using Docker Machine with various services/providers; for example, see this article on using Docker Machine with AWS, or this article on using Docker Machine with OpenStack. Docker Machine also supports local hypervisors, such as VMware Fusion or VirtualBox. In this post, I’ll show you how to use Docker Machine with KVM and Libvirt on a Linux host (I’m using Fedora 27 as an example).


Happy Thanksgiving 2017

In the US, today (Thursday, November 23) is Thanksgiving. I’d like to take a moment to reflect on the meaning of Thanksgiving.


Installing Older Docker Client Binaries on Fedora

Sometimes there’s a need to have different versions of the Docker client binary available. On Linux this can be a bit challenging because you don’t want to install a “full” Docker package (which would also include the Docker daemon); you only need the binary. In this article, I’ll outline a process I followed to get multiple (older) versions of the Docker client binary on my Fedora 27 laptop.


Installing Postman on Fedora 27

I recently had a need to install the Postman native app on Fedora 27. The Postman site itself only provides a link to the download and a rather generic set of instructions for installing the Postman native app (a link to these instructions for Ubuntu 16.04 is also provided). There were not, however, any directions for Fedora. Hence, I’m posting the steps I took to set up the Postman native app on my Fedora 27 laptop.


Making AWS re:Invent More Family-Friendly

AWS re:Invent is just around the corner, and Spousetivities will be there to help bring a new level of family friendliness to the event. If you’re thinking of bringing a spouse, partner, or significant other with you to Las Vegas, I’d encourage you to strongly consider getting him or her involved in Spousetivities.


Technology Short Take 90

Welcome to Technology Short Take 90! This post is a bit shorter than most, as I’ve been on the road quite a bit recently. Nevertheless, there’s hopefully something here you’ll find useful.


How to Tag Docker Images with Git Commit Information

I’ve recently been working on a very simple Flask application that can be used as a demo application in containerized environments (here’s the GitHub repo). It’s nothing special, but it’s been useful for me as a learning exercise—both from a Docker image creation perspective as well as getting some additional Python knowledge. Along the way, I wanted to be able to track versions of the Docker image (and the Dockerfile used to create those images), and link those versions back to specific Git commits in the source repository. In this article, I’ll share a way I’ve found to tag Docker images with Git commit information.


Deep Dive into Container Images in Kolla

This is a liveblog of my last session at the Sydney OpenStack Summit. The session title is “OpenStack images that fit your imagination: deep dive into container images in Kolla.” The presenters are Vikram Hosakote and Rich Wellum, from Cisco and Lenovo, respectively.


Carrier-Grade SDN-Based OpenStack Networking Solution

This session was titled “Carrier-Grade SDN Based OpenStack Networking Solution,” led by Daniel Park and Sangho Shin. Both Park and Shin are from SK Telecom (SKT), and (based on the description) this session is a follow-up to a session from the Boston summit where SK Telecom talked about an SDN-based networking solution they’d developed and released for use in their own 5G-based network.


Can OpenStack Beat AWS in Price

This is a liveblog of the session titled “Can OpenStack Beat AWS in Price: The Trilogy”. The presenters are Rico Lin, Bruno Lago, and Jean-Daniel Bonnetot. The “trilogy” refers to the third iteration of this presentation; each time the comparison has been done in a different geographical region (first in Europe, then in North America, and finally here in Asia-Pacific).


Lessons Learnt from Running a Container-Native Cloud

This is a liveblog of the session titled “Lessons Learnt from Running a Container-Native Cloud,” led by Xu Wang. Wang is the CTO and co-founder of Hyper.sh, a company that has been working on leveraging hypervisor isolation for containers. This session claims to discuss some lessons learned from running a cloud leveraging this sort of technology.


Make Your Application Serverless

This is a liveblog from the last day of the OpenStack Summit in Sydney, Australia. The title of the session is “Make Your Application Serverless,” and it discusses Qinling, a project for serverless (Functions-as-a-Service, or FaaS) architectures/applications on OpenStack. The presenters for the session are Lingxian Kong and Feilong Wang from Catalyst Cloud.

