Scalably Managing ExoGENI nodes using AWS tools

Introduction and Overview

In this post we will explore how to use Amazon AWS tools to scalably manage the infrastructure of your slices. These tools allow you to manage hybrid infrastructures consisting of EC2 instances and external nodes, in our case created in the ExoGENI testbed. They allow you to perform remote command execution on multiple nodes at once, inventory the state of the nodes (their software, network and other configurations) and perform custom tasks, all without differentiating between EC2 nodes and ExoGENI nodes. In fact, it is possible to use these tools on ExoGENI nodes alone, without having any EC2 nodes involved. The management tasks can be done from the EC2 web console, using the AWS CLI tools, or programmatically, with libraries like Boto. This tutorial concentrates on the web console and command line tools only.

In this tutorial we will be using several AWS services: EC2, CloudFormation (Infrastructure as Code), IAM (Identity and Access Management) and SSM (Simple Systems Manager). A disclaimer: the IAM, SSM and CloudFormation services are included in EC2 pricing; you pay for the EC2 instances you start, S3 storage space and sometimes traffic. That means that if you are starting only ExoGENI instances, there should be no costs as long as you do not use S3 buckets.

Prerequisites:

  • An AWS account with permissions to use the EC2, CloudFormation, IAM and SSM services
  • The AWS CLI installed and configured with your credentials
  • An ExoGENI account and the Flukes GUI for creating slices

The tutorial follows this workflow:

  1. Start up an AWS stack using CloudFormation. We will create a small ‘slice’ inside AWS with 3 instances
  2. We will demonstrate the use of the SSM Run Command on those instances
  3. We will start an ExoGENI slice, whose instances automatically join SSM
  4. We will demonstrate how to manage EC2 and ExoGENI instances together using the same tools

Starting EC2 stack using CloudFormation

We begin by downloading a CloudFormation template that starts the EC2 side of our experiment. Notice that it isn’t necessary to have EC2 instances to use the Run Command; however, in this tutorial we show both EC2 instances and ExoGENI instances.

In our case the stack consists of three hosts, one bastion host with a public IP address and two other hosts in different subnets that communicate with the outside world using a NAT gateway.

[Screenshot: topology of the EC2 stack]

The stack can be started using the following command:

$ aws cloudformation create-stack --stack-name GENIStack --template-body file:///path/to/downloaded/geni-vpc.template --parameters ParameterKey=InstanceType,ParameterValue=t2.small ParameterKey=KeyName,ParameterValue=<Name of your SSH Key Pair> --capabilities CAPABILITY_IAM

There are several important parameters in this command we should discuss:

  • --stack-name GENIStack is the name you are giving this stack. All EC2 instances in the stack will be tagged with this name and you will be able to invoke remote commands on them based on this name
  • --template-body must be a URL pointing at the template you are starting. In this case it is a file on the local filesystem; to use a template stored in S3, pass --template-url instead
  • --parameters ParameterKey=InstanceType,ParameterValue=t2.small ParameterKey=KeyName,ParameterValue=<Name of your SSH Key Pair> specifies several parameters as Key/Value tuples. In this case we specify that our EC2 instances will be t2.small and we must name the SSH key to be used with them. The name should be visible as ‘Key Pair Name’ in the EC2 console under Network & Security/Key Pairs; see the CLI sketch after this list
  • --capabilities CAPABILITY_IAM is required because this template creates roles inside AWS IAM; such capabilities must be declared explicitly
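
If you do not remember the exact name of your key pair, it can also be listed from the CLI; the --query expression below is just one convenient way to print only the names:

$ aws ec2 describe-key-pairs --query 'KeyPairs[].KeyName' --output text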

While this command is executing you can check the progress of the stack either via the AWS CloudFormation web console, or using the CLI:

$ aws cloudformation describe-stacks
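
If you prefer the CLI to block until creation finishes instead of polling, there is also a wait subcommand:

$ aws cloudformation wait stack-create-complete --stack-name GENIStack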

While the stack is being created, let’s take a look at several elements of the stack template file that are critical to this tutorial. The template is a JSON file.

Each instance in this stack is launched with the RunCmdInstanceProfile instance profile, associated with the RunCmdRole IAM role, which grants instances in the stack the limited privileges needed to use the SSM service. This is the equivalent of ‘speaks-for’ in GENI:

 "RunCmdInstanceProfile": {
    "Type": "AWS::IAM::InstanceProfile",
    "Properties": {
      "Path": "/",
      "Roles": [ { "Ref": "RunCmdRole" } ]
    }
 },
 "RunCmdRole": {
    "Type": "AWS::IAM::Role",
    "Properties": {
       "AssumeRolePolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
          {
             "Sid": "",
             "Effect": "Allow",
             "Principal": {
             "Service": "ec2.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
       }
       ]
    },
    "Path": "/"
    }
 }

Each instance assumes the RunCmdInstanceProfile, and the role uses a policy, RunCmdPolicies, that allows SSM operations (policy omitted for brevity).

Another important aspect is the startup script used by each instance, which downloads the latest SSM agent, installs it, and restarts it on startup:

 "IamInstanceProfile": {
    "Ref": "RunCmdInstanceProfile"
    },
 "UserData": { "Fn::Base64" : { "Fn::Join" : ["", [
    "#!/bin/bash -xe\n",
    "cd /tmp\n",
    "echo ", {"Ref": "AWS::Region"}, " > region.txt\n",
    "curl https://amazon-ssm-",{"Ref": "AWS::Region"}, ".s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o amazon-ssm-agent.rpm\n",
    "sudo yum install -y amazon-ssm-agent.rpm\n",
    "sudo restart amazon-ssm-agent\n"
 ]]}}
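
For clarity, after CloudFormation expands the Fn::Join and Fn::Base64 functions, the user-data script each instance executes looks roughly like this (assuming, for illustration, that the stack runs in us-east-1):

#!/bin/bash -xe
cd /tmp
echo us-east-1 > region.txt
curl https://amazon-ssm-us-east-1.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o amazon-ssm-agent.rpm
sudo yum install -y amazon-ssm-agent.rpm
sudo restart amazon-ssm-agent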

Once the stack completes, you should see something like this (adjusted for your parameters):

$ aws cloudformation describe-stacks
STACKS 2017-01-05T16:05:42.658Z False arn:aws:cloudformation:us-east-1:621231197516:stack/GENIStack/cf1a7e80-d360-11e6-ae3f-503f23fb559a GENIStack CREATE_COMPLETE
CAPABILITIES CAPABILITY_IAM
OUTPUTS Primary private IP of host 2 Host2 Private IP 192.168.2.200
OUTPUTS Primary private IP of host 1 Host1 Private IP 192.168.1.26
OUTPUTS Primary public IP of gateway host EIP IP Address 34.196.53.116 on subnet subnet-02b4df2f
PARAMETERS KeyName MyKeys
PARAMETERS InstanceType t2.small
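
If you only need a particular piece of this information, such as the stack outputs, the CLI’s --query flag can extract it:

$ aws cloudformation describe-stacks --stack-name GENIStack --query 'Stacks[0].Outputs' --output table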

And in the CloudFormation console:

[Screenshot: CloudFormation console showing the completed GENIStack]

In the EC2 console, when you go down to ‘Systems Manager Shared Resources’ and click on ‘Managed Instances’, you should see the three EC2 instances belonging to the stack you just created lit up ‘green’:

[Screenshot: Managed Instances list in the EC2 console]

Notice that the console already offers you a way to run commands on them using the ‘Run Command’ button. The Run Command operation is based on a number of pre-existing JSON document templates (SSM Documents), each selected to run a particular type of command. AWS classifies the documents as Windows or Linux compatible.

The full list of currently available documents can be viewed via the EC2 console in Systems Manager Shared Resources/Documents or via the AWS CLI:

$ aws ssm list-documents
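
Before running a document, you can inspect the parameters it accepts by describing it by name:

$ aws ssm describe-document --name "AWS-RunShellScript"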

You can click on the Run Command button and select an ‘SSM Document’ that is a template for the command. In this case we want to run the shell command ‘ifconfig -a’, so select the ‘AWS-RunShellScript’ document and fill out the form (select all instances and in the command space enter ‘ifconfig -a’). Run the command. SSM issues a GUID corresponding to this command and you can inspect the output by clicking on the GUID and looking at the command output for each instance. SSM is asynchronous, so you need to wait for command completion on individual instances to see the output.

AWS keeps the full history of Run Command invocations; previous invocations can be explored in the EC2 console under ‘Systems Manager Services/Run Command’, with commands listed by date and GUID.

[Screenshot: Run Command history in the EC2 console]

We can achieve the same results from the AWS CLI by doing the following:

$ aws ssm send-command --instance-ids i-0b387c665628f5f9b i-02e2a213adfa03bab i-0e6edec45f97ede23 --document-name "AWS-RunShellScript" --comment "IP config" --parameters commands=ifconfig --output text

The above command explicitly names EC2 instances on which the command needs to be executed. Alternatively you can use this form:

$ aws ssm send-command --targets "Key=tag:aws:cloudformation:stack-name,Values=GENIStack" --document-name "AWS-RunShellScript" --comment "IP config" --parameters commands=ifconfig --output text

Notice that in this case we match instances by the name of the CloudFormation stack we gave above.

The output of the command can be examined using:

$ aws ssm list-command-invocations --command-id <guid of the command invocation returned by the previous command> --details
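
When scripting, it is convenient to capture that GUID directly instead of copying it by hand; a minimal sketch using the CLI’s --query flag, reusing the stack-name targeting from above:

$ CMDID=$(aws ssm send-command --targets "Key=tag:aws:cloudformation:stack-name,Values=GENIStack" --document-name "AWS-RunShellScript" --parameters commands=ifconfig --query 'Command.CommandId' --output text)
$ aws ssm list-command-invocations --command-id ${CMDID} --details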

There are other commands in the aws ssm toolset; feel free to explore them (run ‘aws ssm help’).

Starting ExoGENI slice with instances connected to AWS SSM

In this section we will add ExoGENI instances to the list of instances managed via SSM. Before you start the slice you must create a special kind of credential for your instances to be able to talk to AWS SSM.

We begin by creating a new role, which we will call SSMServiceRole, to provide SSM credentials to hybrid (non-EC2) instances. First we must create a JSON trust file that allows the SSM service principal to assume that role (cut and paste the contents and call it SSMService-Trust.json):

{
   "Version": "2012-10-17",
   "Statement": {
      "Effect": "Allow",
      "Principal": {"Service": "ssm.amazonaws.com"},
      "Action": "sts:AssumeRole"
   }
}

Use the file to create the role:

$ aws iam create-role --role-name SSMServiceRole --assume-role-policy-document file://SSMService-Trust.json

Associate a standard (managed) AWS policy AmazonEC2RoleforSSM with this role that allows SSM operations:

$ aws iam attach-role-policy --role-name SSMServiceRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM
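
You can confirm that the policy is now attached to the role:

$ aws iam list-attached-role-policies --role-name SSMServiceRole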

If you were paying attention, you might ask “Didn’t we create a similar role with the template for EC2 instances?” and you’d be right. However, at present it doesn’t appear possible to use the role created as part of the CloudFormation stack outside that stack.

You can inspect existing roles in your account by using AWS IAM web console and clicking on ‘Roles’ or by executing a CLI command:

$ aws iam list-roles

In this case the command should show two roles: the one created by CloudFormation, and the one we created just now.

Now we must create temporary tokens for the SSM agent on your ExoGENI instances to access the SSM service. The key word is ‘temporary’: they have an expiration date past which the instances will not be able to communicate with SSM. Because of that, it is critical that there is only minimal time skew between your ExoGENI instances and AWS (more on that below). Each token is associated with some number of ‘registrations’, i.e. nodes in your slice that can use SSM (default is 1), and an expiration date (default 24 hours). We create a new activation for our slice:

$ aws ssm create-activation --default-instance-name MyXoServers --iam-role SSMServiceRole --registration-limit 10 --expiration-date 2017-01-10T20:30:00.000Z

Examining the parameters above:

  • Default instance name will be a string by which your instances are known in the SSM (each will also be issued a unique instance identifier)
  • We must include the role we defined above in the activation
  • Define the max number of nodes you plan to have in your ExoGENI slice by registration limit
  • Define the expiration date (in this case using UTC)

The command returns two strings: the first an activation code (20 characters), the second a registration ID. Both are needed by the SSM agent in your ExoGENI instances to authenticate to SSM.
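
If you are scripting this step, both values can be captured directly from the CLI output; a sketch using the --query flag (the keys are the field names create-activation returns):

$ aws ssm create-activation --default-instance-name MyXoServers --iam-role SSMServiceRole --registration-limit 10 --expiration-date 2017-01-10T20:30:00.000Z --query '[ActivationCode,ActivationId]' --output text

With the activation code and registration ID in hand, we’re ready to start the ExoGENI slice.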

When starting an ExoGENI slice, you can use any topology on any rack or controller, so long as you include the following post-boot script for each instance you want managed via AWS SSM. Notice this script is for CentOS 6.x images; you may need to adapt it to Debian derivatives and systemd-based RedHat-like distributions.

#!/bin/bash

SSMDIR=/tmp/ssm
AWSREGID="<provide the activation registration guid>"
AWSCODE="<provide the activation code>"
AWSREGION=us-east-1
NTPSERVER=clock1.unc.edu

# NO NEED TO EDIT BELOW FOR CentOS 6.x

# Sync the clock first - SSM rejects agents with significant clock skew
ntpdate ${NTPSERVER} > /dev/null
/etc/init.d/ntpd restart

# Download and install the SSM agent
mkdir ${SSMDIR}
curl https://amazon-ssm-${AWSREGION}.s3.amazonaws.com/latest/linux_amd64/amazon-ssm-agent.rpm -o ${SSMDIR}/amazon-ssm-agent.rpm
yum install -y ${SSMDIR}/amazon-ssm-agent.rpm

# Register the agent with the activation credentials and (re)start it
stop amazon-ssm-agent
amazon-ssm-agent -register -id ${AWSREGID} -code ${AWSCODE} -region ${AWSREGION}
start amazon-ssm-agent

This script downloads the SSM agent on boot, provides it with the credentials for the AWS SSM service acquired in the previous step, and restarts it. Notice the invocation of ntpdate: it is critical for the operation of SSM that the clocks on the instances are reasonably accurate. If you have a significant clock skew, the agent on the instance will fail to connect to SSM.
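
If an instance fails to register, a quick way to check its clock offset without stepping the clock is ntpdate’s query mode (using the NTP server from the script above):

$ ntpdate -q clock1.unc.edu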

Define a slice topology in Flukes and be sure to cut and paste a modified version of the post-boot script above into each node you intend to manage via AWS. Notice that in addition to the code and registration ID, you may need to modify the AWS region, depending on the setting in your AWS account.

You can watch your slice come up in Flukes, but also use the EC2 Systems Manager Shared Resources/Managed Instances console to see managed instances in your slice come up and go green. Note that node names given in Flukes show up in the console as ‘Computer Name’; also note that each ExoGENI instance receives a unique instance ID starting with ‘mi-’. Finally, note that the IP address reported for all instances (EC2 and ExoGENI) is the private address assigned to the management interface eth0.

[Screenshot: EC2 and ExoGENI instances together in the Managed Instances console]

Using Tools to Manage the Hybrid Infrastructure

Now that we have a ‘slice’ of EC2 and an ExoGENI slice that respond to AWS management tools, we can demonstrate some of the capabilities.

First off, just as shown above for EC2, we can issue arbitrary commands to multiple instances in a scalable fashion, but now we can also address our ExoGENI instances by their AWS-issued instance IDs. We can list all managed instances using the AWS CLI:

$ aws ssm describe-instance-information
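
A compact listing of just the IDs, names and reachability of all managed instances (EC2 and ExoGENI alike) can be produced with the --query flag; the field names are those returned by describe-instance-information:

$ aws ssm describe-instance-information --query 'InstanceInformationList[].[InstanceId,ComputerName,PingStatus]' --output table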

Using the instance IDs reported above, we can craft a command to send to all instances:

$ aws ssm send-command --instance-ids <space separated list of instance ids from EC2 and your slice> --document-name "AWS-RunShellScript" --comment "IP config" --parameters commands=ifconfig --output text

You can check on the status of the invocations (whether they completed successfully):

$ aws ssm list-command-invocations --command-id <guid of command id returned by previous command>

If you want to see the output, add --details to the previous command. You can also do

$ aws ssm get-command-invocation --command-id <guid of command id> --instance-id <id of the instance>

to inspect status of individual invocations on nodes.

We can also take inventory (software, network configuration) of the nodes and have it refresh periodically. The inventory is visible in the web console and can be saved into an S3 bucket (costs will apply). This can be done from the console by clicking on the ‘Setup Inventory’ button in the managed instances list. We will demonstrate doing it via the CLI here. Unlike the per-command invocation shown above, inventory requires creating an association between an SSM inventory document and the instances, with a cron schedule so it periodically refreshes its content:

$ aws ssm create-association --name AWS-GatherSoftwareInventory --targets  "Key=instanceids,Values=<comma separated list of instance ids>" --schedule-expression "cron(0 0/30 * 1/1 * ? *)" --parameters networkConfig=Enabled,windowsUpdates=Disabled,applications=Enabled

This step takes a while (10 minutes or more) to complete; you can see the state of the association in the EC2 console under Managed Instances (by clicking on the instance). Once it completes, the inventory becomes available to view.

You can also see the state of existing associations by executing

$ aws ssm list-associations

Note that each association has a unique guid, which can be used to query for the state of association:

$ aws ssm describe-association --association-id <association guid>

After the association completes successfully we can query for inventory of the nodes:

$ aws ssm list-inventory-entries --instance-id <one of instance ids above> --type-name <inventory type>

The inventory type name is one of the following strings (an example query follows the list):

  • AWS:Application – lists installed packages
  • AWS:Network – lists interface configuration
  • AWS:AWSComponent – lists installed AWS components on the instance (typically SSM agent)
  • Other types are Windows-specific.
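
For example, to list the network configuration recorded for one of the nodes (the instance ID below is hypothetical; substitute one reported by describe-instance-information):

$ aws ssm list-inventory-entries --instance-id mi-0123456789abcdef0 --type-name "AWS:Network"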

Conclusion

This tutorial demonstrated how to use AWS remote management tools to jointly manage EC2 and ExoGENI instances. Some of this functionality, particularly the remote execution, can be done in other ways; however, the AWS approach offers several advantages:

  • Its asynchronous, event-driven nature makes it significantly more scalable than the typically serial execution of commands via remote shell (though you can use psh to speed things up)
  • Historical information about commands is saved in AWS for review, providing an experiment progress log and supporting repeatability
  • Comprehensive per-instance software inventory (with optional history)
  • Programmatic API available for scripting via Boto

This concludes the tutorial; the two following sections have suggestions on troubleshooting and next steps.

Troubleshooting

  • SSM agent behavior on the instances is logged under /var/log/amazon/ssm
  • If you run out of activations, or your activation for the SSM agent in ExoGENI nodes expires, you can create a new activation and reconfigure the SSM agent on each node with the new credentials, following the flow of the ExoGENI post-boot script above.
  • If you get stuck being unable to specify a particular CLI parameter, check this page.

Things to explore further

  • Programmatic API implementations, like Boto
  • Implementing new SSM command documents specific to your experiment
