Taking the mystery out of ExoGENI resource binding Part 2: inter-rack slices

In the previous entry in this series we talked about unbound slices as a convenient way to create slices without worrying about which rack they need to be placed at.

In this brief entry we describe how to take fuller control of where the elements of a slice are placed in ExoGENI, and discuss the features of the ExoGENI stitching engine, which allows experimenters to create slices with compute and storage slivers placed in different racks and 'stitched' together using dynamically established Layer 2 connections.

ORCA control software pioneered the wide application of stitching, and its stitching features have been consistently upgraded to offer maximum flexibility to experimenters. Today ExoGENI stitching covers the following features:

  • Inter-rack slices that can include resources in multiple racks with any-to-any connectivity between racks
  • Coverage of racks deployed on Internet2, NLR FrameNet, ESnet, BEN and, soon, NSI transit providers
  • Ability to support multi-point as well as point-to-point Layer 2 connections in slices across multiple racks
  • Ability to support stitchports – endpoints that mark connections between a slice and some fixed infrastructure – a university lab, a supercomputer, a storage array etc.

ExoGENI stitching is completely separate from the nascent GENI stitching being developed today. It is partially supported via GENI tools like Flack and Omni (with the caveats below) and fully supported in Flukes.

In this post we cover the simple inter-domain topologies and reserve the topics of multi-point connections and stitchports for future entries.

Creating inter-domain slices

We frequently refer to slices that cover more than one rack as 'inter-domain' slices. Creating slices like this requires using the ExoSM ORCA controller, since it has the right level of visibility into resources inside individual racks and into transit providers (like Internet2, NLR FrameNet, ESnet and so on). As usual, we will be using the Flukes tool to create the experiment slice topology.

After starting Flukes (assuming our GENI credentials are visible to Flukes) the first thing we do is select the ExoSM controller URL from the list of available controllers in Flukes. Note that your .flukes.properties file should be properly configured to allow you to communicate with this controller.

Selecting ExoSM controller in Flukes

Flukes will prompt you for the key alias and password, or simply the password depending on the type of credential (JKS or PEM) that you provided. It will then query the system for available sites and their capabilities.

Now it is time to draw our inter-domain topology. We'll start simple: create a slice with three nodes, one in the UvA rack in Amsterdam, one in the Houston rack, and one at BBN near Boston. We will then compare the latency on the links from Amsterdam to Houston and from Amsterdam to BBN.

Simply place the nodes on the canvas, connect them with links, and right-click on each of them to assign an instance type, a VM image and, importantly, the domain, i.e. the rack that the node will be placed in. ExoGENI control software will take care of figuring out the paths and provisioning them.

Creating inter-domain slices in Flukes

You can either click the 'Auto IP' button to assign IP addresses automatically or do it manually for each node. Remember that you cannot assign the same subnet to two different links incident on the same node. After that, as usual, we pick a name for the slice and submit it.
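The constraint above, that no two links incident on the same node may share a subnet, can be checked programmatically. Here is a minimal sketch of such a check; the topology, link names and addresses are hypothetical and not tied to Flukes' internals:

```python
import ipaddress
from collections import defaultdict

def find_subnet_conflicts(assignments):
    """assignments: list of (link_name, node_name, 'ip/prefix') tuples.
    Returns (node, subnet, link_a, link_b) for every node that has two
    different links on the same IP subnet."""
    seen = defaultdict(dict)  # node -> {subnet: link}
    conflicts = []
    for link, node, addr in assignments:
        subnet = ipaddress.ip_interface(addr).network
        prev = seen[node].get(subnet)
        if prev is not None and prev != link:
            conflicts.append((node, str(subnet), prev, link))
        else:
            seen[node][subnet] = link
    return conflicts

# Node 'ams' sits on two links; each link must use a distinct subnet.
ok = [("ams-hou", "ams", "172.16.0.1/24"), ("ams-bbn", "ams", "172.16.1.1/24")]
bad = [("ams-hou", "ams", "172.16.0.1/24"), ("ams-bbn", "ams", "172.16.0.2/24")]
print(find_subnet_conflicts(ok))   # []
print(find_subnet_conflicts(bad))  # [('ams', '172.16.0.0/24', 'ams-hou', 'ams-bbn')]
```

The 'Auto IP' button spares you from tracking this by hand, but the check is handy when assigning addresses manually on nodes with many incident links.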

We then switch to the ‘Manifest’ pane and query the system until all slivers (links and nodes) are ‘Active’. You will note that the system provided you with a lot more slivers than you asked for. This is because ExoGENI has requested and stitched together all the intermediate VLANs on your behalf. To get the topology to display properly, play around with ‘Graph Layout’ menu options.

Inter-domain slice manifest in Flukes

Now it is time to log in and measure latency from Amsterdam to the other nodes. Right-clicking on a node and displaying its properties will tell you which IP address is assigned to which link.

While 'ping' is not the best tool for measuring latency, it works as a first approximation, and in this case the results are:

  • Amsterdam-BBN ~ 140ms
  • Amsterdam-Houston ~ 160ms

For larger latencies, try NICTA rack (Australia) to Amsterdam.
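When comparing links, it helps to summarize ping's per-packet output rather than eyeballing it. A small sketch of parsing RTTs out of ping output; the sample output below is made up to match the Amsterdam numbers above:

```python
import re
from statistics import mean

def rtts_from_ping(output):
    """Extract RTT values (in ms) from ping output lines like 'time=140.2 ms'."""
    return [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ?ms", output)]

# Hypothetical ping output from the Amsterdam node toward a BBN sliver.
sample = """\
64 bytes from 10.0.1.2: icmp_seq=1 ttl=64 time=140.2 ms
64 bytes from 10.0.1.2: icmp_seq=2 ttl=64 time=139.8 ms
64 bytes from 10.0.1.2: icmp_seq=3 ttl=64 time=140.0 ms
"""
print(f"avg RTT: {mean(rtts_from_ping(sample)):.1f} ms")  # avg RTT: 140.0 ms
```

For more rigorous measurements, tools like `owping` (one-way ping) or `iperf` are better suited, but for a first look at inter-rack paths this is usually enough.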

GENI tools support

In general, both Flack/GENI Portal and Omni can submit simple inter-domain requests to ExoSM. Domain binding is not yet well supported in Flack; however, since RSpec generation for Omni is manual, binding can easily be done there. Stitchports and multi-point Layer 2 connections are not supported by either Flack/GENI Portal or Omni.
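For Omni users, binding a node to a particular rack is done in the request RSpec by setting the node's `component_manager_id` to the aggregate URN of that rack. A sketch of what such a request might look like; the URN and sliver type below are illustrative, so substitute the actual values for your target rack:

```xml
<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
  <!-- Bind this node to a specific ExoGENI rack via its aggregate URN.
       The URN here is illustrative; use your rack's real aggregate URN. -->
  <node client_id="ams-node"
        component_manager_id="urn:publicid:IDN+exogeni.net:uvanlvmsite+authority+am">
    <sliver_type name="XOSmall"/>
  </node>
</rspec>
```

With each node bound to its rack this way, `omni createsliver` against the ExoSM will produce an inter-domain slice just as Flukes does.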
