This press release from the National Science Foundation highlights research by energy researchers from NCSU using ExoGENI:
Author: Prof. Rudra Dutta of NCSU Computer Science
I have been using GENI in some form in my teaching for the last three years -
very tentatively to start with, but more extensively over time. Fall 2014
was my most ambitious use so far.
The course I was teaching was Internet Protocols – a graduate level course on
networking that assumes at least one previous general networking course.
I assume basic working knowledge of networking, TCP/IP, socket programming,
and general knowledge of common Internet functionality such as HTTP, DNS etc.
After a quick refresher on some of these topics, the course dives into details of the forwarding engine, QoS issues in forwarding, programming kernel modules with netfilter, some content about routing, SDN, etc. The first half of the course consists largely of individual and group assignments on these topics, and the second half is one large group project. Groups for both assignments and the project are assigned by me, not self-selected.
In this instance, I used GENI in two different ways – first, specific
questions out of some of the homework assignments were required to be done on
GENI, and later, GENI was specified as one of the three platforms that students
could use for the project. More detailed information about the administration,
including the specific assignments, is available for those interested from
http://dutta.csc.ncsu.edu/csc573_fall14/index.html. I guided the students exclusively toward ExoGENI substrates, because experience from previous semesters indicated that they were the most consistent in slice behavior. (Some other substrates showed varying slice and stitching behavior when the same RSpec was tried multiple times, which was confusing to students.) We also adopted a methodology of designing and reserving through Flukes and accessing the nodes separately by ssh, because it fit well with the rest of ExoGENI and presented the students with a uniform way to negotiate authentication/authorization issues.
Before assigning the first homework, I had briefly covered GENI operations in class. The first assignment had the students create GENI IDs, request to join the ncsu_teaching project, reserve a slice with a simple four-node tandem network, and then set up routing tables at each node to get a simple ping through. Later homework assignments were more complex, up to the final one, which asked them to create a seven-node topology and use both OpenFlow and kernel module programming to build and investigate the behavior of a firewall.
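To give a flavor of what that first routing exercise involved, here is a minimal sketch for a hypothetical four-node tandem A – B – C – D; the addressing scheme below is purely illustrative and is not taken from the actual assignment.
# Assumed addressing: link A-B = 10.0.1.0/24, link B-C = 10.0.2.0/24, link C-D = 10.0.3.0/24,
# with A=10.0.1.1, B=10.0.1.2/10.0.2.1, C=10.0.2.2/10.0.3.1, D=10.0.3.2.
# On node A, reach the far links through B:
sudo ip route add 10.0.2.0/24 via 10.0.1.2
sudo ip route add 10.0.3.0/24 via 10.0.1.2
# On the intermediate nodes B and C, enable IP forwarding:
sudo sysctl -w net.ipv4.ip_forward=1
# On node B, reach the C-D link through C; on node C, reach the A-B link through B:
sudo ip route add 10.0.3.0/24 via 10.0.2.2     # run on B
sudo ip route add 10.0.1.0/24 via 10.0.2.1     # run on C
# On node D, reach the far links through C:
sudo ip route add 10.0.1.0/24 via 10.0.3.1
sudo ip route add 10.0.2.0/24 via 10.0.3.1
# A ping from A to D should now go through end to end:
ping -c 3 10.0.3.2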
There were a total of 86 students, who were eventually
grouped into 22 project teams; however, the class started with a somewhat
larger number of students who attempted the early assignments. There were the
usual initial problems; students complained of resources not being available,
access never working, very sluggish access, and other similar issues. Upon investigation, most of these could be traced to misunderstandings about ssh key files; a lack of appreciation of how much extra bandwidth it takes to push a GUI through two sets of network connections (many of the students had no suitable computers of their own from which to access GENI, and were using servers from VCL, the NCSU computing cloud, to do so); not realizing that the management interfaces visible to them were not meant to carry their slice traffic; and so on. There were also some actual GENI issues – over this period
ExoGENI went through some ExoSM problems, which led the ExoGENI operators to advocate that anybody not using cross-rack stitching should use specific SMs rather than the ExoSM (contrary to what the webpages typically said); the format of the Flukes .properties file also changed, which the TA had to scramble to communicate to all students. By far the problem that had the biggest impact
on the students was that resources were not always available when needed -
students would wait for hours or even days without their requests being provisioned.
We cannot be sure but believe that these represent real resource crunches, not
an artifact or mistake of some kind.
When the time came to propose project ideas, I was somewhat surprised (after all
the complaints) that 12 out of the 22 teams picked GENI as their platform
outright, and another 7 listed it as one of the two platforms they were going
to use (3 of these eventually ended up working on GENI). While the teams had
varied success in their projects, I was glad to see that they had all
negotiated GENI well. Some of the projects were quite impressive. Most of
them would have been possible to execute in my department’s networking lab,
but it would not have been possible to support the same number and variety of projects there.
Each of the teams that used GENI as their project platform wrote up a short
assessment of the role of GENI in their project. A few of these are appended.
Most of them speak of pros as well as cons, but on the whole they confirm that
the availability of GENI enriched this course, and produced learning benefit
that would not have been attainable without it.
As we’re upgrading the ExoGENI infrastructure to the new release of ORCA (5.0 Eastsound), there are a few things experimenters should know about the features and capabilities of this new release.
The main feature being added is so-called state recovery – the ability to restart the various ORCA actors while retaining state about created slices. This will allow experimenters to run long-lived experiments without worrying that software updates or other disruptive events will interfere. Recovery handles many situations, although catastrophic events may still result in the loss of slice information.
Another area of attention for us has been bare-metal node provisioning – we have strengthened the code that performs bare-metal provisioning, making it more error-proof, and also added the ability to attach iSCSI storage volumes to bare-metal nodes. Until now this capability was available only for virtual machine slivers.
ORCA5 has also allowed us to enable hybrid mode support in the rack switches, which in simple terms means that experimenters who want to use the OpenFlow capabilities of the switch can do so, while the rest can use traditional VLANs with more predictable performance guarantees.
Finally, we introduced the ability to see the boot consoles of VMs in case of failure, a feature we hope will help in debugging stubborn image creation issues.
Known issues in this release:
- Attachment to mesoscale VLANs:
  - Won’t work properly with the current NDL converter.
  - Doesn’t work at present due to yet-to-be-determined problems with the switch hybrid configuration – packets don’t pass properly between the OpenFlow and VLAN parts of the switch.
- NDL conversion for some slice manifests may not work properly, so slices may appear disconnected. This requires an update to the NDL converter, which will be done once more racks are upgraded.
We’ve described in the past how to run OpenFlow experiments in ExoGENI using virtual OVS switches hosted in virtual machines. With the release of ORCA5, it is now possible to run some experiments using the real OpenFlow switches in ExoGENI racks (for IBM-based racks they are the BNT G8264).
To do that, start your OpenFlow controller on a host with a publicly accessible IP address. Then create a slice in Flukes (remember to assign IP addresses to the dataplane interfaces of the nodes) as shown in the figure below, making sure to mark the reservation as ‘OpenFlow’ and fill in the details – your email, a slice password (not really important – it can be any random string and you don’t need to remember it), and the URL of the controller in the form of
tcp:<hostname or ip>:<port number, typically 6633>
Submit the slice and wait for the Flowvisor on the rack head node to connect to your controller. You should see the various events (e.g. PACKET_IN) flowing by in the controller log. And that’s it.
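For completeness, here is a minimal, hypothetical sketch of the controller side (not part of the original instructions): it assumes the POX controller is installed under ~/pox on a host with a publicly reachable IP address and that TCP port 6633 is open.
# Start a simple learning-switch OpenFlow controller listening on port 6633
cd ~/pox
./pox.py openflow.of_01 --port=6633 forwarding.l2_learning
# The controller URL to enter in the Flukes reservation would then be
#   tcp:<public hostname or IP of this host>:6633
Any other OpenFlow controller (Floodlight, Ryu, etc.) reachable at a public address should work equally well.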
Current limitations – only one link per slice. Only ORCA5 IBM racks can do this at the moment, which excludes UvA, WVnet, NCSU and Duke racks.
Authors: Jeffrey L. Tilson and Jonathan Mills.
Context: Open Science for Synthesis is a unique bi-coastal training program offered for early-career scientists who want to learn new software and technology skills needed for open, collaborative, and reproducible synthesis research. UC Santa Barbara’s National Center for Ecological Analysis and Synthesis (NCEAS) and the University of North Carolina’s Renaissance Computing Institute (RENCI) co-led this three-week intensive training workshop, with participants in both Santa Barbara, CA and Chapel Hill, NC, from July 21 – August 8, 2014. The training was sponsored by the Institute for Sustainable Earth and Environmental Software (ISEES) and the Water Science Software Institute (WSSI), both of which are conceptualizing an institute for sustainable scientific software.
The participants were initially clustered into research groups based, in part, upon mutual interests. Then, in conjunction with their research activities, daily bi-coastal sessions were held to develop expertise in sustainable software practices and in the technical aspects that underlie successful open science and synthesis – from data discovery and integration to analysis and visualization, as well as special techniques for collaborative scientific research applied to the team projects. The specific projects are described at https://github.com/NCEAS/training/wiki/OSS-2014-Synthesis-Projects.
Specifics of ExoGENI: In support of the research teams, ExoGENI provisioned a total of three slices, where a slice is defined as one or more compute resources (virtual machines or bare-metal nodes) interconnected via a dedicated private network. The largest slice contained four virtual machines (VMs), with each VM having 75 GB of disk space, 4 CPUs, and 12 GB of RAM. A second slice, using two of the same-sized VMs as the first, additionally had a 1 TB storage volume mounted via iSCSI onto each host. The last slice utilized two bare-metal nodes, each with 20 CPU cores and 96 GB of RAM, and had R installed for statistical programming. These slices were allocated for the duration of the workshop. Access for workshop participants was provided via ssh keys. Workshop staff were provided additional keys for root access.
Lessons learned: The ExoGENI-provided resources were easy to assemble and make available to the research teams. Each team provided its best guess regarding memory, disk, and computation needs, which resulted in three different classes of ExoGENI resources.
The ExoGENI resources provisioned for participants were all Linux-based. Moving forward, alternative operating systems should be considered, perhaps by soliciting research-group feedback at the start of the workshop.
ExoGENI is a new GENI testbed that links GENI to two advances in virtual infrastructure services outside of GENI: open cloud computing (OpenStack) and dynamic circuit fabrics. ExoGENI orchestrates a federation of independent cloud sites located across the US and circuit providers, such as NLR and Internet2, through their native IaaS APIs, and links them to other GENI tools and resources.
ExoGENI is, in effect, a widely distributed networked infrastructure-as-a-service (NIaaS) platform geared towards experimentation and computational tasks. ExoGENI employs sophisticated topology embedding algorithms that take advantage of semantic resource descriptions using NDL-OWL – a variant of Network Description Language.
Individual ExoGENI deployments consist of cloud site “racks” on host campuses, linked with national research networks through programmable exchange points. The ExoGENI sites and control software are enabled for flexible networking operations using traditional VLAN-based switching and OpenFlow. Using ORCA (Open Resource Control Architecture) control framework software, ExoGENI offers a powerful unified hosting platform for deeply networked, multi-domain, multi-site cloud applications. We intend that ExoGENI will seed a larger, evolving platform linking other third-party cloud sites, transport networks, and other infrastructure services, and that it will enable real-world deployment of innovative distributed services and new visions of a Future Internet.
To learn about how to use the testbed, please visit the ExoGENI wiki.
- Network Delay Emulation
- Using Dropbox for ExoGENI Images
- Running OpenFlow experiments with real rack switches
- Creating Custom ExoGENI Images using Virtual Box
- Dealing with LLDP topology discovery in ExoGENI slices.
- Lies, damn lies, and iperf: Dataplane network tuning in ExoGENI today
- Creating a Custom Image from an Existing Virtual Machine
- Using stitchports to connect slices to external infrastructure
- New Capability: SoNIC-enabled Software-Defined Precise Network Measurements in GENI
- Example: creating slivers of iSCSI Storage
- Taking the mystery out of ExoGENI resource binding Part 2: inter-rack slices
- Example: Modifying a Slice (add/remove nodes)
- ExoGENI New Features from Sep. 2013 upgrade
- Hadoop in a Slice
- Taking the mystery out of ExoGENI resource binding Part 1: unbound slices
- Example: Postboot Scripts for Creating Hostnames and Name Resolution
- Welcome to ExoBlog!
In this article, courtesy of Rajesh Gopidi, we are going to introduce you to a way of dealing with LLDP-based topology discovery in OpenFlow slices, i.e. slices that stand up OVS instances in the nodes and may run their own version of FlowVisor on top of OVS to simulate, for example, a multi-controller environment within the slice.
The proposed solution is implemented as a new command in Flowvisor that updates the ethertype and destination MAC address parameters used to identify LLDP packets, in order to prevent them from being “swallowed” by the hardware switches within the ExoGENI infrastructure that create the links between nodes.
It is very easy to stand up OpenFlow experiments in ExoGENI by using one of the OVS images available to experimenters. However, one typical problem all of them face is the inability of the OpenFlow controller(s) to discover the topology of switches – the OpenVSwitch instances running inside the nodes – by sending LLDP packets. This is because the underlying physical switch(es) that make up the virtual link(s) connecting nodes in the slice absorb these packets instead of relaying them, which precludes the OpenFlow controllers in the slice from discovering the topology.
One simple work-around for the above problem is to modify the header – the ethertype and destination MAC address – of the LLDP packets sent out by the OpenFlow controller(s) so that the underlying physical switches don’t recognize them. For example, in the Floodlight controller, one can change the header of LLDP packets by modifying the constants TYPE_LLDP and LLDP_STANDARD_DST_MAC_STRING in the net.floodlightcontroller.packet.Ethernet.java and net.floodlightcontroller.linkdiscovery.internal.LinkDiscoveryManager files, respectively.
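If you want to locate these constants in a Floodlight source tree before editing them, a simple search such as the one below should find them (the path reflects a typical Floodlight checkout layout and is an assumption, not something specified in the original article):
# Locate the LLDP-related constants mentioned above in the Floodlight sources
grep -rn "TYPE_LLDP\|LLDP_STANDARD_DST_MAC_STRING" src/main/java/net/floodlightcontroller/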
The above work-around can be applied successfully in scenarios where there is only one OpenFlow controller programming the switches in the network. However, consider a case where the network in the slice is virtualized using Flowvisor, and multiple controllers are allowed to program a switch simultaneously. In such scenarios Flowvisor, which also uses LLDP, will not be able to classify packets with the modified LLDP header as LLDP probes; it will therefore drop the LLDP packets sent by the OpenFlow controller(s), nullifying the work-around.
This is where the new command comes in handy. Using the new command one can modify the parameters used by Flowvisor to identify LLDP packets. To update both the ether type and destination MAC address parameters, use the command as shown below:
fvctl update-lldp-header <ether type in hex> <destination MAC address>
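For example, assuming the controller(s) were modified to emit LLDP probes with ethertype 0x88b6 and destination MAC 01:80:c2:00:00:03 (purely illustrative values, not requirements), the corresponding Flowvisor update would look like:
# Tell Flowvisor to treat frames with this ethertype/destination MAC as LLDP probes
fvctl update-lldp-header 0x88b6 01:80:c2:00:00:03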
After modifying the LLDP parameters as shown above, the OpenFlow controller is able to discover the links in the topology using LLDP packets, as shown in the figures below:
To make this work you will need to install the updated Flowvisor into your slice. The code for the Flowvisor that has these new options can be found on GitHub. Export it as a zip, download it, and install it by following the instructions in the INSTALL file.
There are no new software features. This maintenance was meant to restore stitching to the UCD rack, which had been broken by a reconfiguration of Internet2 AL2S.
- There is a new rack at TAMU that is accessible for intra-rack slices only for now, due to a lack of physical connectivity to its upstream provider. Once the connectivity is in place, we will enable stitching to TAMU. See the wiki for rack controller information.
- There is a new version of Flukes with minor changes. As a reminder, Flukes has been reported not to work well with Java 7 on certain platforms, specifically Mac OS. Java 6 typically solves the problems.
- Connectivity to the OSF rack is currently malfunctioning due to problems with the ESnet OSCARS control software instance. We will announce when those problems have been addressed. (Update: this has been fixed.)
- Stitching to UCD has been restored.
This post comes to us courtesy of Cong Wang from UMass Amherst.
ExoGENI provides the key features for stitching among multiple ExoGENI racks or connecting non-ExoGENI nodes to ExoGENI slices via a stitchport. This article provides some initial instructions on how to stitch ExoGENI slices to the external infrastructure with stitchports using VLANs.
The procedure for stitching to an external stitchport is similar to the example above, except that the local machine needs to be configured correctly to allow connection to the VLAN. In the following example, the local laptop (outside the ExoGENI testbed) is located on the University of Wisconsin–Madison campus. It connects to ExoGENI via layer-2 VLAN 920. The Flukes request is shown in the figure. The available stitchport URLs and Labels/Tags can be found in the ExoGENI wiki. More stitchports at other locations can be added upon request via the users mailing list.
After the slice is successfully reserved, the manifest view should be similar to the one shown below:
In order to attach the local (non-ExoGENI) node to the VLAN, the interface connected to the VLAN has to be configured correctly. In addition to the regular IP configuration, the attached interface has to be configured with the correct VLAN ID.
In the following, an example for the case of Ubuntu is given:
- First, in case it is not already loaded, load the 8021q module into the kernel:
sudo modprobe 8021q
- Then create a new interface that is a member of a specific VLAN; VLAN ID 10 is used in this example. Keep in mind that you can only use physical interfaces as a base – creating VLANs on virtual interfaces (i.e. eth0:1) will not work. We use the physical interface eth1 in this example. The following command will add an additional interface alongside the interfaces that have already been configured, so your existing configuration of eth1 will not be affected:
sudo vconfig add eth1 10
- Finally, assign an address to the new interface (the address should be consistent with the other IP addresses used in the dataplane of the slice):
sudo ip addr add 172.16.0.5/24 dev eth1.10
At this point, you should be able to ping Node1, which means the stitchport has been successfully attached to the ExoGENI slice.
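As a quick sanity check, the commands below verify the VLAN sub-interface and reachability from the local machine; the address used for Node1 (172.16.0.1) is an assumption for illustration, since the original example does not specify it. On newer systems, "ip link add link eth1 name eth1.10 type vlan id 10" can be used instead of vconfig to create the sub-interface.
# Confirm the VLAN 10 sub-interface exists and carries the expected address
ip -d link show eth1.10
ip addr show eth1.10
# Ping the ExoGENI node across the stitchport (assumed address for Node1)
ping -c 3 172.16.0.1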