Welcome to ExoGENI

ExoGENI is a new GENI testbed that links GENI to two advances in virtual infrastructure services outside of GENI: open cloud computing (OpenStack) and dynamic circuit fabrics. ExoGENI orchestrates a federation of independent cloud sites located across the US and circuit providers, such as NLR and Internet2, through their native IaaS APIs, and links them to other GENI tools and resources.

ExoGENI is, in effect, a widely distributed networked infrastructure-as-a-service (NIaaS) platform geared towards experimentation and computational tasks. ExoGENI employs sophisticated topology embedding algorithms that take advantage of semantic resource descriptions using NDL-OWL – a variant of Network Description Language.

Individual ExoGENI deployments consist of cloud site “racks” on host campuses, linked with national research networks through programmable exchange points. The ExoGENI sites and control software are enabled for flexible networking operations using traditional VLAN-based switching and OpenFlow. Using ORCA (Open Resource Control Architecture) control framework software, ExoGENI offers a powerful unified hosting platform for deeply networked, multi-domain, multi-site cloud applications. We intend that ExoGENI will seed a larger, evolving platform linking other third-party cloud sites, transport networks, and other infrastructure services, and that it will enable real-world deployment of innovative distributed services and new visions of a Future Internet.

To learn about how to use the testbed, please visit the ExoGENI wiki.

Projects that power ExoGENI:

  • ORCA-BEN – core development of ORCA features. ExoGENI is controlled by a specific deployment of ORCA tailored to GENI needs and requirements.
  • NetworkedClouds – adapting OpenStack to a networked-clouds environment.

List of ExoBlog Posts

This is a ‘sticky’ post listing all ExoBlog entries for your convenience.

Using GENI for teaching Computer Networking at NCSU

This post comes to us courtesy of Prof. Rudra Dutta of NCSU Computer Science:

I have been using GENI in some form in my teaching for the last three years – very tentatively to start with, but more extensively over time. Fall 2014 was my most ambitious use so far.

The course I was teaching was Internet Protocols – a graduate-level course on networking that assumes at least one previous general networking course. I assume basic working knowledge of networking, TCP/IP, socket programming, and general knowledge of common Internet functionality such as HTTP, DNS, etc. After a quick refresher of some of these topics, the course dives into details of the forwarding engine, QoS issues in forwarding, programming kernel modules with netfilter, some content about routing, SDN, etc. The first half of the course is largely individual and group assignments on these topics, and the second half is one large group project. Groups are assigned by me for both assignments and the project – not self-assigned.

In this instance, I used GENI in two different ways – first, specific questions in some of the homework assignments were required to be done on GENI, and later, GENI was specified as one of the three platforms that students could use for the project. More detailed information about the administration, including the specific assignments, is available for those interested from http://dutta.csc.ncsu.edu/csc573_fall14/index.html. I guided the students exclusively into using ExoGENI substrates, because experience from previous semesters indicated that it was the most consistent in its slice behavior. (Some other substrates would show varying slice and stitching behavior when tried with the same RSpec multiple times – this was confusing to students.) We also used a methodology of designing/reserving through Flukes and accessing the nodes separately by ssh, because it went well with the rest of ExoGENI and presented a uniform way for the students to negotiate authentication/authorization issues.

Before assigning the first homework, I had briefly covered GENI operations in class. The first assignment had them create GENI IDs, request to join the ncsu_teaching project, and finally reserve a slice with a simple four-node tandem network, then set up routing tables at each node to get a ping through. Later homework assignments were more complex, until the final one asked them to create a seven-node topology and use both OpenFlow and kernel module programming to build and investigate the behavior of a firewall.
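As a rough illustration of the routing part of that first assignment, here is a minimal sketch. The node names and 10.0.x.0/24 addressing are invented for this post, not taken from the actual assignment: on a tandem A-B-C-D, the end nodes need static routes toward the far side, and the intermediate nodes additionally need IP forwarding enabled.

```python
# Hypothetical per-node routing setup for a four-node tandem A-B-C-D.
# Addressing is invented for illustration: link A-B = 10.0.1.0/24,
# B-C = 10.0.2.0/24, C-D = 10.0.3.0/24, with .1/.2 assigned
# left-to-right on each link. Run as root on the node named by NODE.
import subprocess

NODE = "B"  # set to this node's name: "A", "B", "C", or "D"

COMMANDS = {
    # End nodes only need routes toward the far side of the tandem.
    "A": ["ip route add 10.0.2.0/24 via 10.0.1.2",
          "ip route add 10.0.3.0/24 via 10.0.1.2"],
    # Intermediate nodes must also forward packets.
    "B": ["sysctl -w net.ipv4.ip_forward=1",
          "ip route add 10.0.3.0/24 via 10.0.2.2"],
    "C": ["sysctl -w net.ipv4.ip_forward=1",
          "ip route add 10.0.1.0/24 via 10.0.2.1"],
    "D": ["ip route add 10.0.1.0/24 via 10.0.3.1",
          "ip route add 10.0.2.0/24 via 10.0.3.1"],
}

for cmd in COMMANDS[NODE]:
    subprocess.run(cmd.split(), check=True)

# After this, a ping from A (10.0.1.1) to D (10.0.3.2) should go through.
```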

There were a total of 86 students, who were eventually grouped into 22 project teams; however, the class started with a somewhat larger number of students who attempted the early assignments. There were the usual initial problems; students complained of resources not being available, access never working, very sluggish access, and other similar issues. Upon investigation, most of these could be traced to misunderstandings about ssh key files, lack of appreciation of how much extra bandwidth it takes to push a GUI through two sets of network connections (many of the students had no suitable self-owned computing from which to access GENI, and were using servers from VCL, the NCSU computing cloud, to do so), not realizing that management interfaces were visible to them and trying to use them for their slice traffic, etc. There were also some actual GENI issues – over this period ExoGENI went through some ExoSM problems, which led the operators to advocate that anybody not using cross-rack stitching should use the rack-specific SMs rather than ExoSM (contrary to what the webpages typically said), and also changed the format of the Flukes .properties file, which the TA had to scramble to communicate to all students. By far the problem with the biggest impact on the students was that resources were not always available when needed – students would wait for hours or days without requests being provisioned. We cannot be sure, but believe that these represent real resource crunches, not an artifact or mistake of some kind.

When the time came to propose project ideas, I was somewhat surprised (after all the complaints) that 12 of the 22 teams picked GENI as their platform outright, and another 7 listed it as one of the two platforms they were going to use (3 of these eventually ended up working on GENI). While the teams had varied success in their projects, I was glad to see that they had all negotiated GENI well. Some of the projects were quite impressive. Most of them would have been possible to execute in my department’s networking lab, but it would not have been possible to support the same number and variety of projects.

Each of the teams that used GENI as their project platform wrote up a short assessment of the role of GENI in their project. A few of these are appended. Most of them speak of cons as well as pros, but on the whole they confirm that the availability of GENI enriched this course and produced learning benefits that would have been unattainable without it.


Advantages:
1. We were able to allocate resources on demand, with different specifications.
2. It allowed us to work remotely at our convenience.
3. We could create a topology in Flukes with a start-up script for each of the nodes. This allowed us to create new slices whose nodes came preconfigured with basic tools such as tshark and Open vSwitch where required (see the sketch below).

Disadvantages:
1. There is no way to modify a slice once created. This necessitated creating new slices even for slight changes in requirements. (For instance, the bandwidth of links could not be modified after slice creation.)
2. The environment wasn’t very predictable in its behavior. Even when a reservation was successful, the nodes would sometimes fail to initialize to a working state. It was also sometimes difficult to reserve slices on certain racks.
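As an editorial aside, here is a minimal sketch of the kind of node start-up script this team describes. The package names assume a Debian/Ubuntu image; the actual course scripts are not reproduced in this post.

```python
# Illustrative node start-up script in the spirit the team describes:
# preinstall basic tools so new slices come up ready to use. Package
# names assume a Debian/Ubuntu image; the actual course scripts are
# not reproduced here.
import subprocess

PACKAGES = ["tshark", "openvswitch-switch"]

subprocess.run(["apt-get", "update"], check=True)
subprocess.run(["apt-get", "install", "-y", *PACKAGES], check=True)

# Create a default OVS bridge so the node is immediately usable.
subprocess.run(["ovs-vsctl", "--may-exist", "add-br", "br0"], check=True)
```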


Advantages –
* Real network simulation – slices and nodes provisioned in a real network gave more realistic network statistics than instances contained in a controlled environment like NetLabs or VCL.
* Managed infrastructure – GENI allowed us to save a topology and recreate it instantly. This saved us a lot of time compared with creating virtual hosts manually every time.
Disadvantages –
* Large topology issues: long waits for the devices to change from ticketed to active in the response manifest for topologies with more than 10 devices. The main problem faced was “resources not available” while reserving the slice. This was seen across multiple controllers.


Our team found GENI extremely beneficial throughout the development process due to the amazing simplicity it offers in creating new topologies. We found it quite quick and simple to create a new customized topology any time we needed to test a specific part of our project that previous topologies could not support. Using GENI also enabled each team member to individually work on their respective assignments in small isolated topologies without interfering with another teammate’s work. Finally, GENI’s advantage over physical networks is that it provides a great level of flexibility throughout the development process and the demo day (e.g. we could work on our project and demo it anywhere on campus as long as we had our laptops with us). However, we recognize that working on an actual physical network has the great advantage of providing hands-on experience with the network equipment. In our opinion, the lack of this hands-on experience is the only drawback of using GENI.


Advantages

1. Remote access saved time connecting wires and cables. It was easier to simulate a typical network topology and work on it virtually than with actual physical connections. Our project involved test cases to be run on 2-3 different network topologies, so creating a slice for each topology was less time-consuming than connecting workstations in NetLabs.
2. The ability to obtain a slice, store the various node configurations, and retrieve those configurations with ease and continue working. All the nodes required quagga to be installed on them, and there were also frequent changes to the code as requirements evolved, which had to be incorporated by recompiling the application module. Doing this on virtual nodes simultaneously by running bash scripts was easy.
3. Ease of testing in a simulated environment mimicking actual large-scale networks.
4. GENI allowed us to create multiple topologies, and hence all team members were able to work simultaneously on test cases on their own topologies. This saved us a lot of time.

Disadvantages
1. Test cases such as interface shutdown, which needed to be performed using “ifdown”, were not possible on GENI; the command reported an error saying the interface did not exist.
2. Kernel modules reading device statistics from the Linux net_device data structure did not work for the virtual interfaces; they worked only for eth0 (a user-space alternative is sketched below).
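As an editorial note, and not something this team did: when in-kernel counters are unavailable, the same per-interface statistics are exposed through sysfs and can be read from user space, as in this sketch:

```python
# Editor's sketch of a user-space workaround (not what the team used):
# read per-interface counters from sysfs instead of touching net_device
# inside a kernel module.
from pathlib import Path

def iface_stats(iface: str) -> dict:
    """Return all counters under /sys/class/net/<iface>/statistics."""
    base = Path("/sys/class/net") / iface / "statistics"
    return {f.name: int(f.read_text()) for f in base.iterdir()}

print(iface_stats("eth0")["rx_packets"])
```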


Advantages:
- Configuration of nodes was simple.
- The fact that we could access the nodes from anywhere turned out to be very advantageous, because our class schedules, exams, and other assignments left us with limited time to work on the project together — it would have been difficult to work in NetLabs.
- Configuring OVS for the controller was easy. We were told that some changes had to be made in Floodlight in order for it to listen to external switches. We’d have had to do this in VCL, but in GENI it turned out to be easy.

Disadvantages:
- If any command or programming error corrupted the configuration of the eth0 interface, we would have had to create a new slice and start over, because we’d have lost connectivity to the node. This would have been easily correctable in NetLabs. This did not occur in the project itself, but it did happen to us once during a group homework assignment on GENI.
- There was one instance when we lost connectivity to the nodes for about 10-15 minutes. We assumed it was because of some kind of maintenance.


The GENI platform was very useful for our project. The best thing about it is the easy remote access to the nodes. It was also easy to modify the kernel code and build the modules. When our initial modifications were tried, multiple crashes happened on the nodes, and these could be easily recovered from.
Each team member could create his own set-up and work in parallel in GENI. As we required multiple AQMs to be inserted into nodes, we created multiple slices (one for each); this helped us save time by avoiding repeated initial set-ups.
The only disadvantage of GENI is that it sometimes gets slow, and slice building could take time.


Platform: GENI

We used GENI because our demo topology required 7 Linux server nodes, and GENI made it easy to recreate the topology in case of failures during integration testing and while fixing and debugging issues found during end-to-end testing. The project required compiling 2 kernel modules and a user-space executable, so GENI made it easy to change the modules and the executable and re-insert the modules. The team’s previous experience and familiarity with GENI from past homework was also a motivation to use GENI.

One risk and challenge we faced while using GENI was the availability of the exact kernel version needed for the code to work as required, which sometimes resulted in us not getting a slice. A kernel module that hung meant reloading the image on a GENI node. We did not face that problem many times; whenever we got into such a situation, we made our changes and tested everything first on our local virtual machines, where we only had to reboot the virtual machine. There was also a risk of cutting off SSH access to our topology because of a firewall rule that blocks packets from an undefined source IP to an undefined destination IP; access to the eth0 interface IP would get denied by that rule, leaving us unable to access those nodes.


The use of GENI helped us test our project on several complex topologies. We did not have to go through the trouble of creating several VCL images in order to test a
simple topology.

Although GENI helped us test with several nodes, we faced a perennial problem of getting failed resources or disconnected links.
Also, an extended reservation for a slice did not guarantee the existence of the slice till the last date of reservation.

Use of different controllers or racks did not help much.


Experience with GENI:

The GENI platform made working on our kernel-based project a good experience. The slices were mostly easily available, other than at times when there was possibly high demand. Getting the required image onto a node was also straightforward with GENI. For our project we wished to utilize the GENI Desktop for monitoring throughput while sending traffic through the nodes; this is an interesting feature supported by GENI through Flack. On the whole, GENI offered a supportive learning environment for the project.


We used GENI as a platform for our project. The major advantage of using GENI was that it allowed us to create our own topologies, which made testing our project easier because we needed different sets of topologies. But getting the desired topology with more than 4 switches, a controller, and hosts was difficult. Sometimes one of the switches would become inactive (FAILED state), which forced us to request the complete topology again or contact GENI folks to resolve it.


Advantages of GENI:
Using the GENI platform gave us access to a wide range of resources to experiment with: many kinds of servers and different operating systems to try our module on. We could also freely install different utility software on the Linux boxes and configure them as switches, routers, etc., as and when needed.
GENI gave us the flexibility to access the network from anywhere, which would not be possible with the same ease in physical labs.

Disadvantages of GENI:
Once a network was created, if we needed to make any change to the topology, for example changing an IP address, we had to create the entire network again and ask for new resources.
Since we had to implement kernel modules, which meant repeatedly compiling and inserting modules into the kernel in order to run and test them, we ran the risk of the kernel crashing and the node requiring a restart every time this happened. Since we had created the network in GENI, we did not have the privileges to restart a node whose kernel had crashed. Hence, the only solution was to recreate the network, put in the request, and wait for the nodes to be commissioned (which would fail most of the time). Once that was over, we had to update the forwarding and routing tables so packets could be sent successfully. This entire setting up and configuration led to a lot of extra time spent whenever a crash occurred. So we resorted to first implementing and testing the kernel modules in VMs on our own machines, where we could quickly restart the machines.


Advantages of using GENI:

1. Initially we were playing around with the Click module and a few times got our configurations wrong. With GENI, it was easy to just create a new slice.
2. Using GENI, we were able to work at our convenience.
3. Different members were able to work on multiple slices independently to try out different parts of the project. For example, Nikhil and I were trying out Click configurations on my slice while Aditya was trying out different traffic generators on his. So you could say we were able to work in parallel.

Disadvantages of using GENI:

1. Our data and configurations needed to be redone on a new slice each time. But compared to the perks this was a small overhead; also, by redoing them each time we memorized the steps and CLI commands.
2. The bandwidth of Ethernet links on GENI is limited. For our project, trying the application at actual Ethernet bandwidths would have been useful.


Advantages of using GENI:
1. The ability to access devices remotely, unlike NetLabs; thus, the team could easily work from home.
2. VCL is apt only for creating standalone hosts. Our project required 8 nodes, so GENI provided an easy interface to create networked VMs.
3. Easy to provision nodes with specific images. Any time we faced an issue with nodes in the network, we could easily delete the slice and reserve a new one.

Disadvantages of using GENI:
1. Unable to always reserve resources. We had a few instances when we were not able to reserve the required topology due to resource constraints on the controller/orchestrator.
2. Connectivity issues. SSH was not available a couple of times, though this was rare.


We used GENI for our project based on OpenFlow, and it has been a handy tool because of the range of features available in it.

The single most important feature that I liked in GENI over VCL is the admin privileges. Our project required root access, and it was easy for us to get it in GENI. With root access we were able to install various third-party software, and if something went wrong it was easy to create another slice and delete the old one; there is no real damage to any physical system.

It would be better if the number of ORCA controllers and the resources available in each of them were increased, as we faced some delay in the creation of slices from the RCI and UCD controllers. Moreover, options such as adding new nodes to an existing slice, if available, would have saved a lot of time performing redundant work.


Advantages:
GENI provided us a platform to check the concepts taught in class and how they were implemented in real-world topologies, like adding ARP tables and route tables and monitoring network traffic. The ability to create topologies rather than connect systems manually, and the opportunity to test the latest protocols like OpenFlow over these topologies, is great. Our project needed us to load and unload kernel modules repeatedly, and the root permissions that GENI provided made it really easy to make the necessary changes in the kernel too. Compared to the case where kernel modules are loaded on local systems and hang, the ability to obtain a new node for testing after losing one made it very comfortable to implement kernel modules.

Disadvantages:
During high demand, it was difficult to obtain a slice to perform experiments. GENI’s UI could be made more user-friendly. After a slice is submitted and all nodes become active, hovering the mouse over interfaces/nodes could show a simple pop-up note giving the node’s IP address and interface details. Whenever a user logs in, it opens a new terminal window; handling multiple windows can be cumbersome if there are many nodes, and it would be better if there were an option to open them in new tabs.

ORCA5 upgrade

As we’re upgrading the ExoGENI infrastructure to the new release of ORCA (5.0 Eastsound), there are a few things experimenters should know about the features and capabilities of this new release.

The main feature being added is so-called state recovery: the ability to restart the various ORCA actors and retain state about created slices. This will allow experimenters to run long-lived experiments without concern about interference from software updates or other disruptive events. Recovery handles many situations, although catastrophic events may still result in the loss of slice information.

Another area of attention for us has been bare-metal node provisioning – we have strengthened the code that performs bare-metal provisioning, making it more robust, and added the ability to attach iSCSI storage volumes to bare-metal nodes. Until now this capability has only worked for virtual machine slivers.

ORCA5 has allowed us to enable hybrid mode support in the rack switches, which in simple terms means that experimenters who care to use the OpenFlow capabilities of the switch can do so, while the rest can use traditional VLANs with more predictable performance guarantees.

Finally, we introduced the ability to see the boot consoles of VMs in case of failure, a feature we hope will help in debugging stubborn image creation issues.

Known issues:

  • Attachment to mesoscale VLANs
    • Won’t work properly with the current NDL converter
    • Doesn’t work due to yet-to-be-determined problems with the switch hybrid configuration – packets don’t pass properly between the OpenFlow and VLAN parts of the switch.
  • NDL conversion for some slice manifests may not work properly. Slices may appear disconnected. This requires an update to the NDL converter, which will be done once more racks are upgraded.

Using ExoGENI for training in reproducible synthesis research

Authors: Jeffrey L. Tilson and Jonathan Mills.

Context: Open Science for Synthesis is a unique bi-coastal training offered for early-career scientists who want to learn new software and technology skills needed for open, collaborative, and reproducible synthesis research. UC Santa Barbara’s National Center for Ecological Analysis and Synthesis (NCEAS) and the University of North Carolina’s Renaissance Computing Institute (RENCI) co-led this three-week intensive training workshop with participants in both Santa Barbara, CA and Chapel Hill, NC from July 21 – August 8, 2014. The training was sponsored by the Institute for Sustainable Earth and Environmental Software (ISEES) and the Water Science Software Institute (WSSI), both of which are conceptualizing an institute for sustainable scientific software.

The participants were initially clustered into research groups based, in part, on mutual interests. Then, in conjunction with their research activities, daily bi-coastal sessions were held to develop expertise in sustainable software practices and in the technical aspects that underlie successful open science and synthesis – from data discovery and integration to analysis and visualization, along with special techniques for collaborative scientific research as applied to the team projects. The specific projects are described at https://github.com/NCEAS/training/wiki/OSS-2014-Synthesis-Projects.

Specifics of ExoGENI: In support of the research teams, ExoGENI provisioned a total of three slices, where a slice is defined as one or more compute resources (virtual machines or bare-metal nodes) interconnected via a dedicated private network. The largest slice contained four virtual machines (VMs), each with 75 GB of disk space, 4 CPUs, and 12 GB of RAM. A second slice, using two VMs of the same size as the first, additionally had a 1 TB storage volume mounted via iSCSI onto each host. The last slice utilized two bare-metal nodes, each with 20 CPU cores and 96 GB of RAM, and had R installed for statistical programming. These slices were allocated for the duration of the conference. Access by workshop participants was provided via ssh keys. Workshop staff were given additional keys for root access.

Lessons learned: The ExoGENI-provided resources were easy to assemble and make available to the research teams. Each team provided its best guess regarding memory, disk, and computation needs, which resulted in three different classes of ExoGENI resources.

The ExoGENI resources initiated for participants were all Linux-oriented. Moving forward, alternative operating systems should be considered, perhaps by getting research-group feedback at the start of the workshop.

Notes from June 2014 maintenance

There are no new software features. This maintenance was meant to restore stitching to the UCD rack, which was broken by the reconfiguration of Internet2 AL2S.

  • There is a new rack at TAMU that is accessible only for intra-rack slices for now, due to a lack of physical connectivity to its upstream provider. Once the connectivity is in place we will enable stitching to TAMU. See the wiki for rack controller information.
  • There is a new version of Flukes with minor changes. As a reminder, Flukes has been reported not to work well with Java 7 on certain platforms, specifically Mac OS. Switching to Java 6 typically solves the problem.
  • Connectivity to the OSF rack is currently malfunctioning due to problems with the ESnet OSCARS control software instance. We will announce when those problems have been addressed.
    • This has since been fixed.
  • Stitching to UCD has been restored.

Lehigh University CSE 303 Operating System HW10

Author: Dawei Li

Topic: Category-based N-gram Counting and Analysis Using Map-Reduce Framework

Class Size: 14 students

We used one GENI account (through Flukes) to create slices for all students, and distributed the SSH private key as well as the assigned IP address to each of them.

Statistics:

  • Total slices: 15
  • Total nodes: 73
  • Controllers used: OSF (7 slices, 7 nodes each); WVN (8 slices, 3 nodes each)
  • Duration: 9 days (11 slices); 14 days (4 slices)

Comments:

The Flukes tool is really convenient. I just spent a few hours figuring out how to use it and how to create a Hadoop cluster. The only issue is that I had to poll the status of the slice myself to know whether it was ready. As far as I know, the Flack GUI of ProtoGENI can poll automatically and show users the changing status until the slice is ready.
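A minimal sketch of such a polling loop follows; the get_slice_status() helper is hypothetical and stands in for whatever interface you use (for example, the ORCA controller's native API):

```python
# Sketch of the manual polling described above. get_slice_status() is a
# hypothetical helper: wrap whatever interface you use so that it
# returns a status string for the named slice.
import time

def get_slice_status(slice_name: str) -> str:
    """Hypothetical: call your controller API here."""
    raise NotImplementedError

def wait_for_slice(slice_name: str, poll_seconds: int = 30) -> str:
    while True:
        status = get_slice_status(slice_name)
        print(f"{slice_name}: {status}")
        if status in ("Active", "Failed"):  # assumed terminal states
            return status
        time.sleep(poll_seconds)
```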

I have heard no complaints from students about connection problems, which suggests that the testbed resources are relatively stable whether accessed on campus or not. Some students could not log into the testbed simply because they were not familiar with SSH. However, one grader said that he could not log into the testbed on May 3rd (around midnight) using one OSF slice, but he could log in again the next morning.

Notes from April 2014 maintenance

This note summarizes the results of maintenance across the entire ExoGENI Testbed in April 2014.

The highlights

  • Minor fixes added to BEN to better support inter-domain topologies
  • UDP performance issue addressed. See below for more details.
  • FOAM updated across the racks
  • Floodlight updated across the racks
  • Connectivity
    • Most of the connectivity caveats from the previous note still apply.
    • UvA rack is currently not reachable. We suspect a problem with the configuration in SURFnet, which we will address.
      • This behavior appears to have resolved itself by 05/06 without our intervention. Please report any further problems. 

The details: UDP performance

Some of you have observed very poor performance for UDP transfers – extremely high packet losses and very low transfer rates. This issue was traced to three separate causes:

  • Poor implementation of the “learning switch” functionality in the version of the Floodlight OpenFlow controller we were using. It resulted in sudden losses of packets after a period of time, particularly between bare-metal nodes. To resolve this issue we upgraded Floodlight to version 0.9 and replaced its “learning switch” module with the better-behaved “forwarding” module.
  • Insufficient configuration of the QEMU interface to the guest VM, which resulted in very high packet losses. We updated the Quantum agent to support the proper options.
  • Sensitivity of UDP transfers to host- and guest-side transmit and receive buffers. We tuned the host-side buffers on the worker nodes; however, guest-side tuning must be done by the experimenter (see the sketch after this list).
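As a minimal guest-side sketch (the buffer size here is illustrative, not a recommended value):

```python
# Guest-side sketch: request larger UDP socket buffers for a transfer
# test. The 4 MB figure is illustrative; the kernel caps requests at
# net.core.rmem_max / net.core.wmem_max, which may need to be raised
# first (e.g. sysctl -w net.core.rmem_max=4194304).
import socket

BUF = 4 * 1024 * 1024  # 4 MB, illustrative

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF)
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF)

# The kernel may clamp the request; check what was actually granted.
print("rcvbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
print("sndbuf:", s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
```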

We will publish a separate blog entry in the near future explaining further how to get the best UDP performance out of ExoGENI.

Notes from Mar 2014 maintenance

This note summarizes the results of maintenance on XO-BBN, XO-RCI, XO-FIU, XO-UFL, XO-SL racks and ExoSM controller.

The highlights

The purpose of the maintenance event was to reconfigure several of the racks to allow a wider range of VLANs to be usable for stitching.

  • A new rack, XO-UCD at UC Davis, was added; however, it is not fully reachable due to unresolved connectivity issues in CENIC.
  • The rack at Starlight (XO-SL) has been reconfigured to use the currently available narrow range of VLANs, and support was added for the GEC19 SDX demo via VLAN 1655.
  • Support for SONIC cards was added to XO-RCI and XO-UCD.
  • A controller bug was resolved in ExoSM.

Connectivity caveats

Not all racks currently visible through ExoSM can be reached using stitching. We expect these issues to be resolved in the near future:

  • XO-UCD connectivity via CENIC across all VLANs
    • Resolved on 03/07/2014
  • XO-SL connectivity across all VLANs
    • Resolved on 03/12/2014
  • XO-NICTA continues to have problems with VLANs 4003 and 4005
  • XO-BBN has a problem with VLAN 2601 in the NOX/BBN network
    • Resolved on 04/05/2014
  • Direct connectivity between XO-UFL and XO-FIU across all VLANs
    • Resolved on 05/20/14

Notes from Feb 2014 maintenance

This post describes changes in topology and behavior after shifting the BBN (xo-bbn), UH (xo-uh), and UFL (xo-ufl) racks from NLR to Internet2 AL2S, as well as a number of software fixes.

The Highlights

  • Please visit the updated ExoGENI topology diagram to see how racks are connected to each other: https://wiki.exogeni.net/doku.php?id=public:experimenters:topology
  • Added initial, lightly tested support for ‘speaks-for’ GENI credentials in the GENI AM API wrapper, deployed in ExoSM only for now.
  • Point-to-point and inter-rack multi-point stitching continues to be supported
    • A number of bug fixes to improve stability; see some caveats in the details below
    • Inter-rack multi-point stitching only available via ORCA native API/Flukes tool
  • An updated version of Flukes. Please see the Release Notes in Flukes for more information
    • Optional support for GENI Portal/Slice Authority – registers your slices with the GENI Portal
    • Support for the coloring extension (see details below)
  • An updated version of NDL-to-RSpec converter which includes the following fixes
    • Links now have proper properties (e.g. bandwidth) in manifests
    • Bug in inter-domain manifests with duplicate interface names fixed
    • Per interface VLAN ranges should be properly advertised in stitching extension RSpec
    • Support for coloring extension RSpec (see details below) introduced
  • Two new racks will be visible in ExoSM advertisements and in Flukes: XO-OSF and XO-SL.
    • OSF, located at Oakland Scientific Facility, Oakland, CA (xo-osf)
    • SL, located at Northwestern University, Chicago, IL (xo-sl)
  • Inter-rack connectivity has the following important caveats:
    • Currently it is not possible to stitch UFL and FIU directly to each other due to limitations of the AL2S service. It is possible to have them as two branches of a multi-point connection. We are working on a solution to the point-to-point issue.
    • Connectivity to NICTA is experiencing problems due to what we think are misconfigured VLANs in TransPacWave. If your slice gets tags 4003 and 4005 going to NICTA, connectivity is not assured. Simply try to create a new slice, leaving the broken slice in place. Then delete the broken slice.
    • Connectivity to SL has not been properly plumbed in places, so does not work for the moment.
      • This issue has been resolved as of 03/12/14
    • Connectivity to UFL appears to be broken through FLR across all available VLANs. We are working to resolve this issue.
      • This issue has been resolved as of 02/20/2014

The details – Multi-point topology embedding and templates

When using post-boot script templates with multi-point connections, the following rule needs to be observed:

  • When embedding an intra-rack topology (a slice local to a single rack) with a broadcast link, refer to the IP address of a node on the broadcast link by the link name, e.g. “VLAN0”.
  • When embedding an inter-rack topology (a slice spanning multiple racks) with a broadcast link, refer to the IP address of a node on the broadcast link by the link name concatenated with the node name, e.g. “Node0-VLAN0”.

This is a temporary limitation that will be removed in the near future.
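To summarize the rule in code form (an editor's sketch; the actual substitution is performed by the post-boot script template engine, not by user code):

```python
# Editor's summary of the naming rule above. The real substitution is
# done by the post-boot script template engine, not by user code.
def broadcast_link_key(link_name: str, node_name: str, inter_rack: bool) -> str:
    """Name under which a node's IP on a broadcast link is exposed."""
    return f"{node_name}-{link_name}" if inter_rack else link_name

assert broadcast_link_key("VLAN0", "Node0", inter_rack=False) == "VLAN0"
assert broadcast_link_key("VLAN0", "Node0", inter_rack=True) == "Node0-VLAN0"
```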

Additionally, the topology embedding engine has the following limitations:

  • it does not properly deal with slices that combine inter-rack multi-point connections with inter-rack point-to-point connections going across BEN (to xo-rci, for example).
  • it does not properly deal with slices that have two stitch ports on the same port URL, but different VLAN tags in the same slice

We expect to be able to remedy these soon; for now, please avoid such requests.

The details – Application Coloring ontology and coloring RSpec Extension

This ontology was designed to allow attaching general application-specific attributes to slivers (nodes and links) and creating labelled directed dependencies between them. These are NOT read by the control framework but, rather, transparently passed through from request to manifest, allowing application-level annotation of the request. It is important to understand that the processing of the elements of this schema is left to the application creating requests and processing the resulting manifests.

This ontology (see https://geni-orca.renci.org/trac/browser/orca/trunk/ndl/src/main/resources/orca/ndl/schema/app-color.owl) is modeled after property graphs, with multiple colors (or labels) associated with each node, link, and color dependency. Each color or color dependency can have multiple properties associated with it, as may be needed by the applications running in the slice (a sketch follows the list):

- any number of key-value pairs
- a blob of text
- a blob of XML
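
As an illustration of this data model, here is an editor's sketch in plain Python; it mirrors the property-graph idea described above and is not the OWL schema or an ORCA API:

```python
# Editor's sketch of the coloring data model: colors attach to slivers
# (nodes and links), each carrying key-value pairs, a text blob, and an
# XML blob, and colored dependencies link pairs of slivers.
from dataclasses import dataclass, field

@dataclass
class Color:
    label: str
    keys: dict = field(default_factory=dict)  # any number of key-value pairs
    text: str = ""                            # a blob of text
    xml: str = ""                             # a blob of XML

@dataclass
class ColorDependency:
    from_sliver: str  # node or link name
    to_sliver: str
    color: Color

# Example: a measurement role on a node and a colored dependency.
role = Color("gemini", keys={"role": "measurement-point"})
dep = ColorDependency("Node0", "Link0", Color("monitors"))
```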

The new version of Flukes supports adding color-labeled properties to nodes and links and the creation of colored dependencies between elements of the slice, also with properties.

There is a matching RSpec coloring extension schema defined here: http://www.geni.net/resources/rspec/ext/color/2/color.xsd

The initial application of this extension is to allow GEMINI and GIMI to specify the measurement roles of the slivers in a slice in RSpec. However, it was designed to be general, allowing other relationships and attributes to be specified without additional special-case effort from the aggregates to support them.