Using perfSONAR in ExoGENI


Special thanks go to Brian Tierney of LBL/ESnet for his help in creating the perfSONAR image.

This post describes how to use a perfSONAR image in ExoGENI slices. The image built for this blog post is now posted in the ExoGENI Image Registry and available in Flukes.

Name: psImage-v0.3
Hash: e45a2c809729c1eb38cf58c4bff235510da7fde5

Note that we are using a Level 2 perfSONAR image built from a CentOS 6.6 base image with a modified ps_light Docker container from ESnet. However, registration with the perfSONAR lookup service is disabled in this image.

Theory of operation

The perfSONAR image uses Docker to deploy its components. The following elements are included in the image:

  • Client programs for nuttcp, iperf, iperf3, bwctl and owamp, installed as ordinary RPMs accessible to all users
  • Server programs for bwctl and owamp, running inside a Docker container

The image starts Docker on boot, loads the needed Docker images, and automatically launches the ‘ps_light_xo’ container with the server programs in it.

-bash-4.1# docker ps
CONTAINER ID        IMAGE                COMMAND                CREATED             STATUS              PORTS               NAMES
ba28266c1aec        ps_light_xo:latest   "/bin/sh -c '/usr/bi   6 minutes ago       Up 6 minutes                            suspicious_lovelace  

Under normal operation the user should not have to interact with the server programs – the container runs in host network mode, so the server programs listen on all the interfaces the VM may have. However, if needed, the user can gain access to the container running the server programs using the following command:

$ docker exec -ti <guid> /bin/bash

where ‘<guid>’ refers to the container ID of the automatically started container. You can find the ID by issuing this command:

$ docker ps
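If you want to script this step, the container ID can be extracted from `docker ps` output directly. A minimal sketch, assuming the output format shown above (the `ps_light_cid` helper name is mine, not part of the image):

```shell
# Hypothetical helper: read `docker ps` output on stdin and print the
# container ID of the ps_light_xo container.
ps_light_cid() {
  awk '/ps_light_xo/ {print $1}'
}

# On a live node this could be combined with docker exec:
#   docker exec -ti "$(docker ps | ps_light_cid)" /bin/bash
```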

Using the image

You can create a topology using the perfSONAR image (listed in the Image Registry and above) and then run the client programs on some nodes against the server programs on other nodes. Since the image has both client and server programs, measurements can be made in either direction as long as IP connectivity is assured.
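Before starting measurements it can help to confirm that connectivity, since every tool below depends on it. A small sketch (the `preflight` helper name and the idea of wrapping ping are mine; substitute an address from your own slice):

```shell
# Hypothetical pre-flight check: ping the far node's data-plane IP
# before running any measurement tools.
preflight() {
  if [ -n "$1" ]; then
    ping -c 3 "$1"
  else
    echo "usage: preflight <far-node-ip>"
    return 1
  fi
}
```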

Once the slice has booted try a few client programs:

-bash-4.1# owping
Approximately 13.0 seconds until results available

--- owping statistics from []:8852 to []:8966 ---
SID:	ac100002d8c6ba8674af285470d65b0b
first:	2015-04-01T14:42:15.627
last:	2015-04-01T14:42:25.314
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = -0.496/-0.4/-0.144 ms, (err=3.9 ms)
one-way jitter = 0.2 ms (P95-P50)
TTL not reported
no reordering

--- owping statistics from []:8938 to []:8954 ---
SID:	ac100001d8c6ba867d50999ce0a1166f
first:	2015-04-01T14:42:15.553
last:	2015-04-01T14:42:24.823
100 sent, 0 lost (0.000%), 0 duplicates
one-way delay min/median/max = 1.09/1.3/1.5 ms, (err=3.9 ms)
one-way jitter = 0.2 ms (P95-P50)
TTL not reported
no reordering


-bash-4.1# bwctl -c
bwctl: Using tool: iperf
bwctl: 16 seconds until test results available

Server listening on TCP port 5578
Binding to local address
TCP window size: 87380 Byte (default)
[ 15] local port 5578 connected with port 59083
[ ID] Interval       Transfer     Bandwidth
[ 15]  0.0-10.0 sec  1356333056 Bytes  1081206753 bits/sec
[ 15] MSS size 1448 bytes (MTU 1500 bytes, ethernet)
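Since the image also ships iperf3 and nuttcp clients, bwctl can drive those tools as well. A sketch of a small wrapper, assuming standard bwctl flags (`-T` tool, `-t` duration, `-c` receiving host); the `bw_test` name is my own, and the far-node address must come from your slice:

```shell
# Hypothetical wrapper: run a 10-second bwctl test with a chosen tool
# against a far node, e.g.:  bw_test iperf3 10.0.0.2
bw_test() {
  if [ $# -ne 2 ]; then
    echo "usage: bw_test <tool> <far-node-ip>"
    return 1
  fi
  bwctl -T "$1" -t 10 -c "$2"
}
```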


Things to note

OWAMP in particular is sensitive to clock accuracy, which is why the VMs come packaged with ntpd started on boot. However, this does not solve all the problems. Measuring jitter in a VM may produce unpredictable results because the VM shares cores with other VMs on the same worker node. While in ExoGENI we do not oversubscribe cores, we also do not (yet) do any core pinning when placing VMs inside workers, which means timing artifacts may occur when VMs switch cores.
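One way to sanity-check the clock before trusting OWAMP numbers is to look at the offset ntpd reports for its selected peer. A small sketch (the `ntp_offset_ms` helper name is mine; it assumes the standard `ntpq -pn` column layout, where the selected peer's line starts with ‘*’ and offset is the next-to-last field):

```shell
# Hypothetical helper: read `ntpq -pn` output on stdin and print the
# offset in ms of the currently selected NTP peer (the line marked '*').
ntp_offset_ms() {
  awk '/^\*/ {print $(NF-1)}'
}

# On a node:  ntpq -pn | ntp_offset_ms
```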

The end result is that while jitter measurements using OWAMP may have high resolution, their accuracy should be questioned. To improve accuracy, try using larger instance sizes, such as XOLarge and XOExtraLarge.
