Images with Network IDS Tools

Two new images with network IDS tools have been added to the ExoGENI Image Registry. Either image can be used to deploy network IDS tools into slices:

  • Centos 7.4 v1.0.4 BRO
  • Ubuntu 14.04 Security Onion

The “Bro Network Security Monitor” is a framework that can be used to monitor network traffic. It has built-in analyzers that inspect the traffic for all kinds of activity. The Bro web site includes documentation.

Security Onion is a Linux distro for intrusion detection, network security monitoring, and log management. It’s based on Ubuntu and contains Snort, Suricata, Bro, OSSEC, Sguil, Squert, ELSA, Xplico, NetworkMiner, and many other security tools. Detailed information can be found on its wiki and web site.

1. Configuration of a Bro instance from the image “Centos 7.4 v1.0.4 BRO”
Bro v2.5.2 is built from source with pf_ring support and installed under the /opt directory.

A minimal starting configuration can be made by modifying the /opt/bro/etc/node.cfg and /opt/bro/etc/broctl.cfg files.
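
node.cfg defines the node layout (examples below), while broctl.cfg holds global BroControl options. Two entries that are commonly adjusted there are shown with illustrative values; the exact option set can be confirmed in the file shipped with the image:

# /opt/bro/etc/broctl.cfg (excerpt; values are examples only)
MailTo = root@localhost
LogRotationInterval = 3600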

Standalone:

[root@bro ~]# cat /opt/bro/etc/node.cfg 
# Example BroControl node configuration.
#
# This example has a standalone node ready to go except for possibly changing
# the sniffing interface.

# This is a complete standalone configuration.  Most likely you will
# only need to change the interface.
[bro]
type=standalone
host=localhost
interface=eth0

Cluster (multiple workers with pf_ring):

[root@bro ~]# cat /opt/bro/etc/node.cfg 
[manager]
type=manager
host=localhost
#
[proxy]
type=proxy
host=localhost

[bro-eth1]
type=worker
host=localhost
interface=eth1
lb_method=pf_ring
lb_procs=5
#pin_cpus=1,3
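
Bro also needs to know which address ranges to treat as local; they are listed in /opt/bro/etc/networks.cfg, from which broctl generates local-networks.bro during deploy. A minimal example, assuming the slice dataplane uses 172.16.0.0/16:

[root@bro ~]# cat /opt/bro/etc/networks.cfg
# List of local networks in CIDR notation, optionally followed by a description.
172.16.0.0/16     Slice dataplane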

Deploy configuration and start Bro:

[root@bro ~]# broctl deploy
checking configurations ...
installing ...
removing old policies in /opt/bro/spool/installed-scripts-do-not-touch/site ...
removing old policies in /opt/bro/spool/installed-scripts-do-not-touch/auto ...
creating policy directories ...
installing site policies ...
generating cluster-layout.bro ...
generating local-networks.bro ...
generating broctl-config.bro ...
generating broctl-config.sh ...
stopping ...
bro-eth1-1 not running
bro-eth1-2 not running
proxy not running
manager not running
starting ...
starting manager ...
starting proxy ...
starting bro-eth1-1 ...
starting bro-eth1-2 ...

[root@bro ~]# broctl status
Name         Type    Host             Status    Pid    Started
manager      manager localhost        running   1307   25 Oct 15:46:02
proxy        proxy   localhost        running   1348   25 Oct 15:46:04
bro-eth1-1   worker  localhost        running   1399   25 Oct 15:46:05
bro-eth1-2   worker  localhost        running   1401   25 Oct 15:46:05
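
To confirm that the workers are actually seeing and logging traffic, the live logs can be watched (assuming the default /opt/bro prefix):

[root@bro ~]# tail -f /opt/bro/logs/current/conn.log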

2. Configuration of a Security Onion instance from the image “Ubuntu 14.04 Security Onion”

After deploying the VM, log in with SSH X11 forwarding and run sosetup.

$ ssh -Y -i ~/.ssh/id_rsa root@147.72.248.6
... [output omitted] ...

root@so-1:~# sosetup

Follow the prompts.


The next window, about network interface configuration, can be skipped since we will not change the management interface. However, if configuration does need to be done through this window, eth0 should be selected as the management interface, using the VM’s current IP address (from the 10.103.0.0/24 subnet), netmask 255.255.255.0, and default gateway 10.103.0.1.


Details about the server configuration can be found on the wiki. This sample configuration selects “Evaluation Mode”.


Dataplane interfaces (eth1, eth2 … ) can be selected for monitoring.

A local user account needs to be created to access Sguil, Squert, and ELSA.

Configuration changes will be committed.


Information messages pop up.


The firewall needs to be configured to allow connections to the instance. This should be done after sosetup has completed.


Configure the firewall for access to the instance (the responses typed at the prompts are shown in the transcript below):

root@so-1:~# so-allow
This program allows you to add a firewall rule to allow connections from a new IP address.

What kind of device do you want to allow?

[a] - analyst - ports 22/tcp, 443/tcp, and 7734/tcp
[c] - apt-cacher-ng client - port 3142/tcp
[l] - syslog device - port 514
[o] - ossec agent - port 1514/udp
[s] - Security Onion sensor - 22/tcp, 4505/tcp, 4506/tcp, and 7736/tcp

If you need to add any ports other than those listed above,
you can do so using the standard 'ufw' utility.

For more information, please see the Firewall page on our Wiki:
https://github.com/Security-Onion-Solutions/security-onion/wiki/Firewall

Please enter your selection (a - analyst, c - apt-cacher-ng client, l - syslog, o - ossec, or s - Security Onion sensor):
a
Please enter the IP address of the analyst you'd like to allow to connect to port(s) 22,443,7734:
152.54.9.188
We're going to allow connections from 152.54.9.188 to port(s) 22,443,7734.

Here's the firewall rule we're about to add:
sudo ufw allow proto tcp from 152.54.9.188 to any port 22,443,7734

We're also whitelisting 152.54.9.188 in /var/ossec/etc/ossec.conf to prevent OSSEC Active Response from blocking it.  Keep in mind, the OSSEC server will be restarted once configuration is complete.

To continue and add this rule, press Enter.
Otherwise, press Ctrl-c to exit.
PRESS ENTER
Rule added
Rule has been added.

Here is the entire firewall ruleset:
Status: active

To                         Action      From
--                         ------      ----
22/tcp                     ALLOW       Anywhere
22,443,7734/tcp            ALLOW       152.54.9.188
22/tcp (v6)                ALLOW       Anywhere (v6)


Added whitelist entry for 152.54.9.188 in /var/ossec/etc/ossec.conf.

Restarting OSSEC Server...
Deleting PID file '/var/ossec/var/run/ossec-remoted-5006.pid' not used...
Killing ossec-monitord .. 
Killing ossec-logcollector .. 
ossec-remoted not running ..
Killing ossec-syscheckd .. 
Killing ossec-analysisd .. 
ossec-maild not running ..
Killing ossec-execd .. 
Killing ossec-csyslogd .. 
OSSEC HIDS v2.8 Stopped
Starting OSSEC HIDS v2.8 (by Trend Micro Inc.)...
Started ossec-csyslogd...
2017/10/25 16:20:23 ossec-maild: INFO: E-Mail notification disabled. Clean Exit.
Started ossec-maild...
Started ossec-execd...
Started ossec-analysisd...
Started ossec-logcollector...
Started ossec-remoted...
Started ossec-syscheckd...
Started ossec-monitord...
Completed.

Check the status of the services:

root@so-1:~# service nsm status
Status: securityonion
  * sguil server                                                                           [  OK  ]
Status: HIDS
  * ossec_agent (sguil)                                                                    [  OK  ]
Status: Bro
Name         Type       Host          Status    Pid    Started
bro          standalone localhost     running   7390   25 Oct 16:14:13
Status: so-1-eth1
  * netsniff-ng (full packet data)                                                         [  OK  ]
  * pcap_agent (sguil)                                                                     [  OK  ]
  * snort_agent-1 (sguil)                                                                  [  OK  ]
  * snort-1 (alert data)                                                                   [  OK  ]
  * barnyard2-1 (spooler, unified2 format)                                                 [  OK  ]
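
More detailed per-component status (sensor processes, disk usage, and so on) is available from Security Onion’s sostat utility, assuming the default tooling is present on the image:

root@so-1:~# sostat | less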

The web UI can be accessed through the public IP address of the VM; Squert and ELSA can be reached from the links on that page.


Jumbo Frame Support on Dataplane Interfaces

The ExoGENI testbed supports jumbo frames on dataplane interfaces across sites. (Currently, all racks except the UMass and WVN racks support jumbo frames; the UMass and WVN interfaces will be configured in the coming weeks.)

VMs are created with dataplane interfaces that have an MTU of 1500 bytes. Underneath, the bridges and physical interfaces along the path are configured for an MTU of 9000. Currently, the neuca tools do not have an option to set the MTU, so it needs to be changed from inside the VM.


On node0:

root@Node0:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr FA:16:3E:00:21:09  
          inet addr:172.16.0.1  Bcast:172.16.0.3  Mask:255.255.255.252
          inet6 addr: fe80::f816:3eff:fe00:2109/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1922 (1.8 KiB)  TX bytes:378 (378.0 b)

root@Node0:~# ifconfig eth1 mtu 9000

On node1:

root@Node1:~# ifconfig eth1
eth1      Link encap:Ethernet  HWaddr FA:16:3E:00:16:BD  
          inet addr:172.16.0.2  Bcast:172.16.0.3  Mask:255.255.255.252
          inet6 addr: fe80::f816:3eff:fe00:16bd/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:7 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:530 (530.0 b)  TX bytes:398 (398.0 b)

root@Node1:~# ifconfig eth1 mtu 9000

Now, jumbo frames can be exchanged:

root@Node0:~# ping -M do -s 8972 -c 3 172.16.0.2
PING 172.16.0.2 (172.16.0.2) 8972(9000) bytes of data.
8980 bytes from 172.16.0.2: icmp_seq=1 ttl=64 time=114 ms
8980 bytes from 172.16.0.2: icmp_seq=2 ttl=64 time=56.9 ms
8980 bytes from 172.16.0.2: icmp_seq=3 ttl=64 time=56.7 ms

--- 172.16.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2059ms
rtt min/avg/max/mdev = 56.732/76.150/114.798/27.329 ms

-M do : prohibits fragmentation (sets the “do not fragment” flag)
-s : sets the packet size. 8972 bytes (ICMP payload) + 8 bytes (ICMP header) + 20 bytes (IP header) = 9000 bytes
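
Note that an MTU set with ifconfig lasts only until the interface is reconfigured. If the setting should survive an interface restart, it can also be placed in the interface configuration. A minimal sketch for an Ubuntu/Debian-style image, assuming eth1 is defined statically in /etc/network/interfaces (on ExoGENI the neuca tools normally manage these addresses, so treat this only as an illustration):

# /etc/network/interfaces (excerpt; illustrative values)
auto eth1
iface eth1 inet static
    address 172.16.0.1
    netmask 255.255.255.252
    mtu 9000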

Creating Windows images on ExoGENI

This post describes the creation and configuration of a Windows image; virtual machines running a Windows OS can be used on ExoGENI. As mentioned in this post, ExoGENI does not host VM images, so Windows images should be created by the users. This can be done on any platform; ExoGENI baremetal servers can be used for image creation as well. The steps for installing KVM on a baremetal server to use it as a virtualization platform for image creation are also described below.

  • Activation of the OS should be managed by the user.
  • Dataplane interfaces can be created by ORCA; however, configuration of the interfaces is not supported yet.
  • Attaching iSCSI storage is not supported yet.
  • Upon creation of the VM, IP address assignment and other network configuration for dataplane interfaces should be done manually from inside the VM.

Steps to create and deploy an image are as below:

  1. Install virtualization platform for image creation
  2. Install and customize the OS
  3. Create and deploy the image

Virtualization Platform Installation

1. Create a slice with one baremetal server. The baremetal server will be used as the hypervisor to provision an instance; the instance’s image file will then be converted and deployed to a web server or image registry so that VMs can be provisioned on ExoGENI.

2. Install KVM and the virtualization packages.

yum update -y
yum install qemu-kvm qemu-img -y
yum groupinstall virtualization-client virtualization-platform virtualization-tools -y
service libvirtd start
yum install vnc -y

3. Install the RPMs needed for X11 forwarding. We will need to launch a VNC viewer to access the VM.

yum install -y xorg-x11-xauth xorg-x11-fonts-* xorg-x11-utils
touch /root/.Xauthority

4. On ExoGENI, baremetal servers boot from “stateless images” and the OS runs on a ramdisk. Each server has two hard drives that can be partitioned and mounted after the server is provisioned. (Another option is attaching storage to the server during slice creation.) We will partition and mount the physical drives to provide storage for the hypervisor. (If partitions are already defined, either re-use them or delete them, re-partition, and create a new filesystem.)

[root@Node0 ~]# fdisk /dev/sda
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x77e3f885.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): p

Disk /dev/sda: 299.0 GB, 298999349248 bytes
255 heads, 63 sectors/track, 36351 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x77e3f885

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-36351, default 1): 
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-36351, default 36351): 
Using default value 36351

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@Node0 ~]# fdisk  -l

Disk /dev/sda: 299.0 GB, 298999349248 bytes
255 heads, 63 sectors/track, 36351 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x77e3f885

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1       36351   291989376   83  Linux

[root@Node0 ~]# mkfs.ext4 /dev/sda1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
18251776 inodes, 72997344 blocks
3649867 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
2228 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Create directories and mount the hard drive:

[root@Node0 ~]# mkdir /opt/kvm
[root@Node0 ~]# mount /dev/sda1 /opt/kvm
[root@Node0 ~]# mkdir /opt/kvm/iso
[root@Node0 ~]# mv /var/lib/libvirt /opt/kvm/.
[root@Node0 ~]# ln -s /opt/kvm/libvirt /var/lib/libvirt

5. Create the bridge interface that will be used by KVM. During this image creation process we will not need a network connection to the VM, but the bridge interface can be used to access the VM if needed. The server’s public interface will be bridged to access the VM, so we need to create the bridge and configure it with the public interface. (The interface name depends on the rack type: on UCS-B series ExoGENI racks it is eth0, whereas on IBM racks it is em1.)

Create a script, then execute it:

#!/bin/bash
### Configure br0 on the baremetal server

PHYS_IF="eth0"
BR_IF="br0"
IPADDR=$(ifconfig $PHYS_IF | grep "inet addr" | awk '{print $2}' | cut -d: -f2)
NETMASK=$(ifconfig $PHYS_IF | grep "inet addr" | awk '{print $4}' | cut -d: -f2)
GATEWAY=$(ip route show | grep default | awk '{print $3}')

brctl addbr ${BR_IF}
ifconfig ${PHYS_IF} 0.0.0.0 down
brctl addif ${BR_IF} ${PHYS_IF}

ifconfig ${PHYS_IF} up
ifconfig ${BR_IF} ${IPADDR} netmask ${NETMASK} up
route add -net default gw ${GATEWAY}
[root@Node0 ~]# chmod +x  configure_bridge.sh 
[root@Node0 ~]# ./configure_bridge.sh
[root@Node0 ~]# ifconfig -a
br0       Link encap:Ethernet  HWaddr 00:25:B5:00:02:7F  
          inet addr:10.101.0.16  Bcast:10.101.0.255  Mask:255.255.255.0
          inet6 addr: fe80::225:b5ff:fe00:27f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:21 errors:0 dropped:0 overruns:0 frame:0
          TX packets:19 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:1444 (1.4 KiB)  TX bytes:1750 (1.7 KiB)

eth0      Link encap:Ethernet  HWaddr 00:25:B5:00:02:7F  
          inet6 addr: fe80::225:b5ff:fe00:27f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:819589 errors:0 dropped:0 overruns:0 frame:0
          TX packets:187926 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1206755994 (1.1 GiB)  TX bytes:16406110 (15.6 MiB)

eth1      Link encap:Ethernet  HWaddr 00:25:B5:00:02:4F  
          inet6 addr: fe80::225:b5ff:fe00:24f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 b)  TX bytes:888 (888.0 b)

eth2      Link encap:Ethernet  HWaddr 00:25:B5:00:02:5F  
          inet6 addr: fe80::225:b5ff:fe00:25f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9000  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 b)  TX bytes:888 (888.0 b)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:A1:FD:F2  
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

virbr0-nic Link encap:Ethernet  HWaddr 52:54:00:A1:FD:F2  
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

6. Copy the Windows installer ISO and the VirtIO drivers for Windows to the server. (The Windows installer needs to be provided by the user.)

wget http://geni-images.renci.org/images/windows/virtio-win-0.1-22.iso

7. Create the instance:

[root@Node0 ~]# qemu-img create -f qcow2 /var/lib/libvirt/images/win7.qcow 20G
Formatting '/var/lib/libvirt/images/win7.qcow', fmt=qcow2 size=21474836480 encryption=off cluster_size=65536 

[root@Node0 ~]# /usr/libexec/qemu-kvm -m 2048 -cdrom /opt/kvm/iso/Win7Enterprise-64bit.iso -drive file=/var/lib/libvirt/images/win7.qcow,if=virtio -drive file=/opt/kvm/iso/virtio-win-0.1-22.iso,index=3,media=cdrom -net nic,model=virtio -net user -nographic -usbdevice tablet -vnc :9 -enable-kvm 

In another terminal, log in with X11 forwarding:

ssh -Y -i ~/.ssh/id_geni_ssh_mcevik_rsa root@139.62.242.122

Add an iptables rule for the VNC connection (port 5909):

iptables -A INPUT -p tcp --dport 5909 -j ACCEPT

Connect to the instance via VNC:

[root@Node0 ~]# vncviewer 127.0.0.1:9

Windows OS Installation and Customization

Install the OS, selecting the VirtIO storage driver from the driver ISO attached to the instance.


Switch to audit mode with CTRL+SHIFT+F3.


Install the VirtIO network driver.


Select “Work Network”.


Download and install Firefox.


Download and install Cloudbase-Init.


Run the System Preparation Tool (sysprep) with the appropriate settings.


Reboot the instance.


Log in while in audit mode:

Create a user account “exogeni” (Administrator).

Configure the user account to log on automatically:
– Run netplwiz
– In the User Accounts dialog box, clear the “Users must enter a user name and password to use this computer” check box and enter the user’s password.

Reboot and verify the automatic login.

Enable Remote Desktop and disable Remote Assistance.


Configure Windows Firewall:

In Advanced settings, disable the “Network Discovery” rules.


Create a new rule to allow incoming and outgoing ICMP traffic for pinging:
– Rule Type: Custom
– Program: All programs
– Protocols and ports: ICMPv4 – All ports
– Scope: Any IP address for both local and remote IP addresses
– Action: Allow connection
– Profile: Domain, private, public selected
– Name: PING

Configure “restarts” after automatic updates: To complete installation of the network drivers when the VM is launched, enable automatic restart after updates.
– Run gpedit.msc
– Local Group Policy Editor: Local Computer Policy, Computer Configuration, Administrative Templates, Windows Components, Windows Update


Shut down the instance. All customizations are saved to the qcow image.

Image creation and deployment

After customization, the qcow2 image needs to be converted to a raw image and compressed:

qemu-img convert -O raw win7.qcow win7.raw.img
gzip win7.raw.img

Generate the metadata file:

<images>
    <image>
        <type>ZFILESYSTEM</type>
        <signature>3d4013f0ce337fb619747ebed282de374de464e3</signature>
        <url>http://WEBSERVER/image-windows/win7.img.gz</url>
    </image>
</images>
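
The <signature> value is a checksum of the image file referenced by <url>; the 40-character value above is a SHA-1 digest. Assuming that convention, it can be generated from the compressed image (win7.raw.img.gz above, uploaded under whatever name is used in <url>) before placing the file on the web server:

# Compute the signature for the metadata file (assumed to be the SHA-1 digest of the uploaded image)
sha1sum win7.raw.img.gz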

After the slice is created, you can connect to the VM with Remote Desktop and assign IP addresses to the dataplane interfaces. The OS also needs to be activated with a valid license key.


Securing Your Slice – Part 1

This post is intended to outline the fundamental security precautions for the virtual and baremetal servers that are launched on the ExoGENI testbed.

ExoGENI provides experimenters the ability to use their own images when creating slices and launching virtual machines. Testbed infrastructure is well-isolated from the slivers, providing a lot of flexibility to the experimenters on virtual and baremetal servers. At the same time, since individual slivers are created with interfaces on hosting campus IP networks, each virtual or baremetal server should be carefully administered during or after slice creation to ensure proper security measures are taken.

Security is a highly complicated area of concern in server administration. However, we want to outline the fundamental measures for protecting virtual or baremetal servers that have public internet access. These measures should be added on top of the default OS installation to restrict and control access to the servers and to minimize the risk of becoming a vulnerable hot spot within the campus network.

In this post, we describe the minimally necessary security measures for a Linux server regardless of the flavor (CentOS, Ubuntu, Debian, etc.), and provide example scripts and commands that can be used during slice creation. (These scripts and commands are based on the CentOS 6 distribution; equivalent commands need to be gathered for Ubuntu, Debian, etc.)

The “Post Boot Scripts” feature of ORCA can be used to inject security configuration into nodes or node groups. The details about post boot scripts and templates on this page are a valuable resource for configuring virtual or baremetal servers, as well as for ExoGENI’s rich scripting and templating capabilities.

 

1. Servers should include up-to-date packages

After the VM is created, packages can be updated with the portion of the postboot script below:

yum -y update

For kernel updates, rebooting the server is needed. However, VM nodes on the ExoGENI testbed are not safely rebootable: because of complexities introduced by the virtualization infrastructure and udev schemes, connectivity with the virtual machines cannot be ensured after rebooting. We do not suggest rebooting servers on the ExoGENI testbed until we implement a fix for network device handling; this will be explained in following posts. Rebooting may still be needed for system library updates such as glibc. If there is a security concern and an update is needed for such packages, then updating the image, saving it, and using that image for slice creation may be necessary.

It is a best practice to upload updated image files (kernel, ramdisk, filesystem) to the web-server and boot up the virtual machines by using the up-to-date images.

 

2. User Authentication

SSH public keys are injected into the virtual or baremetal servers during slice creation. In addition, password authentication for remote root login over SSH should be disallowed by editing the line shown below in the sshd_config file:

PermitRootLogin without-password

The sshd_config file can be updated with the portion of the postboot script below:

sed -i 's/\#PermitRootLogin yes/PermitRootLogin without-password/g' /etc/ssh/sshd_config
service sshd reload

 

3. Firewall configuration

Inbound and outbound traffic can be controlled by a firewall such as iptables.

Iptables uses built-in tables (mangle, filter, nat) to process packets. Each table has a group of chains that represent the actions to be performed on a packet; the built-in chains in the filter table are INPUT, OUTPUT, and FORWARD. Rules are added to the chains, and a packet is checked against each rule in turn, from top to bottom. When a packet matches a rule, an action is taken to ACCEPT or DROP it, and no further processing is done (so the order of the rules is significant). If a packet passes down through all of the rules in a chain and reaches the bottom without matching any of them, the default policy for that chain is applied to the packet.

Hardening a server with iptables should be taken seriously. IP addresses or networks (both source and destination) as well as ports should be specified to allow traffic for the required connections and reject all other traffic. Default firewall policy and some kernel parameters need to be adjusted, too. Although many useful resources about hardening servers and firewall configuration can be found on the internet, we will prepare a dedicated post to elaborate on this topic for a baseline configuration that can be used for most of the slices. This page can be used as a good starting point to learn about iptables.

Below, we provide a basic set of rules that allows all outgoing connections and blocks all unwanted incoming connections:

# Flush all existing rules
iptables -F

# Set default policy on each chain
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Accept incoming packets destined for localhost interface
iptables -A INPUT -i lo -j ACCEPT

# Accept incoming packets that are part of/related to an already established connection
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Accept incoming packets for SSH connections
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# Accept incoming packets that belong to icmp protocol
iptables -A INPUT -p icmp -j ACCEPT

If no firewall rules are present, a basic set of rules can be created and activated with the portion of the postboot script below:

iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
service iptables save

Firewall rules can be checked as below:

-bash-4.1# iptables -nvL
Chain INPUT (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
   38  4071 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED 
    0     0 ACCEPT     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0           tcp dpt:22 
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 24 packets, 2254 bytes)
 pkts bytes target     prot opt in     out     source               destination     

 

4. Network services access control

Controlling access to network services can be an important task to provide a balance between flexibility and security within the research-oriented test environment. TCP wrappers is a mechanism used to allow or deny hosts access to the network services. Access files (/etc/hosts.allow and /etc/hosts.deny) are used to determine whether a client is allowed or not. Details on this page can be considered as a valuable resource for configuration.

One common use case is to restrict access to the portmap service when NFS is used within the slice. Since NFS relies on the portmap service, which is a dynamic port assignment daemon for RPC services, information about the running services is revealed and can be obtained with an “rpcinfo” request against the server.

It is critical to restrict access to the portmap service through the public interface and to allow access only from the data-plane network. Also, data-plane IP addresses, not public IP addresses, should be used in /etc/hosts, /etc/exports, /etc/fstab, and other files relevant to the NFS configuration.
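
For example, an export restricted to the data-plane network might look like the following in /etc/exports (an illustration only; the exported path and options depend on the slice):

# /etc/exports on the NFS server: export only to the data-plane network
/export    172.16.0.0/16(rw,sync,no_root_squash)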

Steps below should be taken to configure an NFS server:

– On the NFS server host, add the line below to /etc/hosts.deny to allow “rpcinfo” queries only from the data-plane network

rpcbind: ALL EXCEPT <DATAPLANE NETWORK>

Example:

rpcbind: ALL EXCEPT 172.16.0.0/255.255.0.0

– On NFS clients, there is no need to run the rpcbind service; it can be disabled as shown below.
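
On CentOS 6 (the distribution these examples are based on), assuming rpcbind is managed as a standard init service, the following commands stop and disable it:

service rpcbind stop
chkconfig rpcbind off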

The network service access rule can be added with the portion of the postboot script below (note that the data-plane network should be replaced with the network used within the slice):

cat << EOF >> /etc/hosts.deny
rpcbind: ALL EXCEPT 172.16.0.0/255.255.0.0
EOF

– Check the RPC information that the NFS server reveals, from a client:

# No RPC information is returned for the query from the public interface of the NFS server (192.1.242.62)

bash-4.1# rpcinfo -p 192.1.242.62
rpcinfo: can't contact portmapper: RPC: Authentication error; why = Client credential too weak

# RPC information is returned for the query from the data-plane interface of the NFS server (172.16.0.5)

-bash-4.1# rpcinfo -p 172.16.0.5
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    1   udp  54381  mountd
    100003    3   tcp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  45462  nlockmgr
    (Sample output shown. Some output omitted)

It is evident that all ExoGENI Testbed users are well aware of the importance of security. A major concern is that security-related problems can trigger restrictions and degradations on the infrastructure. The proposed measures should be applied to every slice as a primary task. Taking precautions to secure your slices will provide a reasonably secure environment both for your valuable data and work and for the rest of the world.

 

Appendix:

#!/bin/bash

sed -i 's/\#PermitRootLogin yes/PermitRootLogin without-password/g' /etc/ssh/sshd_config
service sshd reload

iptables -F
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p icmp -j ACCEPT
service iptables save

cat << EOF >> /etc/hosts.deny
rpcbind: ALL EXCEPT 172.16.0.0/255.255.0.0
EOF

yum -y update