
*Draft* An Intro to OpenFlow@ORBIT

This page is meant to get you up and running quickly with OpenFlow-related experiments/development on the ORBIT testbeds.

I. A simple OpenFlow Network

We begin with a simple setup of a Mininet network controlled by a controller (Floodlight) running on a separate Sandbox node, which looks like this:

           network
   node1-1  link   node1-2
  [Mininet]------[Floodlight]

1.1 Some prerequisites - Using the prepackaged node image

To make things easier, we provide images pre-installed with several potentially useful packages: Floodlight, Mininet, Cbench, and liboftrace (each is covered in Section II).

This makes things easy since you can image multiple nodes with the same image, and pick and choose what to run where.

The image is named of-pkg.ndz. omf can be used to image nodes with it:

$ omf load -i of-pkg.ndz
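
If you only want to image a subset of the nodes, omf load also takes a target node list via -t (an assumption here; check the omf usage output for your OMF version):

$ omf load -i of-pkg.ndz -t node1-1,node1-2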

The nodes will be powered off once they are imaged. Turn them on:

$ omf tell -a on

Once on, you can log into them as root using their names, e.g. node1-1.

1.1.1 node/Sandbox layout

When you log onto a Sandbox, you are logged into the console machine, from which you can use omf and similar tools to image, log into, and manage the nodes.

Each node (save those on Sandbox4) has two interfaces. The first, eth1, is the control interface used by the console to manage the nodes. It is assigned an IP address of the form 10.1x.y.z, where x is the sandbox number and y.z is the node number; e.g. node1-2 on Sandbox8 is 10.18.1.2. Do not take down this interface or change its address - you will lose your connection to the node. The second, eth0, is down by default and is open to any kind of use. Both are gigabit links and can be used for experimentation, but in general eth0 should be used unless there are specific circumstances.

1.1.2 managing/configuring nodes

Nodes are managed by logging into them as root over SSH. Logging into each node is fine, but it becomes cumbersome when you have many nodes that all need the exact same setup. In that case, commands may also be issued via SSH from the console, without manually logging into each node (and ending up with a dozen terminal windows):

user@console.sb8:~$ ssh -o StrictHostKeyChecking="no" root@node1-1 "command_to_run_1;command_to_run_2"

This runs command_to_run_1 and command_to_run_2 on node1-1 as if you had logged in and issued them at the shell.

Each command is delimited by a semicolon, and the full string is surrounded by double quotes. The -o StrictHostKeyChecking="no" stops SSH from checking host keys and is optional.
This can be used in a script to run from the console to quickly set up many nodes. We use it in some of the following examples to make it easier to show what is happening where.
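
For instance, a small loop over the node names applies the same setup everywhere; a minimal sketch (the node names and commands are placeholders):

user@console.sb8:~$ for n in node1-1 node1-2; do ssh -o StrictHostKeyChecking="no" root@$n "command_to_run_1;command_to_run_2"; done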

1.1.3 Installing your own tools

If you are interested in learning more about these packages or installing them yourself, refer to Section II, which gives a summary and quick setup instructions for each, along with links to more information.

1.2 Running the network

As a two-node example, we image the nodes on Sandbox8, as explained in Section 1.1. One is used for the controller, and the other, the Mininet network.

  1. Bring up and assign addresses to eth0 on the nodes. Both should be in the same IP block. If done from the console, the commands look like this:
    $ ssh root@node1-1 "ifconfig eth0 inet 192.168.1.1 up"
    $ ssh root@node1-2 "ifconfig eth0 inet 192.168.1.2 up"
    
    The nodes should now be able to ping each other via eth0:
    $ ssh root@node1-1 "ping -c 1 192.168.1.2"
    PING 192.168.1.2 (192.168.1.2) 56(84) bytes of data.
    64 bytes from 192.168.1.2: icmp_req=1 ttl=64 time=0.614 ms
    
    --- 192.168.1.2 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms
    
  2. Start the controller on one node. We arbitrarily pick node1-1. From a shell on node1-1, launch Floodlight:
    # cd floodlight
    # java -jar target/floodlight.jar
    
    After a few seconds, Floodlight should be listening on port 6633 on all of the node's interfaces (eth0, eth1, and lo). If you want, you can start tcpdump or something similar in a separate terminal on node1-1 to begin capturing the control messages arriving on eth0:
    # tcpdump -i eth0 port 6633 
    
    Alternatively, you can have tcpdump write to a .pcap file for later analysis with Wireshark and its OpenFlow plugin, or with ofdump and ofstats, which are part of liboftrace (see Section 2.4):
    # tcpdump -w outfile.pcap -i eth0 port 6633 
    
  3. Launch Mininet. From another shell on node1-2:
    # mn --topo=single,2 --controller=remote,ip=192.168.1.1
    
    This will give you a virtual network of two hosts and one switch pointed to the running Floodlight instance on node1-1. Once at the prompt, try pinging one host from the other (it should work):
    mininet> h1 ping h2
    PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
    64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=8.19 ms
    64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=0.164 ms
    64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.025 ms
    64 bytes from 10.0.0.2: icmp_req=4 ttl=64 time=0.024 ms
    ^C
    --- 10.0.0.2 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 2999ms
    rtt min/avg/max/mdev = 0.024/2.101/8.193/3.517 ms
    
    At the same time, you should see (lots of) packets being captured by tcpdump in node1-1's terminal:
    root@node1-1:~/floodlight# tcpdump -i eth0 port 6633
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
    20:18:30.188181 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [S], seq 3242563912, win 14600, options [mss 1460,sackOK,TS val 699854 ecr 0,nop,wscale 4], length 0
    20:18:30.188321 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [S.], seq 2665849071, ack 3242563913, win 14480, options [mss 1460,sackOK,TS val 700809 ecr 699854,nop,wscale 4], length 0
    20:18:30.188466 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [.], ack 1, win 913, options [nop,nop,TS val 699854 ecr 700809], length 0
    20:18:30.188618 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [F.], seq 1, ack 1, win 913, options [nop,nop,TS val 699854 ecr 700809], length 0
    20:18:30.190310 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [.], ack 2, win 905, options [nop,nop,TS val 700810 ecr 699854], length 0
    20:18:30.224204 IP 192.168.1.1.6633 > 192.168.1.2.41631: Flags [P.], seq 1:9, ack 2, win 905, options [nop,nop,TS val 700818 ecr 699854], length 8
    20:18:30.224426 IP 192.168.1.2.41631 > 192.168.1.1.6633: Flags [R], seq 3242563914, win 0, length 0
    20:18:30.402564 IP 192.168.1.2.41632 > 192.168.1.1.6633: Flags [S], seq 1611313095, win 14600, options [mss 1460,sackOK,TS val 699908 ecr 0,nop,wscale 4], length 0
    20:18:30.402585 IP 192.168.1.1.6633 > 192.168.1.2.41632: Flags [S.], seq 367168075, ack 1611313096, win 14480, options [mss 1460,sackOK,TS val 700863 ecr 699908,nop,wscale 4], length 0
    ...
    

Using Open vSwitch directly

Mininet's datapaths are backed by Open vSwitch (OVS), so if you have a Mininet install, you get OVS for "free". You can also use OVS directly for your data plane.
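
A minimal sketch of doing so, assuming the setup from Section 1.2 (Floodlight on node1-1 at 192.168.1.1): create a bridge with ovs-vsctl and point it at the controller.

ovs-vsctl add-br br0
ovs-vsctl set-controller br0 tcp:192.168.1.1:6633
ovs-vsctl show

Interfaces can then be attached to the bridge with ovs-vsctl add-port br0 <interface>.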

Multiple controller instances

You can launch multiple instances of Floodlight on one or more nodes. If you run several instances on a single node, the ports they use must not conflict, i.e. each instance must be assigned its own set of ports (the OpenFlow listen port, the REST API port, and so on).
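
One way to do this is sketched below, assuming Floodlight's -cf flag for supplying an alternate properties file; the exact property names for the ports vary by Floodlight version, so check src/main/resources/floodlightdefault.properties in your tree.

cp src/main/resources/floodlightdefault.properties instance2.properties
# edit instance2.properties so the OpenFlow listen port (default 6633)
# and the REST API port (default 8080) differ from the first instance, then:
java -jar target/floodlight.jar -cf instance2.properties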


II. Installation

The following are the installation steps and basic usage for the software found on the image. For more information, refer to their respective pages; Floodlight and Mininet in particular have very thorough docs.

Quick links:

2.1 Floodlight
2.2 Mininet
2.3 Cbench
2.4 liboftrace

2.1 Floodlight

docs: http://docs.projectfloodlight.org/display/floodlightcontroller/Floodlight+Documentation

For the most part, the following repeats some of the material found there. Truth be told, if you plan to modify or develop Floodlight, it is better to install it on a local machine where you can use Eclipse (either that, or you can try X11 forwarding, but that doesn't always go well).

dependencies

sudo apt-get install git-core build-essential default-jdk ant python-dev eclipse

installation

The following fetches and builds the latest stable release:

git clone git://github.com/floodlight/floodlight.git
cd floodlight
git checkout fl-last-passed-build
ant

To import as a project on Eclipse, run the following while in the same directory:

ant eclipse

run

Assuming everything worked out:

java -jar target/floodlight.jar

from the floodlight/ directory launches Floodlight. It will output a bunch of messages while it searches for, loads, and initializes modules. You can refer to the output attached below for what it should look like - there may be warnings, but they should be harmless.

This command launches Floodlight in the foreground, so either run it in a terminal multiplexer like screen or tmux, or tack a 1>logfile 2>&1 & onto the end to background it and capture its output. The former is probably recommended.
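
For example, using a detached screen session (a sketch; the session name is arbitrary):

screen -dmS floodlight java -jar target/floodlight.jar
screen -r floodlight   # reattach to check on it later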

development

Tutorials and other information can be found here: http://docs.projectfloodlight.org/display/floodlightcontroller/For+Developers

2.2 Mininet

website: http://mininet.org/
It is highly recommended to run through the docs, especially the FAQ.

If you post to the list before you have read the FAQ, you will likely just be asked whether you have checked it.

installation/build

The VM is the recommended way to run Mininet on your machine.
The following is for a native install (as on the node image).

The method differs for different versions of Ubuntu. The following is for 12.04; for other versions, refer to this page. It also takes care of the dependencies.

sudo apt-get install mininet/precise-backports

Then disable ovs-controller:

sudo service openvswitch-controller stop
sudo update-rc.d openvswitch-controller disable

You may also need to start Open vSwitch:

sudo service openvswitch-switch start

You can verify that it works with the following:

sudo mn --test pingall

This sets up a 2-host, 1-switch topology and pings between the hosts. The output looks similar to this:

*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 
*** Adding switches:
s1 
*** Adding links:
(h1, s1) (h2, s1) 
*** Configuring hosts
h1 h2 
*** Starting controller
*** Starting 1 switches
s1 
*** Ping: testing ping reachability
h1 -> h2 
h2 -> h1 
*** Results: 0% dropped (0/2 lost)
*** Stopping 2 hosts
h1 h2 
*** Stopping 1 switches
s1 ...
*** Stopping 1 controllers
c0 
*** Done
completed in 0.460 seconds

run

There are many flags and options associated with launching Mininet. mn --help will display them.
For example, to start the same topology as the pingall test, but with a controller running separately from Mininet:

# mn --topo=single,2 --controller=remote,ip=10.18.1.1 --mac
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 
*** Adding switches:
s1 
*** Adding links:
(h1, s1) (h2, s1) 
*** Configuring hosts
h1 h2 
*** Starting controller
*** Starting 1 switches
s1 
*** Starting CLI:
mininet>
  • --topo=single,2 : one switch with two hosts
  • --controller=remote,ip=10.18.1.1 : controller at 10.18.1.1
  • --mac : non-random MAC addresses

Some useful ones are:

  • controller external to Mininet, at IP addr and port p:
    --controller=remote,ip=[addr],port=[p] 
    
  • non-random host MAC addresses (starting at 00:00:00:00:00:01 for h1)
    --mac
    

usage

You can list the available commands by typing ? at the prompt. exit quits Mininet.
Some basic examples:

  • display topology:
    mininet> net
    c0
    s1 lo:  s1-eth1:h1-eth0 s1-eth2:h2-eth0
    h1 h1-eth0:s1-eth1
    h2 h2-eth0:s1-eth2
    
  • display host network info:
    mininet> h1 ifconfig
    h1-eth0   Link encap:Ethernet  HWaddr 00:00:00:00:00:01  
              inet addr:10.0.0.1  Bcast:10.255.255.255  Mask:255.0.0.0
              inet6 addr: fe80::200:ff:fe00:1/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:135 errors:0 dropped:124 overruns:0 frame:0
              TX packets:7 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000 
              RX bytes:8906 (8.9 KB)  TX bytes:558 (558.0 B)
    
    lo        Link encap:Local Loopback  
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0 
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    
  • ping host 1 from host 2
    mininet> h2 ping h1
    PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
    64 bytes from 10.0.0.1: icmp_req=1 ttl=64 time=10.0 ms
    ^C
    --- 10.0.0.1 ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 10.026/10.026/10.026/0.000 ms
    

scripting

Mininet has a Python API, whose docs can be found online: http://mininet.org/api/
Examples can also be found here: https://github.com/mininet/mininet/tree/master/examples

Once you have written a script, run it with Python:

python mn_script.py
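
As a minimal sketch (assuming a controller reachable at 10.18.1.1, as in the earlier examples), a script equivalent to mn --topo=single,2 --controller=remote,ip=10.18.1.1 might look like this:

#!/usr/bin/env python
# mn_script.py - a sketch using the Mininet Python API
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo
from mininet.node import RemoteController
from mininet.cli import CLI

def run():
    # one switch with two hosts, pointed at a remote controller
    net = Mininet(topo=SingleSwitchTopo(2),
                  controller=lambda name: RemoteController(name, ip='10.18.1.1'))
    net.start()
    net.pingAll()   # same test as 'mn --test pingall'
    CLI(net)        # drop to the mininet> prompt
    net.stop()

if __name__ == '__main__':
    run()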

2.3 Cbench

website: http://docs.projectfloodlight.org/display/floodlightcontroller/Cbench+(New)

dependencies

sudo apt-get install autoconf automake libtool libsnmp-dev libpcap-dev

installation/build

git clone git://gitosis.stanford.edu/openflow.git
cd openflow; git checkout -b mybranch origin/release/1.0.0
cd ..
git clone git://gitosis.stanford.edu/oflops.git
cd oflops; git submodule init && git submodule update; cd ..
# oflops needs libconfig:
wget http://hyperrealm.com/libconfig/libconfig-1.4.9.tar.gz
tar -xvzf libconfig-1.4.9.tar.gz
cd libconfig-1.4.9
./configure
make && sudo make install
# uncomment the next two lines only if you need the NetFPGA packet generator:
#cd ../oflops/netfpga-packet-generator-c-library/
#./autogen.sh && ./configure && make
cd ../oflops
sh ./boot.sh ; ./configure --with-openflow-src-dir=${OF_PATH}/openflow/
make install

run

Run from the cbench directory under oflops:

cd cbench 
cbench -c localhost -p 6633 -m 10000 -l 10 -s 16 -M 1000 -t 
  • -c localhost : controller at loopback
  • -p 6633 : controller listening on port 6633
  • -m 10000 : 10000 ms (10 sec) per test
  • -l 10 : 10 loops (trials) per test
  • -s 16 : 16 emulated switches
  • -M 1000 : 1000 unique MAC addresses (hosts) per switch
  • -t : throughput testing

For the complete list, use the -h flag.

The output looks something like this (the sample below came from a run with different parameters):

cbench: controller benchmarking tool
   running in mode 'throughput'
   connecting to controller at localhost:6633 
   faking 16 switches offset 1 :: 3 tests each; 10000 ms per test
   with 10 unique source MACs per switch
   learning destination mac addresses before the test
   starting test with 0 ms delay after features_reply
   ignoring first 1 "warmup" and last 0 "cooldown" loops
   connection delay of 0ms per 1 switch(es)
   debugging info is off
16:53:14.384 16  switches: flows/sec:  18  18  18  18  18  18  18  18  18  18  18  18  18  18  18  18   total = 0.028796 per ms 
16:53:24.485 16  switches: flows/sec:  20  20  20  20  20  20  20  20  20  20  20  20  20  20  20  20   total = 0.031999 per ms 
16:53:34.590 16  switches: flows/sec:  24  24  24  24  24  24  24  24  24  24  24  24  24  24  24  24   total = 0.038380 per ms 
RESULT: 16 switches 2 tests min/max/avg/stdev = 32.00/38.38/35.19/3.19 responses/s

2.4 liboftrace (ofdump/ofstats)

docs:

https://github.com/capveg/oftrace/blob/master/README
http://www.openflow.org/wk/index.php/Liboftrace

dependencies

sudo apt-get install libpcap-dev swig libssl-dev

installation/build

git clone git://github.com/capveg/oftrace.git
cd oftrace
./boot.sh
./configure --with-openflow-src-dir=${OF_PATH}/openflow/
make && make install

run

There are two tools pre-packaged with liboftrace (as per a mailing-list entry):

  1. ofstats: a program which calculates the controller processing delay, i.e., the difference in time between a packet_in message and the corresponding packet_out or flow_mod message.
  2. ofdump: a program that simply lists openflow message types with timestamps by switch/controller pair.

Both have the same syntax:

[ofstats|ofdump] [pcap file] [controller IP] [OF port]

Without the IP and port arguments, they default to localhost:6633.

For example, with a pcap file named sample.pcap from a tcpdump session sniffing for traffic from a controller at 192.168.1.5, port 6637:
ofdump:

# ofdump sample.pcap 192.168.1.5 6637
DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598 
DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637 
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 0      LEN 8   TIME 0.000000
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 0      LEN 8   TIME 0.026077
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 5      LEN 8   TIME 0.029839
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 6      LEN 128 TIME 0.1070415

...

FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038485
 --- 2 sessions:  0 0
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038523
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038573
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038614
FROM 192.168.1.6:47598          TO  192.168.1.5:6637    OFP_TYPE 10     LEN 60  TIME 0.2038663
FROM 192.168.1.5:6637           TO  192.168.1.6:47598   OFP_TYPE 13     LEN 24  TIME 0.2038704
Total OpenFlow Messages: 20015

ofstats:

# ofstats sample.pcap 192.168.1.5 6637  
Reading from pcap file sample.pcap for controller 192.168.1.5 on port 6637
DBG: tracking NEW stream : 192.168.1.5:6637-> 192.168.1.6:47598 
DBG: tracking NEW stream : 192.168.1.6:47598-> 192.168.1.5:6637 
0.008088        secs_to_resp buf_id=333 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
0.000454        secs_to_resp buf_id=334 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000437        secs_to_resp buf_id=335 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000534        secs_to_resp buf_id=336 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
0.000273        secs_to_resp buf_id=337 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000486        secs_to_resp buf_id=338 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 2 queued
0.000379        secs_to_resp buf_id=339 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000275        secs_to_resp buf_id=340 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued
...
0.000135        secs_to_resp buf_id=10330 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000132        secs_to_resp buf_id=10331 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 1 queued
0.000131        secs_to_resp buf_id=10332 in flow 192.168.1.5:6637 -> 192.168.1.6:47598 - packet_out - 0 queued

Since the output is dumped to stdout, it is probably best to redirect it to a file for later parsing, like so:

# ofstats sample.pcap 192.168.1.5 6637 > outfile
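
The file can then be post-processed as usual; for example, a hypothetical one-liner that averages the response times in the first column of the ofstats output:

grep secs_to_resp outfile | awk '{ sum += $1; n++ } END { if (n) print sum / n, "sec avg" }'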