= !FlowVisor =
!FlowVisor is a specialized !OpenFlow controller that creates slices within an !OpenFlow network, allowing the same physical deployment to support multiple controllers. Here we describe the setup and installation of !FlowVisor 0.8.3 on Ubuntu 11.04.

== references ==
 * main docs: https://openflow.stanford.edu/display/DOCS/Installation+Guide

== prereqs ==
The latest !FlowVisor is based on Java and requires the following packages:
{{{
apt-get install ant sun-java6-jdk
}}}
It is assumed that you have !OpenFlow installed and in working condition.

== installation ==
The installation process has become much more streamlined compared to the 6.0 days. Refer to older versions of this page to see how it is done for version 6.* of flowvisor. [[BR]]
A flowvisor user may be created as the admin user for flowvisor (e.g. for running `fvctl`). This will come into play during the install, when you are prompted to specify a flowvisor user/group.

 1. install the above-mentioned packages
 2. pull from the mercurial repository:
{{{
hg clone https://bitbucket.org/onlab/flowvisor
hg update -C 0.8-MAINT
}}}
 the `update` command is equivalent to a git checkout and puts us on the (most recent stable) 0.8-MAINT branch.
 3. run `make` and `make install`. The `make install` process will prompt you with several questions:
{{{
native@ubuntu:~/flowvisor$ make install
ant
Buildfile: /home/native/flowvisor/build.xml

init:

compile:
    [javac] /home/native/flowvisor/build.xml:34: warning: 'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for repeatable builds
....
Installation prefix (/usr/local):
FlowVisor User (needs to already exist) (native):
FlowVisor Group (needs to already exist) (native):
Install to different root directory ()
Installing FlowVisor into /usr/local with prefix=/usr/local as user/group native:native
....
Enter password for account 'fvadmin' on the flowvisor:
Generating default config to /usr/local/etc/flowvisor/config.xml
--- 'flowvisor!logging' not set in config; defaulting to loglevel 'NOTE'
--- Setting logging level to DEBUG
}}}
 4. generate a default configuration file:
{{{
fvconfig generate /usr/local/etc/flowvisor/flowvisor-config.json
}}}
Now your install should be ready to run.

== configuration ==
The primary means of configuring flowvisor is via ''fvctl''. Unlike older versions of nox, you don't need to create configuration scripts, which is great. Here, we'll describe how to use ''fvctl'' through an example run based on the flowvisor webpage.

=== The testrun Setup ===
In this setup, !FlowVisor is started on port 6655, with an instance of nox listening on port 6656. A !FlowVisor slice is created that points to the controller on port 6656 (in this case, our test NOX instance). A single virtual switch is configured to connect to !FlowVisor on port 6655, whereby its control is given to NOX by flowvisor. Everything, unless specified otherwise, is done in the flowvisor directory.

 1. ''start !FlowVisor.'' Unlike nox, there are no flags to throw it into the background. Here it is started on port 6655:
{{{
flowvisor -p 6655 &
}}}
 flowvisor has several options, although config.xml is now irrelevant:
{{{
r$ flowvisor -v
Starting FlowVisor
FlowVisor version: flowvisor-0.8.3
Rob Sherwood: rsherwood@telekom.com/rob.sherwood@stanford.edu
---------------------------------------------------------------
err: ParseException: org.openflow.example.cli.ParseException: unknown option: v
FlowVisor [options] config.xml
option          type [default]  usage
-d|--debug      String [NOTE]   Override default logging threshold in config
-p|--port       Integer [0]     Override port from config
-l|--logging                    Log to stderr instead of syslog
-j|--jetty_port Integer [-1]    Override jetty port from config
-h|--help                       Print help
}}}
 2. 
''delete sample slices.'' After inspecting them with ''getSliceInfo'', remove them with ''deleteSlice''. There are two sample slices, "alice" and "bob". Each action usually requires authentication with the fvadmin password you set during installation.
{{{
fvctl getSliceInfo alice
fvctl deleteSlice alice
fvctl getSliceInfo bob
fvctl deleteSlice bob
}}}
 3. ''create your slice.'' This is done via ''createSlice''. The syntax goes roughly like this:
{{{
fvctl createSlice <slicename> tcp:<controller_ip>:<port> <email>
}}}
 So for our example, we have a slice "nox-test", which expects a controller with IP 172.16.0.4 listening on port 6656:
{{{
fvctl createSlice nox-test tcp:172.16.0.4:6656 foo@sampledomain.com
}}}
 4. ''create your flowspace.'' Flowspaces define the policy for your slices. Many parameters can be tacked onto ''addFlowSpace'':
{{{
fvctl addFlowSpace <dpid> <priority> <policy> "Slice:<slicename>=<permissions>"
}}}
   * dpid - the datapath ID of the switch, or "all" for every switch
   * priority - a number between 0 and 2^31^. Higher value = higher priority
   * policy - flow matching policies. Details are under '''FLOW SYNTAX''' in the fvctl(1) man pages.
   * slice name - the name of your slice
   * permissions - similar to ''chmod'', with delegate=1, read=2, write=4
 What we want is very minimal: allow all for the controller of the slice:
{{{
fvctl addFlowSpace all 1000 any "Slice:nox-test=7"
}}}
 5. ''start the controller.'' Start nox on the IP:port combination defined for the controller in the !FlowVisor slice:
{{{
./nox_core -i ptcp:6656 switch packetdump -d
}}}
 6. ''start virtual switch.'' Here we instantiate, on an IP8800 switch, a virtual switch pointed at !FlowVisor (172.16.0.4):
{{{
setvsi 22 37,38 tcp 172.16.0.4:6655 dpid 0x001010223232
}}}
 The datapath should come up as "connected" under ''showswitch''.

== Moving controllers to slices ==
Here are the steps used to actually move the ORBIT infrastructure to a !FlowVisor slice.

 1. ''edit default port for noxcore.'' The default settings for snac (e.g. what port it uses, etc.) are found in /etc/default/noxcore.
Edit the port it uses for the control channel from 6633 to the new port:
{{{
OF_LISTEN="-i ptcp:6634"
}}}
 2. ''start flowvisor and set up slices:''
{{{
flowvisor -p 6633 &
fvctl createSlice orbit-snac tcp:172.16.0.4:6634 foo@domain.com
fvctl addFlowSpace all 10000 any "Slice:orbit-snac=7"
}}}
 3. ''restart snac:''
{{{
/etc/init.d/noxcore restart
}}}
!FlowVisor should now be managing the slice for SNAC.

----

== Some notes on architecture and implementation ==
In general, !FlowVisor is well-documented (see the refs link at the top of this page). This section describes some more in-depth implementation details of !FlowVisor (currently 0.8.3).

=== Event handling ===
Event handling is done by registering handlers with a loop construct (`FVEventLoop`). Handlers are implementations of the `FVEventHandler` interface, and include core !FlowVisor components such as `OFSwitchAcceptor`, `FVClassifier`, and `FVSlicer`. An overview of the relationship between these components can be found [https://openflow.stanford.edu/display/DOCS/IO+Overview here].

Events are `FVEvent`-derived classes, with `FVEvent` being the base unit for passing events between event handlers. They come in several categories, including:
 * `FVIOEvent` - a new incoming !OpenFlow message
 * `OFKeepAlive` - an !OpenFlow ECHO_REQUEST
 * `TearDownEvent` - shutdown of !FlowVisor components
 * `FVRequestTimeoutEvent` - a timer has expired
Some things with 'event' in their names, such as `UnhandledEvent`, may represent an exception instead.

=== Startup ===
'''In `Flowvisor.java`:'''
 1. Instantiate the event loop
 1. If the topology controller is needed, initialize it
 1. Initialize the `OFSwitchAcceptor`
 1. Initialize the API server

'''In `OFSwitchAcceptor`:'''
 1. Initialize a socket select to listen for incoming connections
 1. If an incoming connection is an `FVIOEvent`, spawn an instance of `FVClassifier`

'''In `FVClassifier`:'''
 1. 
Send the controller side of the !OpenFlow handshake (Hello, features request)
   * The initial process also pushes a flow mod that causes the switch to drop all traffic until further flow information can be acquired.
 2. For each event:
   1. Classify the type of event
   1. Check whether the message is from a new or previously known switch (check ''switchInfo'', a structure that holds an `OFFeatureReply` for the switch this classifier is associated with)
     * if it is a known switch, call ''classifyFromSwitch'', the appropriate handler for the message (defined by the message itself as part of the `Classifiable` interface)
     * if not (and the message is a FEATURES_REPLY), fetch a copy of the flow map and, from it, a list of slices associated with the switch's DPID (the list is stored in the classifier's ''newSlices'' list)
     * for each slice in ''newSlices'', spawn an `FVSlicer` if the slice does not already have one (i.e. an entry in ''slicerMap'', a map of slice names to active `FVSlicer` instances)
     * update ''slicerMap'' with the contents of ''newSlices''
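The register-handlers-with-a-loop pattern described above can be sketched as follows. This is an illustrative mock-up, not !FlowVisor source: the names `FVEventLoop`, `FVEventHandler`, and `FVEvent` come from the components above, but the method signatures (`queueEvent`, `handleEvent`, `runOnce`) and the internal queue are assumptions made for the sketch.
{{{
import java.util.ArrayDeque;
import java.util.Queue;

// Base unit for passing events between handlers (mirrors FVEvent).
class FVEvent {
    final FVEventHandler dst;            // handler this event is addressed to
    FVEvent(FVEventHandler dst) { this.dst = dst; }
}

// Anything that can receive events (mirrors the FVEventHandler interface).
interface FVEventHandler {
    void handleEvent(FVEvent e);
}

// Single-threaded dispatch loop (mirrors FVEventLoop): components queue
// events, and the loop delivers each event to its destination handler.
class FVEventLoop {
    private final Queue<FVEvent> pending = new ArrayDeque<>();

    void queueEvent(FVEvent e) { pending.add(e); }

    // Drain the queue once; the real loop would also block on socket I/O.
    void runOnce() {
        FVEvent e;
        while ((e = pending.poll()) != null) {
            e.dst.handleEvent(e);
        }
    }
}

public class EventLoopSketch {
    static int delivered = 0;

    public static void main(String[] args) {
        FVEventLoop loop = new FVEventLoop();
        FVEventHandler classifier = e -> delivered++;  // stand-in for FVClassifier
        loop.queueEvent(new FVEvent(classifier));
        loop.queueEvent(new FVEvent(classifier));
        loop.runOnce();
        System.out.println("delivered=" + delivered);  // delivered=2
    }
}
}}}
Addressing each event to a destination handler is what lets one loop serve many components (acceptor, classifiers, slicers) without threads per connection.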
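The FEATURES_REPLY branch above — spawn an `FVSlicer` only for slices that don't already have an entry in ''slicerMap'', then record them — can be sketched like this. Again a mock-up under assumptions: ''slicerMap'', ''newSlices'', and `FVSlicer` are names from the text, but the method and constructor signatures are invented for illustration.
{{{
import java.util.*;

public class SlicerSpawnSketch {
    // Stand-in for FVSlicer: one per slice on a given switch connection.
    static class FVSlicer {
        final String sliceName;
        FVSlicer(String sliceName) { this.sliceName = sliceName; }
    }

    // slicerMap: slice name -> active FVSlicer, as described in the text.
    static final Map<String, FVSlicer> slicerMap = new HashMap<>();

    // Called after a FEATURES_REPLY from a previously unknown switch;
    // newSlices is the list of slices the flow map associates with its DPID.
    static void spawnSlicers(List<String> newSlices) {
        for (String slice : newSlices) {
            // spawn an FVSlicer only if this slice doesn't already have one
            slicerMap.computeIfAbsent(slice, FVSlicer::new);
        }
    }

    public static void main(String[] args) {
        spawnSlicers(Arrays.asList("alice", "bob"));
        spawnSlicers(Arrays.asList("bob", "nox-test"));  // "bob" is not re-spawned
        System.out.println(new TreeSet<>(slicerMap.keySet()));  // [alice, bob, nox-test]
    }
}
}}}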