SB9 is dedicated to wired networking and cloud computing experimentation. Nodes in sandbox 9 don't have wireless interfaces.
Sandbox 9 consists of 8 servers, each with the following specifications:
- Server model: Dell R740XD
- CPU: 2x Intel Xeon Gold 6126
- Memory: 192GB
- Disk: 256GB SSD
- Network: 2x 25GbE Mellanox ConnectX-4 Lx
- Network: 2x 100GbE Mellanox ConnectX-5 (pending; shares one PCIe 3.0 x16 slot)
- Network: 1x 10GbE (pending)
- Accelerator 1: NVIDIA V100 16GB
- Accelerator 2: Xilinx Alveo U200 FPGA (2x 100GbE Ethernet onboard)
They are connected to the following P4-capable switch: Edgecore Wedge100BF-32X
- Ports: 32x 100GbE; each port can be broken out into 2x 50GbE, 2x 40GbE, 4x 25GbE, or 4x 10GbE ports.
- OS: SONiC (Azure) or Open Network Linux
- Management: in-band, out-of-band, or via the onboard BMC
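As an illustration of the breakout capability, newer SONiC builds expose dynamic port breakout from the switch CLI. The port name and mode string below are examples only; the modes actually available depend on the platform's port map and SONiC version:

```shell
# Illustrative only: split physical port Ethernet0 into 4x 25G lanes.
# Dynamic port breakout requires a SONiC build that supports it, and the
# exact mode string comes from the platform definition.
sudo config interface breakout Ethernet0 '4x25G[10G]'

# Inspect the resulting lane-to-port mapping
show interfaces status
```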
Manual install of a new switch OS: currently this must be done by a user with admin access. A service will be provided to automate this operation.
- Download the image and change the symlink on repo1 at the appropriate path.
- Connect to the BMC and run ONIE, following the steps at https://github.com/opencomputeproject/OpenNetworkLinux/blob/master/docs/GettingStartedWedge.md
- For SONiC, follow https://github.com/Azure/SONiC/wiki/Quick-Start
- Set up the mapping of serial lanes to ports.
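The repo1 step above can be sketched as follows. All paths here are hypothetical stand-ins, since the actual repository path is not given in this document; the assumption is that ONIE fetches whatever image a symlink on repo1 points to. A temporary directory stands in for repo1 so the sketch can be run safely anywhere:

```shell
# Stand-in for the image repository on repo1 (real path elided in the docs).
REPO="$(mktemp -d)"
mkdir -p "$REPO/switch-images"

# Step 1: download the new OS image. A placeholder file stands in for the
# actual wget/curl fetch of a SONiC or ONL installer.
printf 'sonic-installer-image' > "$REPO/switch-images/sonic-new.bin"

# Step 2: atomically repoint the symlink the installer fetches at boot.
ln -sfn "$REPO/switch-images/sonic-new.bin" "$REPO/switch-images/current"

readlink "$REPO/switch-images/current"
```

The remaining steps then happen on the switch itself, over the BMC serial connection.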
All hardware below this line has been retired. The information is preserved for reference.
SB9 is dedicated to OpenFlow experimentation (NOTE: nodes in SB9 don't have wireless interfaces!). As shown in Figure 1, Sandbox 9 is built around an OpenFlow-capable switch with 12 nodes: 7 equipped with NetFPGA cards, 2 with NetFPGA-10G cards, 2 general-purpose ORBIT nodes, and a sandbox console. In addition, there is a fast cache machine with a 10G connection and an SSD storage array fast enough to saturate that link.
The switch labeled 'sw-sb-09', a Pronto 3290, provides the central connectivity backplane for the 'DATA' interfaces of all hosts/NetFPGAs in the sandbox. Each host (node1-1..node1-12) is connected to the sw-sb-09 switch through one 1GbE data connection on the interface 'eth0' or 'exp0'. As with the rest of the ORBIT nodes, a second GbE interface ('eth1' or 'control') of each node, used exclusively for experiment control (incl. ssh/telnet sessions), is connected to an external control switch outside the sandbox.
The first 7 hosts (node1-1..node1-7) each contain a 4x1GbE NetFPGA installed in a PCI slot. Each NetFPGA has 4 connections to the top switch, corresponding to its four GbE ports nf2c0-nf2c3.
Nodes node1-8 and node1-9 each contain a 4x10GbE NetFPGA installed in a PCI Express slot. Two of each card's four 10GbE ports connect to the top switch, while the remaining two directly connect the two nodes to each other.
Node1-12 has eth4 connected to the Pronto switch at 10G, with eth5 currently unconnected. It also has two unconnected 10GBASE-T ports, enumerated as eth2 and eth3.
The Pronto switch is an OpenFlow-enabled switch and can run in native or OpenFlow mode, selected by its boot configuration. In native mode it runs the Pica8 XorPlus switch software; in OpenFlow mode it can run either the stock Indigo firmware or an experimenter-provided OpenFlow image for Pronto switches. The mode of operation is controlled by the network aggregate manager service.
| Switchport # | Node | Device | Interface # |