The latest release of the SS7000 software includes support for Infiniband HCAs. Each controller may be configured with a Sun Dual Port Quad Rate HCA (Sun option x4237) in designated slots. The slot configuration varies by product, with up to three HCAs on the Sun Storage 7410. The initial Infiniband upper-level protocols (ULPs) include IPoIB and early adopter access for NFS/RDMA. The same file- and block-based data protocols (NFS, CIFS, FTP, SFTP, HTTP/WebDAV, and iSCSI) we support over ethernet are also supported over the IPoIB ULP. OpenSolaris, Linux, and Windows clients with support for the OpenFabrics Enterprise Distribution (OFED 1.2, 1.3, and 1.4) have been tested and validated for IPoIB. NFS/RDMA is offered to early adopters of the technology on Linux distributions running a 2.6.30 or later kernel.

Infiniband Configuration


Infiniband IPoIB datalink partition and IP interface configuration is straightforward, using the same network BUI page or CLI contexts as ethernet. Port GUID information is available for configured partitions on the network page as shown below, which makes it easy to add SS7000 HCA ports to a partition table on a subnet manager. Once a port has been added to a partition on the subnet manager, the IPoIB device automatically appears in the network configuration. At that point, the device may be used to configure partition datalinks and then interfaces. If desired, IP network multipathing (IPMP) can be employed by creating multipathing groups for the IPoIB interfaces.
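
On the subnet manager side, a partition entry is essentially a pkey plus the member port GUIDs. If the fabric uses OFED's OpenSM, the partitions.conf entry might look something like this rough sketch; the partition name, pkey, and port GUIDs below are placeholders, not values from a real system.

  # Example OpenSM partitions.conf entries (placeholder pkey and GUIDs)
  Default=0x7fff, ipoib : ALL=full;
  ss7410=0x8001, ipoib : 0x0003ba000100abcd=full, 0x0003ba000100abce=full;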





HCA and port GUID and status information may also be found at the hardware slot location. Navigate to Maintenance->Hardware->Slots for the controller and click on the slot information icon to see the firmware, GUIDs, status, and speeds associated with the HCA and its ports.




Performance Preview


So how does Infiniband perform in the SS7000 family? Well, it really depends upon the workload and an adequately configured system. Here, I'll demonstrate two simple workloads on a base SS7410.


Server


  • Sun Storage 7410
  • Software release Q3.2009
  • 2 x quad core AMD Opteron 2.3 GHz CPUs
  • 64GBytes DRAM
  • 1 JBOD (23 disks, each 1 Tbyte) configured for mirroring
  • 2 logzillas configured for striping
  • 2 Sun Dual Port DDR Infiniband HCAs, one port on each configured across two separate subnets
Clients

  • 8 x blade servers, each containing:
  • 2 x quad core Intel Xeon 1.6 GHz CPUs
  • 3GBytes DRAM
  • 1 Sun Dual Port Infiniband HCA EM
  • Filesystems mounted using NFSv4, wsize=4
  • Solaris Nevada build 118
  • 4 clients connected to subnet 1 and 4 clients connected to subnet 2
Fabric

  • Sun 3x24 Infiniband Data Switch, with switches 0 (subnet 1) and 2 (subnet 2) configured across the server and clients
  • 2 OFED 1.4 OpenSM subnet managers operating as masters, one per subnet

The SS7410 is really under-powered (2 CPUs, 64 GBytes of memory, 1 JBOD, 2 logzillas, DDR HCAs) and nowhere near its operational limits.



Sequential Reads


In this experiment, I used a total of 8 clients, each running up to 5 threads performing sequential reads to separate files with a slightly modified version of Brendan's seqread.pl script. The clients are evenly assigned to each of the HCA ports. More than 5 threads per client did not yield any significant gain, as I had hit the maximum I could get from the CPU. I ran the experiment twice: once for NFSv4 over IPoIB and once for NFSv4/RDMA. As expected, IPoIB yields better results with smaller block sizes, but I was surprised to see IPoIB outperform NFS/RDMA at 64K transfer block sizes and stay in the running at every size in between.
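
For reference, the per-client load looks roughly like the sketch below. The actual runs used the modified seqread.pl; this is just a minimal Python equivalent, and the mount point, file names, block size, and thread count are placeholders.

  #!/usr/bin/env python
  # Minimal sketch of the per-client sequential-read load: each thread reads
  # its own file from the NFS mount in fixed-size blocks until EOF.
  import threading

  MOUNT = "/mnt/ss7410"        # hypothetical NFSv4 mount (IPoIB or RDMA)
  BLOCK_SIZE = 64 * 1024       # transfer block size to sweep (4K ... 64K)
  THREADS_PER_CLIENT = 5       # beyond 5 the client CPU became the limit

  def seqread(path):
      """Read one file start to finish in BLOCK_SIZE chunks."""
      with open(path, "rb") as f:
          while f.read(BLOCK_SIZE):
              pass

  threads = [threading.Thread(target=seqread,
                              args=("%s/file%d" % (MOUNT, i),))
             for i in range(THREADS_PER_CLIENT)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()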




I'm using the default quality of service (QoS) on the subnet manager, and the clients are evenly assigned to each of the HCA ports. As a result, we see a nice, even distribution of network throughput across each of the devices and of IOPS per client.


Synchronous Writes


In the read experiment, I was able to hit an upper bound on CPU utilization at about 8 clients x 5 threads. What will it take to reach a maximum limit for synchronous writes? To help answer that question, I'll use a stepping approach to the single synchronous write experiment above. Looping through my 8 clients at one-minute intervals, I'll add a 4K synchronous write thread on each pass until the number of IOPS levels off. At about 10 threads per client, we start to see the number of IOPS reach a maximum. This time CPU utilization is below its maximum (35%), but latency turns into a lake-effect ice storm. We eventually top out at 38961 write IOPS for our 80 client threads.
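
The stepping itself is simple; a minimal per-client sketch in Python follows. The mount point, step interval, and thread cap are assumptions for illustration only, and the real runs coordinated the stepping across all 8 clients.

  #!/usr/bin/env python
  # Sketch of the stepped synchronous-write load on one client: at each step,
  # start another thread issuing 4K O_SYNC writes to its own file.
  import os
  import threading
  import time

  MOUNT = "/mnt/ss7410"        # hypothetical NFSv4 mount point
  WRITE_SIZE = 4 * 1024        # 4K synchronous writes
  STEP_INTERVAL = 60           # seconds between adding threads
  MAX_THREADS = 10             # roughly where per-client IOPS leveled off

  def syncwrite(path):
      """Issue back-to-back 4K synchronous writes to one file."""
      buf = b"\0" * WRITE_SIZE
      fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
      try:
          while True:
              os.write(fd, buf)
      finally:
          os.close(fd)

  for i in range(MAX_THREADS):
      t = threading.Thread(target=syncwrite, args=("%s/wr%d" % (MOUNT, i),))
      t.daemon = True            # load stops when the main thread exits
      t.start()
      time.sleep(STEP_INTERVAL)  # hold each step before adding the next thread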





As a sanity check, I also captured the per-device network throughput. If I account for the additional NFSv4 operations and packet overhead, 93.1MB/sec seems reasonable. I also ran this experiment with NFS/RDMA and discovered a marked drop-off (30%) in IOPS when run for a long period; up to that point, NFS/RDMA performed as well as IPoIB. Something to investigate.
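
For what it's worth, here is the back-of-the-envelope arithmetic behind that statement, assuming the two active HCA ports carry roughly equal shares of the load.

  # Rough check of the per-device write throughput.
  iops = 38961                     # measured 4K synchronous write IOPS
  payload = iops * 4 * 1024        # ~159.6 MB/sec of write payload in total
  per_port = payload / 2 / 1e6     # ~79.8 MB/sec of payload per device
  # NFSv4 compound operations and IPoIB packet headers make up the difference
  # between ~80 MB/sec of payload and the observed 93.1 MB/sec on the wire.
  print("payload per device: %.1f MB/sec" % per_port)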


Next

I have a baseline for my woefully underpowered SS7410. For 4K sequential reads, I quickly bumped into CPU utilization limits at 56378 IOPS. With the synchronous write workload, I top out at 38961 IOPS due to increased disk latency. But all is not lost: the SS7410 is far from its configurable hardware limits. The next round of experiments will include:


  • Buff up my 7410: give it two more CPUs and double the memory to help with reads
  • Add more JBODs and logzillas to help with writes
  • Configure the system into a QDR fabric to improve overall throughput

More...