Many research universities are moving to a 100g connection to either Internet2 and/or the commodity Internet. So far, most of these very large ‘pipes’ have been deployed because the campus’s previous link had occasionally bumped into the 10g capacity of the existing connection. As economies of scale go, it ends up more cost effective to jump straight to 100g rather than bonding multiple 10g connections, with all the associated routers, switches and configuration complexity that introduces.

All that is just another way of saying that there is currently a large amount of bandwidth deployed, operational and unused.

In my role in Research Computing, I know that before long researchers will start to need ever more bandwidth capacity for the movement of data. In an attempt to stay ahead of the researchers, a challenging task, I’ve worked to establish a temporary local test bed of 100g-connected systems and switches. This quickly became less interesting, as simply moving traffic, at 100g, from one system to another on the same switch was pretty easy and didn’t seem to stress the servers, the switches or me.

ESnet has a nice 100g testbed available for researchers to do testing. Tending toward the lazy dimension, I didn’t work through the simple process of applying for access. Instead, I’d been attending SC conferences for a number of years. Each year there is a bandwidth challenge and capacity escalation, with many 100g connections to most I2 or other regional POPs. For two years, I’d been too busy running our booth to do much networking testing. However, at SC17 I heard about a new shiny object from Google: BBR (bottleneck bandwidth and round-trip propagation time), a congestion control algorithm. With help from many, particularly SCinet, John Graham at UCSD, and Jim Chen and Fei Yeh from Northwestern and Starlight, a quick and dirty test bed was deployed.

The few days of testing done at SC did show that the BBR congestion control algorithm boosted download transfer speeds; rates went from ~60Gbps to ~75Gbps, a significant increase.

After the conference, Jim and Fei and others at Starlight and CENIC agreed to keep the link between Chicago and Sunnyvale configured and usable from campus. Starlight also agreed to let me have an account on a 100g system in Chicago.

I felt sure there would be significant differences in transfer speeds based on the use of different ‘sysctl’ values. Over the next year, I would take time every few weeks and test the link to see what variables impacted the transfer speeds. However, I couldn’t develop a definitive test suite to show that. Frustrated, I kept going and tried many different options.
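Switching a Linux host to the BBR congestion control mentioned above is a small sysctl change on kernels 4.9 and later. A minimal sketch (the `fq` qdisc line reflects the common recommendation to pair early BBR versions with fair queuing); the exact settings used in the tests aren't given in the article:

```shell
# List the congestion control algorithms the running kernel offers;
# "bbr" appears only if the tcp_bbr module is available (kernel 4.9+).
sysctl net.ipv4.tcp_available_congestion_control

# Use fair queuing as the default qdisc, commonly recommended with BBR.
sysctl -w net.core.default_qdisc=fq

# Use BBR for new TCP connections.
sysctl -w net.ipv4.tcp_congestion_control=bbr
```

To persist the change across reboots, the same keys can be placed in a file under `/etc/sysctl.d/` (e.g. `net.ipv4.tcp_congestion_control = bbr`).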
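The article doesn't say which ‘sysctl’ values were varied, but the ones most often tuned for a high-bandwidth, high-latency path like Chicago–Sunnyvale are the socket buffer limits, since TCP throughput is capped by the window the buffers allow. A hypothetical starting point (values are illustrative, not the author's):

```shell
# Raise the maximum socket buffer sizes (illustrative values).
# A 100g path with ~50ms RTT has a bandwidth-delay product of roughly
# 100e9 / 8 * 0.05 ≈ 625 MB, far beyond typical Linux defaults.
sysctl -w net.core.rmem_max=2147483647
sysctl -w net.core.wmem_max=2147483647

# min / default / max (bytes) that TCP autotuning may grow a
# per-connection receive or send buffer to.
sysctl -w net.ipv4.tcp_rmem="4096 87380 2147483647"
sysctl -w net.ipv4.tcp_wmem="4096 65536 2147483647"
```

Measuring the effect of each variable is the hard part, which matches the difficulty of building a definitive test suite described above: throughput on a long shared path varies run to run, so single transfers rarely isolate one setting.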