Why So Many Types of High-speed Interconnects?

 

The rationale behind the myriad of interconnect technologies and products

Creating high-speed interconnect links between servers, storage, switches, and routers involves many different technologies, all aimed at minimizing cost. With large data centers buying tens of thousands of devices, costs add up quickly. A 3-meter DAC cable is priced at approximately $100, while a 10 km-reach single-mode transceiver runs $4,000-$5,000; AOCs and multi-mode transceivers are priced in between.
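To make "costs add up quickly" concrete, here is a back-of-the-envelope sketch in Python using the approximate prices above. The device counts and the $400 mid-range figure are hypothetical placeholders (the article says only that AOCs and multi-mode transceivers are priced in between).

```python
# Back-of-the-envelope interconnect spend at data-center scale. Prices are the
# approximate figures cited above; the $400 mid-range price and all device
# counts are invented for illustration only.
prices = {
    "DAC cable (3 m)":              100,   # ~$100 each
    "AOC / multi-mode module":      400,   # "priced in between" (assumed figure)
    "single-mode module (10 km)":  4500,   # ~$4,000-$5,000 each
}
counts = {
    "DAC cable (3 m)":            20_000,
    "AOC / multi-mode module":     8_000,
    "single-mode module (10 km)":  1_000,
}

total = 0
for item, n in counts.items():
    cost = n * prices[item]
    total += cost
    print(f"{item:30s} {n:>7,} x ${prices[item]:>5,} = ${cost:>12,}")
print(f"{'Total':30s} {'':>16} = ${total:>12,}")
```

Even with a heavily DAC-weighted mix like this hypothetical one, the handful of long-reach single-mode modules dominates the bill, which is exactly why so much engineering effort goes into matching each link to the cheapest technology that covers its reach.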

Today, most modern data centers have zeroed in on the SFP and QSFP form factors for use with DAC and AOC cabling and optical transceivers. By focusing on only a few types and ordering in high unit volumes, economies of scale can be achieved, not only in the cables and transceivers but also in all the equipment they link to, such as switches and the network adapters that reside in servers and in HDD, SSD, and NVMe storage arrays. Add to this the spare parts that also need to be stocked.

Currently, the modern data center uses SFP and QSFP (the “+” suffix denotes 10G lanes and “28” denotes 25G lanes) in DAC, AOCs, and both multi-mode and single-mode transceivers. DAC uses copper wires. Parallel 4-channel AOCs and transceivers (SR4 and PSM4) use 8 optical fibers; single-channel transceivers and AOCs (SR, LR) use 2 fibers. CWDM4 and LR4 transceivers also use 2 fibers but multiplex four channels onto each fiber to save fiber costs over long reaches.
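As a quick reference, here is a minimal sketch tabulating the channel and fiber counts just described; the field layout and naming are my own, not from any standard.

```python
# A minimal lookup of the media described above (illustrative only):
# channel count, fiber count, and how each interconnect type uses its fibers.
MEDIA = {
    #  type    (channels, fibers, notes)
    "DAC":    (4, 0, "QSFP copper twinax; no fiber at all"),
    "SR":     (1, 2, "multi-mode, one fiber each direction"),
    "LR":     (1, 2, "single-mode, one fiber each direction"),
    "SR4":    (4, 8, "parallel multi-mode, four fibers each direction"),
    "PSM4":   (4, 8, "parallel single-mode, four fibers each direction"),
    "CWDM4":  (4, 2, "four wavelengths muxed onto one single-mode fiber each direction"),
    "LR4":    (4, 2, "four wavelengths muxed onto one single-mode fiber each direction"),
}

for name, (channels, fibers, note) in MEDIA.items():
    print(f"{name:6s} {channels} ch, {fibers} fiber(s): {note}")
```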

 

High-speed interconnects all strive to:

  • Implement the lowest-cost links
  • Achieve the highest net data throughput (i.e., the fastest data rate with the fewest data errors and retransmissions, and minimal latency)
  • Transmit over various distances

 

To achieve these goals, various technologies are used, each with its own set of benefits and limitations. Data center professionals would ideally build every link with single-mode fiber, duplex LC connectors, and single-mode transceivers: build the fiber into the data center infrastructure once and forget it, since single-mode fiber does not have the reach limitations of DAC copper or multi-mode fiber, then simply upgrade the transceivers with each new transceiver advancement.

While the fibers and LC connectors are already at their lowest cost points, the problem is the single-mode transceivers: they are very complex to build, require many different material systems, and are hard to manufacture, and are therefore expensive. Basically, the longer the reach needed to send the data, the more complicated the technology, the harder it is to manufacture, and the higher the price.

Most single-mode transceivers are built with a great deal of manual labor and piece-part assembly, in processes designed for the low-volume telecom market. The new hyperscale data centers are ordering parts in record numbers, and this piece-part manufacturing method is difficult to scale up. Silicon Photonics technology attempts to use CMOS silicon IC processes to integrate many of the devices and sub-micron alignments required.

As a result, data centers often use an array of different high-speed interconnects, matching each interconnect type to specific reach requirements. DAC is the lowest cost; however, after about 3-5 meters, the wire acts like a radio antenna and the signal becomes unrecognizable. AOCs are used from 3 meters to about 30 meters, after which installing long cables becomes difficult. More expensive multi-mode transceivers, with detachable optical connectors, can reach up to 100 meters, beyond which the large 50-µm fiber core disperses the signal until it becomes unrecognizable. Some multi-mode transceivers and links (eSR4) can be engineered to 300-400 meters, but it gets a little tricky matching the special transceivers, fibers, and optical connectors in the link.

Single-mode fiber uses a tiny 9-µm light-carrying core, so the signal pulse stays together over very long distances and can travel literally between continents. Parallel single-mode transceivers (PSM4) with 8 fibers can reach 500 m to 2 km. The PSM4 MSA standard specifies 500 m, but Mellanox’s PSM4s can reach up to 2 km, about four times the reach of the PSM4 spec.

Beyond 500 meters, the cost of 8 fibers adds up with each meter, so multiplexing the four channel signals onto a single fiber in each direction (two fibers total) is more economical over long fiber runs. CWDM4 is used for up to 2 km and LR4 for up to 10 km.
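The reach ladder described over the last few paragraphs amounts to a simple selection rule. Here is a minimal sketch with thresholds taken from the article; the function name and the exact cutover points chosen are my own.

```python
# A sketch of the reach-driven selection just described. Thresholds come from
# the article; the function name and exact cutover points are assumptions.
def pick_interconnect(reach_m: float) -> str:
    """Suggest the lowest-cost interconnect family for a given link length."""
    if reach_m <= 3:          # DAC copper: cheapest, signal dies after ~3-5 m
        return "DAC copper cable"
    if reach_m <= 30:         # AOC: past ~30 m, pulling long cables gets impractical
        return "AOC (active optical cable)"
    if reach_m <= 100:        # multi-mode SR/SR4 with detachable connectors
        return "multi-mode transceiver (SR/SR4)"
    if reach_m <= 500:        # PSM4 MSA reach; 8 parallel single-mode fibers
        return "single-mode PSM4"
    if reach_m <= 2_000:      # past 500 m, muxing onto 2 fibers beats paying for 8
        return "single-mode CWDM4 (or extended-reach PSM4)"
    if reach_m <= 10_000:     # LR4: four wavelengths on one fiber per direction
        return "single-mode LR4"
    return "beyond 10 km: telecom-class optics"

for reach in (2, 15, 80, 450, 1_500, 9_000):
    print(f"{reach:>6,} m -> {pick_interconnect(reach)}")
```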

In the chart below, moving along the bottom axis, different technologies are used as the reach gets longer, starting with DAC cables on the left and ending with 10 km LR4 transceivers on the far right. Also note, on the vertical axis, that as data rates get faster, the reach of DAC and multi-mode optics (SR, SR4) gets shorter, while single-mode fiber remains largely reach-independent.

 

Data Rate versus Interconnect Reach

 

Summary

All the different technologies and cable product types are designed to minimize the costs involved in building data center high-speed interconnects. While many enterprise data centers might use 5,000-10,000 devices, hyperscale builders with 2 million servers are ordering interconnects in the hundreds of thousands.

Mellanox sells end-to-end solutions: it designs and manufactures not only the switch and network adapter systems, including their silicon ICs, but also the cables and transceivers, including the VCSELs, Silicon Photonics ICs, and the driver and TIA ICs. For single-mode transceivers, Mellanox has its own Silicon Photonics product line and internal wafer fab for its PSM4 transceivers and AOCs.

Mellanox sells state-of-the-art 25/50/100G products: copper DAC cables, optical AOCs, and both multi-mode and single-mode transceivers.

Mellanox recently announced it shipped its 100,000th 100G DAC cable and 200,000th 100G transceiver/AOC module, and is a leading supplier in all four interconnect product areas.

 

More Information

About Brad Smith

Brad is the Director of Marketing at Mellanox for the LinkX cables and transceivers business, based in Silicon Valley, focusing on the hyperscale, Web 2.0, enterprise, storage, and telco markets. Previously, Brad was Product Line Manager in Intel’s Silicon Photonics group for the CWDM4/CLR4 and QSFP28 product lines and ran the 100G CLR4 Alliance; Director of Marketing & Business Development at the OpSIS MPW Silicon Photonics foundry; President/COO of LuxSonar Semiconductors (Cirrus Logic); and co-founder and Director of Product Marketing at NexGen, an X86-compatible CPU company sold to AMD, now its X86 product line. Brad also spent about 15 years in technology market research as Vice President of the Computer Systems group at Dataquest/Gartner and as VP/Chief Analyst at the RHK and Light Counting networking research firms. He started his career at Digital Equipment near Boston with the VAX 11/780 and has served as CEO, President/COO, and on the boards of directors of three start-up companies. Brad has a BSEE degree from the University of Massachusetts and an MBA from the University of Phoenix, and holds 2 optical patents.
