16 Ways LinkX DAC Cables are “Breaking Out” All Over Servers and Storage

 
Adapters, Cables, Data Center, High Performance Computing (HPC), Link-X, Storage, Switches

Part I of a Three-Part Blog Series on Cables & Transceivers


CAT-5e cabling and 1000BASE-T have dominated the data center interconnect scene for more than 15 years. However, the transition to 10G Ethernet proved to be a significant hurdle in both power consumption and cost. That’s when Direct Attach Copper (DAC) cabling, aka Twinax, snuck in and grabbed significant market share. It has since become the preferred interconnect inside server racks, especially for high-speed links at 25G, 50G and 100G, in just about all hyperscale, enterprise and storage applications and in many HPC installations.

 

 


LinkX™ is the Mellanox brand for the DAC, AOC and optical transceiver product lines.

What is DAC Cabling?

DAC forms a direct electrical connection, hence the name Direct Attach Copper cabling. A DAC is simply pairs of wires where the 1/0 electrical signal is the voltage difference between the two wires in a pair. One wire pair creates a single directional lane, so two pairs create a single-channel, bi-directional interconnect; similarly, eight wire pairs create four channels. Wrap it all up in multiple layers of shielding foil, and solder the wires onto a tiny PCB with an EEPROM chip that contains identity data about the protocol, data rate, cable length, etc. Then put it all in an industry-standard plug shell such as SFP or QSFP to create the complete cable with connector ends. While there isn’t much inside a DAC cable, a lot of design engineering and manufacturing technology goes into it.
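
To make that identity data a bit more concrete, here is a minimal sketch of decoding those fields from an EEPROM image. The byte layout below is purely illustrative (the offsets and field sizes are hypothetical choices of mine); real SFP and QSFP modules follow the SFF-8472 and SFF-8636 memory maps.

    # Illustrative decode of the identity data a DAC's EEPROM carries.
    # NOTE: the offsets below are hypothetical, for explanation only; real
    # SFP/QSFP modules use the SFF-8472 / SFF-8636 memory maps instead.
    from dataclasses import dataclass

    @dataclass
    class CableIdentity:
        vendor: str         # e.g. "Mellanox"
        protocol: str       # "Ethernet" or "InfiniBand"
        lanes: int          # 1 for SFP, 4 for QSFP
        gbps_per_lane: int  # e.g. 10 or 25
        length_m: float     # cable reach in meters

    def decode_identity(eeprom: bytes) -> CableIdentity:
        """Decode a toy 24-byte identity page (hypothetical layout)."""
        vendor = eeprom[0:16].rstrip(b"\x00").decode("ascii")
        protocol = "InfiniBand" if eeprom[16] == 1 else "Ethernet"
        lanes = eeprom[17]
        gbps_per_lane = eeprom[18]
        length_m = eeprom[19] / 10.0   # stored in 0.1 m units
        return CableIdentity(vendor, protocol, lanes, gbps_per_lane, length_m)

    # Example: a 4x25G Ethernet DAC, 3.0 m long
    page = b"Mellanox".ljust(16, b"\x00") + bytes([0, 4, 25, 30]) + bytes(4)
    print(decode_identity(page))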

SFP & QSFP Industry Standard “Plugs” or MSAs


Inside the LinkX™ DAC Cable


At high signal rates, the wires act like radio antennas: the longer the reach and the higher the data rate, the more EMI shielding is required, and the thicker and harder to bend the cable becomes. The IEEE and IBTA set the cable standard specifications for Ethernet and InfiniBand applications. The standard for 10Gb/s signaling supports reaches up to 7 meters; the maximum reach for 25Gb/s DACs is usually 3 meters – enough to span up and down a server rack.

Why is DAC So Popular?

The popularity of DAC can be summed up in two words: low price. Copper cabling is the least expensive way to interconnect high-speed systems. It’s hard to beat the cost of a copper wire, a solder ball and a tiny PCB, all built on automated machines. More complex technologies such as optical fibers, GaAs VCSEL lasers, SiGe control ICs, InP lasers or Silicon Photonics, which all require sub-micron alignment tolerances, manual labor and a vast assortment of piece parts to assemble, cost much more than DACs but support longer reaches.

“DAC in the Rack”

These low-cost, low-power and high-performance capabilities have made DAC very popular in hyperscale, enterprise and many HPC systems, where it is used to interconnect servers and storage to top-of-rack switches via network adapter cards. Because a single rack alone can need 32-56 cables or more, even small performance or cost differences become very important. This is especially true when large data centers deploy tens or hundreds of thousands of cable links.

Why Bother with Half a Watt of Power?

Besides low price, the other big reason for DAC’s enduring popularity is that it consumes almost zero power. Several studies show that one Watt saved at the component level (e.g., a chip or cable) translates to 3-to-5 Watts saved at the facility level. The Wattage multiplies once you factor in the power distribution losses from 100 KV street lines down to 3 Volts, the cooling fans in every one of the 54-72 servers in a single rack chassis, and all of the intermediate fans on the way to the rooftop A/C, just to power and cool that one extra Watt. Active Optical Cables consume ~2.2 Watts; transceivers up to 4.5 Watts; DACs zero!

Now, multiply this by 100,000 cables, plus a few dollars saved on each cable in capital acquisition expenses (Capex) and power consumption operating expenses (Opex), and the savings add up fast! Large data centers spend upwards of $4 million per month on electric bills!
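
To put rough numbers on that, here is a back-of-the-envelope sketch using the figures above (an AOC at ~2.2 Watts vs. a DAC at ~0 Watts, a 3-to-5x facility multiplier, 100,000 cables). The electricity price and the per-cable Capex delta are assumptions for illustration only, not Mellanox figures.

    # Back-of-the-envelope Opex/Capex savings of DAC vs. AOC at the facility level.
    # Assumed inputs (illustrative only): electricity price and per-cable Capex delta.
    CABLES          = 100_000   # links in a large data center (from the blog)
    AOC_WATTS       = 2.2       # per-cable power quoted above for an AOC
    DAC_WATTS       = 0.0       # DAC draws essentially nothing
    FACILITY_FACTOR = 4         # 1 W at the component ~= 3-5 W at the facility
    PRICE_PER_KWH   = 0.10      # USD, assumed utility rate
    CAPEX_DELTA     = 50.0      # USD saved per cable vs. optics, assumed

    facility_watts = (AOC_WATTS - DAC_WATTS) * FACILITY_FACTOR * CABLES
    annual_kwh     = facility_watts * 24 * 365 / 1000
    annual_opex    = annual_kwh * PRICE_PER_KWH
    capex_savings  = CAPEX_DELTA * CABLES

    print(f"Facility-level power avoided: {facility_watts/1000:.0f} kW")
    print(f"Annual energy avoided:        {annual_kwh:,.0f} kWh")
    print(f"Annual Opex savings:          ${annual_opex:,.0f}")
    print(f"One-time Capex savings:       ${capex_savings:,.0f}")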

Mellanox Offers 16 LinkX™ DAC Options!

Why so many options? The answer is: to optimize costs at every connection point. DAC cables are used in 32-to-56-port top-of-rack switches supporting up to 128 links (32 ports broken out 4-to-1 at 25G). DAC cables are used in many different configurations linking both new and older equipment.

Mellanox offers six different cabling schemes for interconnecting switches and network adapters to subsystems using SFP and QSFP DAC cables and port adapters.

  • SFP-SFP cables
  • QSFP-QSFP cables
  • QSFP-4SFP breakout cables
  • QSFP-2QSFP breakout cables
  • SFP-QSFP adapter cables
  • QSA: QSFP-SFP mechanical port adapter used with SFP cables

Six DAC LinkX™ Cabling Options


To continue the math, now multiply by two for the 10G and 25G line rates, which totals 12 different Ethernet DAC options. Add to that four different InfiniBand QSFP-QSFP DAC cables at EDR (4x25G), FDR (4x14G), FDR10 (4x10G) and QDR (4x10G) rates for a total of sixteen different DAC interconnect options. (There is even one more if you want to include 14G-based Ethernet, which uses 14G FDR InfiniBand signaling to transport the Ethernet protocol. Called “VPI”, this is unique to Mellanox and enables 4x14G, or 56G, Ethernet.)

This means there are 16 different ways to create cost- and performance-optimized network links from Mellanox for the InfiniBand and Ethernet protocols.
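
As a quick sanity check on the count, this little sketch simply enumerates the combinations described above; the groupings mirror the text and are not a Mellanox part list.

    # Enumerate the 16 DAC options: 6 Ethernet cabling schemes at two line
    # rates, plus 4 InfiniBand QSFP-QSFP speed grades.
    ethernet_schemes = [
        "SFP-SFP", "QSFP-QSFP", "QSFP-4SFP breakout",
        "QSFP-2QSFP breakout", "SFP-QSFP adapter", "QSA + SFP cable",
    ]
    ethernet_rates = ["10G", "25G"]   # per-lane line rates
    infiniband_speeds = ["EDR (4x25G)", "FDR (4x14G)", "FDR10 (4x10G)", "QDR (4x10G)"]

    ethernet = [f"{s} @ {r}" for s in ethernet_schemes for r in ethernet_rates]
    infiniband = [f"QSFP-QSFP {s}" for s in infiniband_speeds]

    options = ethernet + infiniband
    print(len(options))   # -> 16
    for o in options:
        print(" ", o)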

DAC In-The-Rack

This picture shows all the different cabling options for linking rack systems to Top-of-Rack switches. DAC cables can also link various subsystems directly to one another. Though shown at 25G line rates, the switches, network adapters and DAC cables are all available at 10G line rates as well.


Backwards Compatible

Mellanox hardware is also line-rate backwards compatible. For example, the Ethernet SN2400 32-port 100G switch or the 25G/100G ConnectX-4 network adapter card can run at 25G as well as 10G line rates. The same holds for InfiniBand equipment with 10G, 14G and 25G line rates. This enables connecting slower or older equipment to newer, faster systems without issues.

Mellanox DAC Manufacturing

Most DAC manufacturers build only DAC cables. Mellanox designs and manufactures all of its own switch systems, network adapters, DAC and AOC cables, and optical transceivers. This vertically integrated, “end-to-end” approach ensures everything works together seamlessly. At Mellanox, so-called “Plug & Play” means plug in and walk away, not the usual Plug-and-Play-All-Day needed to get things to work.

BER – Designed for High-Performance Computing (HPC)

It’s been said that nearly anyone can build a 10G DAC cable. But not everyone can build one that works flawlessly at blazing-fast speeds of 25Gb/s, operates for many years at the high temperatures and varied conditions found in modern data centers, and does not induce bit errors into the data stream.

All Mellanox DAC cables are designed to HPC InfiniBand supercomputer BER standards (even our Ethernet DACs), which require a bit error ratio (BER) of no more than one bit error in 10^15 bits (expressed as 1E-15 or 10^-15).

The IEEE Ethernet industry standard is a BER of 1E-12, or one bit error in 10^12 bits transmitted.

Expressed another way, that is one bit error every 2.5 seconds (1,440 bit errors per hour). All Mellanox DAC cables are tested to a BER of 1E-15, or one bit error every 42 minutes (about 1.4 bit errors per hour). Which cable would you choose to send your electronic paycheck over?
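
The exact interval between bit errors depends on the aggregate line rate you assume, so here is a small sketch you can plug your own numbers into; it simply computes the mean time between errors as 1 / (BER x line rate).

    # Mean time between bit errors = 1 / (BER * line_rate_in_bits_per_second).
    def seconds_between_errors(ber: float, line_rate_gbps: float) -> float:
        return 1.0 / (ber * line_rate_gbps * 1e9)

    for label, rate in [("single 25G lane", 25), ("4x25G = 100G link", 100)]:
        for ber in (1e-12, 1e-15):
            s = seconds_between_errors(ber, rate)
            unit = f"{s:.0f} s" if s < 3600 else f"{s/3600:.1f} h"
            print(f"{label:>18} @ BER {ber:.0e}: one error every ~{unit}")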

Too many bit errors from poor-quality DAC cables mean data packets get dropped and the data has to be retransmitted, making 100G perform more like 85G. And most operators won’t even know it!

“Just Use FEC to Clean it up!”

We’ve heard many data center operators say, “We’ll just use FEC to clean it up.” In the server rack, Forward Error Correction (FEC) is not required at reaches under 2 meters per the latest IEEE spec at 25G line rates – and 2 meters is the most common DAC reach for linking high-value servers! FEC adds about 120ns of delay each way. For server uplinks, where all the traffic is, this delay can really slow things down. FEC can detect and correct only so many errors before it becomes overloaded and forces a packet retransmit. Server uplinks are the most important links to keep error-free: servers account for 65 percent of total hardware costs and are where all the data is processed. So keeping these links efficient is very important to maintaining high throughput.
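
To get a feel for what 120ns per direction means end to end, here is a minimal sketch that totals the FEC penalty over a round trip; the 120ns figure comes from the text above, while the example path shape is an assumption of mine.

    # Added latency from FEC on a request/response path: each FEC-enabled link
    # contributes ~120 ns per direction (figure quoted above).
    FEC_NS_PER_DIRECTION = 120

    def added_fec_latency_ns(links_each_way: int) -> int:
        """Round-trip FEC penalty for a path crossing `links_each_way` FEC links."""
        return links_each_way * FEC_NS_PER_DIRECTION * 2   # there and back

    # Example: server -> ToR -> spine -> ToR -> server is 4 links each way
    print(added_fec_latency_ns(4), "ns added round trip")   # -> 960 ns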

More Value – Extending the Reach Past 3 Meters

Mellanox DAC cables typically reach significantly further than competitors’ DAC cables, which often just barely achieve the IEEE standard of 3 meters at a BER of 1E-12. Without using FEC on the host, Mellanox DAC cables can reach as far as 5 meters (16 feet), enough to span 3-4 server racks. Competitor cables limited to 3 meters force a switch to more expensive AOCs or optical transceivers beyond that reach.

Note: The use of FEC, which types of FEC, cable thickness, and cable lengths are currently hotly contested subjects in the 25G industry and the IEEE, with no firm decisions yet – so stay tuned!

Zero Latency Delays

InfiniBand markets are much more stringent about signal quality than Ethernet markets, as InfiniBand systems are all about minimizing latency. So InfiniBand markets avoid the use of FEC, which can cost 120ns each way to clean up data errors. Since DAC cables have no electronics or optical-to-electrical conversion in the data path, as optical devices do, DAC latency is near zero. In big data centers, latency adds up across all the interconnects the data has to pass through, so minimizing it is of key importance to operators.

Learn from DAC Disasters

Many “inexpensive” DAC cables use shoddy manufacturing techniques, sample testing (maybe 1 in 10 cables tested) and less electrical shielding to save costs. The result, in many installations, is the dreaded “DAC Disaster”, where going “cheap” becomes really expensive once you factor in system down-time and chasing down intermittent signal losses and link drops from low-quality cables. Link drops have even occurred from simply moving a cable a few inches to read the port number on the switch: the signaling is already at its margin limit, the shielding at the bend in the cable opens up, signal leaks out, and the link drops. Just try to diagnose that problem! Some installations have had to completely replace their DAC cabling as a result of “going cheap”. Mellanox DAC cables offering a BER of 1E-15 without host FEC enabled (vs. 1E-12 with host FEC enabled) have a lot of signaling margin left to absorb signal losses and random or burst-mode noise.

“Some things you just don’t go cheap on.

Parachutes, eye surgeons, and DAC cables!”

Every Cable Tested in Real Systems

As Mellanox is also a switch and network adapter systems company, we test every DAC cable in real switching and adapter systems, 32-48 at a time, for extended times at the elevated temperatures found in actual system deployments. This is unlike most competitors, who typically test one cable at a time (or only a sample) on a technician’s bench for a few minutes using manual labor and expensive test equipment.

All of the cables are tested to a BER of 1E-15 – thousands of times better than competing Ethernet cable suppliers. So there is a lot of spare signal margin in the cables, rather than “just barely qualifying and operating at the edge” as many competitor cables often do.

Closing Thoughts

Some buyers attempt to shave a few dollars by building “Frankenstein” systems from multiple vendors’ equipment, but they often end up paying big time in qualification, maintenance and reliability. In e-commerce applications, even one minute of down-time can be very costly.

The combination of high-quality cable materials, cables designed and manufactured by Mellanox, testing in real systems, and a minimum standard of 1E-15 BER makes Mellanox LinkX™ cables a preferred choice for high-speed, critical systems applications at blazing 25G line rates – which covers just about all applications!

If you can find any other way to interconnect switches and network adapters using DAC cables, we’d like to hear about it! DAC cables are a tool in the networking tool kit, and it’s important to understand their advantages and limitations. In my next few blogs, I’ll talk about AOCs and optical transceivers for connecting servers and switches in breakout and straight interconnect schemes.


About Brad Smith

Brad is the Director of Marketing at Mellanox for the LinkX cables and transceivers business, based in Silicon Valley and focusing on the hyperscale, Web 2.0, enterprise, storage and telco markets. Previously, Brad was Product Line Manager for Intel’s Silicon Photonics group for the CWDM4/CLR4 and QSFP28 product lines and ran the 100G CLR4 Alliance, and was Director of Marketing & BusDev at the OpSIS MPW Silicon Photonics foundry. He was President/COO of LuxSonar Semiconductors (Cirrus Logic) and co-founder and Director of Product Marketing at NexGen, an X86-compatible CPU company sold to AMD, now the X86 product line. Brad also has ~15 years in technology market research as Vice President of the Computer Systems group at Dataquest/Gartner and as VP/Chief Analyst at the RHK and Light Counting networking research firms. Brad started his career at Digital Equipment near Boston with the VAX 11/780 and has served as CEO, president/COO and on the board of directors of three start-up companies. Brad has a BSEE degree from the University of Massachusetts and an MBA from the University of Phoenix, and holds two optical patents.
