All posts by Brad Smith

About Brad Smith

Brad is the Director of Marketing for the LinkX cables and transceivers business at Mellanox, based in Silicon Valley, focusing on the hyperscale, Web 2.0, enterprise, storage and telco markets. Previously, Brad was Product Line Manager for Intel’s Silicon Photonics group for the CWDM4/CLR4 and QSFP28 product lines and ran the 100G CLR4 Alliance. He was Director of Marketing & Business Development at the OpSIS MPW Silicon Photonics foundry, President/COO of LuxSonar Semiconductors (Cirrus Logic) and co-founder & Director of Product Marketing at NexGen, an X86-compatible CPU company sold to AMD and now the basis of its X86 product line. Brad also has ~15 years in technology market research as Vice President of the Computer Systems group at Dataquest/Gartner and as VP/Chief Analyst at the RHK and LightCounting networking research firms. Brad started his career at Digital Equipment near Boston with the VAX 11/780 and has served as CEO, President/COO and on the boards of directors of three start-up companies. Brad holds a BSEE from the University of Massachusetts, an MBA from the University of Phoenix and two optical patents.

Mellanox @ OFC 2018: Mellanox Live Demo 3-meter 200G & 400G DAC Cables

The data center industry’s biggest conference centered on high-speed interconnects, the Optical Fiber Communication Conference (OFC), was held in San Diego, CA, March 13-15, with about 15,000 attendees.

In the Mellanox booth, we showed live demonstrations of our 200G and 400G Direct Attached Copper (DAC) interconnect product line.

In this section of the booth we showed our 200G/400G DAC product line which is based on 50G PAM4 signaling and consists of:

  • 200G QSFP56 to QSFP56, 4x50G PAM4
  • 200G QSFP56 to dual 100G QSFP56, 4x50G PAM4 split to 2x50G PAM4
  • 400G QSFP-DD to QSFP-DD, 8x50G PAM4
  • 400G QSFP-DD to dual 200G QSFP56, 8x50G PAM4 split to 4x50G PAM4

The LinkX DAC portfolio offers just about every combination of 100G, 200G and 400G copper DACs using the new 50G PAM4 modulation scheme and the new QSFP56 and QSFP-DD form-factors. 200G is likely to be priced between 100G and 400G, providing a more gradual step to faster networking that better matches customer needs, since the jump from 100G to 400G is large and entails many signaling and system changes.


100G Forever!

The 100G rate is likely to be around for a long time, starting out at 4x25G NRZ, transitioning to 2x50G PAM4 and eventually ending up at 100G PAM4 per lane, with 4x100G (400G) and 8x100G (800G) PAM4-based links. The QSFP56 100G ends will enable 100G to the server in a traditional 4-channel QSFP form-factor, which is slightly shorter than the QSFP-DD and considerably smaller than the OSFP form-factor. Additionally, QSFP56 switch and network adapter ports are backwards compatible with QSFP28s.

200G QSFP56 & 400G QSFP-DD Straight and Y-Splitter Static DAC Display


200G/400G DAC Live Demo Using Ixia System


Dual 26 AWG

The 400G DACs use dual 26 AWG copper wires to enable easy bending of the cable. This removed all doubt that 3-meter, 26 AWG cables can be used in a QSFP-DD form-factor, something several skeptics said could not be done and would require shorter reaches and thinner copper cables, or a new form-factor such as the OSFP. QSFP-DD ports are backwards compatible with QSFP28, which is very important for providing a smooth upgrade path and links with older systems. The dual cable also makes it very easy to split 200G into dual 100G, and 400G into dual 200G, cables.


Dual 26AWG Cables Each with 4-Channels Enables Easy Bending


Power, Latency and Bit Errors

DAC cables form a direct electrical connection between switches and network adapters. Since there are no optics or electronics in the data path, there is no power consumed or latency added in converting electrical signals to optics and back, and astoundingly low bit error ratios can be achieved because there are no optics or electronics to add bit errors.


Near Zero Power Consumption

Since there are no optics or electronics in the data path, there is no power consumption. Jumping to 400G AOCs or transceivers will require 7-15 Watts of power per port! People forget that data center equipment operates 24×7 for 3-5 years, so even a few Watts of power consumption really adds up – for every cable, and thousands of cables may be deployed.

“DAC cables consume zero power.

Upgrade to optics at 7-15 Watts per port times 32 ports and boom,

you just turned on the equivalent of a hair dryer on low!”
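To put the pull-quote in perspective, here is a back-of-the-envelope Python sketch of the Opex difference. The 7-15 Watts per port, 32 ports, 24×7 operation and 3-5 year figures come from the text above; the $0.10/kWh electricity rate is purely an illustrative assumption:

```python
# Rough Opex estimate for optical ports running 24x7; DAC draws ~0 W.
# 7-15 W/port, 32 ports and 3-5 years are from the text; $/kWh is assumed.

def optics_energy_cost(watts_per_port, ports, years, usd_per_kwh=0.10):
    hours = years * 365 * 24                      # 24x7 operation
    kwh = watts_per_port * ports / 1000 * hours   # total energy in kWh
    return kwh, kwh * usd_per_kwh

low_kwh, low_usd = optics_energy_cost(7, 32, 3)     # best case
high_kwh, high_usd = optics_energy_cost(15, 32, 5)  # worst case
print(f"7 W x 32 ports over 3 years:  {low_kwh:,.0f} kWh (~${low_usd:,.0f})")
print(f"15 W x 32 ports over 5 years: {high_kwh:,.0f} kWh (~${high_usd:,.0f})")
```

Even before counting the extra cooling load, a single 32-port switch upgraded to optics burns thousands of kWh over its service life; a DAC-cabled one burns essentially none.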

Near-Zero Latency

DAC cables also offer near-zero latency, which is important for getting data through the link as fast as possible. Low latency is especially important in server-switch-memory links, as this is not only the highest-traffic area but also carries the highest-value traffic, since it is where all the computation occurs. Mellanox is the leader in InfiniBand networking, where the ultimate in low-latency links is demanded. We have transferred this design and manufacturing knowledge to the Ethernet space to build near-zero-latency DAC cables.


At 400G, Latency is Getting Much Worse

At 100G (4x25G NRZ), the latency delay from the host RS-FEC is about 120ns to compute and correct bit errors. Ethernet-based 400G systems use the KP4 FEC standard, which can induce up to 250ns of latency delay to compute and correct errors. This occurs in each direction, so a single round trip has 500ns of built-in latency delay. By using very high-quality 100G DAC cables from Mellanox at <3-meter lengths, FEC can be turned off in the host, saving considerable power and latency in each port.
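The arithmetic above can be sketched in a few lines of Python. The 120ns and 250ns per-direction figures come from the text; the multi-hop example is an illustrative assumption:

```python
# Per-direction FEC latency accumulates in both directions and on every hop.
# 120 ns (100G RS-FEC) and 250 ns (400G KP4 FEC) are from the text;
# the 3-hop example path is an illustrative assumption.

RS_FEC_NS = 120   # 100G (4x25G NRZ) host RS-FEC, per direction
KP4_FEC_NS = 250  # 400G KP4 FEC, per direction

def round_trip_fec_ns(per_direction_ns, hops=1):
    # FEC is computed in each direction on every hop of the path.
    return 2 * per_direction_ns * hops

print(round_trip_fec_ns(KP4_FEC_NS))          # 500 ns, single-hop round trip
print(round_trip_fec_ns(KP4_FEC_NS, hops=3))  # 1500 ns over a 3-hop path
print(round_trip_fec_ns(RS_FEC_NS))           # 240 ns at 100G, for comparison
```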

“Half a microsecond savings on a

zillion server-to-memory I/O transactions in
thousands of DAC cables can really add up

and zero power consumption is very hard to beat!”


Very Low Bit Error Ratio (BER) = High Quality

In our OFC 2018 booth, Mellanox demonstrated 400G dual 26 AWG DAC cables operating at a BER of 1E-10 continuously for four days! This is an incredibly low BER for a 3-meter cable operating with 8 channels of 50G PAM4. Turning the host FEC off saves considerable power and latency from the FEC circuitry. With FEC turned on, the time between bit errors might be measured in days instead of seconds, as with competitive offerings. The IEEE currently requires FEC on for 400G links but may relax the standard in future years, as it recently did for very short 25/100G lengths.

BER is the measure of the number of bit errors compared to the number of bits transferred. It is also a measure of the quality of the cable’s design and construction.

At 4x25G NRZ (100G), competitive DAC cables’ BER typically starts out around 5E-5 and relies on the host FEC to clean up the cable errors to meet the IEEE minimum standard of 1E-12. When the number of errors exceeds the FEC’s ability to correct them, the data has to be retransmitted. This is when cheap cables become very expensive.
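For a feel of what these BER figures mean in practice, here is a simple Python sketch converting a BER and line rate into the mean time between bit errors, assuming errors are independent; the rates and BER values are taken from the text:

```python
# Mean time between bit errors for a given BER and line rate, assuming
# errors are independent. BER values and line rates come from the text.

def seconds_between_errors(ber, line_rate_gbps):
    bits_per_second = line_rate_gbps * 1e9
    return 1.0 / (ber * bits_per_second)

# Typical competitive cable BER at 100G, before the host FEC cleans it up:
print(seconds_between_errors(5e-5, 100))   # millions of errors per second
# IEEE minimum standard of 1E-12 at 100G:
print(seconds_between_errors(1e-12, 100))  # one error every 10 seconds
# FEC-off InfiniBand-grade rating of 1E-15 at 100G:
print(seconds_between_errors(1e-15, 100))  # one error every ~2.8 hours
```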

Mellanox’s low BER is achieved by using high-speed design techniques, special materials and robotic assembly. Mellanox is the leader in low-latency InfiniBand switching systems. These systems require FEC to be off, as it induces too much latency, so DACs have to be tested to a BER rating of 1E-15 with FEC off, versus the IEEE Ethernet standard of 1E-12 with FEC on. Mellanox applies these design and manufacturing techniques to minimizing BER at 50G PAM4 for both 200G and 400G.

The combination of very low BER with no FEC enabled means data center operators can run server-switch-memory links at 3 meters or more without the use of FEC. This saves considerable power in the switches and network adapters from computing the FEC, and 250ns of latency delay in each direction. While this approach now works with 25G/100G signaling as an industry standard, it may be a while before the IEEE standardizes it for 400G.

There is considerable difference among DAC cables that all claim to meet low BER ratings. Some “barely” make the IEEE spec requirement and may fail under stress or even normal cable bending. High bit errors mean forced retransmits, losing considerable time. Mellanox’s cables are designed to operate well within required parameters, offering considerable signal-integrity margin and protecting high-value data.


The Bottom Line

DAC cables have the simplest design, the fewest components to fail (high reliability) and the lowest acquisition cost (Capex), coupled with zero power consumption operating 24×7 for 3-5 years (Opex), with a side of zero latency thrown in. After these three, the important features are the lowest bit error ratio (BER) and possibly deploying DAC without FEC to save more power and latency. At enormous data rates of 100, 200 and 400G, low-quality DAC cables can become extremely expensive when they fail or, worse, fail intermittently!

25/50/100G has clearly hit the mainstream, and Mellanox offers a full line of DAC, AOC and transceiver products for any data center application. 200G is powering up for 2018, with 400G beyond. The new PAM4 modulation scheme, 8-channel architectures and new form-factors will bring many new capabilities and changes to systems infrastructure. Stay tuned for more LinkX cables and transceivers product developments; 2018 promises several new product announcements and capabilities.


Mellanox @ OFC 2018: Mellanox Live & Static Demos 100G, 200G, 400G DAC and AOC Cables and Transceivers

The data center industry’s biggest conference centered on high-speed interconnects, the Optical Fiber Communication Conference (OFC), was held in San Diego, CA, March 13-15, with 15,000 attendees.

In the Mellanox booth, we showed our full line of 25G NRZ-based 25/50/100G DAC and AOC cables, along with multi-mode and single-mode optical transceivers, both in a static display and operating live in a switching system rack. We also showed live demonstrations of our 50G PAM4-based 200G and 400G Direct Attached Copper (DAC) interconnect product line. Lastly, we showed a live demo of our 200G SR4 QSFP56 VCSEL-based transceiver and a static demo of our 400G SR8 in an OSFP form-factor.

Below is a photo tour of what was on display in the booth.

Mellanox Booth at OFC 2018


25/50/100G Spectrum Switches, ConnectX Adapters, Cables and Transceivers Display


On the left, we showed our Spectrum Ethernet series of 25/50/100G network switches:

  • SN2700 32-port 100G QSFP28, 1RU
  • SN2410 8-port 100G QSFP28 and 48-port 25G SFP28, 1RU
  • SN2100 16-port 100G QSFP28, ½ width 1RU
  • SN2010 4-port 100G QSFP28 and 18-port 25G SFP28, ½ width 1RU


Ethernet Storage Fabric Switch

The half-width SN2100 and SN2010 switches enable mixing and matching different 25G SFP28 and 100G QSFP28 combinations in the same 1RU rack slot to best fit the application. Mellanox’s half-width SN2010 Top-of-Rack (TOR) switch is the best switch for storage and hyperconverged networks.

The new SN2010 is designed for low-latency Ethernet Storage Fabric (ESF) networks and offers four 100G QSFP28 uplinks and eighteen 25G SFP28 downlinks to NVMe flash systems, all in a half-width 1RU form-factor. It also operates at 10G/40G speeds.

In the middle of the display, we showed the ConnectX-3, -4 and -5 series of network adapters in 25G NRZ-based SFP28 and QSFP28 form-factors, and the 50G PAM4-based ConnectX-6 supporting dual 200G QSFP56 ports of 4x50G PAM4.

Mellanox ConnectX-6 Dual 200G QSFP56 Network Adapter


Mellanox-designed Switch, Adapter and Transceiver ICs

In the background we displayed the Mellanox-designed CMOS switch and network adapter ASIC wafers and the BiCMOS multi-mode transceiver IC wafers in 8-inch and 12-inch formats.

Mellanox is the only company that builds not only switches, network adapters, transceivers and DAC cables but also the ICs inside them. This enables Mellanox to offer end-to-end systems with performance, signal integrity and power consumption tuned in ways that systems assembled from multiple companies cannot achieve.

Mellanox-Designed Spectrum, ConnectX and LinkX Multi-Mode Transceiver 8” & 12” IC Wafers


25G SR & 100G SR4 & AOC Open Transceiver Display with IC Wafer


100G SR4 Transceiver Module Showing Mellanox-Designed Tx Rx ICs


25/50/100G Live Demo Rack

Switches, network adapters, cables and transceivers were all shown running live in the system rack and illustrated the complete end-to-end capabilities of Mellanox.

  • Black cables are Mellanox-designed DAC cables and 1:2 and 1:4 splitters
  • Aqua cables are Mellanox-designed AOCs, and 1:2 and 1:4 splitter AOCs and multi-mode SR/SR4 transceivers
  • Yellow fibers are attached to LR, LR4, PSM4 and CWDM4 single-mode transceivers.

Mellanox Switches, Network Adapters, Cables and Transceivers Live Rack Demo


Pictorial of Mellanox Switches, Network Adapters, Cables and Transceivers


200G/400G DAC and SR4

On the right side of the booth, we showed two live demos of next-generation products using the next-generation 50G PAM4 modulation scheme: a 200G and 400G DAC cable display with a QSFP-DD 8x50G PAM4 DAC running in an Ixia system. The right side of the display shows the 200G 4x50G PAM4 SR4 in QSFP56 running live and displaying the PAM4 eye diagram. Also on display was a 400G SR8 OSFP transceiver, which integrates two 200G SR4 transceivers into a single 8-channel OSFP package.

200G/400G DAC and SR4 Live Demo & SR8 Display


The Bottom Line

25/50/100G has clearly hit the mainstream, and Mellanox offers a full line of DAC, AOC and transceiver products for any data center application. 200G is powering up for 2018, with 400G beyond that. The new PAM4 modulation scheme, 8-channel architectures and new form-factors will bring many new capabilities and changes to systems infrastructure, and 200G is a much smaller and smoother step. Stay tuned for more LinkX cables and transceivers product developments; 2018 promises several new product announcements and capabilities.


Introducing the DynamiX QSA™ Family of Port Adapters

Connects Any SFP DAC, AOC or Transceiver to a QSFP Port

The DynamiX QSA™ family of port adapters is the answer to many data center professionals’ prayers because it resolves linking issues between the different line rates and form-factors used in switches and network adapters, such as SFP and QSFP. In addition, it is inexpensive and a lifesaver in many system upgrade applications.

System Problems:

The trend toward faster line rates and new MSA form-factors continues to expand. The market is flooded with a jumble of buzzwords and products loaded with technical booby traps, and mistakes are becoming very expensive. Systems designers are discovering that the variety of interconnect products is increasing exponentially. All of this causes IT professionals increasing irritation and no small amount of confusion as they try to align these elements, especially when attempting to seamlessly upgrade to new equipment while still supporting legacy systems.

Switches and network adapters are currently offered in QSFP+, QSFP28, SFP+ and SFP28 form-factors, based on 1G, 10G or 25G-per-lane rates. So, IT managers can easily find themselves with a 25G QSFP-based switch and a 10G-per-lane SFP-based network adapter, yet have no clue how to link them up. Different line rates and different form-factors present a challenge, but the answer is: DynamiX QSA™ port adapters.

LinkX DynamiX QSA™ Solutions:

The DynamiX QSA™ family of port adapters is designed, patented and manufactured by Mellanox. The adapter fits into a QSFP 4-channel port in a switch or network adapter and accepts an SFP-based device inserted into the adapter end. This passes a single channel through to the SFP-based device inserted into the larger QSFP port. Faster-speed ports are backwards compatible with slower line-speed devices:

  • QSA+ (10Gb/s) supports 1Gb/s and 10Gb/s SFP+ devices
  • QSA28 (25Gb/s) supports 1Gb/s, 10Gb/s and 25Gb/s, and both SFP+ and SFP28 devices

An example of linking a 10G SFP+ transceiver to a 100Gb/s QSFP28 switch using SFP+, SFP28 and QSFP28 devices with the DynamiX QSA™ adapter


Product Specifics:

  • Patented design manufactured by Mellanox (US Patent 7,934,959)
  • Available in two speed versions, 10G and 25G
  • Contains an EEPROM to tell the host what the device configuration is
  • Operates independently of the SFP device inserted
  • Passes through the SFP device’s configuration EEPROM information
  • Consumes no power – passive except for configuration setup
  • Adds no signal latency
  • Can be used in QSFP+ and QSFP28-based switches and network adapters
  • Supports a wide range of SFP-based DAC and AOC cables plus both multi-mode and single-mode optical transceivers:
    • Cables
      • CR: DAC copper SFP (3-5m)
      • AOCs: SFP multi-mode (100m)
    • Transceivers
      • SR: SFP multi-mode transceiver (100m)
      • LR: SFP single-mode transceiver (10km)
      • SX: 1G SFP+ multi-mode transceiver (500m)
      • Base-T: 1G SFP converter using CAT-5 copper UTP cables

DynamiX QSA Supported SFP Devices
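The compatibility rules above can be captured in a small lookup table. The sketch below is illustrative only; the names and structure are mine, not a Mellanox API:

```python
# Sketch of the QSA backwards-compatibility rules described above.
# Adapter names match the text; the dict/function are illustrative.

QSA_SUPPORTED_RATES_G = {
    "QSA+": {1, 10},       # 10G adapter: 1G and 10G SFP+ devices
    "QSA28": {1, 10, 25},  # 25G adapter: SFP+ and SFP28 devices
}

def can_link(adapter, device_rate_g):
    """True if the SFP device's line rate is supported by the adapter."""
    return device_rate_g in QSA_SUPPORTED_RATES_G.get(adapter, set())

assert can_link("QSA28", 10)     # 10G SFP+ transceiver in a QSA28: OK
assert not can_link("QSA+", 25)  # 25G SFP28 in a QSA+: not supported
```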


The new DynamiX QSA™ QSA28 model sports a black, low-mass, small-profile, easy-pull tab that is becoming more and more popular in very crowded data center racks. The low-mass loop design enables freer airflow through crowded systems and does not flap in the wind like larger tabs.

In high-density system racks, one can visualize over 100 cable ends and transceiver tabs flapping in the wind. Besides restricting airflow, the larger tabs also create a lot of noise and, eventually, reliability issues. The tiny looped design eliminates these problems.

DynamiX QSA™ QSA28 Model Low-mass, Looped Tab Design Versus Traditional Flat and Long Tab


  • The new DynamiX QSA™ family of port adapters resolves many system upgrade issues in linking different form-factors and line rates to switch and network adapter ports.
  • These Mellanox solutions enable a smooth transition when upgrading to newer, faster systems while continuing to support slower-speed equipment.
  • They are inexpensive, easy to use, and plug & play.


Supporting Resources:

Contact your Mellanox sales representative for availability and pricing options, and stay tuned to my blog for more interconnect news and tips.

Sign up for free ebook download here: LinkX Cables and Transceivers ebook


About Mellanox & LinkX

Mellanox Technologies, Ltd. (NASDAQ: MLNX) is a leading supplier of optical transceivers and high-performance, end-to-end smart interconnect solutions for data center servers and storage systems. Mellanox offers a full line of 10G to 200Gb/s cables and transceivers for hyperscale, enterprise, telecom and storage data center applications for the Ethernet and InfiniBand protocols.

  • LinkX 25G/50G/100G/200Gb/s DAC & AOC cables and transceivers
  • New Quantum switches with 40 ports of 200Gb/s QSFP56 in a 1RU chassis
  • New ConnectX®-6 adapters with two ports of 200Gb/s QSFP56
  • Silicon Photonics 100Gb/s Optical Engines and IC components

Download Free LinkX Ebook on DAC, AOCs and Transceivers for High-Speed Data Centers

What’s New?

Mellanox has introduced a new ebook that describes the latest developments in Direct Attach Copper (DAC) cables, Active Optical Cables (AOCs) and both multi-mode and single-mode optical transceivers for use in modern high-speed data centers. With 33 pages of descriptive text and diagrams, this ebook is a must-read for anyone involved in high-speed networking.

The new Mellanox ebook tries to simplify all the various buzz-words and acronyms and boil it all down to simple-to-understand concepts. It is designed as a quick-read, is blessedly equation free and includes a variety of artwork and photos to make complex issues more understandable and visually interesting.

With a focus on the latest 25Gb/s line rates, the ebook examines the latest copper and optical interconnect technologies for linking top-of-rack, leaf and spine switches to servers, storage and network appliances via network adapters. These design techniques have recently been adopted by many hyperscale data center builders, and they are rippling throughout the data center industry, migrating to small and large enterprise data centers and even new telecom data centers, as they represent the most cost-effective high-speed interconnect solutions available.

The ebook focuses on the SFP28 and QSFP28 form-factors and interconnect reaches from 0.5 meters to 10 km in both copper and optics which covers the majority of data center links.

About Mellanox & LinkX

Mellanox Technologies, Ltd. (NASDAQ: MLNX) is a leading supplier of both Ethernet and InfiniBand switches, network adapters, cables and transceivers for 10G/40G and 25G/100G systems. Mellanox is one of the few companies that designs and builds switching systems, adapters, cables and transceivers, including the silicon, and offers complete “end-to-end” interconnect solutions for modern data center servers and storage systems.

In addition to manufacturing complete network switches, network adapters, cables and transceivers, Mellanox also designs its own switch and network adapter CMOS silicon ICs, BiCMOS transceiver ICs and Silicon Photonics technologies.

Supporting Resources:

Contact your Mellanox sales representative for availability and pricing options, and stay tuned to my blog for more interconnect news and tips.

Sign up for free ebook download here: LinkX Cables and Transceivers ebook

LinkX is the Mellanox trademark and name for its cables and transceivers product line

Mellanox Showcases 100G CPRI Transceiver, Optical Engines and Silicon Photonics Wafers at ECOC and CIOE

With networks and data centers going wireless and into the clouds, Mellanox is announcing volume production of a new 96G/100Gb/s SR4 multi-mode, 4-channel transceiver with one of the top worldwide wireless infrastructure suppliers. The solution features an extended temperature range to support outside-plant optical front-haul links in fifth-generation (5G) mobile networks, and it supports dual line rates and both the Ethernet and CPRI (Common Public Radio Interface) protocols, which are used primarily between remote radio heads (RRH) and baseband units (BBU).

Solution Highlights:

  • Based on 100G SR4 Ethernet multi-mode transceiver shipping in high volume
  • Employs Mellanox-designed transceiver ICs
  • Supports dual line rates and protocols: 24.33G CPRI and 25.78G Ethernet
  • Offers an extended temperature range
  • Available now in volume shipments

“Front-haul” links are so named because traffic is carried from the cellular antennas to the base station controllers at the front of the wireless network. Since wireless antennas are generally installed in outside-plant environments, the SR4 CPRI transceiver has a wider temperature rating.

With trials beginning in China and around the world in late 2018, and with full deployments estimated to begin in 2019, this new transceiver is suitable for short-reach, outside-plant optical transmission between remote radio heads (RRH) and baseband units (BBU). Multi-mode transceivers create the lowest-cost optical links and can be used both inside base stations and for short reaches of up to 100 meters running up to the top of antenna structures. More expensive single-mode transceivers are used for longer reaches.

CPRI/fiber links are rapidly replacing microwave systems and are positioned to play well with Centralized Radio Access Networks (C-RAN) and software-defined-everything in the future.

What’s Driving the Move to 5G?

All of these demand factors are hitting Internet systems at once, driving data traffic through the roof; they are at the root of the multi-acre hyperscale data centers that seem to be popping up everywhere:

  • IoT: Internet connections are expanding into nearly everything. The Internet of Things (IoT) promises to link everything from power and water grids, autos, homes, appliances, tools, pets and watches to smart water bottles!
  • UHDTV: 4K and 8K resolution HDTV is a reality now. PCs, HDTVs, even cell phones support 4K video today. 8K coverage of the 2018 World Cup in Russia and the Tokyo 2020 Olympics will drive network demand. 4K offers not only four times the pixel resolution of standard 2K HDTV but many advanced features as well.
  • 4K cellphone photos – already people are texting 4K cat photos and videos!
  • Video Streaming – TV and Movies; video calls; drones with 4K video cameras
  • Internet connected autos
  • Virtual and augmented reality; real-time gaming
  • Everything available from anywhere – on cellphones, pads and PCs


When Will 5G Be Relevant?

The bottom line is that carriers and everyone in the content supply chain know there will be big money in 5G and new revenue streams from new 5G applications and capabilities. The pressure is on to accelerate deployments, and standards battles among alternative approaches are poised to delay everything.

The new fifth-generation wireless network, scheduled to come online beginning in 2019, will provide a much faster and more efficient connection to cellular wireless equipment. Offering more than just a faster line rate, 5G will improve connection quality and range. Preliminary tests have shown data rates as high as 10Gb/s to the cell phone – a game changer for applications!

Mellanox 100G SR4 CPRI Transceiver Features

The newly announced 100Gb/s SR4 transceiver for wireless networks has its genesis in the highly popular, standard temperature version of the 100G SR4 for hyperscale, Web 2.0 Cloud and enterprise data centers. Unique features include:

  • 4-channels of 24.33G or 25.78G supporting CPRI 97G or Ethernet 100G
  • Standard QSFP28 form-factor compliant with SFF-8665
  • MPO multi-mode optical connector, 8-fibers
  • IEEE CAUI-4 electrical specification
  • Selectable Tx and Rx retiming (CDRs)
  • Supports digital diagnostic monitoring of supply voltage, temperature, transmit/receive power, and laser bias.
  • Boasts In-Service Firmware Upgrade (ISFU), which enables upgrading the transceiver firmware while running traffic; the module is also hot-pluggable
  • Tested extensively in Mellanox switching systems
  • Bit Error Ratio (BER) better than 1E-15, about 1,000 times fewer bit errors than competitor’s products

The CPRI SR4 transceiver is built with Mellanox-designed transceiver control ICs for the VCSEL laser drivers and TIA receiver amplifiers. The close matching of driver and receiver electronics with the optics enables the SR4 to boast 1.5W typical power consumption without retiming and 2.2W with retiming – some of the lowest power ratings in the industry. With power budgets skyrocketing, every Watt saved at the component level translates into several Watts of system cooling and the related power consumed driving fans and air conditioning equipment. Powering remotely located equipment is very expensive.

Besides designing and building the entire transceiver, Mellanox also builds switching systems, and the transceivers are tested in real switching systems, such as the 32-port 100G SN2700 switch, rather than simulated on a test bench as most competitors do. Building our own electronics and transceivers, and testing them in real systems, guarantees they will work out of the box at a bit error ratio (BER) of 1E-15. This is about 1,000 times fewer bit errors than the IEEE industry standard, which starts at 5E-5 and uses forward error correction (FEC) to achieve 1E-12. The additional BER margin gives components headroom in the difficult operating conditions of outside plant.

Three Converging Factors Changing CPRI Networks:

  • Line Rates Converge: For historical reasons dating to long-reach coaxial copper, CPRI line rates have not been aligned with Ethernet rates. They evolved from 3G, 6G and 12G to today’s 24.33G, which is now very close to the Ethernet 25.78G rate.
  • Adoption of Four Channels: Most CPRI networks use single-channel SFP transceivers, but the CPRI committee is now considering 4-channel modules to increase port density for the huge influx of traffic, simplify fiber management and use one optical connector for four channels instead of one.
  • QSFP Form-factor: Four channels of 24.33G, about 97.3G in aggregate, is too close to the Ethernet 100G rate to ignore. Hence, the CPRI committee has considered using the 4x25G (100G) QSFP28 form-factor of standard Ethernet transceivers, now available in huge volumes.

These factors are leading CPRI network builders to adopt standard 4x25G (100G) QSFP28 Ethernet transceivers already produced in huge unit volumes, which should enable lower network costs and faster deployments. Dual-rate capability makes it a double hit.
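The rate arithmetic driving this convergence is easy to check; the 24.33G CPRI and 25.78G Ethernet lane rates come from the text:

```python
# Four CPRI lanes land within a few percent of the Ethernet 100G
# (4x25.78G) aggregate, which is why one QSFP28 module can serve both.
# Lane rates are from the text.

CPRI_LANE_G = 24.33
ETH_LANE_G = 25.78

cpri_total = 4 * CPRI_LANE_G  # ~97.3G aggregate
eth_total = 4 * ETH_LANE_G    # ~103.1G aggregate
gap_pct = (eth_total - cpri_total) / eth_total * 100
print(f"CPRI 4-lane: {cpri_total:.2f}G, Ethernet 4-lane: {eth_total:.2f}G, "
      f"gap: {gap_pct:.1f}%")
```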

Bottom line:

While standards, software architectures, hardware architecture of central office-based or distributed BBU, C-RAN or not, and buzz-words are all in a high state of flux, a few things are clear:

  • A hard fiber link is needed from the antennas to base stations no matter what else happens beyond that in “software land”. Fiber has the bandwidth and the long reach capabilities. The days of microwave and copper coax wire are over.
  • Multi-mode optics is clearly the least expensive solution, making the 100G SR4 dual rate transceivers a perfect match going forward over the short reach.
  • It is also clear that both CPRI and Ethernet will play side by side roles in advanced 5G networks due to the unique capabilities of each protocol as well as the high unit volume cost advantages of Ethernet versus specialized CPRI transceivers.

Mellanox’s dual-protocol, dual-rate SR4 transceiver, built with Mellanox-designed ICs, manufactured and tested by Mellanox in Mellanox systems, and optimized for very low power consumption and low bit error ratios, is a perfect low-cost choice for CPRI/Ethernet networks.


Supporting Resources:

LinkX is the Mellanox trademark and name for its cables and transceivers product line


Mellanox Announces 100G CPRI Transceiver and Accelink PSM4 Optical Engine Partnership

On display at Two Big Tradeshows CIOE & ECOC

Mellanox is showcasing its LinkX cables and transceivers, with ConnectX adapters and Ethernet and InfiniBand switches at the September CIOE and ECOC trade shows.  ECOC and CIOE are the two biggest interconnect events of the year besides the Optical Fiber Conference (OFC) in March.

  • CIOE: China International Opto-electronics Expo, Shenzhen, China Sept 6-9
  • ECOC: European Conference on Optical Communication, Gothenburg, Sweden, Sept 18-20

Mellanox announces:

  • 100G SR4 CPRI transceiver: a wireless front-haul, multi-mode transceiver with an extended temperature rating of -10°C to +75°C, now entering volume production. With trials beginning in China and around the world this year, and full deployments estimated to begin in 2019, this new transceiver is suitable for short-reach, outside-plant optical transmission between remote radio heads (RRH) and baseband units (BBU).

The CPRI transceiver is targeted at the next-generation 5G wireless infrastructure build-out, and the potential unit volumes are staggering. 5G will enable ~10Gb/s to your cell phone, virtual reality, IoT and 4K video – four times the resolution of today’s HDTV.

  • Mellanox/Accelink partnership: Using our 100G PSM4 Silicon Photonics optical engine, Accelink will build 1550nm PSM4 transceivers. Accelink is a leading Chinese opto-electronics components supplier with one of the most comprehensive end-to-end product lines and one-stop solutions in the industry.

The 1550nm PSM4 relationship will create multiple industry sources for PSM4 transceivers based on Mellanox’s Silicon Photonics optical engine and transceiver ICs.


Visit the Mellanox CIOE booth (#1A22-1, Hall 1) and ECOC booth (531), where we will show:

  • Full line of 100Gb/s transceivers for hyperscale and datacenter applications
  • LinkX 25G/50G/100Gb/s DAC & AOC cables and 100G SR4 & PSM4 transceivers
  • New Quantum switches with 40 ports of 200Gb/s QSFP28 in a 1RU chassis
  • New ConnectX®-6 adapters with two ports of 200Gb/s QSFP28
  • Silicon Photonics Optical engines and components

Supporting Resources:

  • Learn more about LinkX cables and transceivers: LINK
  • Learn more about Mellanox complete 100GbE switches and adapters: LINK
  • Follow Mellanox on: Twitter, Facebook, Google+, LinkedIn, and YouTube
  • Mellanox 25G/100G SR/SR4 transceivers: BLOG
  • Mellanox 100G PSM4 transceiver blog on: BLOG



Why So Many Types of High-speed Interconnects?

Rationale behind the myriad of different interconnect technologies and products

Creating high-speed interconnect links between servers, storage, switches and routers involves many different types of technologies in order to minimize the cost involved. With large data centers buying tens of thousands of devices, costs add up quickly. A 3-meter DAC cable is priced at approximately $100, while a 10km-reach single-mode transceiver runs $4,000-$5,000, with AOCs and multi-mode transceivers priced in between.

Today, most modern data centers have zeroed in on the SFP and QSFP form-factors for use with DAC and AOC cabling and optical transceivers. By focusing on only a few types and ordering in high unit volumes, greater economies of scale can be achieved – not only in the cables and transceivers, but also in all the equipment they link to, such as switches and the network adapters that may reside in servers and in HDD, SSD and NVMe storage arrays. Add to this the spare parts that also need to be stocked.

Currently, the modern data center uses SFP and QSFP (“+” for 10G and “28” for 25G) in DAC, AOCs and both multi-mode and single-mode transceivers. DAC uses copper wires. Parallel 4-channel AOCs and transceivers (SR4 & PSM4) use 8 optical fibers; single-channel transceivers and AOCs (SR, LR) use 2 fibers. CWDM4 and LR4 transceivers also use 2 fibers, multiplexing four channels into one fiber in each direction to save fiber costs over long reaches.
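To keep the fiber counts straight, the paragraph above can be restated as data. This is just an illustrative summary in Python drawn from the text, not a product specification:

```python
# Fibers per link for each module type described above (0 = copper DAC).
# The counts come from the blog text; the table itself is an illustration.
fibers_per_link = {
    "DAC":   0,  # copper twinax wires, no fiber
    "SR4":   8,  # parallel 4-channel multi-mode (4 Tx + 4 Rx fibers)
    "PSM4":  8,  # parallel 4-channel single-mode
    "SR":    2,  # single channel, duplex multi-mode fiber
    "LR":    2,  # single channel, duplex single-mode fiber
    "CWDM4": 2,  # 4 channels wavelength-multiplexed onto a duplex pair
    "LR4":   2,  # 4 channels wavelength-multiplexed onto a duplex pair
}

print(fibers_per_link["PSM4"])  # 8
```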


High-speed interconnects all strive to:

  • Implement the lowest cost links
  • Achieve the highest net data throughput (i.e., fastest data rate with least amount of data errors, data retransmissions and minimal latencies).
  • Transmit over various distances


To achieve these goals, various technologies are used, each with its own set of benefits and limitations. Ideally, data center professionals want to build all links with single-mode fiber, duplex LC connectors and single-mode transceivers: build the fiber into the data center infrastructure once and forget it, since single-mode fiber does not have the reach limitations that DAC copper and multi-mode fiber do, then upgrade the transceivers with each new transceiver advancement.

While the fibers and LC connectors are already at their lowest cost points, the problem is the single-mode transceivers, which are very complex to build, require many different material systems, and are hard to manufacture and therefore expensive. Basically, the longer the reach needed to send the data, the higher the price, as the technology gets more complicated and harder to manufacture.

Most single-mode transceivers are built using a great deal of manual labor and piece part assembly in processes designed to address the low volume telecom market. The new hyperscale data centers are ordering parts in record numbers and the piece part manufacturing method is difficult to scale up. Silicon Photonics technology attempts to use CMOS silicon IC processes to integrate many of the devices and sub-micron alignments required.

As a result, data centers often use an array of different high-speed interconnects, matching each interconnect type to specific reach requirements. DAC is the lowest cost; however, after about 3-5 meters, the wire acts like a radio antenna and the signal becomes unrecognizable.  AOCs are used from 3 meters to about 30 meters, after which installing long cables becomes difficult. More expensive multi-mode transceivers, with detachable optical connectors, can reach up to 100 meters, beyond which the large 50-um fiber core causes the signal to scatter and become unrecognizable. Some multi-mode transceivers and links (eSR4) can be engineered to 300-400 meters, but it gets a little tricky matching the special transceivers, fibers, and optical connectors in the link.

Single-mode fiber uses a tiny 9-um light-carrying core, so the signal pulse stays together over very long distances and can travel literally between continents. Parallel single-mode transceivers (PSM4) with 8 fibers can reach 500m-2km. The PSM4 MSA standard is 500m, but Mellanox’s PSM4s can reach up to 2km; about four times the reach of the PSM4 spec.

After 500 meters, the cost of 8 fibers adds up with each meter, so multiplexing the four channel signals into only two fibers is more economical over long fiber runs. CWDM4 is used for up to 2 km and LR4 up to 10 km.
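The reach-driven selection described above can be sketched as a simple lookup. The breakpoints below are the rough figures from this post; this is a sketch, not a formal selection tool, and real deployments also weigh cost and installed fiber:

```python
# Illustrative sketch: pick an interconnect class by link reach in meters,
# using the approximate breakpoints from the text above.
def pick_interconnect(reach_m: float) -> str:
    if reach_m <= 5:
        return "DAC copper"        # lowest cost, ~3-5 m practical limit
    if reach_m <= 30:
        return "AOC"               # active optical cable, to ~30 m
    if reach_m <= 100:
        return "SR4 multi-mode"    # 50-um core limits reach to ~100 m
    if reach_m <= 500:
        return "PSM4 single-mode"  # 8-fiber parallel, 500 m MSA spec
    if reach_m <= 2000:
        return "CWDM4 single-mode" # 4 channels muxed onto a duplex pair
    return "LR4 single-mode"       # up to 10 km

print(pick_interconnect(2))     # DAC copper
print(pick_interconnect(5000))  # LR4 single-mode
```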

In the chart below, moving along the bottom axis, different technologies are used as the reach gets longer – starting with DAC cables on the left and ending with 10km LR4 transceivers on the far right. Also note on the vertical axis that the faster the data rate, the shorter the reach of DAC and multi-mode optics (SR, SR4) becomes, while single-mode fiber remains largely reach independent.


Data Rate versus Interconnect Reach



All the different technologies and cable product types are designed to minimize the costs involved in building data center high-speed interconnects. While many enterprise data centers might use 5,000-10,000 devices, hyperscale builders with 2 million servers are ordering interconnects in the hundreds of thousands.

Mellanox sells end-to-end solutions, and designs and manufactures not only the switch and network adapter systems, including the silicon ICs, but also the cables and transceivers – including the VCSEL and Silicon Photonics driver and TIA ICs.  For single-mode transceivers, Mellanox has its own Silicon Photonics product line and internal wafer fab for its PSM4 and AOC transceivers.

Mellanox sells state-of-the-art 25/50/100G products in copper DAC and optical AOC cables and both multi-mode and single-mode transceivers.

Mellanox recently announced it shipped its 100,000th 100G DAC cable and 200,000th 100G transceiver/AOC module and is a leading supplier in all four interconnect product areas.


More Information

Mellanox LinkX™ Cables Connect the Scientific & Engineering Community

As many know, the circumference of the earth is 40,075 km. Doing the math, we start with the fact that Mellanox has shipped over 2 million copper DAC cables to date. As the length of wire in 2 million cables is 144,000 km, that is enough to circle the earth at the equator 3.5 times – or stretch a third of the way to the moon! Math below:


(3 wires/lane bundle) x (8 wire bundles in QSFP DAC) x (3 meters long average) x (2 million DACs) = 144,000 km
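For anyone who wants to check the arithmetic, here is the same calculation in a few lines of Python (the per-cable figures are the ones in the formula above):

```python
# Total wire shipped in 2 million DAC cables, per the formula above.
wires_per_bundle = 3       # wires per lane bundle
bundles_per_dac = 8        # wire bundles in a QSFP DAC
avg_length_m = 3           # average cable length in meters
dacs_shipped = 2_000_000

total_wire_km = wires_per_bundle * bundles_per_dac * avg_length_m * dacs_shipped / 1000

earth_circumference_km = 40_075
earth_to_moon_km = 384_400  # average earth-moon distance

print(total_wire_km)                                      # 144000.0 km
print(round(total_wire_km / earth_circumference_km, 1))   # 3.6 trips around the equator
print(round(total_wire_km / earth_to_moon_km, 2))         # 0.37 of the way to the moon
```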


This fun fact got me thinking about how Mellanox connects the scientific and engineering community to their vital research in very real and tangible ways.

Mellanox’s approach to computer networking is to keep the CPU doing the most important tasks and leave the rest to the network. This preserves the most expensive resource, the CPU, for the important computing tasks and lets the network manage the data traffic in and out of the CPU and storage sub-systems where the CPU is not needed. Mellanox calls this “Data-Centric CPU Offload” versus “CPU-centric OnLoad”.

With OnLoad architectures, the CPU is involved with simply moving data around – not actually doing computing work – and sits idle literally 40-80 percent of the time waiting for data. Every data transfer induces time delays, or latency. This is like making an appointment to ask your CEO’s permission every time you have to sharpen a pencil: lots of “busy” but not much actual “work”.

Mellanox’s switches and network adapters, along with the LinkX line of cables and transceivers, optimize the movement of data and keep it on the network, thereby enabling the CPU to do its best job. This is a fundamental theme at Mellanox, one of the factors in our success, and it applies to both Ethernet and InfiniBand protocols. The network switch and adapter ICs have billions of transistors to perform logical processing at the network level without getting the CPU involved.

Most engineering and scientific research requires massive amounts of data and iterations, but often the formula being computed is pretty simple. So, the main problem in engineering and scientific computing is moving data fast enough to keep the CPU fully fed – which Mellanox’s switches, adapters and interconnects are ideally suited to do with our CPU-Offload architectures. The ConnectX-5 Multi-Host Socket adapters can increase the ROI at the server with 30-60 percent better CPU utilization, 50-80 percent lower data transfer latency and 15-28 percent faster data throughput. All of these benefits are derived by using intelligent network adapters and switches to keep the data moving on the network and not moving in and out of the CPU needlessly.

InfiniBand Systems ROI: Switches, Adapters and Interconnects

Mellanox Ethernet and InfiniBand-based systems deliver the lowest-latency, fastest computing solutions for all kinds of engineering and scientific applications, such as aerospace, automotive electronics, molecular dynamics, genetic engineering, chemistry, weather analysis, and structural engineering.

Mellanox Systems Used to Design Mellanox Systems!

Believe it or not, Mellanox even uses its own Ethernet and InfiniBand switches, network adapters, cables and transceivers in the CAE/CAD engineering systems used to design the ICs and electronics that go inside the switches, network adapters, cables and transceivers that we sell! Think of it like the M.C. Escher drawing of a hand with a pencil drawing a hand with a pencil!

The Mellanox LinkX product line of cables and transceivers are all designed by Mellanox engineers – from the internal ICs to the complete assemblies. IC CAE systems are used to design, simulate and layout transceiver control ICs used in both multi-mode and single-mode Silicon Photonics transceivers. Optical engineering software is used to model the high-speed optics ray tracing and reflections inside the Silicon Photonics and fibers. Mechanical and Thermal CAE systems are used to design the mechanical aspects of the transceiver ends and thermal modeling. Electromagnetic design software is used to model the high-speed signals inside DAC copper cables and Silicon Photonics optical transceivers and the EMI/RFI emissions to meet industry standards. Lastly, the entire DAC, AOC, multi-mode and single-mode transceiver assemblies are all designed and modeled by Mellanox engineers.

Only a couple of actual formulas need to be computed, but there are massive amounts of data in the simulations and designs – ideally suited for Mellanox OffLoad Ethernet and InfiniBand switches, adapters, cables and transceivers.

Supporting Resources:

SFP-DD – Next Generation SFP Module to Support 100G

New transceiver MSA form-factor enables doubling the SFP bandwidth and supporting fast line rates while maintaining SFP backwards compatibility.

Recently, a group of industry suppliers gathered to form a new transceiver module form-factor Multi-Source Agreement (MSA). The agreement aims to foster the development of the next generation of the SFP form-factor used in DAC and AOC cabling as well as optical transceivers. Mellanox often leads these sorts of technology developments and is a founding member of the SFP-DD MSA, as well as both the QSFP-DD and OSFP MSAs.

While all the specs are not final yet, it’s called the SFP-DD, or Small Form-factor Pluggable – Double Density. The “double density” refers to offering two rows of electrical pins, enabling two channels instead of the traditional one channel in the SFP architecture – the smallest industry-standard form-factor available today for data center systems.

New designs offer improved EMI and thermal management and will enable 50G and 100G PAM4 signaling in each channel for 100G and 200G support, with up to 3.5 Watts of thermal dissipation – the same as the current QSFP28, which is about 2.5 times larger than the SFP-DD.

Bottom line:

The first products on the market will likely be based on 50G PAM4 signaling and will feature two channels, offering 100G in the SFP-DD form-factor. These new switch and network adapter configurations will enable increased switch faceplate bandwidth density, essentially doubling today’s density.

This advancement will enable 100G (2x50G PAM4) in a tiny SFP port and 50G and 100G links to servers and storage in the smallest MSA available, with the highest number of 100G front-panel pluggable ports in a Top-of-Rack switch.  Eventually, two channels of 100G PAM4 will enable 200G per SFP-DD device.


Maintaining Popular Breakout Cabling to Servers

With the advent of new 8-channel form factors such as QSFP-DD, OSFP and COBO, a new 2-channel form factor was needed to enable 4-to-1 breakouts for servers and storage.

These time-tested data center Top-of-Rack breakout, or splitter, cable configurations can be maintained going forward to 400G with the SFP-DD, in both copper DAC and AOC cables, supporting 10G, 25G, 50G, 100G and eventually 200G to the server, such as:

  • 40G QSFP+ -to-Quad 10G SFP+
  • 100G QSFP28-to-Quad 25G SFP28
  • 100G QSFP28-to-Dual 50G QSFP28
  • 400G QSFP-DD-to-Quad 100G SFP-DD
  • 400G QSFP-DD-to-Dual 200G SFP-DD
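Each of the breakout configurations above conserves bandwidth: the trunk rate equals the leg count times the leg rate. A quick Python sanity check (the table is illustrative data drawn from the list above, not a product catalog):

```python
# Breakout configurations from the list above as (trunk, leg count, leg) rows.
breakouts = [
    ("40G QSFP+",    4, "10G SFP+"),
    ("100G QSFP28",  4, "25G SFP28"),
    ("100G QSFP28",  2, "50G QSFP28"),
    ("400G QSFP-DD", 4, "100G SFP-DD"),
    ("400G QSFP-DD", 2, "200G SFP-DD"),
]

def rate_gbps(name: str) -> int:
    """Parse the leading Gb/s figure out of a name like '100G QSFP28'."""
    return int(name.split("G")[0])

# Bandwidth must be conserved across every breakout.
for trunk, legs, leg in breakouts:
    assert rate_gbps(trunk) == legs * rate_gbps(leg), trunk

print("all breakouts conserve bandwidth")
```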

Servers today typically support one or two CPUs but are heading towards four and eight CPUs per server in the future, with additional DRAM and FLASH on board and PCIe Gen4 at 16GT/s requiring more server uplink bandwidth.  Today, 10G and 25G uplinks are popular, and some hyperscale companies also require 50G uplinks. At four and eight CPUs per server, 100G and 200G uplinks will be required.

Mellanox recently introduced two new 100G AOC breakout cables: 100G-to-Quad 25G SFP28 and 100G-to-Dual 50G QSFP28. They are also available in copper DAC cabling. These breakout configurations can also be made using transceivers and passive fiber splitter cables if optical connectors are needed to detach the fibers from the transceivers.

Similarly, new QSFP-DD and SFP-DD breakout cables will be available in the future to support new 50G PAM4-based switches and network adapters.

Mellanox 100G DAC and AOC Product Line based on QSFP28 and SFP28


The new SFP-DD form-factor ties in with Mellanox’s recent 200GbE Spectrum-2 switch IC announcement, which is based on 50G PAM4 signaling and points to future 200G and 400G switch, network adapter, cable and transceiver developments from Mellanox.


Poised to Support the Next 5-10 years

By doubling the number of lanes and, at the same time, doubling the number of bits per clock sent with PAM4 modulation, the SFP-DD can transfer 100G versus the SFP28 at 25G. This translates into four times the bandwidth of SFP28. In the future, the SFP-DD MSA goal is to support 100G PAM4 modulation, enabling 200G (2x100G) per SFP-DD package – eight times the current SFP28 bandwidth in the same physical space.
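The lane-and-modulation arithmetic above works out as follows. These are nominal rates; actual PAM4 line rates carry FEC overhead, so this is a simplified sketch:

```python
# Nominal module bandwidth = lanes x symbol rate (GBd) x bits per symbol.
# NRZ carries 1 bit per symbol; PAM4 carries 2.
def module_gbps(lanes: int, gbaud: float, bits_per_symbol: int) -> float:
    return lanes * gbaud * bits_per_symbol

sfp28 = module_gbps(lanes=1, gbaud=25, bits_per_symbol=1)          # 1 x 25G NRZ
sfp_dd = module_gbps(lanes=2, gbaud=25, bits_per_symbol=2)         # 2 x 50G PAM4
sfp_dd_future = module_gbps(lanes=2, gbaud=50, bits_per_symbol=2)  # 2 x 100G PAM4

print(sfp28, sfp_dd, sfp_dd_future)           # 25 100 200
print(sfp_dd / sfp28, sfp_dd_future / sfp28)  # 4.0 8.0
```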

PAM4 Modulation versus NRZ

100G in an SFP-DD form-factor is the so-called Ethernet Alliance “holy grail” of high-speed interconnects. 100G is likely to be the next “10G”, which has been the mainstay of data centers for the last 10+ years. SFP-DD enables 100G in the smallest form-factor available and is likely to be around for many years to come – starting out in hyperscale and later moving into the enterprise and storage.



The MSA members will develop operating parameters, signal transmission speed goals, and protocols for the SFP-DD interface, which expands on the popular SFP pluggable form factor.  Targets include:


  • DAC reach: 3-meter, 28 AWG Direct Attach Copper (DAC), aka Twinax


  • SFP Backwards Compatibility: with SFP28 and SFP+, so that upgrades are easy and slower devices are supported in new 50G PAM4 systems.


  • Break Out Support: Using the next-generation 8-channel 400G QSFP-DD in a switch, the SFP-DD can be used in a quad breakout configuration of four 100G links, and similarly for dual breakouts of 200G to dual 100G or quad 50G.


  • Higher power dissipation: With advanced thermal designs, the SFP-DD goal is to support up to 3.5 Watts – equal to the current and much larger QSFP28 MSA.

Comparison of SFP-DD with QSFP28 and QSFP-DD


The SFP-DD MSA founding members include: Mellanox Technologies, Alibaba, Broadcom, Brocade, Cisco, Dell EMC, Finisar, HPE, Huawei, Intel, Juniper Networks, Lumentum, Molex, and TE Connectivity.

Mellanox offers complete end-to-end solutions of switches, network adapters, cables and transceivers supporting both the SFP+ for 10G line rates and the SFP28 for 25G line rates – and soon, 50G PAM4 for 200G and 400G systems and interconnects for both Ethernet and InfiniBand.


Supporting Resources:



QSA Adapters Get Even Better at 25Gb/s

QSA solves problems linking different port sizes and speeds equipment together

Problems: You have a 4-channel QSFP port on a switch or network adapter, but you’ve got a single-channel subsystem that uses SFP, and you want to connect older equipment, storage or a 10G device. Or you have a shiny new 25Gb/s-based Spectrum switch or ConnectX-4 or -5 network adapter and you want to connect to slower 10Gb/s equipment. How do you connect the different port types and speeds together?

Answer: Get the Mellanox QSA adapter – the QSFP-to-SFP Adapter – now supporting 25Gb/s!

Sometimes, the simplest things can solve big problems and frustrations. The QSA is one such device and costs less than a dinner for one.


What is a QSA?

The QSA is a Mellanox-designed and patented mechanical adapter that fits neatly inside a QSFP port and enables plugging a smaller, single-channel SFP device into a 4-channel QSFP port.  Only the one channel gets passed through, even though the mechanical port is 4 channels.  The QSA contains a configuration EPROM to tell the host what it is and what speed to run at.  Unless one is configuring it to run at a slower line rate, it is plug-and-play – nothing to configure in software.


Features and Notes

  1. QSAs are available in 2 versions: 10G and 25G.
    • 10G version also supports 1G
    • 25G version supports 1G and 10G
  2. QSA accepts a huge range of 10G and 25G cables and transceiver types:
    • CR DAC copper SFP (3-7m)
    • SR SFP multi-mode transceiver (100m)
    • SFP multi-mode AOCs (100m)
    • LR SFP single-mode transceiver (10km)
    • SX 1G SFP+ multi-mode transceiver (500m)
    • Base-T 1G SFP converter that uses CAT-5 copper UTP cables (100m)
  3. Passive and consumes no power
  4. Does not induce any signal latency delays
  5. Contains an EPROM to tell the switch port what it is – used in the initial configuration
  6. Only one channel passes through to the QSFP port
  7. Supports Ethernet only, as InfiniBand doesn’t generally use single-channel SFP links.
  8. There is even a DAC adapter cable with SFP on one end and QSFP on the other.
  9. MC2309130-xxx reaches up to 3 meters and MC2309124-xxx up to 7 meters.


The copper DACs have a maximum reach of 3-7 meters, but an LR transceiver module with single-mode fiber can reach as far as 10km – about 6.2 miles!

10G and 25Gb/s Cables and Transceivers Options For Use in QSA Adapters


In the past, Mellanox offered network adapters in both SFP and QSFP versions of the cards. But starting with ConnectX-6, only QSFP28 versions will be offered, and if a single-channel SFP is required, the QSA will be the solution to create the connection.


Not everything in the world runs, or needs to run, at 25Gb/s, so the QSA is a neat way to link slower 10G sub-systems to new high-speed Spectrum switches and ConnectX-5 network adapters, and later upgrade the slower equipment to 25Gb/s.

More Information: 
