All posts by Brad Smith

About Brad Smith

Brad is the Director of Marketing for the LinkX cables and transceivers business at Mellanox, based in Silicon Valley, focusing on the hyperscale, Web 2.0, enterprise, storage and telco markets. Previously, Brad was Product Line Manager for Intel's Silicon Photonics group, covering the CWDM4/CLR4 and QSFP28 product lines, and ran the 100G CLR4 Alliance. He was Director of Marketing & Business Development at the OpSIS MPW Silicon Photonics foundry, President/COO of LuxSonar Semiconductors (Cirrus Logic), and co-founder & Director of Product Marketing at NexGen, an x86-compatible CPU company sold to AMD – now AMD's x86 product line. Brad also has ~15 years in technology market research as Vice President of the Computer Systems group at Dataquest/Gartner and as VP/Chief Analyst at the RHK and LightCounting networking research firms. Brad started his career at Digital Equipment near Boston with the VAX 11/780 and has served as CEO, President/COO and on the boards of directors of three start-up companies. Brad has a BSEE from the University of Massachusetts and an MBA from the University of Phoenix, and holds two optical patents.

Mellanox Announces 100G CPRI Transceiver and Accelink PSM4 Optical Engine Partnership

On Display at Two Big Tradeshows: CIOE & ECOC

Mellanox is showcasing its LinkX cables and transceivers, along with ConnectX adapters and Ethernet and InfiniBand switches, at the September CIOE and ECOC trade shows. ECOC and CIOE are the two biggest interconnect events of the year besides the Optical Fiber Conference (OFC) in March.

  • CIOE: China International Opto-electronics Expo, Shenzhen, China Sept 6-9
  • ECOC: European Conference on Optical Communication, Gothenburg, Sweden, Sept 18-20

Mellanox announces:

  • 100G SR4 CPRI transceiver: a multi-mode transceiver for the wireless fronthaul protocol with an extended temperature rating of -10°C to +75°C, now entering volume production. With trials beginning in China and around the world this year, and full deployments estimated to begin in 2019, this new transceiver is suitable for short-reach, outside-plant optical transmission between remote radio heads (RRH) and baseband units (BBU).

The CPRI transceiver is targeted at the next-generation 5G wireless infrastructure build-out, and the potential unit volumes are staggering. 5G will enable ~10Gb/s to your cell phone, virtual reality, IoT, and 4K video – four times the resolution of today's HDTV.

  • Mellanox/Accelink partnership: Using our 100G PSM4 Silicon Photonics optical engine, Accelink will build 1550nm PSM4 transceivers. Accelink is a leading Chinese opto-electronics components supplier with one of the most comprehensive end-to-end product lines and one-stop solutions in the industry.

The 1550nm PSM4 relationship will create multiple industry sources for PSM4 transceivers based on Mellanox’s Silicon Photonics optical engine and transceiver ICs.


Visit the Mellanox CIOE booth #1A22-1 in Hall 1 and ECOC booth #531, where we will show:

  • Full line of 100Gb/s transceivers for hyperscale and datacenter applications
  • LinkX 25G/50G/100Gb/s DAC & AOC cables and 100G SR4 & PSM4 transceivers
  • New Quantum switches with 40 ports of 200Gb/s QSFP28 in a 1RU chassis
  • New ConnectX®-6 adapters with two ports of 200Gb/s QSFP28
  • Silicon Photonics Optical engines and components

Supporting Resources:

  • Learn more about LinkX cables and transceivers: LINK
  • Learn more about Mellanox complete 100GbE switches and adapters: LINK
  • Follow Mellanox on: Twitter, Facebook, Google+, LinkedIn, and YouTube
  • Mellanox 25G/100G SR/SR4 transceivers: BLOG
  • Mellanox 100G PSM4 transceiver blog on: BLOG



Why So Many Types of High-speed Interconnects?

Rationale behind the myriad of different interconnect technologies and products

Creating high-speed interconnect links between servers, storage, switches and routers involves many different types of technologies in order to minimize cost. With large data centers buying tens of thousands of devices, costs add up quickly. A 3-meter DAC cable is priced at approximately $100, while a 10km-reach single-mode transceiver runs $4,000-$5,000; AOCs and multi-mode transceivers are priced in between.

Today, most modern data centers have zeroed in on the SFP and QSFP form-factors for use with DAC and AOC cabling and optical transceivers. By focusing on only a few types and ordering in high unit volumes, greater economies of scale can be achieved, not only in the cables and transceivers, but also in all the equipment they link to, such as switches and the network adapters that may reside in servers and HDD, SSD and NVMe storage arrays. Add to this the spare parts that also need to be stocked.

Currently, the modern data center uses SFP and QSFP ("+" for 10G and "28" for 25G) in DAC, AOCs and both multi-mode and single-mode transceivers. DAC uses copper wires. Parallel 4-channel AOCs and transceivers (SR4 & PSM4) use 8 optical fibers; single-channel transceivers and AOCs (SR, LR) use 2 fibers. CWDM4 and LR4 transceivers also use 2 fibers, multiplexing four channels into one fiber in each direction to save fiber costs over long reaches.
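As an illustrative sketch only (the table and function names are ours, not a Mellanox tool), the fiber counts above can be captured in a small lookup table:

```python
# Fiber counts per 100G link, as described in the text above.
FIBERS_PER_LINK = {
    "DAC": 0,      # copper wires, no fiber
    "SR4": 8,      # parallel multi-mode: 4 Tx + 4 Rx fibers
    "PSM4": 8,     # parallel single-mode: 4 Tx + 4 Rx fibers
    "SR": 2,       # single-channel duplex: 1 Tx + 1 Rx fiber
    "LR": 2,
    "CWDM4": 2,    # 4 wavelengths multiplexed onto each fiber
    "LR4": 2,
}

def fibers_needed(link_type: str, num_links: int) -> int:
    """Total fibers to provision for a given number of links."""
    return FIBERS_PER_LINK[link_type] * num_links

# 100 PSM4 links need four times the fiber of 100 CWDM4 links.
assert fibers_needed("PSM4", 100) == 800
assert fibers_needed("CWDM4", 100) == 200
```

This is why, as discussed later, parallel designs win on transceiver cost at short reach while multiplexed designs win on fiber cost at long reach.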


High-speed interconnects all strive to:

  • Implement the lowest cost links
  • Achieve the highest net data throughput (i.e., fastest data rate with least amount of data errors, data retransmissions and minimal latencies).
  • Transmit over various distances


To achieve these goals, various technologies are used, each with its own set of benefits and limitations. Data center professionals would prefer to build every link with single-mode fiber, duplex LC connectors and single-mode transceivers: build the fiber into the data center infrastructure once and forget it, since single-mode fiber does not have the reach limitations that DAC copper and multi-mode fiber do, and then upgrade the transceivers with each new transceiver advancement.

While the fibers and LC connectors are already at their lowest cost points, the problem is the single-mode transceivers, which are very complex to build, require many different material systems, and are hard to manufacture – and therefore expensive. Basically, the longer the reach needed to send the data, the higher the price, as the technology gets more complicated and harder to manufacture.

Most single-mode transceivers are built with a great deal of manual labor and piece-part assembly, in processes designed for the low-volume telecom market. The new hyperscale data centers are ordering parts in record numbers, and the piece-part manufacturing method is difficult to scale up. Silicon Photonics technology uses CMOS silicon IC processes to integrate many of the devices and sub-micron alignments required.

As a result, data centers often use an array of different high-speed interconnects, matching each interconnect type to specific reach requirements. DAC is the lowest cost; however, after about 3-5 meters, the wire acts like a radio antenna and the signal becomes unrecognizable. AOCs are used from 3 meters to about 30 meters, after which installing long cables becomes difficult. More expensive multi-mode transceivers, with detachable optical connectors, can reach up to 100 meters, beyond which the large 50-um fiber core causes the signal to scatter and become unrecognizable. Some multi-mode transceivers and links (eSR4) can be engineered to 300-400 meters, but it gets a little tricky matching the special transceivers, fibers, and optical connectors in the link.

Single-mode fiber uses a tiny 9-um light-carrying core, so the signal pulse stays together over very long distances and can travel literally between continents. Parallel single-mode transceivers (PSM4) with 8 fibers can reach 500m-2km. The PSM4 MSA standard is 500m, but Mellanox's PSM4s can reach up to 2km – about four times the reach of the PSM4 spec.

After 500 meters, the cost of 8 fibers adds up with each meter, so multiplexing the four channel signals into only two fibers is more economical over long fiber runs. CWDM4 is used for up to 2 km and LR4 up to 10 km.
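The reach-based rules of thumb in the last few paragraphs can be summarized in a short sketch; the boundaries are the approximate figures from the text, not spec limits:

```python
# Hedged sketch: choosing an interconnect by reach, following the
# rules of thumb above. Boundaries are approximate, not a standard.
def pick_interconnect(reach_m: float) -> str:
    if reach_m <= 3:
        return "DAC"      # copper, lowest cost
    if reach_m <= 30:
        return "AOC"      # active optical cable
    if reach_m <= 100:
        return "SR4"      # multi-mode transceiver
    if reach_m <= 500:
        return "PSM4"     # parallel single-mode, 8 fibers
    if reach_m <= 2_000:
        return "CWDM4"    # 4 wavelengths multiplexed on 2 fibers
    if reach_m <= 10_000:
        return "LR4"
    raise ValueError("beyond 10 km: telecom-class optics required")

assert pick_interconnect(2) == "DAC"
assert pick_interconnect(400) == "PSM4"
assert pick_interconnect(1_500) == "CWDM4"
```

Note that the bands overlap in practice (e.g. a PSM4 can reach 2 km); the sketch simply picks the cheapest option that closes the link.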

In the chart below, moving along the bottom axis, different technologies are used as the reach gets longer, starting with DAC cables on the left and ending with 10km LR4 transceivers on the far right. Also note on the vertical axis that the faster the data rate, the shorter the reach of DAC and multi-mode optics (SR, SR4) becomes, while single-mode fiber remains largely reach-independent.


Data Rate versus Interconnect Reach



All the different technologies and cable product types are designed to minimize the costs involved in building data center high-speed interconnects. While many enterprise data centers might use 5,000-10,000 devices, hyperscale builders with 2 million servers are ordering interconnects in the hundreds of thousands.

Mellanox sells end-to-end solutions, designing and manufacturing not only the switch and network adapter systems, including the silicon ICs, but also the cables and transceivers – including the silicon VCSELs, Silicon Photonics ICs, and driver and TIA ICs. For single-mode transceivers, Mellanox has its own Silicon Photonics product line and internal wafer fab for its PSM4 and AOC transceivers.

Mellanox sells state-of-the-art 25/50/100G products in copper DAC and optical AOC cables and both multi-mode and single-mode transceivers.

Mellanox recently announced it shipped its 100,000th 100G DAC cable and 200,000th 100G transceiver/AOC module, and is a leading supplier in all four interconnect product areas.


More Information

Mellanox LinkX™ Cables Connect the Scientific & Engineering Community

As many know, the circumference of the earth is 40,075 km. Doing the math, we start with the fact that Mellanox has shipped over 2 million copper DAC cables to date. The length of wire in 2 million cables is 144,000 km – essentially enough to circle the earth at the equator 3.5 times, or to get a third of the way to the moon! Math below:


(3 wires/lane bundle) x (8 wire bundles in QSFP DAC) x (3 meters long average) x (2 million DACs) = 144,000 km
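Taking the figures above at face value, the arithmetic checks out (the Earth-Moon distance used below is the ~384,400 km average, a reference figure we have added):

```python
# Checking the arithmetic above, using the values from the text.
wires_per_lane = 3          # wires per lane bundle
bundles_per_dac = 8         # wire bundles in a QSFP DAC
avg_length_m = 3            # average cable length in meters
cables = 2_000_000          # DACs shipped to date

total_km = wires_per_lane * bundles_per_dac * avg_length_m * cables / 1000
assert total_km == 144_000

earth_circumference_km = 40_075
earth_laps = total_km / earth_circumference_km   # ~3.6 trips around the equator

moon_distance_km = 384_400  # average Earth-Moon distance (our added figure)
moon_fraction = total_km / moon_distance_km      # ~0.37, about a third of the way
```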


This fun fact got me thinking about how Mellanox connects the scientific and engineering community to their vital research in very real and tangible ways.

Mellanox’s approach to computer networking is to keep the CPU doing the most important tasks and leave the rest to the network. This preserves the most expensive resource, the CPU, for the important computing tasks, while the network manages the data traffic in and out of the CPU and storage sub-systems where the CPU is not needed. Mellanox calls this “Data-Centric CPU Offload” versus “CPU-centric OnLoad”.

With OnLoad architectures, the CPU is involved with simply moving data around – not actually doing computing work – and sits idle literally 40-80 percent of the time waiting for data. Every data transfer induces time delays, or latency. It is like making an appointment to ask your CEO’s permission every time you have to sharpen a pencil: lots of “busy” but not much actual “work”.

Mellanox’s switches and network adapters, along with the LinkX line of cables and transceivers, optimize the movement of data and keep it on the network, thereby enabling the CPU to do its best work. This is a fundamental theme at Mellanox, one of the factors of our success, and it applies to both Ethernet and InfiniBand protocols. The network switch and adapter ICs have billions of transistors to perform logical processing at the network level without getting the CPU involved.

Most engineering and scientific research requires massive amounts of data and iterations, but often the formula being computed is pretty simple. So, the main problem in engineering and scientific computing is moving data fast enough to keep the CPU fully fed – which Mellanox’s switches, adapters and interconnects are ideally suited to do with our CPU-Offload architectures. The ConnectX-5 Multi-Host Socket adapters can increase ROI at the server with 30-60 percent better CPU utilization, 50-80 percent lower data transfer latency and 15-28 percent faster data throughput. All of these benefits are derived by using intelligent network adapters and switches to keep the data moving on the network instead of in and out of the CPU needlessly.

InfiniBand Systems ROI: Switches, Adapters and Interconnects

Mellanox Ethernet and InfiniBand-based systems deliver the lowest-latency, fastest computing solutions for all kinds of engineering and scientific applications, such as aerospace, automotive electronics, molecular dynamics, genetic engineering, chemistry, weather analysis and structural engineering.

Mellanox Systems Used to Design Mellanox Systems!

Believe it or not, Mellanox even uses its Ethernet and InfiniBand switches, network adapters, cables and transceivers in the CAE/CAD engineering systems used to design the ICs and electronics that go inside the switches, network adapters, cables and transceivers that we sell! Think of it like the M.C. Escher drawing of two hands, each drawing the other!

The Mellanox LinkX product line of cables and transceivers are all designed by Mellanox engineers – from the internal ICs to the complete assemblies. IC CAE systems are used to design, simulate and layout transceiver control ICs used in both multi-mode and single-mode Silicon Photonics transceivers. Optical engineering software is used to model the high-speed optics ray tracing and reflections inside the Silicon Photonics and fibers. Mechanical and Thermal CAE systems are used to design the mechanical aspects of the transceiver ends and thermal modeling. Electromagnetic design software is used to model the high-speed signals inside DAC copper cables and Silicon Photonics optical transceivers and the EMI/RFI emissions to meet industry standards. Lastly, the entire DAC, AOC, multi-mode and single-mode transceiver assemblies are all designed and modeled by Mellanox engineers.

Only a couple of actual formulas need to be computed, but the simulations and designs involve massive amounts of data – ideally suited for Mellanox Offload Ethernet and InfiniBand switches, adapters, cables and transceivers.

Supporting Resources:

SFP-DD – Next Generation SFP Module to Support 100G

New transceiver MSA form-factor enables doubling the SFP bandwidth and supporting fast line rates while maintaining SFP backwards compatibility.

Recently, a group of industry suppliers gathered to form a new transceiver module form-factor or Multi-Source Agreement (MSA). The agreement aims to foster the development of the next generation of SFP form-factor used in DAC and AOC cabling as well as optical transceivers. Mellanox often leads these sorts of technology developments and is a founding member of the SFP-DD MSA, as well as both QSFP-DD and OSFP MSAs.

While all the specs are not yet final, it’s called the SFP-DD, or Small Form-factor Pluggable – Double Density. The “double density” refers to two rows of electrical pins, enabling two channels instead of the traditional one channel in the SFP architecture – the smallest industry-standard form-factor available today for data center systems.

New designs offer improved EMI and thermal management and will enable 50G and 100G PAM4 signaling in each channel for 100G and 200G support, with up to 3.5 Watts of thermal dissipation – the same as the current QSFP28, which is about 2.5 times larger than the SFP-DD.

Bottom line:

First products on the market will likely be based on 50G PAM4 signaling and will feature two channels, offering 100G in the SFP-DD form-factor. These new switch and network adapter configurations will enable increased switch faceplate bandwidth density, essentially doubling today’s density.

This advancement will enable 100G (2x50G PAM4) in a tiny SFP port and 50G and 100G links to servers and storage in the smallest MSA available, with the highest number of 100G front-panel pluggable ports in a Top-of-Rack switch.  Eventually, two channels of 100G PAM4 will enable 200G per SFP-DD device.


Maintaining Popular Breakout Cabling to Servers

With the advent of new 8-channel form factors such as QSFP-DD, OSFP and COBO, a new 2-channel form factor was needed to enable 4-to-1 breakouts for servers and storage.

These time-tested data center Top-of-Rack breakout or splitter cable configurations can be maintained going forward to 400G with the SFP-DD, in both copper DAC and AOC cables, supporting 10G, 25G, 50G, 100G and eventually 200G to the server, such as:

  • 40G QSFP+-to-Quad 10G SFP+
  • 100G QSFP28-to-Quad 25G SFP28
  • 100G QSFP28-to-Dual 50G QSFP28
  • 400G QSFP-DD-to-Quad 100G SFP-DD
  • 400G QSFP-DD-to-Dual 200G SFP-DD

Servers today typically support one or two CPUs per server but are heading towards supporting four and eight CPUs per server in the future, with additional DRAM and FLASH on board and PCIe Gen4 at 16GT/s requiring more server uplink bandwidth.  Today, 10G and 25G uplinks are popular and some hyperscale companies also require 50G uplinks. At four and eight CPUs per server, 100G and 200G uplinks will be required.

Mellanox recently introduced two new 100G AOC breakout cables: 100G-to-Quad 25G SFP28 and 100G-to-Dual 50G QSFP28. They are also available in copper DAC cabling. These breakout configurations can also be made using transceivers and passive fiber splitter cables if optical connectors are needed to detach the fibers from the transceivers.

Similarly, new QSFP-DD and SFP-DD breakout cables will be available in the future to support new 50G PAM4-based switches and network adapters.

Mellanox 100G DAC and AOC Product Line based on QSFP28 and SFP28


The new SFP-DD form-factor ties in with Mellanox’s recent 200GbE Spectrum-2 switch IC announcement, which is based on 50G PAM4 signaling, and points to future 200G and 400G switch, network adapter, cable and transceiver developments from Mellanox.


Poised to Support the Next 5-10 years

By doubling the number of lanes, and at the same time doubling the number of bits per clock sent with PAM4 modulation, the SFP-DD can transfer 100G versus the SFP28 at 25G – four times the bandwidth of SFP28. In the future, the SFP-DD MSA goal is to support 100G PAM4 modulation, enabling 200G (2x100G) per SFP-DD package – eight times the current SFP28 bandwidth in the same physical space.
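The lane-and-modulation arithmetic above can be written out explicitly; the lane rates below are nominal round numbers, not exact baud rates:

```python
# Nominal bandwidth arithmetic for SFP28 vs SFP-DD (rounded figures).
NRZ_BITS_PER_SYMBOL = 1
PAM4_BITS_PER_SYMBOL = 2   # PAM4 encodes two bits per symbol

baud_g = 25  # ~25 GBaud per lane, nominal

sfp28 = 1 * baud_g * NRZ_BITS_PER_SYMBOL            # 1 lane x 25G NRZ  = 25G
sfp_dd = 2 * baud_g * PAM4_BITS_PER_SYMBOL          # 2 lanes x 50G PAM4 = 100G
future_sfp_dd = 2 * (2 * baud_g) * PAM4_BITS_PER_SYMBOL  # 2 lanes x 100G PAM4 = 200G

assert sfp_dd // sfp28 == 4         # four times SFP28 bandwidth
assert future_sfp_dd // sfp28 == 8  # eight times, per the MSA goal
```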

PAM4 Modulation versus NRZ

100G in an SFP-DD form-factor is the Ethernet Alliance’s so-called “holy grail” of high-speed interconnects. 100G is likely to be the next “10G”, which has been the mainstay in data centers for the last 10+ years. SFP-DD enables 100G in the smallest form-factor available and is likely to be around for many years to come – starting out in hyperscale and later moving into enterprise and storage.



The MSA members will develop operating parameters, signal transmission speed goals, and protocols for the SFP-DD interface, which expands on the popular SFP pluggable form factor.  Targets include:


  • DAC reach: 3-meter 28 AWG Direct Attach Copper (DAC), aka Twinax


  • SFP Backwards Compatibility: with SFP28 and SFP+, so that upgrades are easy and slower devices are supported in new 50G PAM4 systems.


  • Breakout Support: Using the next-generation 8-channel 400G QSFP-DD in a switch, the SFP-DD can be used in a quad breakout configuration of four 100G links, and similarly in dual breakouts of 200G to dual 100G or quad 50G.


  • Higher power dissipation: With advanced thermal designs, the SFP-DD goal is to support up to 3.5 Watts – equal to the current and much larger QSFP28 MSA.

Comparison of SFP-DD with QSFP28 and QSFP-DD


The SFP-DD MSA founding members include: Mellanox Technologies, Alibaba, Broadcom, Brocade, Cisco, Dell EMC, Finisar, HPE, Huawei, Intel, Juniper Networks, Lumentum, Molex, and TE Connectivity.

Mellanox offers complete end-to-end solutions of switches, network adapters, cables and transceivers supporting both the SFP+ for 10G line rates and the SFP28 for 25G line rates. Soon, 50G PAM4 signaling will bring 200G and 400G systems and interconnects for both Ethernet and InfiniBand.


Supporting Resources:



QSA Adapters Get Even Better at 25Gb/s

QSA solves problems linking different port sizes and speeds equipment together

Problems: You have a 4-channel QSFP port on a switch or network adapter, but a single-channel subsystem that uses SFP, and you want to connect older equipment, storage or a 10G device. Or you have a shiny new 25Gb/s-based Spectrum switch or ConnectX-4 or -5 network adapter and you want to connect to slower 10Gb/s equipment. How do you connect the different port types and speeds together?

Answer: Get the Mellanox QSA, the QSFP-to-SFP Adapter – now supporting 25Gb/s!

Sometimes, the simplest things can solve big problems and frustrations. The QSA is one such device and costs less than a dinner for one.


What is a QSA?

The QSA is a Mellanox-designed and patented mechanical adapter that fits neatly inside a QSFP port and enables plugging a smaller, single-channel SFP device into a 4-channel QSFP port. Only the one channel gets passed through, even though the mechanical port has 4 channels. The QSA contains a configuration EPROM to tell the host what it is and what speed to run at. Unless one is configuring it to run at a slower line rate, it is plug-and-play – nothing to configure in software.


Features and Notes

  1. QSAs are available in 2 versions: 10G and 25G.
    • 10G version also supports 1G
    • 25G version supports 1G and 10G
  2. QSA accepts a huge range of 10G and 25G cables and transceiver types:
    • CR DAC copper SFP (3-7m)
    • SR SFP multi-mode transceiver (100m)
    • SFP multi-mode AOCs (100m)
    • LR SFP single-mode transceiver (10km)
    • SX 1G SFP+ multi-mode transceiver (500m)
    • Base-T 1G SFP converter that uses CAT-5 copper UTP cables (100m)
  3. Passive and consumes no power
  4. Does not induce any signal latency delays
  5. Contains an EPROM to tell the switch port what it is – used in the initial configuration
  6. Only one channel passes through to the QSFP port
  7. Supports Ethernet-only as InfiniBand doesn’t generally use SFP single-channel links.
  8. There is even a DAC adapter cable with SFP on one end and QSFP on the other.
  9. MC2309130-xxx up to 3 meters and MC2309124-xxx up to 7 meters.


The copper DACs have a maximum reach of 3-7 meters, but an LR transceiver module with single-mode fiber can reach as far as 10km – or 6.25 miles!

10G and 25Gb/s Cables and Transceivers Options For Use in QSA Adapters


In the past, Mellanox offered network adapters in both SFP and QSFP versions of the cards. But starting with ConnectX-6, only QSFP28 versions will be offered, and if a single-channel SFP is required, the QSA will be the solution to create the connection.


Not everything in the world runs, or needs to run, at 25Gb/s, so the QSA is a neat way to link slower 10G sub-systems to new high-speed Spectrum switches and ConnectX-5 network adapters, and later upgrade the slower equipment to 25Gb/s.

More Information: 

LinkX is the Mellanox trademark and name for its cables and transceivers product line



100G PSM4: The Most Configurable & Lowest-Cost Single-Mode Transceiver Available

Single-mode transceivers now priced for high-volume data center use

Parallel Single Mode 4-channel (PSM4) is a type of single-mode transceiver that uses a parallel fiber design for reaches up to 2 km – beyond the limits of 100-meter Short Reach 4-channel (SR4) multi-mode transceivers. PSM4 will be the transceiver that enables single-mode fiber to become popular in next-generation data centers due to its low cost and high configurability.

PSM4 is built using one laser (instead of four), split into four paths or channels that are separately modulated with electrical data signals. Each channel has its own fibers and is kept separate throughout the link. PSM4 uses eight fibers: four for transmitting and four for receiving. A parallel, eight-fiber Multi-fiber Push On (MPO) optical connector is used.

100Gb/s PSM4 Transceiver



The PSM4 transceiver is the lowest-cost 100Gb/s transceiver on the market capable of using single-mode fiber for long reaches up to 2km. The best use case is at reaches less than 500 meters: over longer reaches, the cost of the eight fibers adds up with each meter, and CWDM4, using two fibers, becomes more economical.


What Data Centers are 2 km or 1.2 Miles Long?

While most data centers are not 2km (1.2 miles) long, the 2km spec is another way of stating the optical power of the laser. Measured in decibels (dB), a logarithmic scale of power ratios, the Mellanox PSM4 offers ~3.3 dB of optical power budget – enough to push through hundreds of meters of a lossy fiber infrastructure consisting of dirty and/or misaligned optical connectors, jumpers, optical patch panels and other interferences in the light path. This is similar to needing a very powerful flashlight to shine through a dense forest of twigs, branches and leaves even though the distance is relatively short.
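A rough sketch of how such a power budget gets spent: the ~3.3 dB figure is from the text, while the per-kilometer fiber loss and per-connector loss below are illustrative assumptions, not Mellanox or spec values:

```python
# Hedged link-budget sketch. Only the ~3.3 dB budget comes from the
# text; the individual loss figures are illustrative assumptions.
power_budget_db = 3.3

def db_to_ratio(db: float) -> float:
    # Decibels express a power ratio on a base-10 logarithmic scale.
    return 10 ** (db / 10)

# ~3.3 dB means the link tolerates roughly a 2.1x optical power loss.
assert round(db_to_ratio(power_budget_db), 1) == 2.1

# Spending the budget on a 500 m run with four mated connectors:
fiber_loss_db = 0.5 * 0.4    # 0.4 dB/km single-mode fiber loss (assumed)
connector_loss_db = 4 * 0.5  # 0.5 dB per connector (assumed)
link_loss_db = fiber_loss_db + connector_loss_db  # 2.2 dB total
assert link_loss_db <= power_budget_db            # link closes with margin
```

With cleaner connectors the leftover margin grows, which is how the same budget stretches to 2 km on a direct point-to-point run.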


Single-mode fibers are cheap but transceivers expensive – and the reverse for multi-mode? Huh?

Interestingly, multi-mode fibers (with a large 50-um diameter core) are more expensive than single-mode fibers (with a tiny 9-um core), but for the transceivers it is the reverse! Single-mode fiber is used by the telecom industry and ordered in hundreds of thousands of miles per year, so it is inexpensive. Multi-mode fiber is used almost exclusively in data centers, and the amount made is relatively small, so it is about three times more expensive.

On the other hand, the multi-mode core diameter is large and easy to align with VCSEL lasers and detectors. The 9-um single-mode fiber is very hard to build and align transceiver components with, and requires very expensive alignment equipment. Therefore, single-mode transceivers have always been significantly more expensive than multi-mode transceivers – exactly the reverse of the fibers.

Typically, single-mode transceivers use 10-20 different tiny parts that all need to be mechanically aligned to sub-micron tolerances. This requires a lot of manual labor, expensive test and alignment equipment and results in a high reject rate.


Silicon Photonics – Solves the Manufacturing Problems

Silicon Photonics does away with most of these problems by integrating the optical components and waveguides into a silicon wafer – the same basic technology used to build CMOS semiconductor electronic chips. This is how single-mode transceivers will eventually become more price-competitive with multi-mode transceivers over time. Mellanox builds PSM4s using its internally developed Silicon Photonics technology in Southern California, where it has been building and shipping Silicon Photonics products used as transceivers and Variable Optical Attenuators (VOAs) for nearly a decade.

Mellanox Designed PSM4 Silicon Photonics and Control ASICs


Numerous PSM4 Applications and Configurations for Any Need

The PSM4 has many different configurations and application uses. It can bus 100Gb/s point-to-point over 2km, or can be broken out into dual 50Gb/s or quad 25Gb/s links to servers, storage and other subsystems. Additionally, the breakouts can be made using passive fiber splitter cables or a transceiver/AOC hybrid called a “pigtail”. The following diagram illustrates the Mellanox “end-to-end” system solutions consisting of switches and network adapters with cables and transceivers.

25G, 50G, 100G PSM4 Transceiver Applications


PSM4 Breakouts to Servers & Storage

Besides long-reach 2km point-to-point links, PSM4 channels can also be split out individually. The diagram below shows a 100G PSM4 transceiver split using a passive breakout splitter cable with an MPO on one end and either dual MPOs (50G) or quad LC connectors (25G) on the other ends. CWDM4 cannot be split this way; it can only bus 100Gb/s point-to-point.

Passive Fiber Breakout Configurations


PSM4 “Pigtail” Transceiver

Transceivers have their fibers attached via a detachable optical connector (MPO or LC). Active Optical Cables (AOCs) are two transceivers with the fibers permanently attached inside and not removable. A “pigtail” (shown above) is a hybrid of both: it has the fibers attached to the transceiver with a short 1-meter length of fiber and one of three connector configurations:

  • One 100Gb/s MPO (Four 25G channels) QSFP28
  • Two 50Gb/s MPOs (Two 25G channels) QSFP28
  • Eight 25Gb/s duplex LC (One 25G channel) SFP28

While it has only 1 meter of fiber attached, it can reach 500 meters in the fiber infrastructure. Pigtails are used to link Top-of-Rack switches to 25Gb/s or 50Gb/s servers and storage subsystems, or to plug into passive optical distribution patch panels that connect to other parts of the data center.

Bottom line, the pigtail saves the material cost and maintenance of one optical connector, as well as gaining back about 1 dB of optical loss – small savings, but when a big data center builder orders tens of thousands, it all adds up.


Not All MPO Connectors are the Same!

One thing to note: the MPO used with SR4 multi-mode (colored aqua) is not the same as the MPO/APC (colored green or yellow) used with the PSM4. Optical connectors pass through “most” of the light, but some gets reflected back towards the laser from the inside surface of the fiber end in the connector. The PSM4 uses tiny-core single-mode fiber, which concentrates any back reflections in the connector infrastructure and aims them back at the laser – which can destroy it. So, the single-mode fiber infrastructure polishes the fiber ends at an angle to divert the back reflections away from the laser; hence the name Angle Polished Connector (APC). Multi-mode fiber has a big 50-um core, and the large area disperses the back reflections, making them less of a problem. In reality, the MPO connectors use 12 fibers, with four unused.

MPO Optical Connectors

Flat and Angle Polished Fibers


Mellanox PSM4-1550nm Interoperates with Most Industry PSM4s

Most PSM4 transceivers use a PIN detector with a wide bandwidth spanning both 1310nm and 1550nm. The Mellanox 1550nm PSM4 can talk to almost any 1310nm PSM4 transceiver, and vice versa, even though the wavelengths are different. Many of our customers have interoperability-tested more than ten different suppliers without any issues.

At the Optical Fiber Conference (OFC), Mellanox demonstrated our 100Gb/s 1550nm PSM4 interoperating with PSM4 transceivers from Innolight and AOI, and in breakout configurations at 25Gb/s with LR transceivers from Oclaro, Hisense and Ligent.

Interoperability Demo 1310nm & 1550nm PSM4s


200Gb/s HDR

The PSM4 will make another debut at 200Gb/s using the QSFP28 form-factor in late 2017, supporting 200Gb/s HDR for InfiniBand, and also in a 1:2 splitter configuration split into two 100Gb/s HDR100 QSFP28 links.



The PSM4 transceiver is the lowest-cost single-mode transceiver available today for use in next-generation data centers, as it employs the low-cost and long-reach features of single-mode fiber. It is a very flexible transceiver that can link 100Gb/s point-to-point or be split out into individual channel combinations of 25Gb/s or 50Gb/s to servers, storage and other subsystems.

While 100G PSM4s are fairly new to the market, as popularity climbs and volume manufacturing efficiencies kick in, the PSM4 has a chance at challenging the 100G SR4 multi-mode transceiver on market price when the transceiver and fiber link costs are added up. PSM4s will be the transceivers that enable single-mode fiber to become popular in next-generation data centers.


More Information:


LinkX is the Mellanox trademark and name for its cables and transceivers product line

DesignCon Trade Show – Connecting All the Signaling Dots

Stop by Booth #120 to see Mellanox 25G and 100Gb/s DAC, AOCs, SR4, PSM4 Cables and Transceivers

DesignCon at the Santa Clara Convention Center, to be held Jan. 31-Feb. 2, is the "chippie geek" trade show of the year, and you had better know how to speak PAM4 fluently this year if you attend. Electronic signaling and optical buzzwords are flying around like never before! So, you better get your CAUI-4 PAM4 and NRZ vernacular straight for SFP28 and QSFP28, in SR, SR4, LR4, CWDM4, and PSM4, because 200G/400G 400AUI-8, QSFP-DD and OSFP, DR4, FR4, LR8, DR8 and FR8 are on their way!


Mellanox will be exhibiting its full line of 25Gb/s and 100Gb/s cables, transceivers, switches and network adapters. Not many companies have a complete line of end-to-end systems and interconnect products. Most tradeshows exhibit a lot of "tomorrow-land" products, promises and demos. But Mellanox's 25Gb/s and 100Gb/s LinkX cables and transceivers, as well as Spectrum switches and ConnectX® network adapters, are available and shipping in volume today, widely deployed in hyperscale, HPC and enterprise markets. In fact, with respect to 100Gb/s network adapters, Mellanox was first to market and now has more than 90 percent market share. We are also the market leader in 100Gb/s SR4 transceivers.


We will be displaying our end-to-end solutions of SN2700 32-port QSFP28 and SN2410 25Gb/s SFP28 Top-of-Rack switches in a full system rack packed with DAC cables and splitters, AOCs, multi-mode SR4 and PSM4 optical transceivers in various configurations and showing the broad configuration and wide flexibility available from Mellanox.

Mellanox Switches, Network Adapters, Cables and Transceivers



Brad Smith and Arlon Martin from LinkX Interconnect marketing will be at the show to answer any questions. Stop by our booth, number 120, or, if you want to set up a meeting, send me an email.


More Information:  

Learn more about LinkX cables and transceivers at:


Short Reach Optics in Modern Data Centers

Spanning Across the Data Center or Breakouts Within the Rack

Short Reach (SR) multi-mode optics are the lowest-priced optical interconnects available today that use optical connectors to separate the transceiver from the optical fibers. Although both support 100m reaches, AOCs are much less expensive than SR optics, but an AOC functions as a complete cable and its transceiver ends cannot be separated from the fibers. Multi-mode fibers have a large light-carrying fiber core and are easier and less expensive to manufacture compared to single-mode optics, whose tiny fiber cores are difficult and expensive to build with. For these reasons, multi-mode, short reach optics are very popular in modern hyperscale, enterprise and storage data center applications.


While mostly utilized to link Top-of-Rack (ToR) switches to other remote switches and storage subsystems in the network, as short reach transceiver prices continue to fall, more data center operators are using SR optics to connect ToR switches down to servers and local storage – within a single rack. This is due to the configuration flexibility connectorized optics provide and the tiny fiber diameters compared to DAC cabling. All the optical cables supporting a 32-port switch together have a diameter of less than 2.5cm (1 inch), compared to about 10-13cm (4-5 inches) with copper DAC cables. Thirty-two optical fiber cables would blow off the table with a sneeze, but 32 DAC cables could qualify as exercise equipment!


This blog is Part 2 in a 3-part series on Mellanox's LinkX branded, high-speed interconnect products. Mellanox sells short reach multi-mode optics at 10Gb/s and 25Gb/s line rates in single- and four-channel configurations, enabling 10Gb/s to 100Gb/s of link bandwidth. These are available in SFP and QSFP form-factors that use LC and MPO optical connectors, respectively.


Short reach optics is not new and has a long history of different fibers, connectors and transceiver types at different data rates but modern data centers have zeroed in on SFP/LC and QSFP/MPO form-factors. Mellanox offers both 10Gb/s/25Gb/s single-channel SR transceivers with LC optical connectors as well as four-channel 40G/100Gb/s SR4 transceivers with MPO optical connectors.


VCSEL, Multi-mode …Huh?

SR transceivers employ a large, 50-um core diameter optical fiber that is easy to interface lasers and detectors to, so the costs are much lower than single-mode optics with a tiny 9-um core diameter fiber. But the SR laser pulse tends to scatter into multiple transmission "modes" in the large-diameter fiber and becomes unusable after about 100m, so the IEEE standards body sets the limit at 100m, assuming four connectors in the run. Multi-mode can reach 400m, but requires specialized lasers, fibers and connectors.


Multi-mode optics uses a laser called a VCSEL, or "Vix-Sell" (Vertical Cavity, Surface Emitting Laser). This laser is created in a vertical cavity in a semiconductor wafer and emits perpendicular to the surface of the chip, hence the name. Multi-mode optics use the 850nm wavelength of infrared laser light, which sits at an optically transparent window in the glass fibers.





Key SR/SR4 Transceiver features:

  • Connectorized optics – meaning the fibers can be disconnected from the optical transceiver
  • Point-to-point and breakouts – meaning an SR4 transceiver can operate as a single, four-channel link or as four separate channels with links to individual subsystems
  • 100m reach – using multi-mode fibers enables linking sub-systems up to 200 meters apart via a central switch (100m in each direction)


Connectorized Optics

Many data centers have structured cabling, where the fiber infrastructure is fixed, installed in cabling pipes and under raised floors, and integrated into optical patch panels used to manually reconfigure the fiber run end points. Sometimes fibers run to other system rows, rooms, floors, or even other buildings, necessitating the ability to disconnect the fibers from the transceivers installed in the systems. This is something that DAC and AOC integrated cables cannot do, as the wires or fibers are integrated into the plug or transceiver end.


Point-to-Point Applications: ToR to Leaf/Spine and EOR Switches

One of the main applications for SR and SR4 transceivers is to link Top-of-Rack (ToR) switches to other parts of the network, such as aggregation switches, middle- and end-of-row switches, and leafs in a leaf-spine network. These are typically used as high-bandwidth busses built from four-channel SR4s at 40G or 100Gb/s bandwidths. For 1GbE-based servers, a 10G or 25Gb/s SR link may be adequate for the ToR uplink. Multi-mode optics is well suited to this application, as the reaches in these designs are typically short, spanning a single row or perhaps a few rows.

While several enormous hyperscale operators have made a lot of noise in the press around moving to single-mode fiber, many big hyperscale and enterprise installations still operate as groups of small system clusters where all the systems are well within the 100m reach of multi-mode fiber. Interestingly, while multi-mode fiber is about three times more expensive than single-mode fiber, single-mode transceivers are 50 percent to 10X more expensive than multi-mode transceivers. Single-mode transceivers are difficult to build but offer reaches up to 10km versus only 100m for multi-mode.


Breakout Application: ToR QSFP Breakouts to SFP Servers & Storage

Linking Top-of-Rack switches down to servers and storage subsystems within the same rack is another popular use for SR and SR4 optics. In the past, SR4 transceivers could only transfer all four channels at once to another SR4. New models can split the four channels into individual single channels that connect to different systems and operate independently. This is important when the link reach needed is greater than the 3-meter capability of DAC copper cables, perhaps spanning more than one rack. The passive fiber breakout cable has a single 4-channel MPO on one end connecting to the SR4 transceiver and four Duplex LC optical connectors on the other end connecting to four separate SFP transceivers, each with its own 100m fiber run.

Similarly, two 50Gb/s links can be created from one 100Gb/s port using an MPO breakout cable with two MPOs connected to 50Gb/s SR2 transceivers using only two channels each (2x25G).


Since each link can be 100m long, a single SR4 port broken out into four SR links can have each end located 100m from the SR4 port; it is theoretically able to link to anything within a 200m (656 foot) diameter circle, with the SR4 in a switch located in the middle and one SR transceiver at each of North, South, East, and West – each 100m away!


How Can 50Gb/s Cost Less Than 40Gb/s?

Top-of-Rack switches such as the Mellanox SN2700 support 32 ports and are available in 40G and 100G port versions. Using breakout fibers, the 32 transceiver ports can be split into 64 50Gb/s or 128 25Gb/s ports and configured in multiple mixtures depending on the configuration and bandwidth required. In this way, one 32-port Mellanox SN2700 100Gb/s ToR switch makes 50Gb/s less expensive overall than a 32-port SN2700 40Gb/s switch. Additionally, it provides an upgrade path to 100Gb/s by simply changing the fibers and end-point 25Gb/s or 50Gb/s transceivers to 100Gb/s SR4s.

32-ports at 40Gb/s with no upgrade path –or- 64 ports at 50Gb/s with an upgrade path.

2.5X bandwidth at only ~50% price premium – you do the math!
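If you would rather let Python do the math, here is a back-of-envelope sketch of that comparison. The switch prices are purely illustrative placeholders (not Mellanox pricing), chosen only to reflect the ~50% premium described above:

```python
# Illustrative breakout economics: 32 x 40G ports vs. 32 x 100G ports
# split 1:2 into 64 x 50G ports. Prices below are hypothetical.
def cost_per_gbps(switch_price, ports, gbps_per_port):
    """Total switch price divided by total bandwidth delivered."""
    return switch_price / (ports * gbps_per_port)

price_40g = 10_000       # assumed price, 32-port 40Gb/s switch
price_100g = 15_000      # assumed ~50% premium, 32-port 100Gb/s switch

c40 = cost_per_gbps(price_40g, 32, 40)    # 32 ports at 40G, no upgrade path
c50 = cost_per_gbps(price_100g, 64, 50)   # 64 ports at 50G via breakout

print(f"40G: ${c40:.2f}/Gb/s vs. 50G breakout: ${c50:.2f}/Gb/s")
print(f"bandwidth ratio: {(64 * 50) / (32 * 40):.1f}x")
```

Whatever placeholder prices you plug in, 2.5x the total bandwidth at a ~1.5x price means the cost per Gb/s drops sharply.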


The following graphic shows only part of Mellanox's complete, "end-to-end" portfolio of switches, adapters, DAC/AOC cables, and optical transceivers. Mellanox is one of only a few companies in the data center business that designs switch and network adapter ICs, transceiver control and Silicon Photonics ICs, and sells complete switching, adapter, cable and transceiver system solutions. The figure shows SR and SR4 transceivers with breakout fibers used in server/storage racks, between system racks within rows, and in switch-to-switch networking infrastructures over long reaches.



Server Links

Not every sub-system application today needs 100Gb/s SR4 bandwidth, so breakouts are a convenient way to split a single ToR port into two 50Gb/s or four 25Gb/s links. Most single-CPU-socket servers today use a network adapter, such as the Mellanox ConnectX-series network adapters, with four to eight 8GT/s PCIe Gen3 bus lanes. (PCIe throughput is specified in Giga-Transfers/sec rather than bits/sec, as transfers are often interrupted by other bus activities.) Four times 8GT/s is 32GT/s; subtracting approximately 20 percent PCIe overhead, this neatly fits into a single 25Gb/s link. Similarly, four 10GbE CPU I/Os can fit into a 40GbE or 50GbE link. Many hyperscale builders use two-socket servers with 50Gb/s links, and some with four sockets need 100Gb/s links to the Top-of-Rack switch. Next-generation CPUs are becoming available with multiple 25Gb/s ports integrated into the CPU chips, and servers are adding enormous amounts of DRAM and FLASH memory requiring faster I/Os. Soon, 100Gb/s will be too slow!
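That bandwidth arithmetic can be sketched in a few lines. The ~20 percent overhead figure is the rough approximation used above (on top of PCIe Gen3's 128b/130b line encoding), not a measured value:

```python
# Rough sketch: does a x4 PCIe Gen3 adapter fit a 25Gb/s network link?
lanes = 4
gt_per_lane = 8.0            # PCIe Gen3: 8 GT/s per lane
encoding = 128 / 130         # Gen3 128b/130b line encoding
overhead = 0.20              # assumed ~20% protocol overhead

raw = lanes * gt_per_lane                  # 32 GT/s on the wire
usable = raw * encoding * (1 - overhead)   # ~25 Gb/s of payload
print(f"raw {raw:.0f} GT/s -> ~{usable:.1f} Gb/s usable payload")
```

The usable payload lands at roughly 25 Gb/s, which is why a x4 Gen3 adapter pairs so neatly with a single 25Gb/s breakout channel.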

The Open Compute Project (OCP) "Yosemite" is a four-socket server with a 100Gb/s uplink – ideal for SR4 transceivers if optical connectors are needed, or for DAC or AOC cables if not. Mellanox offers ConnectX-4Lx SFP- and QSFP-based network adapter cards designed for the OCP Yosemite server.


“High-Speed” and “Storage” Can Be Used in the Same Sentence Now!

Storage is jumping into the high-speed game as well. For storage links, 10Gb/s was adequate in the past for HDD arrays, but with big SSDs and newer all-electronic FLASH memory, the jump to 25Gb/s is rapidly being adopted. Just three NVMe Flash cards require about 80Gb/s and can nearly saturate one 100G SR4 QSFP28 transceiver link!
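A quick sanity check of that claim. The per-card rate below is an assumed ~3.2 GB/s streaming figure for a PCIe Gen3 x4 NVMe card, not a benchmark result:

```python
# Back-of-envelope: how much of a 100G SR4 link do three NVMe cards use?
gbps_per_card = 3.2 * 8      # assumed ~3.2 GB/s per card -> ~25.6 Gb/s
cards = 3
link_gbps = 100              # one SR4 QSFP28 link

demand = cards * gbps_per_card
print(f"~{demand:.0f} Gb/s demand uses {demand / link_gbps:.0%} of the link")
```

Three cards come out to roughly 77 Gb/s – close enough to the ~80Gb/s figure above that a fourth card would overrun the link.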


Optical Buzzword Cheat Sheet

The optical technologies have more buzz words than you would ever believe and it continues to get worse!  Here are a few definitions for the most popular devices:

Two types of transceiver form-factors (connector shells):

  • SFP – Small Form-factor Pluggable; single-channel transceiver
  • QSFP – Quad Small Form-factor Pluggable; four-channel transceiver
  • “+” as in SFP+ means 10Gb/s; “28” as in SFP28 means 28G maximum data rate


Two types of optical connectors:

  • MPO – Multi-fiber Push On; 8-fiber, parallel connector supporting four bi-directional channels
  • Duplex LC – Lucent Connector; 2-fiber connector supporting one bi-directional channel
  • LC or MPO connectors can be used in the QSFP, but only LC in the smaller SFP form-factor

Types of optical fibers:

  • Multi-mode is a large-core fiber; single-mode has a tiny core
  • Either multi-mode or single-mode fiber can be used in either MPO or LC connectors
  • 3 types of multi-mode fibers and reaches: OM2 (30m), OM3 (70m), OM4 (100m+)
  • Orange and aqua jackets denote multi-mode; yellow denotes single-mode


Key Take-A-Ways

Mellanox LinkX SR and SR4 transceivers offer the lowest-cost, lowest power consuming, short reach, optical links that use detachable optical connectors.

This enables the most cost efficient, highest ROI, lowest Capex and Opex interconnect solution. The full portfolio of 10Gb/s and 25Gb/s line rates in SFP and QSFP form-factors enable customers to build a wide variety of configurations that meet every application in both InfiniBand and Ethernet protocols.

Mellanox designs and builds its own switch, adapter and transceiver ICs as well as Silicon Photonics.

This enables designing complete end-to-end systems with optimal performance and low power consumption between components. Because we design our own ICs and transceivers, the SR4 transceiver offers 2.0W power consumption with the CDRs turned off and 2.8W with all on – one of the lowest power consumptions in the industry. Matching the transceiver electronics to the switch and network adapter ICs guarantees the best performance possible; hence they are used by major hyperscale, enterprise, and storage system builders. Other features include:

  • Bit Error Ratio (BER) better than 1E-15 – 1,000 times better than the IEEE standard of 1E-12
  • Backwards compatible with 10Gb/s and 14Gb/s line rates
  • Programmable Rx Output Amplitude and Tx Input Equalizers
  • Selectable Retimers (CDRs)
  • Digital Diagnostic Monitoring (DDM)
  • Tx power monitoring

Check out the Mellanox LinkX website and more blogs, and log into the Mellanox Community for more detailed DAC, AOC and transceiver white papers and articles in the future. LinkX is Mellanox's brand name for its cables and transceivers product line.

Contact your Mellanox sales representative for availability and pricing options, and stay tuned to my blog for more interconnect news and tips.


More Information:  

Mellanox InfiniBand & Ethernet AOCs & DAC cables and transceivers.

Mellanox HDR switches

Mellanox HDR ConnectX-6 host bus adapters

Mellanox Community





AOCs – Active Optical Cables

Why are two transceivers and fibers priced less than one connectorized transceiver?

Quick Summary

This blog is Part 2 in a 3-part series on Mellanox's high-speed interconnect products. My first blog covered Direct Attach Copper (DAC) cables, which are the least expensive way to create a high-speed 25G/50G/100Gb/s interconnect but have a maximum reach of about 3-5 meters. Active Optical Cables (AOCs) pick up from there, reaching up to 100-200 meters, and are the least expensive optical links available. They are widely used in HPC and more recently became popular in hyperscale, enterprise and storage systems.


What is an AOC?

Optical transceivers convert electrical data signals into blinking laser light, which is then transmitted over an optical fiber. Optical transceivers have an optical connector to disconnect the fiber from the transceiver. AOCs instead bond the fiber connection inside the transceiver end, creating a complete cable assembly much like a DAC cable, only with a 3-to-200-meter reach capability. AOCs' main benefit is the very long reach of optical technology while acting like a simple, "plug & play" copper cable.

What are AOC Features and Advantages?

Compared to less expensive DAC cables, AOCs offer:

  • Longer reach capability than the DAC 3-7 meter limit
  • 3m to 100 meters with multi-mode technology
  • 100-200 meters with single-mode, Silicon Photonics technology
  • Lower weight, thinner cable and tighter bend radius, enabling increased airflow cooling and easier system installation

Compared to more expensive optical transceivers, AOCs offer:

  • Dramatically lower price than two optical transceivers plus connectorized fiber
  • Lower power consumption at 2.2 Watts versus up to 4.5 Watts for optical transceivers (4-channel)
  • Lower operational and maintenance costs

How Is It Different from Two Transceivers with Connectorized Fibers?

Permanently attaching the fibers is a seemingly simple change but yields a surprisingly large number of technical benefits and cost advantages; enough to create an entirely new category of interconnect products. Since the optics are contained inside the cable, designers do not have to comply with IEEE or IBTA industry standards for transceiver interoperability with other vendors. Hence, customers can pick and choose the lowest-cost, best-performing technology. Here are some of the results this simple change enables:

  • Lowest-priced optical interconnect available – almost half the price of optical transceivers, which is much more than just the savings from losing the optical connectors (more on this later)
  • Plug & play: Ease-of-use “cable” features – like DAC cables
  • Long reach: Up to 100 and 200-meter reach depending on the technology
  • Lowest optical power consumption per end – significantly lower than connectorized transceivers – saves operating expenses
  • No optical connectors to clean and maintain – saves operating expenses and increases reliability
  • Optically isolates electrical systems from the ground loops that can occur with copper DAC cables – a technical advantage

How Does an AOC Achieve a Lower Product Cost Than Two Transceivers?

An AOC uses two optical transceivers with integrated fiber. So, how on earth can it cost less than a single optical transceiver with an optical connector?

  1. Testing Costs: Optical testing accounts for 40-60 percent of the cost of manufacturing a transceiver. AOCs are tested in switch systems as an electrical test. The cable is plugged in; test patterns and data are run; come back later and look at the results. If good, ship! If not, scrap. Optical transceivers, on the other hand, require $500,000 of optical test equipment per station, a very experienced (i.e. expensive) test engineer, and a lot of time on the test bench. AOCs do away with all of this, since the testing is only in the electrical domain. Mellanox uses its "scratch & dent" switches to test AOCs, which is one way we achieve a bit error ratio (BER) of 1E-15 versus the 1E-12 IEEE standard – about 1,000 times fewer bit errors induced by the AOC link.
  2. Design Freedom: Since the optics are contained inside the AOC cable, designers can utilize the lowest-cost materials and transceiver designs. Besides deleting four MPO optical connectors (2 per end), for example, Mellanox's Silicon Photonics AOC uses only one laser versus four lasers for a multi-mode AOC. Low-cost but short-reach, orange-colored OM2 fiber can be used for shorter reaches.
  3. Freedom from Industry Standards: AOCs must comply with the IEEE, IBTA and SFF industry standards for electrical, mechanical, and thermal requirements, but the hardest part is the optical requirements. Since the optics are contained inside the cable, they do not have to meet any optical standards, which allows a lot more design freedom and eliminates the costly optical testing.
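To put the BER figures from point 1 in perspective, here is a small sketch of what 1E-15 versus 1E-12 means in practice, assuming a fully loaded 100Gb/s link (an idealized assumption for illustration):

```python
# Expected bit errors per day on a 100Gb/s link at two error ratios.
seconds_per_day = 86_400
bits_per_day = 100e9 * seconds_per_day   # fully loaded 100Gb/s link

errors_ieee = bits_per_day * 1e-12       # IEEE 1E-12 standard
errors_aoc = bits_per_day * 1e-15        # 1E-15 AOC spec

print(f"1E-12 -> {errors_ieee:.0f} errors/day")
print(f"1E-15 -> {errors_aoc:.1f} errors/day")
print(f"{errors_ieee / errors_aoc:.0f}x fewer errors")
```

At the IEEE floor that is thousands of bit errors per day; at 1E-15 it drops to a handful, which is why the 1,000x figure matters for links carrying storage traffic.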

AOCs Offer Lower Operational Costs Too

  1. AOCs do not have optical connectors to manually clean every time they are removed; a single speck of dust inside the connector can completely block the 50-um or 9-um diameter fiber light transmission area. In a transceiver link, there are two fiber ends and two transceiver ends to clean. Besides the personnel cost, the connector cleaners can cost upwards of $250 each and must be kept stocked.
  2. AOCs don't use MPO optical connectors, which, in crowded racks, can be dropped and the fiber end scratched, rendering them useless.
  3. Optical connectors can channel the electrical static charge that builds up on a long plastic cable, which can destroy the sensitive optical transceiver electronics.
  4. An AOC is a "plug and play" cable solution rather than the "plug, assemble and clean" solution of optical transceivers. Optical transceivers, fibers and connectors also come in many different and complicated product variants. These must all be exactly matched to the specific transceiver used, spares kept, and a technician trained in the specifications.
  5. AOC cables have a short bend radius and much thinner cable thickness than most DAC cables. This makes them easier to deploy and frees up a lot of space for increased airflow cooling in crowded systems.
  6. Lastly, there are big operational power cost savings. One Watt saved at the component level translates to 3-5 Watts at the data center facility level, once all the chassis, row, room and facility fans and air conditioning equipment is included along with the electrical power to drive them – not counting the repair and maintenance! AOCs are less complex than optical transceivers and offer lower power consumption. Mellanox's multi-mode AOCs draw about 2.2 Watts per end compared to 2.8-4.5W for optical transceivers.
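Here is a hedged sketch of how those per-component Watts compound at the facility level. The deployment size, electricity price, and the midpoint of the 3-5x multiplier are all assumed figures for illustration only:

```python
# Illustrative facility-level savings from AOCs vs. optical transceivers.
watts_saved_per_end = 4.5 - 2.2      # worst-case transceiver vs. AOC, per end
ends_per_link = 2
facility_multiplier = 4              # assumed midpoint of the 3-5x rule of thumb
usd_per_kwh = 0.10                   # assumed electricity price
hours_per_year = 24 * 365

links = 1_000                        # assumed modest deployment size
kw_at_facility = (links * ends_per_link * watts_saved_per_end
                  * facility_multiplier / 1000)
annual_savings = kw_at_facility * hours_per_year * usd_per_kwh
print(f"{kw_at_facility:.1f} kW saved -> ${annual_savings:,.0f}/year")
```

Even at these conservative placeholder numbers, a few Watts per link adds up to tens of kilowatts and five figures of annual Opex across a modest deployment.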


One Watt saved at the component level translates
into 3-5 Watts at the data center facility level!

How Are AOCs Used in Modern Data Centers?

While AOC reaches can extend to the limits of the optical technology used (100-200 meters), installing a long 100-meter (328 foot) cable, complete with an expensive transceiver end, is difficult in crowded data center racks, so the average reach deployed is typically between 3-30 meters. Only one "oops" per cable is allowed: damaging the cable means replacing it, as it cannot be repaired. For this reason, AOCs are typically deployed in open-access areas such as within racks or in open cable trays.

Mellanox's InfiniBand AOCs started out in about 2005 with DDR (4 x 5Gb/s) for use in the Top10 HPCs and quickly became the preferred solution for the large InfiniBand HPCs in the Top100. Today, they are the norm for QDR, FDR and EDR, joined in 2017 by the LinkX® 200G HDR line announced at the SC'16 event in November.

The power and cost savings caught the eye of Ethernet hyperscale and enterprise data center builders, and AOCs have since become a popular way to link Top-of-Rack switches upwards to aggregation-layer switches such as End-of-Row and leaf switches. Several hyperscale companies have publicly stated their preference for AOCs for linking Top-of-Rack switches. Additionally, single-channel (SFP) AOCs have become very popular in storage subsystems and with some hyperscale builders, who often run 10Gb/s or 25Gb/s AOCs from a Top-of-Rack switch to subsystems at reaches greater than the DAC limits of 3-7 meters.

Here is an example of how AOCs are typically used inside systems racks to link subsystems together and between switches and systems in rows:


Here is a more detailed view of Ethernet configurations showing the LinkX® 10Gb/s and 25Gb/s based AOCs, Spectrum switches and ConnectX-3, -4 and -5 QSFP and SFP network adapters.

Here is an example at the Texas A&M HPC center and as you can see, nothing but orange AOCs and open access!

Typical AOC Use in HPC Super Computers



Newly Announced 200Gb/s HDR InfiniBand AOCs

At the HPC supercomputer conference SC'16 in Salt Lake City, Utah, Mellanox announced its 40-port, 200Gb/s HDR InfiniBand line of Quantum-based switches and dual-port ConnectX®-6 host bus adapters. To link them all together, we also announced a line of LinkX® Direct Attach Copper (DAC) cables and splitters, plus Mellanox's own-design, Silicon Photonics-based 200Gb/s HDR Active Optical Cables (AOCs) running at a whopping 200Gb/s in a QSFP56, 4x50Gb/s configuration!


These will be available with Low-Smoke, Zero-Halogen (LSZH) jackets and reaches from 3 meters up to 100 meters, shipping in mid-2017. Watch our new LinkX® website for more detailed information!

Key Take-A-Ways

Mellanox LinkX® AOC cables offer the lowest-cost optical links with the lowest power and latency in the industry. This enables the most cost-efficient, highest-ROI, lowest-Capex-and-Opex interconnect solution. The full portfolio of 10Gb/s, 25Gb/s and 50Gb/s line rates in SFP and QSFP form-factors enables customers to build a wide variety of configurations that meet every application in both InfiniBand and Ethernet. Everything is manufactured by Mellanox using Mellanox-designed Silicon Photonics and control ICs.

Check out the Mellanox LinkX® website and Mellanox Academy for more detailed DAC, AOC and transceiver white papers and articles in the future. LinkX® is Mellanox’s brand name for its cables and transceivers product line.

Contact your Mellanox sales representative for availability and pricing options, and stay tuned to my blog for more interconnect news and tips.

More Information:

Mellanox InfiniBand & Ethernet AOCs & DAC cables and transceivers
Mellanox HDR switches
Mellanox HDR ConnectX-6 host bus adapters



SNIA Webcast on Ethernet Storage Interconnects- Join Us Dec 1, 10AM PST

2017 Ethernet Roadmap for Networked Storage

So far we have over 600 registered – Don’t miss out!

SNIA (Storage Network Industry Association) Ethernet Storage Forum (ESF), with participation from Dell, Intel, Mellanox, and Microsoft, will present what's happening in 2017 for high-speed interconnects: the mega-trends driving the industry, what's out there today and what's on the drawing boards. DAC, AOCs, multi-mode and single-mode transceivers, new form-factors QSFP-DD & OSFP, and standards group activities: IEEE, 25G/50G Consortium, NBase-T Alliance; plus more new secret buzzwords to guarantee all our future job security (and make us all dizzy): RDMA, RoCE, NRZ, PAM4, SR16, FR4, LR8, DR4, Coherent, CFP8, COBO.

The rate of change happening in data center interconnects is unprecedented: more new line rates and form factors are being developed now than over the past 10 years!

Driving it all are mobile, IoT, and smartphone use worldwide as everything goes "online," along with video and, soon, virtual reality. While 10Gb/s still dominates the Enterprise space, the Hyperscale crowd is driving faster speeds and new form factors at an unprecedented rate toward PAM4 signaling, 50Gb/s line rates, and 100G, 200G, and 400Gb/s form factors.

With only three NVME FLASH cards able to suck the life out of a 100Gb/s link, memory and storage systems are finally able to join the high-speed party!

Learn all about it at our webcast this Thursday.

Presenters include:

  • Vittal Balasubramanian, Principal Signal Integrity Engineer at Dell
  • Brad Smith, Director of Marketing, LinkX Interconnects, Mellanox
  • Brad Booth, Principal Engineer, Microsoft
  • Fred Zhang, Product Marketing Engineer, Intel
  • John Kim, Moderator, Director of Storage Marketing, Mellanox


Join us on December 1, 2016 at 10AM Pacific time for our live webcast.

Reserve your seat here: 2017 Ethernet Roadmap to Networked Storage