• In April 2020, NVIDIA completed its acquisition of Mellanox Technologies. The acquisition unites two of the world’s leading companies: together, NVIDIA’s computing platform and Mellanox’s interconnects power over 250 of the world’s TOP500 supercomputers, and the combined company counts every major cloud service provider and computer maker among its customers. Together, the two companies hold a broad portfolio of IP and patents that promises continued growth, and with common performance-driven cultures and a longstanding partnership, the duo is a natural fit, primed to continue creating next-generation data center-scale computing solutions.


  • Mellanox continued to grow at a healthy pace in 2019, fueled by strong demand for our market-leading 25, 50, and 100Gb/s Ethernet solutions as well as the launch of 200Gb/s and 400Gb/s Ethernet solutions using 50Gb/s PAM-4 technology. Another growth driver was the general availability of HDR InfiniBand products, such as Mellanox ConnectX-6 adapters and Quantum switches, as customers implemented HPC, artificial intelligence (AI), and machine learning (ML) solutions.

    In March, NVIDIA announced plans to acquire Mellanox for $6.9 billion. According to Jensen Huang, founder and CEO of NVIDIA, “The emergence of AI and data science, as well as billions of simultaneous computer users, is fueling skyrocketing demand on the world’s datacenters…. We’re excited to unite NVIDIA’s accelerated computing platform with Mellanox’s world-renowned accelerated networking platform under one roof to create next-generation datacenter-scale computing solutions.” The transaction is pending regulatory approvals.

    The company launched a new set of secure cloud SmartNICs in 2019, including the ConnectX-6 Dx and the BlueField-2 I/O Processing Unit (IPU), to support hardware-accelerated encryption of data at rest and in flight. Mellanox demonstrated 200GbE and 400GbE cables and transceivers, and also launched NVMe SNAP™ to accelerate and simplify flash storage provisioning for bare metal servers. The newly announced Quantum LongReach appliances extend EDR and HDR InfiniBand connectivity to distances of 10 and 40km, while the Mellanox Skyway™ 200 Gigabit Gateway accelerates connectivity between InfiniBand and Ethernet networks.

    Additional innovations included PCIe Gen4-compatible adapter solutions with the AMD EPYC Rome CPU launch in August; support and training packages for SONiC on Mellanox Spectrum switches; and ConnectX NVMe over Fabrics (NVMe-oF) solutions using both RoCE and TCP/IP. Partner integrations and solutions included optimized virtualized machine learning with VMware and NVIDIA; HDR-based HPC solutions with Dell; security solutions with partners including Check Point Software and Guardicore; and high-performance storage with Datera, DDN, DriveScale, Excelero, HPE, NetApp, Pure Storage, Qumulo, VAST Data, WekaIO and others.

    Major company milestones included passing the 1 million Ethernet switch port milestone in Q2; shipping over 1 million ConnectX and BlueField Ethernet network adapters in Q3; setting consecutive new revenue records in Q1, Q2, and Q3; and several strategic investments by Mellanox Capital.


  • 2018 was an outstanding year for Mellanox, with 26% annual revenue growth and, for the first time in company history, over one billion dollars in revenue. The Ethernet business grew rapidly, nearly doubling year over year, driven by the adoption of our market-leading 25, 50, and 100G adapters, cables, and switches. 2018 saw many firsts, with several newly released products including the next-generation Quantum and ConnectX-6 HDR 200Gb/s InfiniBand solutions. We also began shipping Spectrum-2 Ethernet solutions, which offer increased bandwidth for 200/400GbE, and made the first shipments of BlueField system-on-chip (SoC) platforms and SmartNIC adapters, which combine Mellanox's industry-leading ConnectX®-5 network adapter and acceleration technology with an array of high-performance 64-bit Arm A72 processor cores and a PCIe Gen3/4 switch. Adoption of RDMA over Converged Ethernet (RoCE) continued to ramp, with an expanded user base including large hyperscalers and enterprise customers. Mellanox was also the clear leader in high-performance computing interconnects this year, connecting the world's top three fastest supercomputers: the fastest high-performance computing (HPC) and artificial intelligence (AI) supercomputer, deployed at Oak Ridge National Laboratory (the first 200 gigabit per second dual EDR InfiniBand network); the second fastest in the US, deployed at Lawrence Livermore National Laboratory; and the fastest in China (ranked third overall). On the TOP500 list of the world's fastest computers, Mellanox InfiniBand and Ethernet solutions connected over 53% of the top-performing platforms, a total of 265 systems and 38% growth over the previous 12 months. Mellanox InfiniBand® and Ethernet solutions were also selected to accelerate the new NVIDIA® DGX-2 AI system, the first 2-petaflop system, which combines sixteen GPUs and eight Mellanox ConnectX® adapters supporting both EDR InfiniBand and 100 gigabit Ethernet connectivity.
Finally, Mellanox LinkX optical transceivers, Active Optical Cables (AOCs), and Direct Attach Copper Cables (DACs) surpassed the one million 100Gb/s QSFP28 port milestone. Mellanox continues to see strength across all of our product lines, including InfiniBand and Ethernet, and is well positioned to continue growing as successful execution of our strategies continues to deliver enhanced value for our customers and, in turn, increased value for our shareholders.


  • In 2017, Mellanox began to benefit from its investments in next-generation Ethernet and InfiniBand solutions and its new BlueField system-on-a-chip. During the year, Mellanox became the second-largest supplier of Ethernet adapters and the leader in the fastest-growing 25G-and-above segment, with more than 60 percent market share. Furthermore, for the first time, Ethernet-based product revenues surpassed those of InfiniBand, fueled by market share gains for adapters, switches, and cables. As a result of earlier investments, Ethernet-based products achieved a 29% year-over-year growth rate, continuing the multi-year revenue diversification strategy. In addition, Mellanox made significant strides in other areas, including machine learning and AI. Baidu, the leading Chinese-language Internet search provider, selected Mellanox to accelerate its machine learning platforms; Tencent Cloud adopted Mellanox interconnect solutions for its high-performance computing (HPC) and artificial intelligence (AI) public cloud offering; and Meituan.com selected Mellanox Spectrum™ Ethernet switches, ConnectX® adapters, and LinkX™ cables to accelerate its multi-thousand-server artificial intelligence, big data analytics, and cloud data centers. The Company also celebrated several milestones, shipping more than 100,000 LinkX cables and more than 200,000 optical transceiver modules, both for next-generation 100Gb/s networks. InfiniBand continued to dominate the high-performance computing space, accelerating 77 percent of the new high-performance computing systems on the TOP500 list and demonstrating the industry’s strong, continued adoption of InfiniBand and its leading market share in high-performance computing and artificial intelligence.
Finally, the company was officially ranked by the international accounting and consulting firm Deloitte among the 50 fastest-growing technology companies in Israel, and received The Linley Group’s Analyst Choice Award for “Best Networking Chip” of 2017 for the Mellanox ConnectX-5 Ethernet Adapter IC.


  • Mellanox completed the acquisition of EZchip, a leader in high-performance processing solutions for carrier and data center networks. The Company unveiled ConnectX®-4 Lx adapters, the world’s first 25 and 50Gb/s Ethernet single- and multi-host adapters. The BlueField™ family of programmable processors for networking, security, and storage applications was introduced, addressing the industry need for higher levels of SoC (System-on-Chip) integration to simplify system design, lower total power, and reduce overall system cost. ConnectX-5 was also introduced as the most advanced 10, 25, 40, 50, and 100Gb/s InfiniBand and Ethernet intelligent adapter on the market. Mellanox interconnect solutions accelerated the world's fastest supercomputer, at the supercomputing center in Wuxi, China. InfiniBand continued to garner market share: InfiniBand solutions were chosen in nearly four times more end-user projects in 2016 than Omni-Path, and five times more than other proprietary offerings, as shown in the November 2016 release of the TOP500 list. Mellanox continued to be the leading high-performance Ethernet NIC provider, garnering nearly 90 percent market share of the 25Gb/s-and-greater adapter market.


  • Mellanox announced Multi-Host™, an innovative technology that provides high flexibility and major savings in building next-generation, scalable cloud, Web 2.0, and high-performance data centers. The Company introduced the industry’s first 100 Gigabit Ethernet, Open Ethernet-based, non-blocking switch, Spectrum, the next generation of its Open Ethernet-based switch IC. With Spectrum, Mellanox was the first to offer end-to-end 10/25/40/50 and 100 Gigabit Ethernet connectivity. InfiniBand continued to garner market share from Ethernet and proprietary interconnects, and surpassed a milestone by connecting the majority (51.4 percent) of the supercomputers on the TOP500 list. InfiniBand-connected systems grew 15.8 percent year-over-year, from June 2014 to July 2015. The shipment of Spectrum, combined with Mellanox’s ConnectX®-4 NICs and LinkX™ fiber and copper cables, also ensured that Mellanox was the first to deliver comprehensive end-to-end 10, 25, 40, 50, and 100 Gigabit Ethernet data center connectivity solutions.


  • Mellanox released the world’s first 40 Gigabit Ethernet NIC based on Open Compute Project (OCP) designs. ConnectX-3 Pro 40GbE OCP-based NICs are built to OCP specifications and optimize the performance of scalable and virtualized environments by providing virtualization and overlay network offloads. The company introduced CloudX, a reference architecture for building efficient cloud platforms. CloudX is based on the Mellanox OpenCloud architecture, which leverages off-the-shelf server, storage, interconnect, and software components to form flexible and cost-effective public, private, and hybrid clouds. Mellanox introduced LinkX, a comprehensive product portfolio of cables and transceivers supporting interconnect speeds up to 100Gb/s for both Ethernet and InfiniBand data center networks. The company announced that its Switch-IB family of EDR 100Gb/s InfiniBand switches achieved world-record port-to-port latency of less than 90ns. In addition, Mellanox announced the ConnectX-4 single/dual-port 100Gb/s InfiniBand and Ethernet adapter, the final piece of the industry’s first complete end-to-end EDR 100Gb/s InfiniBand interconnect solution. The future is very bright for Mellanox, thanks to all of our hard work, execution, and passion for the company. Here’s to another 15 years and more!


  • Mellanox introduced the “Generation of Open Ethernet,” the first open switch initiative. With this new approach, Mellanox took Software Defined Networking (SDN) to the next level, opening the source code on top of its existing Ethernet switch hardware. During the year, Mellanox’s Ethernet market share reached 19 percent of the total 10GbE NIC, LOM, and controller market, propelling the company into the top three Ethernet NIC providers. Mellanox acquired two companies, Kotura and IPtronics, enhancing its ability to deliver complete end-to-end optical interconnect solutions at 100Gb/s and beyond. By the end of the year, Mellanox had grown to more than 1,400 employees worldwide.


  • Mellanox expanded the line of end-to-end FDR 56Gb/s InfiniBand interconnect solutions with new 18-port, 108-port, 216-port, and 324-port non-blocking fixed and modular switches. The Connect-IB adapter was announced in June, delivering the industry’s highest throughput of 100Gb/s on a single adapter card utilizing PCI Express 3.0 x16. More key announcements followed in November, including the Unified Fabric Manager (UFM-SDN) Data Center Appliance and UFM software version 4.0, a comprehensive management solution for SDN and scalable compute and storage infrastructures, and the MetroX series, extending native InfiniBand and RDMA reach to distances of up to 80km. From June 2012 to November 2012, the number of FDR 56Gb/s InfiniBand systems increased by nearly 2.3X, including the top two ranked InfiniBand systems on the TOP500 list. Mellanox InfiniBand solutions connected 43 percent (10 systems) of all PetaScale-based systems (23 systems). This year marked the first time Mellanox’s annual revenues exceeded the $500 million mark, highlighting the increased demand for its interconnect solutions. By the end of the year, Mellanox had grown to more than 1,260 employees worldwide.


  • Mellanox acquired Voltaire to expand its software and switch product offerings and strengthen its leadership position in providing end-to-end connectivity systems in the growing worldwide data center server and storage markets. Key product announcements included the introduction of SwitchX, the industry’s first FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet multi-protocol switch, and ConnectX-3, the industry’s first FDR 56Gb/s InfiniBand and 10/40 Gigabit Ethernet multi-protocol adapter. This marked the first time Mellanox sold an end-to-end 10/40GbE solution, and today Mellanox is still the only provider of an end-to-end 40GbE solution. Mellanox also began supporting open networking initiatives, joining the Open Networking Foundation, the Open Virtualization Alliance, and OpenStack as part of a continued commitment to next-generation data center technologies. The company also joined the Open Compute Project (OCP) and introduced the first 10GbE mezzanine adapters for OCP servers. By the end of 2011, Mellanox had more than 840 employees worldwide.


  • Mellanox began selling its own switch systems under the IS5000 brand, as well as its own branded copper and fiber cables, making it the first company to provide a complete end-to-end InfiniBand solution. InfiniBand momentum continued on the TOP500: InfiniBand-connected systems grew 18 percent year-over-year to represent more than 43 percent of the TOP500, or 215 systems, with more than 98 percent of all InfiniBand clusters leveraging Mellanox InfiniBand solutions. Eyal Waldman was named ‘CEO of the Year’ by the Israeli Center for Management. HPC in the Cloud, in its first annual Reader & Editor’s Choice Awards, named Mellanox a ‘Cloud Network Innovator.’ At the annual Supercomputing Conference, Mellanox was awarded ‘Best HPC Interconnect Product or Technology’ by HPCwire. Oracle Corporation announced a strategic investment in Mellanox Technologies, acquiring 10.2% of Mellanox’s ordinary shares in the open market for investment purposes, to solidify the companies’ common interest in the future of InfiniBand.


  • Mellanox was the first to deliver Microsoft Logo-qualified InfiniBand adapters for Windows HPC Server 2008. Later that year, the company introduced ConnectX-2, a high-performance, low-power connectivity solution, along with the ConnectX-2 VPI adapter card, delivering flexibility to next-generation virtualized and cloud data centers. The industry and press took notice. Now with more than 370 employees, Mellanox ranked number 20 among the fastest-growing companies in Israel in Deloitte’s 2009 Israel Technology Fast 50 program. HPCwire honored the company with two Editor’s Choice Awards, for Best Product and Best Government & Industry Collaboration. Now representing nearly 37 percent of the TOP500, Mellanox InfiniBand was shown to enable the highest system efficiency and utilization (96 percent).


  • Named “Best of Interop 2008,” Mellanox won the Data Center & Storage category with its ConnectX® EN 10GbE server and storage I/O adapter. The company announced the availability of the first QDR 40Gb/s InfiniBand adapters and switch silicon devices, leapfrogging the competition, and in record time (one month after release to production) these devices powered one of the world’s fastest supercomputers. By the end of the year, nearly all top-tier OEMs were reselling 40Gb/s InfiniBand in their server platforms.


  • Mellanox completed its initial public offering on the NASDAQ in the US, trading under the symbol “MLNX”. Later, the company was listed on the Tel Aviv Stock Exchange (TASE) and added to the TASE TA-75, TA-100, Tel-Tech and Tel-Tech 1. Mellanox surpassed the 2 million InfiniBand port milestone. By June, the number of supercomputers using InfiniBand interconnects on the TOP500 list had increased to 132 (26% of the list), 230% more than the 40 supercomputers on the June 2006 list and 61% more than the 82 supercomputers reported on the November 2006 list. Mellanox ranked number 146 among the fastest-growing companies in North America on Deloitte’s 2007 Technology Fast 500, number 16 among the fastest-growing companies in Israel on the 2007 Deloitte Israel Technology Fast 50 list, and number 12 in Deloitte’s Technology Fast 50 program for Silicon Valley software and information technology companies. Key Mellanox product announcements during the year included the ConnectX EN dual-port 10 Gigabit Ethernet adapter chips and NICs, PCI Express® 2.0 20Gb/s InfiniBand and 10 Gigabit Ethernet adapters, and the dual-port 10GBase-T NIC.


  • Mellanox InfiniBand solutions began selling through HP for its c-Class BladeSystem, beginning a long tradition of development and joint offerings with HP that would soon make HP one of Mellanox’s top customers. Mellanox launched the ConnectX adapter architecture, which would enable QDR 40Gb/s InfiniBand and 10GbE connectivity on a single adapter. InfiniBand-based supercomputers continued to grow on the TOP500, increasing 105% since June 2006 and 173% since the previous year. Eyal Waldman was named "CEO of the Year" by IMC. Mellanox was listed as #6 in Byte&Switch’s "Top 10 Private Companies: Spring 2006”. The company prepared for an initial public offering and filed a registration statement with the Securities and Exchange Commission. By the end of the year, Mellanox had nearly 170 employees.


  • Mellanox surpassed the 500K InfiniBand port milestone. More technology advances were announced as Mellanox became the first to ship DDR 20Gb/s InfiniBand adapters and switch silicon, making it the industry bandwidth leader in high-performance server-to-server and server-to-storage connectivity. Industry and press accolades included selection to the Red Herring Top 100 Europe annual list of the most promising private technology companies, selection by AlwaysOn as a Top 100 Private Company award winner, and being named Globes’ Most Promising Start-up of 2005. Mellanox received the Editor’s Choice Award for Most Innovative HPC Networking Solutions from HPCwire. EDN named Mellanox’s InfiniHost III Lx to its “Hot 100 in 2005,” and Electronic Design magazine named InfiniScale III “Best of Embedded 2005: Hardware.”


  • Mellanox crossed the 200K port sales milestone and introduced the InfiniHost III Ex InfiniBand Host Channel Adapter and the third-generation 144-port InfiniBand switch platform. Later that year, Mellanox showcased the world’s first single-chip 10Gb/s adapter with the new “MemFree” InfiniBand technology, which enabled industry-leading price/performance, low power, and a small footprint. In November, Mellanox announced InfiniHost III Lx, the world’s smallest 10Gb/s adapter. InfiniBand interconnects were now used on more than a dozen systems on the prestigious TOP500 list of the world's fastest computers, including two systems that achieved a top-ten ranking.


  • At the beginning of the year, Mellanox announced that it had shipped over 50K InfiniBand ports. Several key announcements followed, including the architecture of a PCI Express-enabled dual-port 10Gb/s InfiniBand HCA device, a 96-port switch design for the High Performance Computing (HPC) market, and a 480Gb/s third-generation InfiniBand switch device. By June, Mellanox announced that it had shipped over 100K InfiniBand ports and had been selected by Virginia Tech to create the world’s largest InfiniBand cluster. The company enabled Virginia Tech to build the world’s third-fastest supercomputer in a record time of less than four months.


  • Despite the events surrounding the Dot-Com collapse and the 9/11 World Trade Center attacks in 2001, Mellanox was able to secure $56M in new funding, demonstrating confidence in the company’s leadership and product direction. Mellanox announced the immediate availability of its Nitro InfiniBand technology-based blade server and I/O chassis reference platform. The Nitro platform marked the first-ever InfiniBand Architecture-based server blade design and provided a framework that helped deliver the full benefits of server blades. Mellanox also demonstrated the first-ever InfiniBand server blade MPI cluster. During the year, Mellanox announced the availability of the Nitro II 4X 10Gb/s InfiniBand server blade reference platform and partnerships to advance industry-standard, hardware-independent, open source InfiniBand software interfaces. Mellanox continued to receive acknowledgements from the industry and press, including selection as Emerging Company of the Year at the Server I/O Conference and Tradeshow, recognition as one of the five "most promising" companies at the first annual Semiconductor Venture Fair, and being named to the Red Herring Magazine 100 for the second year in a row.


  • Mellanox shipped InfiniBridge 10Gb/s InfiniBand devices to customers for revenue, marking the industry’s first commercially available InfiniBand semiconductors supporting both 1X and 4X switches and channel adapters. Four new InfiniBridge reference platforms were released, the first platforms to support 10Gb/s copper connections and small form-factor pluggable (SFP) connectors. Mellanox then introduced “InfiniPCI” technology, enabling transparent PCI-to-PCI bridging over InfiniBand switch fabrics, and the InfiniScale switching devices, the first-ever commercially available InfiniBand devices with integrated physical-layer SerDes. Mellanox shipped over ten thousand InfiniBand ports, and now had more than 200 employees worldwide.


  • The first draft of the InfiniBand specification was released, followed by the InfiniBand Architecture 1.0 specification. During the year, Mellanox raised $25.7M in second-round financing, with Raza Venture Management and Intel Capital joining Sequoia Capital and US Venture Partners in funding Mellanox. That same year, Agilent, Brocade, and EMC joined the InfiniBand Trade Association, signaling the storage industry’s support for the InfiniBand Architecture. At this time, Mellanox had 39 employees, with business operations, sales, marketing, and customer service headquartered in Santa Clara, CA, and design, engineering, and quality and reliability operations based in Israel.


  • Mellanox was founded in 1999 by Eyal Waldman, along with an experienced team of engineers, in Yokneam, Israel. The company was founded to develop semiconductors for data center infrastructure based on the Next Generation I/O (NGIO) standard, and received first-round funding of $7.6M from Sequoia Capital and US Venture Partners. Mellanox began development of an NGIO-to-Future I/O bridge architecture. During that year, the NGIO and Future I/O standards groups announced a merger. The new standard was temporarily called “ServerIO” and was envisioned to combine the best-of-class features of both technologies. The first developer’s conference for the newly merged group was held in San Francisco, where the name “InfiniBand” was introduced and the InfiniBand Trade Association was formed.
