Mellanox Takes Home “Outstanding Components Vendor” Award

Unlike Leonardo DiCaprio, who finally won his first Oscar in 2016, after six nominations, for the survival epic The Revenant, Mellanox Technologies took home the title of "Outstanding Components Vendor" in the Light Reading 2016 Leading Lights Awards on our FIRST TRY. This award is given to the components vendor that stands out from its competitors, is consistently innovative and trendsetting in the industry, makes investors proud and makes employees happy. You can see why we were thrilled to win, and why Mellanox deserves to be the winner.

 


The Leading Lights program, which is in its 12th year, has 26 core categories focusing on next-generation communications technologies, applications, services and strategies. The awards are given to companies that have shown prominent leadership and inventive thinking in their fields for the past year. Judging was conducted by Light Reading’s editors and the analyst team from Heavy Reading, and the winners were announced at an awards dinner at Hotel Ella in Austin, Texas, on Monday, May 23 to coincide with the Big Communications Event. Kevin Deierling, our VP of Marketing, Chris Shea, our VP of Business Development for the CSP vertical, and I were able to attend and accept the award in person on behalf of Mellanox Technologies.

As part of the celebration, Mellanox sponsored a table at the awards dinner to share this moment with our valued customers, partners and friends from the cloud and NFV ecosystem, including Affirmed Networks, Hewlett Packard Enterprise, Verizon, Technicolor, Nokia and Heavy Reading.


Even more amazing, the Mellanox table won the largest number of awards! Of the six companies represented at our table, we were honored with a total of four trophies, with Affirmed Networks winning "Private Company of the Year", Technicolor honored for "Best New Cable Product", and Nokia hailed for the "Most Innovative SDN Product Strategy". On top of that, the Mellanox table was also the loudest cheering table at the awards dinner!

We are thrilled! We are also grateful for the recognition from Light Reading; thank you, Steve Saunders and Ray Le Maistre. Given the volume and quality of entries, as well as how seriously the industry takes these awards, we were very excited and proud to accept it. I'd say it's even better than an Oscar!

100GbE Switches – Have You Done The Math?

100GbE switches – does that sound futuristic? Not really. 100GbE is here and is being deployed by those who do the math…
100GbE is not just about performance; it is about saving money. For many years, the storage market has been "doing the math": $/IOPS is a very common metric for measuring storage efficiency and making buying decisions. Ethernet switches are no different: when designing your network, $/GbE is the way to measure efficiency.
While enhanced performance is always better, 100GbE is also about using fewer components to achieve better data center efficiency, in both CapEx and OpEx. Whether a server should run 10, 25, 50 or 100GbE is a question of performance, but with switches, 100GbE simply means better return on investment!
Building a 100GbE switch does not cost 2.5X more than building a 40GbE switch, and in today's competitive market, vendors can no longer charge exorbitant prices for their switches. Those days are over.
With 25GbE being adopted on more servers simply to get more out of the servers you have already paid for, 100GbE is the natural way to connect switches.
Today, when people do the math, they minimize the number of links between switches by using 100GbE. When a very large POD (Performance Optimized Datacenter) is needed, we sometimes see 50GbE used as the uplink to increase spine switch fan-out and thus the number of servers connected to the same POD. In other cases, people simply use the fastest speed available: it used to be 40GbE, and today it is 100GbE.
Who are these customers migrating to 100GbE? They are the ones who consider data center efficiency highly important to the success of their business. A few examples:
Medallia recently deployed 32 Mellanox SN2700 switches running Cumulus Linux. Thorvald Natvig, Medallia's lead architect, told us that the math is simply about cost effectiveness, especially when the switches are deployed with zero touch and run simple L3 protocols, eliminating the old-fashioned complications of STP and other unnecessary protocols. QoS? It is needed when the pipes are insufficient, not when 100GbE provides enough bandwidth from each rack. Buffers? Scale? The Mellanox Spectrum ASIC provides everything a data center needs today and tomorrow.
The University of Cambridge has also done the math and selected the Mellanox end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, for its OpenStack-based scientific research cloud. Why? 100GbE is there to unleash the capabilities of the NexentaEdge software-defined storage solution, which can easily stress a 10/40GbE network.
Enter has been running Mellanox Ethernet switches for a few years now. 100GbE is coming soon: Enter will deploy Mellanox Spectrum SN2700 switches with Cumulus Linux because they did the math! As a cloud service provider, Enter cannot afford to wait for 100GbE to be everywhere before adopting it; waiting means losing money. In today's competitive world, standing still is like walking backwards. 100GbE is here, it works, and it is priced right!
Cloudalize was about to deploy a 10/40GbE solution. After they did the math, they went directly to 100GbE with Mellanox Spectrum SN2700 running Cumulus Linux.

To summarize: if data center efficiency is important for your business, it is time to do the math:
1. Check the cost of any 10/40/100GbE solution vs. Mellanox Spectrum 100GbE
Cost must include all components: cables, support, licenses (no additional licenses with Mellanox)
2. Please note that even when 10GbE on the server is enough, 100GbE uplinks still make sense
3. A break-out cable always costs less than 4 x single-speed cables
4. Pay attention to hidden costs (feature licenses, extra support…)
5. Consider the price of staying free, with 100% standard protocols and no "vendor-specific" protocols (a nicer way of saying "proprietary")
6. If 100GbE turns out to be more cost effective, it is time to review the differences between the various 100GbE switch solutions on the market; the following performance analysis provides a pretty good view of the available options
7. How much money do you spend on QoS vs. the alternative of throwing bandwidth at the problem?
8. $/GbE is the best way to measure network efficiency (a simple sketch of this calculation follows the list)
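
As a starting point, here is a minimal sketch of the $/GbE calculation referred to in this list. Every price and port count in it is a hypothetical placeholder rather than a Mellanox or competitor figure, so plug in real quotes, including cables, licenses and support, when doing the math for your own network.

```python
# Hypothetical illustration of the $/GbE comparison described above.
# All prices below are placeholders -- substitute real quotes (including
# cables, licenses and support) for your own comparison.

def dollars_per_gbe(switch_cost, ports, speed_gbe,
                    cable_cost_per_port=0.0, license_cost=0.0, support_cost=0.0):
    """Total solution cost divided by total switching bandwidth in GbE."""
    total_cost = (switch_cost + ports * cable_cost_per_port
                  + license_cost + support_cost)
    total_gbe = ports * speed_gbe
    return total_cost / total_gbe

# Placeholder numbers for a 32-port 100GbE switch vs. a 32-port 40GbE switch.
option_100gbe = dollars_per_gbe(switch_cost=20000, ports=32, speed_gbe=100,
                                cable_cost_per_port=150)
option_40gbe = dollars_per_gbe(switch_cost=12000, ports=32, speed_gbe=40,
                               cable_cost_per_port=100, license_cost=2000)

print(f"100GbE option: ${option_100gbe:.2f} per GbE")
print(f" 40GbE option: ${option_40gbe:.2f} per GbE")
```
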
Feel free to contact me at amitka@mellanox.com; I would be happy to help you "do the math" and compare any 10/40/100GbE solution to Mellanox Spectrum.

Mellanox Named Leading Lights Outstanding Component Vendor Finalist

When I heard that Mellanox was named an Outstanding Components Vendor finalist for Light Reading's Leading Lights Awards, I was thrilled and proud, but not surprised, because I was confident that Mellanox deserved to be in the spotlight. Mellanox is uniquely positioned to help Communication Service Providers build their next-generation infrastructure with our vision in cloud networking and our novel approach to high-performance, high-quality interconnect. It is our mission to drive new technologies to market, revolutionizing the data center.

Mellanox has been a dominant player in the High-Performance Computing sector, managing large, distributed, computation-intensive workloads that require high-speed communication between processor cores, and between processor cores and data. As a result, the Mellanox architecture and R&D teams have rich experience in designing semiconductor chipsets that push communication speed limits while providing low latency, low power consumption, and predictable, reliable application performance.

Building on our success in HPC, Mellanox expanded its footprint into the hyper-scale web and cloud service provider space, penetrating the majority of the top web services giants on a global basis. The infrastructure for this sector normally follows a scale-out, software-defined architectural pattern, and a high-performance data center network fabric is key to supporting its communication and data access needs. More importantly, this new generation of companies carrying out the mission of digital transformation expects its infrastructure to support agile innovation instead of being a roadblock. As such, they want to build their infrastructure much in the same style as building with Lego blocks. At Mellanox, we call this style of network infrastructure building "Open Composable Networks (OCN)". OCN can truly unleash agile innovation, accelerate diverse workloads, and drive cloud-scale efficiency. It leverages hyper-scale web and cloud network architecture designs and is based on network disaggregation, open and unified abstraction interfaces, and automated orchestration and management.
Just like Lego building needs a set of high-quality basic components, the foundation of OCN relies on Mellanox end-to-end interconnect components that guarantee high performance:
Mellanox ConnectX-4 series of NICs:
- Best DPDK performance of 75 million pps on a 100G interface and 33 million pps on a 25G interface (the line-rate sketch after the component list puts these figures in context)
- Advanced Switching And Packet Processing (ASAP2) support: SDN control plane with an accelerated data plane through the NIC ASIC
- Multi-host NIC supporting higher CPU density per server, and open, flexible combinations of CPUs
- Option of advanced acceleration and intelligent offload through on-board FPGA, multi-core processors and network processors

Mellanox Spectrum Switch IC and Top-of-Rack Switch System:
- Open Ethernet support of Cumulus, Microsoft SONiC, MetaSwitch, OpenSwitch and MLNX-OS
- Zero packet loss, at any packet size, over any speed (10/25/40/50/100Gb/s), up to 6.4Tb/s switching capacity
- Efficient switch memory management resulting in 9X-15X more effective buffering and congestion resilience
- Fair bandwidth allocation independent of physical port
- Industry-leading, true cut-through latency
- Forwarding database sized for hyper-scale infrastructure build-out
- Optimized for SDN with OpenFlow and overlay tunneling support, including VXLAN, NVGRE, Geneve and MPLS

Mellanox LinkX Cables:
- Copper cables, active optical cables, and optical transceivers to support distances from < 2 m to 2 km
- Silicon Photonics-based single mode and VCSEL-based multi-mode optical modules and cables for 25, 50, and 100Gb/s networks
- Full range of 100Gb/s products in the same QSFP28 footprint
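
To put the DPDK packet-rate figures above in context, here is a minimal back-of-the-envelope sketch (my own arithmetic, not a Mellanox benchmark) of the theoretical line-rate packet rates for minimum-size Ethernet frames:

```python
# Back-of-the-envelope check of the packet-rate figures quoted above.
# A minimum-size Ethernet frame occupies 64 bytes plus 8 bytes of preamble
# and 12 bytes of inter-frame gap on the wire, i.e. 84 bytes per packet.

WIRE_BYTES_PER_FRAME = 64 + 8 + 12  # 84 bytes on the wire

def line_rate_mpps(link_gbps, frame_wire_bytes=WIRE_BYTES_PER_FRAME):
    """Theoretical maximum packet rate, in millions of packets per second."""
    bits_per_frame = frame_wire_bytes * 8
    return link_gbps * 1e9 / bits_per_frame / 1e6

for speed in (25, 100):
    print(f"{speed}GbE: ~{line_rate_mpps(speed):.1f} Mpps at 64-byte frames")
# Prints ~37.2 Mpps for 25GbE and ~148.8 Mpps for 100GbE, the ceilings
# against which the quoted 33 Mpps and 75 Mpps DPDK figures can be read.
```
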

OCN is perfect for NFV use cases such as virtualized EPC, IMS/SBC, vCPE, and vCDN, enabling communications service providers to realize virtualization and build multi-cloud infrastructure without a performance penalty.

If you are heading to BCE in Austin, be sure to join Mellanox in our two panel discussions:
• BCE Day 1 May 24th 4:15-5:05 p.m. : Components: Data Center Interconnects: Delivering 25G TO 400G
• BCE Day 2 May 25th 2:15-3:05 p.m.: Data Centers and Cloud Services: The New Telco Data Center
My fingers are crossed; I am hoping that Mellanox will walk down the red carpet in Austin as the winner of the Leading Lights Outstanding Components Vendor award.

10/40GbE Architecture Efficiency Maxed-Out? It’s Time to Deploy 25/50/100GbE

In 2014, the IEEE rejected the idea of standardizing 25GbE and 50GbE over one lane and two lanes respectively. It was then that a group of technology leaders (including Mellanox, Google, Microsoft, Broadcom, and Arista) formed the 25Gb Ethernet Consortium in order to create an industry standard defining interoperable solutions. The Consortium has been so successful and pervasive in its mission that many of the larger companies that had opposed standardizing 25GbE in the IEEE have since joined the Consortium and are now top-level promoters. Since then, the IEEE has reversed its original position and has now standardized 25/50GbE.

However, now that 25/50GbE is an industry standard, it is interesting to look back and analyze whether the decision to form the Consortium was the right one.


There are many ways to handle such an analysis, but the best way is to compare the efficiency that modern ultra-fast and ultra-scalable data centers experience when running over a 10/40GbE architecture versus a 25/50/100GbE architecture. Here, too, there are many parameters that can be analyzed, but the most important is the architecture's ability to achieve (near) real-time data processing (serving the ever-growing "mobile world") at the lowest possible TCO per virtual machine (VM).

Of course, processing the data in (near) real-time requires higher performance, but it also needs cost-efficient storage systems, which implies that scale-out software defined storage with flash-based disks must be deployed. Doing so will enable Ethernet-based networking and eliminate the need for an additional separate network (like Fibre Channel) that is dedicated to storage, thereby reducing the overall deployment cost and maintenance.

To further reduce cost, and yet to still support the faster speeds that flash-based storage can provide, it is more efficient to use only one 25GbE NIC instead of using three 10GbE NICs. Running over 25GbE also reduces the number of switch ports and the number of cables by a factor of three. So, access to storage is accelerated at a lower system cost.  A good example of this is the NexentaEdge high performance scale-out block and object storage that has been deployed by Cambridge University for their OpenStack-based cloud.


Building a bottleneck-free storage system is critical for achieving the highest possible efficiency of various workloads in a virtualized data center. (For example, VDI performance issues begin in the storage infrastructure.) However, no less important is to find ways to reduce the cost per VM, which is best accomplished by maximizing the number of VMs that can run on a single server. With the growing number of cores per CPU, as well as the growing number of CPUs per server, hundreds of VMs can run on a single server, cutting the cost per VM. However, a faster network is essential to avoid becoming I/O bound. For example, a simple ROI analysis of a VDI deployment of 5,000 virtual desktops, comparing just the hardware CAPEX savings, shows that running over 25GbE cuts the per-VM cost in half. Adding the cost of the software and the OPEX further improves the ROI.
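
For illustration only, the sketch below mimics this kind of per-VM hardware CAPEX comparison. Every number in it is a hypothetical placeholder (the roughly 2x gain comes from assuming the faster network lets each server host about twice as many VMs before becoming I/O bound), so substitute real quotes for a real analysis.

```python
# Illustrative per-VM hardware CAPEX comparison for a 5,000-desktop VDI
# deployment. Every cost and sizing figure below is a hypothetical
# placeholder, not data from the analysis referenced in the post.
import math

def capex_per_vm(num_vms, vms_per_server, server_cost,
                 nics_per_server, nic_cost, switch_port_cost, cable_cost):
    """Rough hardware cost per VM: servers plus NICs, switch ports and cables."""
    servers = math.ceil(num_vms / vms_per_server)
    ports = servers * nics_per_server
    total = (servers * server_cost
             + ports * (nic_cost + switch_port_cost + cable_cost))
    return total / num_vms

# 10GbE design: three NICs per server, and fewer VMs per server because the
# network becomes the bottleneck before the CPUs do.
cost_10gbe = capex_per_vm(5000, vms_per_server=100, server_cost=20000,
                          nics_per_server=3, nic_cost=300,
                          switch_port_cost=250, cable_cost=80)
# 25GbE design: a single faster NIC and roughly twice as many VMs per server.
cost_25gbe = capex_per_vm(5000, vms_per_server=200, server_cost=20000,
                          nics_per_server=1, nic_cost=400,
                          switch_port_cost=300, cable_cost=100)

print(f"10GbE design: ${cost_10gbe:,.0f} per VM")
print(f"25GbE design: ${cost_25gbe:,.0f} per VM")
```
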


The growth in computing power per server and the move to faster flash-based storage systems demand higher performance networking. The old 10/40GbE-based architecture simply cannot hit the right density/price point, and the new 25/50/100GbE speeds are therefore the right choice to close the ROI gap.

As such, the move by Mellanox, Google, Microsoft, and others to form the 25Gb Ethernet Consortium in order to push ahead with 25/50GbE as a standard, despite the IEEE's initial short-sighted rejection, now seems like an enlightened decision, not only because of the IEEE's ultimate change of heart, but even more so because of the performance and efficiency gains that 25/50GbE brings to data centers.

Round 'Em Up! OpenStack Austin 2016 Had Something For Everybody

If you were as lucky as we were, you had the opportunity to attend OpenStack Summit 2016 in Austin, Texas this week. The event, which saw 7,500 attendees, sits at the crux of converging HPC, scientific computing, and the cloud. We saw several market segments in which OpenStack is experiencing the most significant traction, including academic/research for scientific computing, telco/NFV, cloud service providers, and large enterprises for both cloud and traditional enterprise workloads. Mellanox is a leader in each of these areas and brings advanced technologies and expertise to help you get the most out of your OpenStack deployments. With our heritage in high-performance networking, spanning both InfiniBand and Ethernet solutions, Mellanox remains at the center of this convergence. This was evident in the crowd standing ten deep at times in front of the Mellanox booth; we were constantly swamped with the curious, solution seekers, technology lovers and old friends.


To further cement our position in the space and celebrate all things OpenStack, we took several giant leaps forward this week, including two major OpenStack announcements and several sessions at the Summit.

Partnering with the University of Cambridge
Due to unprecedented volumes of data and the need to provide quick and secure access to computing and storage resources, a transformation is taking place in the way Research Computing Services are delivered. This is why UoC selected our end-to-end Ethernet interconnect solution, including Spectrum SN2700 Ethernet switches, ConnectX-4 Lx NICs and LinkX cables, for its OpenStack-based scientific research cloud. This expands our existing InfiniBand footprint and empowers UoC to develop an architecture that will lay a foundation for Research Computing.

Powering Europe’s First Multi-Regional OpenStack-Based Public Cloud
One of our customers, Enter, has been building out their OpenStack cloud and adopting open software. In fact, Enter Cloud Suite is Europe’s first multi-regional OpenStack-based cloud, and we’re thrilled to announce that Enter selected our Open Composable Networks as the Ethernet network fabric for its Infrastructure-as-a-Service cloud offering.

Bringing OpenStack Hot Topics to Life
We found that OpenStack storage was a trending topic at the show and are proud to provide great options for this in the form of Ceph, Cinder, and Nexenta. No matter which option you choose, Mellanox interconnects deliver the best OpenStack storage integration, performance, and efficiency.

Another area where we saw traction was Ethernet switch solutions. The industry is currently experiencing strong demand for integration of converged/hyperconverged systems with the network (NICs and switches). NEO is perfectly positioned for this challenge, making the network transparent through an enhanced REST API and plugins for OpenStack and other management platforms.
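
As a rough illustration of what driving the network through a REST API from an orchestration layer can look like, here is a minimal Python sketch; the base URL, credentials and endpoint path are hypothetical placeholders, not the documented NEO API.

```python
# Minimal sketch of calling a network-management REST API from an
# orchestration script. The base URL, credentials and endpoint path are
# hypothetical placeholders, not the documented Mellanox NEO API.
import requests

NEO_BASE_URL = "https://neo.example.local/api"   # placeholder address
session = requests.Session()
session.auth = ("admin", "password")             # placeholder credentials

def list_switch_ports(switch_id):
    """Fetch port state for a switch through a (hypothetical) REST endpoint."""
    resp = session.get(f"{NEO_BASE_URL}/switches/{switch_id}/ports", timeout=10)
    resp.raise_for_status()
    return resp.json()

# An OpenStack plugin or hyperconverged-system manager could call a helper
# like this to read network state without touching switch CLIs directly.
```
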

An OpenStack-based cloud, like any other cloud, needs a fair switch, and Mellanox Spectrum is well positioned for the task, with customers realizing the enduring value of Spectrum.

Finally, Mellanox gave three highly successful talks at the show. If you missed any of them, you can view them here:
Mellanox Open Composable Networks
OpenStack Lightning Talk
Chasing the Rainbow: NCI’s Pursuit of High Performance OpenStack Cloud

A Look At The Latest Omni-Path Claims

Once again, the temperature kicked up another few degrees in the interconnect debate with HPC Wire's coverage of information released by Intel on the growth of the Omni-Path Architecture (OPA). Intel, the company behind OPA, claims to have been seeing steady market traction. We have always expected Intel to win some systems, just as QLogic did in the past, or even Myricom years back; however, as I read over the article in detail, I couldn't help but take issue with some of its points.

On Market Traction

Intel has seen continued delays in Omni-Path’s production release. We are not aware of any company that can buy any OPA offering in the channel, and OEMs have not released anything.

In the article, a number of public wins are referenced, including the National Nuclear Security Administration's Tri-Labs (the Capacity Technology Systems (CTS-1) program) and the Pittsburgh Supercomputing Center. The latter was built with non-production parts because they could not delay any further, and we have heard from sources that performance is lacking.

The specific Department of Energy deal with the NNSA is part of the commodity track of the DoE labs, which is a set of small systems used for commodity work. It does not cover the DoE leadership systems, and we know that Lawrence Livermore National Laboratory decided to use InfiniBand for its next leadership system under the CORAL project. The DoE did grant the previous commodity deal to QLogic TrueScale a few years ago, and QLogic made the same noise we are hearing today – that it was allegedly gaining momentum over Mellanox.

Additionally, the CTS program (formerly TLCC) enables a second tier of companies and helps the labs maintain multiple technology choices. The program results in small-scale systems that the labs use for basic work, not for their major, high-scale applications. The previous TLCC contract was awarded to Appro and QLogic, and the current one to Penguin Computing and Intel OPA.

On A Hybrid Approach

Omni-Path is based on the same old technology, "InfiniPath" from PathScale, which was later bought and marketed by QLogic under the name "TrueScale." As with QLogic's TrueScale, we believe any description of Omni-Path as a "hybrid" between off-loading and on-loading is likely not supported by the facts. Read more about it in my latest post for HPC Wire. You can see the system performance difference in various HPC application cases, such as WIEN2k, Quantum ESPRESSO, and LS-DYNA.

On Performance

Intel chose to highlight message rate performance, stating “Compute data coming out of MPI tends to be very high message rate, relatively small size for each message, and highly latency sensitive. There we do use an on-load method because we found it to be the best way to move data. We keep in memory all of the addressing information for every node, core, and process running that requires this communications.” While previously Intel claimed 160M messages per second with OPA, they recently admitted it is closer to 79-100M. Mellanox delivers a superior solution with 150M messages per second.

Finally, as of today, Intel has not yet provided application performance benchmarks for OPA that support the details of the article or offer substance to its claims regarding performance versus Mellanox's InfiniBand. We have a number of case studies proving the performance of InfiniBand.

We look forward to seeing what Intel comes out with next.

OpenStack Summit Austin 2016

The OpenStack Summit is a five-day conference for developers, users, and administrators of OpenStack cloud software. Held every six months, the conference schedule rotates based on the OpenStack software release cycle.  This week, the summit is being held in Austin, Texas at the Austin Convention Center.


The summit started yesterday and we had two successful sessions:

Open Composable Networks: Leverage LEGO Design to Transform Cloud Networking by Kevin Deierling, Mellanox VP Marketing

Kevin talked about a new approach to cloud networking that stemmed from the hyper-scale cloud and web services giants but is being made widely available by Mellanox and our cloud ecosystem partners. He shared real-world deployments by OpenStack customers such as Cambridge, Enter and NCI, and described the LEGO parts they have used, such as Mellanox NEO and our end-to-end 25/50/100G Ethernet and InfiniBand intelligent interconnect.

Lightning Talk by Moshe Levi, SW Cloud Manager, on Using a Device Emulator to Enhance CI

Moshe talked about Mellanox SimX and explained how to reduce the number of physical servers and eliminate the physical device dependency in CI.

We invite you to visit Mellanox’s booth (D20) and see the 25/50/100G Cloud Solution based on Spectrum, ConnectX-4 and Mellanox NEO for Network Automation. Make sure to stop by and talk with us!  Here are some photos from yesterday’s sessions along with the Mellanox booth.


Mellanox and NexentaEdge Crank Up OpenStack Storage with 25GbE!

Mellanox and NexentaEdge High Performance Scale-Out Block & Object Storage Deliver Line-Rate Performance on 25Gb/s and 50Gb/s Fabrics.

This week at the OpenStack Summit in Austin, we announced that Mellanox end-to-end Ethernet solutions and the NexentaEdge high performance scale-out block and object storage are being deployed by Cambridge University for their OpenStack cloud.

Software-Defined Storage (SDS) is a key ingredient of OpenStack cloud platforms, and Mellanox networking solutions, together with Nexenta storage, are the key to achieving efficient and cost-effective deployments. Software-Defined Storage fundamentally breaks the legacy storage model that requires a separate Storage Area Network (SAN) interconnect and instead converges storage onto a single integrated network.

NexentaEdge block and object storage is designed for any petabyte-scale, OpenStack- or container-based cloud and is being deployed to support Cambridge's OpenStack research cloud. The Nexenta OpenStack solution supports Mellanox Ethernet solutions from 10 up to 100 Gigabit per second.

NexentaEdge is a ground-breaking high performance scale-out block and object SDS storage platform for OpenStack environments. NexentaEdge is the first SDS offering for OpenStack to be specifically designed for high-performance block services with enterprise grade data integrity and storage services. Particularly important in the context of all-flash scale-out solutions, NexentaEdge provides always-on cluster-wide inline deduplication and compression, enabling extremely cost-efficient high performance all-flash storage for OpenStack clouds.

Over the last couple of weeks, Mellanox and Nexenta worked to verify our joint solution's ability to linearly scale cluster performance with the Mellanox fabric line rate. The testbed comprised three all-flash storage nodes with Micron SSDs and a single block gateway. All four servers in the cluster were connected with Mellanox ConnectX-4 Lx adapters, capable of either 25Gb/s or 50Gb/s Ethernet.

NexentaEdge, configured with Nexenta Block Devices (NBD) on the gateway node, demonstrated 2x higher performance as the Mellanox fabric line rate increased from 25Gb/s to 50Gb/s.


For example, front-end 100% random-write bandwidth (with 128KB I/Os) on the NBD devices scaled from 1.3GB/s with 25Gb/s networking to 2.8GB/s with 50Gb/s networking. Considering the 3x replication factor used for data protection, these front-end numbers correspond to 25Gb/s and 50Gb/s line-rate performance on the interface connecting the gateway server to the three storage nodes in the cluster. While NexentaEdge deduplication and compression were enabled, the dataset used for testing was non-deduplicable and non-compressible in order to maximize network load.
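
As a quick sanity check on these figures, the short sketch below (my own arithmetic; the throughput values are the measured ones quoted above) converts the raw link speeds to GB/s and puts the measured front-end write bandwidth next to them.

```python
# Quick unit conversion to put the measured numbers above in context.
# The front-end write figures are the ones quoted in the post; the
# conversion ignores Ethernet/IP protocol overhead.

def raw_gbytes_per_sec(link_gbps):
    """Raw link capacity in gigabytes per second."""
    return link_gbps / 8.0

measured_frontend_gbs = {25: 1.3, 50: 2.8}  # GB/s of 128KB random writes
for gbps, frontend in measured_frontend_gbs.items():
    print(f"{gbps}Gb/s fabric: {raw_gbytes_per_sec(gbps):.2f} GB/s raw capacity, "
          f"{frontend} GB/s front-end writes measured")
# The jump from 1.3 GB/s to 2.8 GB/s as the fabric speed doubles shows the
# storage software tracking the network line rate rather than hitting an
# internal bottleneck.
```
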

Building and deploying an OpenStack cloud is made easier with reliable components that have been tested together. Mellanox delivers predictable end-to-end Ethernet networks that don't lose packets, as detailed in the Tolly Report. NexentaEdge takes full advantage of the underlying physical infrastructure to enable high-performance OpenStack cloud platforms that deliver both CapEx and OpEx savings, as well as extreme performance scaling compared to legacy SAN-based storage offerings.

Content at the Speed of Your Imagination

In the past, one 10GbE port was enough to support the bandwidth needs of 4K DPX, three ports could drive 8K formats, and four ports could drive 4K Full-EXR. However, the recent evolution in the media and entertainment industry presented this week at the NAB Show showcases the need for higher resolution. This trend continues to drive the need for networking technologies that can stream more bits per second in real time. Moreover, these port counts can drive only one stream of data, while new film and video productions today include special effects that require support for multiple simultaneous streams in real time. This creates a major "data size" challenge for studios and post-production shops, as 10GbE interconnects have been maxed out and can no longer provide an efficient solution that can handle the ever-growing workload demands.

This is why IT managers should consider using the new emerging Ethernet speeds of 25, 50, and 100GbE. These speeds have been established as the new industry standard, driven by a consortium of companies that includes Google, Microsoft, Mellanox, Arista, and Broadcom, and recently adopted by the IEEE as well. A good example of the efficiency that higher speed enables is the Mellanox ConnectX-4 100GbE NIC that has been deployed in Netflix's new data center. This solution now provides the highest-quality viewing experience for as many as 100K concurrent streams out of a single server. (Mellanox also published a CDN reference architecture based on our end-to-end 25/50/100GbE solutions, including the Mellanox Spectrum™ switch, the ConnectX®-4 and ConnectX-4 Lx NICs, and LinkX™ copper and optical cables.)

 

 

[Table: Bandwidth required for uncompressed 4K/8K video streams]
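
Since the original table is not reproduced here, the hedged sketch below shows how such per-stream bandwidth figures can be estimated from resolution, bit depth and frame rate; the formats and parameters in it are illustrative placeholders, not the exact entries of the table.

```python
# Hedged sketch of estimating per-stream bandwidth for uncompressed video
# from resolution, bit depth and frame rate. The formats below are
# illustrative placeholders, not the exact entries of the original table.

def stream_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed bandwidth of a single video stream in Gb/s."""
    return width * height * bits_per_pixel * fps / 1e9

formats = {
    "4K DCI, 10-bit RGB, 24 fps": (4096, 2160, 30, 24),
    "4K DCI, 16-bit RGB (EXR-like), 24 fps": (4096, 2160, 48, 24),
    "8K UHD, 10-bit RGB, 24 fps": (7680, 4320, 30, 24),
}
for name, (w, h, bpp, fps) in formats.items():
    print(f"{name}: ~{stream_gbps(w, h, bpp, fps):.1f} Gb/s per stream")
# A single ~6 Gb/s 4K stream fits comfortably in one 10GbE port, but higher
# bit depths, higher frame rates, 8K and multiple simultaneous streams
# quickly exceed it.
```
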

Another important parameter that IT managers must take into account when building media and entertainment data centers is the latency involved in streaming the data. Running multiple streams over the heavy, CPU-hungry TCP/IP protocol consumes a significant percentage of CPU cycles just to run the data communication protocol rather than the workload itself, which reduces the effective bandwidth that the real workload can use.

This is why IT managers should consider deploying RoCE (RDMA over Converged Ethernet). Remote Direct Memory Access (RDMA) makes data transfers more efficient and enables fast data movement between servers and storage without involving the server's CPU. Throughput is increased, latency reduced, and CPU power freed up for video editing, compositing, and rendering work. RDMA technology is already widely used for efficient data transfer in render farms and in large cloud deployments such as Microsoft Azure, and can accelerate video editing, encoding/transcoding, and playback.

 


RoCE utilizes advances in Ethernet to enable more efficient implementations of RDMA over Ethernet, allowing widespread deployment of RDMA technologies in mainstream data center applications. RoCE-based network management is the same as for any Ethernet network, eliminating the need for IT managers to learn new technologies. Using RoCE can result in 2X higher efficiency, since it doubles the number of streams compared to running TCP over the same Ethernet network (source: ATTO Technology).

 

[Table: The impact of RoCE vs. TCP at 40Gb/s on the number of supported video streams]

 

Designing data centers that can serve the needs of the media and entertainment industry has traditionally been a complicated task, one that has often led to slow streams and bottlenecks in pure storage performance and, in many cases, has required very expensive systems that delivered lower-than-expected efficiency gains. Using high-performance networking that supports higher bandwidth and low latency guarantees hassle-free operation and enables extreme scalability and higher ROI for any industry-standard resolution and any content imaginable.

Affirmed Networks Partners with Mellanox to Further Boost NFV Deployment Efficiency


I am very excited that, after engaging with Affirmed Networks for extensive integration and certification testing, Mellanox is now officially a partner of this leading virtual Evolved Packet Core (EPC) provider and key supplier to AT&T's Domain 2.0 initiative. Through this mutually beneficial partnership, Mellanox aims to boost Affirmed Mobile Content Cloud (MCC) Virtualized Network Function (VNF) deployment efficiency with our high-performance server interconnect solutions.

Affirmed Networks is a leading telecommunications technology supplier with revolutionary Network Function Virtualization (NFV) solutions for EPC, vCPE, Gi LAN and SFC Controller. Affirmed's virtualized MCC software has been designed to run on virtualized high-volume servers. However, when server I/O capacity becomes constrained, application performance may suffer, resulting in under-utilized CPU resources and an excessive server footprint. Mellanox's high-speed server interconnect solution enhances utilization of infrastructure resources for Affirmed's virtualized product offerings, enabling optimal application performance as well as a reduced space and energy footprint for vEPC deployments. Affirmed Networks and Mellanox are both HPE OpenNFV ecosystem partners.

Hear Ron Parker, Senior Director of System Architecture at Affirmed Networks, talk about this partnership and how Mellanox can help supercharge NFV deployment.