CROC is Russia’s leading IT infrastructure company and one of the country’s top 200 private businesses. CROC has become the first public cloud service provider in Russia to adopt InfiniBand—a standard for high-speed data transfer between servers and storage. The migration to the new network infrastructure took approximately one month and delivered up to a ten-fold increase in cloud service performance.
It’s almost time for SC14 in New Orleans, LA (November 17-20, 2014)! We have many exciting things planned for this annual conference.
Stop by and visit Mellanox Technologies (booth #2939) to see the latest in our industry-leading FDR 56Gb/s InfiniBand and 40/56GbE solutions. Make sure to meet Mellanox HPC experts in person:
- Michael Kagan, CTO
- Dror Goldenberg, VP/Software Architecture
- Gilad Shainer, VP/Marketing
- Scot Schultz, Director – HPC & Technical Computing
- Dr. Richard Graham, Senior Solutions Architect – Software
In our theater area, we will host presentations from leading server and storage OEMs, ISVs, end users, and academia. These presenters will provide insight into the benefits and performance improvements achieved when using low-latency FDR 56Gb/s InfiniBand I/O technology.
After the overwhelming success of Hackathon 2014 this past January, Mellanox Israel now presents 3D Hackathon: Develop, Debug, Deploy. This contest is designed to encourage innovation and teamwork while introducing new software technologies and features within a very short turnaround time.
Mellanox Israel employees were invited to submit proposals for new software projects related to existing Mellanox technologies and to form teams of up to three people to develop them. More than 20 unique software proposals were submitted. The steering committee evaluated the entries and selected 16 proposals for the final competition. All teams were asked to present working demos, and the top three teams were awarded prizes.
Mellanox congratulates Yarden Gerbi on winning the silver medal at the recent World Judo Championships, held August 23-30, 2014 in Chelyabinsk, Russia. Gerbi competes in the under 63 kg (139 lbs.) division.
It was a busy time last week in San Francisco! During VMworld 2014, we announced a collaboration with VMware and Micron to enable highly efficient deployments of Virtual Desktop Infrastructure. The VDI deployment will be a combination of Mellanox’s 10GbE interconnect, VMware’s Virtual SAN (VSAN) and Micron’s SSDs. The joint solution creates a scalable infrastructure while minimizing the cost per virtual desktop user. The new solution will consist of three servers running VMware vSphere and Virtual SAN each with one Mellanox ConnectX-3 10GbE NIC, two Micron 1.4TB P420m PCIe SSDs and six HDDs.
Congratulations go out to Yarden Gerbi, who recently took home the silver medal at the Judo Grand Prix, held in Dusseldorf, Germany. This competition brought together 370 athletes from 55 countries. Gerbi secured victories over competitors from Mongolia and Austria and moved on to the semi-finals. Gerbi is currently training in preparation for the 2016 Rio Olympic Games.
Cloud computing was developed specifically to overcome issues of localization and limitations of power and physical space. Yet many data center facilities are in danger of running out of power, cooling, or physical space.
Mellanox offers an alternative and cost-efficient solution. Mellanox’s new MetroX® long-haul switch system makes it possible to move from the paradigm of multiple, disconnected data centers to a single multi-point meshed mega-cloud. In other words, remote data center sites can now be localized through long-haul connectivity, providing benefits such as faster compute, higher volume data transfer, and improved business continuity. MetroX provides the ability for more applications and more cloud users, leading to faster product development, quicker backup, and more immediate disaster recovery.
The more physical data centers you join using MetroX, the more you scale your company’s cloud into a mega-cloud. You can continue to scale your cloud by adding data centers at opportune moments and places, where real estate is inexpensive and power rates are at their lowest, without concern for distance from existing data centers and without fear of performance degradation.
Moreover, you can take multiple distinct clouds, whether private or public, and use MetroX to combine them into a single mega-cloud. This enables you to scale your cloud offering without adding significant infrastructure, and it enables your cloud users to access more applications and to conduct more wide-ranging research while maintaining the same level of performance.
At DesignCon last week, I followed a speaker who ended his presentation with a quote from Mark Twain, “the reports of my death have been greatly exaggerated!” The speaker was talking about copper cabling on a panel entitled, “Optical Systems Technologies and Integration.” He showed some nice charts on high speed signaling over copper, making the point that copper will be able to scale to speeds of 100 Gb/s.
As next speaker on the panel, I assured him that those of us who come from optical communications are not “trying to kill copper.” Rather, the challenge for companies like Mellanox, an end-to-end interconnect solutions company for InfiniBand and Ethernet applications, is to provide the “right technology for the application.” I spoke about the constraints of 100 Gb/s pipes and our solutions.
Last year, the Open Compute Project (OCP) launched a new network project focused on developing operating-system-agnostic switches to address the need for a highly efficient and cost-effective open switch platform. Mellanox Technologies collaborated with Cumulus Networks and the OCP community to define unified and open drivers for the OCP switch hardware platforms. As a result, any software provider can now deliver a networking operating system built to the open switch specifications on top of the Open Network Install Environment (ONIE) boot loader.
At the upcoming OCP Summit, Mellanox will present recent technical advances such as loading Net-OS on an x86 system with ONIE, OCP platform control using Linux sysfs calls, and a full L2 and L3 Open Ethernet Switch API, and will also demonstrate the Open SwitchX SDK. To support this, Mellanox developed the SX1024-OCP, a SwitchX®-2-based TOR switch which supports 48 10GbE SFP+ ports and up to 12 40GbE QSFP ports.
The SX1024-OCP enables non-blocking connectivity within the OCP’s Open Rack and 1.92 Tb/s throughput. Alternatively, it can enable 60 10GbE server ports when using QSFP+ to SFP+ breakout cables, increasing rack efficiency for less bandwidth-demanding applications.
Mellanox also introduced SX1036-OCP, a SwitchX-2-based spine switch, which supports 36 40GbE QSFP ports. The SX1036-OCP enables non-blocking connectivity between the racks. These open source switches are the first switches on the market to support ONIE over x86 dual core processors.
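The OCP platform control mentioned above relies on the standard Linux sysfs interface, through which platform drivers expose sensors and controls as plain files. As a rough illustration (not Mellanox’s actual driver code), the sketch below reads temperature sensors through the generic Linux hwmon sysfs class; the exact files exposed depend on the platform driver, so the paths here are examples.

```python
# Illustrative sketch of platform monitoring via Linux sysfs (hwmon class).
# The hwmon ABI is standard Linux; which sensors appear depends on the driver.
import glob
import os


def millideg_to_celsius(raw: str) -> float:
    """hwmon temp*_input files report millidegrees Celsius as plain integers."""
    return int(raw.strip()) / 1000.0


def read_platform_temps(hwmon_root: str = "/sys/class/hwmon") -> dict:
    """Collect temperature readings from every hwmon sensor found."""
    temps = {}
    for path in glob.glob(os.path.join(hwmon_root, "hwmon*/temp*_input")):
        try:
            with open(path) as f:
                temps[path] = millideg_to_celsius(f.read())
        except OSError:
            continue  # a sensor may disappear or be unreadable
    return temps


if __name__ == "__main__":
    print(read_platform_temps())
```

Writes work the same way in reverse: echoing a value into a writable sysfs file (for example, a fan PWM control) drives the hardware, which is what makes an open, OS-agnostic switch platform scriptable from plain Linux userspace.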
This is an excerpt of a post published today on the Cisco HPC Networking blog by Joshua Ladd, Mellanox:
At some point in the process of pondering this blog post I noticed that my subconscious had, much to my annoyance, registered a snippet of the chorus to Paul Simon’s timeless classic “50 Ways to Leave Your Lover” with my brain’s internal progress thread. Seemingly, endlessly repeating, billions of times over (well, at least ten times over) the catchy hook that offers one, of presumably 50, possible ways to leave one’s lover – “Hop on the bus, Gus.” Assuming Gus does indeed wish to extricate himself from a passionate predicament, this seems a reasonable suggestion. But, supposing Gus has a really jilted lover; his response to Mr. Simon’s exhortation might be “Just how many hops to that damn bus, Paul?”
HPC practitioners may find themselves asking a similar question, though in a somewhat less contentious context (pun intended). Given the complexity of modern HPC systems with their increasingly stratified memory subsystems and myriad ways of interconnecting memory, networking, computing, and storage components such as NUMA nodes, computational accelerators, host channel adapters, NICs, VICs, JBODs, Target Channel Adapters, etc., reasoning about process placement has become a much more complex task with much larger performance implications between the “best” and the “worst” placement policies. To compound this complexity, the “best” and “worst” placements necessarily depend upon the specific application instance and its communication and I/O pattern. Indeed, an in-depth discussion on Open MPI’s sophisticated process affinity system is far beyond the scope of this humble blog post, and I refer the interested reader to the deep-dive talk Jeff Squyres (Cisco) gave at Euro MPI on this topic.
In this posting I’ll only consider the problem framed by Gus’ hypothetical query: how can one map MPI processes as close to an I/O device as possible, thereby minimizing data movement or ‘hops’ through the intranode interconnect for those processes? This is a very reasonable request, but the ability to automate this process has remained mostly absent in modern HPC middleware. Fortunately, powerful tools such as “hwloc” are available to help us with just such a task. Hwloc usually manipulates processing units and memory, but it can also discover I/O devices and report their locality as well. In simplest terms, this can be leveraged to place I/O-intensive applications on cores near the I/O devices they use. Whereas Gus probably didn’t have the luxury to choose his locality so as to minimize the number of hops necessary to get on his bus, Open MPI, with the help of hwloc, now provides a mechanism for mapping MPI processes to NUMA nodes “closest” to an I/O device.
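Inside Open MPI this is done in C through the hwloc library, but the locality data hwloc consumes on Linux is visible to anyone under sysfs: each PCI device directory advertises its NUMA node and nearby CPUs. Purely as an illustration of that underlying mechanism (not Open MPI’s implementation), here is a minimal sketch; the device path and the use of `os.sched_setaffinity` are illustrative assumptions.

```python
# Illustrative sketch: bind the current process to the CPUs "closest" to a
# PCI I/O device (e.g., an InfiniBand HCA), using the locality information
# the Linux kernel exposes under sysfs. Device paths are hypothetical.
import os


def parse_cpulist(cpulist: str) -> set:
    """Parse a sysfs cpulist such as '0-7,16-23' into a set of CPU ids."""
    cpus = set()
    for part in cpulist.strip().split(","):
        if not part:
            continue
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


def bind_near_device(pci_dir: str) -> None:
    """Pin this process to the CPUs local to a PCI device.

    pci_dir would be something like '/sys/bus/pci/devices/0000:04:00.0'.
    """
    with open(os.path.join(pci_dir, "local_cpulist")) as f:
        local_cpus = parse_cpulist(f.read())
    os.sched_setaffinity(0, local_cpus)  # Linux-only system call


if __name__ == "__main__":
    print(parse_cpulist("0-7,16-23"))
```

An MPI launcher that applies this logic per rank, choosing each rank’s device by proximity, achieves exactly the “fewest hops to the bus” placement discussed above; hwloc generalizes it portably across operating systems and topologies.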
Read the full text of the blog here.
Joshua Ladd is an Open MPI developer & HPC algorithms engineer at Mellanox Technologies. His primary interests reside in algorithm design and development for extreme-scale high performance computing systems. Prior to joining Mellanox Technologies, Josh was a staff research scientist at the Oak Ridge National Lab where he was engaged in R&D on high-performance communication middleware. Josh holds a B.S., M.S., and Ph.D. all in applied mathematics.