All posts by Cecelia Taylor

About Cecelia Taylor

Cecelia has served as the Sr. Social Media Manager for Mellanox since 2013. She previously worked at Cisco & ZipRealty managing social media marketing, publishing and metrics. Prior to her career in social media, she worked in audience development marketing for B2B publishing. She has a BA from Mills College and resides in the SF East Bay. Follow her on Twitter: @CeceliaTaylor

Mellanox Software 3D Hackathon

After the overwhelming success of Hackathon 2014 this past January, Mellanox Israel now presents the 3D Hackathon: Develop, Debug, Deploy. This contest is designed to encourage innovation and teamwork while introducing new 3D software technologies and features on a very quick turnaround.

Mellanox Israel employees were invited to submit proposals for new software projects related to existing Mellanox technologies and to form teams of up to three people to develop them. More than 20 unique software proposals were submitted. The steering committee evaluated the submissions and selected 16 proposals for the final competition. All teams were asked to present working demos, and the top three teams were awarded prizes.

Continue reading

Recap: VMworld 2014 – San Francisco, CA

It was a busy time last week in San Francisco!  During VMworld 2014, we announced a collaboration with VMware and Micron to enable highly efficient deployments of Virtual Desktop Infrastructure (VDI). The VDI deployment will be a combination of Mellanox’s 10GbE interconnect, VMware’s Virtual SAN (VSAN), and Micron’s SSDs. The joint solution creates a scalable infrastructure while minimizing the cost per virtual desktop user. The new solution will consist of three servers running VMware vSphere and Virtual SAN, each with one Mellanox ConnectX-3 10GbE NIC, two Micron 1.4TB P420m PCIe SSDs, and six HDDs.

Continue reading

Congratulations! Yarden Gerbi Wins Silver, Grand Prix – Dusseldorf, Germany

Congratulations go out to Yarden Gerbi, who recently took home the silver medal at the Judo Grand Prix held in Dusseldorf, Germany. The competition brought together 370 athletes from 55 countries. Gerbi secured victories over competitors from Mongolia and Austria on her way to the semi-finals. She is currently training in preparation for the 2016 Rio Olympic Games.

Continue reading

Turn Your Cloud into a Mega-Cloud

Cloud computing was developed specifically to overcome issues of localization and limitations of power and physical space. Yet many data center facilities are in danger of running out of power, cooling, or physical space.

Mellanox offers an alternative and cost-efficient solution. Mellanox’s new MetroX® long-haul switch system makes it possible to move from the paradigm of multiple, disconnected data centers to a single multi-point meshed mega-cloud. In other words, remote data center sites can now be localized through long-haul connectivity, providing benefits such as faster compute, higher-volume data transfer, and improved business continuity. MetroX allows for more applications and more cloud users, leading to faster product development, quicker backup, and more immediate disaster recovery.

The more physical data centers you join using MetroX, the more you scale your company’s cloud into a mega-cloud. You can continue to scale your cloud by adding data centers at opportune moments and places, where real estate is inexpensive and power is at its lowest rates, without concern for distance from existing data centers and without fear that there will be a degradation of performance.

Moreover, you can take multiple distinct clouds, whether private or public, and use MetroX to combine them into a single mega-cloud. This enables you to scale your cloud offering without adding significant infrastructure, and it lets your cloud users access more applications and conduct more wide-ranging research while maintaining the same level of performance.

Continue reading

Silicon Photonics: Using the Right Technology for Data Center Applications

At DesignCon last week, I followed a speaker who ended his presentation with a quote from Mark Twain: “The reports of my death have been greatly exaggerated!”  The speaker was talking about copper cabling on a panel entitled “Optical Systems Technologies and Integration.”  He showed some nice charts on high-speed signaling over copper, making the point that copper will be able to scale to speeds of 100 Gb/s.

As the next speaker on the panel, I assured him that those of us who come from optical communications are not “trying to kill copper.” Rather, the challenge for companies like Mellanox, an end-to-end interconnect solutions company for InfiniBand and Ethernet applications, is to provide the “right technology for the application.”  I spoke about the constraints of 100 Gb/s pipes and our solutions.

Continue reading

Mellanox Technologies Delivers the World’s First 40GbE NIC for OCP Servers

Last year, the Open Compute Project (OCP) launched a new network project focused on developing operating-system-agnostic switches to address the need for a highly efficient and cost-effective open switch platform. Mellanox Technologies collaborated with Cumulus Networks and the OCP community to define unified and open drivers for the OCP switch hardware platforms. As a result, any software provider can now deliver a networking operating system to the open switch specifications on top of the Open Network Install Environment (ONIE) boot loader.

At the upcoming OCP Summit, Mellanox will present recent technical advances such as loading Net-OS on an x86 system with ONIE, OCP platform control using Linux sysfs calls, and a full L2 and L3 Open Ethernet Switch API, and will also demonstrate the Open SwitchX SDK. To support this, Mellanox developed the SX1024-OCP, a SwitchX®-2-based ToR switch that supports 48 10GbE SFP+ ports and up to 12 40GbE QSFP ports.
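
To give a flavor of what “platform control using Linux sysfs calls” looks like in practice, here is a minimal sketch that reads a single sysfs attribute from user space. The port name below is a made-up placeholder, not an actual Mellanox driver path, so treat it as an illustration of the general pattern rather than a documented interface:

```c
/* Minimal sketch: read one attribute exposed through sysfs.
 * "sw1p1" is a hypothetical switch-port name used purely for illustration. */
#include <stdio.h>

int main(void)
{
    const char *attr = "/sys/class/net/sw1p1/operstate";  /* generic netdev attribute */
    char buf[64];

    FILE *f = fopen(attr, "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("%s = %s", attr, buf);  /* sysfs values are short, newline-terminated strings */
    fclose(f);
    return 0;
}
```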

The SX1024-OCP enables non-blocking connectivity within the OCP’s Open Rack and 1.92Tb/s of throughput. Alternatively, it can enable 60 10GbE server ports when using QSFP+ to SFP+ breakout cables, increasing rack efficiency for less bandwidth-demanding applications.

Mellanox also introduced the SX1036-OCP, a SwitchX-2-based spine switch that supports 36 40GbE QSFP ports. The SX1036-OCP enables non-blocking connectivity between racks. These open-source switches are the first on the market to support ONIE on x86 dual-core processors.
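
For the curious, the quoted capacity figures are easy to sanity-check with back-of-the-envelope arithmetic. The sketch below is my own tally, summing the port speeds and doubling for full duplex while ignoring encoding overhead:

```c
/* Back-of-the-envelope check of the switch capacities quoted above
 * (own arithmetic: sum of port speeds, doubled for full duplex). */
#include <stdio.h>

int main(void)
{
    int sx1024 = 48 * 10 + 12 * 40;  /* SX1024-OCP: 48x 10GbE + 12x 40GbE = 960 Gb/s per direction */
    int sx1036 = 36 * 40;            /* SX1036-OCP: 36x 40GbE = 1440 Gb/s per direction */

    printf("SX1024-OCP: %d Gb/s per direction, %.2f Tb/s full duplex\n",
           sx1024, 2 * sx1024 / 1000.0);
    printf("SX1036-OCP: %d Gb/s per direction, %.2f Tb/s full duplex\n",
           sx1036, 2 * sx1036 / 1000.0);
    return 0;
}
```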

Continue reading

Process Affinity: Hop on the Bus, Gus!

This is an excerpt from a post published today on the Cisco HPC Networking blog by Joshua Ladd of Mellanox:

At some point in the process of pondering this blog post I noticed that my subconscious had, much to my annoyance, registered a snippet of the chorus to Paul Simon’s timeless classic “50 Ways to Leave Your Lover” with my brain’s internal progress thread. Seemingly, endlessly repeating, billions of times over (well, at least ten times over) the catchy hook that offers one, of presumably 50, possible ways to leave one’s lover – “Hop on the bus, Gus.” Assuming Gus does indeed wish to extricate himself from a passionate predicament, this seems a reasonable suggestion. But, supposing Gus has a really jilted lover; his response to Mr. Simon’s exhortation might be “Just how many hops to that damn bus, Paul?”

HPC practitioners may find themselves asking a similar question, though in a somewhat less contentious context (pun intended). Given the complexity of modern HPC systems with their increasingly stratified memory subsystems and myriad ways of interconnecting memory, networking, computing, and storage components such as NUMA nodes, computational accelerators, host channel adapters, NICs, VICs, JBODs, Target Channel Adapters, etc., reasoning about process placement has become a much more complex task with much larger performance implications between the “best” and the “worst” placement policies. To compound this complexity, the “best” and “worst” placement necessarily depends upon the specific application instance and its communication and I/O pattern. Indeed, an in-depth discussion of Open MPI’s sophisticated process affinity system is far beyond the scope of this humble blog post, and I refer the interested reader to the deep-dive talk Jeff Squyres (Cisco) gave at Euro MPI on this topic.

In this posting I’ll only consider the problem framed by Gus’ hypothetical query: how can one map MPI processes as close to an I/O device as possible, thereby minimizing data movement, or ‘hops’, through the intranode interconnect for those processes? This is a very reasonable request, but the ability to automate this process has remained mostly absent in modern HPC middleware. Fortunately, powerful tools such as “hwloc” are available to help us with just such a task. Hwloc usually manipulates processing units and memory, but it can also discover I/O devices and report their locality. In simplest terms, this can be leveraged to place I/O-intensive applications on cores near the I/O devices they use. Whereas Gus probably didn’t have the luxury of choosing his locality so as to minimize the number of hops necessary to get on his bus, Open MPI, with the help of hwloc, now provides a mechanism for mapping MPI processes to NUMA nodes “closest” to an I/O device.
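
As a rough illustration of the hwloc-based idea described above, here is a minimal sketch that locates an RDMA device in the topology and binds the calling process to the cores nearest to it. It assumes hwloc 2.x, and the device name "mlx4_0" is simply an example chosen for the sketch:

```c
/* Minimal sketch: bind the current process near an I/O device using hwloc.
 * Assumes hwloc >= 2.0; "mlx4_0" is an example device name, not prescribed here. */
#include <stdio.h>
#include <string.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    /* Keep I/O objects (OS devices such as RDMA HCAs) in the discovered topology. */
    hwloc_topology_set_io_types_filter(topo, HWLOC_TYPE_FILTER_KEEP_ALL);
    hwloc_topology_load(topo);

    /* Walk the OS devices looking for the one we care about. */
    hwloc_obj_t osdev = NULL;
    while ((osdev = hwloc_get_next_osdev(topo, osdev)) != NULL) {
        if (osdev->name && strcmp(osdev->name, "mlx4_0") == 0)
            break;
    }

    if (osdev) {
        /* The nearest non-I/O ancestor (e.g. the local NUMA domain's package) has a cpuset. */
        hwloc_obj_t anc = hwloc_get_non_io_ancestor_obj(topo, osdev);
        if (anc && anc->cpuset &&
            hwloc_set_cpubind(topo, anc->cpuset, HWLOC_CPUBIND_PROCESS) == 0)
            printf("Bound process to cores near %s\n", osdev->name);
    } else {
        fprintf(stderr, "Device not found in topology\n");
    }

    hwloc_topology_destroy(topo);
    return 0;
}
```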

Read the full text of the blog here.

Joshua Ladd is an Open MPI developer and HPC algorithms engineer at Mellanox Technologies. His primary interests reside in algorithm design and development for extreme-scale high performance computing systems. Prior to joining Mellanox Technologies, Josh was a staff research scientist at Oak Ridge National Laboratory, where he was engaged in R&D on high-performance communication middleware. Josh holds a B.S., M.S., and Ph.D., all in applied mathematics.

Mellanox Video Recap of Supercomputing Conference 2013 – Denver

We want to thank everyone for joining us at SC13 in Denver, Colorado last month. We hope you had a chance to become more familiar with our end-to-end interconnect solutions for HPC.

Check out the videos of the presentations given during the Mellanox Evening Event, held on November 20, 2013 at the Sheraton Denver Downtown Hotel. The event was keynoted by Eyal Waldman, President and CEO of Mellanox Technologies:

Continue reading

Symantec’s Clustered File Storage over InfiniBand

Last week (on December 9th, 2013), Symantec announced the general availability (GA) of its clustered file storage (CFS) solution. The new solution enables customers to access mission-critical data and applications 400% faster than traditional Storage Area Networks (SANs) at 60% of the cost.

Faster is cheaper! Sounds like magic! How are they doing it?

To understand the “magic,” it is important to understand the advantages that SSDs combined with a high-performance interconnect bring to modern scale-out (or clustered) storage systems. Until now, SAN-based storage has typically been used to increase performance and provide data availability for multiple applications and clustered systems. However, with the demands of recent high-performance applications, SAN vendors are trying to add SSDs into the storage array itself to provide higher bandwidth and lower-latency response.

Since SSDs offer incredibly high IOPS and bandwidth, it is important to use the right interconnect technology and to avoid bottlenecks in access to storage. Older fabrics like Fibre Channel (FC) cannot cope with the demand for faster pipes, as 8Gb/s (or even 16Gb/s) of bandwidth is not enough to satisfy application requirements. While 40Gb/s Ethernet may look like an alternative, InfiniBand (IB) currently supports up to 56Gb/s, with a roadmap to 100Gb/s next year.
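
To put those numbers side by side, here is a simple back-of-the-envelope conversion of the raw link rates mentioned above into GB/s (my own arithmetic: bit rate divided by eight, ignoring encoding and protocol overhead):

```c
/* Rough comparison of the link speeds discussed above, converted to GB/s.
 * Raw bit rate / 8; encoding and protocol overhead are deliberately ignored. */
#include <stdio.h>

int main(void)
{
    const char  *names[] = { "8Gb/s FC", "16Gb/s FC", "40Gb/s Ethernet",
                             "56Gb/s InfiniBand", "100Gb/s (roadmap)" };
    const double gbps[]  = { 8, 16, 40, 56, 100 };

    for (int i = 0; i < 5; i++)
        printf("%-20s ~%5.1f GB/s of raw bandwidth\n", names[i], gbps[i] / 8.0);
    return 0;
}
```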

Continue reading