All posts by Gerald Lotto

About Gerald Lotto

Jerry joined Mellanox in 2016 as Director of HPC and Technical Computing, bringing more than 30 years of experience in scientific computing. An early adopter of InfiniBand, Jerry built the first HPC teaching cluster in Harvard's Department of Chemistry and Chemical Biology on an InfiniBand backbone. In 2007, he helped create the Harvard Faculty of Arts and Sciences Research Computing group. In an unprecedented collaboration among five universities, industry, and state government, Jerry also helped design the Massachusetts Green High Performance Computing Center in Holyoke, MA, which was completed in November 2012. In mid-2013, Jerry left Harvard University to join RAID, Inc. as Chief Technology Officer, helping companies, universities, and government organizations throughout the United States design, build, integrate, and use HPC and technical computing technologies.

Mellanox and Our Interconnected Planet: Global Energy Company Vestas Drives Wind Energy Solutions with Mellanox InfiniBand Solutions

The cost of clean energy technology continues to decline while the "cost" of legacy fuels skyrockets. The world added record levels of renewable energy capacity in 2016, at an investment level 23 per cent lower than the previous year. This good news comes from new research published by UN Environment, the Frankfurt School-UNEP Collaborating Centre and Bloomberg New Energy Finance (BNEF), which also noted that investment in renewables capacity was roughly double that in fossil fuel generation.

Vestas, a global energy company, has selected Mellanox InfiniBand solutions for its new supercomputer. Leveraging the low-latency performance of Mellanox InfiniBand for efficiency and scalability, the advanced In-Network Computing offloading engines enable Vestas to use global weather simulations to proactively optimize turbine operational parameters and accelerate the predictive analysis of energy demand, maximizing the return from tens of thousands of wind turbines across the world.

Vestas is a global energy company dedicated to wind energy, improving business case certainty and reducing the cost of energy for its customers. With wind turbines installed in 76 countries around the world, Vestas has considerable experience in all key disciplines: engineering, logistics, construction, operations, and service. Every day, Vestas leverages its global experience by continuously improving the performance of its customers' wind power plants. This is accomplished through the monitoring and performance diagnostics of the world's largest fleet of wind turbines. The continuous stream of data from more than 32,000 monitored wind turbines enables Vestas to meticulously plan and carry out service inspections, thereby reducing wind turbine downtime to an absolute minimum. Vestas has installed nearly 60,000 wind turbines on six continents, generating more than 205 million MWh of electricity per year. This represents enough electricity to supply almost 120 million European households, and globally it reduces carbon emissions by more than 110 million tons of CO2.

Commencing several years before wind power plant construction, Vestas utilizes its in-house high performance computing system to simulate the weather conditions at the wind site, as well as predicting the eventual power production of the turbines to find the optimal layout for the turbine array. Efforts such as these make it easier to get a new wind power project financed and to meet regulations, while providing the conditions for maximizing return on investment over the wind power project’s lifetime.

Efficient and scalable simulations are critical to ensure maximum generation of clean energy from the tens of thousands of wind turbines installed worldwide. Vestas also utilizes its high performance computing system to maintain the wind power industry's largest wind data library, giving Vestas unparalleled insight into global wind and weather conditions. The new Mellanox InfiniBand-based supercomputer enables Vestas to achieve its customers' business goals today and in the future.

InfiniBand represents the best interconnect choice for high-performance computing, machine learning and big data systems, as illustrated by the new supercomputer at Vestas. Our advanced In-Network Computing acceleration engines make InfiniBand the most advanced interconnect technology, allowing users to analyze growing amounts of data faster and in the most scalable way, maximizing the return on investment of data centers and applications. We are happy to work with Vestas to enable more efficient green energy, making the world a better place.

The Global Trends in Renewable Energy Investment 2017 report also noted that wind, solar, biomass and waste-to-energy, geothermal, small hydro and marine sources added 138.5 gigawatts to global power capacity in 2016, up almost 9 per cent from the 127.5 gigawatts added the year before. The added generating capacity roughly equates to the output of the world's 16 largest existing power producing facilities combined. In addition, Americans used more renewable energy in 2016 than in the previous year, according to the latest energy flow charts released by Lawrence Livermore National Laboratory. Mellanox is helping to lead the way in affordable, alternative energy with our advanced interconnect technology, technology that is, quite literally, interconnecting our planet.


Coming Clean: Machine Learning Drives Clean Energy Solutions

Machine learning is taking the world by storm. We hear of breakthroughs nearly every day, driven by a wide variety of applications that leverage deep learning techniques, from image and voice recognition to fraud detection and diagnostic medicine. Perhaps the best-known examples are teaching drones to navigate and perform tasks and, of course, modern autonomous cars that can drive themselves safely in traffic. All of these applications have one thing in common: the systems are relatively easy to understand, describe, model, and deploy, and there is a wealth of data with which to train. In contrast, the world of organic chemistry deals with molecular structures, and the connection between a given structure, its many properties, and all of its potential applications and uses is a very complex relationship. Designing new molecules that exhibit a desired set of properties remains the holy grail of chemistry, and one researcher at Harvard University is trying to teach computers how to identify them.

Dr. Alan Aspuru-Guzik, Professor of Chemistry and Chemical Biology, has been exploring the world of high performance and distributed computing for quite some time. The Clean Energy Project originated in his group and took the form of a screen saver designed to help identify materials for the next generation of low cost, high efficiency solar cells and increase the output of this critical renewable energy resource. Imagine solar cells as inexpensive to manufacture as a plastic bag! Screening millions of candidate molecules for favorable excitation energy transfer properties is a daunting task, but Alan's group tapped into the World Community Grid with special code that used the available GPU cycles on gaming systems around the world. They developed a high-throughput virtual discovery and design process to reduce the pool of candidates to those with real potential. To understand why so much processing is necessary, consider the scope of an unstructured space such as molecules with drug-like properties, which has been estimated to be more than 10^52 times larger than the total number of compounds ever synthesized! Once a set of candidates has been isolated, however, rigorous analysis of the energy properties of a single molecule at the needed level of accuracy can take literally billions of CPU-hours.

Today, with the potential of artificial intelligence techniques for complex analysis and discovery, the group has turned its focus to developing an ecosystem of descriptive spaces, datasets, frameworks, and properties for chemistry that can work within a deep learning framework. Unlike images or languages, the three-dimensional world of physical, orbital, electrostatic, and quantum space in which chemical molecules exist and interact has no common, universal schema for representation. Developing a data-driven continuous representation of discrete molecules that can be understood and manipulated by neural networks is one of the first challenges the group tackled.
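To make the representation problem concrete, here is a minimal, hypothetical sketch (not the group's actual pipeline) of the first step such work typically requires: turning a discrete, text-based molecule description (a SMILES string) into a fixed-size numeric array that a neural network can consume. The character set, padding length, and example molecule are illustrative assumptions; the continuous representation itself would then come from training an autoencoder or similar model on encodings like these.

```python
# Hypothetical sketch, not the research group's actual code: encode a
# discrete molecule description (a SMILES string) as a fixed-size
# one-hot matrix suitable as neural-network input. The alphabet and
# length below are illustrative assumptions.
import numpy as np

CHARSET = list("CNOPSFclnos()[]=#123456789 ")     # toy alphabet; last entry is padding
CHAR_TO_IDX = {ch: i for i, ch in enumerate(CHARSET)}
MAX_LEN = 40                                       # assumed fixed input length

def smiles_to_onehot(smiles: str) -> np.ndarray:
    """Return a (MAX_LEN, len(CHARSET)) one-hot encoding of a SMILES string."""
    padded = smiles.ljust(MAX_LEN)[:MAX_LEN]
    onehot = np.zeros((MAX_LEN, len(CHARSET)), dtype=np.float32)
    for row, ch in enumerate(padded):
        onehot[row, CHAR_TO_IDX.get(ch, CHAR_TO_IDX[" "])] = 1.0
    return onehot

# Example: caffeine written as a SMILES string.
print(smiles_to_onehot("Cn1cnc2c1c(=O)n(C)c(=O)n2C").shape)   # (40, 27)
```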

Looking to the future, any number of applications involving chemical compound design and property prediction could potentially benefit from deep learning techniques. Alan's group remains active in this area of research, exploring different models and training methods to make the best use of this technology, but when they need to scale, they benefit from the University's HPC cluster, Odyssey, and the Research Computing group who manages the system. Since it was first launched back in 2007, Odyssey has grown into an InfiniBand-connected cluster in excess of 60,000 cores, with more than 35PB of storage comprising multiple classes of filesystems, each supporting different performance requirements and workflows. The cluster serves several schools within the Harvard University system, touching nearly every area of technology and research. The diverse team of scientists, engineers, and administrators who make up the group pride themselves on being "enablers of scale for scientists," not only managing a cluster but helping to translate scientific challenges into high performance cluster computing solutions. To quote Scott Yockel, Interim Director of Research Computing at Harvard, "We love when we help researchers have an epiphany that they are simply holding a problem the wrong way. People tend to think serially, but HPC clusters excel at parallel tasks."

Time, as the saying goes, is money. This is as true in the commercial sector as it is in academic research, and the need to accelerate research in a timely and cost-effective manner has never been greater. As the Odyssey system grows and demand shifts to more compute-intensive machine learning and hybrid tasks, the in-network computing capabilities of Mellanox InfiniBand, such as SHARP (Scalable Hierarchical Aggregation and Reduction Protocol), GPUDirect™ and of course RDMA, hold the promise of accelerating the machine learning and HPC workloads central to this research.
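What SHARP offloads, for example, are collective operations such as the allreduce at the heart of distributed training and many HPC codes. The sketch below is a minimal mpi4py example under assumed defaults; any in-network offload happens inside the MPI library and the fabric, so the application code itself does not change.

```python
# Minimal mpi4py sketch of the kind of collective (an allreduce) that
# SHARP can offload into the switch fabric. The offload, when enabled,
# is handled by the MPI library and the fabric; application code is
# unchanged. Run with e.g.: mpirun -np 4 python allreduce_demo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a gradient-sized buffer (the size is illustrative).
local = np.full(1_000_000, float(rank), dtype=np.float64)
total = np.empty_like(local)

# Sum the buffers across all ranks; every rank receives the result.
comm.Allreduce(local, total, op=MPI.SUM)

if rank == 0:
    print("sum across ranks, first element:", total[0])
```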

 


Going With the Flow: How Fluid Mechanics Advance Biomedical Research

More FLAIR to Fluid Mechanics via the Monash Research Cloud

Fluid Mechanics represents one of the most exacting and computationally challenging areas of biomedical research and medical diagnostics today. Microfluidics and Nanofluidics are at the forefront of research efforts to understand how fluids behave and flow at small scales. This is critical to medical advancement because significant fluid dynamics occur throughout the human body, including viscous flow, movement through small capillaries, osmosis, transport through membranes and filters, pumping action and many other activities. Studying the properties of fluids at such small scales helps scientists design more effective medical instruments, including syringe needles, pumps for various applications, and LoC (Lab on Chip) devices. Ongoing research is also aimed at creating extremely precise dosage systems and other methods of drug delivery, and at developing lower cost diagnostic procedures and instruments.

FLAIR (Fluids Laboratory for Aeronautical and Industrial Research), from the Department of Mechanical and Aerospace Engineering, Faculty of Engineering, at Monash University, has been conducting experimental and computational fluid mechanics research for more than two decades, focusing on fundamental fluid flow challenges that impact the automotive, aeronautical, industrial and more recently, biomedical fields.

A key research focus in recent years has been understanding the wake dynamics of particles near walls. These types of interactions are prevalent in the human body as blood flow carries the vast majority of the body’s nourishment, defense, and discarded materials to/from every living cell.  Particle-particle and wall-particle interactions were investigated using an in-house spectral-element numerical solver. When applied to biological engineering, blood cells / leukocytes are numerically modelled as canonical bluff bodies (i.e., as cylinders and spheres) and numerical computations are carried out.

These simulations are useful in understanding biological cell transport. All cells must transfer essential ions and small molecules across semi-permeable plasma membranes. To fulfill the requirements of life, cells exchange gases, such as oxygen and carbon dioxide; excrete waste products; and take in particles of food, water and minerals. Living cells evolved a membrane to fence off and contain their inner organic chemicals, while selectively allowing only essential atoms and simple compounds to cross back and forth. In fact, in 2013, a Nobel Prize was awarded to three scientists who explained the inner workings of the human body's 'cellular postal service'. Their work determined how cells shuttle proteins and other biomolecules from one location to another, a process that is important in the release of neurotransmitter chemicals, the secretion of insulin and countless other biological tasks.

The computational and data-intensive nature of this research is among the most demanding anywhere, and it has always been a challenge for the department at Monash to secure sufficient computing resources for its needs. In particular, the project aims to understand the wake dynamics of multiple particles in scenarios such as rolling, collisions and vortex-induced vibrations, and the mixing which results from these interactions. The group's two- and three-dimensional fluid flow solver also incorporates two-way body dynamics to model these effects. Because the studies involve multiple parameters such as Reynolds number, body rotation and height of the body above the wall, the total parameter space is extensive, requiring significant computational resources. While two-dimensional simulations were carried out on single processors, their three-dimensional counterparts required parallel processing, making NeCTAR nodes an ideal platform to run these computations. Some of the visualizations from the group's three-dimensional simulations are shown in the figures below.
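Because each point in that parameter space (a given Reynolds number, rotation rate, wall gap, and so on) is an independent simulation, sweeps like this parallelize naturally across a cluster or cloud allocation. The following is a hypothetical sketch of that pattern, not FLAIR's in-house spectral-element solver; the parameter values and the placeholder run_case function are purely illustrative.

```python
# Hypothetical sketch (not FLAIR's actual solver): distribute a sweep over
# (Reynolds number, wall-gap) combinations across MPI ranks, since each
# case in the sweep is independent of the others.
# Run with e.g.: mpirun -np 8 python sweep.py
from itertools import product
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

reynolds_numbers = [100, 200, 300, 400]   # illustrative values
wall_gaps = [0.1, 0.25, 0.5, 1.0]         # body height above the wall (illustrative units)
cases = list(product(reynolds_numbers, wall_gaps))

def run_case(re_number, gap):
    """Placeholder for one fluid-flow simulation at a single parameter point."""
    return {"Re": re_number, "gap": gap, "status": "done"}

# Round-robin assignment of cases to ranks.
my_results = [run_case(re, gap) for i, (re, gap) in enumerate(cases) if i % size == rank]

# Gather all results on rank 0 for post-processing.
all_results = comm.gather(my_results, root=0)
if rank == 0:
    completed = [r for chunk in all_results for r in chunk]
    print(f"completed {len(completed)} of {len(cases)} cases")
```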

Advanced research such as this can consume any extra compute capacity available. This is where a research-oriented computational cloud like R@CMon is needed. Since 2008, the FLAIR team has been making good use of the Monash Campus Cluster (MCC), a high-performance/high-throughput heterogeneous system with over two thousand CPU cores. However, MCC is in heavy demand by researchers from across the university; so much so that FLAIR users often found themselves having to wait for long periods of time before they could run their fluid flow simulations. It therefore became clear that FLAIR researchers needed additional computational resources.

R@CMon was able to secure a 160-core allocation for the FLAIR team, adding valuable resources for the group. Now, thanks to both NeCTAR and MCC-R@CMon, over one million CPU hours, distributed across 4,000 jobs, have been provided for the project's intensive calculations.

This powerful computational infrastructure is enabled by networking solutions from Mellanox. The installation at Monash is based on Mellanox's CloudX platform built from the company's Spectrum SN2700 Open Ethernet switches, ConnectX-4 NICs, and LinkX cables. Mellanox started out at Monash University with an Ethernet fabric built on the company's 56GbE SwitchX-2 switches and CloudX technology, then expanded the architecture to support additional high-performance computing (HPC) capacity and high-throughput computing (HTC) applications. In addition to this expansion, Monash University gained a general increase in bandwidth and compute performance by incorporating Mellanox's more recent generation of 100Gb/s end-to-end interconnect solutions into the University's cloud node, known as R@CMon.

The cloud now utilizes Mellanox end-to-end Ethernet solutions at 10, 40, 56, and 100Gb/s as part of a nationwide initiative that strives to create an open and global cloud infrastructure. Monash selected Mellanox's RDMA-capable Ethernet technology for its performance scalability and highly efficient cloud enhancements. The institution has used its high performance cloud to establish numerous 'Virtual Laboratories' for data-intensive characterization and analysis. Each laboratory provisions virtual desktops and Docker-based tools that are already linked to the relevant data sources and HPC resources. This strategy has worked so well that it is fast becoming the standard operating environment for the modern-day researcher, supporting general purpose HPC and HTC (including GPGPU capabilities and Hadoop), interactive visualization, and analysis.

Monash University's cloud node, R@CMon, is part of the National eResearch Collaboration Tools and Resources (NeCTAR) Project. NeCTAR aims to enhance research collaboration by connecting researchers throughout Australia and providing them with access to a full suite of digitally-enabled data, analytic and modelling resources specifically relevant to their areas of research. Since the initial deployment of a high availability CloudX OpenStack cloud, the University has expanded its RDMA-capable Ethernet fabric, both meeting and exceeding the innovation goals of NeCTAR. The fabric tightly integrates Ceph and Lustre storage with the cloud, meeting the needs of block, object and application workloads as one converged fabric.

Mellanox Open Ethernet switches provide the flexibility Monash needs, allowing the university to mix and match capabilities, which is critical for its dense but diverse and ever-changing compute architecture. Since integrating Mellanox interconnect solutions, Monash has been able to achieve greater performance than ever before. With this quantum leap from their previous compute environment, we look forward to more innovative discoveries as they continue their ground-breaking research.


Untangling the Mysteries Behind the Human Genome: The Key to Customized Patient Care

NCI and their Next-Generation Approach to Medical Research

The field of human genomics research is becoming increasingly active. With the speed of next-generation genomic sequencing now possible, it is feasible to create new genomics-based personalized medical services, offering patients more advanced diagnoses. These data can also be aggregated into large genomic databases that can inform national health policies that benefit society. Understanding the genetic factors that account for human diseases is one of the most important reasons for studying the human genome. Even though many genetic disorders are not yet treatable, early or pre-diagnosis can help improve the quality of life or even extend the lifespan of patients. Current clinical trials on genetic therapies for cystic fibrosis, hemophilia, and other genetic disorders offer the promise of eventual treatments that may give patients a life free from debilitating symptoms. Diagnostic tests can also help couples make informed decisions about whether to risk passing specific disease-related genes to their children. Using genetic testing in conjunction with in vitro fertilization, doctors can select embryos that do not carry the dangerous gene. Truly, the human genome holds the key to advancing personalized medical care in a myriad of ways.

Other benefits of studying human DNA and genetics include helping scientists examine phylogeny to better understand where humans came from and how we relate to one another as an evolutionary species. It can help clarify the connections between different groups of people and give historians and anthropologists a clearer picture of historic human migration patterns. In more mainstream uses, a person's genome can give clues to their ancestry and help them better understand their genealogy. Genetic testing has been widely used to verify or rule out relatedness of individuals or populations. In criminology, human genetic information has been used to match a suspect's DNA to biological evidence found at a crime scene or to rule it out, to identify victims, and to exonerate convicted individuals using newer genetic methods that were not available at the time of the initial conviction. When genetic material has been available, individuals have been freed years, even decades, after being wrongly incarcerated, all thanks to breakthrough research. Paternity testing has also become a very common legal application of genetic testing.

The potential ethical, social, and legal implications of genetic testing and analysis are numerous, and new applications of the technology give rise to new areas of controversy, for example human genetic enhancement: altering human DNA to enhance athletic ability, intelligence, or any of a wide variety of physical characteristics. On the other hand, while society sorts out the ethics, being able to alter the human genome at the embryonic level could signal an end to currently incurable genetic conditions such as Down's syndrome, congenital deafness and congenital heart defects.

This growing field of medical research currently uses gene sequences to understand, diagnose and treat human diseases and promises to revolutionize clinical practice in the coming years through medical care customized to a patient's unique genetic makeup. NCI is playing an active and increasingly important role in supporting this next-generation sequencing approach to medical research. Genomic medicine relies on the sequencing of thousands of whole genomes, each of which produces around 200 gigabytes of data, and depends on NCI's capability for fast computation for analysis and large-scale storage for archiving. Medical research of this kind, working at the population scale, requires large numbers of de-identified genomic sequences to be gathered in one place, like the Garvan Institute of Medical Research's Medical Genome Reference Bank (MGRB).

The MGRB stores the genomes of thousands of disease-free Australian seniors to provide a rigorous sample with which to compare the genomes of patients with rare diseases and cancer. Setting a new record for processing, the MGRB aligned 1200 human genomes overnight, making full use of the data bandwidth available at NCI.
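A back-of-envelope calculation, using the roughly 200 gigabytes per genome cited above and an assumed ten-hour overnight window, gives a feel for the data bandwidth that run implies.

```python
# Back-of-envelope estimate using the figures quoted above: ~200 GB per
# whole genome and 1,200 genomes aligned overnight. The 10-hour window
# is an assumption for illustration.
genomes = 1200
bytes_per_genome = 200e9          # ~200 GB
window_hours = 10                 # assumed "overnight" window

total_bytes = genomes * bytes_per_genome
seconds = window_hours * 3600
print(f"total input data: {total_bytes / 1e12:.0f} TB")
print(f"sustained bandwidth: {total_bytes / seconds / 1e9:.1f} GB/s "
      f"(~{total_bytes * 8 / seconds / 1e9:.0f} Gb/s)")
```

Sustained rates in that range help explain why fabric and storage bandwidth matter as much as raw compute for this workload.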

As part of this effort, NCI is using Mellanox's interconnect solutions to provide faster inter-node connectivity and access to storage, giving Australian researchers and scientific research organizations critical on-demand access to NCI's high-performance cloud. This cloud facilitates scientific workloads with a deployment that combines the Mellanox CloudX solution with OpenStack software to support high performance workloads on a scalable and easy-to-manage cloud platform. CloudX simplifies and automates the orchestration of cloud platforms and reduces deployment time from days to hours. The NCI deployment is based on Mellanox 40/56Gb/s Virtual Protocol Interconnect adapters and switches supporting both InfiniBand and Ethernet. NCI also uses Mellanox's 100Gb/s EDR InfiniBand interconnect for its new Lenovo NextScale supercomputer. This powerful combination of storage and compute power enables NCI to deliver extremely complex simulations and more accurate predictions, all with the aim of improving the human condition.


Back to School with Mellanox HPC

You see it everywhere this time of year. Parents with a lighter spring in their step, smiles breaking out spontaneously as they happily shell out funds for reams of paper, pens, and a multitude of other school supplies. Conversely, you see the guarded dread behind children's gazes. Yes, it is that glorious time of year for parents and a misery for children: back to school.

If only those children appreciated how the experience of school can lead to a wonderful future career, and the joy of accomplishment that attending a top research university can bring. I remember how I felt that very first day I stepped onto Harvard's campus as a first-year graduate student. I remember just as clearly building my very first supercomputer. I have worked with InfiniBand since 2003, and I have witnessed firsthand the amazing, ground-breaking research being done at universities using Mellanox high performance computing solutions. After completing my degree, I stayed at Harvard for more than two decades building an HPC career, helping to create the FAS Research Computing Group and to build the Odyssey supercomputer that now serves researchers across the entire University.

When it comes to research in the academic community, Mellanox's InfiniBand can go big, really big. From fundamental high energy physics probing the subatomic structure of matter to mapping the 13-billion-year evolution of the entire universe, there are countless problems that could not be tackled without HPC. Imagine trying to map the 3.2 billion base pairs of a human genome, or model the 86 billion neurons in the human brain, each with up to 10,000 connections, without a supercomputer. The reality is that nearly all scientific research, supercomputing, and InfiniBand go hand in hand. Computational modelling is now an integral component of almost all areas of chemistry, physics and materials sciences. As one customer put it so eloquently, "Without high-performance computing, the time to discovery is just too long to accomplish anything."
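To put the brain example in rough numbers, using the figures above plus an assumed four bytes of state per connection (purely for illustration), the connection count alone runs to hundreds of trillions, and the memory needed just to hold it far exceeds any single server:

```python
# Back-of-envelope using the figures in the text (86 billion neurons,
# up to 10,000 connections each) plus an assumed 4 bytes of state per
# connection, to show why such a model needs a distributed-memory system.
neurons = 86e9
connections_per_neuron = 10_000
bytes_per_connection = 4          # illustrative assumption

total_connections = neurons * connections_per_neuron
total_bytes = total_connections * bytes_per_connection
print(f"connections: {total_connections:.1e}")                            # ~8.6e14
print(f"memory for connection state alone: {total_bytes / 1e15:.1f} PB")  # ~3.4 PB
```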

At Mellanox, we've had a long and successful history working with top academic institutions and universities like Kyushu University, where the new supercomputer is being accelerated by Mellanox EDR InfiniBand solutions. Our InfiniBand technology provides the university with smart accelerations, enabling in-network computing that ensures faster data processing, higher performance, and greater efficiency for the wide variety of application workloads that university systems typically need to support. The system is planned to be fully operational by January 2018 and to deliver over 10 Petaflops of peak computing power. The Mellanox EDR InfiniBand solutions enable in-network computing through smart offload engines, including SHARP™ (Scalable Hierarchical Aggregation and Reduction Protocol) technology. This technology enables the interconnect fabric to analyze data as it is being transferred within the network, so that a large portion of the computational burden is offloaded from the communication layers into the network hardware. In many cases this results in more than an order-of-magnitude improvement in application performance.

The university hosts one of the leading supercomputing centers for academic research in Japan, providing high-performance computing resources for fluid dynamic analysis, molecular science and other scientific disciplines. The new supercomputer will address the growing computational needs of the faculty, supporting both computationally intensive research and data-science applications.

According to Associate Professor Takeshi Nanri: “For the past five years, RIIT has been using Mellanox InfiniBand solutions for our high-performance computing systems. Because of the superior performance and stability, we have great confidence in Mellanox’s InfiniBand solution to power our upcoming supercomputing platform. The 100G InfiniBand EDR solution assists us to migrate to the in-network-computing architecture, which enables faster data analysis and the highest system efficiency of our applications.”

The University of Waterloo is also using Mellanox InfiniBand solutions to enable leading-edge research in a variety of academic disciplines, including mathematics, astronomy, science, the environment and more. The University of Waterloo is a member of SHARCNET (www.sharcnet.ca), a consortium of 18 universities and colleges operating a network of high-performance compute clusters in southwestern, central and northern Ontario, Canada. The University of Waterloo system is using Mellanox's EDR 100Gb/s solutions with smart offloading capabilities to maximize system utilization and efficiency. The system also leverages Mellanox's InfiniBand-to-Ethernet gateways to provide seamless access to an existing Ethernet-based storage platform.

A third example is the University of Tokyo, now using Mellanox EDR InfiniBand to accelerate its newest supercomputer. The university is using Mellanox Switch-IB 2 EDR 100Gb/s InfiniBand switch systems and ConnectX®-4 adapters to accelerate its new supercomputer for computational science, engineering by large-scale simulations, and data analysis. Mellanox technology is a key part of the new system, advancing ongoing research and expanding the exciting work being carried out in computational science and engineering, computer science, data analysis, and machine learning.

The Information Technology Center at the University of Tokyo is one of Japan's premiere research and educational institutions, with the mission of building and operating large computer systems. It serves as the core institute of the Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN), a group of supercomputer centers spanning eight leading universities. The new supercomputer will also serve as a test environment for a future supercomputer system that will be tasked with advancing research in machine learning, artificial intelligence, and other multifaceted emerging fields of study.

Mellanox InfiniBand adapters provide the highest performing interconnect solution for High-Performance Computing, Enterprise Data Centers, Web 2.0, Cloud Computing, and embedded environments. Mellanox’s Switch-IB 2 EDR 100Gb/s InfiniBand switches, the world’s first smart switches, enable in-network computing through the principle of co-design using SHARP (Scalable Hierarchical Aggregation and Reduction Protocol) technology. Switch-IB 2 delivers the highest fabric performance available in the market with up to 7Tb/s of non-blocking bandwidth, 90ns port-to-port latency and 195 million messages per second processing capacity per port.
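As a rough sanity check on that bandwidth figure, assuming the 36-port EDR (100Gb/s) configuration of Switch-IB 2, the aggregate across all ports in both directions works out to roughly the quoted number:

```python
# Rough sanity check of the aggregate bandwidth figure, assuming a
# 36-port EDR (100 Gb/s) Switch-IB 2 configuration.
ports = 36
gbps_per_port = 100

per_direction = ports * gbps_per_port     # 3,600 Gb/s one way
full_duplex = 2 * per_direction           # both directions
print(f"aggregate switching capacity: {full_duplex / 1000:.1f} Tb/s")  # ~7.2 Tb/s
```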

So, as your kids head back to school, even if they are still in elementary school, imagine the research and discoveries that await them when they reach college age.  Share this vision of scientific discovery and high performance computing and help them get excited about the ways they could contribute to our interconnected planet’s future. Maybe some of that guarded dread can be replaced with wide-eyed wonder at the future that awaits them.

And speaking of education, be sure to check out Ai to Z, a Mellanox video that spells out the ABCs of Artificial Intelligence.


Using InfiniBand as a Unified Cluster and Storage Fabric

InfiniBand has been the superior interconnect technology for HPC since it was first introduced in 2001, leading with the highest bandwidth and lowest latency year after year. Although it was originally designed for, and is ideal for, inter-process communication, what many people may not realize is that InfiniBand brings advantages to nearly every use of an interconnect fabric in today's modern data center.

First, let's review what a fabric actually does in the context of a Beowulf-architecture HPC cluster. In addition to the inter-process communication already mentioned, compute nodes need access to shared services such as storage, network boot or imaging, internet access, and out-of-band management. Traditionally, it was common in HPC cluster designs to build one or more Ethernet networks alongside InfiniBand for some of these services.

The primary use of a high performance fabric in any HPC cluster is IPC (inter-process communication), with support for RDMA and higher level protocols such as MPI, SHMEM, and UPC. Mellanox InfiniBand HCAs (host channel adapters) support RDMA with less than 1% CPU utilization, and the switches in an InfiniBand fabric can work in tandem with HCAs to offload nearly 70% of the MPI protocol stack to the fabric itself, effectively enlisting the network as a new generation of co-processor. And speaking of co-processors, newer capabilities such as GPUDirect and rCUDA extend many of these same benefits to attached GPGPUs and other coprocessor architectures.
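From the application's point of view, that RDMA path is usually reached through an MPI library rather than programmed directly. The minimal mpi4py sketch below, under assumed defaults, illustrates the point: the code simply hands NumPy buffers to MPI, and an RDMA-capable fabric lets the library move them without extra copies or CPU involvement.

```python
# Minimal mpi4py sketch of buffer-based inter-process communication.
# Over an RDMA-capable fabric, the MPI library can move these buffers
# directly between nodes; the application code is unchanged.
# Run with e.g.: mpirun -np 2 python pingpong.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

buf = np.zeros(10_000_000, dtype=np.float64)   # ~80 MB payload

if rank == 0:
    buf[:] = 42.0
    comm.Send(buf, dest=1, tag=0)    # buffer-interface send (zero-copy capable)
elif rank == 1:
    comm.Recv(buf, source=0, tag=0)
    print("rank 1 received; first element =", buf[0])
```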

The language of the internet is TCP/IP, which is also supported by an InfiniBand fabric using a protocol known as IPoIB (IP over InfiniBand). Simply put, every InfiniBand HCA port appears to the kernel as a network device which can be assigned an IP address and fully utilize the same IPv4 and IPv6 network stacks as Ethernet devices. Additionally, a capability called Virtual Protocol Interconnect (VPI) allows any InfiniBand port to operate as an Ethernet port when connected to an Ethernet device, and Mellanox manufactures "bridging" products that forward TCP/IP traffic from the IPoIB network to an attached Ethernet fabric for full internet connectivity.
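Because IPoIB presents the HCA port as an ordinary IP interface, unmodified socket code works over the InfiniBand fabric. The short sketch below illustrates this; the 10.10.0.5 address is a hypothetical IP assigned to an ib0 interface, not a value from this post.

```python
# Minimal sketch: IPoIB exposes the HCA port as an ordinary IP interface,
# so plain TCP socket code runs over the InfiniBand fabric unchanged.
# 10.10.0.5 is a hypothetical address assigned to an ib0 interface.
import socket

IPOIB_ADDR = "10.10.0.5"   # hypothetical IP bound to the ib0 interface
PORT = 5001

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind((IPOIB_ADDR, PORT))   # bind to the IPoIB interface's address
    server.listen(1)
    conn, peer = server.accept()      # clients connect exactly as over Ethernet
    with conn:
        data = conn.recv(4096)
        print(f"received {len(data)} bytes from {peer} over IPoIB")
```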

Storage can also utilize the IP protocol, but parallel filesystems such as GPFS, Lustre, and other clustered filesystems also support RDMA as a data path for enhanced performance. The ability to support both IP and RDMA on a single fabric makes InfiniBand an ideal way to access parallel storage for HPC workloads. End-to-end data protection features and offloads of other storage-related protocols, such as NVMe over Fabrics (for PCIe-connected solid state storage) and erasure coding, further enhance the ability of InfiniBand to support and accelerate access to storage.

Mellanox ConnectX® InfiniBand adapters also support a feature known as FlexBoot. FlexBoot enables remote boot over InfiniBand or Ethernet using Boot over InfiniBand, Boot over Ethernet, or even Boot over iSCSI (Bo-iSCSI). Combined with VPI technologies, FlexBoot provides the flexibility to deploy servers with one adapter card into either InfiniBand or Ethernet networks, with the ability to boot from LAN or from remote storage targets. This technology is based on the PXE (Preboot Execution Environment) standard specification, and the FlexBoot software is based on the open source iPXE project (see www.ipxe.org).

Hyperconverged datacenters, Web 2.0, machine learning, and non-traditional HPC practitioners are now taking note of the maturity and flexibility of InfiniBand and adopting it to realize accelerated performance and improved ROI from their infrastructures. The advanced offload and reliability features offered by Mellanox InfiniBand adapters, switches, and even cables mean that many workloads can realize greater productivity, acceleration and increased stability. Our new InfiniBand router, which supports L3 addressing, can even interconnect multiple fabrics with different topologies, making InfiniBand able to scale to an almost limitless number of nodes.

InfiniBand is an open standard for computer interconnect, backward- and forward-compatible, and supported by over 220 members of the InfiniBand Trade Association (IBTA). Mellanox remains the industry leader, committed to advancing this technology generations ahead of our competitors with leading-edge silicon products integrated into our adapters (HCAs), switching devices, cables and more. If you want the highest performance, lowest latency, best scaling fabric for all of your interconnect needs, consider converging on Mellanox InfiniBand.

Join me on Tuesday, March 14th at 10 a.m. for our webinar, One Scalable Fabric for All: Using InfiniBand as a Unified Cluster and Storage Fabric with IBM.