More FLAIR to Fluid Mechanics via the Monash Research Cloud
Fluid mechanics is one of the most exacting and computationally challenging areas of biomedical research and medical diagnostics today. Microfluidics and nanofluidics are at the forefront of efforts to understand how fluids behave and flow at small scales. This is critical to medical advancement because significant fluid dynamics occur throughout the human body, including viscous flow, movement through small capillaries, osmosis, transport through membranes and filters, and pumping action. Studying the properties of fluids at such small scales helps scientists design more effective medical instruments, including syringe needles, pumps for various applications, and Lab-on-a-Chip (LoC) devices. Ongoing research also aims at creating extremely precise dosage systems and other drug-delivery methods, and at developing lower-cost diagnostic procedures and instruments.
FLAIR (Fluids Laboratory for Aeronautical and Industrial Research), from the Department of Mechanical and Aerospace Engineering, Faculty of Engineering, at Monash University, has been conducting experimental and computational fluid mechanics research for more than two decades, focusing on fundamental fluid flow challenges that impact the automotive, aeronautical, industrial and more recently, biomedical fields.
A key research focus in recent years has been understanding the wake dynamics of particles near walls. These interactions are prevalent in the human body, as blood flow carries the vast majority of the body’s nourishment, defense, and discarded materials to and from every living cell. Particle-particle and wall-particle interactions were investigated using an in-house spectral-element numerical solver. When applied to biological engineering, blood cells such as leukocytes are modelled as canonical bluff bodies (i.e., cylinders and spheres) for numerical computation.
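To get a feel for why bluff-body models are appropriate at this scale, consider the particle Reynolds number Re = ρUD/μ. The sketch below uses rough, textbook-order-of-magnitude values for blood plasma and a leukocyte — these numbers are illustrative assumptions, not FLAIR's actual simulation parameters.

```python
def particle_reynolds(density, velocity, diameter, viscosity):
    """Particle Reynolds number: Re = rho * U * D / mu (dimensionless)."""
    return density * velocity * diameter / viscosity

# Assumed order-of-magnitude values (SI units), for illustration only
rho = 1025.0   # blood plasma density, kg/m^3
mu = 1.2e-3    # plasma dynamic viscosity, Pa*s
U = 1.0e-3     # characteristic flow speed in a small vessel, m/s
D = 10.0e-6    # leukocyte diameter, ~10 micrometres

Re = particle_reynolds(rho, U, D, mu)
print(f"Re = {Re:.1e}")  # well below 1: viscous forces dominate at this scale
```

The result is far below unity, which is why flows at cellular scales are viscous-dominated and so different from everyday intuition about fluids.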
These simulations are useful in understanding biological cell transport. All cells must transfer essential ions and small molecules across semi-permeable plasma membranes. To fulfill the requirements of life, cells exchange gases, such as oxygen and carbon dioxide; excrete waste products; and take in particles of food, water and minerals. Living cells evolved a membrane to fence off and contain their inner organic chemicals, while selectively allowing only essential atoms and simple compounds to cross back and forth. In fact, in 2013, a Nobel Prize was awarded to three scientists who explained the inner workings of the human body’s ‘cellular postal service’. Their work determined how cells shuttle proteins and other biomolecules from one location to another — a process that is important in the release of neurotransmitter chemicals, the secretion of insulin and countless other biological tasks.
This research is among the most computationally and data-intensive at Monash, and securing access to sufficient computing resources has always been a challenge for the department. In particular, the project aims to understand the wake dynamics of multiple particles in various scenarios such as rolling, collisions and vortex-induced vibrations, and the resultant mixing that occurs through these interactions. The group’s two- and three-dimensional fluid flow solver also incorporates two-way body dynamics to model these effects. As the studies involve multiple parameters such as Reynolds number, body rotation and the height of the body above the wall, the total parameter space is extensive, requiring significant computational resources. While two-dimensional simulations were carried out on single processors, their three-dimensional counterparts required parallel processing, making NeCTAR nodes an ideal platform for these computations. Some of the visualizations from the group’s three-dimensional simulations are shown in the figures below.
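The parameter space described above grows multiplicatively with each added dimension, which is what makes such studies so expensive. A minimal sketch of that kind of sweep is below; the sample values and the `run`-style job naming are hypothetical placeholders, not FLAIR's actual solver inputs.

```python
from itertools import product

# Hypothetical sample values spanning the parameter space described above
reynolds_numbers = [20, 50, 100, 200]   # flow Reynolds number
rotation_rates = [0.0, 0.5, 1.0]        # non-dimensional body rotation rate
gap_heights = [0.1, 0.5, 1.0]           # body height above the wall, in diameters

# Cartesian product: every combination is one independent simulation
cases = list(product(reynolds_numbers, rotation_rates, gap_heights))
print(f"{len(cases)} cases to simulate")  # 4 x 3 x 3 = 36

for Re, alpha, h in cases:
    job_name = f"wake_Re{Re}_rot{alpha}_gap{h}"
    # In practice, each case would be submitted here as a separate cluster
    # job, e.g. by writing a batch script that invokes the solver binary.
```

Even this toy sweep yields 36 independent runs; add a fourth parameter or finer resolution and the count quickly reaches thousands of jobs, which is why cloud capacity matters.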
Advanced research such as this can consume any extra compute capacity available. This is where a research-oriented computational cloud like R@CMon is needed. Since 2008, the FLAIR team has been making good use of the Monash Campus Cluster (MCC), a high-performance/high-throughput heterogeneous system with over two thousand CPU cores. However, MCC is in heavy demand by researchers from across the university; so much so that FLAIR users often found themselves having to wait for long periods of time before they could run their fluid flow simulations. It therefore became clear that FLAIR researchers needed additional computational resources.
R@CMon was able to secure a 160-core allocation for the FLAIR team, significantly expanding the computational resources available to the group. Thanks to both NeCTAR and MCC-R@CMon, over one million CPU hours, distributed across 4,000 jobs, have been provided for the project’s intensive calculations.
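As a back-of-envelope check on those figures (my arithmetic, not numbers from the project):

```python
# Figures quoted above: ~1,000,000 CPU hours across ~4,000 jobs, 160 cores
total_cpu_hours = 1_000_000
jobs = 4_000
cores = 160

avg_hours_per_job = total_cpu_hours / jobs
print(avg_hours_per_job)          # 250.0 CPU hours per job on average

# If the 160-core allocation ran flat out, the equivalent wall-clock time:
wall_clock_days = total_cpu_hours / cores / 24
print(round(wall_clock_days, 1))  # about 260.4 days of continuous use
```

In other words, the allocation represents the better part of a year of sustained full utilization — well beyond what an oversubscribed shared cluster could spare.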
This powerful computational infrastructure is enabled by networking solutions from Mellanox. The installation at Monash is based on Mellanox’s CloudX platform, built from the company’s Spectrum SN2700 Open Ethernet switches, ConnectX-4 NICs, and LinkX cables. Mellanox started out at Monash University with an Ethernet fabric built on the company’s 56GbE SwitchX-2 switches and CloudX technology, and then expanded the architecture to support additional high-performance computing (HPC) capacity and high-throughput computing (HTC) applications. Beyond that expansion, Monash University gained a general increase in bandwidth and compute performance by incorporating Mellanox’s more recent generation of 100Gb/s end-to-end interconnect solutions into the University’s cloud node, known as R@CMon.
The cloud now utilizes Mellanox end-to-end Ethernet solutions at 10, 40, 56, and 100Gb/s as part of a nationwide initiative that strives to create an open and global cloud infrastructure. Monash selected Mellanox’s RDMA-capable Ethernet technology for its performance scalability and high-efficiency cloud enhancements. The university has used its high-performance cloud to establish numerous ‘Virtual Laboratories’ for data-intensive characterization and analysis. Each laboratory provisions virtual desktops and Docker-based tools that are already connected to the relevant data sources and HPC resources. This strategy has worked so well that it is fast becoming the standard operating environment for the modern researcher, supporting general-purpose HPC and HTC (including GPGPU capabilities and Hadoop), interactive visualization, and analysis.
Monash University’s cloud node, R@CMon, is part of The National eResearch Collaboration Tools and Resources (NeCTAR) Project. NeCTAR aims to enhance research collaboration by connecting researchers throughout Australia and providing them with access to a full suite of digitally-enabled data, analytic and modelling resources that is specifically relevant to their areas of research. Since the initial deployment of a high availability CloudX OpenStack cloud, the University has expanded its RDMA-capable Ethernet fabric, both meeting and exceeding the innovation goals of NeCTAR. The fabric tightly integrates Ceph and Lustre storage with the cloud, meeting the needs of block, object and applications workloads as one converged fabric.
Mellanox Open Ethernet switches provide the flexibility Monash needs, allowing the university to mix and match capabilities — critical for its dense but diverse and ever-changing compute architecture. Since integrating Mellanox interconnect solutions, the team has achieved greater performance than ever before. With this quantum leap over their previous compute environment, we look forward to more innovative discoveries as they continue their ground-breaking research.
- Join the Mellanox Community
- Explore: Our Interconnected Planet
- Get your head in the cloud: CloudX
- Check out: The Flair Home Page
- Learn more about: Mellanox Ethernet Solutions
- Research the research: Monash