All posts by Gilad Shainer

About Gilad Shainer

Gilad Shainer has served as Mellanox's Vice President of Marketing since March 2013. Previously, he was Mellanox's Vice President of Marketing Development from March 2012 to March 2013. Gilad joined Mellanox in 2001 as a design engineer and later served in senior marketing management roles between July 2005 and February 2012. He holds several patents in the field of high-speed networking and contributed to the PCI-SIG PCI-X and PCIe specifications. Gilad holds an MSc degree (2001, Cum Laude) and a BSc degree (1998, Cum Laude) in Electrical Engineering from the Technion - Israel Institute of Technology.

UoA and Mellanox Study the Solar System’s Lord of the Rings, Saturn

Humans have been obsessed with the stars and exploration since long before the written word. From Stonehenge and the Mayan calendar to American mission commander Neil Armstrong and pilot Buzz Aldrin, who landed the lunar module Eagle on July 20, 1969, humans have been reaching, literally, for the stars and the planets beyond. The quest for “One giant leap for mankind” is seemingly never-ending.

In a major space policy speech at Kennedy Space Center on April 15, 2010, then-U.S. President Barack Obama predicted a manned Mars mission to orbit the planet by the mid-2030s, followed by a landing. We’ve already traveled far beyond our blue planet, photographed in detail the shocking landscape of Pluto, and skirted the giant gas planet Jupiter in all its red glory, and now it’s time to put down some roots. Humankind is determined to take our Manifest Destiny beyond this mortal coil and colonize an alien planet. Next stop, Mars.

All the glamour and prose aside, however, you don’t need to be an astrophysicist to know that space travel is exceedingly dangerous. Cosmic rays, radiation, microgravity, and high-speed micrometeorites are just a few of the life-ending conditions space pioneers will face every day in mankind’s quest to colonize Mars.

Peter Delamere, Professor of Space Physics at the University of Alaska Fairbanks’ Geophysical Institute, knows a lot about the weather. Space weather, to be precise. Because space weather impacts many aspects of our near-Earth space environment, it also poses a potential risk to Earth-orbiting satellites, transpolar flights, and, of course, human space exploration. Thus, comparative studies of planetary space environments are crucial for understanding the basic physics that determine space weather conditions. One of the most dramatic manifestations of space weather can be found in the aurorae, or as most of us know them, the aurora borealis and aurora australis.


Turns out, we already know that Earth, Jupiter and Saturn all have aurora lights in their respective polar regions. It’s just that the space weather that creates these lights is fundamentally different. Studies show that Saturn’s aurora may be driven internally by Saturn’s rapid rotation rather than by the solar wind, as is the case on Earth.  Ultimately, space weather research strives to make accurate predictions that will help mitigate risks to ongoing space activity and human exploration.

Illustration of the magnetic field topology and flux circulation at Saturn. Flows are shown with red arrows. Magnetic fields are shown in purple (mapping to outer magnetosphere) and blue (showing bend back and bend forward configurations). From Delamere et al. 2015.

The figures above show results from a three-dimensional simulation of the Kelvin-Helmholtz instability (counter-streaming flows that generate vortices) at Saturn’s magnetopause boundary. This is the boundary that mediates the solar wind interaction with Saturn’s magnetosphere. The complicated surface waves mix solar wind and magnetospheric plasma, causing a “viscous-like” interaction with the solar wind. Similar processes happen on Earth, but are highly exaggerated on Saturn and Jupiter. The lines are magnetic field lines.


Innovation in the field of Space Plasma Physics, which is driving our collective understanding of space weather and its potential impact, is highly dependent upon access to HPC resources. Numerical simulations must resolve the vast spatial domains inherent in a space plasma environment. So, having access to local, reliable HPC resources, such as Mellanox HPC solutions, enables the Computational Space Physics group at the Geophysical Institute to further this important research. The Delamere group, which is part of the Computational Space Physics group at the Geophysical Institute, is currently funded by numerous NASA projects amounting to over $2M, all of which require considerable HPC resources.


When Congress established the Geophysical Institute in 1946, it could not possibly have predicted the depth and impact of the research that would be conducted there. From space physics and aeronomy to atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation, the institute continues to make discoveries and innovations that are changing the world for the better.


In January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, the UAF vice chancellor of research, the UAF International Arctic Research Center, and the UAF IDeA Network of Biomedical Research Excellence, UAF Research Computing Systems engineers deployed Mellanox InfiniBand solutions across multiple racks to form their HPC system. We knew something of the work being done at the Geophysical Institute at that time, but even we at Mellanox didn’t yet understand the full impact of their research. From deep within the earth, to the far reaches of our solar system, Mellanox’s leadership in HPC solutions is helping to solve some of science’s toughest challenges. The final blog in this series will come full circle and focus on the long-term data and research driven by Uma S. Bhatt, Professor of Atmospheric Sciences at the Geophysical Institute, and the efforts underway to study the climate in the most inhospitable and inaccessible region of our planet, the Arctic.


Supporting Resources:









Our Interconnected Planet: The University of Alaska and Mellanox take HPC Climate Research by Storm

“How’s the weather?” is probably the most oft-uttered question in the history of mankind. And with the recent epic hurricane that devastated Houston, Texas, the weather is literally on everyone’s mind these days.

Weather has been the bane of humans for as long as we have been around. Everyone thinks that the Inuit have the most words for snow (between 40 and 50, depending on how you count), but in reality, it is the Scots who claim the most snow-related words, 421 to be precise. Who knew? Flindrikin means a light snow shower, at least if you are in Scotland, where people apparently take their weather seriously. Weather is also a serious topic for the researchers tackling Arctic climate at the University of Alaska.

Uma S. Bhatt, Professor of Atmospheric Sciences, Geophysical Institute, University of Alaska, probably knows more words for snow than most. She is seeking a better understanding of the Arctic earth system with respect to the need for long-term climate information (e.g., air temperature, precipitation, wind speeds). The challenge is, most of these data (e.g., atmospheric re-analyses, climate models) are available at spatial resolutions on the order of hundreds of kilometers, which is not nearly high enough to support process studies and to assess local impacts. To address this need, high-resolution climate information has been created at a 20-km resolution through a process called dynamical downscaling.
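To give a rough sense of why downscaling is so compute-hungry (using a hypothetical square domain size for illustration, not the actual WRF domain used at UAF), refining a grid from 100-km spacing down to 20 km multiplies the number of horizontal grid cells by 25, before even counting the shorter time steps a finer grid requires:

```python
# Illustrative arithmetic only: the domain size below is hypothetical.
def horizontal_cells(domain_km: float, spacing_km: float) -> int:
    """Number of grid cells covering a square domain at a given spacing."""
    per_side = int(domain_km // spacing_km)
    return per_side * per_side

coarse = horizontal_cells(4000, 100)  # reanalysis-scale resolution
fine = horizontal_cells(4000, 20)     # 20-km downscaled resolution

# Refining spacing by 5x grows the cell count by 5**2 = 25x.
print(coarse, fine, fine // coarse)
```

The same quadratic scaling is why each step down in grid spacing demands a disproportionate jump in HPC capacity.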

European Center Reanalysis (ERA-Interim) daily average temperature for 4 July 1979 (left) and dynamically downscaled maximum temperature (Tmax) from the Weather Research Forecast (WRF) model at ~20km resolution. Units are ˚C.


Downscaling is particularly successful at improving climate information from lower-resolution models in areas of complex topography, because it produces more realistic precipitation and temperature gradients. Capturing local temperature variations is only possible through downscaling of climate information. This downscaling activity at the University of Alaska is supported by the Alaska Climate Science Center through the Department of the Interior, and is possible only because of the locally available Mellanox HPC computing resources.

A key advantage of dynamical downscaling is that a full suite of atmospheric model variables is available, which provides a rich data source for understanding the underlying mechanisms of Arctic climate change. Variables at the surface include precipitation, snow water equivalent, soil moisture and temperature, solar radiation, terrestrial radiation, and sensible and latent heat fluxes. Variables at multiple levels in the atmosphere include temperature, moisture, winds, and geopotential height. No wonder your local weather person gets it wrong so often.

Investigations of these data will help advance the world’s understanding of the climate drivers of various parts of the Earth system. Beyond its own scientific endeavors, the research team is generously making this downscaled data available to glaciologists, hydrologists, ocean wave modelers, wildlife biologists and others for use in other scientific investigations. These collaborations help everyone better understand the data, as additional scientists evaluate it in the context of their part of the Earth system. This rich data set is also being used to ask questions about glacier mass balance in southern Alaska, rain-on-snow events relevant to caribou mortality, wildland fire susceptibility, and numerous other topics relevant to Alaska and other parts of the world.

According to Professor Bhatt, the computational demands for dynamical downscaling are quite daunting. The data storage requirements alone can reach 3.3 TB (that’s terabytes) for a single year of raw model output, which reduces to about 300 GB of post-processed data when only the most-used variables are extracted and saved as daily values.

So, just how much is 1 terabyte these days? Assuming that the average photo is 500 KB, a 1 TB hard drive would hold some 2 million photos.
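As a quick sanity check of the figures above (assuming the decimal convention of 10^12 bytes per terabyte, and the 500 KB average photo size already stated):

```python
# Back-of-the-envelope check of the storage figures quoted above.
TB = 10**12          # bytes per terabyte (decimal convention)
GB = 10**9           # bytes per gigabyte

raw_output = 3.3 * TB        # one year of raw model output
post_processed = 300 * GB    # daily values of the most-used variables
photo = 500 * 10**3          # assumed average photo size, 500 KB

print(int(TB / photo))                     # photos that fit on a 1 TB drive
print(round(raw_output / post_processed))  # raw-to-post-processed reduction
```

So a single year of raw output is the storage equivalent of roughly 6.6 million photos, and post-processing shrinks it by about elevenfold.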

Augmenting the HPC resources at UAF in January 2017, by adding Mellanox InfiniBand solutions across multiple racks to form the HPC system, has given the team the chance to downscale additional models and different climate scenarios in order to reduce the uncertainty in future projections for Alaska. And sharing this valuable data with other researchers is key to the spirit of generosity of the University of Alaska and its mission to innovate in all areas of research: space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation. Along with the University of Alaska, Mellanox is proud to be part of this journey, helping with this quest for knowledge and a deeper understanding of our planet and the universe beyond.

Supporting Resources:



The University of Alaska Fairbanks and Mellanox’s HPC Take on Earthquakes

Dr. Carl Tape is an associate professor of geophysics at UAF, at the Geophysical Institute and the Department of Geosciences. He is conducting research on seismic tomography and seismic wave propagation. Seismic tomography is a technique for imaging the subsurface of the Earth with seismic waves produced by earthquakes or explosions. Seismic waves travel through the Earth’s layers and originate from earthquakes, volcanic eruptions, magma movement, large landslides, or large man-made explosions that give out low-frequency acoustic energy. Dr. Tape is leading research efforts in Alaska with the goal of developing a 3D model of the subsurface crust and upper mantle.

This might not seem like something the average person would be interested in, but consider that Alaska is one of the most seismically active regions in the world and by far the biggest earthquake producer in the United States, with an average of six magnitude 6 to 7 earthquakes per year and about 1,000 earthquakes in Alaska each month. Dr. Tape’s work holds the promise of understanding and reducing seismic hazards, with Mellanox’s InfiniBand leading the way in the data center.


This image shows earthquakes with a moment magnitude of greater than 4 from 1990-2010. Moment magnitude measures the size of events in terms of how much energy is released.

In January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, the UAF vice chancellor of research, the UAF International Arctic Research Center, and the UAF IDeA Network of Biomedical Research Excellence, the Geophysical Institute’s Research Computing Systems engineers deployed 11 Mellanox EDR switches and 38 compute nodes distributed across six racks. This deployment brought Chinook to a total of 1,892 cores and enhanced the cluster bandwidth from QDR speeds (40 Gb/s) to FDR/EDR speeds (56/100 Gb/s). Chinook now has enough rack space and InfiniBand infrastructure in place to expand to over 4,000 cores, if research demand warrants.

Now, what else are they doing at UAF with all that data center power? Well, one of the main missions of the Geophysical Institute is to understand the basic geophysical processes governing the planet Earth, especially as they occur in or are relevant to Alaska. With a motto of “Drive the Change,” UAF as a whole is focused on driving positive change for the state. The university is working to build a better future by educating a skilled workforce of teachers, engineers and nurses.

With respect to the environment, from the Earth to the surface of the sun and beyond, the institute turns data and observations into information useful for state and national needs. An act of Congress established the Geophysical Institute in 1946. Since that time, the institute has earned an international reputation for studying Earth and its physical environments at high latitudes. The institute now consists of seven major research units and many research support facilities. The research units include space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation. Our Interconnected Planet theme will focus the next blog on the work of Professor David Newman and his research on power grids.


Supporting Resources:







Our Interconnected Planet: The University of Alaska Fairbanks Tackles The Unsettling Subject of Turbulence

I have long been convinced that our mutually bumpy airplane rides, all caused by turbulence, are simply getting worse. And I’m not the only one who thinks we are white-knuckling it a whole lot more often these days. That is why I so appreciate the work being done by Dr. David Newman, Professor in the Physics Department and part of the Geophysical Institute at the University of Alaska Fairbanks. He is leading research on Modeling and Understanding Turbulence and Turbulent Transport, which is music to my ears. If travelers such as myself have to keep reaching for air sickness bags, we would sure like to know more about why.

According to Paul Williams, an atmospheric scientist who conducted a 2013 study duly reported by the Huffington Post, it is going to take years of research before turbulence can be definitively linked to something like global warming. In fact, he argues that while turbulence seems to be increasing in frequency right now, the likely culprit is actually social media, because so many travelers are taking videos of turbulence and sharing them.

I would say, tell this to my lurching stomach, but I digress. The work being done by Dr. Newman is tackling head-on one of computational science’s thorniest problems. In fact, he has helped pioneer a High Performance Computing technique for solving turbulence problems called the ‘Parareal’ method, a method gaining in popularity and use. Meanwhile, the main thrust of his research is to characterize the nature of, and quantify the mechanisms behind, turbulent transport. Much of the funding for this work comes from the U.S. Department of Energy’s Office of Fusion Energy, for modeling and understanding turbulence in the confined plasmas needed to make fusion work as an energy source on Earth. However, much of the research is also directly applicable to Earth’s geophysical systems such as the oceans and the atmosphere. Dr. Newman says the research would not be possible without access to HPC. Insights gained from this research should thus be applicable beyond the plasma fusion context (for example, in ozone transport across atmospheric jets in the polar regions of Earth). So, not only will the research help benefit an alternative energy source, fusion, but it may ultimately help to figure out why so many of us cannot get on a plane without feeling like we have just been strapped in for a six-hour roller coaster ride.

Again, none of this would be possible without HPC. To recap, back in January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, the UAF vice chancellor of research, the UAF International Arctic Research Center, and the UAF IDeA Network of Biomedical Research Excellence, the Geophysical Institute’s Research Computing Systems engineers deployed Mellanox InfiniBand solutions across multiple racks to form their HPC system. The cluster, named Chinook, was made possible by a partnership between UAF and the Murdock Charitable Trust.

Figure A.1 The right panel shows turbulence in a shear flow while the left panel shows the same turbulence without a sheared flow. A shear is a change in the right to left velocity of the fluid shown in these figures. A sheared flow leads to a reduction in cross flow scale lengths and therefore a reduction in cross flow transport. This is an important topic for transport of many constituent quantities such as pollution, salinity, nutrients, temperature etc.


With research ongoing at UAF’s Geophysical Institute in disciplines ranging from space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; and remote sensing to tectonics and sedimentation, HPC is making a difference in advancing our understanding of how our Interconnected Planet works.

Supporting Resources:




Our Interconnected Planet: The University of Alaska Fairbanks and Mellanox’s High Performance Computing Tracking Earth’s Most Massive Ice Shelves

Note: Recently, a chunk of ice the size of Delaware broke off Antarctica’s Larsen C ice shelf. With the help of Mellanox’s High Performance Computing (HPC) solutions, the University of Alaska is conducting fascinating work on ice sheets.

Those concerned with what is popularly referred to as “the fate of humanity” are apt to track comets, asteroids and even the rate of melt of one of earth’s most precious resources, ice sheets. But, as I set out to discuss Mellanox and the University of Alaska Fairbanks’ (UAF) fascinating work on ice sheets, the thought came to me that high-tech data centers, and the High Performance Computing (HPC) running some of the biggest financial trading centers in the world, were about as far from the vast frozen wild of ice sheets as it gets.

Turns out, the more I looked into it, the more I realized that HPC – and the expansive, pristine ice sheets of our blue planet – are actually very closely aligned. Presently, 10 percent of the land area on Earth is covered with glacial ice, including glaciers, ice caps, and the ice sheets of Greenland and Antarctica. This amounts to more than 15 million square kilometers (5.8 million square miles). And it is worth noting that glaciers store about 75 percent of the world’s fresh water. So, tracking the size and rate of melt of these massive ice sheets is actually very important to humanity. In fact, the Greenland ice sheet occupies about 82 percent of the surface of Greenland, and if melted, would cause sea levels to rise by 7.2 meters. Estimated changes in the mass of Greenland’s ice sheet suggest it is melting at a rate of about 239 cubic kilometers (57.3 cubic miles) per year. Fate of humanity indeed.

Andy Aschwanden, Research Assistant Professor, Geophysical Institute, UAF, is studying the ice flow of the Greenland ice sheet and is on the fast track for credible modeling efforts that will help predict the future evolution of the Greenland ice sheet.

Over the past two decades, the professor reports, large changes in flow have been observed in outlet glaciers, and the melt rate is speeding up, with a 17 percent increase in ice-sheet-wide melt between 2000 and 2005. To track these potentially life-altering changes in outlet glaciers, an ice sheet model is called for. UAF researchers have been hard at work developing the open-source Parallel Ice Sheet Model (PISM) since 2006, and are genuine pioneers in open-source ice sheet modeling. Development, testing, and cutting-edge research on ice sheets, says Aschwanden, go hand-in-hand with state-of-the-art HPC resources.

Essentially, the simulations needed to track the massive ice sheet’s progress require extremely fine resolutions, large outputs and high computational demands – all only available via the formidable computational power found in Mellanox’s HPC computing resources. Such simulations are needed as proof-of-concept, and the HPC resources provided by Mellanox enable routine simulations at ≤1 km grid resolution that better resolve the physics of the Greenland ice sheet.


Figure A.4 (A) Observed Greenland surface speeds; box indicates the location of the insets. (B–E) Basal topography at 150 m and degraded ice sheet model resolutions. (F) Observed surface speeds at 150 m. (G–J) Surface speeds modeled with PISM (adapted from Aschwanden, Fahnestock, and Truffer (2016)).


As we know from research being conducted at the University of Alaska Fairbanks, an act of Congress established the Geophysical Institute in 1946. Since that time, the institute has earned an international reputation for studying Earth and its physical environments at high latitudes. The institute now consists of seven major research units and many research support facilities, including space physics and aeronomy; atmospheric sciences; snow, ice, and permafrost; seismology; volcanology; remote sensing; and tectonics and sedimentation.

In January 2017, with support from the M. J. Murdock Charitable Trust, the Geophysical Institute, UAF vice chancellor of research, UAF International Arctic Research Center, and UAF IDeA Network of Biomedical Research Excellence, UAF Research Computing Systems engineers deployed Mellanox InfiniBand solutions across multiple racks to form their HPC system. “This community, condo model project launches a significant change in how high-performance computing resources are offered to the UA community,” said Gwendolyn Bryson, manager of Research Computing Systems at the UAF Geophysical Institute. “Chinook is more capable, more flexible and more efficient than our legacy HPC resources.”

Our next blog will take us from the vast frozen reaches of Earth’s ice sheets discussed in this blog to the skies above where we look into The University of Alaska’s research on turbulence.


Supporting Resources:

Deep Learning in the Cloud Accelerated by Mellanox

During the GTC’17 conference week, NVIDIA and Microsoft announced new Deep Learning solutions for the cloud. Artificial Intelligence and deep learning have the power to pull meaningful insights from the data we collect, in real time, enabling businesses to gain a competitive advantage and to develop new products faster and better. As Jim McHugh, vice president and general manager at NVIDIA, said, having AI and Deep Learning solutions in the cloud simplifies access to the required technology and can unleash AI developers to build a smarter world.

Microsoft announced that Azure customers using Microsoft’s GPU-assisted virtual machines for their AI applications will have newer, faster-performing options. Corey Sanders, director of Compute at Microsoft Azure, mentioned that the cloud offering will provide over 2x the performance of the previous generation for AI workloads utilizing CNTK (Microsoft Cognitive Toolkit), TensorFlow, Caffe, and other frameworks.

Mellanox solutions make it possible to speed up data insights with scalable deep learning in the cloud. Microsoft Azure NC is a massively scalable and highly accessible GPU computing platform. Customers can use GPUDirect RDMA (Remote Direct Memory Access) over InfiniBand for scaling jobs across multiple instances. Scaling out to tens, hundreds, or even thousands of GPUs across hundreds of nodes allows customers to submit tightly coupled jobs with the Microsoft Cognitive Toolkit, a tool perfect for natural language processing, image recognition, and object detection.

See more at:


Special Effects Winners Need Winning Interconnect Solutions!

Mellanox is proud to enable the Moving Picture Company (MPC) with our world-leading Ethernet and InfiniBand solutions, which are being used during the creative process for Oscar-winning special effects.

The post-production and editing phases require very high data throughput due to the need for higher resolution (the number of pixels that make up the screen), better color-depth (how many bits are used to represent each color) and the increase in frame rate (how many frames are played in a second). Data must be edited in real-time, and must be edited uncompressed to avoid quality degradation.

You can do the simple math: a single stream of uncompressed 4K video requires 4096 x 2160 (typical 4K/UHD pixel count) x 24 (color-depth bits) x 60 (frames per second), which is 12.7 Gb/s. Therefore, one needs interconnect speeds greater than 10 Gb/s today. As we move to 8K video, we will need data speeds greater than 100 Gb/s! Mellanox is the solutions provider for such speeds and the enabler behind the great movies of today.
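The arithmetic above can be wrapped in a small helper. Note that 8K at the same 24-bit color and 60 frames per second works out to roughly 51 Gb/s; it is the push toward higher frame rates and deeper color that takes 8K past 100 Gb/s:

```python
# Raw bandwidth of one uncompressed video stream, reproducing the math above.
def stream_gbps(width: int, height: int, bit_depth: int, fps: int) -> float:
    """Bits per pixel x pixels per frame x frames per second, in Gb/s."""
    return width * height * bit_depth * fps / 1e9

uhd_4k = stream_gbps(4096, 2160, 24, 60)  # ~12.7 Gb/s, as quoted
uhd_8k = stream_gbps(8192, 4320, 24, 60)  # 4x the pixels of 4K

print(round(uhd_4k, 1))  # 12.7
print(round(uhd_8k, 1))  # 51.0
```

Doubling the frame rate to 120 fps, or moving to 30-bit color, is what pushes the 8K figure beyond the 100 Gb/s mark.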

We would like to congratulate MPC for winning the 2017 British Academy of Film and Television Arts (BAFTA) award for Special Visual Effects, and for winning the 2017 visual effects Oscar! Congratulations from the entire Mellanox team.

Find out more about the creation of the Jungle Book effects:

Mellanox Joins OpenCAPI and GenZ Consortiums, and Continues to Drive CCIX Consortium Specification Development

This week, Mellanox was part of three press releases that announced the formation of two new standardization consortiums – OpenCAPI and GenZ – as well as a progress update by the CCIX (Cache Coherent Interconnect for Accelerators) consortium.

These new open standards demonstrate an industry-wide collaborative effort and the need for open, flexible and standard solutions for the future data center. The three consortiums are dedicated to delivering technology enhancements that increase data center application performance, efficiency and scalability, in particular for data-intensive applications, machine learning, high performance computing, cloud, Web 2.0 and more.

Mellanox is delighted to be part of all three consortiums, to be able to leverage the new standards in future products, and to enhance Mellanox Open Ethernet and InfiniBand solutions, enabling better communication between the interconnect and the CPU, memory and accelerators.

There are many common goals among the different consortiums: to increase the bandwidth and reduce the latency between the CPU/memory/accelerators and the interconnect, to enable data coherency between the platform devices, and more. While each consortium differs in its specific area of focus, they all drive the need for open standards and the ability to leverage existing technologies.

The CCIX consortium has tripled its membership and is getting close to releasing its specification. The CCIX specification enables enhanced performance and capabilities over PCIe Gen4, leveraging the PCIe ecosystem to enhance future compute and storage platforms.

As a board member of CCIX and OpenCAPI, and a member of GenZ, Mellanox is determined to help drive the creation of open standards. We believe that open standards and the open collaborations between companies and users form the foundation for developing the necessary technology for the next generation cloud, Web 2.0, high performance, machine learning, big data, storage and other infrastructures.

Co-Design Architecture to Deliver Next Generation of Performance Boost

The latest revolution in HPC is the move to Co-Design architecture, a collaborative effort to reach Exascale performance by taking a holistic system-level approach to fundamental performance improvements. This collaboration enables all active system devices to become accelerators by orchestrating a more effective mapping of communication between devices and software in the system to produce a well-balanced architecture across the various compute elements, networking, and data storage infrastructures.

Co-Design architecture maximizes system efficiency and optimizes performance by ensuring that all components serve as co-processors in the data center, creating synergies between the hardware and the software, and between the different hardware elements within the data center. This is in diametric opposition to the traditional CPU-centric approach, which seeks to improve performance by on-loading ever more operations onto the CPU.

Rather, Co-Design recognizes that the CPU has reached the limits of its scalability, and offers an intelligent network as the ideal co-processor to share the responsibility for handling and accelerating workloads. Since the CPU has reached its maximum performance, the rest of the network must be better utilized to enable additional performance gains.

Moreover, the CPU was designed to compute, not to oversee data transfer. By reducing overhead on the CPU, it is freed from non-compute functions and allowed to focus on its original purpose. By placing the algorithms that handle those other functions on an intelligent network, performance improves both in the network and in the CPU itself.

This technology transition from CPU-centric architecture to Co-Design brings with it smart elements throughout the data center, with every active component becoming more intelligent. Data is processed wherever it is located, essentially providing in-network computing, instead of waiting for the processing bottleneck in the CPU.

The only solution is to enable the network to become a co-processor. Smart devices can move the data directly from the CPU or GPU memory into the network and back, and can analyze the data in the process. This means that the new model is for completely distributed in-network computing, wherever the data is located, whether at the node level, at the switch level, or at the storage level.

The first set of algorithms being migrated to the network are data aggregation protocols, which enable sharing and collecting information from parallel processes and distributions. By offloading these algorithms from the CPU to the intelligent network, a data center can see at least 10X performance improvement, resulting in a dramatic acceleration of various HPC applications and data analytics.
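As a toy illustration of the idea (a sketch of tree-based aggregation in general, not of Mellanox's actual in-network protocol), a tree reduction combines N partial results in log2(N) combining steps, rather than funnelling all N values through a single CPU:

```python
# Toy sketch: tree-based aggregation, the pattern behind collectives
# like allreduce that in-network computing offloads from the CPU.
def tree_reduce(values):
    """Pairwise (tree) sum; returns the total and the number of levels."""
    steps = 0
    while len(values) > 1:
        # Each pair is combined in parallel: one "network hop" per level.
        values = [values[i] + values[i + 1] if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
        steps += 1
    return values[0], steps

total, steps = tree_reduce(list(range(16)))
print(total, steps)  # 120 4  -> 16 partial results combined in log2(16) levels
```

In a real deployment the combining happens inside the switches themselves, so the aggregated result, not the raw per-node data, is what crosses the fabric.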

In the future we anticipate seeing most data algorithms and communication frameworks (such as MPI) managed and executed by the data center interconnect, enabling us to perform analytics on the data as the data moves.

Ultimately, the goal of any data center is to experience the highest possible performance with the utmost efficiency, thereby providing the best return on investment. For many years, the best way to do this has been to maximize the frequency of the CPUs and to increase the number of cores. However, the CPU-centric approach can no longer scale to meet the massive needs of today’s data centers, and performance gains must be achieved from other sources. The Co-Design approach addresses this issue by offloading non-compute functions from the CPU onto an intelligent interconnect that is capable of not only transporting data from one endpoint to another efficiently, but is now also able to handle in-network computing, in which it can analyze and process data while it is en route.

Sound interesting? Learn more at our upcoming webinar, Smart Interconnect: The Next Key Driver of HPC Performance Gains.




ISC 2016 Recap

Last week we participated in the International Supercomputing Conference – ISC’16. In reality, the event started the weekend before, kicked off by the HP-CAST conference, where Mellanox presented to the entire HP-CAST audience our newest smart interconnect solutions – solutions that enable the highest application performance and the best cost-performance for compute and storage infrastructures.

At ISC’16 we made several important announcements:

The new TOP500 supercomputers list was also announced last Monday, introducing a new number-one supercomputer in the world, built at the supercomputing center in Wuxi, China. The new world’s fastest supercomputer delivers 93 Petaflops (three times higher than the #2 system on the list), connecting nearly 41,000 nodes and more than ten million CPU cores. The offloading architecture of the Mellanox interconnect solution is the key to providing world-leading performance, scalability and efficiency, connecting the highest number of nodes and CPU cores within a single supercomputer.

We have witnessed ecosystem demand for smart interconnect solutions that offload both network activity and data algorithms, which is critical for delivering higher application performance, efficiency and scalability.

For those who could not attend ISC, we have a video of the highlights you can view here:

And for those of you on the go, you can use your mobile phone to view the Mellanox 360 gallery of ConnectX-5 advantages:

See you at SC’16.
