
UF launches HiPerGator, the state’s most powerful supercomputer

GAINESVILLE, Fla. — The University of Florida today unveiled the state’s most powerful supercomputer, a machine that will help researchers find life-saving drugs, make decades-long weather forecasts and improve armor for troops.

The HiPerGator supercomputer and recent tenfold increase in the size of the university’s data pipeline make UF one of the nation’s leading public universities in research computing.

“If we expect our researchers to be at the forefront of their fields, we need to make sure they have the most powerful tools available to science, and HiPerGator is one of those tools,” UF President Bernie Machen said. “The computer removes the physical limitations on what scientists and engineers can discover. It frees them to follow their imaginations wherever they lead.”

For UF immunologist David Ostrov, HiPerGator will slash a months-long test to identify safe drugs to a single eight-hour work day.

“HiPerGator can help get drugs from the computer to the clinic more quickly. We want to discover and deliver safe, effective therapies that protect or restore people’s health as soon as we can,” Ostrov said. “UF’s supercomputer will allow me to spend my time on research instead of computing.”

The Dell machine has a peak speed of 150 trillion calculations per second. Put another way, if each calculation were a word in a book, HiPerGator could read the millions of volumes in UF libraries several hundred times per second.
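As a rough sanity check of that analogy, here is a quick back-of-the-envelope calculation in Python. The library figures are illustrative assumptions (roughly six million volumes at 100,000 words each), not numbers from the announcement:

```python
# Back-of-the-envelope check of the "reading UF's libraries" analogy.
# Assumed figures (not from the article): ~6 million volumes,
# ~100,000 words per volume.
peak_calcs_per_sec = 150e12      # 150 trillion calculations per second
volumes = 6e6                    # assumed library size
words_per_volume = 1e5           # assumed average book length

words_in_libraries = volumes * words_per_volume           # 6e11 words
reads_per_sec = peak_calcs_per_sec / words_in_libraries   # = 250
print(f"Entire collection 'read' {reads_per_sec:.0f} times per second")
```

Under those assumptions, HiPerGator would indeed "read" the full collection a few hundred times every second.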

UF worked with Dell, Terascala, Mellanox and AMD to build a machine that makes supercomputing power available to all UF faculty and their collaborators, and that spreads HiPerGator’s computing power over multiple simultaneous jobs instead of focusing it on a single task at warp speed.

HiPerGator features the latest in high-performance computing technology from Dell and AMD with 16,384 processing cores; a Dell|Terascala HPC Storage Solution (DT-HSS 4.5) with the industry’s fastest open-source parallel file system; and Mellanox’s FDR 56Gb/s InfiniBand interconnects that provide the highest bandwidth and lowest latency. Together, these features give UF researchers unprecedented computational power and faster access to data, speeding their research.

UF unveiled HiPerGator on Tuesday as part of a ribbon-cutting ceremony for the 25,000-square-foot UF Data Center built to house it. HiPerGator was purchased and assembled for $3.4 million, and the Data Center was built for $15 million.

Also today, the university announced that it is the first in the nation to fully implement the Internet2 Innovation Platform, a combination of new technologies and services that will further speed research computing.

Benchmarking With Real Workloads and the Benefits of Flash and Fast Interconnects

Benchmarking is a term heard throughout the tech industry as a measure of success and pride in a particular solution’s ability to handle a given workload.  However, most benchmarks feature a simulated workload, and a deployed solution may perform quite differently in reality.  This is especially true with databases, since the types of data and workloads can vary greatly.

 

StorageReview.com and MarkLogic recently bucked the benchmarking trend, developing a benchmark that tests storage systems against an actual NoSQL database instance.  Testing is done in the StorageReview lab, and the first round focused heavily on host-side flash solutions.  Not surprisingly, flash-accelerated solutions took the day, posting the lowest overall latencies for all database operations, generally blowing non-flash solutions out of the water and showing that NoSQL database environments can benefit significantly from the addition of flash acceleration.

 

To accurately test all of these flash solutions, the test environment had to be set up so that no other component would bottleneck the testing.  Since it’s often the interconnect between database, client and storage nodes that limits overall system performance, StorageReview plumbed the test setup with none other than Mellanox’s ultra-low-latency FDR 56Gb/s InfiniBand adapter cards and switches, ensuring full flash performance and true apples-to-apples test results.

 

[Figure: MarkLogic Benchmark Setup]

Find out more about the benchmark and testing results at StorageReview’s website: http://www.storagereview.com/storagereview_debuts_marklogic_nosql_storage_performance_benchmark

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

Product Flash: DDN hScaler Hadoop Appliance

 

Of the many strange-sounding application and product names in the industry today, Hadoop remains one of the most recognized.  Why?  Well, we’ve talked about the impact that data creation, storage and management are having on the overall business atmosphere; it’s the quintessential Big Data problem.  Since all that data has no value unless it’s made useful and actionable through analysis, a variety of Big Data analytics software and hardware solutions have been created.  The most popular solution on the software side is, of course, Hadoop.  Recently, however, DDN announced an exciting new integrated solution to the Big Data equation: hScaler.

 

Based on DDN’s award-winning SFA 12K architecture, hScaler is the world’s first enterprise Hadoop appliance.  Unlike many Hadoop installations, hScaler is factory-configured and simple to deploy, eliminating the need for trial-and-error approaches that require substantial expertise and time to configure and tune.  The hScaler can be deployed in a matter of hours, compared to homegrown approaches requiring weeks or even months, allowing enterprises to focus on their actual business, and not the mechanics of the Hadoop infrastructure.

[Figure: DDN hScaler]

 

Performance-wise, the hScaler is no slouch.  Acceleration of the Hadoop shuffle phase via Mellanox InfiniBand and 40GbE RDMA interconnects, combined with ultra-dense storage and an efficient processing infrastructure, delivers results up to 7x faster than typical Hadoop installations. That means quicker time-to-insight and a more competitive business.

 

For enterprise installations, hScaler includes an integrated ETL engine, over 200 connectors for data ingestion and remote manipulation, high availability and management through DDN’s DirectMon framework.  Independently scalable storage and compute resources provide additional flexibility and cost savings, as organizations can choose to provision to meet only their current needs, and add resources later as their needs change.  Because hScaler’s integrated architecture is four times as dense as commodity installations, additional TCO dollars can be saved in floorspace, power and cooling.

 

Overall, hScaler looks to be a great all-in-one, plug-n-play package for enterprise organizations that need Big Data results fast, but don’t have the time, resources or desire to build an installation from the ground up.

 

Find out more about the hScaler Hadoop Appliance at DDN’s website: http://www.ddn.com/en/products/hscaler-appliance and http://www.ddn.com/en/press-releases/2013/new-era-of-hadoop-simplicity

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

Mellanox Joins the Network Intelligence Alliance

We are happy to join the Network Intelligence Alliance, an industry organization created for collaboration among the Network Economy’s technology providers. Through our participation in the Alliance, Mellanox will help develop and market innovative solutions that further improve networking for Enterprises, Cloud providers and Telecom Operators.

 

Using Mellanox’s low-latency, CPU-efficient 10/40GbE NICs and switches, customers can deploy an embedded virtual switch (eSwitch) to run virtual machine traffic with bare-metal performance, hardened security and QoS, all with simpler management through Software Defined Networking (SDN) and OpenFlow APIs. The hardware-based security and isolation features in our 10/40GbE solutions can enable wider adoption of multi-tenant clouds while maintaining user service level agreements (SLAs). In addition, by utilizing SR-IOV to bypass the hypervisor, customers can host more VMs when virtualizing network functions on their cloud and data center server and storage infrastructure.
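As an aside, here is a minimal sketch of how SR-IOV virtual functions are typically enabled on a Linux host through the kernel’s standard sysfs interface. The interface name and VF count are hypothetical, and a real deployment needs root privileges plus SR-IOV support in the adapter, driver and BIOS:

```python
# Minimal sketch: enable SR-IOV virtual functions (VFs) on a Linux host
# via the standard sysfs knob, letting VM traffic bypass the hypervisor's
# software switch. "eth2" and the VF count are illustrative assumptions.
from pathlib import Path

iface = "eth2"      # hypothetical Mellanox 10/40GbE port
num_vfs = 8         # VFs to expose to guest VMs

knob = Path(f"/sys/class/net/{iface}/device/sriov_numvfs")
knob.write_text("0")            # the kernel requires resetting to 0 first
knob.write_text(str(num_vfs))   # carve out the requested VFs
print(f"{iface}: {knob.read_text().strip()} VFs enabled")
```

Each VF then appears as its own PCIe device that can be passed directly to a virtual machine.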

 

In a world that depends on and runs on networks, accurate visibility and precise tracking of the data crossing those networks have become crucial to the availability, performance and security of applications and services. The growing complexity of IP transactions, the explosion of mobile applications and the mainstream adoption of cloud computing surpass the capabilities of conventional tools to improve how networks operate, expand services and cope with cybersecurity.  Just as Business Intelligence solutions emerged to unlock information hidden in the enterprise, Network Intelligence is an emerging category of technology that reveals the critical details locked inside network traffic and transactions.

Mellanox is excited to be a part of this great group, and we look forward to collaborating with the other members.

http://www.mellanox.com/

Mellanox encourages you to join our community and follow us on LinkedIn, the Mellanox Blog, Twitter, YouTube and the Mellanox Community.

Xyratex Advances Lustre Initiative

 

The Lustre® file system has played a significant role in the high-performance computing industry since its release in 2003.  Lustre is used in many of the top HPC supercomputers in the world today and has a strong development community behind it.  Last week, Xyratex announced plans to purchase the Lustre trademark, logo, website and associated intellectual property from Oracle, which acquired them with its purchase of Sun Microsystems in 2010. Xyratex will assume responsibility for customer support for Lustre and has pledged to continue its investment in and support of open source community development.

 

Both Xyratex and the Lustre community will benefit from the purchase. The Lustre community gains an active, stable promoter whose experience and expertise are aligned with its major market segment, HPC, and Xyratex can confidently continue to leverage the Lustre file system to drive increased value in its ClusterStor™ product line, which integrates Mellanox InfiniBand and Ethernet solutions. In a blog post on the Xyratex website, Ken Claffey made the point that Xyratex’s investment in Lustre is particularly important to the company, as Xyratex sees its business “indelibly intertwined with the health and vibrancy of the Lustre community” and bases all of its storage solutions on the Lustre file system. Sounds like a winning proposition for both sides.

 

Find out more about Xyratex’s acquisition of Lustre: http://www.xyratex.com/news/press-releases/xyratex-advances-lustre%C2%AE-initiative-assumes-ownership-related-assets

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

 

The Mellanox SX1018HP is a game changer for squeezing every drop of latency out of your network

Guest blog by Steve Barry, Product Line Manager for HP Ethernet Blade Switches

One of the barriers to adoption of blade server technology has been the limited number of blade network switches available.  Organizations requiring unique switching capabilities or extra bandwidth have had to rely on top-of-rack switches built by networking companies with little or no presence in the server market. The result was a base of potential customers who wanted the benefits of blade server technology but were forced to stay with rack servers and switches for lack of alternative networking products. Here’s where Hewlett-Packard has once again shown why it remains the leader in blade server technology, announcing a new blade switch that leaves the others in the dust.

 

[Figure: Mellanox SX1018HP Ethernet Blade Switch, front and side views]

 

Working closely with our partner Mellanox, HP has just announced a new blade switch for the c-Class enclosure, designed specifically for customers who demand performance and raw bandwidth. The Mellanox SX1018HP is built on the latest SwitchX ASIC technology and for the first time gives servers a direct path to 40Gb. In fact, this switch can provide up to sixteen 40Gb server downlinks and up to eighteen 40Gb network uplinks for an amazing 1.36Tb/s of throughput. Now even the most demanding virtualized server applications can get the bandwidth they need. Financial services customers, especially those involved in high-frequency trading, look to squeeze every drop of latency out of their network. Again, the Mellanox SX1018HP excels, dropping port-to-port latency to an industry-leading 230 ns at 40Gb. No other blade switch currently available can make that claim.
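That headline throughput figure follows directly from the port counts; here is a quick Python sanity check (counting one direction only, duplex not included):

```python
# Sanity check of the SX1018HP's quoted aggregate throughput:
# sixteen 40Gb server downlinks plus eighteen 40Gb network uplinks.
downlinks, uplinks, gbps_per_port = 16, 18, 40

total_gbps = (downlinks + uplinks) * gbps_per_port
print(f"Aggregate throughput: {total_gbps / 1000:.2f} Tb/s")  # 1.36 Tb/s
```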

For customers currently running InfiniBand networks, the appeal of collapsing their data requirements onto a single network has always been tempered by the lack of support for Remote Direct Memory Access (RDMA) on Ethernet networks. Again, HP and Mellanox lead the way in blade switches. The SX1018HP supports RDMA over Converged Ethernet (RoCE), allowing RDMA-tuned applications to work across both InfiniBand and Ethernet networks. When coupled with the recently announced HP544M 40Gb Ethernet/FDR InfiniBand adapter, customers can now support RDMA end to end on either network and begin the migration to a single Ethernet infrastructure. Finally, many customers already familiar with Mellanox IB switches provision and manage their networks with Unified Fabric Manager (UFM). The SX1018HP can be managed and provisioned with this same tool, providing a seamless transition to the Ethernet world. Of course, standard CLI and secure web browser management are also available.

Incorporating this switch along with the latest generation of HP blade servers and network adapters now gives any customer the same speed, performance and scalability that was previously limited to rack deployments using a hodgepodge of suppliers. Data center operations that cater to High Performance Cluster Computing (HPCC), Telecom, Cloud Hosting Services and Financial Services will find the HP blade server/Mellanox SX1018HP blade switch a compelling and unbeatable solution.

 

Click here for more information on the new Mellanox SX1018HP Ethernet Blade Switch.

Product Flash: NetApp EF540 Enterprise Flash Array

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

 

Everyone knows that flash storage is a big deal.  However, one of the gaps in the flash storage market has been in enterprise flash systems. Flash caching has been part of many enterprise storage environments for some time, but enterprise all-flash arrays haven’t.  This week, that all changed with the launch of NetApp’s EF540 Flash Array.  Targeted at business-critical applications, the EF540 offers the enterprise capabilities we’re used to in a NetApp system: high availability, reliability, manageability, snapshots, synchronous and asynchronous replication, backup, and a fully redundant architecture.  Add to that some impressive performance statistics—over 300,000 IOPS, sub-millisecond latency and 6GB/s throughput—and you have a system to be reckoned with.
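As a side note, those numbers imply an interesting average I/O size, though peak IOPS and peak throughput are usually measured at different block sizes, so this back-of-the-envelope figure is only illustrative:

```python
# Rough, illustrative derivation: what average I/O size would let the
# EF540's quoted 6GB/s throughput and 300,000 IOPS coincide?
throughput_bytes_per_sec = 6e9   # 6 GB/s
iops = 300_000                   # quoted IOPS figure

avg_io_bytes = throughput_bytes_per_sec / iops
print(f"Implied average I/O size: {avg_io_bytes / 1024:.1f} KiB")  # ~19.5 KiB
```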

[Figure: NetApp® EF540 Flash Array]

 

What does all this mean for the IT administrator?  Database application performance boosts of up to 500% over traditional storage infrastructures mean faster business operation results, decreased time-to-market and increased revenue.  Enterprise RAS features lead to less downtime, intuitive management and greater system ROI.

 

Of course, as mentioned earlier in the week in the Are You Limiting Your Flash Performance? post, the network that flash systems are connected to also plays a role in boosting performance and reliability.  To this end, NetApp has equipped the EF540 well, with 40Gb/s QDR InfiniBand, 10Gb/s iSCSI and 8Gb/s Fibre Channel connectivity options, all with automated I/O path failover for robustness.

 

Following the flash trend, NetApp also announced the all-new FlashRay family of purpose-built enterprise flash arrays, with expected availability in early 2014.  The FlashRay products will focus on efficient, flexible, scale-out architectures to maximize the value of flash installations across the entire enterprise data center stack.  Given all this and the enterprise features of the EF540, there’s no longer a reason not to jump on the flash bandwagon and start moving your enterprise ahead of the game.

 

Find out more about the EF540 Flash Array and FlashRay product family at NetApp’s website: http://www.netapp.com/us/products/storage-systems/flash-ef540/ and http://www.netapp.com/us/company/news/press-releases/news-rel-20130219-678946.aspx

 

Find out more about how Mellanox accelerates NetApp storage solutions at: https://solutionconnection.netapp.com/mellanox-connectx-3-virtual-protocol-interconnect-vpi-adapter-cards.aspx

HP updates server, storage and networking line-ups

 

HP updated its enterprise hardware portfolio, with the most notable additions being networking devices that combine wired and wireless infrastructure to better manage bring-your-own-device policies. One of the highlights is the Mellanox SX1018HP Ethernet switch, which lowers port latency and improves downlinks.

 

The Mellanox SX1018HP Ethernet Switch is the highest-performing Ethernet fabric solution in a blade switch form factor. It delivers up to 1.36Tb/s of non-blocking throughput, perfect for High-Performance Computing, High-Frequency Trading and Enterprise Data Center applications.

 

Utilizing the latest Mellanox SwitchX ASIC technology, the SX1018HP is an ultra-low-latency switch, ideally suited as an access switch providing InfiniBand-like performance with sixteen 10Gb/40Gb server-side downlinks and eighteen 40Gb QSFP+ uplinks to the core, and port-to-port latency as low as 230 ns.

 

The Mellanox SX1018HP Ethernet Switch has a rich set of Layer 2 networking and security features and supports faster application performance and enhanced server CPU utilization with RDMA over Converged Ethernet (RoCE), making this switch the perfect solution for any high performance Ethernet network.

 

Mellanox SX1018HP Ethernet Switch

 

HP is the first to provide 40Gb downlinks to each blade server, enabling InfiniBand-like performance in an Ethernet blade switch. In another industry first, the low-latency SX1018HP Ethernet Switch provides the lowest port-to-port latency of any blade switch, more than four times faster than previous switches.

 

When combined with the space, power and cooling benefits of blade servers, the Mellanox SX1018HP Ethernet Blade Switch provides the perfect network interface for Financial applications and high performance clusters.

 

How Windows Azure achieved 90.2 percent efficiency

Written By: Eli Karpilovski, Manager, Cloud Market Development

 

Windows Azure, one of the largest public cloud providers in the world today, recently ran the LINPACK system performance benchmark to demonstrate the capabilities of its ‘Big Compute’ hardware. Windows Azure submitted the results and earned a place among the world’s largest supercomputers on the TOP500 list.

 

The results were super impressive: 151.3 TFlops on 8,065 cores with 90.2 percent efficiency, 33% higher than other major cloud providers that ran the same benchmark over 10GbE!
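For context, LINPACK efficiency is simply measured performance (Rmax) divided by theoretical peak (Rpeak), so the implied peak can be worked backward from the published numbers; a quick sketch:

```python
# LINPACK efficiency = Rmax / Rpeak. Working backward from the
# published Azure figures to the implied theoretical peak.
rmax_tflops = 151.3      # measured LINPACK result
efficiency = 0.902       # 90.2 percent

rpeak_tflops = rmax_tflops / efficiency           # ~167.7 TFlops
gflops_per_core = rpeak_tflops * 1000 / 8065      # ~20.8 GFlops per core
print(f"Rpeak = {rpeak_tflops:.1f} TFlops ({gflops_per_core:.1f} GFlops/core)")
```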

 

What is their secret? 40Gb/s InfiniBand network with RDMA – the Mellanox way.

 

Learn more about it >>  (http://blogs.msdn.com/b/windowsazure/archive/2012/11/13/windows-azure-benchmarks-show-top-performance-for-big-compute.aspx)

 

Join the Mellanox Cloud Community: http://community.mellanox.com/groups/cloud

Are You Limiting Your Flash Performance?

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

As flash storage has become increasingly available at lower and lower prices, many organizations are leveraging flash’s low-latency features to boost application and storage performance in their data centers.  Flash storage vendors claim their products can increase application performance by leaps and bounds, and a great many data center administrators have found that to be true.  But what if your flash could do even more?

 

One of the main features of flash storage is its ability to drive massive amounts of data onto the network with very low latency.  Data can be written to and retrieved from flash storage in a matter of microseconds at speeds exceeding several gigabytes per second, allowing applications to get the data they need and store their results in record time.  Now, suppose you connect that ultra-fast storage to your compute infrastructure using 1GbE technology.  A single 1GbE port can transfer data at around 120MB/s.  For a flash-based system driving, say, 8GB/s of data, you’d need sixty-seven 1GbE ports to avoid bottlenecking your system.  Most systems have only eight ports available, so using 1GbE would limit your lightning-fast flash to just under 1GB/s, an eighth of the performance you could be getting. That’s a bit like buying a Ferrari F12berlinetta (top speed: over 211 mph) and committing to drive it only on residential streets (speed limit: 25 mph).  Sure, you’d look cool, but racing neighborhood kids on bicycles isn’t really the point of a Ferrari, is it?

Upgrade that 1GbE connection to 10GbE, and you can cover your full flash bandwidth with seven ports, if your CPU can handle the increased TCP stack overhead and still perform application tasks.  In terms of our vehicular analogy, you’re driving the Ferrari on the highway now, but you’re still stuck in third gear.  So, how do you get that Ferrari to the Bonneville Salt Flats and really let loose?

 

Take one step further in your interconnect deployment and upgrade that 10GbE connection to 40GbE with RDMA over Converged Ethernet (RoCE), or to a 56Gb/s FDR InfiniBand connection. Two ports of either protocol will give you full-bandwidth access to your flash system, and RDMA means ultra-low CPU overhead and increased overall efficiency.  Your flash system will perform to its fullest potential, and your application performance will improve drastically.  Think land-speed records, except in a data center.
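Here is a small Python sketch that reproduces the port-count arithmetic from the last two paragraphs; the per-port usable-bandwidth figures are rounded approximations:

```python
# How many ports of each interconnect are needed to carry 8 GB/s
# from a flash system? Usable-bandwidth figures are approximate.
import math

flash_gbytes_per_sec = 8.0
links = {                      # approximate usable GB/s per port
    "1GbE":          0.12,
    "10GbE":         1.2,
    "40GbE (RoCE)":  5.0,
    "FDR IB 56Gb/s": 6.8,
}

for name, gbs in links.items():
    ports = math.ceil(flash_gbytes_per_sec / gbs)
    print(f"{name:>14}: {ports} port(s)")   # 67, 7, 2, 2
```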

 

[Figure: Flash and RDMA diagram]

 

So, if your flash-enhanced application performance isn’t quite what you expected, perhaps it’s your interconnect and not your flash system that’s underperforming.

 

Find out more about RoCE and InfiniBand technologies and how they can enhance your storage performance: http://www.mellanox.com/page/storage and http://www.mellanox.com/blog/2013/01/rdma-interconnects-for-storage-fast-efficient-data-delivery/