All posts by admin

Mellanox Collaborates with Dell to Maximize Application Performance in Virtualized Data Centers

Dell Fluid Cache for SAN is enabled by ConnectX®-3 10/40GbE Network Interface Cards (NICs) with Remote Direct Memory Access (RDMA). The Dell Fluid Cache for SAN solution reduces latency and improves I/O performance for applications such as Online Transaction Processing (OLTP) and Virtual Desktop Infrastructure (VDI).

Dell lab tests have revealed that Dell Fluid Cache for SAN can reduce the average response time by 99 percent and achieve four times more transactions per second with a six-fold increase in concurrent users.


ConnectX-3 Leverages Network Services in SDN Era

Guest blog by: Alon Harel

 

If your job is related to networking, be it as a network admin, an R&D engineer, an architect, or any other role involving networks, it is very likely you have heard people around you (or, gasp, maybe even heard yourself) express doubts about the proliferation of Software Defined Networking (SDN) and OpenFlow. How many times have you encountered skepticism about this revolutionary concept of decoupling the control and data planes and “re-inventing the wheel”? Many people used to think, “This is hype; it will go away like other new technologies did, and it will never replace the traditional network protocols…” Well, if you perceive SDN/OpenFlow only as a replacement for today’s distributed network protocols, those doubts may turn out to be valid. The idea that “OpenFlow is here to replace the old, rigid protocols” is pretty much the message one gets from the early OpenFlow white papers, which described the primary motivation for moving to OpenFlow as the desire to enable innovation in the control plane (that is, the ability to test and apply new forwarding schemes in the network).

 

This long preface is the background for the use case we present below. This use case is not about a new forwarding scheme, nor is it about re-implementing protocols; rather, it is a complementary solution for existing traditional networks. It is about adding network services in an agile way, allowing cost-efficient scalability. It is innovative and fresh and, most importantly, it could have not been done prior to the SDN era. Its simplicity and the fact that it relies on some very basic notions of OpenFlow can only spark the imagination about what can be done further using the SDN toolbox.

 

RADWARE’s security appliance, powered by Mellanox’s OpenFlow-enabled ConnectX®-3 adapter, brings a new value proposition to the network appliance market, demonstrating the power of SDN by enabling the addition of network services in an efficient and scalable way.

 

Security and attack-mitigation services are applied to pre-defined protected objects (servers) identified by their IP addresses. Prior to SDN, the security appliance had to be a ‘bump in the wire,’ because all traffic destined for the protected objects had to traverse it. This, of course, dictates the physical network topology, is limited by the appliance’s port bandwidth, and imposes high complexity when scale comes into play.

 

RADWARE’s DefenseFlow software identifies abnormal network behavior by monitoring the number of bytes and packets in specific flows destined for the protected objects. The monitoring is performed by installing specific flows in the forwarding hardware solely to count the data traversing them. Flow configuration and counter retrieval are performed via standard OpenFlow primitives. The naïve approach would be to use the OpenFlow switches to accommodate the flows (counters); however, the limited resource capacity of commodity switches (mainly TCAM, which is the prime resource for OpenFlow) rules out this option. (Note that a switch may be in the data path for hundreds or thousands of VMs, each with several monitored flows.) Thus, the viability of the solution must come from somewhere else. Enter Mellanox’s OpenFlow-enabled ConnectX-3 SR-IOV adapter.
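To make the mechanics concrete, here is a minimal sketch of how a monitoring application could install such a counting flow through a controller REST API. It assumes Floodlight’s Static Flow Pusher (the endpoint path and field names vary between Floodlight versions), and the DPID, IP address, and flow name are placeholders rather than anything from RADWARE’s actual implementation:

    import json
    import requests  # third-party HTTP client (pip install requests)

    # Assumed Floodlight Static Flow Pusher endpoint; path differs by version.
    PUSH_URL = "http://127.0.0.1:8080/wm/staticflowpusher/json"

    def install_counting_flow(dpid, protected_ip, name):
        """Install a flow that matches traffic toward one protected object.

        The action leaves forwarding unchanged; the entry exists purely so
        the hardware maintains byte/packet counters for this destination.
        """
        entry = {
            "switch": dpid,              # DPID of the eSwitch (placeholder)
            "name": name,
            "priority": "32768",
            "eth_type": "0x0800",        # match IPv4 traffic...
            "ipv4_dst": protected_ip,    # ...destined for the protected server
            "active": "true",
            "actions": "output=normal",  # forward as usual, just count
        }
        requests.post(PUSH_URL, data=json.dumps(entry)).raise_for_status()

    install_counting_flow("00:00:00:02:c9:11:22:33", "10.0.0.5", "protect-web-1")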

 

ConnectX-3 incorporates an embedded switch (or eSwitch) that enables VM communication at bare-metal performance. The HCA driver includes an OpenFlow agent, based on the Indigo-2 open source project, which allows the eSwitch to be controlled via the standard OpenFlow protocol.

 

Installing the flows (counters) on the edge switch (eSwitch) makes a lot of sense. First, each eSwitch is responsible for only a relatively small number of protected objects (only those servers running on its host), so the scale obstacle becomes a non-issue. Moreover, smarter monitoring (for example, generating an event when a threshold is crossed) can easily be added, offloading the monitoring application (DefenseFlow, in this case).

 

You might think, “What’s new about that? We already have Open vSwitch (OVS) on the server, which is OpenFlow-capable.” Well, when performance is the name of the game, OVS is out and SR-IOV technology is in. In SR-IOV mode, VM communication interfaces the hardware directly, bypassing any virtual switch software; therefore, OVS’s OpenFlow capabilities cannot be used in this mode (as OVS is not part of the data path).
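A quick way to see this distinction on a Linux host: when SR-IOV is enabled, the kernel exposes the adapter’s virtual functions (VFs) through standard sysfs attributes, and each VF handed to a VM is switched by the adapter hardware rather than by OVS. A small sketch (the interface name eth2 is a placeholder):

    # Inspect SR-IOV state via standard Linux sysfs attributes.
    from pathlib import Path

    dev = Path("/sys/class/net/eth2/device")   # placeholder interface name
    total = int((dev / "sriov_totalvfs").read_text())
    active = int((dev / "sriov_numvfs").read_text())
    print(f"virtual functions: {active} active of {total} supported")
    # Each active VF is attached directly to a VM; its traffic is switched
    # in hardware by the ConnectX-3 eSwitch and never passes through OVS.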

 

Let’s look at this in practice by describing the setup and operation of the joint solution. The setup is based on standard servers equipped with Mellanox ConnectX-3 adapters and an OpenFlow-enabled switch, together with RADWARE’s DefensePro appliance and DefenseFlow software, which interacts with the Floodlight OpenFlow controller.


Figure 1 – Setup

 

Here’s a description of the joint solution’s operation, as depicted in Figure 2 (a monitoring sketch follows the list):

  • DefenseFlow installs the relevant flows on each ConnectX-3 adapter.
  • The security appliance does not participate in the normal data path.
  • ConnectX-3 counts traffic matching the installed flows.
  • Flow counters are retrieved from ConnectX-3.
  • Once an attack is identified, only relevant traffic is diverted to the security appliance (where it is cleared of malicious flows and inserted back toward its destination).
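A minimal sketch of the monitoring loop behind the counting and retrieval steps might look like the following. It assumes Floodlight’s per-switch flow statistics endpoint; the exact JSON layout and field names (byteCount below) differ between controller versions, and the 100 Mb/s trigger is an arbitrary stand-in for DefenseFlow’s real behavioral analysis:

    import time
    import requests

    STATS_URL = "http://127.0.0.1:8080/wm/core/switch/{dpid}/flow/json"
    THRESHOLD_BPS = 100e6  # arbitrary example trigger: 100 Mb/s to one server

    def byte_counts(dpid):
        """Fetch per-flow byte counters for one eSwitch from the controller."""
        reply = requests.get(STATS_URL.format(dpid=dpid)).json()
        flows = reply.get(dpid, {}).get("flows", [])
        return {f["match"].get("ipv4_dst", "?"): int(f["byteCount"]) for f in flows}

    def watch(dpid, interval=5.0):
        last = byte_counts(dpid)
        while True:
            time.sleep(interval)
            now = byte_counts(dpid)
            for dst, count in now.items():
                rate = (count - last.get(dst, 0)) * 8 / interval  # bits per second
                if rate > THRESHOLD_BPS:
                    print(f"anomaly: {dst} receiving {rate / 1e6:.0f} Mb/s")
                    # here DefenseFlow would divert traffic to DefensePro
            last = now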

 

 


Figure 2 – Joint Solution

 

I would argue that any skeptic seeing this example use case, and the added value it brings to existing network environments using these very basic OpenFlow knobs, would have to reconsider their SDN doubts…

UF launches HiPerGator, the state’s most powerful supercomputer

GAINESVILLE, Fla. — The University of Florida today unveiled the state’s most powerful supercomputer, a machine that will help researchers find life-saving drugs, make decades-long weather forecasts and improve armor for troops.

The HiPerGator supercomputer and recent tenfold increase in the size of the university’s data pipeline make UF one of the nation’s leading public universities in research computing.

“If we expect our researchers to be at the forefront of their fields, we need to make sure they have the most powerful tools available to science, and HiPerGator is one of those tools,” UF President Bernie Machen said. “The computer removes the physical limitations on what scientists and engineers can discover. It frees them to follow their imaginations wherever they lead.”

For UF immunologist David Ostrov, HiPerGator will slash a months-long test to identify safe drugs to a single eight-hour work day.

“HiPerGator can help get drugs from the computer to the clinic more quickly. We want to discover and deliver safe, effective therapies that protect or restore people’s health as soon as we can,” Ostrov said. “UF’s supercomputer will allow me to spend my time on research instead of computing.”

The Dell machine has a peak speed of 150 trillion calculations per second. Put another way, if each calculation were a word in a book, HiPerGator could read the millions of volumes in UF libraries several hundred times per second.
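The analogy survives a back-of-the-envelope check (the library size and average book length below are round-number assumptions, not UF figures):

    # Rough sanity check of the "reading the library" analogy.
    calcs_per_second = 150e12        # HiPerGator peak: 150 trillion calc/s
    volumes = 3e6                    # assumed: a few million library volumes
    words_per_volume = 100_000       # assumed average book length
    library_words = volumes * words_per_volume
    print(f"{calcs_per_second / library_words:.0f} read-throughs per second")
    # ~500, i.e. "several hundred times per second"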

UF worked with Dell, Terascala, Mellanox and AMD to build a machine that makes supercomputing power available to all UF faculty and their collaborators, and that spreads HiPerGator’s computing power over multiple simultaneous jobs instead of focusing it on a single task at warp speed.

HiPerGator features the latest in high-performance computing technology from Dell and AMD, with 16,384 processing cores; a Dell|Terascala HPC Storage Solution (DT-HSS 4.5) with the industry’s fastest open-source parallel file system; and Mellanox FDR 56Gb/s InfiniBand interconnects that provide the highest bandwidth and lowest latency. Together, these features give UF researchers unprecedented computational power and faster access to data to advance their research.

UF unveiled HiPerGator on Tuesday as part of a ribbon-cutting ceremony for the 25,000-square-foot UF Data Center built to house it. HiPerGator was purchased and assembled for $3.4 million, and the Data Center was built for $15 million.

Also today, the university announced that it is the first in the nation to fully implement the Internet2 Innovation Platform, a combination of new technologies and services that will further speed research computing.

Benchmarking With Real Workloads and the Benefits of Flash and Fast Interconnects

Benchmarking is a term heard throughout the tech industry as a measure of success and pride in a particular solution’s ability to handle this or that workload.  However, most benchmarks feature a simulated workload, and in reality a deployed solution may perform quite differently.  This is especially true with databases, since the types of data and workloads can vary greatly.

 

StorageReview.com and MarkLogic recently bucked the benchmarking trend, developing a benchmark that tests storage systems against an actual NoSQL database instance.  Testing is done in the StorageReview lab, and the first round focused heavily on host-side flash solutions.  Not surprisingly, flash-accelerated solutions took the day, with the lowest overall latencies for all database operations, generally blowing non-flash solutions out of the water and showing that NoSQL database environments can benefit significantly from the addition of flash-accelerated systems.

 

In order to test all of these flash solutions accurately, the environment had to be set up so that no other component would bottleneck the testing.  Since it’s often the interconnect between database, client and storage nodes that limits overall system performance, StorageReview plumbed the test setup with Mellanox’s ultra-low-latency FDR 56Gb/s InfiniBand adapter cards and switches to ensure the flash could deliver its full performance and the results would be a true apples-to-apples comparison.

 


MarkLogic Benchmark Setup

Find out more about the benchmark and testing results at StorageReview’s website: http://www.storagereview.com/storagereview_debuts_marklogic_nosql_storage_performance_benchmark

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

Product Flash: DDN hScaler Hadoop Appliance

 

Of the many strange-sounding application and product names in the industry today, Hadoop remains one of the most recognized.  Why?  Well, we’ve talked about the impact that data creation, storage and management are having on the overall business atmosphere; it’s the quintessential Big Data problem. Since all that data has no value unless it’s made useful and actionable through analysis, a variety of Big Data analytics software and hardware solutions have been created.  The most popular solution on the software side is, of course, Hadoop.  Recently, however, DDN announced an exciting new integrated solution to the Big Data equation: hScaler.

 

Based on DDN’s award-winning SFA 12K architecture, hScaler is the world’s first enterprise Hadoop appliance.  Unlike many Hadoop installations, hScaler is factory-configured and simple to deploy, eliminating the need for trial-and-error approaches that require substantial expertise and time to configure and tune.  The hScaler can be deployed in a matter of hours, compared to homegrown approaches requiring weeks or even months, allowing enterprises to focus on their actual business, and not the mechanics of the Hadoop infrastructure.


DDN hScaler

 

Performance-wise, the hScaler is no slouch.  Acceleration of the Hadoop shuffle phase through Mellanox InfiniBand and 40GbE RDMA interconnects, combined with ultra-dense storage and an efficient processing infrastructure, delivers results up to 7x faster than typical Hadoop installations. That means quicker time-to-insight and a more competitive business.

 

For enterprise installations, hScaler includes an integrated ETL engine, over 200 connectors for data ingestion and remote manipulation, high availability and management through DDN’s DirectMon framework.  Independently scalable storage and compute resources provide additional flexibility and cost savings, as organizations can choose to provision to meet only their current needs, and add resources later as their needs change.  Because hScaler’s integrated architecture is four times as dense as commodity installations, additional TCO dollars can be saved in floorspace, power and cooling.

 

Overall, hScaler looks to be a great all-in-one, plug-n-play package for enterprise organizations that need Big Data results fast, but don’t have the time, resources or desire to build an installation from the ground up.

 

Find out more about the hScaler Hadoop Appliance at DDN’s website: http://www.ddn.com/en/products/hscaler-appliance and http://www.ddn.com/en/press-releases/2013/new-era-of-hadoop-simplicity

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

Mellanox Joins the Network Intelligence Alliance

We are happy to join the Network Intelligence Alliance, an industry organization created for collaboration among the Network Economy’s technology providers. Through our participation in the Alliance, Mellanox will help develop and market innovative solutions that further improve networking solutions for Enterprise, Cloud providers and Telecom Operators.

 

Using Mellanox’s low-latency, CPU-efficient 10/40GbE NICs and switches, customers can deploy an embedded virtual switch (eSwitch) to run virtual machine traffic with bare-metal performance and hardened security and QoS, all with simpler management through Software Defined Networking (SDN) and OpenFlow APIs. The hardware-based security and isolation features in our 10/40GbE solutions can enable wider adoption of multi-tenant clouds while maintaining user service level agreements (SLAs). In addition, by utilizing SR-IOV to bypass the hypervisor, customers can run more VMs when virtualizing network functions on their cloud and data center server and storage deployments.

 

In a world that now depends and runs on networks, accurate visibility and precise tracking of the data crossing them have become crucial to the availability, performance and security of applications and services. The growing complexity of IP transactions, the explosion of mobile applications, and the mainstream adoption of cloud computing surpass the capabilities of conventional tools to improve how networks operate, expand services, and cope with cybersecurity.  Just as Business Intelligence solutions emerged to unlock information hidden in the enterprise, Network Intelligence is an emerging category of technology that reveals the critical details locked inside network traffic and transactions.
Mellanox is excited to be a part of this great group and we are looking forward to collaborating with other members.

http://www.mellanox.com/

Mellanox encourages you to join our community and follow us on: LinkedIn, Mellanox Blog, Twitter, YouTube, and the Mellanox Community.

Xyratex Advances Lustre Initiative

 

The Lustre® file system has played a significant role in the high performance computing industry since its release in 2003.  Lustre is used in many of the top HPC supercomputers in the world today, and has a strong development community behind it.  Last week, Xyratex announced plans to purchase the Lustre trademark, logo, website and associated intellectual property from Oracle, who acquired them with the purchase of Sun Microsystems in 2010. Xyratex will assume responsibility for customer support for Lustre and has pledged to continue its investment in and support of the open source community development.

 

Both Xyratex and the Lustre community will benefit from the purchase. The Lustre community now has an active, stable promoter whose experience and expertise are aligned with its major market segment, HPC, and Xyratex can confidently continue to leverage the Lustre file system to drive increased value in its ClusterStor™ product line, which integrates Mellanox InfiniBand and Ethernet solutions. In a blog post on the Xyratex website, Ken Claffey made the point that Xyratex’s investment in Lustre is particularly important to the company, as Xyratex sees its business “indelibly intertwined with the health and vibrancy of the Lustre community” and bases all of its storage solutions on the Lustre file system. Sounds like a winning proposition for both sides.

 

Find out more about Xyratex’s acquisition of Lustre: http://www.xyratex.com/news/press-releases/xyratex-advances-lustre%C2%AE-initiative-assumes-ownership-related-assets

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

 

The Mellanox SX1018HP is a game changer for squeezing every drop of latency out of your network

Guest blog by Steve Barry, Product Line Manager for HP Ethernet Blade Switches

One of the barriers to adoption of blade server technology has been the limited number of network switches available for blade enclosures.  Organizations requiring unique switching capabilities or extra bandwidth have had to rely on top-of-rack switches built by networking companies with little or no presence in the server market. The result was a base of potential customers who wanted the benefits of blade server technology but were forced to remain with rack servers and switches due to the lack of alternative networking products. Here’s where Hewlett-Packard has once again shown why it remains the leader in blade server technology, announcing a new blade switch that leaves the others in the dust.

 


Mellanox SX1018HP Ethernet Blade Switch

    

 

Working closely with our partner Mellanox, HP has just announced a new blade switch for the c-Class enclosure designed specifically for customers who demand performance and raw bandwidth. The Mellanox SX1018HP is built on the latest SwitchX ASIC technology and, for the first time, gives servers a direct path to 40Gb. In fact, this switch can provide up to sixteen 40Gb server downlinks and up to eighteen 40Gb network uplinks for an amazing 1.3Tb/s of throughput. Now even the most demanding virtualized server applications can get the bandwidth they need. Financial services customers, especially those involved in high-frequency trading, look to squeeze every drop of latency out of their networks. Again, the Mellanox SX1018HP excels, dropping port-to-port latency to an industry-leading 230 ns at 40Gb. No other blade switch currently available can make that claim.
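To put 230 ns in perspective, a quick calculation compares the switch’s port-to-port latency with the time it takes merely to serialize a frame onto a 40Gb/s link:

    # Serialization (wire) delay vs. switch latency at 40Gb/s.
    LINK_BPS = 40e9
    SWITCH_NS = 230  # SX1018HP port-to-port latency

    for frame_bytes in (64, 1500):
        wire_ns = frame_bytes * 8 / LINK_BPS * 1e9
        print(f"{frame_bytes}B frame: {wire_ns:.1f} ns on the wire "
              f"vs {SWITCH_NS} ns in the switch")
    # A full-size 1500-byte frame takes ~300 ns just to transmit, so the
    # switch hop is no longer the dominant term in end-to-end latency.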

For customers currently running InfiniBand networks, the appeal of collapsing their data requirements onto a single network has always been tempered by the lack of support for Remote Direct Memory Access (RDMA) on Ethernet networks. Again, HP and Mellanox lead the way in blade switches. The SX1018HP supports RDMA over Converged Ethernet (RoCE), allowing RDMA-tuned applications to work across both InfiniBand and Ethernet networks. When coupled with the recently announced HP544M 40Gb Ethernet/FDR InfiniBand adapter, customers can now support RDMA end to end on either network and begin the migration to a single Ethernet infrastructure. Finally, many customers already familiar with Mellanox InfiniBand switches provision and manage their networks with Unified Fabric Manager (UFM). The SX1018HP can be managed and provisioned with this same tool, providing a seamless transition to the Ethernet world. Of course, standard CLI and secure web browser management are also available.

Incorporating this switch along with the latest generation of HP blade servers and network adapters now gives any customer the same speed, performance and scalability that was previously limited to rack deployments using a hodgepodge of suppliers. Data center operations that cater to High Performance Cluster Computing (HPCC), telecom, cloud hosting services and financial services will find the HP blade server/Mellanox SX1018HP blade switch a compelling and unbeatable solution.

 

 Click here for more information on the new Mellanox SX1018HP Ethernet Blade Switch.

Product Flash: NetApp EF540 Enterprise Flash Array

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

 

Everyone knows that flash storage is a big deal.  However, one of the gaps in the flash storage market has been enterprise flash systems: flash caching has been part of many enterprise storage environments for some time, but enterprise all-flash arrays haven’t.  This week, that all changed with the launch of NetApp’s EF540 Flash Array.  Targeted at business-critical applications, the EF540 offers the enterprise features we’re used to in a NetApp system: high availability, reliability, manageability, snapshots, synchronous and asynchronous replication, backup, and a fully redundant architecture.  Add to that some impressive performance statistics (over 300,000 IOPS, sub-millisecond latency and 6GB/s throughput) and you have a system to be reckoned with.

NetApp EF540 transparent-sm.png

NetApp® EF540 Flash Array

 

What does all this mean for the IT administrator?  Database application performance boosts of up to 500% over traditional storage infrastructures mean faster business operation results, decreased time-to-market and increased revenue.  Enterprise RAS features lead to less downtime, intuitive management and greater system ROI.

 

Of course, as mentioned earlier in the week in the Are You Limiting Your Flash Performance? post, the network that flash systems are connected to also plays a role in boosting performance and reliability.  To this end, NetApp has equipped the EF540 well, with 40Gb/s QDR InfiniBand, 10Gb/s iSCSI and 8Gb/s Fibre Channel connectivity options, all with automated I/O path failover for robustness.

 

Following the flash trend, NetApp also announced the all-new FlashRay family of purpose-built enterprise flash arrays, with expected availability in early 2014.  The FlashRay products will focus on efficient, flexible, scale-out architectures to maximize the value of flash installments across the entire enterprise data center stack.  Given all this and the enterprise features of the EF540, there’s no longer a reason not to jump on the flash bandwagon and start moving your enterprise ahead of the game.

 

Find out more about the EF540 Flash Array and FlashRay product family at NetApp’s website: http://www.netapp.com/us/products/storage-systems/flash-ef540/ and http://www.netapp.com/us/company/news/press-releases/news-rel-20130219-678946.aspx

 

Find out more about how Mellanox accelerates NetApp storage solutions at: https://solutionconnection.netapp.com/mellanox-connectx-3-virtual-protocol-interconnect-vpi-adapter-cards.aspx

HP updates server, storage and networking line-ups

 

HP updated its enterprise hardware portfolio, with the most notable additions being networking devices that combine wired and wireless infrastructure to better manage bring-your-own-device policies. One of the highlights is the Mellanox SX1018HP Ethernet switch, which lowers port latency and improves downlinks.

 

The Mellanox SX1018HP Ethernet Switch is the highest-performing Ethernet fabric solution in a blade switch form factor. It delivers up to 1.36Tb/s of non-blocking throughput, perfect for High-Performance Computing, High-Frequency Trading and Enterprise Data Center applications.

 

Utilizing the latest Mellanox SwitchX ASIC technology, the SX1018HP is an ultra-low-latency switch ideally suited as an access switch, providing InfiniBand-like performance with sixteen 10Gb/40Gb server-side downlinks and eighteen 40Gb QSFP+ uplinks to the core, and port-to-port latency as low as 230 ns.

 

The Mellanox SX1018HP Ethernet Switch has a rich set of Layer 2 networking and security features and supports faster application performance and enhanced server CPU utilization with RDMA over Converged Ethernet (RoCE), making this switch the perfect solution for any high performance Ethernet network.

 

Mellanox SX1018HP Ethernet Switch

 

HP is the first to provide 40Gb downlinks to each blade server, enabling InfiniBand-like performance in an Ethernet blade switch. In another industry first, the low-latency HP SX1018 Ethernet Switch provides the lowest port-to-port latency of any blade switch, more than four times faster than previous switches.

 

When combined with the space, power and cooling benefits of blade servers, the Mellanox SX1018HP Ethernet Blade Switch provides the perfect network interface for Financial applications and high performance clusters.

 

Download the Data Sheet: