Category Archives: Storage

Product Flash – Bridgeworks Potomac 40Gb iSCSI-to-SAS Bridge

Written By: Erin Filliater, Enterprise Market Development Manager

 

The amount of digital information worldwide is growing daily, and all of that data has to be stored somewhere, usually in external storage infrastructures, systems and devices. Of course, for that information to be useful, you need fast access to it when your application calls for it. Enter Bridgeworks’ newest bridging product, the Potomac ESAS402800 40Gb iSCSI-to-SAS protocol bridge. The first bridge to take advantage of 40Gb/s data center infrastructures, the ESAS402800 integrates Mellanox 40Gb iSCSI technology to provide the fastest iSCSI SAN connectivity to external SAS devices such as disk arrays, LTO-6 tape drives and tape libraries, letting data center administrators integrate the newest storage technologies into their environments without disrupting their legacy systems.

In addition to flat-out speed, plug-and-play connectivity and web-based GUI management make the ESAS402800 easy to install and operate. Adaptive read- and write-forward caching allows the bridge to share storage effectively in today’s highly virtualized environments.

 

All of this adds up to easier infrastructure upgrades, more effective storage system migration and realization of the full performance potential of new SAS-connected storage systems. Pretty impressive for a single device.

 

Find out more about the recent Potomac ESAS402800 40Gb iSCSI-to-SAS bridge launch at Bridgeworks’ website:

http://www.4bridgeworks.com/news_and_press_releases/press_releases.phtml?id=252&item=26

RDMA Interconnects for Storage: Fast, Efficient Data Delivery

Written By: Erin Filliater, Enterprise Market Development Manager

We all know that we live in a world of data, data and more data. In fact, IDC predicts that in 2015, the amount of data created and replicated will reach nearly 8 Zettabytes. With all of this data stored in external storage systems, the way data is transferred from storage to a server or application becomes critical to effectively utilizing that information. Couple this with today’s shrinking IT budgets and “do more with less” mindsets, and you have a real challenge on your hands. So, what’s a data center storage administrator to do?

Remote Direct Memory Access (RDMA)-based interconnects offer an ideal option for boosting data center efficiency, reducing overall complexity and increasing data delivery performance. Available over InfiniBand and, via RDMA over Converged Ethernet (RoCE), over Ethernet as well, RDMA allows data to be transferred from storage to server without passing through the CPU and main-memory path of a TCP/IP stack. Greater CPU and overall system efficiency is attained because the storage systems’ and servers’ compute power is spent on just that, computing, instead of on processing network traffic. Bandwidth and latency are also compelling: both InfiniBand and RoCE feature microsecond-level transfer latencies and bandwidths of up to 56Gb/s, and both can be used effectively for data center interconnect consolidation. This translates to screamingly fast application performance, better storage and data center utilization, and simplified network management.
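To make the zero-copy idea concrete, here is a minimal sketch against the libibverbs API, the user-space verbs library used for both InfiniBand and RoCE. It only enumerates an RDMA device and registers a buffer that the adapter can then access directly; queue pair setup and the actual RDMA read/write work requests are omitted, and the buffer size is purely illustrative.

/* Minimal libibverbs sketch: find an RDMA device and register a buffer
 * for zero-copy transfers. Build with: gcc rdma_sketch.c -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA-capable devices found\n");
        return 1;
    }
    printf("found %d RDMA device(s), using %s\n",
           num, ibv_get_device_name(devs[0]));

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;
    if (!ctx || !pd) {
        fprintf(stderr, "failed to open device or allocate protection domain\n");
        return 1;
    }

    /* Register a 1 MB buffer; the adapter can now DMA directly into or
     * out of this memory without the CPU touching the data path. */
    size_t len = 1 << 20;
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) {
        fprintf(stderr, "memory registration failed\n");
        return 1;
    }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
           len, mr->lkey, mr->rkey);

    /* A peer that learns this buffer's address and rkey can post RDMA
     * READ/WRITE work requests against it, moving data adapter-to-adapter
     * with no intermediate copies. */

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}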

On a performance basis, RDMA-based interconnects are actually more economical than the alternatives, both in initial cost and in operating expenses. Additionally, because RDMA interconnects offer such high bandwidth, fewer cards and switch ports are needed to achieve the same storage throughput. That saves server PCIe slots and data center floor space, as well as overall power consumption. It is a real answer to the “do more with less” mantra.

So, the next time your application performance isn’t making the grade, rather than simply adding more CPUs, storage and resources, maybe it’s time to consider a more efficient data transfer path.

Find out more: http://www.mellanox.com/page/storage

Partners Healthcare Cuts Latency of Cloud-based Storage Solution Using Mellanox InfiniBand Technology

An interesting article just came out from Dave Raffo at SearchStorage.com. I have a quick summary below, but you should certainly read the full article here: “Health care system rolls its own data storage ‘cloud’ for researchers.”

Partners HealthCare, a non-profit organization founded in 1994 by Brigham and Women’s Hospital and Massachusetts General Hospital, is an integrated health care system that offers patients a continuum of coordinated high-quality care.

Over the past few years, ever-increasing advances in the resolution and accuracy of medical devices and instrumentation technologies have led to an explosion of data in biomedical research. Partners recognized early on that a Cloud-based research compute and storage infrastructure could be a compelling alternative for their researchers. Not only would it enable them to distribute costs and provide storage services on demand, but it would save on IT management time that was spent fixing all the independent research computers distributed across the Partners network.

Initially, Partners HealthCare chose Ethernet as the network transport technology. As demand grew, the solution began hitting significant performance bottlenecks, particularly during reads and writes of hundreds of thousands of small files. The issue was found to lie with the interconnect: Ethernet created problems due to its inherently high latency. To provide a scalable, low-latency solution, Partners HealthCare turned to InfiniBand. With InfiniBand on the storage back end, Partners saw read times improve by roughly two orders of magnitude. “One user had over 1,000 files, but only took up 100 gigs or so,” said Brent Richter, corporate manager for enterprise research infrastructure and services at Partners HealthCare System. “Doing that with Ethernet would take about 40 minutes just to list that directory. With InfiniBand, we reduced that to about a minute.”

Also, Partners chose InfiniBand over 10-Gigabit Ethernet because InfiniBand is a lower latency protocol. “InfiniBand was price competitive and has lower latency than 10-Gig Ethernet,” he said.

Richter said the final price tag came to about $1 per gigabyte.

By integrating Mellanox InfiniBand into the storage solution, Partners HealthCare was able to cut latency to near zero and increase performance, giving its users faster response times and higher capacity.

Till next time,

Brian Sparks

Sr. Director, Marketing Communication

Missed Mellanox at Interop?

Just in case you missed us at Interop 2009, below are just a few of the presentations that took place in our booth.

Mellanox 10 Gigabit Ethernet and 40Gb/s InfiniBand adapters, switches and gateways are key to making your data center F.U.E.L. Efficient

 

Mellanox Product Manager, Satish Kikkeri, provides additional details on Low-Latency Ethernet

 

Mellanox Product Manager, TA Ramanujam, provides insight on how data centers can achieve true unified I/O today

 

Fusion-io’s CTO, David Flynn, presents “Moving Storage to Microsecond Time-Scales”

 

We look forward to seeing you at our next event or tradeshow.

Brian Sparks
brian@mellanox.com

Mellanox Key to Fusion-io’s Demo at Interop

I’m still pondering my take on Interop this year. It’s been a while since I’ve seen so many abandoned spaces on the show floor. Mind you, most were 10×10 or 10×20 spots, but you could tell there were others who really went light on their presence. I saw one exhibitor with a 40×40 booth who just filled it with banner stands. Yikes! Nothing was really grabbing me until I got to Fusion-io’s booth and saw the wall of monitors with 1,000 videos playing on it at once.

Fusion-io Booth

FINALLY SOMETHING IMPRESSIVE!

Even more amazing, the videos were all being driven by a single PCIe card carrying 1.2TB of solid-state storage. This one “ioSAN” card from Fusion-io completely saturated 16 servers (126 CPU cores), and it achieved this through the bandwidth and ultra-low latency of 20Gb/s InfiniBand via Mellanox’s ConnectX adapters. In fact, they told me the 20Gb/s InfiniBand connection would have let them saturate even more servers; they had only brought 16.


The video below, featuring Fusion-io’s CTO David Flynn, tells the complete story:

The ioSAN can be used as networked, server-attached storage or integrated into a networked storage infrastructure, fundamentally changing the enterprise storage landscape. This is a great example of how Mellanox InfiniBand is the enabling technology for next-generation storage.

Talk with you again soon,

Brian Sparks
brian@mellanox.com

SSD over InfiniBand

Last week I was at Storage Networking World in Orlando, Florida. The sessions were much better organized this time, with a focus on all the popular topics: cloud computing, storage virtualization and solid-state storage (SSD). In our booth, we demonstrated our Layer 2-agnostic storage supporting iSCSI, FCoE (Fibre Channel over Ethernet) and SRP (SCSI RDMA Protocol), all coexisting on a single network. We partnered with Rorke Data, who demonstrated a 40Gb/s InfiniBand-based storage array, and with Texas Memory Systems, whose “World’s Fastest Storage” demonstrated sustained rates of 3Gb/s and over 400K I/Os using solid-state drives.

I attended a few of the sessions in the SSD and cloud computing tracks. SSD was my favorite topic, primarily because InfiniBand and SSD together provide the highest storage performance and have the potential to carve out a niche in the data center OLTP applications market. Clod Barrera, IBM’s Chief Technical Storage Strategist, gave a very good presentation on SSD. He showed a chart of how HDD I/O rates per GByte have dropped so low that they now hold roughly constant at around 150 to 200 I/Os per drive. SSDs, by contrast, are capable of roughly 50K I/Os on reads and 17K I/Os on writes. Significant synergy can be achieved by combining SSD with InfiniBand technology: InfiniBand delivers the lowest latency (sub-1us) and the highest bandwidth (40Gb/s). The combination of these technologies will provide significant value in the data center and has the potential to change the database and OLTP storage infrastructure.
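To put those figures side by side, here is a quick back-of-the-envelope calculation using the numbers quoted above (taking the optimistic 200 I/Os per HDD). It is only illustrative arithmetic, not a benchmark.

/* Spindle count needed to match one SSD, using the I/O figures quoted
 * above (illustrative arithmetic, not measured data). */
#include <stdio.h>

int main(void)
{
    const double hdd_iops       = 200.0;    /* high end of the 150-200 range */
    const double ssd_read_iops  = 50000.0;
    const double ssd_write_iops = 17000.0;

    printf("HDDs needed to match one SSD on reads:  %.0f\n",
           ssd_read_iops / hdd_iops);       /* 250 drives */
    printf("HDDs needed to match one SSD on writes: %.0f\n",
           ssd_write_iops / hdd_iops);      /* 85 drives */
    return 0;
}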

SSD over InfiniBand delivers:

-  Ultra-fast, lowest-latency infrastructure for transaction-processing applications

-  A more compelling “green” footprint per GB

-  Faster recovery times for business continuity applications

-  Disruptive scaling

I see a lot of opportunity for InfiniBand technology in the storage infrastructure, as SSDs provide the much-needed discontinuity from rotating media.

TA Ramanujam (TAR)
tar@mellanox.com

Look at this beautiful rack!

This week’s blog is short, but it’s about the candy: the Rack — the Data Center’s building block.
The pictures below visually describe what each one of us would like to have in their Data Center.

Density – over 150 cores within less than 10U. Three different interconnects (1GigE, 10GigE and 40Gb/s InfiniBand) using two adapters and no thick jungle of cables. –> 25% savings in rack space. (A quick sanity check of the density figure follows after this list.)

Power – fewer servers, without giving up any compute power; fewer adapters, without giving up any capabilities; fewer switches, without giving up any reliability or bandwidth. –> 35% savings in power.

Cost – with fewer switches and smaller servers, the saved space enables better cooling. Cost is (inevitably) lower by 25%.
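Because the “System Picking” post below describes two dual-socket, quad-core servers per 1U, the density figure is easy to sanity-check; the sketch below just runs that arithmetic (the configuration comes from that post, the rest is plain math).

/* Sanity check of the density claim above, assuming the dual-socket,
 * quad-core, two-servers-per-1U configuration from the "System Picking"
 * post below. */
#include <stdio.h>

int main(void)
{
    const int sockets_per_server = 2;
    const int cores_per_socket   = 4;
    const int servers_per_u      = 2;
    const int rack_units         = 10;

    int cores_per_u = sockets_per_server * cores_per_socket * servers_per_u;
    printf("cores per rack unit: %d\n", cores_per_u);            /* 16  */
    printf("cores in %dU:        %d\n",
           rack_units, cores_per_u * rack_units);                /* 160 */
    return 0;
}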

Just imagine this rack with only a single interconnect of choice, and you’ll experience what I and many others have seen: a simple, tidy solution leads to better-functioning teams and faster responses to problems (if they ever occur).

Bringing the rack into functional condition wasn’t the easiest thing, I admit. When I said last time that some “labor pain” was involved, I mainly meant the pain of finding a place in the data center… I never knew how hard it could be to allocate floor space before going through this experience. But once we got the rack built in place (standing there in the corner can be a bit claustrophobic), sliding in the servers and switches took almost no time. And thanks to a pre-prepared OS image, the entire rack was up and running within less than 24 hours.

I’ll leave you at this point to see the rack for yourself. I’ll be back in my next post with the first market application that we’ve used with that “Data Center in a Rack” – GigaSpaces.

Nimrod Gindi
nimrodg@mellanox.com

System Picking: Ready, Set, Go!

To recap my previous post: we have set the stage upon which the vendors were to be evaluated, and now we’re ready for the “big race” (which we’ll run without naming names):

System: I considered two different dense systems, both of which met the CPU and memory requirements: dual-socket quad-core, 16GB of memory (2GB per core), and support for PCI Express Gen2. One was a blade system from a Tier-1 vendor; the other was a 1U server that offered more for less (two servers in 1U). We reviewed power requirements for each (blades were better in this category), cost (the differences were greater than 10%) and space (the 1U servers saved some space). Also, if we didn’t need an external switch, the blades would require even less (which affects the big 3: Power, Cost and Space).
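To make that trade-off tangible, here is a rough scoring sketch. Every number in it (power draw, price, rack units) is a hypothetical placeholder made up for illustration, not a figure from the vendors we evaluated; the point is only to lay the “big 3” side by side for the two candidate systems.

/* Hypothetical blade-vs-1U comparison on the "big 3": power, cost, space.
 * All figures are invented placeholders for illustration only. */
#include <stdio.h>

struct candidate {
    const char *name;
    double watts;       /* power draw for 16 servers (hypothetical)            */
    double cost_usd;    /* purchase price for 16 servers (hypothetical)        */
    double rack_units;  /* space for 16 servers incl. switching (hypothetical) */
};

int main(void)
{
    struct candidate blade = { "blade chassis", 5500.0, 115000.0, 10.0 };
    struct candidate twin  = { "2-in-1U twin",  6200.0, 100000.0,  9.0 };
    struct candidate opts[] = { blade, twin };

    for (int i = 0; i < 2; i++)
        printf("%-14s  %6.0f W  $%8.0f  %5.1f U\n",
               opts[i].name, opts[i].watts, opts[i].cost_usd,
               opts[i].rack_units);

    printf("cost delta: %.0f%%\n",
           100.0 * (blade.cost_usd - twin.cost_usd) / twin.cost_usd);
    return 0;
}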

I/O: We wanted to have all 3 dominant interconnects and reviewed switches and NICs separately.

Switches: 1GigE (many options in 1U; we just had to compare power and cost); 10GigE (there weren’t many choices, and we considered three options that varied in performance and price); and 40Gb/s InfiniBand (from us/Mellanox).

NICs: 1GigE (we decided to use the on-board ports); for 10GigE and 40Gb/s InfiniBand we picked our/Mellanox ConnectX adapters, which provide the Virtual Protocol Interconnect (VPI) option (best-in-class performance with both 10GigE and 40Gb/s InfiniBand on the same NIC).

Storage: As mentioned in my previous posts, I wanted to use a Tier-1 vendor that would give us access to all the I/O options and, if necessary, add a gateway to enable the rest. (I’m planning a phase 2 that would include Tier-2 vendors as well, but it has yet to be executed.) The choice was fairly easy due to the limited number of players in the storage arena.

Needless to say, we negotiated prices (hopefully effectively) and shared our concerns and performance targets with all the vendors involved to help them come forward with the best system meeting these requirements. As a result, we were shown many future systems that promise to meet our requirements, but, keeping ourselves honest to the “off-the-shelf” criterion we initially set and promised to follow, we narrowed the “sea of promises” down to what we could see, touch and use today.

Picking the system proved to be a hard and long process, but nothing prepared me for the bureaucracy of the PO process (which I won’t go into…). At the end of the day we chose 1U servers and block storage with a file system layered on top of it.

I’ll finish up with the savings numbers (if you would like additional details, you can send me an email), and in my next post I’ll briefly describe the labor pains of the hardware bring-up. Last, but not least, the HUGE differences: power savings of ~35%, CAP-EX savings of over 25%, and space savings at the 25% mark.

Nimrod Gindi
nimrodg@mellanox.com

Enterprise Data Center: Picking Hardware Can Be Hard Work

Recapping last week’s post… I knew we wanted a system that would contain all the building blocks of the data center in a single (easily expandable) rack. Internally at Mellanox, I felt we should walk through the full procurement process ourselves, to understand it and to give data-center managers better knowledge of this hard, and sometimes painful, process.

Now, with that high-level understanding in place, we had to start turning ideology into reality and decide on the components to be purchased. I wish it were as simple as it sounds: buy it (storage, CPU and I/O), receive it, use it… yeah, right. When a data center manager sets out to buy hardware for a specific application or set of applications, there are many parameters to take into consideration (I bet each of us unconsciously does the same when buying something for home use).

CPU – “Can you feel the need? The need for speed.” Tom Cruise’s words from Top Gun apply here better than ever – and yes, we felt it too. We wanted a system with 8 cores (we do want it to still be valid next Monday, and I guess 8 cores can carry us at least that far). Since time was of the essence, we couldn’t wait for the next-generation CPUs that were promised to be just around the corner.

Storage – for this component we had to ensure a stable platform with all the key features (dedupe, high availability, hot spares, etc.), and we wanted a variety of speeds (from 15K RPM SAS/FC to SATA). We narrowed things down to block storage with a file system layered on top of it externally (which would let us use both when required).

I/O – we wanted a variety of interconnects: 1GigE, 10GigE and 40Gb/s (QDR) InfiniBand. Having Virtual Protocol Interconnect (VPI) available made our decision easier, as it covered two of the three in a single low-power adapter.

Bearing all of the above in mind, we needed to pass our options through several filters to help us zero in on the right selection.

We started with the big 3: Business Alignment, Cost and Time.

Cost – this is a tricky one… you have both CAP-EX and OP-EX, which means each component had to be low on power consumption and still carry a reasonable price tag. (A rough illustration follows after this list.)

Time – we were eager to start, so delivery time was a factor… waiting four months for something was out of the question.

Business Alignment – I guess this is the most important filter, but the hardest to capture. For us, the components needed to meet the following: offer all the I/O options, be off-the-shelf products, and be usable with any application “you throw at them.”
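To show how the CAP-EX/OP-EX tension in the Cost filter plays out, here is a minimal cost sketch. Every input (purchase price, power draw, energy price, lifetime) is a hypothetical placeholder rather than a figure from our evaluation; it simply shows how power consumption feeds into total cost alongside the purchase price.

/* Minimal CAP-EX + OP-EX sketch; all inputs are hypothetical placeholders. */
#include <stdio.h>

int main(void)
{
    const double capex_usd   = 100000.0;  /* hypothetical purchase price   */
    const double avg_watts   = 6000.0;    /* hypothetical rack power draw  */
    const double usd_per_kwh = 0.10;      /* hypothetical energy price     */
    const double years       = 3.0;

    double hours = years * 365.0 * 24.0;
    double opex  = (avg_watts / 1000.0) * hours * usd_per_kwh;

    printf("CAP-EX:                $%.0f\n", capex_usd);
    printf("OP-EX over %.0f years:  $%.0f\n", years, opex);   /* ~$15,768 */
    printf("total cost:            $%.0f\n", capex_usd + opex);
    return 0;
}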

If anyone thought the above took us all the way home… well, I guess they’re in for some surprises. In my next blog post I’ll list the differences we found between two setups, both of which could address our business needs but differed greatly in other major parameters.