Category Archives: Storage

Catching up on the latest from Dell Fluid Cache for SAN

Did you know Dell Fluid Cache for SAN now supports Red Hat® Enterprise Linux® 6.5 and VMware vSphere® ESXi™ 5.5 U2*? With these two additions, plus the ability to use a variety of 12th- and new 13th-generation Dell PowerEdge servers as Cache Contributor servers, customers have even more deployment options to turbocharge OLTP and power heavy-use VDI workloads.

 

Demand for big data analytics is growing across enterprise organizations, which need to sort and analyze vast amounts of data to guide business decisions. Many companies run ERP solutions whose databases require vast amounts of I/O to process concurrent transactions; these databases can see extraordinary performance increases by adding Dell Fluid Cache for SAN.

 

Continue reading

Deploying Ceph with High Performance Networks

As data continues to grow exponentially, storing today’s data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte- or exabyte-scale data stores.


In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
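
For readers who want a feel for the unified cluster model, here is a minimal librados sketch in Python that stores and reads back a single object; the config path and pool name are illustrative assumptions, not the whitepaper’s test setup.

```python
# Minimal librados sketch: write and read one object in a Ceph pool.
# Assumes the python-rados bindings are installed, a cluster is reachable via
# /etc/ceph/ceph.conf, and a pool named "rbd" exists (both names are assumptions).
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("rbd")  # pool name chosen for illustration
    try:
        ioctx.write_full("hello-object", b"stored via librados")
        print(ioctx.read("hello-object"))
        print("cluster stats:", cluster.get_cluster_stats())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The same cluster also serves block devices (RBD) and the Ceph file system, which is what makes the unified design attractive at scale.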

 

Ceph Architecture

Continue reading

Mellanox Powers EMC Scale-Out Storage

This week is EMC World, a huge event with tens of thousands of customers, partners, resellers and EMC employees talking about cloud, storage, and virtualization. EMC sells many storage solutions but most of the excitement and recent growth (per the latest EMC earnings announcement) are about scale-out storage, including EMC’s Isilon, XtremIO, and ScaleIO solutions.

As mentioned in my blog on the four big changes in storage, traditional scale-out storage connects many storage controllers together, while the new scale-out server storage links the storage on many servers. In both designs the disk or flash across all the nodes is viewed and managed as one large pool of storage. Instead of having to manually partition and assign workloads to different storage systems, workloads can be either shifted seamlessly from node to node (no downtime) or distributed across the nodes.

Clients connect to (scale-out storage) or run on (scale-out server storage) different nodes but must be able to access storage on other nodes as if it were local. If I’m connecting to node A, I need rapid access to the storage on node A, B, C, D, and all the other nodes in the cluster. The system may also migrate data from one node to another, and rapidly exchange metadata or control traffic to keep track of who has which data.
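
To make the placement idea concrete, the toy Python sketch below hashes object names to nodes so that every client resolves the same owner for a given piece of data; it is only an illustration of deterministic placement, not the algorithm used by Isilon, XtremIO or ScaleIO.

```python
# Toy sketch of deterministic data placement in a scale-out cluster:
# every client hashes the object name the same way, so all of them agree
# on which node owns the data without asking a central directory.
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # hypothetical cluster members

def owner(object_name: str, nodes=NODES) -> str:
    """Map an object name to the node holding its primary copy."""
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]

for obj in ("vm-disk-001", "vm-disk-002", "db-log-17"):
    print(obj, "->", owner(obj))
```

In a real system the placement map also has to survive nodes joining and leaving, which is why production designs use consistent hashing or an explicit cluster map rather than a simple modulo.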

Continue reading

4K Video Drives New Demands

This week, Las Vegas hosts the National Association of Broadcasters conference, or NAB Show. A big focus is the technology needed to deliver movies and TV shows using 4K video.

Standard DVD video resolution is 720×480. Blu-ray resolution is 1920×1080. But, thanks to digital projection in movie theatres and huge flat-screen TVs at home, more video today is being shot in 4K (4096×2160) resolution.  The video is stored compressed but must be streamed uncompressed for many editing, rendering, and other post-production workflows. Each frame has over 8 million pixels and requires 24x greater bandwidth than DVD (4x greater bandwidth than Blu-ray).
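
A quick back-of-the-envelope calculation shows where those multipliers come from; the frame rate and bit depth in this Python sketch are illustrative assumptions, and the exact figures in the chart below depend on the format details.

```python
# Rough uncompressed-bandwidth estimate from resolution alone.
# FPS and bits-per-pixel are assumptions for illustration, not quoted figures.
FORMATS = {
    "DVD":     (720, 480),
    "Blu-ray": (1920, 1080),
    "4K":      (4096, 2160),
}

FPS = 24              # assumed cinema frame rate
BITS_PER_PIXEL = 30   # assumed 10-bit color, 4:4:4 sampling

for name, (width, height) in FORMATS.items():
    pixels = width * height
    gbps = pixels * BITS_PER_PIXEL * FPS / 1e9
    print(f"{name:8s} {pixels:>10,d} pixels/frame  ~{gbps:4.1f} Gb/s uncompressed")
```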

 

Bandwidth and network ports required for Uncompressed 4K & 8K video

 

Continue reading

Four Big Changes in the World of Storage

People often ask me why Mellanox is interested in storage, since we make high-speed InfiniBand and Ethernet infrastructure, but don’t sell disks or file systems.  It is important to understand the four biggest changes going on in storage today:  Flash, Scale-Out, Appliances, and Cloud/Big Data. Each of these really deserves its own blog but it’s always good to start with an overview.

 


Flash

Flash is a hot topic, with IDC forecasting it will consume 17% of enterprise storage spending within three years. It’s 10x to 1000x faster than traditional hard disk drives (HDDs), with both higher throughput and lower latency. It can be deployed in storage arrays or in the servers. If it’s in the storage arrays, you need faster server-to-storage connections; if it’s in the servers, you need faster server-to-server connections. Either way, traditional Fibre Channel and iSCSI are not fast enough to keep up. Even though Flash is cheaper than HDDs on a cost/performance basis, it’s still 5x to 10x more expensive on a cost/capacity basis. Customers want to get the most out of their Flash and not “waste” its higher performance on a slow network.
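
The cost/performance versus cost/capacity distinction is easy to see with a small calculation; the prices and IOPS densities in this Python sketch are hypothetical placeholders, not IDC or vendor figures.

```python
# Hypothetical numbers to illustrate why flash wins on cost/performance
# even while it loses on cost/capacity. Not vendor or IDC data.
media = {
    #         ($/GB, IOPS per GB)
    "HDD":   (0.05,   0.5),
    "Flash": (0.50, 500.0),
}

for name, (dollars_per_gb, iops_per_gb) in media.items():
    dollars_per_iops = dollars_per_gb / iops_per_gb
    print(f"{name:5s} ${dollars_per_gb:.2f}/GB   ${dollars_per_iops:.4f}/IOPS")
```

With these placeholder numbers, flash costs 10x more per gigabyte but roughly 100x less per IOPS, which is exactly why customers don’t want a slow network erasing that advantage.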


Flash can be 10x faster in throughput, 300-4000x faster in IOPS per GB (slide courtesy of EMC Corporation)

  Continue reading

The Storage Fabric of the Future Virtualized Data Center

Guest post by Nelson Nahum, Zadara Storage

It is evident that the future data center will be based on cutting-edge software and virtualization technologies to make the most effective use of hardware, compute power, and storage in order to perform essential analytics and to increase the performance of media-related and advanced web applications. And it turns out that the wires that will connect all this technology together are no less crucial to next-generation data centers and clouds than the software and virtualization layers that run within them.


There are multiple storage fabrics and interconnects available today, including Fibre Channel, Ethernet and SAS. Each has its pros and cons, and fabrics are chosen according to the performance, compatibility and cost efficiency required.

 

As an enterprise storage-as-a-service provider delivering a software-based cloud storage solution for public, private and hybrid cloud models on commodity hardware, Zadara Storage offers its service in multiple public cloud and colocation facilities around the globe. Consistency, high availability and predictability are key to supplying the scalable, elastic service our customers expect, regardless of their location, facility or the public cloud they employ. The hardware we use needs to be dependable, pervasive and cost-efficient in order to sustain the performance and cost level of our service, anywhere and at any scale.

 

When we chose our fabric, Ethernet was the clear choice. Ethernet is likely to become the new standard, and boasts several advantages vital to our product:

  • Ethernet’s speed roadmap is aggressive: from 10GbE to 40GbE, with 100GbE on the way
  • Ethernet is ubiquitous: we can employ it without complication at any data center or colocation facility around the globe
  • We have found the latency to be more than manageable, especially as we use advanced techniques such as I/O virtualization and data passthrough
  • Ethernet is the most cost-effective: an as-a-service company needs a competitive pricing edge.

The future of enterprise storage
The future of enterprise storage lies in software and a choice of hardware (premium or commodity). Software-defined storage can scale performance more easily and cost-effectively than monolithic hardware, and by combining the best of hardware and software, the customer wins. Ethernet is a critical element of our infrastructure, and Mellanox switches offer significantly higher performance and consistent dependability that enable our storage fabric and meet our customers’ needs.

 

Zadara Storage at the Mellanox Booth at VMworld 2013
Wednesday, August 28, at 2:15pm
At the Mellanox Booth at VMworld 2013, Zadara Storage CEO Nelson Nahum will present the Zadara™ Storage Cloud, based on the patent-pending CloudFabric™ architecture and providing a breakthrough cost structure for data centers. Zadara’s software-defined solution employs standard, off-the-shelf x86 servers and utilizes Ethernet as its only interconnect to provide performant, reliable, SSD- and spindle-based SAN and NAS as a service.

 

About Zadara Storage
An Amazon Web Services and Dimension Data Technology Partner and winner of the VentureBeat, Tie50, Under the Radar, and Plug and Play cloud competitions, Zadara Storage offers enterprise-class storage for the cloud in the form of Storage as a Service (STaaS). With Zadara Storage, cloud storage leapfrogs ahead to provide cloud servers with high-performance, fully configurable, highly available, fully private, tiered SAN and NAS as a service. By combining the best of enterprise storage with the best of cloud and cloud block storage, Zadara Storage accelerates the cloud by enabling enterprises to migrate existing mission-critical applications to the Cloud.

Benchmarking With Real Workloads and the Benefits of Flash and Fast Interconnects

Benchmarking is a term heard throughout the tech industry as a measure of success and pride in a particular solution’s ability to handle this or that workload.  However, most benchmarks feature a simulated workload, and in reality, a deployed solution may perform much differently.  This is especially true with databases, since the types of data and workloads can vary greatly.

 

StorageReview.com and MarkLogic recently bucked the benchmarking trend, developing a benchmark that tests storage systems against an actual NoSQL database instance.  Testing is done in the StorageReview lab, and the first round focused heavily on host-side flash solutions.  Not surprisingly, flash-accelerated solutions took the day, with the lowest overall latencies for all database operations, generally blowing non-flash solutions out of the water and showing that NoSQL database environments can benefit significantly from the addition of flash-accelerated systems.

 

In order to accurately test all of these flash solutions, the test environment had to be set up so that no other component would bottleneck the testing.  As it’s often the interconnect between database, client and storage nodes that limits overall system performance, StorageReview plumbed the test setup with none other than Mellanox ultra low-latency, FDR 56Gb/s InfiniBand adapter cards and switches to ensure full flash performance realization and true apples-to-apples test results.
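
Conceptually, this kind of benchmark boils down to timing every database operation and reporting percentile latencies; the Python sketch below shows the general pattern with a stand-in operation, not the actual StorageReview/MarkLogic harness.

```python
# Generic latency-measurement pattern: time each operation of a workload and
# report average and approximate tail percentiles. The operation here is a
# placeholder; a real harness would issue database queries or storage I/O.
import statistics
import time

def run_workload(op, iterations=1000):
    latencies_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        op()
        latencies_ms.append((time.perf_counter() - start) * 1000.0)
    latencies_ms.sort()
    return {
        "avg_ms": statistics.mean(latencies_ms),
        "p95_ms": latencies_ms[int(0.95 * len(latencies_ms)) - 1],
        "p99_ms": latencies_ms[int(0.99 * len(latencies_ms)) - 1],
    }

if __name__ == "__main__":
    print(run_workload(lambda: sum(range(10_000))))  # placeholder operation
```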

 


MarkLogic Benchmark Setup

Find out more about the benchmark and testing results at StorageReview’s website: http://www.storagereview.com/storagereview_debuts_marklogic_nosql_storage_performance_benchmark

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

Product Flash: DDN hScaler Hadoop Appliance

 

Of the many strange-sounding application and product names out there in the industry today, Hadoop remains one of the most recognized.  Why?  Well, we’ve talked about the impact that data creation, storage and management are having on the overall business atmosphere; it’s the quintessential Big Data problem. Since all that data has no value unless it’s made useful and actionable through analysis, a variety of Big Data analytics software and hardware solutions have been created.  The most popular solution on the software side is, of course, Hadoop.  Recently, however, DDN announced an exciting new integrated solution to solve the Big Data equation: hScaler.

 

Based on DDN’s award-winning SFA 12K architecture, hScaler is the world’s first enterprise Hadoop appliance.  Unlike many Hadoop installations, hScaler is factory-configured and simple to deploy, eliminating the need for trial-and-error approaches that require substantial expertise and time to configure and tune.  The hScaler can be deployed in a matter of hours, compared to homegrown approaches requiring weeks or even months, allowing enterprises to focus on their actual business, and not the mechanics of the Hadoop infrastructure.


DDN hScaler

 

Performance-wise, the hScaler is no slouch.  Acceleration of the Hadoop shuffle phase through Mellanox InfiniBand and 40GbE RDMA interconnects, ultra-dense storage, and an efficient processing infrastructure deliver results up to 7x faster than typical Hadoop installations. That means quicker time-to-insight and a more competitive business.

 

For enterprise installations, hScaler includes an integrated ETL engine, over 200 connectors for data ingestion and remote manipulation, high availability and management through DDN’s DirectMon framework.  Independently scalable storage and compute resources provide additional flexibility and cost savings, as organizations can choose to provision to meet only their current needs, and add resources later as their needs change.  Because hScaler’s integrated architecture is four times as dense as commodity installations, additional TCO dollars can be saved in floorspace, power and cooling.

 

Overall, hScaler looks to be a great all-in-one, plug-n-play package for enterprise organizations that need Big Data results fast, but don’t have the time, resources or desire to build an installation from the ground up.

 

Find out more about the hScaler Hadoop Appliance at DDN’s website: http://www.ddn.com/en/products/hscaler-appliance and http://www.ddn.com/en/press-releases/2013/new-era-of-hadoop-simplicity

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

Xyratex Advances Lustre Initiative

 

The Lustre® file system has played a significant role in the high performance computing industry since its release in 2003.  Lustre is used in many of the top HPC supercomputers in the world today, and has a strong development community behind it.  Last week, Xyratex announced plans to purchase the Lustre trademark, logo, website and associated intellectual property from Oracle, who acquired them with the purchase of Sun Microsystems in 2010. Xyratex will assume responsibility for customer support for Lustre and has pledged to continue its investment in and support of the open source community development.

 

Both Xyratex and the Lustre community will benefit from the purchase. The Lustre community now has an active, stable promoter whose experience and expertise are aligned with its major market segment, HPC, and Xyratex can confidently continue to leverage the Lustre file system to drive increased value in its ClusterStor™ product line, which integrates Mellanox InfiniBand and Ethernet solutions. In a blog post on the Xyratex website, Ken Claffey made the point that Xyratex’s investment in Lustre is particularly important to the company, as Xyratex sees its business “indelibly intertwined with the health and vibrancy of the Lustre community” and offers all of its storage solutions based on the Lustre file system. Sounds like a winning proposition for both sides.

 

Find out more about Xyratex’ acquisition of Lustre: http://www.xyratex.com/news/press-releases/xyratex-advances-lustre%C2%AE-initiative-assumes-ownership-related-assets

 

Don’t forget to join the Mellanox Storage Community: http://community.mellanox.com/groups/storage

 

 

Product Flash: NetApp EF540 Enterprise Flash Array

 

Written By: Erin Filliater, Enterprise Market Development Manager

Via the Storage Solutions Group

 

Everyone knows that flash storage is a big deal.  However, one of the gaps in the flash storage market has been in enterprise flash systems. Flash caching has been part of many enterprise storage environments for some time, but enterprise all-flash arrays haven’t.  This week, that all changed with the launch of NetApp’s EF540 Flash Array.  Targeted at business-critical applications, the EF540 offers the enterprise features we’re used to in a NetApp system: high availability, reliability, manageability, snapshots, synchronous and asynchronous replication, backup and a fully redundant architecture.  Add to that some impressive performance statistics—over 300,000 IOPS, sub-millisecond latency, and 6GB/s throughput—and you have a system to be reckoned with.


NetApp® EF540 Flash Array

 

What does all this mean for the IT administrator?  Database application performance boosts of up to 500% over traditional storage infrastructures mean faster business operation results, decreased time-to-market and increased revenue.  Enterprise RAS features lead to less downtime, intuitive management and greater system ROI.

 

Of course, as mentioned earlier in the week in the Are You Limiting Your Flash Performance? post, the network flash systems are connected to also plays a role in boosting performance and reliability.  To this end, NetApp has equipped the EF540 well with 40Gb/s QDR InfiniBand, 10Gb/s iSCSI and 8Gb/s Fibre Channel connectivity options, all with automated I/O path failover for robustness.

 

Following the flash trend, NetApp also announced the all-new FlashRay family of purpose-built enterprise flash arrays, with expected availability in early 2014.  The FlashRay products will focus on efficient, flexible, scale-out architectures to maximize the value of flash installments across the entire enterprise data center stack.  Given all this and the enterprise features of the EF540, there’s no longer a reason not to jump on the flash bandwagon and start moving your enterprise ahead of the game.

 

Find out more about the EF540 Flash Array and FlashRay product family at NetApp’s website: http://www.netapp.com/us/products/storage-systems/flash-ef540/ and http://www.netapp.com/us/company/news/press-releases/news-rel-20130219-678946.aspx

 

Find out more about how Mellanox accelerates NetApp storage solutions at: https://solutionconnection.netapp.com/mellanox-connectx-3-virtual-protocol-interconnect-vpi-adapter-cards.aspx