Category Archives: Cloud Computing

Your Sub-Zero Votes Needed for Mellanox OpenStack Summit Proposals

It is that time of year again: time to get the drumbeat going for the OpenStack Summit, this time in the beautiful city of Vancouver!

 

Author: Chloe Ma

 

Why would you vote for Mellanox proposals? Here are your top three reasons:

  1. Mellanox has been fully devoted to being open: open source, open architecture, open standards and open APIs are just a few of the ways we show our openness. Mellanox has been involved in and contributed to multiple open source projects, such as OpenStack, ONIE and Puppet, and has already contributed several adapter applications to the open source community. As a leading member of and contributor to the Open Compute Project, Mellanox has not only delivered the world’s first 40GbE NIC for OCP servers, but has also been a key Ethernet switching partner of white box hotties such as Cumulus Networks.
  2. Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage and network resources as a service and utility to end users. Any utility model values efficiency, which helps providers support more users, more applications and more workloads with fewer resources. Mellanox can drive far more bandwidth out of each compute or storage node with our offloading, acceleration and RDMA features, greatly reducing CPU overhead and leading to better performance and higher efficiency.
  3. Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.

Without further ado, here is our list of proposals for the Telco Strategies track. Please cast your coolest sub-zero votes to help us stand out at this OpenStack Summit!

See you all in Vancouver!

 

 

Updated! Vote for #OpenStack Summit Vancouver Presentations

The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open.  We have updated this post with additional sessions submitted by Mellanox and our partner organizations.

In order to vote, you will need to register with the OpenStack Foundation: https://www.openstack.org/join/register/. Voting for all presentations closes on Monday, February 23 at 5:00 PM CST (GMT-6:00).

 

Vote! #OpenStack Summit

 

For your reference, we have included a list of Mellanox sessions below; click on a title to submit your vote:

 

Accelerating Applications to Cloud using OpenStack-based Hyper-convergence  

Presenters: Kevin Deierling (@TechSeerKD) & John Kim (@Tier1Storage)

Continue reading

Establishing a High Performance Cloud with Mellanox CloudX

When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked with forming a national research cloud as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.

 

NCI decided to establish a high-performance cloud based on Mellanox 56Gb/s Ethernet solutions. NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 2 of 3)

Network and Link Layer Innovation: Lossless Networks

In a previous post, I discussed that innovations are required to take advantage of 100Gb/s at every layer of the communications protocol stack – starting with the need for RDMA at the transport layer. So now let’s look at the requirements at the next two layers of the protocol stack. It turns out that RDMA transport requires innovation at the Network and Link layers in order to provide a lossless infrastructure.

‘Lossless’ in this context does not mean that the network can never lose a packet, as some level of noise and data corruption is unavoidable. Rather, by ‘lossless’ we mean a network designed to avoid intentional, systematic packet loss as a means of signaling congestion. That is, packet loss is the exception rather than the rule.

Priority Flow Control is similar to a traffic light and enables lossless networks

Lossless networks can be achieved by using priority flow control at the link layer, which allows packets to be forwarded only if there is buffer space available in the receiving device. In this way, buffer overflow and packet loss are avoided and the network becomes lossless.

In the Ethernet world, this is standardized as 802.1Qbb Priority Flow Control (PFC) and is equivalent to putting stop lights at each intersection. A packet in a given priority class can only be forwarded when the light is green.
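
To make the pause/resume mechanism concrete, here is a toy Python simulation of the behavior PFC provides. It is a minimal sketch under assumed buffer sizes and thresholds, not the 802.1Qbb wire format:

    from collections import deque

    BUFFER_SLOTS = 4     # receive buffer capacity (illustrative assumption)
    XOFF_AT = 3          # occupancy that triggers a pause ("red light")
    XON_AT = 1           # occupancy at which the sender may resume ("green light")

    class Receiver:
        def __init__(self):
            self.buffer = deque()
            self.paused = False

        def accept(self, pkt):
            # With PFC honored, the sender never transmits into a full buffer,
            # so this drop branch is never taken.
            if len(self.buffer) >= BUFFER_SLOTS:
                raise RuntimeError("packet drop (cannot happen under PFC)")
            self.buffer.append(pkt)
            if len(self.buffer) >= XOFF_AT:
                self.paused = True   # emit a pause frame for this priority class

        def drain(self):
            if self.buffer:
                self.buffer.popleft()
            if len(self.buffer) <= XON_AT:
                self.paused = False  # emit a resume; sender may forward again

    rx, sent = Receiver(), 0
    for step in range(20):
        if not rx.paused:            # the sender honors the pause frame
            rx.accept("pkt%d" % step)
            sent += 1
        if step % 3 == 0:            # the receiver drains more slowly than packets arrive
            rx.drain()
    print("sent=%d, queued=%d, drops=0" % (sent, len(rx.buffer)))

Even though arrivals outpace the drain rate, the buffer never overflows; the sender simply waits for the green light instead of filling the wire with packets destined to be dropped.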

Continue reading

Top Three Network Considerations for Large Scale Cloud Deployments

The rapid pace of change in data and business requirements is the biggest challenge when deploying a large-scale cloud. It is no longer acceptable to spend years designing infrastructure and developing applications capable of coping with data and users at scale. Applications need to be developed in a much more agile manner, but in a way that allows dynamic reallocation of infrastructure to meet changing requirements.

Choosing an architecture that can scale is critical. Traditional “scale-up” technologies are too expensive and can ultimately limit growth as data volumes grow. Trying to accommodate data growth without proper architectural design results in unnecessary infrastructure complexity and cost.

The most challenging task for the cloud operator in a modern cloud data center supporting thousands or even hundreds of thousands of hosts is scaling and automating network services. Fortunately, server virtualization has enabled automation of routine tasks, reducing the cost and time required to deploy a new application from weeks to minutes. Yet reconfiguring the network for a new or migrated virtual workload can take days and cost thousands of dollars.

To solve these problems, you need to think differently about your data center strategy.  Here are three technology innovations that will help data center architects design a more efficient and cost-effective cloud:

1.  Overlay Networks

Overlay network technologies such as VXLAN and NVGRE make the network as agile and dynamic as other parts of the cloud infrastructure. These technologies enable automated network segment provisioning for cloud workloads, resulting in a dramatic increase in cloud resource utilization.
Overlay networks provide ultimate network flexibility and scalability, including the ability to (a minimal encapsulation sketch follows the list below):

  • Combine workloads within pods
  • Move workloads across L2 domains and L3 boundaries easily and seamlessly
  • Integrate advanced firewall appliances and network security platforms seamlessly
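
To see what an overlay such as VXLAN actually adds on the wire, here is a minimal Python sketch of the encapsulation defined in RFC 7348: an 8-byte header carrying a 24-bit Virtual Network Identifier (VNI) is prepended to the tenant’s Ethernet frame, and the result rides inside an ordinary UDP datagram. The VNI value and dummy frame below are illustrative assumptions:

    import struct

    VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

    def vxlan_encap(vni, inner_frame):
        """Prepend the 8-byte VXLAN header to an inner Ethernet frame."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI is a 24-bit value")
        # Word 1: flags byte 0x08 (the 'I' bit marks the VNI as valid) + 24 reserved bits.
        # Word 2: the 24-bit VNI + 8 reserved bits.
        header = struct.pack("!II", 0x08 << 24, vni << 8)
        return header + inner_frame

    # A tenant frame tagged with VNI 5001; shipping it between tunnel
    # endpoints (VTEPs) over UDP port 4789 is left to the underlay network.
    packet = vxlan_encap(5001, b"\x00" * 64)
    print(len(packet))  # 8-byte header + 64-byte dummy frame = 72

Because the tenant’s L2 frame is just a UDP payload to the underlay, workloads can move across L2 domains and L3 boundaries while keeping their own addressing intact.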

Continue reading

Deploying Ceph with High Performance Networks

As data continues to grow exponentially, storing today’s data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte or exabyte data stores.


In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
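
For a taste of how applications talk to such a cluster, here is a minimal sketch using Ceph’s official Python bindings (python-rados). The conffile path, pool name and object name are assumptions for illustration:

    import rados

    # Connect using the cluster's configuration file.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    print("cluster FSID:", cluster.get_fsid())

    # Write and read an object through RADOS, the layer beneath Ceph's
    # object, block and file system services.
    ioctx = cluster.open_ioctx("rbd")
    ioctx.write_full("hello-object", b"stored via librados")
    print(ioctx.read("hello-object"))
    ioctx.close()
    cluster.shutdown()

Every such read and write ultimately crosses the cluster network between clients and OSDs, which is why the network design the whitepaper describes matters so much for throughput.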

 

Ceph Architecture

Continue reading

Recap: OpenStack Summit 2014 – Atlanta, GA

This past week in Atlanta, I got the chance to attend sessions, present and exhibit at the OpenStack Summit. The Summit was attended by over 4,500 registered participants. Today there are more users than ever! More than 200 companies have joined the project, and the main contributors to the current OpenStack release are Red Hat, HP and IBM. The OpenStack Foundation has posted a recap video showing some highlights:

 

 

Some themes emerged during the summit. The trend of big users becoming major contributors is really taking off. These big users include large banks, manufacturers, retailers, government agencies, entertainment companies and everything in between. Instead of spending time trying to convince vendors to add features, these large organizations have realized that they can work with the OpenStack community directly to add those features and move faster as a business as a result.

Continue reading

Four Big Changes in the World of Storage

People often ask me why Mellanox is interested in storage, since we make high-speed InfiniBand and Ethernet infrastructure but don’t sell disks or file systems. The answer lies in the four biggest changes going on in storage today: Flash, Scale-Out, Appliances, and Cloud/Big Data. Each of these really deserves its own blog, but it’s always good to start with an overview.

 


Flash

Flash is a hot topic, with IDC forecasting it will consume 17% of enterprise storage spending within three years. It’s 10x to 1000x faster than traditional hard disk drives (HDDs), with both higher throughput and lower latency. It can be deployed in storage arrays or in the servers. If it lives in the storage, you need faster server-to-storage connections; if it lives in the servers, you need faster server-to-server connections. Either way, traditional Fibre Channel and iSCSI are not fast enough to keep up. Even though Flash is cheaper than HDDs on a cost/performance basis, it’s still 5x to 10x more expensive on a cost/capacity basis. Customers want to get the most out of their Flash and not “waste” its higher performance on a slow network.


Flash can be 10x faster in throughput, 300-4000x faster in IOPS per GB (slide courtesy of EMC Corporation)
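
The back-of-the-envelope math behind that cost/performance vs. cost/capacity distinction is worth making explicit. All prices and IOPS figures in this Python sketch are illustrative assumptions, not vendor quotes:

    # Assumed figures: a 7.2K RPM HDD and an enterprise SSD.
    hdd_usd_per_gb, hdd_iops = 0.05, 150
    ssd_usd_per_gb, ssd_iops = 0.50, 50000

    # Cost/capacity: dollars per gigabyte.
    capacity_penalty = ssd_usd_per_gb / hdd_usd_per_gb           # ~10x pricier

    # Cost/performance: dollars per IOPS for 1 TB of each medium.
    hdd_usd_per_iops = (hdd_usd_per_gb * 1000) / hdd_iops
    ssd_usd_per_iops = (ssd_usd_per_gb * 1000) / ssd_iops
    perf_advantage = hdd_usd_per_iops / ssd_usd_per_iops         # ~33x cheaper

    print("flash: %.0fx more per GB, %.0fx less per IOPS"
          % (capacity_penalty, perf_advantage))

Under these assumed numbers, flash is roughly 10x more expensive per gigabyte yet roughly 33x cheaper per IOPS, which is exactly why customers refuse to squander that performance on a slow network.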

  Continue reading

InfiniBand Enables the Most Powerful Cloud: Windows Azure

Windows Azure continues to be the leader in High-Performance Computing Cloud services. Delivering an HPC solution built on top of Windows Server technology and Microsoft HPC Pack, Windows Azure offers the performance and scalability of a world-class supercomputing center to everyone, on demand, in the cloud.

 

Customers can now run compute-intensive workloads such as parallel Message Passing Interface (MPI) applications with HPC Pack in Windows Azure. By choosing compute-intensive instances such as A8 and A9 for their cloud compute resources, customers can deploy these resources on demand in Windows Azure in a “burst to the cloud” configuration and take advantage of low-latency, high-throughput InfiniBand interconnect technology, including Remote Direct Memory Access (RDMA), for maximum efficiency. The new high-performance A8 and A9 compute instances also provide customers with ample memory and the latest CPU technology.
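
For a flavor of the MPI workloads these instances target, here is a minimal Python sketch using the standard mpi4py bindings; the script name and rank count are assumptions. Each rank computes a partial sum, and MPI carries the reduction over the interconnect (InfiniBand/RDMA where available, transparently to the application):

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Each rank sums its own slice of 0..999999; the ranks together
    # partition the whole range, and reduce() combines the partials.
    partial = sum(range(rank, 1000000, size))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print("sum over %d ranks: %d" % (size, total))

Launched with, e.g., "mpiexec -n 4 python mpi_sum.py", the same script scales from a laptop to a burst-to-the-cloud cluster without code changes.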

 

The new Windows Azure services can burst and scale on demand, deploying Virtual Machines and Cloud Services when users require them. Learn more about the new Azure services: http://www.windowsazure.com/en-us/solutions/big-compute/

Author: Eli Karpilovski manages Cloud Market Development at Mellanox Technologies. In addition, Mr. Karpilovski serves as the Cloud Advisory Council Chairman. He previously served as product manager for the HCA Software division at Mellanox Technologies. Mr. Karpilovski holds a Bachelor of Science in Engineering from the Holon Institute of Technology and a Master of Business Administration from The Open University of Israel. Follow him on Twitter.

Turn Your Cloud into a Mega-Cloud

Cloud computing was developed specifically to overcome issues of localization and limitations of power and physical space. Yet many data center facilities are in danger of running out of power, cooling, or physical space.

Mellanox offers an alternative and cost-efficient solution. Mellanox’s new MetroX® long-haul switch system makes it possible to move from a paradigm of multiple, disconnected data centers to a single multi-point meshed mega-cloud. In other words, remote data center sites can now be localized through long-haul connectivity, providing benefits such as faster compute, higher-volume data transfer, and improved business continuity. MetroX makes room for more applications and more cloud users, leading to faster product development, quicker backup, and more immediate disaster recovery.

The more physical data centers you join using MetroX, the more you scale your company’s cloud into a mega-cloud. You can continue to scale your cloud by adding data centers at opportune moments and places, where real estate is inexpensive and power rates are lowest, without concern for distance from existing data centers and without fear of performance degradation.


Moreover, you can take multiple distinct clouds, whether private or public, and use MetroX to combine them into a single mega-cloud.  This enables you to scale your cloud offering without adding significant infrastructure, and it enables your cloud users to access more applications and to conduct more wide-ranging research while maintaining the same level of performance.

Continue reading