Category Archives: Cloud Computing

OpenCloud Speeds Ahead with Mellanox at OpenStack Summit

Mellanox is in Vancouver this week, and the frequency of #OpenStack tweets has quadrupled. If you are wondering how they are related, it’s because every 6 months, the industry showcases the next coolest thing in cloud at the OpenStack Summit.

 

For those who are unaware, OpenStack is an open source cloud operating system, which initially began as a joint project between Rackspace and NASA and was quickly embraced by the entire industry, from hot startups to big enterprises. Year after year, this honey pot has attracted more bees than ever imagined.

 

This year marks a major milestone for the OpenStack community, partly because several organizations propelled OpenStack from a ‘test bed’ to a ‘production-ready’ cloud [Read Walmart and Fujitsu story].

 

Continue reading

Mellanox and Metaswitch Co-Present at OpenStack Summit on Cloud-Native NFV

OpenStack Summit Vancouver is around the corner, and I am very happy to have Colin Tregenza Dancer from Metaswitch to co-present with me on my session “Ahead of the NFV Curve through Truly Scale-Out Network Function Cloudification” in the Telco Strategies track on Thursday May 21st at 2:20PM.

 


 

I started talking to Martin Taylor a few months ago on the topic of cloud-native VNF. Martin is the Metaswitch CTO and a thought leader in the NFV space. He has written and spoken extensively about how Communication Service Providers (CSPs) must embrace the cloud model to realize the scalability, reliability and availability that NFV really needs to succeed.

 

Continue reading

Turbo LAMP Stack for Today’s Demanding Application Workload

With the rise of cloud computing and mobile technologies, today’s market demands applications that deliver information from mounds of data to a myriad of end-user devices. This data must be personalized, localized, and curated for the user and sent back to these devices. Businesses must retrieve data from their own systems (typically ERP, SCM, and HRM applications) and then deliver it through systems of engagement with those end users.

 

The standard for building these systems is the LAMP stack, which consists of Linux as the operating system, an Apache web server, an open source relational database like MySQL or MariaDB, and PHP as the development language.

 

The LAMP stack has become popular because each component can, in theory, be interchanged and adapted without lock-in to a specific vendor’s software stack. These solutions have grown to support many business-critical systems of engagement, despite the need for more powerful, scalable, and reliable hardware systems. Ideally, the LAMP stack can be optimized for dynamic scale-out as well as scale-up virtualized infrastructures.
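To make that request path concrete, here is a minimal sketch of the database and application tiers of a LAMP-style flow. It is written in Python with an in-memory SQLite database standing in for PHP and MySQL/MariaDB purely for illustration; the table, function, and data are invented for this sketch, not taken from any product:

```python
import sqlite3

# Stand-in for the MySQL/MariaDB tier: an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'Ada', 'APAC')")

def render_profile(user_id):
    """Stand-in for the PHP tier: fetch data and render a response body."""
    row = conn.execute(
        "SELECT name, region FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    if row is None:
        return "404 Not Found"
    name, region = row
    # Personalized, localized content assembled for the end-user device.
    return f"<h1>Hello, {name}</h1><p>Region: {region}</p>"
```

The point of the interchangeability claim above is that each tier sits behind a narrow interface (SQL, HTTP), so any one layer can be swapped without rewriting the others.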

Continue reading

From Network Function Virtualization to Network Function Cloudification: Secrets to VNF Elasticity

According to a recent survey conducted by Light Reading, SDN/NFV came in ahead of 5G and the Internet of Things (IoT) as the hottest topic at the 2015 Mobile World Congress in Barcelona. Why are people so enthused about SDN and NFV? Two key things: agility and elasticity. Communication Service Providers (CSPs) and enterprises alike can spin networks and services up and down on demand, and scale them to the size that fits their business needs.



 

But these are really the benefits of cloud, not just virtualization. Virtualization and cloud are often used interchangeably, but they are not the same concept. Fundamentally, virtualization refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources. Virtualization enhances resource utilization and lets you pack more applications onto your infrastructure.

 

On the other hand, cloud computing is the delivery of shared computing resources on demand through the Internet or enterprise private networks. Cloud can provide self-service capability, elasticity, automated management, scalability, and pay-as-you-go service that are not inherent in virtualization, though virtualization makes them easier to achieve.

 

So the Nirvana of Network Function Virtualization is really Network Function Cloudification. But exactly what do we need to do to get there?

 

Continue reading

Your Sub Zero Votes Needed for Mellanox OpenStack Summit Proposals

It is that time of the year again, the time to get the drumbeat going for OpenStack Summit, this time in the beautiful city of Vancouver!

 


 

Why would you vote for Mellanox proposals? Here are your top three reasons:

  1. Mellanox has been fully devoted to being open: open source, open architecture, open standards, and open APIs are just a few ways we show our openness. Mellanox has been involved in and contributed to multiple open source projects, such as OpenStack, ONIE, and Puppet, and has already contributed certain adapter applications to the open source community. As a leading member of and contributor to the Open Compute Project, Mellanox has not only delivered the world’s first 40GbE NIC for OCP servers, but has also been a key Ethernet switching partner of white box hotties such as Cumulus Networks.
  2. Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage, and network resources as a service and utility to end users. Any utility model values efficiency, which helps utility providers support more users, more applications, and more workloads with fewer resources. Mellanox can drive far more bandwidth out of each compute or storage node with our offloading, acceleration, and RDMA features, greatly reducing CPU overhead and leading to better performance and higher efficiency.
  3. Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.

Without further ado, here is our list of proposals for the Telco Strategies track.  Please cast your coolest sub-zero votes to help us stand out in this OpenStack Summit!

See you all in Vancouver!

Updated! Vote for #OpenStack Summit Vancouver Presentations

The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open.  We have updated this post with additional sessions submitted by Mellanox and our partner organizations.

In order to vote, you will need to register with the OpenStack Foundation: https://www.openstack.org/join/register/. Voting for all presentations closes on Monday, February 23 at 5:00 PM CST (GMT-6:00).

 

Vote! #OpenStack Summit

 

For your reference, we have included a list of Mellanox sessions below; click on a title to submit your vote:

 

Accelerating Applications to Cloud using OpenStack-based Hyper-convergence  

Presenters: Kevin Deierling (@TechSeerKD) &  John Kim (@Tier1Storage)

Continue reading

Establishing a High Performance Cloud with Mellanox CloudX

When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked with forming a national research cloud, as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.

 

NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions.  NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 2 of 3)

Network and Link Layer Innovation: Lossless Networks

In a previous post, I discussed how innovations are required at every layer of the communications protocol stack to take advantage of 100Gb/s networks, starting with the need for RDMA at the transport layer. So now let’s look at the requirements at the next two layers of the protocol stack. It turns out that RDMA transport requires innovation at the Network and Link layers in order to provide a lossless infrastructure.

‘Lossless’ in this context does not mean that the network can never lose a packet, as some level of noise and data corruption is unavoidable. Rather, by ‘lossless’ we mean a network designed to avoid intentional, systematic packet loss as a means of signaling congestion. That is, packet loss is the exception rather than the rule.

Priority Flow Control is similar to a traffic light and enables lossless networks

Lossless networks can be achieved by using priority flow control at the link layer, which allows packets to be forwarded only if there is buffer space available in the receiving device. In this way, buffer overflow and packet loss are avoided and the network becomes lossless.

In the Ethernet world, this is standardized as IEEE 802.1Qbb Priority Flow Control (PFC) and is equivalent to putting stop lights at each intersection: a packet in a given priority class can only be forwarded when the light is green.
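The traffic-light behavior can be illustrated with a toy model (a sketch of the principle, not of any real NIC or switch implementation). Frames are forwarded only while the receiver signals buffer space for that priority class; otherwise they are held at the sender rather than dropped. The buffer limit and all names here are invented for the example:

```python
from collections import deque

BUFFER_LIMIT = 4  # frames the receiver can hold per priority class (illustrative)

class Receiver:
    def __init__(self):
        self.buffers = {}  # priority class -> queue of buffered frames

    def paused(self, priority):
        """'Red light': the buffer for this priority is full, sender must wait."""
        return len(self.buffers.get(priority, ())) >= BUFFER_LIMIT

    def accept(self, priority, frame):
        self.buffers.setdefault(priority, deque()).append(frame)

    def drain(self, priority):
        """Deliver one buffered frame upstream, freeing space ('green light')."""
        q = self.buffers.get(priority)
        return q.popleft() if q else None

def send(receiver, priority, frames):
    """Forward frames only while the receiver has buffer space: nothing is dropped."""
    sent, deferred = [], []
    for frame in frames:
        if receiver.paused(priority):
            deferred.append(frame)  # held at the sender, not discarded
        else:
            receiver.accept(priority, frame)
            sent.append(frame)
    return sent, deferred
```

Note that pause state is tracked per priority class, which is exactly what distinguishes PFC from the older link-wide 802.3x PAUSE: congestion in one class does not stop traffic in the others.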

Continue reading

Top Three Network Considerations for Large Scale Cloud Deployments

The rapid pace of change in data and business requirements is the biggest challenge when deploying a large scale cloud. It is no longer acceptable to spend years designing infrastructure and developing applications capable of coping with data and users at scale. Applications need to be developed in a much more agile manner, but in a way that allows dynamic reallocation of infrastructure to meet changing requirements.

Choosing an architecture that can scale is critical. Traditional “scale-up” technologies are too expensive and can ultimately limit growth as data volumes increase. Trying to accommodate data growth without proper architectural design results in unneeded infrastructure complexity and cost.

The most challenging task for the operator of a modern cloud data center supporting thousands or even hundreds of thousands of hosts is scaling and automating network services. Fortunately, server virtualization has enabled automation of routine tasks, reducing the cost and time required to deploy a new application from weeks to minutes. Yet reconfiguring the network for a new or migrated virtual workload can take days and cost thousands of dollars.

To solve these problems, you need to think differently about your data center strategy.  Here are three technology innovations that will help data center architects design a more efficient and cost-effective cloud:

1.  Overlay Networks

Overlay network technologies such as VXLAN and NVGRE make the network as agile and dynamic as other parts of the cloud infrastructure. These technologies enable automated network segment provisioning for cloud workloads, resulting in a dramatic increase in cloud resource utilization.
Overlay networks provide ultimate network flexibility and scalability, and the ability to:

  • Combine workloads within pods
  • Move workloads across L2 domains and L3 boundaries easily and seamlessly
  • Integrate advanced firewall appliances and network security platforms seamlessly
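The mechanism that makes this possible is simple encapsulation: each tenant segment gets a numeric identifier carried in an outer header, so workloads can move across L2 and L3 boundaries while keeping their virtual network. As a minimal illustration (not a full datapath), the sketch below packs and parses the 8-byte VXLAN header defined in RFC 7348, whose 24-bit VXLAN Network Identifier (VNI) names the segment:

```python
import struct

VXLAN_FLAG_VNI_VALID = 0x08  # the "I" bit: VNI field is valid (RFC 7348)

def vxlan_header(vni):
    """Build the 8-byte VXLAN header that precedes the encapsulated L2 frame."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    # Flags (1 byte) + 3 reserved bytes, then VNI (3 bytes) + 1 reserved byte,
    # packed as two big-endian 32-bit words.
    return struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)

def parse_vni(header):
    """Recover the 24-bit VNI from a VXLAN header."""
    _, word2 = struct.unpack("!II", header)
    return word2 >> 8
```

The 24-bit VNI allows roughly 16 million isolated segments, versus the 4,096 VLAN IDs of plain 802.1Q, which is what lets overlays scale to large multi-tenant clouds.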

Continue reading

Deploying Ceph with High Performance Networks

As data continues to grow exponentially, storing today’s data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte- or exabyte-scale data stores.


In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
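Ceph’s scale-out property rests on algorithmic data placement: an object name is hashed to a placement group (PG), and the CRUSH algorithm then maps that PG to a set of storage daemons (OSDs), so no central lookup table is needed. The sketch below illustrates that two-step mapping; it is a simplification (Ceph uses its own rjenkins-based hash and CRUSH, not MD5 and round-robin), and all constants are invented for the example:

```python
import hashlib

PG_NUM = 128    # placement groups in the pool (illustrative)
NUM_OSDS = 12   # storage daemons in the toy cluster
REPLICAS = 3    # copies kept of each object

def object_to_pg(name):
    """Step 1: hash the object name into a placement group, deterministically."""
    h = int.from_bytes(hashlib.md5(name.encode()).digest()[:4], "big")
    return h % PG_NUM

def pg_to_osds(pg):
    """Step 2: toy stand-in for CRUSH, picking REPLICAS distinct OSDs per PG."""
    return [(pg + i) % NUM_OSDS for i in range(REPLICAS)]
```

Because any client can compute the same mapping independently, reads and writes go straight to the right OSDs; this is why, as the whitepaper’s benchmarks emphasize, the network between clients and OSDs becomes the critical design factor at scale.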

 

Ceph Architecture

Continue reading