I was talking to a friend of mine over the weekend and he was really excited about his visit to Facebook and trial of the Oculus Rift, the virtual reality headset designed for 3D gaming with an immersive experience. This has reminded me yet again that we are really living in an era filled with disruptive technologies.
Stable industries that have long been dominated by entrenched leaders are being disrupted by businesses that can innovate, experiment, and deploy faster, often with software as a core competency. Companies like Uber, Airbnb, Netflix, and Square boast private and public market valuations that make the executives of their industry’s historical leaders jealous.
Mellanox is in Vancouver this week, and the frequency of #OpenStack tweets has quadrupled. If you are wondering how the two are related, it’s because every six months the industry showcases the next coolest thing in cloud at the OpenStack Summit.
For those who are unaware, OpenStack is an open source cloud operating system, which initially began as a joint project between Rackspace and NASA and was quickly embraced by the entire industry, from hot startups to big enterprises. Year after year, this honey pot has attracted more bees than ever imagined.
This year marks a big landmark for the OpenStack community, partly because several organizations propelled OpenStack from a ‘test bed’ to a ‘production ready’ cloud [Read Walmart and Fujitsu story].
I started talking to Martin Taylor a few months ago on the topic of cloud-native VNF. Martin is the Metaswitch CTO and a thought leader in the NFV space. He has written and spoken extensively about how Communication Service Providers (CSPs) must embrace the cloud model to realize the scalability, reliability and availability that NFV really needs to succeed.
With the rise of cloud computing and mobile technologies, today’s market demands applications that deliver information from mounds of data to a myriad of end user devices. This data must be personalized, localized, and curated for the user and sent back to these devices. Businesses must retrieve data from their own systems (typically ERP, SCM and HRM applications) and then deliver it through systems of engagement with those end users.
The standard for building these systems is the LAMP stack, which consists of Linux as the operating system, an Apache web server, an open source relational database like MySQL or MariaDB, and PHP as the development language.
The LAMP stack has become popular because each component can, in theory, be interchanged and adapted without lock-in to a specific vendor’s software stack. These solutions have grown to support many business critical systems of engagement, despite the need for more powerful, scalable and reliable hardware systems. Ideally, the LAMP stack can be optimized for dynamic scale-out as well as scale-up virtualized infrastructures.
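To make the request cycle concrete, here is a minimal sketch of the web-tier/database pattern the LAMP stack implements. Python’s WSGI and SQLite stand in for PHP and MySQL/MariaDB here purely for illustration; the table name and data are invented.

```python
import sqlite3
from wsgiref.util import setup_testing_defaults

# SQLite stands in for the MySQL/MariaDB tier of a real LAMP deployment.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])
db.commit()

def app(environ, start_response):
    # The classic LAMP request cycle: the web tier queries the relational
    # store, renders the result, and returns it to the client.
    rows = db.execute("SELECT name FROM users ORDER BY id").fetchall()
    body = ", ".join(name for (name,) in rows).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# Exercise the WSGI app directly, without starting a server.
environ = {}
setup_testing_defaults(environ)
response = app(environ, lambda status, headers: None)
print(response[0].decode())  # alice, bob
```

Because the web tier talks to the database only through SQL, either side can be swapped out, which is exactly the interchangeability the stack is valued for.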
According to a recent survey done by Light Reading, SDN/NFV was ahead of 5G and Internet of Things (IoT) and gained the honor of being the hottest topic at the 2015 Mobile World Congress in Barcelona. Why are people so enthused about SDN and NFV? Two key things: Agility and Elasticity. Communication Service Providers (CSPs) and enterprises alike can spin up and down networks and services on demand, and scale them to the right size that fits their business needs.
But these are really the benefits of cloud, not just virtualization. Virtualization and cloud are often used interchangeably, but they are not the same concept. Fundamentally, virtualization refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources. Virtualization enhances utilization of resources and lets you pack more applications onto your infrastructure.
On the other hand, cloud computing is the delivery of shared computing resources on demand through the Internet or enterprise private networks. Cloud can provide self-service capability, elasticity, automated management, scalability and pay-as-you-go service that are not inherent in virtualization, though virtualization makes those easier to achieve.
So the Nirvana of Network Function Virtualization is really Network Function Cloudification. But exactly what do we need to do to get there?
It is that time of the year again, the time to get the drumbeat going for OpenStack Summit, this time in the beautiful city of Vancouver!
Why would you vote for Mellanox proposals? Here are your top three reasons:
Mellanox has been fully devoted to being open: open source, open architecture, open standards and open APIs are just a few ways we show our openness. Mellanox has been involved in and contributing to multiple open source projects, such as OpenStack, ONIE, Puppet and others, and has already contributed certain adapter applications to the open source community. As a leading member and contributor of the Open Compute Project, Mellanox not only has delivered the world’s first 40GbE NIC for OCP servers, but also has been a key Ethernet switching partner of white box hotties such as Cumulus Networks.
Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage and network resources as a service and utility to end users. Any utility model values efficiency, which helps providers support more users, more applications, and more workloads with fewer resources. Mellanox can drive far more bandwidth out of each compute or storage node with our offloading, acceleration, and RDMA features, greatly reducing CPU overhead and leading to better performance and higher efficiency.
Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.
Without further ado, here is our list of proposals for the Telco Strategies track. Please cast your coolest sub-zero votes to help us stand out in this OpenStack Summit!
The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open. We have updated this post with additional sessions submitted by Mellanox and our partner organizations.
When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked to form a national research cloud, as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.
NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions. NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
Network and Link Layer Innovation: Lossless Networks
In a previous post, I discussed how innovations are required to take advantage of 100Gb/s at every layer of the communications protocol stack, starting with the need for RDMA at the transport layer. So now let’s look at the requirements at the next two layers of the protocol stack. It turns out that RDMA transport requires innovation at the Network and Link layers in order to provide a lossless infrastructure.
‘Lossless’ in this context does not mean that the network can never lose a packet, as some level of noise and data corruption is unavoidable. Rather, by ‘lossless’ we mean a network designed to avoid intentional, systematic packet loss as a means of signaling congestion. That is, packet loss is the exception rather than the rule.
Lossless networks can be achieved by using priority flow control at the link layer, which allows packets to be forwarded only if there is buffer space available in the receiving device. In this way, buffer overflow and packet loss are avoided and the network becomes lossless.
In the Ethernet world, this is standardized as IEEE 802.1Qbb Priority Flow Control (PFC) and is equivalent to putting stop lights at each intersection. A packet on a given priority class can only be forwarded when the light is green.
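The stop-light behavior can be sketched in a few lines of Python. This is a toy model, not the actual 802.1Qbb frame format: the watermark values and class names are invented, and real PFC operates per priority class with pause frames carrying quanta. The point it illustrates is the invariant: the sender holds traffic instead of overrunning the receiver’s buffer, so nothing is dropped.

```python
from collections import deque

class PFCLink:
    """Toy model of priority flow control on a single priority class.

    The receiver signals PAUSE when its buffer crosses a high watermark
    and RESUME when it drains below a low watermark; the sender queues
    packets instead of transmitting into a full buffer, so no packet
    is ever dropped.
    """
    def __init__(self, high=3, low=1):
        self.buffer = deque()        # receiver-side buffer
        self.high, self.low = high, low
        self.paused = False          # state of the "stop light"
        self.dropped = 0             # stays 0: that is the lossless invariant

    def sender_transmit(self, pkt, pending):
        if self.paused:              # red light: hold the packet at the sender
            pending.append(pkt)
        else:
            self.buffer.append(pkt)
            if len(self.buffer) >= self.high:
                self.paused = True   # receiver emits a PAUSE frame

    def receiver_drain(self, n=1):
        for _ in range(n):
            if self.buffer:
                self.buffer.popleft()
        if self.paused and len(self.buffer) <= self.low:
            self.paused = False      # RESUME: green light again

# A burst of 10 packets against a small buffer: without flow control,
# most of the burst would overflow the buffer and be dropped.
link, pending = PFCLink(), []
for i in range(10):
    link.sender_transmit(f"pkt{i}", pending)
print(link.dropped, len(link.buffer), len(pending))  # 0 dropped; excess held at sender
```

Calling `receiver_drain()` until the buffer falls to the low watermark flips the light back to green, at which point the sender can flush its pending queue, which is exactly how back-pressure propagates hop by hop in a lossless fabric.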
The rapid pace of change in data and business requirements is the biggest challenge when deploying a large scale cloud. It is no longer acceptable to spend years designing infrastructure and developing applications capable of coping with data and users at scale. Applications need to be developed in a much more agile manner, but in a way that allows dynamic reallocation of infrastructure to meet changing requirements.
Choosing an architecture that can scale is critical. Traditional “scale-up” technologies are too expensive and can ultimately limit growth as data volumes grow. Trying to accommodate data growth without proper architectural design results in unneeded infrastructure complexity and cost.
The most challenging task for the cloud operator in a modern cloud data center supporting thousands or even hundreds of thousands of hosts is scaling and automating network services. Fortunately, server virtualization has enabled automation of routine tasks, reducing the cost and time required to deploy a new application from weeks to minutes. Yet reconfiguring the network for a new or migrated virtual workload can take days and cost thousands of dollars.
To solve these problems, you need to think differently about your data center strategy. Here are three technology innovations that will help data center architects design a more efficient and cost-effective cloud:
1. Overlay Networks
Overlay network technologies, such as VXLAN and NVGRE, make the network as agile and dynamic as other parts of the cloud infrastructure. These technologies enable automated network segment provisioning for cloud workloads, resulting in a dramatic increase in cloud resource utilization.
Overlay networks provide ultimate network flexibility and scalability, making it possible to:
Combine workloads within pods
Move workloads across L2 domains and L3 boundaries easily and seamlessly
Integrate advanced firewall appliances and network security platforms seamlessly
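To see why overlays decouple workloads from L2/L3 boundaries, it helps to look at what VXLAN actually does on the wire: it wraps the tenant’s original Ethernet frame behind an 8-byte header carrying a 24-bit network identifier (VNI), and the result travels over ordinary UDP/IP. The sketch below builds that header per RFC 7348; the 14-byte placeholder frame is illustrative only.

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_encap(vni: int, inner_frame: bytes) -> bytes:
    """Prepend the 8-byte VXLAN header (RFC 7348) to an inner L2 frame.

    Header layout: flags (0x08 = 'VNI present'), 3 reserved bytes,
    24-bit VXLAN Network Identifier, 1 reserved byte. The result is
    normally carried in a UDP datagram to port 4789.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI is a 24-bit field")
    header = struct.pack("!B3xI", 0x08, vni << 8)  # VNI in top 24 bits
    return header + inner_frame

def vxlan_decap(packet: bytes):
    """Strip the VXLAN header, returning (vni, inner_frame)."""
    flags, word = struct.unpack("!B3xI", packet[:8])
    assert flags & 0x08, "VNI-present flag must be set"
    return word >> 8, packet[8:]

frame = b"\x00" * 14                       # placeholder inner Ethernet header
pkt = vxlan_encap(vni=5001, inner_frame=frame)
assert vxlan_decap(pkt) == (5001, frame)   # round-trips losslessly
```

Because the underlay only ever sees UDP/IP, a workload keeps its VNI and its L2 identity wherever it moves, which is what makes seamless migration across L2 domains and L3 boundaries possible. The 24-bit VNI also lifts the segment count from the 4,096 VLANs of 802.1Q to roughly 16 million.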