With the rise of cloud computing and mobile technologies, today's market demands applications that deliver information from mounds of data to a myriad of end-user devices. This data must be personalized, localized, and curated for the user and sent back to these devices. Businesses must retrieve data from their own systems (typically ERP, SCM, and HRM applications) and then deliver it through systems of engagement with those end users.
The standard for building these systems is the LAMP stack, which consists of Linux as the operating system, an Apache web server, an open source relational database like MySQL or MariaDB, and PHP as the development language.
The LAMP stack has become popular because each component can, in theory, be swapped out without lock-in to a specific vendor's software stack. These solutions have grown to support many business-critical systems of engagement, despite the need for more powerful, scalable, and reliable hardware. Ideally, the LAMP stack can be optimized for dynamic scale-out as well as scale-up virtualized infrastructures.
According to a recent survey by Light Reading, SDN/NFV beat out 5G and the Internet of Things (IoT) for the honor of hottest topic at the 2015 Mobile World Congress in Barcelona. Why are people so enthused about SDN and NFV? Two key things: agility and elasticity. Communication Service Providers (CSPs) and enterprises alike can spin networks and services up and down on demand, and scale them to fit their business needs.
But these are really the benefits of cloud, not just virtualization. Virtualization and cloud are often used interchangeably, but they are not the same concept. Fundamentally, virtualization refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resource. Virtualization improves resource utilization and lets you pack more applications onto your infrastructure.
On the other hand, cloud computing is the delivery of shared computing resources on demand through the Internet or enterprise private networks. Cloud can provide self-service capability, elasticity, automated management, scalability, and pay-as-you-go service, none of which is inherent in virtualization, though virtualization makes them easier to achieve.
So the nirvana of Network Function Virtualization is really Network Function Cloudification. But exactly what do we need to do to get there?
It is that time of the year again, the time to get the drumbeat going for OpenStack Summit, this time in the beautiful city of Vancouver!
Why would you vote for Mellanox proposals? Here are your top three reasons:
Mellanox has been fully devoted to being open: open source, open architecture, open standards, and open APIs are just a few ways we show our openness. Mellanox has been involved in and contributed to multiple open source projects, such as OpenStack, ONIE, and Puppet, and has already contributed adapter applications to the open source community. As a leading member and contributor of the Open Compute Project, Mellanox not only delivered the world's first 40GbE NIC for OCP servers, but has also been a key Ethernet switching partner of white box leaders such as Cumulus Networks.
Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage, and network resources as a service and utility to end users. Any utility model values efficiency, which helps providers support more users, more applications, and more workloads with fewer resources. Mellanox drives far more bandwidth out of each compute or storage node with offloading, acceleration, and RDMA features that greatly reduce CPU overhead, leading to better performance and higher efficiency.
Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.
Without further ado, here is our list of proposals for the Telco Strategies track. Please cast your votes to help us stand out at this OpenStack Summit!
The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open. We have updated this post with additional sessions submitted by Mellanox and our partner organizations.
When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked to form a national research cloud, as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.
NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions. NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).
Network and Link Layer Innovation: Lossless Networks
In a previous post, I discussed how innovations are required at every layer of the communications protocol stack to take advantage of 100Gb/s networks, starting with the need for RDMA at the transport layer. Now let's look at the requirements at the next two layers. It turns out that RDMA transport requires innovation at the network and link layers in order to provide a lossless infrastructure.
'Lossless' in this context does not mean that the network can never lose a packet; some level of noise and data corruption is unavoidable. Rather, by 'lossless' we mean a network designed to avoid intentional, systematic packet loss as a means of signaling congestion. That is, packet loss is the exception rather than the rule.
Lossless networks can be achieved by using priority flow control at the link layer, which allows packets to be forwarded only if there is buffer space available in the receiving device. In this way, buffer overflow and packet loss are avoided and the network becomes lossless.
In the Ethernet world, this is standardized as IEEE 802.1Qbb Priority Flow Control (PFC), which is equivalent to putting stop lights at each intersection: a packet in a given priority class can only be forwarded when the light is green.
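To make the mechanism concrete, here is a minimal sketch of what an 802.1Qbb PFC pause frame looks like on the wire: a MAC control frame (EtherType 0x8808, opcode 0x0101) carrying a per-priority enable vector and eight pause timers measured in quanta. The helper name and MAC values below are illustrative, not from this post.

```python
import struct

def build_pfc_frame(priority_pause_times):
    """Sketch of an 802.1Qbb Priority Flow Control frame.

    priority_pause_times maps a priority class (0-7) to a pause time in
    quanta (one quantum = 512 bit times). A nonzero time asks the link
    partner to stop sending that priority class until the timer expires,
    which is how the receiver avoids buffer overflow.
    """
    dst = bytes.fromhex("0180c2000001")    # reserved multicast MAC for pause frames
    src = bytes(6)                          # placeholder source MAC
    ethertype = struct.pack("!H", 0x8808)   # MAC control EtherType
    opcode = struct.pack("!H", 0x0101)      # PFC opcode
    enable_vector = 0
    times = []
    for prio in range(8):
        t = priority_pause_times.get(prio, 0)
        if t:
            enable_vector |= 1 << prio      # mark this priority's timer as valid
        times.append(struct.pack("!H", t))
    payload = opcode + struct.pack("!H", enable_vector) + b"".join(times)
    pad = bytes(46 - len(payload))          # pad payload to Ethernet minimum
    return dst + src + ethertype + payload + pad

# Pause only priority 3 (e.g. the class carrying RDMA traffic),
# leaving the other seven priorities flowing.
frame = build_pfc_frame({3: 0xFFFF})
```

Note that the pause applies per priority class, not per port: that is what lets lossy and lossless traffic share the same link.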
The rapid pace of change in data and business requirements is the biggest challenge when deploying a large-scale cloud. It is no longer acceptable to spend years designing infrastructure and developing applications capable of coping with data and users at scale. Applications need to be developed in a much more agile manner, but in a way that allows dynamic reallocation of infrastructure to meet changing requirements.
Choosing an architecture that can scale is critical. Traditional "scale-up" technologies are too expensive and can ultimately limit growth as data volumes increase. Trying to accommodate data growth without proper architectural design results in unneeded infrastructure complexity and cost.
The most challenging task for the operator of a modern cloud data center supporting thousands, or even hundreds of thousands, of hosts is scaling and automating network services. Fortunately, server virtualization has enabled automation of routine tasks, reducing the cost and time required to deploy a new application from weeks to minutes. Yet reconfiguring the network for a new or migrated virtual workload can take days and cost thousands of dollars.
To solve these problems, you need to think differently about your data center strategy. Here are three technology innovations that will help data center architects design a more efficient and cost-effective cloud:
1. Overlay Networks
Overlay network technologies such as VXLAN and NVGRE make the network as agile and dynamic as other parts of the cloud infrastructure. These technologies enable automated network segment provisioning for cloud workloads, resulting in a dramatic increase in cloud resource utilization.
Overlay networks provide ultimate network flexibility and scalability, making it possible to:
Combine workloads within pods
Move workloads across L2 domains and L3 boundaries easily and seamlessly
Integrate advanced firewall appliances and network security platforms seamlessly
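The mechanics behind this flexibility are simple: VXLAN wraps each tenant's L2 frame in a UDP packet carrying an 8-byte header whose 24-bit VXLAN Network Identifier (VNI) names the virtual segment, yielding roughly 16 million segments versus 4094 traditional VLANs. A minimal sketch of that header (per RFC 7348; the helper names are illustrative):

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def encode_vxlan_header(vni):
    """Encode the 8-byte VXLAN header.

    Layout: flags byte (0x08 = VNI-valid), 3 reserved bytes,
    24-bit VNI, 1 reserved byte. The inner Ethernet frame follows
    this header inside the outer UDP payload.
    """
    if not 0 <= vni < 1 << 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

def decode_vni(header):
    """Return the VNI from a VXLAN header, checking the VNI-valid flag."""
    flags_word, vni_word = struct.unpack("!II", header)
    if flags_word >> 24 != 0x08:
        raise ValueError("VNI-valid flag not set")
    return vni_word >> 8
```

Because the outer packet is ordinary UDP/IP, a workload's segment travels with it across L3 boundaries; the physical fabric only ever routes IP.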
As data continues to grow exponentially, storing today's data volumes efficiently is a challenge. Many traditional storage solutions neither scale out nor make it feasible, from a CapEx and OpEx perspective, to deploy petabyte or exabyte data stores.
In this newly published whitepaper, we summarize the installation and performance benchmarks of a Ceph storage solution. Ceph is a massively scalable, open source, software-defined storage solution, which uniquely provides object, block and file system services with a single, unified Ceph storage cluster. The testing emphasizes the careful network architecture design necessary to handle users’ data throughput and transaction requirements.
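Part of what makes Ceph scale this way is that object placement is computed, not looked up: clients hash an object name to a placement group (PG), and CRUSH then maps each PG to a set of OSDs from the cluster map, so no central metadata server sits in the data path. The sketch below illustrates that two-stage idea only; real Ceph uses the rjenkins hash and the full CRUSH algorithm, and these function names are hypothetical.

```python
import hashlib

def object_to_pg(object_name, pg_count):
    """Stage 1: map an object name to a placement group by stable hashing.

    Any client can compute this independently, which is what removes the
    need for a metadata lookup on the data path. (Real Ceph uses the
    rjenkins hash; SHA-1 here is just a deterministic stand-in.)
    """
    h = int.from_bytes(hashlib.sha1(object_name.encode()).digest()[:4], "big")
    return h % pg_count

def pg_to_osds(pg, osd_count, replicas=3):
    """Stage 2: toy stand-in for CRUSH.

    CRUSH walks the cluster topology to pick failure-domain-aware replicas;
    here we just spread replicas deterministically for illustration.
    """
    return [(pg + i) % osd_count for i in range(replicas)]
```

Because placement is deterministic, every client computes the same answer, and the network (not a metadata server) becomes the critical shared resource, which is why the whitepaper's emphasis on network architecture design matters.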
This past week in Atlanta, I got the chance to attend sessions, present, and exhibit at the OpenStack Summit. The Summit drew over 4,500 registered participants. Today there are more users than ever! More than 200 companies have joined the project, and the main contributors to the current OpenStack release are Red Hat, HP, and IBM. The OpenStack Foundation has posted a recap video showing some highlights:
Some themes emerged during the summit. The trend of big users becoming major contributors is really taking off. These big users include large banks, manufacturers, retailers, government agencies, entertainment companies, and everything in between. Instead of spending time trying to convince vendors to add features, these large organizations have realized that they can work with the OpenStack community directly to add those features, and move faster as a business as a result.
People often ask me why Mellanox is interested in storage, since we make high-speed InfiniBand and Ethernet infrastructure but don't sell disks or file systems. The answer lies in the four biggest changes going on in storage today: Flash, scale-out, appliances, and cloud/big data. Each really deserves its own blog, but it's always good to start with an overview.
Flash is a hot topic, with IDC forecasting it will consume 17% of enterprise storage spending within three years. It's 10x to 1,000x faster than traditional hard disk drives (HDDs), with both higher throughput and lower latency. It can be deployed in storage arrays or in the servers. If in the storage, you need faster server-to-storage connections; if in the servers, you need faster server-to-server connections. Either way, traditional Fibre Channel and iSCSI are not fast enough to keep up. Even though Flash is cheaper than HDDs on a cost/performance basis, it's still 5x to 10x more expensive on a cost/capacity basis. Customers want to get the most out of their Flash and not "waste" its higher performance on a slow network.
Flash can be 10x faster in throughput, 300-4000x faster in IOPS per GB (slide courtesy of EMC Corporation)
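The "cheaper per IOPS, pricier per GB" point is easy to verify with back-of-the-envelope arithmetic. The prices below are hypothetical round numbers chosen for illustration, not figures from the post or from IDC:

```python
# Hypothetical list prices and performance for a 1 TB device of each type.
hdd = {"usd_per_gb": 0.05, "iops": 200}       # assumed 7.2K RPM drive
ssd = {"usd_per_gb": 0.40, "iops": 50_000}    # assumed enterprise SATA SSD

# Cost/capacity: dollars per gigabyte.
capacity_ratio = ssd["usd_per_gb"] / hdd["usd_per_gb"]        # Flash ~8x pricier per GB

# Cost/performance: dollars per IOPS for a 1 TB (1000 GB) device.
hdd_cost_per_iops = hdd["usd_per_gb"] * 1000 / hdd["iops"]    # $0.25 per IOPS
ssd_cost_per_iops = ssd["usd_per_gb"] * 1000 / ssd["iops"]    # $0.008 per IOPS

perf_ratio = hdd_cost_per_iops / ssd_cost_per_iops            # Flash ~31x cheaper per IOPS
```

With these assumed numbers, Flash costs 8x more per gigabyte yet roughly 31x less per IOPS, which is exactly why burning that IOPS advantage on a slow network is such a waste.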