Mellanox is in Vancouver this week and the frequency of #OpenStack tweets has quadrupled. If you are wondering how the two are related, it’s because every six months, the industry showcases the next coolest thing in cloud at the OpenStack Summit.
For those who are unaware, OpenStack is an open source cloud operating system, which initially began as a joint project between Rackspace and NASA and was quickly embraced by the entire industry, from hot startups to big enterprises. Year after year, this honey pot has attracted more bees than ever imagined.
This year marks a major milestone for the OpenStack community, partly because several organizations propelled OpenStack from a ‘test bed’ to a ‘production ready’ cloud [Read the Walmart and Fujitsu story].
Over the past couple of years, we have witnessed significant architectural changes affecting modern data center storage systems. These changes have had a dramatic effect, practically replacing the traditional Storage Area Network (SAN), which had been the dominant solution for over a decade.
When analyzing the market trends that led to this change, it becomes very clear that virtualization is the main culprit. The SAN architecture was very efficient when only one workload was accessing the storage array, but it has become much less efficient in a virtualized environment in which different workloads arrive from different independent Virtual Machines (VMs).
To better understand this concept, let’s use a city’s traffic light system as an analogy to a data center’s data traffic. In this analogy, the cars are the data packets (coming in different sizes), and the traffic lights are the data switches. Before the city programs a traffic light’s control, it conducts a thorough study of the traffic patterns of that intersection and the surrounding area.
Enable Higher IOPS while Maximizing CPU Utilization
Now that virtualization is a standard technology in the modern data center, IT managers are seeking ways to increase efficiency by adopting new architectures and technologies that enable faster data processing and execute more jobs over the same infrastructure, thereby lowering the cost per job. Since CPUs and storage systems are the two main contributors to infrastructure cost, using fewer CPU cycles and accelerating access to storage are the keys to achieving higher efficiency.
The ongoing demand to support mobility and real-time analytics over constantly growing amounts of data requires new architectures and technologies: specifically, smarter use of expensive CPU cycles, and replacements for old storage systems that were very efficient in the past but have become hard to manage and extremely expensive to scale in modern virtualized environments.
With an average cost of $2,500 per CPU, CPUs account for roughly 50% of compute server cost. The I/O controllers, on the other hand, cost less than $100. Thus, offloading tasks from the CPU to the I/O controller frees expensive CPU cycles, increasing overall server efficiency. Other expensive components, such as SSDs, therefore no longer wait extra cycles for the CPU. This means that using advanced I/O controllers with offload engines results in a much more balanced system that increases overall infrastructure efficiency.
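The cost argument above can be sketched as a back-of-the-envelope model. The $2,500-per-CPU and sub-$100 I/O controller figures come from the text; the offload-capable controller price, the "other components" cost, and the 20% throughput gain from freed CPU cycles are hypothetical numbers chosen only to illustrate the arithmetic.

```python
# Illustrative model of CPU-offload economics. CPU ($2,500) and plain
# I/O controller (<$100) prices are from the text; everything else here
# is a hypothetical assumption for the sake of the example.

def cost_per_job(cpu_cost, io_cost, other_cost, jobs_per_server):
    """Total server cost divided by the number of jobs it can run."""
    return (cpu_cost + io_cost + other_cost) / jobs_per_server

# Baseline: two $2,500 CPUs (~50% of a $10,000 server), plain controller.
baseline = cost_per_job(cpu_cost=2 * 2500, io_cost=100,
                        other_cost=4900, jobs_per_server=100)

# With offload: a pricier offload-capable controller (assumed $300)
# frees enough CPU cycles to run 20% more jobs on the same server.
offload = cost_per_job(cpu_cost=2 * 2500, io_cost=300,
                       other_cost=4900, jobs_per_server=120)

print(f"baseline cost/job: ${baseline:.2f}")  # $100.00
print(f"offload  cost/job: ${offload:.2f}")   # $85.00
```

Even with a controller that costs three times as much, the freed CPU cycles lower the cost per job, which is the point of the paragraph above: the CPU, not the I/O controller, dominates server cost.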
Early May is a time of celebrations. May 1 is the traditional start of summer, as well as International Workers Day. Cinco De Mayo celebrates the Mexican victory over the French in 1862. In the United States, it’s time for Mother’s Day.
Figure 1: A traditional Maypole celebration in England
Most importantly for IT, it’s time for EMC World. EMC is Mother Storage to many enterprise customers gathered in Las Vegas this week.
Figure 2: EMC CEO Joe Tucci says “Live Long and Prosper” to mothers (and storage users) across the galaxy.
I started talking to Martin Taylor a few months ago on the topic of cloud-native VNF. Martin is the Metaswitch CTO and a thought leader in the NFV space. He has written and spoken extensively about how Communication Service Providers (CSPs) must embrace the cloud model to realize the scalability, reliability and availability that NFV really needs to succeed.
I am on a business trip and had dinner with a few coworkers last night. During dinner, one of them proudly pulled out his smartphone and bragged about how young how-old.net thinks he is. Indeed, the age that how-old.net spat out was about 2/3 of his real age.
Of course, he had to take everyone’s picture, and we had a good laugh about the results. Moreover, right before I started this business trip a couple of days ago, I had multiple friends posting similar pictures online from how-old.net; it had gone viral! In case you haven’t tried it, here is how it looks:
This week the National Association of Broadcasters (NAB) show is going full swing in Las Vegas and the Ethernet Technology Summit (ETS) is running in Santa Clara, California. Today in the United States also happens to be Tax Day, when you must file your return and pay any extra taxes owed to the US Government. That makes it a great time to show a new solution that aims to eliminate latency “taxes” from flash storage: it’s called NVMe Over Fabrics.
What Is NVMe and Why Would I Want It?
First, a brief history of NVMe (Non-Volatile Memory Express): traditionally, flash storage is connected through SAS or SATA disk interfaces, or through a PCIe slot with proprietary drivers. SAS and SATA are proven solutions, but they (and the SCSI protocol layer they include) were designed for spinning disk, not flash. NVMe standardizes a flash-optimized command set for accessing flash devices over a PCIe bus, eliminating the SCSI latency tax. NVMe devices are shipping now with native drivers for Linux, Windows, and VMware.
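Because NVMe devices use native in-kernel drivers rather than proprietary ones, a Linux host exposes its NVMe controllers under sysfs. Here is a minimal sketch of enumerating them; it assumes a Linux system with the in-kernel `nvme` driver (which publishes controllers under `/sys/class/nvme`) and simply returns an empty list elsewhere.

```python
# Sketch: list NVMe controllers the Linux kernel's native nvme driver
# has registered. Assumes the standard /sys/class/nvme sysfs layout.
from pathlib import Path

def list_nvme_controllers(sysfs_root="/sys/class/nvme"):
    """Return (name, model) pairs for visible NVMe controllers."""
    root = Path(sysfs_root)
    if not root.is_dir():
        return []  # no NVMe driver loaded, or not a Linux host
    controllers = []
    for ctrl in sorted(root.iterdir()):
        model_file = ctrl / "model"
        model = model_file.read_text().strip() if model_file.exists() else "unknown"
        controllers.append((ctrl.name, model))
    return controllers

for name, model in list_nvme_controllers():
    print(f"/dev/{name}: {model}")
```

On a host with NVMe drives this prints one line per controller (for example `/dev/nvme0` plus its model string); on anything else it prints nothing, since the sysfs directory does not exist.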
Customers today are seeking low latency operations to enable high performance cloud computing, big data, database and virtualization applications. To meet this demand, Mellanox has collaborated with HP to optimize the HP ProLiant Server networking for high performance infrastructures.
HP recently announced two new adapters, the first in the HP Ethernet adapter family based on the Mellanox ConnectX®-3 Pro 10GbE. These adapters are optimized for fast, efficient and scalable cloud and Network Functions Virtualization (NFV). The HP Ethernet 10Gb 2-port 546FLR-SFP+ and 546SFP+ stand-up adapters for ProLiant Gen9 rack servers are specifically designed to optimize cloud efficiency and to improve the performance and security of applications.
With the rise of cloud computing and mobile technologies, today’s market demands applications that deliver information from mounds of data to a myriad of end user devices. This data must be personalized, localized, and curated for the user and sent back to these devices. Businesses must retrieve data from their own systems (typically ERP, SCM and HRM applications) and then deliver it through systems of engagement with those end users.
The standard for building these systems is the LAMP stack, which consists of Linux as the operating system, an Apache web server, an open source relational database like MySQL or MariaDB, and PHP as the development language.
The LAMP stack has become popular because each component can, in theory, be interchanged and adapted without lock-in to a specific vendor's software stack. These solutions have grown to support many business-critical systems of engagement, despite the need for more powerful, scalable and reliable hardware systems. Ideally, the LAMP stack can be optimized for dynamic scale-out as well as scale-up virtualized infrastructures.
According to a recent survey done by Light Reading, SDN/NFV was ahead of 5G and Internet of Things (IoT) and gained the honor of being the hottest topic at the 2015 Mobile World Congress in Barcelona. Why are people so enthused about SDN and NFV? Two key things: Agility and Elasticity. Communication Service Providers (CSPs) and enterprises alike can spin up and down networks and services on demand, and scale them to the right size that fits their business needs.
But these are really the benefits of cloud, not just virtualization. Virtualization and cloud are often used interchangeably, but they are not the same concept. Fundamentally, virtualization refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resource. Virtualization enhances resource utilization and lets you pack more applications onto your infrastructure.
On the other hand, cloud computing is the delivery of shared computing resources on demand through the Internet or enterprise private networks. Cloud can provide self-service capability, elasticity, automated management, scalability and pay-as-you-go service that are not inherent in virtualization, but virtualization makes them easier to achieve.
So the nirvana of Network Function Virtualization is really Network Function Cloudification. But what exactly do we need to do to get there?