With the rise of cloud computing and mobile technologies, today's market demands applications that distill mounds of data into information for a myriad of end-user devices. This data must be personalized, localized, and curated for the user and sent back to these devices. Businesses must retrieve data from their own systems, typically ERP, SCM and HRM applications, and then deliver it through systems of engagement with those end users.
The standard for building these systems is the LAMP stack, which consists of Linux as the operating system, an Apache web server, an open source relational database like MySQL or MariaDB, and PHP as the development language.
The LAMP stack has become popular because each component can, in theory, be interchanged and adapted without lock-in to a specific vendor's software stack. These solutions have grown to support many business-critical systems of engagement, despite the need for ever more powerful, scalable and reliable hardware systems. Ideally, the LAMP stack can be optimized for dynamic scale-out as well as scale-up virtualized infrastructures.
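To make the stack's division of labor concrete, here is a minimal sketch of the "P" tier, with Python standing in for PHP. In a real LAMP deployment, Apache would route requests to this handler (for example via mod_wsgi) and the lookup would be a SQL query against MySQL or MariaDB; the in-memory dict and the `USERS` name below are illustrative stand-ins, not part of any real deployment.

```python
# Minimal sketch of the "P" (application) tier of a LAMP-style stack.
# Apache sits in front; MySQL/MariaDB sits behind. The dict below is a
# stand-in for a real database query such as:
#   SELECT name FROM users WHERE id = %s
from urllib.parse import parse_qs

USERS = {"1": "alice", "2": "bob"}  # stand-in for the MySQL users table

def application(environ, start_response):
    # Parse ?id=... from the query string, much as PHP's $_GET would.
    params = parse_qs(environ.get("QUERY_STRING", ""))
    user_id = params.get("id", [""])[0]
    body = USERS.get(user_id, "unknown").encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]
```

Because each tier talks to its neighbors through standard interfaces (HTTP in front, SQL behind), any one component can be swapped, which is exactly the no-lock-in property described above.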
According to a recent Light Reading survey, SDN/NFV topped 5G and the Internet of Things (IoT) as the hottest topic at the 2015 Mobile World Congress in Barcelona. Why are people so enthused about SDN and NFV? Two key things: agility and elasticity. Communication Service Providers (CSPs) and enterprises alike can spin networks and services up and down on demand, and scale them to fit their business needs.
But these are really the benefits of cloud, not just virtualization. Virtualization and cloud are often used interchangeably, but they are not the same concept. Fundamentally, virtualization refers to the act of creating a virtual (rather than actual) version of something, including but not limited to a virtual computer hardware platform, operating system (OS), storage device, or computer network resources. Virtualization enhances resource utilization and lets you pack more applications onto your infrastructure.
On the other hand, cloud computing is the delivery of shared computing resources on demand through the Internet or enterprise private networks. Cloud can provide self-service capability, elasticity, automated management, scalability and pay-as-you-go service that are not inherent in virtualization, though virtualization makes them easier to achieve.
So the Nirvana of Network Function Virtualization is really Network Function Cloudification. But exactly what do we need to do to get there?
It is that time of the year at Mellanox when we proudly present some of the coolest things our team has worked on! This time it will be at the Open Compute Project (OCP) Summit, held in the heart of Silicon Valley at the San Jose Convention Center on March 11-12, 2015. It is impressive to see how hyper-scale architecture has been revolutionized in just four years.
What started as a small project in the basement of Facebook's Palo Alto office has come alive in the form of cutting-edge innovation in racks, servers, networking and storage. Some of these innovations from Mellanox will take center stage during the OCP Summit, accelerating the advancement of data center components, mainly servers and networking. Key highlights during the OCP event are:
ConnectX-4 and Multi-Host: Back in November, Mellanox announced the industry’s first 100GbE interconnect adapter, pushing networking innovation for HPC, cloud, Web 2.0, storage and enterprise applications. With a throughput of 100 Gb/s, bidirectional throughput of 195 Gb/s, application latency of 610 nanoseconds and a message rate of 149.5 million messages per second, ConnectX-4 InfiniBand adapters provide the means to increase data center return on investment while reducing IT costs.
You realize that it is time to get a new car. You go to the local dealer and look around. You spot the car you want, and it meets your budget. You take a look at the list of features you wrote down at home, and you check each and every one of them.
A salesperson approaches you and asks: “How may I help you?”
You say: “I like this one, is it possible to take it for a test drive?”
The salesperson says: “Of course, may I see your driver’s license?” You hand him your license.
He takes a look and says: “I’m sorry, but you will need to take additional driving lessons to drive this model.”
“What? I don’t mind reading the owner’s manual, but why driving lessons? This is a car, not a school bus!”
Obviously, each car make and model is different, but they share so much functionality that you can switch from one make and model to another almost instantly. If that weren’t the case, we would all be forced to buy the same model again and again.
So why not in Ethernet switching?
The Ethernet switch industry faces a similar conflict. At the heart of almost every Ethernet switch you will find a switching ASIC, and while every ASIC is different, they share a lot of functionality. So yes, you should read the manual to operate them correctly, but why do you need driving lessons to use an ASIC from another manufacturer?
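The argument above is essentially a case for a common abstraction layer over switching ASICs: one set of "driving controls" that every vendor's driver implements. A minimal sketch of the idea follows; all class and method names here are hypothetical illustrations, not any real vendor SDK.

```python
# Toy abstraction layer over switching ASICs: the network OS codes
# against one interface, and each vendor supplies a thin driver that
# maps the common calls onto its own SDK. Names are hypothetical.
from abc import ABC, abstractmethod

class SwitchASIC(ABC):
    """The common 'driving controls' every ASIC driver must expose."""

    @abstractmethod
    def set_port_speed(self, port: int, gbps: int) -> None: ...

    @abstractmethod
    def add_l2_entry(self, mac: str, port: int) -> None: ...

class VendorADriver(SwitchASIC):
    """Wraps one vendor's SDK behind the common interface."""
    def __init__(self):
        self.ports, self.fdb = {}, {}

    def set_port_speed(self, port, gbps):
        self.ports[port] = gbps      # a real driver would call the SDK here

    def add_l2_entry(self, mac, port):
        self.fdb[mac] = port         # a real driver would program the FDB here

def bring_up(asic: SwitchASIC):
    """NOS-side code: runs unchanged on any compliant driver."""
    asic.set_port_speed(1, 100)
    asic.add_l2_entry("00:11:22:33:44:55", 1)
    return asic
```

With such a layer in place, moving a network OS to a different ASIC means writing one new driver, not relearning how to drive.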
When I explain Network Function Virtualization (NFV) and why it is a great technology that can revolutionize Communication Service Provider (CSP) operational and business models, I often use the smartphone analogy. In the not-so-distant past, we used to carry a lot of gadgets and accessories: a GPS unit, a camera, a cell phone, a Walkman, a Game Boy, and the list goes on.
But now, people carry only smartphones, and all of the above have become apps running on a generic piece of hardware, with an operating system sitting on top of that hardware to provide the necessary platform services to the software applications. The number of apps in both the Apple and Android app stores is well over a million, and at its 2014 Worldwide Developers Conference Apple said it had paid $13 billion to developers.
NFV aims to do the same for CSPs, moving their services from purpose-built hardware platforms to Commercial Off-the-Shelf (COTS) compute, storage and networking infrastructure. The benefits are obvious: agility in service creation, automation in operation, and dynamic scalability. But anybody who has designed and operated a telco system wonders: what about performance?
A soon-to-be-released film, CHAPPiE, tackles the subject matter of artificial intelligence with an experimental robot built and designed to learn and feel. In the near future, crime is patrolled by an oppressive mechanized police force. When one police droid, CHAPPiE, is stolen and given new programming, he becomes the first robot with the ability to think and feel for himself. He must fight back against forces planning to take him down.
While you may see it as a fictional work of scientific vision, the path toward artificial intelligence isn’t quite so far away. The building block of artificial intelligence is machine learning, and deep learning is a new area of machine learning that aims to move it closer to artificial intelligence. Organizations investing significant resources in deep learning include Google, Microsoft, Yahoo, Facebook, Twitter and Dropbox.
One such company tackling the challenge is Baidu, Inc., a web services company headquartered in Beijing, China. The company offers many services, including a Chinese-language search engine for websites, audio files and images. It offers multimedia content including MP3 music and movies, and is the first company in China to offer Wireless Application Protocol (WAP) and PDA-based mobile search to users. Baidu has seen an ever-increasing percentage of voice and image searches on its platform.
It is that time of the year again, the time to get the drumbeat going for OpenStack Summit, this time in the beautiful city of Vancouver!
Why would you vote for Mellanox proposals? Here are your top three reasons:
Mellanox has been fully devoted to being open: open source, open architecture, open standards and open APIs are just a few ways we show our openness. Mellanox has been involved in and contributing to multiple open source projects, such as OpenStack, ONIE, Puppet and others, and has already contributed certain adapter applications to the open source community. As a leading member and contributor of the Open Compute Project, Mellanox not only has delivered the world’s first 40GbE NIC for OCP servers, but also has been a key Ethernet switching partner of white box hotties such as Cumulus Networks.
Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage and network resources as a service and utility to end users. Any utility model values efficiency, which helps utility providers support more users, more applications, and more workloads with fewer resources. Mellanox can drive far more bandwidth out of each compute or storage node with our offloading, acceleration, and RDMA features, greatly reducing CPU overhead and leading to better performance and higher efficiency.
Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.
Without further ado, here is our list of proposals for the Telco Strategies track. Please cast your coolest sub-zero votes to help us stand out in this OpenStack Summit!
The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open. We have updated this post with additional sessions submitted by Mellanox and our partner organizations.
Mellanox held its annual sales conference from January 21-23 in San Francisco, with its largest turnout ever, about 250 people. This is in stark contrast to its first sales conference 15 years ago, which included only 6 of the company’s founders. After 15 years of development, the company has taped out seven generations of technology to become the world’s leading provider of data center interconnect, including the introduction in late 2014 of EDR InfiniBand 100Gb/s solutions.
Today, Mellanox provides both end-to-end and top-to-bottom data center interconnect solutions. Not only do we provide the hardware from adapter card to cables to switch to the other machine, PCI to PCI, but Mellanox also creates solutions that reach from the application API down to the electron, down to the cable, down to the fiber and optical devices that run the cable.
While the traditional roadmap indicates that interconnect technology doubles in speed every two to three years, Mellanox takes things one step farther. Not only do we drive bandwidth higher and latency lower, we also introduce new, innovative capabilities in every technology cycle. One of these innovations is driving the cloud toward high-performance computing. It is commonly said that there is a waterfall from high-performance computing that carries its technology into the cloud, which is completely true, but Mellanox is also steering the cloud toward high performance, enabling clouds to deliver the highest application efficiency while reducing total cost of ownership.
Guest post by Noam Shendar, VP Business Development, Zadara Storage
Just Punch It: Accelerating Storage the Easy Way
Architecting storage solutions means being on the hunt for bottleneck after bottleneck and removing them to eliminate latency wherever possible. With the availability of storage media such as Flash and high-performance disk drives, the bottleneck has often moved away from the storage medium itself and onto the interconnect.
When building our enterprise storage-as-a-service offerings, we’ve had to overcome several bottlenecks: reinventing the proprietary, purpose-built controllers of traditional arrays that cap an array’s speed and expandability, improving the performance of the controllers, and using flash cache acceleration. It was clear that to deliver applications with even better efficiency, our award-winning Virtual Private Storage Arrays (VPSA) needed to take advantage of new technologies for the datapath.
What is iSER technology?
iSER (iSCSI Extensions for RDMA) is an interface that uses RDMA over Ethernet to carry data directly between server memory and storage devices. The protocol eliminates memory copies (a.k.a. memcpy) and the operating system’s TCP/IP stack, and bypasses the CPU of the target system entirely. As a result, it is lossless and deterministic, carries low overhead, and delivers far better CPU efficiency by freeing up resources.
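As an illustration, on Linux an iSER target can be exported with the LIO `targetcli` tool. This is a configuration sketch, not a definitive recipe: the device path, volume name, and IQN below are placeholders, and an RDMA-capable NIC on both initiator and target is assumed.

```shell
# Export a block device over iSER with Linux LIO (targetcli).
# /dev/sdb, vol0, and the IQN are placeholders for illustration.
targetcli /backstores/block create name=vol0 dev=/dev/sdb
targetcli /iscsi create iqn.2015-03.com.example:vol0
targetcli /iscsi/iqn.2015-03.com.example:vol0/tpg1/luns create /backstores/block/vol0
targetcli /iscsi/iqn.2015-03.com.example:vol0/tpg1/portals create 0.0.0.0 3260
# Flip the portal from plain iSCSI over TCP to iSER over RDMA:
targetcli /iscsi/iqn.2015-03.com.example:vol0/tpg1/portals/0.0.0.0:3260 enable_iser boolean=true
```

The key point is the last line: the same iSCSI target definition is switched to the RDMA datapath, which is where the memcpy and TCP/IP stack elimination described above comes from.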