A soon-to-be-released film, CHAPPiE, tackles the subject of artificial intelligence through an experimental robot designed and built to learn and feel. In the near future, crime is patrolled by an oppressive mechanized police force. When one police droid, CHAPPiE, is stolen and given new programming, he becomes the first robot with the ability to think and feel for himself. He must fight back against the forces planning to take him down.
While you may see it as a fictional work of scientific vision, the path toward artificial intelligence isn’t quite so far away. The building block of artificial intelligence is machine learning, and deep learning is a new area of machine learning whose objective is to move machine learning closer to artificial intelligence. Organizations investing significant resources into deep learning include Google, Microsoft, Yahoo, Facebook, Twitter and Dropbox.
One such company tackling the challenge is Baidu, Inc., a web services company headquartered in Beijing, China. The company offers many services, including a Chinese-language search engine for websites, audio files and images. It also offers multimedia content, including MP3 music and movies, and was the first company in China to offer wireless access protocol and PDA-based mobile search to users. Baidu has seen an ever-increasing percentage of voice and image searches on its platform.
It is that time of the year again, the time to get the drumbeat going for OpenStack Summit, this time in the beautiful city of Vancouver!
Why would you vote for Mellanox proposals? Here are your top three reasons:
Mellanox has been fully devoted to being open: open source, open architecture, open standards and open APIs are just a few of the ways we show our openness. Mellanox has been involved in and contributing to multiple open source projects, such as OpenStack, ONIE and Puppet, and has already contributed several adapter applications to the open source community. As a leading member of and contributor to the Open Compute Project, Mellanox has not only delivered the world’s first 40GbE NIC for OCP servers, but has also been a key Ethernet switching partner of white box leaders such as Cumulus Networks.
Mellanox brings efficiency to your OpenStack cloud. Ultimately, cloud is about delivering compute, storage and network resources as a service and utility to end users. Any utility model values efficiency, which helps utility providers support more users, more applications, and more workloads with fewer resources. Mellanox can drive far more bandwidth out of each compute or storage node with our offloading, acceleration, and RDMA features, greatly reducing CPU overhead and leading to better performance and higher efficiency.
Mellanox is a thought leader with innovative ideas to address challenges in various clouds, including public cloud, private cloud, hybrid cloud, High Performance Computing (HPC) cloud and Telco cloud for Network Function Virtualization deployments.
Without further ado, here is our list of proposals for the Telco Strategies track. Please cast your coolest sub-zero votes to help us stand out in this OpenStack Summit!
The OpenStack Summit will be held May 18-22, 2015 in Vancouver, Canada. The OpenStack Foundation allows its member community to vote for the presentations they are most interested in viewing for the summit. Many presentations have been submitted for this event, and voting is now open. We have updated this post with additional sessions submitted by Mellanox and our partner organizations.
Mellanox held its annual sales conference from January 21-23 in San Francisco, with its largest turnout ever, about 250 people. This is in stark contrast to its first sales conference 15 years ago, which included only 6 of the company’s founders. After 15 years of development, the company has taped out seven generations of technology to become the world’s leading provider of data center interconnect, including the introduction in late 2014 of EDR InfiniBand 100Gb/s solutions.
Today, Mellanox provides both end-to-end and top-to-bottom data center interconnect solutions. Not only do we provide the hardware, from adapter card to cables to switch and on to the other machine, PCI to PCI, but Mellanox also creates solutions that reach from the application API down to the electron, down to the cable, down to the fiber and optical devices that run the cable.
While the traditional roadmap indicates that interconnect technology doubles in speed every two to three years, Mellanox takes things one step further. Not only do we drive the bandwidth higher and the latency lower, we also introduce new, innovative capabilities in every technology cycle. One of these innovations is driving the cloud toward high-performance computing. It is commonly said that technology waterfalls down from high-performance computing into the cloud, which is completely true, but Mellanox is also steering the cloud toward high performance, enabling clouds to deliver the highest application efficiency while reducing total cost of ownership.
Guest post by Noam Shendar, VP Business Development, Zadara Storage
Just Punch It: Accelerating Storage the Easy Way
Architecting storage solutions means being on the hunt for bottleneck after bottleneck and removing them to eliminate latency wherever possible. With the availability of storage media such as Flash and high-performance disk drives, the bottleneck has often moved away from the storage medium itself and onto the interconnect.
When building our enterprise storage-as-a-service offerings, we have had to overcome several bottlenecks: reinventing the proprietary, purpose-built controllers of traditional arrays, which cap the speed and expandability of an array; improving the performance of the controllers; and using flash cache acceleration. It was clear that to deliver applications with even better efficiency, our award-winning Virtual Private Storage Arrays (VPSA) needed to take advantage of new technologies for the datapath.
What is iSER technology?
iSER (iSCSI Extensions for RDMA) is an interface that uses Ethernet to carry data directly between server memory and storage devices. The protocol eliminates memory copies (a.k.a. memcpy) and the operating system’s TCP/IP stack, and bypasses the CPU of the target system entirely. As a result, it is lossless and deterministic, with low overhead, and delivers far better CPU efficiency by freeing up resources.
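To make the copy-elimination point concrete, here is a toy Python model contrasting where memory copies occur on the traditional iSCSI-over-TCP path versus the iSER/RDMA path. This is purely an illustration of the data-path difference described above, not real I/O code; the step names and returned fields are simplified assumptions.

```python
# Toy model of the two data paths. In the TCP case, the kernel copies
# the application buffer through the socket layer and the CPU drives
# every step; in the iSER case, the NIC DMAs the registered application
# buffer straight to the wire with no intermediate copies.

def tcp_iscsi_write(payload: bytes) -> dict:
    """Simulate an iSCSI write over TCP/IP."""
    copies = 0
    socket_buffer = bytes(payload)      # app buffer -> kernel socket buffer
    copies += 1
    nic_buffer = bytes(socket_buffer)   # socket buffer -> NIC DMA region
    copies += 1
    return {"copies": copies, "cpu_on_data_path": True, "sent": nic_buffer}

def iser_rdma_write(payload: bytes) -> dict:
    """Simulate an iSER write: zero-copy, CPU bypassed on the data path."""
    return {"copies": 0, "cpu_on_data_path": False, "sent": payload}
```

The payload that reaches the wire is identical in both cases; what differs is how many times it was copied, and whether the CPU had to touch it, which is exactly where iSER recovers efficiency.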
Pushing the frontiers of science and technology will require extreme-scale computing with machines that are 500-to-1,000 times more capable than today’s supercomputers. As researchers continuously refine their models, the demand for more parallel computation and advanced networking capabilities is paramount.
As a result of the ubiquitous data explosion and the ascendance of big data, today’s systems need to move enormous amounts of data and perform more sophisticated analysis; the interconnect truly becomes the critical element of enabling the use of data.
During the last couple of years, the networking industry has invested a lot of effort into developing Software Defined Network (SDN) technology, which is drastically changing data center architecture and enabling large-scale clouds without significantly escalating the TCO (Total Cost of Ownership).
The secret of SDN is not that it enables control of data center traffic via software–it’s not like IT managers were using screwdrivers before to manage the network–but rather that it affords the ability to decouple the control path from the data path. This represents a major shift from the traditional data center networking architecture and therefore offers agility and better economics in modern deployments.
For readers who are not familiar with SDN, a simple example can demonstrate the efficiency that SDN provides: imagine a traffic light that makes its own decisions as to when to change and sends data to the other lamps. Now imagine that replaced by a centralized control system that takes a global view of the entire traffic pattern throughout the city and therefore makes smarter decisions on how to route the traffic.
The centralized control unit tells each of the lights what to do (using a standard protocol), reducing the complexity of the local units while increasing overall agility. For example, in an emergency, the system can reroute traffic and allow rescue vehicles faster access to the source of the issue.
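The traffic-light analogy above can be sketched in a few lines of Python: a controller (the control path) holds the global view and pushes forwarding rules, while each switch (the data path) only performs table lookups. The class and rule names here are illustrative assumptions, not any specific SDN API such as OpenFlow.

```python
# Minimal sketch of SDN's control/data-path split.

class Switch:
    """Data path: applies rules it is given, makes no decisions itself."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}              # destination -> output port

    def install_rule(self, dst, out_port):
        self.flow_table[dst] = out_port

    def forward(self, dst):
        # A pure table lookup, analogous to a traffic light obeying
        # the central controller rather than deciding locally.
        return self.flow_table.get(dst)

class Controller:
    """Control path: the global view that decides and pushes rules."""
    def __init__(self):
        self.switches = {}

    def register(self, switch):
        self.switches[switch.name] = switch

    def push_routes(self, routes):
        for switch_name, dst, out_port in routes:
            self.switches[switch_name].install_rule(dst, out_port)

controller = Controller()
s1 = Switch("s1")
controller.register(s1)

# Normal operation: traffic to "server-a" leaves s1 on port 1.
controller.push_routes([("s1", "server-a", 1)])

# "Emergency" reroute: the controller repoints the flow at port 2
# without touching the switch's logic, only its table.
controller.push_routes([("s1", "server-a", 2)])
```

Note that rerouting required no change to the switch itself; the controller simply rewrote its table, which is the decoupling of control path from data path that the paragraph above describes.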
Did you know Dell Fluid Cache for SAN now supports Red Hat® Enterprise Linux® 6.5 and VMware vSphere® ESXi™ 5.5 U2*? With these two additions, plus the ability to use a variety of Dell PowerEdge 12th and new 13th generation Dell servers as Cache Contributor servers, customers have even more deployment options to turbocharge OLTP and power heavy-use VDI workloads.
Big data analytics is growing in demand across enterprise organizations, driven by the need to sort and analyze vast amounts of data to guide business decisions. Many companies use ERP solutions whose databases require vast amounts of I/O to process multiple transactions, and these companies could see extraordinary performance increases by adding Dell Fluid Cache for SAN.
CROC is the number one IT infrastructure creation company in Russia, and one of Russia’s top 200 private companies. CROC has become the first public cloud service provider in Russia to adopt InfiniBand—a standard for high-speed data transfer between servers and storage. Migration to a new network infrastructure took approximately one month and resulted in up to a ten-fold increase in cloud service performance.
When it comes to advanced scientific and computational research in Australia, the leading organization is the National Computational Infrastructure (NCI). NCI was tasked with forming a national research cloud, as part of a government effort to connect eight geographically distinct Australian universities and research institutions into a single national cloud system.
NCI decided to establish a high-performance cloud, based on Mellanox 56Gb/s Ethernet solutions. NCI, home to the Southern Hemisphere’s most powerful supercomputer, is hosted by the Australian National University and supported by three government agencies: Geoscience Australia, the Bureau of Meteorology, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO).