Moore’s Law’s Data Center Disruption

Change happens, and talk to anyone involved in the enterprise data center and you’ll hear that it has been accelerating, making their lives more and more complicated. The most recent issue is the growing list of network protocols the network engineer has to choose from.

Previously, the decision on which network protocol to use was very simple. For IP traffic you used Ethernet, and for storage, Fibre Channel. Speeds were just as easy to pick: 1 Gb Ethernet for IP and 2 or 4 Gb Fibre Channel for storage. The only real challenge was choosing the vendor to purchase the equipment from.

Now Moore’s Law has made the legacy data center network obsolete. Moore’s Law comes from Gordon Moore, a co-founder of Intel. He noticed that each new generation of chip fell on a straight line when transistor count was plotted against time on a logarithmic scale, and, more profoundly, that nearly every semiconductor company tracked the same line. He concluded that transistor density doubled roughly every two years (a figure often quoted as 18 months). His world-famous plot is still used today to describe the steady march of semiconductor technology.
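
At bottom, Moore’s Law is just compound doubling. Here is a minimal sketch of the arithmetic in Python, assuming the commonly cited two-year doubling period and using the Intel 4004’s roughly 2,300 transistors (1971) as a starting point:

# Moore's Law as compound growth; the popular 18-month figure gives
# an even steeper curve than the two-year period assumed here.
def transistor_count(years_elapsed, start_count=2300, doubling_years=2.0):
    """Projected transistor count, starting from the Intel 4004's ~2,300 (1971)."""
    return start_count * 2 ** (years_elapsed / doubling_years)

# 40 years of doubling every 2 years is 2**20, about a million-fold:
print(f"{transistor_count(40):,.0f}")  # -> 2,411,724,800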

Moore’s Law has caused an issue in the data center. Here is what has happened. For any data center to work properly, its major building blocks (storage, servers, and network) must be in balance; to work efficiently, their capabilities have to be matched. All three components depend for their performance primarily on semiconductor manufacturing processes, i.e., on the advance of Moore’s Law. Historically, storage and servers have tracked Moore’s Law very nicely, but the network shows a big discrepancy: Ethernet and Fibre Channel have not kept pace. Server processing power and storage bandwidth have now advanced so far beyond the network that the network has become the bottleneck.
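
To see how quickly that imbalance compounds, here is a back-of-the-envelope sketch. The doubling periods are illustrative assumptions (compute doubling every two years per Moore’s Law, link speed every five years), not measured figures:

# How fast the gap opens between two compound-growth curves.
def growth(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

compute_gain = growth(10, 2.0)  # servers/storage: ~2x every 2 years -> 32x
network_gain = growth(10, 5.0)  # assumed link speed: ~2x every 5 years -> 4x
print(f"after 10 years: compute {compute_gain:.0f}x, "
      f"network {network_gain:.0f}x, gap {compute_gain / network_gain:.0f}x")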

Looking at present-day data center networks, not only is their performance sub-par for the I/O needs of the servers and storage, but their functionality and features are woefully behind too. Why is this? Because Ethernet and Fibre Channel don’t track Moore’s Law. Go ahead and plot the advance in bandwidth over time for both Ethernet and Fibre Channel, then overlay server CPU density and aggregated storage bandwidth, and you discover that the legacy network has fallen way behind; even the future roadmaps don’t track Moore’s Law. The bottlenecks are already appearing. Ethernet is very popular, but it was never designed for the data center (try pumping lots of data from tens to hundreds of servers and watch the congestion!). Fibre Channel is simply too slow; even 8 Gb is too slow. This failure to match the technological advance of the servers and storage has made traditional approaches to data center network topology a dead end. To get back in balance, the network has to be deployed in fundamentally new ways.
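
Here is a minimal sketch of that plot. The years are approximate standard-ratification dates, and the Moore’s Law curve is simply normalized to Ethernet’s 1983 starting point, so treat it as illustrative rather than authoritative:

import matplotlib.pyplot as plt

# Approximate ratification years and line rates in Gb/s (assumptions).
ethernet      = {1983: 0.01, 1995: 0.1, 1998: 1, 2002: 10}
fibre_channel = {1997: 1, 2001: 2, 2004: 4, 2008: 8}
years = range(1983, 2011)
moore = [0.01 * 2 ** ((y - 1983) / 2.0) for y in years]  # 2x every 2 years

plt.semilogy(list(ethernet), list(ethernet.values()), "o-", label="Ethernet")
plt.semilogy(list(fibre_channel), list(fibre_channel.values()), "s-",
             label="Fibre Channel")
plt.semilogy(years, moore, "--", label="Moore's Law pace")
plt.xlabel("Year")
plt.ylabel("Bandwidth (Gb/s, log scale)")
plt.legend()
plt.show()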

Getting back to my original point: the network administrator of a large data center is probably noticing network problems and is pretty fed up with having to run 8 to 10 network cables to every server. He can migrate a virtualized server anywhere from his desktop, but when it comes to the network he has to physically go into the data center and add NICs, HBAs, and cables. Throwing adapters and more cables at the problem is counterproductive, and it drives CapEx and OpEx through the roof.
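
A toy bill-of-materials calculation shows why. Every quantity and unit price below is a made-up placeholder, not a real quote; the shape of the math is the point:

servers = 500
# (quantity per server, assumed unit price in dollars)
legacy       = {"NIC": (4, 150), "HBA": (2, 800), "cable": (10, 30)}
consolidated = {"adapter": (2, 600), "cable": (2, 60)}  # redundant pair of fat pipes

def capex(bom):
    return servers * sum(qty * price for qty, price in bom.values())

print(f"legacy: ${capex(legacy):,}  consolidated: ${capex(consolidated):,}")
# -> legacy: $1,250,000  consolidated: $660,000

And that is before counting the switch ports, power, and ongoing management that all those extra adapters and cables demand.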

There are many new network technologies available to the data center network administrator that offer compelling solutions to the Moore’s Law problem. 10 Gb Ethernet, Low Latency Ethernet, Data Center Ethernet, and InfiniBand all offer a wide range of features and solutions for the enterprise data center and cloud computing. The issue is whether people can let go of the legacy way and embrace a new way of thinking about their network. It’s not about the protocol anymore; there are too many choices for that. The new way is to leverage what makes the most sense for the application, using the newer protocols and their powerful features to bring the network back into balance with the servers and storage.
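
One way to make “leverage what makes the most sense for the application” concrete is to write the decision down. The thresholds and mappings below are my own illustrative assumptions, not vendor guidance:

def suggest_fabric(latency_us, bandwidth_gbps, carries_storage):
    """Toy decision rule mapping application needs to a candidate fabric."""
    if latency_us < 10:
        return "InfiniBand or Low Latency Ethernet"
    if carries_storage:
        return "Data Center Ethernet (lossless, converged I/O)"
    if bandwidth_gbps > 1:
        return "10 Gb Ethernet"
    return "1 Gb Ethernet"

print(suggest_fabric(latency_us=5, bandwidth_gbps=20, carries_storage=True))
# -> InfiniBand or Low Latency Ethernet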

The change in the enterprise data center that is causing these network problems is actually a good thing. It is forcing people to think about how they deploy their networks in a new light. By adopting an open viewpoint rather than stubbornly holding onto legacy ways, the network engineer in the enterprise data center can take advantage of powerful alternatives that turn all this choice into an advantage.


Tony Rea
tony@mellanox.com