All posts by Kevin Deierling

About Kevin Deierling

Kevin Deierling has served as Mellanox's VP of marketing since March 2013. Previously he served as VP of technology at Genia Technologies, chief architect at Silver Spring Networks, and ran marketing and business development at Spans Logic. Kevin has contributed to multiple technology standards and has over 25 patents in areas including wireless communications, error correction, security, video compression, and DNA sequencing. He is a contributing author of a text on BiCMOS design. Kevin holds a BA in Solid State Physics from UC Berkeley. Follow Kevin on Twitter: @TechseerKD

Seven Predictions for 2017

Fibre Channel Market Collapsing in 2017


As we look forward to 2017, it is time to peer into the distance and think about what will happen during the year:

 

  1. 2017 will mark the beginning of the Hunger Games for high performance Optical Component Startups.

This fight to the death is inevitable because there is no de-facto industry standard for 25, 50, and 100G optical interconnects. The early adopters of advanced optical interconnects are the Super Seven, and each has its own view of what the technology should look like. Some want to use multi-mode fiber while others insist on single mode. Some use QSFP and parallel fiber while others insist on WDM (wavelength division multiplexing) over single-fiber SFP. Still others want breakout or pigtailed options. This fragmentation of the market means that small manufacturers can't develop all of these different options. And with only one major customer for each variant, it is a very dangerous game to play. Some players who are "one-trick ponies" will find themselves unable to achieve scale and maintain the investment required to compete.

 

  2. NVMe Over Fabrics will "Cross the Chasm" and accelerate the decline of Fibre Channel

NVMe over Fabrics (NVMe-oF) arrived with a bang just a scant 18 months ago and is being driven forward by the performance advantages of RDMA and RoCE. Often a new technology is over-hyped to a "Peak of Inflated Expectations" and eventually falls into a "Trough of Disillusionment." But like RoCE before it, NVMe-oF will cross the chasm in 2017, with GA solutions appearing that deliver true business value. This will accelerate the decline of Fibre Channel, making that market collapse even faster.

 

  3. Flash Memory will demonstrate remarkable market resilience vs. the new class of non-volatile memory competitors such as 3D XPoint and ReRAM

Many have predicted the end of Flash memory with the advent of new non-volatile memory technologies such as 3D XPoint and ReRAM. In true Mark Twain fashion, the reports of the death of Flash are greatly exaggerated. Flash memory will continue to thrive even as the new technologies struggle to become reliable and manufacturable in high volumes. In fact, the major Flash memory manufacturers will innovate to dramatically improve the read and write latency of Flash, thereby closing the gap on the main advantage of these new technologies.

 

  4. We'll see a Flash Crash with several prominent All Flash Array vendors finally succumbing

With success comes fierce competition. So despite the overall success of Flash memory storage, there will be winners and losers. Violin Memory will be first and foremost among the struggling All Flash Array vendors that finally give up the ghost. The competition will intensify as the big boys, especially the now-colossal Dell EMC, hit their stride. It will become increasingly difficult for the smaller guys (and maybe even some of the bigger guys) to compete. Consolidations and pink slips will be the order of the day in 2017.

 

  5. NFV will finally start Functioning in 2017

In 2016, the much-ballyhooed potential of Network Function Virtualization (NFV) to eliminate the need for purpose-built appliances failed to materialize. Unfortunately, vendors found that when they ported their applications to industry-standard servers, the performance of their network functions (such as load balancers and firewalls) was dramatically degraded. The promise of better agility with purely software-defined virtual network functions (VNFs) came at the expense of untenable tradeoffs in price, performance, and power. Instead of reducing cost, the performance limitations of VNFs running on x86 servers meant more boxes, dollars, and megawatts.

 

But this will all change in 2017. Advanced 25, 50, and 100G network adapters now have built-in Open vSwitch (OVS) hardware accelerators that allow vendors to achieve the agility and DevOps capabilities of software-defined VNFs without the performance penalties previously suffered. That, combined with nimble software providers developing true cloud-native VNFs based on scalable microservices, will make 2017 the year that NFV finally starts to function!

 

  6. The grand vision of SDN will stall and SDN will become "just an overlay" technology

The original grand vision of Software Defined Networking was to create an entirely new, centrally managed, flow-based networking architecture. But displacing 30+ years of router technology is a tall order. In reality, all but the largest cloud providers have rejected the forklift upgrade required to replace all their BGP routers with flow-based managers, and instead have adopted only a subset of the grand SDN vision. The use of 'overlay' networks (based on VXLAN, NVGRE, or GENEVE technologies) is becoming widely adopted. This form of network virtualization enables isolation within a multi-tenant service provider environment and, importantly, allows tenants to span L3 routers transparently to both the traditional routers and the tenant software. So SDN and network virtualization are becoming a reality, but only the overlay part of the grand vision.
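To make the overlay idea concrete, here is a minimal sketch of building the 8-byte VXLAN header from RFC 7348. The helper function and the example VNI value are illustrative, not taken from any particular product:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    The flags byte 0x08 marks the VNI field as valid; the 24-bit VNI
    identifies the tenant, which is what lets overlapping tenant
    address spaces coexist on one physical L3 network.
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # flags (1 byte) + reserved (3 bytes), then VNI (3 bytes) + reserved (1 byte)
    return struct.pack("!B3x", 0x08) + struct.pack("!I", vni << 8)

hdr = vxlan_header(vni=5001)
assert len(hdr) == 8
```

The tenant frame is appended after this header inside an outer UDP/IP packet, which is why the traditional routers in between only ever see ordinary UDP traffic.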

 

  7. OpenFlow will Morph From a Protocol Into an Interface

Closely related to SDN, the original grand vision of OpenFlow was to replace traditional endpoint, path-based routing algorithms and instead treat every flow as a separate entity. Unfortunately, this vision didn't take into account the scalability challenges of flow-based forwarding, nor the robustness and feature set that has evolved around traditional network routing, quality of service, and management. So the funeral for OpenFlow as a network routing technology will be held in 2017, but it will persist as an API to configure flow policies at end points and within gateways.
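What "OpenFlow as an interface" looks like in practice is a flow policy expressed as data and handed to a controller or gateway. A hedged sketch follows; the field names track common OpenFlow match/action conventions, but the exact schema and the API that would consume this dict are controller-specific assumptions:

```python
# A flow policy as data: match a class of traffic, apply actions.
# Field names follow common OpenFlow conventions; the northbound
# API that would accept this dict varies by controller.
flow_rule = {
    "priority": 100,
    "match": {
        "in_port": 1,
        "eth_type": 0x0800,          # IPv4
        "ipv4_dst": "10.0.0.0/24",   # a tenant subnet
    },
    "actions": [
        {"type": "SET_QUEUE", "queue_id": 2},  # steer into a QoS queue
        {"type": "OUTPUT", "port": 3},
    ],
}

# The policy stays declarative: the endpoint or gateway translates it
# into its own forwarding state, rather than the controller micromanaging
# every individual flow in the network core.
assert flow_rule["match"]["eth_type"] == 0x0800
```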

 

Why 25G is like this Flippin Kid

WooHoo!! We’re Two!! Happy Birthday 25G Ethernet Consortium!

Why 25G Ethernet is like a back-Flipping Wall-Climbing 2 Year Old!

Happy Birthday! Today marks the 2nd birthday of the 25G Ethernet Consortium. It was July 1, 2014 that Microsoft, Google, Mellanox, Arista, and Broadcom first announced the formation of the consortium in order to define interoperable 25 and 50 Gigabit Ethernet solutions. There is a lot of history behind why 25G Ethernet was defined by this consortium, rather than in the IEEE, which you can read about in this Electronic Design article.

Fast forward two years and there has been tremendous progress. Today the specification is at version 1.4 which has allowed multiple vendors to develop interoperable solutions. The first 25/50G Consortium Plugfest is being held next month with interoperability demonstrations expected from 20+ companies. Of course we’ll be there with our end to end line of 25, 50, & 100Gb/s Ethernet solutions of adapters, switches, and copper and optical cables and transceivers.

UPDATE: I missed mentioning in my original post that on June 30, the IEEE approved the 802.3by spec for 25G Ethernet too! From the ieee802.org 25G reflector:

"Congratulations to all!  The IEEE-SA Standards Board today approved 802.3by as an IEEE Standard!  We are done!"

 

25G Ethernet Adapters

Perhaps even more important than the standards is the fact that major server vendors including Dell, HPE, and Lenovo have 25G network adapter solutions based on our 25/50G ConnectX-4 Lx device.


For example, HPE offers 25G adapters as both regular stand-up PCIe cards and in the compact FlexibleLOM form factor.

25G, 50G  & 100G Ethernet Switches

In addition, a broad range of 25, 50, & 100Gb/s Ethernet switches are now available. This includes the complete line of Mellanox 25/50/100GbE switches: the half-rack-width SN2100, the 48+8-port SN2410, and the 32-port SN2700. Based on the Spectrum switching silicon, these switches offer the best performance and predictability in the industry. You can read the Predictable Performance blog to learn more about how Spectrum-based switches deliver the lowest latency, best congestion resilience, predictable performance, fairness, and zero packet loss.

SN2100: 16 ports @ 100G, half rack width; can be 64 ports @ 25G with breakout cables
SN2410: 48 ports @ 25G + 8 ports @ 100G
SN2700: 32 ports @ 100G

25G, 50G  & 100G Ethernet Cables and Transceivers

And lastly, we have a complete line of LinkX copper and fiber cables and transceivers. These include both VCSEL-based multimode short-reach and Silicon Photonics-based single-mode long-reach optical cables and transceivers. We've got the best 100G optical modules in the business and are looking forward to the expected ramp of 25, 50, and 100G data centers from hyperscale customers in the second half of 2016.


Analysts have been predicting a rapid ramp for 25GbE technology, and this Network Computing article explains three of the key drivers behind this explosive growth. The bottom line: for a technology that is only two years old, it is amazing to see how rapidly the entire 25, 50, and 100G Ethernet ecosystem has come together with a robust end-to-end line of GA products. The 25G Ethernet market is really taking off! Can't wait to see what it will look like at 3!


The Race to 25G Ethernet – Seven Critical Issues that will Decide the Winner

Often marketers treat new technology like a foot race, and for some the ultimate goal seems to be being first to announce a new product. But in reality the first to announce, just like the first out of the blocks, doesn't determine the end of the story, as this race video shows.

In reality there are many issues that need to be considered when choosing the right partners with which to deploy new technologies, and an aggressive marketing department willing to announce a product just to be first is the least important of these considerations.

The new 25Gb/s Ethernet technology is a great case in point, and it is now hitting full stride with major server vendors announcing support for both adapters and switches. Mellanox has been at the vanguard of this technology as one of the original founders of the 25G Ethernet Consortium, along with hyperscale providers like Google and Microsoft.

So if "first to announce" isn't the primary consideration that determines ultimate success with a new technology, what is?

Here is my take on the top seven considerations for evaluating companies, the factors that will actually determine success with the new 25GbE technology:

  1. Technology
  2. Manufacturing and Operational Capabilities
  3. Price/Performance
  4. Ease of Adoption
  5. Product Robustness and Reliability
  6. Corporate and Financial Stability
  7. End to End Portfolio

1.     Technology

The first and most critical consideration for most customers is the core features and capabilities of a new technology. What is most important here is that the technology just works and that the advanced feature set can be easily consumed and delivers true business value. The good news here is that Mellanox offers ConnectX-4 Lx 25/50 GbE adapters that deliver not just 2.5X higher bandwidth, but combine this with advanced networking offloads that accelerate cloud, virtualization, storage, and communications. These offloads mean that more CPU power is available to run applications rather than being consumed by moving data. So the ultimate benefit is application and infrastructure efficiency that results in a better data center ROI using 25GbE.

2.     Manufacturing and Operational Capabilities

So even if the technology works and has the features you need, it’s vital to consider whether your technology partner can manufacture and deliver 25GbE products in high volume and in a timely fashion that meets your business needs. There is nothing more frustrating than having significant customer system revenue opportunity delayed or lost because of supply chain problems with a single component.

The good news here is that Mellanox has proven itself as a reliable supplier shipping to the largest OEM and data center customers in the world. We are the market share leader today in high performance Ethernet NICs (>10Gb/s) with over 90% market share. We are shipping millions of ConnectX adapters to the largest public cloud, Web 2.0, storage, and server OEM customers every year with reliable and dependable delivery. Our ConnectX-4 Lx adapters are a mature product line, with a broad set of software driver support, and have been battle hardened in real world deployments. We maintain significant inventory that is staged throughout the world to enable us to meet upside demand on an expedited schedule.

3.     Price/Performance

Industry analysts are predicting that 25GbE will have the fastest adoption ramp ever for a new Ethernet technology.


Figure 2: Fastest Ever Adoption Forecast for 25Gb/s Ethernet

To make this forecast a reality requires not just 25GbE technology that is manufacturable and offers better features and capabilities, but also technology that delivers a true price/performance advantage.


Figure 3: Price Performance Advantage of 25GbE

And here 25GbE delivers on both fronts, as can be seen in the Crehan forecast. While 25GbE is priced slightly higher than 10GbE, when normalized for performance it is much cheaper on a $/Gb/s of bandwidth basis.

In fact, 25GbE pricing is very competitive, with only a 30%-40% premium over 10GbE, and this premium is expected to come down over time. Achieving these competitive pricing levels requires devices that are optimized to support 25GbE.

This is precisely why Mellanox introduced the new ConnectX-4 Lx silicon for our 25GbE adapter products. The ConnectX-4 Lx is a dedicated 25/50GbE device with an x8 PCIe interface. This is in contrast to the larger and more expensive ConnectX-4 device, which has a wider PCIe interface and is capable of supporting 100GbE performance levels. Other offerings that try to cut corners with a one-size-fits-all approach won't be able to meet the aggressive price targets required by this market.
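The normalization argument above is simple arithmetic. A quick sketch with purely illustrative numbers: the $100 base price is an assumption for the example, and only the roughly 35% premium comes from the discussion above:

```python
# Illustrative numbers only: assume a 10GbE port costs $100 and the
# 25GbE port carries the ~35% premium cited above.
price_10g, speed_10g = 100.0, 10   # dollars, Gb/s
price_25g, speed_25g = 135.0, 25

cost_per_gbps_10g = price_10g / speed_10g   # $10.00 per Gb/s
cost_per_gbps_25g = price_25g / speed_25g   # $5.40 per Gb/s

savings = 1 - cost_per_gbps_25g / cost_per_gbps_10g
print(f"25GbE costs {savings:.0%} less per Gb/s of bandwidth")
```

Even at a hefty per-port premium, the 2.5x bandwidth jump roughly halves the cost per delivered Gb/s, which is the figure that matters at the data center level.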

4.     Ease of Adoption


Figure 4: ConnectX-4 Lx Adapter with Backwards Compatible SFP28 Connectors

At Mellanox we've worked hard to ensure that 25Gb/s Ethernet offers a seamless upgrade for 10GbE environments, with backwards compatibility that uses the same 10GbE LC fiber cabling already deployed in the data center. Other 25GbE NIC offerings require special QSFP-to-SFP28 breakout cables and thus do not provide backwards compatibility with existing LC fiber. In fact, there is no solution to connect these NICs to fiber at all.

By contrast, the ConnectX-4 Lx offers ordinary SFP-style connectors, enabling a choice of either copper or fiber connectivity in the same manner as existing 10GbE deployments.

5.     Product Robustness and Reliability

It is critical that a new technology is robust and reliable. Even a few bad customer experiences can create a perception that a technology has issues and is not ready for primetime. The perception of poor reliability is difficult to overcome and can set back the adoption of a new technology for years.

Building a robust and reliable product is hard and requires everything (silicon, hardware, software, and components) to be designed to the highest standards and built to last. Often weakness in one area can cascade and cause challenges that impact the entire system design and limit product reliability.

For example, a high-powered device may require special cooling, such as a mechanical fan. This should be a red flag, as fans can cause thermal and mechanical challenges and have the potential to limit overall reliability and adversely impact mean time between failures.


Figure 5: Fans on competitors' offerings are a big red flag that indicates high power, which can limit product lifetime

Fortunately, Mellanox ConnectX-4 Lx adapters are low power and don't require fans. The adapters are fully qualified and shipping as GA products. All of these products have undergone rigorous qualification screening and are designed for reliable operation.

6.     Corporate and Financial Stability

When you choose a technology provider you are also choosing a business partner, and it is important to consider the financial health and corporate well-being of the company behind the technology. After all, supporting and qualifying a new technology is difficult and requires a significant resource investment by the system vendor. You want to make sure that your business partner is financially healthy, has a strong leadership team in place, and will continue to invest in software and hardware to drive the technology forward. A company that is financially strong, with growing revenues and profits, has the ability to keep investing R&D resources to expand application support and develop new technology. It can be a huge setback to find that all of your key contacts at your technology supplier suddenly don't work there anymore. So when you choose your technology partners, consider not just the technology, operational capabilities, and reliability, but also the financial health and stability of the companies you work with.

7.     End to End Portfolio

When introducing a new technology it is important that a comprehensive product offering is in place that allows for end to end connectivity. Mellanox has an entire end to end product line including 25, 50, and 100 GbE ConnectX-4/4Lx adapters, Spectrum switches, and LinkX cables that are generally available and shipping in volume. This is important as it allows Mellanox to perform integration and optimization at every level of the product line to ensure that solutions just work. By qualifying our end to end product line we learn a great deal about each individual component which allows us to improve on all fronts.

But it is equally important that we have interoperability with the entire 25Gb/s Ethernet ecosystem. As one of the founding members of the 25G Ethernet Consortium, a member of the Ethernet Alliance, and a founding member of the RoCE Initiative, Mellanox is committed to compliance and interoperability in order to drive 25Gb/s Ethernet technology forward.

Conclusion

The race to 25G Ethernet technology has just begun, and it is important to be a leader and deliver this new technology. However, it goes well beyond what a company says; there are concerns far more important than saying you are first. Here we've outlined seven critical issues that will ultimately determine who wins the race to 25G Ethernet technology. But no matter which provider wins, one thing is certain: 25G Ethernet is a great new technology that delivers compelling value, and the customer will win for sure.

 


Mellanox and NexentaEdge Crank Up OpenStack Storage with 25GbE!

Mellanox and NexentaEdge High Performance Scale-Out Block & Object Storage Deliver Line-Rate Performance on 25Gb/s and 50Gb/s Fabrics.

This week at the OpenStack Summit in Austin, we announced that Mellanox end-to-end Ethernet solutions and the NexentaEdge high performance scale-out block and object storage are being deployed by Cambridge University for their OpenStack cloud.

Software-Defined Storage (SDS) is a key ingredient of OpenStack cloud platforms, and Mellanox networking solutions, together with Nexenta storage, are the key to achieving efficient and cost-effective deployments. Software-Defined Storage fundamentally breaks the legacy storage model that requires a separate Storage Area Network (SAN) interconnect and instead converges storage onto a single integrated network.

NexentaEdge block and object storage is designed for any petabyte-scale OpenStack or container-based cloud and is being deployed to support Cambridge's OpenStack research cloud. The Nexenta OpenStack solution supports Mellanox Ethernet solutions from 10 up to 100 Gigabit per second.

NexentaEdge is a ground-breaking, high performance scale-out block and object SDS platform for OpenStack environments. NexentaEdge is the first SDS offering for OpenStack specifically designed for high-performance block services with enterprise-grade data integrity and storage services. Particularly important in the context of all-flash scale-out solutions, NexentaEdge provides always-on cluster-wide inline deduplication and compression, enabling extremely cost-efficient, high performance all-flash storage for OpenStack clouds.

Over the last couple of weeks, Mellanox and Nexenta worked to verify our joint solution's ability to linearly scale cluster performance with the Mellanox fabric line rate. The testbed comprised three all-flash storage nodes with Micron SSDs and a single block gateway. All four servers in the cluster were connected with Mellanox ConnectX-4 Lx adapters, capable of either 25Gb/s or 50Gb/s Ethernet.

NexentaEdge, configured with Nexenta Block Devices (NBD) on the gateway node, demonstrated 2x higher performance as the Mellanox fabric line rate increased from 25Gb/s to 50Gb/s.


For example, front-end 100% random write bandwidth (with 128KB I/Os) on the NBD devices scaled from 1.3GB/s with 25Gb/s networking to 2.8GB/s with 50Gb/s networking. If you consider the 3x replication factor for data protection, these front-end numbers correspond to 25Gb/s and 50Gb/s line-rate performance on the interface connecting the gateway server to the three storage nodes in the cluster. While NexentaEdge deduplication and compression were enabled, the dataset used for testing was non-dedupable and non-compressible in order to maximize network load.

Building and deploying an OpenStack cloud is made easier with reliable components that have been tested together. Mellanox delivers predictable end-to-end Ethernet networks that don't lose packets, as detailed in the Tolly Report. NexentaEdge takes full advantage of the underlying physical infrastructure to enable high performance OpenStack cloud platforms that deliver both CapEx and OpEx savings, as well as extreme performance scaling compared to legacy SAN-based storage offerings.

Can you Afford an Unpredictable Network?

For many, a predictable network is simply assumed. But it turns out that at the most advanced network speeds, predictable performance is extremely hard to deliver, and some vendors fall short. Unfortunately, for application-level and data center architects, the unpredictability of the underlying network can be hidden from view. It is fruitless to debug unpredictable application behavior at a system or application level when it is the underlying network that is behaving chaotically and dropping packets. At Mellanox, we deliver predictable networks that take the network out of the equation, letting providers and customers focus only on their applications, knowing that data communications just works.

 

In order to achieve predictable performance, it's important to understand how modern, open networking equipment is built. At this year's Open Compute Project (OCP) Summit in San Jose, we introduced Open Composable Networks (OCN), which represents the realization of the vision of the Open-Ethernet initiative first launched early in 2013. OCN demonstrates the power of open networking, as explained in the blog: Why are Open Composable Networks like Lego?

 

By disaggregating switches, OCN enables customers to choose the best hardware and the best software. At Mellanox, we are happy to provide customers with solutions at multiple levels, as we know that fundamentally we deliver predictable performance with the best switching solutions available, from the platform all the way down to the ASIC level. This blog provides the details to support that claim and explains how Spectrum-based switches deliver predictable performance.

 

The most obvious advantages of the Spectrum switch are 37% lower power and less than half the latency of Broadcom devices. But in fact, predictable performance is perhaps even more important to application performance and customer experience.

Today's advanced switching devices are complex beasts, and unfortunately their features sometimes get reduced to a short list of simple bullets. So when comparing Mellanox Spectrum-based switches to Broadcom Tomahawk-based offerings (Tolly Report), one might make the error of thinking they are roughly the same.

Continue reading

RoCE has Crossed the Chasm

In my previous post, I outlined how Gartner and The Register were predicting a gloomy outcome for Fibre Channel over Ethernet (FCoE) and made the assertion that in contrast RDMA over Converged Ethernet (RoCE) had quite a rosy future.  The key here is that RoCE has crossed the chasm from technology enthusiasts and early adopters to mainstream buyers.

 

In Crossing the Chasm, Moore outlines that the main challenge is that the Early Majority are pragmatists interested in the quality, reliability, and business value of a technology. Whereas visionaries and enthusiasts relish new, disruptive technologies, the pragmatist values solutions that integrate smoothly into existing infrastructure. Pragmatists prefer well-established suppliers and seek references from other mature customers in their industry. And pragmatists look for technologies with a competitive, multi-vendor ecosystem that gives them flexibility, bargaining power, and leverage.

To summarize, the three key requirements for a technology to cross the chasm are:

  1. Demonstration that the technology delivers clear business value
  2. Penetration of key beachhead in a mainstream market
  3. Multi-vendor, competitive ecosystem of suppliers

 

On all three fronts RoCE has crossed the chasm.

Continue reading

RoCE has Leaped the Canyon but FCoE Faces a Hellish Descent

I was talking with my colleague, Rob Davis, recently and he commented that "RoCE has leaped the canyon." Now, Rob is from Minnesota and they talk kind of funny there, but despite the rewording, I realized instantly what he meant. RoCE, of course, refers to RDMA over Converged Ethernet technology, and "leaped the canyon" was a more emphatic way of saying "crossed the chasm."

 

This is, of course, the now proverbial CHASM: the gap between early adopters and mainstream users made famous by the book, "Crossing the Chasm" by @GeoffreyAMoore. If you are serious about high-tech marketing and haven't read this book, then you should cancel your afternoon meetings, download it onto your Kindle, and dive in! Moore's Chasm, along with Clayton Christensen's The Innovator's Dilemma and Michael Porter's Competitive Strategy, comprise the sacred trilogy for technology marketers.

 


Crossing the Chasm – Source: (http://yourstory.com/2014/09/druva-inc-techsparks-pune-crossing-the-chasm/)

 

Continue reading

Ethernet Just Got a Big Performance Boost with Release of Soft-RoCE

Data center innovation just keeps getting faster, and RoCE just gave a big boost to Ethernet! Today we announced the release of open source Soft-RoCE. Soft-RoCE is a software implementation of RDMA over Converged Ethernet that allows RoCE to run on any Ethernet network adapter, whether or not it offers hardware acceleration.


 

This Soft-RoCE announcement comes fast on the heels of our big launch last week at One World Observatory of our next generation of 25 & 100Gb Ethernet adapters and switches. As we announced last week, both the Mellanox 25/50/100 Gigabit Ethernet Spectrum switch and the ConnectX-4 Lx 25/50 Gigabit Ethernet adapter fully support RoCE in hardware. As such, they offer the highest performance and most cost-effective RDMA over Ethernet solutions on the market.

 

 

So why would we want to enable the market with a software implementation of RoCE that runs on other Ethernet adapters that don't have hardware acceleration for RDMA? Because we think that once customers try RDMA and see the benefits, they will decide to deploy RoCE in their data centers, and we want to make this as easy as possible with Soft-RoCE. That way customers can run a proof of concept evaluation using existing client machines that have any Ethernet NIC and connect to a RoCE-enabled server or storage system. RoCE-enabled systems are available from Dell, DataON, HP, IBM, Iron Systems, Lenovo, SuperMicro, Zadara and others.

Continue reading

IBM Enterprise2014 Executive Summit: Turbo LAMP

This week at the IBM Enterprise2014 Executive Summit in Las Vegas, IBM unveiled new Power8-based infrastructure for cloud, data, web 2.0, and mobile engagement. Mellanox is being showcased as a key partner enabling critical platforms for IBM's Big Data analytics, cloud, and software-defined "Elastic Storage" solutions. The new Power8 platform incorporates Mellanox 40Gb Ethernet networking gear and a fully integrated Turbo LAMP (Linux, Apache, MySQL, PHP) software stack.

 


 

This Turbo LAMP stack came about through a development partnership between IBM, Mellanox and several software vendors:

  • Canonical (Ubuntu Linux & Apache Web Server)
  • SkySQL (MariaDB/MySQL Database)
  • Zend (PHP)

The Turbo LAMP integration is important because it is the foundation for the most common e-commerce, content management, and Big Data analytics platforms. This integration allows customers to deliver optimized mobile and web applications while offering the critical performance, scale, and secure access that businesses need.

On Thursday October 9, our very own Matthew Sheard will be on stage at IBM Enterprise Conference providing details on the solution as outlined in this presentation.

Continue reading

Road to 100Gb/sec…Innovation Required! (Part 3 of 3)

Physical Layer Innovation: Silicon Photonics

So in two previous posts, I discussed the innovations required at the transport, network, and link layers of the communications protocol stack to take advantage of 100Gb/s networks. Let's now talk about the physical layer. A 100Gb/s signaling rate implies a 10ps symbol period.

Frankly, this is just not possible on a commercial basis with current technology, on either copper or optical interfaces. At this rate the electrical and optical pulses just can't travel any useful distance without smearing into each other and getting corrupted.

So there are two possible solutions. The first is to use four parallel connections, each running at 25Gb/s. The second is to use a single channel with a 25Gbaud symbol rate but to send multiple bits per symbol period. This can be done either electrically, through techniques like Pulse Amplitude Modulation (PAM4, which encodes two bits per symbol), or optically, by sending four different colors of light on a single fiber using Wavelength Division Multiplexing (WDM) techniques. Continue reading
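The arithmetic behind these options can be sketched in a few lines. The numbers here are approximate and for illustration only, assuming a lane that tops out around a 25Gbaud symbol rate:

```python
from math import log2

# Assume a single serial lane tops out around a 25Gbaud symbol rate.
SYMBOL_RATE_GBAUD = 25

def lane_rate_gbps(bits_per_symbol: int) -> int:
    """Data rate of one lane: symbol rate times bits carried per symbol."""
    return SYMBOL_RATE_GBAUD * bits_per_symbol

# Option 1: four parallel NRZ lanes (1 bit/symbol) reach 100Gb/s,
# whether as four fibers/copper pairs or four WDM wavelengths on one fiber.
assert 4 * lane_rate_gbps(1) == 100

# Option 2: PAM4 uses four amplitude levels, so log2(4) = 2 bits/symbol,
# doubling each lane to 50Gb/s; two such lanes also reach 100Gb/s.
assert int(log2(4)) == 2
assert 2 * lane_rate_gbps(2) == 100
```

Either way, the per-lane symbol period stays at a manageable 40ps rather than the impossible 10ps a serial 100Gb/s NRZ channel would require.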