Remember server virtualization? You know, that technology that was the buzz of the industry just a few short years ago. The one that got supplanted by the “cloud”. You do remember the cloud, don’t you? It’s that thing that turned the economics of IT services upside down and unshackled us from the surly bonds of IT departments and Ops managers.
Network virtualization is the new buzz (we skipped storage in there somewhere but will eventually get around to it). And rightfully so, because that’s where all the action seems to be right now. If you peruse the portfolios of most Silicon Valley VCs you will see a list of who’s who in the world of virtualizing the network. And it’s about time we got back around to this space. I say “got back around” because we touched on this area back in the middle of the last decade with the likes of Inkra Networks, which was, for its time, one very cool technology. Unfortunately the technology was way ahead of our ability to shoe-horn it into our legacy (pre-historic may be a better description) operational and organizational models, and while the Inkra team made a good run of it, they eventually closed the doors. Vyatta picked up the mantle and has done quite well in creating virtualized network services, leading us into the next generation, which is really exciting.

Exciting in a couple of ways. First, current network virtualization focuses on truly abstracting the network layer from network services, as opposed to simply creating virtual network services (firewalls, load balancers, etc.) that sit alongside the network OS, so to speak. And second, there is talk of abstracting not only the OS from the hardware and underlying switching/routing functions, but also the business services from the underlying OS. Now we can (hopefully) actually start talking in terms of business needs and strategy.
OpenFlow has created a ripple in the time/space fabric of networking and looks to be gaining a reasonable foothold outside the academic arena. If the number of OpenFlow-related “projects” (thanks to Martin Casado) is any indication of progress in this space, then it looks like the momentum is building. OpenFlow and the broader notion of Software-Defined Networking (SDN), which is promoted by the Open Networking Foundation (ONF), approach the problem from a different perspective than their predecessors. OpenFlow separates control of the network from the data flow, moving the former to a separate control point (a controller, in OpenFlow parlance). This is a great model in that it allows the deployment of commodity switches that use non-proprietary hardware and much lighter-weight software stacks. That’s good for the enterprise side of the equation, but the big networking vendors haven’t exactly been lining up behind the flag-bearer and implementing fully functional OpenFlow capabilities in their existing product lines. But with some pretty big players like Google demonstrating that this stuff does work, and at “web-scale” no less, it’s a given that they will have to eventually offer something in this space. It’s just not yet clear what that will be.
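To make the control/data separation concrete, here is a toy sketch in Python. This is not a real OpenFlow implementation and none of these class names come from any actual library; it just models the core idea: the switch holds a dumb match-to-action flow table, and all forwarding policy lives in a separate controller that installs rules on a table miss.

```python
# Illustrative sketch only -- not real OpenFlow. Models the split between a
# commodity switch (flow table, fast path) and a centralized controller
# (policy, slow path). All names here are invented for illustration.

class FlowTable:
    def __init__(self):
        self.rules = []  # list of (match_fields, action) pairs

    def install(self, match, action):
        self.rules.append((match, action))

    def lookup(self, packet):
        # Return the action of the first rule whose fields all match.
        for match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return None  # table miss


class Switch:
    """Dumb data plane: match packets against the table, punt misses."""
    def __init__(self, controller):
        self.table = FlowTable()
        self.controller = controller

    def handle(self, packet):
        action = self.table.lookup(packet)
        if action is None:
            # Miss: ask the controller, which also installs a rule so
            # subsequent packets to this destination stay on the fast path.
            action = self.controller.packet_in(self, packet)
        return action


class Controller:
    """Centralized control plane: all forwarding decisions are made here."""
    def packet_in(self, switch, packet):
        action = f"output:{hash(packet['dst']) % 4}"  # toy port choice
        switch.table.install({"dst": packet["dst"]}, action)
        return action
```

The first packet to a new destination takes the slow path through the controller; every later packet to that destination is matched locally by the switch, which is exactly the division of labor that lets the switch hardware stay simple and commoditized.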
ONF has a good overview presentation of SDN and I suggest you take some time to peruse it if you haven’t already done so. There are two slides – Restructured Network and Software-Defined Network (sorry, no slide numbers, but they should be slides 23 and 24) – that are of particular interest as we think about the future of this latest technological evolution. In both examples the network OS has been abstracted from the underlying hardware – leaving the specialized (or in the SDN case, potentially commodity) hardware to handle packet-forwarding responsibilities. This allows a rich set of OS capabilities to be built in a more open-source manner and provides the flexibility to exchange one OS model for another – though that is not a likely scenario, at least on a routine basis. But the real value lies in the layer above the network OS layer – the feature sets. This is where companies will have the ability to differentiate themselves in terms of capabilities that address business problems without being constrained by the underlying network architecture, proprietary operating systems and specialized hardware.
Unfortunately the bulk of the activity is still in the development of SDN-compatible switch technology and controllers, which should be expected at this stage of the life cycle. But as we move forward through the life cycle, the definition of northbound APIs (from the controller) will be critical in the development of a true “top to bottom” business-driven network topology, much like the description of the southbound interface on the Software-Defined Network slide (24). Without the same level of rigor applied in defining those APIs, the ability to craft reusable features (applications) that can sit on top of different OS models will be lost, and we will be back to vendor-specific (and controlled) solution sets.
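What might such a northbound API look like? A sketch, purely hypothetical since no standard northbound API exists: an application expresses a business-level intent (“isolate these two tenant groups”), and a compilation step translates it into the low-level rules a controller pushes southbound. Every name and field below is invented for illustration.

```python
# Hypothetical northbound API sketch: declarative business intents in,
# low-level flow rules out. Nothing here corresponds to a real standard;
# the point is the layering, not the vocabulary.

def compile_intent(intent):
    """Translate one declarative intent into controller-level flow rules."""
    if intent["type"] == "isolate":
        a, b = intent["groups"]
        # Isolation is symmetric: drop traffic in both directions.
        return [
            {"match": {"src_group": a, "dst_group": b}, "action": "drop"},
            {"match": {"src_group": b, "dst_group": a}, "action": "drop"},
        ]
    if intent["type"] == "allow":
        a, b = intent["groups"]
        return [{"match": {"src_group": a, "dst_group": b},
                 "action": "forward"}]
    raise ValueError(f"unknown intent type: {intent['type']}")
```

The rigor argument in the paragraph above amounts to this: if the `intent` vocabulary is standardized, the same application can run unchanged on top of any controller; if each vendor defines its own, the application is locked to that vendor’s stack.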
The same holds true for what I will call the “east-west” interfaces, or the ability to integrate with existing network management systems or cloud operating systems such as OpenStack, CloudStack, etc. While potentially not quite as important as the northbound APIs, they are still very critical to creating an open-architecture network topology that takes advantage of the benefits virtualization can bring, and pretty important in being able to create virtualized, cloud-enabling networking models.
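The “east-west” integration is essentially an adapter problem, which can be sketched as follows. This is not real OpenStack or CloudStack code; every class and method name is hypothetical. The shape is what matters: the orchestrator speaks its own provisioning vocabulary, and a thin adapter maps those calls onto whatever the SDN controller exposes.

```python
# Purely illustrative east-west adapter between a cloud orchestrator and an
# SDN controller. All names invented for illustration.

class SdnController:
    def __init__(self):
        self.networks = {}

    def create_segment(self, name, segment_id):
        self.networks[name] = {"id": segment_id, "ports": []}

    def attach_port(self, name, port):
        self.networks[name]["ports"].append(port)


class CloudAdapter:
    """Translates orchestrator-style calls into controller calls."""
    def __init__(self, controller):
        self.controller = controller
        self.next_id = 100  # toy VLAN/tunnel ID allocator

    def create_tenant_network(self, tenant):
        segment = f"{tenant}-net"
        self.controller.create_segment(segment, self.next_id)
        self.next_id += 1
        return segment

    def boot_vm(self, tenant, vm_name):
        # Booting a VM implies plumbing its virtual port into the tenant net.
        self.controller.attach_port(f"{tenant}-net", vm_name)
```

If this adapter boundary were standardized, swapping one cloud OS (or one controller) for another would mean replacing only the adapter, which is the open-architecture payoff the paragraph above is arguing for.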
The question is, where will that definition and standardization be done? It doesn’t appear to be on the near-term radar screen for either the ONF or the IETF, which are both posturing to set their own standards for SDN, with the IETF focusing on software-“driven” networks. The IETF also seems to be taking a broader approach, embracing both virtualized and legacy network topologies in the same “orchestration” model. So I think we will see some “gorilla dust” for a while as companies line up behind one or the other (or both) of these initiatives. Hopefully we will get to a point where the dust settles and we have a reasonable set of API definitions that groups like the Open Data Center Alliance can use to build reusable “templates” for data center and cloud services.
In the meantime, it will be interesting to watch the natural progression take place and see who gets consumed by whom…