Building a mission-critical system requires the careful selection of constituent technologies and solutions based on their ability to support the goals of the overall system. This holds regardless of which approach to building such a system we subscribe to.
It is well known that the network technologies commonly used today to build a data center network have not adequately met the needs of applications and operators. What is being drowned out in the media storm around "SDN", however, is that many of the present-day challenges in the data center network can be addressed with an existing and familiar toolkit. The vision for SDN should reach beyond where today's data center networks could already have been yesterday.
In this "treatise" I highlight what I believe has been lacking in today's data center core network toolkit and address certain misconceptions. I'll begin by listing the key aspects of a robust network, followed by perspectives on each.
A robust network should be well evolved in all of the following aspects:
- Modularity - freedom to choose component solutions based on factors such as bandwidth, latency, port density, cost, and serviceability. This generally requires the network to be solution- and vendor-neutral, as no single solution or vendor satisfies all requirements. Management and control-plane applications are not excluded from this requirement.
- Automation - promotes the definition of robust network services, automated instantiation of those services, full-cycle management of network services and of other physical and logical network entities, and API-based integration into larger service offerings.
- Operations - functional simplicity and transparency (not a complicated black box), ease of finding engineering and operations talent, and ease of building or buying robust software to transparently operate the network.
- Flexibility - any port (physical or virtual) should support any network service (“tenancy”, service policy, etc). This property implies that the network can support multiple coexisting services while still meeting end user performance and other experience expectations.
- Scalability - adding capacity (bandwidth, ports, etc) to the network should be trivial and achievable without incremental risk.
- Availability - through survivability, rapid self-healing and a low susceptibility to faults in the first place.
- Connectivity - in the form of a simple, robust and consistent way to federate network fabrics and internetwork.
- Cost - since inflated costs inhibit innovation.
This blog post is about the location-based connective fabric over which higher-layer network services and applications operate. I might talk about "chaining in" conversation-based network services (such as stateful firewalling, traffic monitoring, load-balancing, etc) another time.
On modularity.
Real modularity frees you to select components for your system that best meet its requirements, without the constraint of unnecessary lock-in. In today's network, the parts that can make or break this form of modularity are generally the control-plane and data-plane protocols.
Network protocols are like language. Proprietary protocols affect networking the way language silos affect civilization: they hinder a connected world from forming. It took the global acceptance of English to enable globalization -- for the world to be more accessible and for opportunities not to be constrained by language.
The alphabet soup of proprietary and half-finished control-plane protocols that has been forced on us has produced mind-bending complexity and has become a drag on the overall advancement of the network. Each vendor claims its proprietary protocol is better than the other guy's, but we all know that proprietary protocols are primarily a tool to lock customers out of choice.
Based on the evidence, it's reasonable to believe that robust open protocols, and consistently robust implementations of them, would address the modularity requirement. We can see this success in the data-plane protocols of TCP/IP and basic Ethernet.
Many folks see the world of network standards as a pile of RFCs related to BGP, IGPs, and other network information exchange protocols. What they don't see is that many RFCs actually describe network applications (sound familiar?). For example, IETF RFC 4364 ("BGP/MPLS IP Virtual Private Networks") describes the procedures and data exchange required to implement a network application used for creating IP-based virtual private networks. It describes how to do this in a distributed way using MP-BGP for NLRI exchange (control plane) and MPLS for the data plane -- in other words, it does not attempt to reinvent what it can readily use. Likewise there are RFCs that describe other applications such as Ethernet VPNs, pseudo-wire services, traffic engineering, etc.
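To make that concrete, here is a minimal Python sketch of the kind of data an RFC 4364 speaker exchanges -- a VPN-IPv4 route carrying a route distinguisher, route targets and an MPLS label -- plus the route-target test an importing PE applies. The class and field names are mine, purely for illustration; this is not a wire format or a real BGP implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VpnIpv4Route:
    """Illustrative shape of an RFC 4364 VPN-IPv4 route (not a wire format)."""
    rd: str                 # route distinguisher, e.g. "65000:42"
    prefix: str             # IPv4 prefix inside the customer VPN
    next_hop: str           # PE loopback, reachable via the provider core
    mpls_label: int         # VPN label that demultiplexes traffic at the egress PE
    route_targets: list = field(default_factory=list)  # export RTs

# A PE exporting a customer route from a VRF into MP-BGP:
route = VpnIpv4Route(
    rd="65000:42",
    prefix="10.1.0.0/16",
    next_hop="192.0.2.1",
    mpls_label=30042,
    route_targets=["target:65000:42"],
)

def should_import(vrf_import_rts, route) -> bool:
    """A receiving PE imports the route only into VRFs whose import
    route-targets intersect the route's export route-targets."""
    return bool(set(vrf_import_rts) & set(route.route_targets))

print(should_import(["target:65000:42"], route))  # True: this VRF wants it
```

The point of the sketch is simply that the RFC specifies an application (the VPN) while reusing existing control-plane and data-plane machinery to carry it.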
OpenFlow extends standardization to the API of the data plane elements, but it is still only a single chapter in the modularity story. Putting a proprietary network application on top of OpenFlow damages OpenFlow's goal of openness, since a system is only as open as its least open part.
The modularity of any closed system eventually breaks down.
On automation.
Achieving good network automation has been more of a challenge for some operators than for others: the mountain is hard to climb, but once you're over the top it gets a lot easier. Based on my experience, the challenges with automating today's networks are concentrated in a handful of places.
- Low-level configuration schema rather than service-oriented schema. This is fairly obvious when you look at the configuration schema of common network operating systems. To pull together a network service such as an IPVPN, many operators need a good "cookbook" that provides a "recipe" describing how to combine a little bit of these statements with a little bit of those seemingly unrelated statements, and that hopefully also warns you of hidden dangers in often undocumented default settings. (The first sketch after this list contrasts this with a service-level description.)
- Different configuration languages to describe the same service and service parameters. The problem also extends to the presentation of status and performance information. There are very large multi-vendor networks today that seamlessly support L2 and L3 VPNs, QoS, and all that other good stuff across the network -- this is made possible by the use of common control and data plane protocols. However it is quite a challenge to build a provisioning library that has to speak in multiple tongues (ex: IOS, JunOS, etc), and keep up with changes in schema (or presentation). This problem highlights the need for not only common data plane and control plane languages, but also for common management plane languages.
- Inability to manage one instance of a service independently of another. An ideal implementation would allow configuration data related to one service instance to be tagged so that services can easily be retired or modified without affecting other services that may depend on the same underlying resources. Even where OpenFlow is involved, rules common to different service instances need to be managed so that removing one service instance does not remove a rule other instances still share (see the reference-counting sketch after this list) -- with OpenFlow, this forces the need for a central all-knowing controller to compile the vectors for the entire fabric and communicate the diffs to data plane elements.
- Lack of a robust messaging toolkit to capture and forward meaningful event data. Significant efficiencies can be achieved when network devices capture events and deliver them over a reliable messaging service to a provisioning or monitoring system (syslog does not qualify). For example [simplistically], if a switch could capture LLDP information regarding a neighbor and send it to a provisioning station along with information about itself, the provisioning system could autonomically configure the points of service without an operator driving the provisioning (see the event-driven sketch after this list).
- Inability to atomically commit all the statements related to the creation, modification, deletion and rollback of a service instance as a single transaction. Some vendors have done a better job than others in this area (the first sketch after this list illustrates the transactional idea as well).
- Excessively long commit times (on platforms where commits are supported). Commits should take milliseconds, not tens of seconds, to be considered API quality.
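To illustrate the first and last-but-one points above, here is a sketch of what a service-oriented schema and an atomic commit could look like. Everything here -- the schema fields, the device methods -- is hypothetical, meant only to show the shape of the idea, not any vendor's actual interface.

```python
# A hypothetical service-level description: the operator states intent;
# a compiler derives the vendor-specific statements per device.
ipvpn_service = {
    "service": "ipvpn",
    "name": "acme-finance",
    "route_distinguisher": "65000:42",
    "route_targets": {"import": ["65000:42"], "export": ["65000:42"]},
    "attachments": [
        {"device": "pe1", "port": "et-0/0/1", "vlan": 100, "ipv4": "10.1.1.1/30"},
        {"device": "pe2", "port": "et-0/0/7", "vlan": 100, "ipv4": "10.1.2.1/30"},
    ],
}

class ServiceTransaction:
    """Apply all compiled statements as one transaction: commit everywhere
    or roll back everywhere, leaving no half-configured service behind."""
    def __init__(self, devices):
        self.devices = devices

    def __enter__(self):
        for d in self.devices:
            d.open_candidate_config()   # assumed device method (hypothetical)
        return self

    def __exit__(self, exc_type, exc, tb):
        ok = exc_type is None and all(d.validate() for d in self.devices)
        for d in self.devices:
            d.commit() if ok else d.rollback()
        return False  # re-raise any exception after rolling back
```

Usage would be something like `with ServiceTransaction(devices): push(compile(ipvpn_service))` -- the operator never touches the low-level statements directly.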
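Next, a sketch of the rule-sharing problem from the bullet on independent service instances: reference-count the rules that instances share, so that retiring one instance never removes a rule another instance still depends on. The names are illustrative, not part of any real OpenFlow controller's API.

```python
class SharedRuleTable:
    """Reference-count flow rules shared by multiple service instances."""
    def __init__(self):
        self.refcount = {}   # rule -> number of service instances using it
        self.rules_of = {}   # service id -> set of rules it contributed

    def add_service(self, service_id, rules):
        self.rules_of[service_id] = set(rules)
        for rule in rules:
            self.refcount[rule] = self.refcount.get(rule, 0) + 1
            if self.refcount[rule] == 1:
                self.install(rule)       # first user: push to the data plane

    def remove_service(self, service_id):
        for rule in self.rules_of.pop(service_id, set()):
            self.refcount[rule] -= 1
            if self.refcount[rule] == 0:
                del self.refcount[rule]
                self.uninstall(rule)     # safe: no other instance uses it

    def install(self, rule):
        print(f"install {rule}")         # stand-in for a controller push

    def uninstall(self, rule):
        print(f"uninstall {rule}")

table = SharedRuleTable()
table.add_service("svc-a", ["arp-flood", "tenant-a-fwd"])
table.add_service("svc-b", ["arp-flood", "tenant-b-fwd"])
table.remove_service("svc-a")            # "arp-flood" survives; svc-b needs it
```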
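And finally, a sketch of the event-driven provisioning described in the messaging bullet, assuming a reliable message bus and a hypothetical inventory lookup; the function names are invented for illustration.

```python
import json
import queue

events = queue.Queue()   # stand-in for a reliable message bus (syslog does not qualify)

def on_lldp_neighbor(switch, port, neighbor_id):
    """Runs on (or on behalf of) the switch: publish what was just learned."""
    events.put(json.dumps({"event": "lldp-neighbor", "switch": switch,
                           "port": port, "neighbor": neighbor_id}))

def lookup_intended_service(neighbor_id):
    # Hypothetical inventory lookup: which service should this neighbor get?
    return {"vlan": 100, "vrf": "acme-finance"}

def configure_port(switch, port, service):
    print(f"provisioning {switch}:{port} -> {service}")  # stand-in for a config push

# The provisioning station reacts to events instead of being driven by hand.
on_lldp_neighbor("tor-3", "et-0/0/12", "server-8f2a")
evt = json.loads(events.get())
if evt["event"] == "lldp-neighbor":
    service = lookup_intended_service(evt["neighbor"])
    if service:
        configure_port(evt["switch"], evt["port"], service)
```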
Interestingly, the existence of these issues gives impetus to the belief by some that OpenFlow will address them by effectively getting rid of them. OpenFlow is to today's CLI as assembly language is to Python. With all the proprietary extensions tacked onto the side of most OpenFlow implementations, the situation is hardly any better than crummy old CLI, except that now you need a couple of great software developers with academic levels of network knowledge and experience (a rare breed) to engineer a network.
On the other hand, for the sake of automation, buying a proprietary centralized network application that uses OpenFlow to speak to data plane elements isn't necessarily an ideal choice either. These proprietary network applications may support a northbound interface based on a common service schema (management plane) and issue OpenFlow-based directives to data plane elements, but they implement proprietary procedures internally -- a black box.
On operations.
Relying on hidden procedures at the heart of your enterprise's connectivity isn't, in my opinion, a good thing. Today's network professionals believe that control-plane and data-plane transparency is essential to running a mission-critical network, since that transparency is essential to identifying, containing and resolving issues. The management plane, on the other hand, is considered important for the rapid enablement of services but, in most cases, is less of a concern in relation to business continuity. Some perspectives on SDN espouse fixing issues at the management plane at the expense of disrupting and obscuring the control plane. Indeed, some new SDN products don't even use OpenFlow.
Vendors and others that believe enterprises are better off not understanding the glue that keeps their system together are mistaken. Banks need bankers, justice needs lawyers, the army needs generals, and mission-critical networks need network professionals. One could define APIs to drive any of these areas and have programmers write the software, but a lack of domain expertise invites failure.
When I first read the TRILL specification I was pretty baffled. The network industry had already created most of the basic building blocks needed to put STP-based protocols out of their miserable existence, and yet it needed to invent another protocol that would bring us only halfway to where we needed the DC network to be. The first thing that crossed my mind when I came out of the sleep induced by reading that document was the challenge of managing all the different kinds of wheels created by endless reinvention. Trying to holistically run a system with wheels of different sizes and shapes, all trying to do effectively the same thing yet in different ways, is mind-numbing and counter-productive. My reaction was to do something about it -- hence E-VPN, which enables scalable Ethernet DCVPN based on an existing, proven wheel.
Domain expertise is built on transparency, and success is proportional to how non-repetitive, minimal, structured and technically sound your choice of technologies is -- and you'll be better off if your technologists can take the reins when things are heading south.
On flexibility.
Anyone who has built a large data center network in the past 15 years is familiar with the tradeoffs imposed by spanning tree protocols and Ethernet flooding. A network optimized for physical space (i.e. any server anywhere) traded away fault-domain size, while a network optimized for high availability traded away freedom in the placement of systems (or accumulated a spaghetti of patches). When the DC started to fill up, things got ugly. Nor were things much better with regard to multi-tenancy at the IP layer of the data center network.
As I mentioned before, some of the same vendors in the DC network space had already created the building blocks for constructing scalable and flexible networks for service providers. But they kept these technologies out of the hands of DC network operators -- the DC business made its profits on volume while the WAN business made its dollars on premiums for feature-rich equipment. If network vendors introduced lower-cost DC equipment with the same features as their WAN equipment, they risked harming their premium WAN business. Often the vendor teams that engineered DC equipment were different from those that engineered WAN equipment, and they did not always work well together, if at all.
WAN technology such as MPLS was advertised as being too complex. Having built both large WAN and DC networks, I'll admit I've had a harder time building a good DC network than building a very large and flexible MPLS WAN. The DC network often "got me", while the WAN technology was far more predictable. But instead of flexible DCVPN based on robust, scalable and established technologies, we were given proprietary flavors of TRILL for the DC. The DC network was essentially turf protected by walls made of substandard proprietary protocols. The good news is that all of that is changing -- our vendors have known for quite some time that WAN technology is indeed good for the DC and that artificial lines need not be drawn between the two.
A flexible DC network allows any service to be enabled on any network port (physical or virtual). One could opt to achieve this flexibility using network software running in hypervisors under the control of centralized SDN controllers. This model might be fine for environments where compute virtualization is pervasive and where good techniques to reduce risk are employed. Most enterprise DC environments, on the other hand, will continue to have "bare metal" servers networked over physical ports. "Physicalization" remains strong, and PaaS in an enterprise DC certainly does not require a conventional hypervisor. Some environments may even need to extend a virtual network to other local or remote devices on which it will not be possible to impose a hypervisor. Ideally the DC network would not need to be partitioned into parts that are interconnected by specialized gateway choke points. The goal of reducing network complexity and increasing flexibility can't be achieved without eliminating gateways and other service nodes where they only get in the way.
On scalability.
The glue that holds a network together is its control plane -- its job is to efficiently and survivably distribute information about the position and context of destinations on a network and to determine the best paths via which to reach them. The larger the network, the more the details of the control plane matter. As I alluded to before, spanning tree protocols showed up on stage with their seams already unraveling (which is why smart service providers avoid them like the plague).
The choice of control plane matters significantly in the scaling of a network that needs to provide seamless end-to-end services. A good network control plane combined with its proper implementation enables the efficient and reliable distribution of forwarding information from the largest and most expensive equipment on the network to the smallest and cheapest ones.
A good control plane takes a divide-and-conquer approach to scaling. For example, given a set of possible paths to a destination, BGP speakers advertise to their peers only the best paths they have chosen based on their position in the network. This avoids the sender transmitting more data than necessary and the receiver having to store and process more of it. Another scaling technique, used by some BGP-based network applications, is to have network nodes explicitly subscribe to only the routing information that is relevant to them. Scaling features are indeed available in good standards-based distributed control planes and are not unique to proprietary centralized ones.
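As a rough illustration of the divide-and-conquer point, here is a Python sketch of advertising only the chosen best path per prefix. The tie-breakers shown are a drastically simplified subset of BGP's real decision process, and the data structures are mine, not any implementation's.

```python
def best_path(paths):
    """Pick one best path. Real BGP compares local-pref, AS-path length,
    origin, MED, and more; we keep just two tie-breakers for illustration."""
    return min(paths, key=lambda p: (-p["local_pref"], len(p["as_path"])))

def advertise(rib_in):
    """Advertise only the chosen best path per prefix, not every path heard.
    Senders transmit less; receivers store and process less."""
    return {prefix: best_path(paths) for prefix, paths in rib_in.items()}

rib_in = {
    "10.1.0.0/16": [
        {"next_hop": "192.0.2.1", "local_pref": 100, "as_path": [65001, 65002]},
        {"next_hop": "192.0.2.9", "local_pref": 100, "as_path": [65003]},
    ],
}
print(advertise(rib_in))   # one route per prefix goes to peers
```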
One could argue that a central controller can do a better job since it has full visibility into all the information in the system. However, a central controller can only support a domain of limited size -- more or less what an instance of an IGP can support. The benefits are therefore tempered when building a seamless network of significant scale, since doing so necessitates multiple controller domains that share summarized information with each other. As you step back, you begin to see why [scalable and open] network protocols matter.
In addition to the network control plane, open standards have enabled further scaling in other ways, such as allowing one service domain to be cleanly layered over another service provider domain without the customer or the service provider having to know the details of the layer above or below.
There are other properties that tend to be inherent in scalable network technology and its implementations. Bandwidth scaling, for example, (1) avoids the need for traffic to move back and forth across the network as it makes its way between two endpoints, (2) avoids multiple copies of the same packet on any single link of the network (whether intra- or inter-subnet), and (3) makes it possible for different traffic types to share the same physical wires fairly, with minimal state and complexity. Scalable network technology is also fairly easy to understand and debug (but I said that already).
On availability.
In today's networks, each data plane node is co-resident with its own personal control-plane entity inside the same sheet-metal packaging, and the communication between that control-plane entity and its coupled data plane entity is opaque. Some operators reduce the size of each box to reduce the unknowns related to the failure of any single box. The complexity inside the box matters less when the impact of the box's failure is contained, and what the operator can see -- the control-plane dialog between boxes -- is something he understands. Now imagine that box is the size of your DC network, and the "inside" is still opaque.
In my experience the biggest risk to high availability is software. The quality of software tends to vary as the companies that produce it go through shifts and changes, and many operators have been on the receiving end of this dynamic lately. Even software that is intended to improve availability tends to be a factor in reducing it. To minimize the chance of getting hit hard by a software bug, many operators deploy new software into their network in phases (after first testing it in their lab, hopefully). Any approach that requires an operator to cross his fingers and upgrade large chunks of the network -- or, God forbid, the whole thing -- is probably not suitable for a mission-critical system. In my opinion, it is foolish to trust any vendor's software to be bug-free. Building a mission-critical system involves reducing software fault domains as much as it does reducing other single points of failure.
The most resilient networks would have two separate network "rails", with hosts that straddle both rails and software on the hosts that knows how to dynamically shift away from problems on either network. For the highest availability, the software running each rail would differ from the other, so the two rails are not subject to the same bugs.
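A sketch of the host-side logic, assuming some per-rail liveness probe (simulated here with a random check). The point is only that the shifting decision lives on the host, outside either rail's software, so a bug in one rail can't take out the escape hatch.

```python
import random
import time

RAILS = ["rail-a", "rail-b"]   # two independent networks running different software

def healthy(rail) -> bool:
    # Stand-in for a real liveness probe, e.g. heartbeats sent across each rail.
    return random.random() > 0.05

def pick_rail(preferred):
    """Stay on the preferred rail while it is healthy; shift away on trouble."""
    if healthy(preferred):
        return preferred
    other = RAILS[1 - RAILS.index(preferred)]
    return other if healthy(other) else preferred  # both sick: no good choice

active = "rail-a"
for _ in range(3):
    active = pick_rail(active)
    print(f"sending via {active}")
    time.sleep(0.1)   # in practice: probe continuously, shift per-flow or per-host
```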
On connectivity.
To scale out a network and reduce risk, we would most likely divide the network into smaller control-plane domains (let's call them fabrics) and then interconnect those fabrics. There are two main ways we could go about federating fabrics into a fabric of fabrics.
Gateways -- We might choose to interconnect the fabrics using gateways, but depending on the gateway technology, gateways tend to damage the transparency between two communicating endpoints and/or create artificial choke points and burdensome complexity in the network. Gateways may be the way to go in public clouds, where tenants access their virtual networks through them, but in large enterprise data centers, fabric domains will more often be created for HA purposes than for creating virtual network silos with high data-plane walls.
Seamless -- In an ideal world, we would be able to create virtual networks that are not constrained to a single fabric. To achieve this, distributed controllers need to federate with each other using a common paradigm (let's say L2 VPN) and ideally a common protocol language. This takes us back to the need for a solid distributed control-plane protocol (we can't seem to get away from it!).
If seamless connectivity is not to be limited to a single fabric, then a good protocol will be the connectivity maker. That begs the question: why is a good protocol not good enough for the fabric itself? Hmm...
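Here is a sketch of what seamless federation implies: each fabric controller exports summaries of its local reachability to its peers in a common schema, much as an L2 VPN control plane (E-VPN, say) does with MAC routes. All names are illustrative, and a real federation would of course run over a real protocol rather than Python objects.

```python
class FabricController:
    """One control-plane domain ("fabric") summarizing what it knows."""
    def __init__(self, name):
        self.name = name
        self.local = {}     # mac -> local port
        self.remote = {}    # mac -> owning fabric

    def learn_local(self, mac, port):
        self.local[mac] = port

    def export_routes(self):
        # Advertise only summaries: "these MACs live behind fabric X".
        return [(mac, self.name) for mac in self.local]

    def import_routes(self, routes):
        for mac, fabric in routes:
            if fabric != self.name:
                self.remote[mac] = fabric

# Federation: controllers peer directly, with no gateway choke point.
a, b = FabricController("fabric-a"), FabricController("fabric-b")
a.learn_local("aa:bb:cc:00:00:01", "port-1")
b.import_routes(a.export_routes())
print(b.remote)   # {'aa:bb:cc:00:00:01': 'fabric-a'}
```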
On cost.
Health care in the United States is among the most expensive in the world, but if I were in need of serious medical attention I wouldn't want to be anywhere else. If health care were free, innovation would probably cease; once patents expire and generics hit the market, life-saving medication becomes commoditized and more accessible. Similarly, when Broadcom brought Trident to market and Intel came along with Alta, building good data center networks became accessible to folks who weren't in big-margin businesses. The commodity silicon vendors of the world aren't at the head of the innovation curve, however. Innovation isn't cheap, but it does need to be fair to both the innovator and the consumer.
The truth about network hardware cost is that it depends on what you need and where you sit on the adoption curve. The other truth about cost is that customers have a role in driving costs down through the choices we make. For example: if I believe that routers should have 100GE coherent DWDM interface cards for DCI, then I put my dollars on the first vendor that brings a good implementation to market -- and I resist alternatives, since settling for them defeats my goal of making the market. There may be a higher price to being a first mover, but once a competitive market is established prices will fall and we're all the better for it. Competition flourishes when sellers know they can be easily displaced -- hence, again, standards. The alternative is to spend your dollars on run-of-the-mill technology and cry about it.
Fortunately, most data centers don't require the really fancy technology many vendors might have you believe they do (unless you love Fibre Channel). The problem is that no single vendor seems to have the right combination of technology to make an ideal network fabric. The network equipment we buy has a ton of features we probably don't need (like DCB) yet pay dearly for, and not enough of the stuff we could really use. In future blogs I hope to outline a small set of building-block technologies that I believe would enable simple, cheap and scalable data center fabrics. It just might be possible for DC network operators to have their cake and eat it too.
In conclusion.
One of the greatest values of the SDN "gold rush" to the enterprise DC network is not necessarily the change of hands to a brand new way of networking, but the light it has shone on the current dysfunction. SDN is giving network vendors a "run for their money" that will result in them closing long-standing gaps and bringing into the data center network the great standards-based technologies they've unnecessarily kept out of it.
There is more to be done beyond using the fear of SDN to ensure that network vendors make the best decisions on behalf of the rest of us -- decisions based on customer needs rather than on vendor politics and pure self-interest. The way standards bodies work today is driven more by the self-interest of each of their members than by what would be best for customers. On the other hand, it was technology buyers that drove the creation of standards bodies, because of how much worse things were before they existed. Costs can become unconstrained without competition, and once a closed solution is established, extracting it can be a long, costly and perilous journey.
Good standards bring structure to the picture, each good standard laying one more foundation on which other systems can be efficiently built. When the control and data planes are standards-based and properly layered, an Internet emerges. When the management plane is standardized, the network will reach more people faster.
New and innovative network control applications do have value, and they spark the out-of-the-box thinking that is good for the industry. But that conversation should not be a reason for holding back progress along proven lines. Fixing the problems with today's data center networks should not require giving the boot to thirty years of packet network evolution.
The market is agitating for network vendors to bring us solutions that work for us, not just for them. The message is loud and clear and the vendors that are listening will be successful. Vendors and customers that buy too much into the hype will lose big time. Vendors that don't respond to the challenge at hand will also find themselves heading towards the exit.
As much as it's been a bit tiring to search for gems of truth amidst the noise, it's also an exciting time for the data center network.
My advice to buyers and implementors? Choose wisely.