To know whether scale-out network virtualization in the enterprise DC is the answer to the problem, we need to understand the problem in a holistic sense. Let's set aside our desire to recreate the networks of the past (VLANs, subnets, and so on) in a new virtual space, and with an open mind ask ourselves some basic questions.
Clearly at a high level, enterprises wish to reduce costs and increase business agility. To reduce costs, it's pretty obvious that enterprises need to maximize asset utilization. But what specific changes should enterprise IT make to maximize asset utilization and enable safe business agility? This question ought to be answered in the context of the full sum of changes to the IT model necessary to gain all the benefits of the scale-out cloud.
Should agility come in the form of PaaS or stop short at IaaS? Should the individual machine matter? Should a service be tightly coupled with an instance of a machine, or should the service exist as data and application code that is independent of any machine (physical or virtual)?
In a scale-out cloud, the platform [software] infrastructure directs the spin-up and spin-down of application instances relative to demand. The platform infrastructure also spins up application capacity when capacity is lost due to physical failures. Furthermore, the platform software infrastructure ensures that services are provided only to authorized applications and users and secured as required by data policy. VLANs, subnets and IP addresses don't matter to scale-out cloud applications. Clearly network virtualization isn't a requirement for a well-designed scale-out PaaS cloud. (Multi-tenant IaaS clouds do have a very real need for scale-out network virtualization.)
So why does scale-out network virtualization matter to the "single-tenant" enterprise? Here are two reasons why I believe enterprises might think they need it, and two reasons why I think they may not need it after all.
Network virtualization for VM migration.
The problem in enterprises is that a dynamic platform layer such as I describe above isn't quite achievable yet because, unlike the Googles of the world, enterprises generally purchase most of their software from third parties and run legacy software that does not conform to any common platform controls. Many of the applications that enterprises use maintain complex state in memory that, if lost, can disrupt critical business services. Hence, the closest an enterprise can come to cloud status these days is IaaS -- i.e. take your physical machines and turn them into virtual machines. Given this dependence on third-party applications, dynamic bursting and similar truly cloudy capabilities aren't universally applicable in the back end of an enterprise DC.
The popularity of vMotion in the enterprise is testament to the current need to preserve the running state of applications. VM migration is primarily used for two reasons -- (1) to improve overall performance by non-disruptively redistributing running VMs to even out loads, and (2) to non-disruptively move virtual machines away from physical equipment that is scheduled for maintenance. This is different from scale-out cloud applications, where virtual machines would not be moved; rather, new service instances would be spun up and others spun down to address both cases.
We all know that for vMotion to work, the migrated VM needs to retain the same IP and MAC address as its prior self. Clearly, if VM migration were limited to only a subset of the available compute assets, this would lead to underutilization and hence higher costs. If a VM should be able to migrate to any available compute node (assuming again that retaining IP and MAC is a requirement), the requirement would appear to be scale-out network virtualization.
Network virtualization for maintaining traditional security constructs.
As I mentioned before, a scale-out PaaS cloud enforces application security beginning with a trusted registration process. Some platforms require the registration of schemas that applications are then forced to conform to when communicating with one another. This isn't a practical option for consumers of off-the-shelf applications. But clearly, not enforcing some measure of expected behavior between endpoints isn't a very safe bet either.
The classical approach to network security has been to create subnets and place firewalls at the top of them. A driving reason for this design is that implementing security at the end-station wasn't considered very secure since an application vulnerability could allow malware on a compromised system to override local security. This drove the need for security to be imposed at the nearest point in the network that was less likely to be compromised rather than at the end station.
When traditional firewall-based security is coupled with traditional LANs, a virtual machine is limited to the compute nodes physically connected to that LAN, and so we end up underutilizing the available compute assets on other LANs. However, if instead of traditional LANs we use scale-out LAN virtualization, then the firewall (physical or virtual) can sit anywhere, and the VMs it secures can be on any compute node. Nice.
So it seems we need scale-out network virtualization for vMotion and security...
Actually we don't -- not if we don't have a need for non-IP protocols. Contrary to what some folks might believe, VM migration doesn't require that a VM remain in the same subnet -- it requires that a VM retain its IP and MAC address, which is easily done using host-specific routes (IPv4 /32 or IPv6 /128 routes). Using host-specific routing, a VM migration would require that the new hypervisor advertise the VM's host route (initially at a lower preference, so traffic continues to flow to the original location) and the original hypervisor withdraw its route when the VM is suspended.
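The migration sequence above can be sketched as a best-path choice over host routes. This is a toy model, not any router's actual RIB implementation -- the names and metric values are hypothetical:

```python
# Toy RIB: prefix -> {advertiser: metric}, lowest metric wins.
class Rib:
    def __init__(self):
        self.routes = {}

    def advertise(self, prefix, advertiser, metric):
        self.routes.setdefault(prefix, {})[advertiser] = metric

    def withdraw(self, prefix, advertiser):
        self.routes.get(prefix, {}).pop(advertiser, None)

    def best(self, prefix):
        paths = self.routes.get(prefix, {})
        return min(paths, key=paths.get) if paths else None

rib = Rib()
vm = "10.1.2.3/32"                       # hypothetical VM address
rib.advertise(vm, "hypervisor-a", 100)   # VM running on hypervisor A
rib.advertise(vm, "hypervisor-b", 200)   # B pre-advertises at lower preference
assert rib.best(vm) == "hypervisor-a"    # traffic still flows to A
rib.withdraw(vm, "hypervisor-a")         # A withdraws when the VM is suspended
assert rib.best(vm) == "hypervisor-b"    # traffic now follows the new host route
```

The VM's IP and MAC never change; only the /32 route's advertiser does, so no stretched subnet is needed.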
So now that we don't need a scale-out virtual LAN for vMotion, that leaves the matter of the firewall. The ideal place to implement security is at the north and south perimeters of your network. As I mentioned earlier, security inside the host (the true south) can be defeated -- hence the subnet firewall (the compromise). But with the advent of the hypervisor, there is now a trusted security enforcement point at the edge. We can now implement firewall security right at the vNIC of the VM (the "south" perimeter). When coupled with perimeter security at the points where communication lines connect your DC to the outside world (the "north" perimeter), you don't need scale-out virtual LANs to support traditional subnet firewalls either. It's debatable whether additional firewall security is required at intermediate points between these two secured perimeters -- my view is that it is not, unless you have a low degree of confidence in your security operations. Intermediate security comes at a cost: reduced bandwidth efficiency, increased complexity and higher spend, to name a few.
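To make the "south perimeter" concrete, here is a minimal sketch of per-VM policy enforced by the hypervisor before a flow reaches the vNIC. The policy format and function names are hypothetical, not any product's API:

```python
from collections import namedtuple

Flow = namedtuple("Flow", "src dst proto dport")

# Per-VM allow-list: which (proto, dport) any peer may reach on this VM.
policy = {
    "10.1.2.3": {("tcp", 443), ("tcp", 8080)},  # hypothetical web VM
}

def vnic_permits(flow):
    """Return True if the hypervisor should deliver this flow to the vNIC."""
    return (flow.proto, flow.dport) in policy.get(flow.dst, set())

assert vnic_permits(Flow("10.1.9.9", "10.1.2.3", "tcp", 443))       # https allowed
assert not vnic_permits(Flow("10.1.9.9", "10.1.2.3", "tcp", 22))    # ssh dropped
```

Because the check runs in the hypervisor rather than in the guest, a compromised VM cannot override it -- which is exactly the trust property the subnet firewall was compensating for.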
The use of host-specific routing combined with firewall security applied at the vNIC is evident in LAN virtualization solutions that support distributed integrated routing and bridging capability (aka IRB, or distributed inter-subnet routing). The only way to perform fully distributed shortest-path routing with free and flexible VM placement is to use host-based routing. The dilemma then is where to impose firewall security -- at the vNIC, of course!
Building a network-virtualization-free enterprise IaaS cloud.
There are probably a couple of ways to achieve vMotion and segmentation without network virtualization and with very little bridging. Below is one way to do it using only IP. This approach does not leverage overlays, so each fabric can support only as many hosts as the hardware switch's L3 forwarding table allows.
(1) Build an IP-based core network fabric using switches that have adequate memory and compute power to process fabric-local host routes. The switch L3 forwarding table size reflects the number of VMs you intend to support in a single instance of this fabric design. Host routes should be carried only in BGP. You can use the classic BGP-IGP design, or for a BGP-only fabric you might consider draft-lapukhov-bgp-routing-large-dc. Assign the fabric a prefix that is large enough to provide addresses to all the VMs you expect your fabric to support. This fabric prefix will be advertised out of the fabric, and a default route advertised into it, for inter-fabric and other external communication.
(2) Put a high-performance virtual router in your hypervisor image that gets a network-facing IP via DHCP and is scripted to automatically BGP-peer with its default gateway, which will be the ToR switch. The ToR switch should be configured to accept incoming BGP connections from any vrouter on its local subnet. The vrouter will advertise host routes of local VMs via BGP and, for outbound forwarding, will use its default route to the ToR.
(3) To bring up a VM on a hypervisor, your CMS should create an unnumbered interface on the vrouter, attach the vNIC of the VM to it, and create a host route to the VM, which should be advertised via BGP. The reverse should happen when the VM is removed. This concludes the forwarding aspect of the design.
(4) This next step handles the firewall aspect of the design. Use a bump-in-the-wire firewall like Juniper's vGW to perform targeted class-based security at the vNIC level. If you prefer to apply ACLs on the VM facing port of the vrouter, then you should carve out prefixes for different roles from the fabric's assigned address space to make writing the ACLs a bit easier.
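The CMS-driven attach/detach cycle in steps (2) and (3) can be sketched as follows. The VRouter class and its methods are hypothetical stand-ins for whatever API your vrouter exposes, not a real product interface:

```python
class VRouter:
    """Toy model of the hypervisor vrouter described in steps (2)-(3)."""

    def __init__(self, tor_peer):
        self.tor_peer = tor_peer    # BGP session to the ToR default gateway
        self.interfaces = {}        # vm_name -> host route on an unnumbered interface
        self.advertised = set()     # /32 routes currently announced via BGP

    def attach_vm(self, vm_name, vm_ip):
        """Create an unnumbered interface, bind the vNIC, advertise the /32."""
        host_route = vm_ip + "/32"
        self.interfaces[vm_name] = host_route
        self.advertised.add(host_route)      # stands in for a BGP UPDATE to the ToR

    def detach_vm(self, vm_name):
        """Reverse of attach: withdraw the host route, delete the interface."""
        host_route = self.interfaces.pop(vm_name)
        self.advertised.discard(host_route)  # stands in for a BGP WITHDRAW

vr = VRouter(tor_peer="192.0.2.1")           # hypothetical ToR address
vr.attach_vm("web-01", "10.1.2.3")
assert "10.1.2.3/32" in vr.advertised        # fabric now knows where web-01 lives
vr.detach_vm("web-01")
assert "10.1.2.3/32" not in vr.advertised    # route withdrawn on removal
```

Migration is just this cycle run on two hypervisors: attach on the destination, detach on the source once the VM is suspended.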
Newer hardware switches support 64K and more L3 forwarding entries and come with more than enough compute and memory to handle the task, so it's reasonable to achieve upward of 32K VMs per fabric. Further scaling is achieved by interconnecting multiple of these fabrics (each with its own dedicated maskable address block) via an interconnect fabric; however, VM migration should be limited to a single fabric. If you prefer to go with the overlay approach for greater scaling, replace BGP to the ToR with MP-BGP to two fabric route reflectors for VPNv4 route exchange. When provisioning a VM-facing interface on the vrouter, you'll need to place it into a VRF and import/export a route target.
I've left out some details for the network engineers among you to exercise your creativity (and avoid making this blog as long as my first one) -- why should Openstack hackers have all the fun? :)
Btw, native multicast should work just fine using this approach. Best of all, you can't be locked in to a single vendor.
If you believe you need scale-out overlay network virtualization, consider using one that is based on an open standard such as IPVPN or E-VPN. The latter does not require MPLS, as some might believe, and supports efficient inter-subnet routing via this draft, which I believe will eventually be accepted as a standard. Both support native multicast, and both are, or will eventually be, supported by at least three vendors with full interoperability. I'm hopeful that my friends in the centralized camp will someday see the value of also using and contributing to open control-plane and management-plane standards.