Thursday, January 10, 2019

Blending into software infrastructure


Electronic networks existed long before electronic compute and storage.  Early on, the network was simple wires and switchboards, and the endpoints were humans.  Telegraphs turned taps into on/off current on the wire and back into audible clicks.  Telephones turned voice into fluctuating current and back into voice.

Since then, the network has existed as a distinct entity apart from the things it connects.  Until now.

Less than two decades ago, most applications were built in vertical silos.  Each application got its own servers, storage, database, and so on.  The only thing applications shared was the network, which made it the closest thing to a shared resource pool: the original “cloud”.  With increasing digital transformation, other services were also pooled, such as storage and databases.  However, each application interfaced with these pooled resources and with other applications directly.  Applications had little in common other than the pooled resources they shared.

As more code was written, the value of pooling common software functions and best practices into a “soft” infrastructure layer became evident.  The role of this software infrastructure was to normalize the many disparate ways of doing common things in application software.  Application developers could then focus on unique value rather than boilerplate, and the software infrastructure could make the best use of the underlying physical resources on their behalf.  Storage was absorbed into software infrastructure, and eventually so were databases.  Software infrastructure was necessary to achieve economies of scale in digital businesses.

Over the past few decades, the emphasis has been on decentralization of network control.  The network’s greatest mission was survivability, and the trade-off was other optimizations, such as for end-user experience.  However, the challenges of massive scale have created the need for centralized algorithms to ensure good end-user experience, eliminate stranded capacity, improve time to recovery, and speed up deployment.  These are imperatives for achieving economies at hyperscale.  For some problems, like survivability, cooperating peers work best; for others, like bin packing, a master-slave model is better.
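To make the contrast concrete, here is a minimal sketch (in Python) of the kind of centralized placement a single controller with a global view can compute: first-fit-decreasing bin packing of workload demands onto hosts.  The names here (Host, place_workloads) are illustrative assumptions, not any particular scheduler’s API.

    # Minimal sketch of centralized first-fit-decreasing bin packing.
    # Names (Host, place_workloads) are hypothetical, for illustration only.
    from dataclasses import dataclass, field

    @dataclass
    class Host:
        name: str
        capacity: float            # e.g., cores or Gbps available
        used: float = 0.0
        workloads: list = field(default_factory=list)

        def fits(self, demand: float) -> bool:
            return self.used + demand <= self.capacity

    def place_workloads(demands: dict, hosts: list) -> dict:
        """Assign each workload to the first host with room, largest first.

        A central controller with a global view can compute this in one
        pass; peers deciding independently tend to strand capacity.
        """
        placement = {}
        for name, demand in sorted(demands.items(), key=lambda kv: -kv[1]):
            for host in hosts:
                if host.fits(demand):
                    host.used += demand
                    host.workloads.append(name)
                    placement[name] = host.name
                    break
            else:
                raise RuntimeError(f"no host can fit {name} ({demand})")
        return placement

    if __name__ == "__main__":
        hosts = [Host("h1", 10.0), Host("h2", 10.0)]
        print(place_workloads({"a": 6.0, "b": 5.0, "c": 4.0, "d": 3.0}, hosts))

The point is not the heuristic itself but that it needs a global view of all hosts and all demands at once, something cooperating peers can only approximate.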

So while the network may continue to implement last-resort survivability in a distributed way, the optimizations that require centralization are being, and should be, driven by the common software infrastructure layer in the most demanding environments.

From day one, the network design team at Bloomberg reported into the software infrastructure org.  My boss, Franko, reported to Chuck Zegar, the godfather of Bloomberg’s software infrastructure (and Mike Bloomberg’s first employee).  Mike had tasked Chuck with engineering Bloomberg’s networks.  This first-hand experience in such an organization has led me to believe that software infrastructure is also the ideal place in the org chart for the network engineering role, the place from which to develop networks that best serve the business and end users.