Wednesday, March 16, 2016

Home Lab

Lab Network Diagram

The network topology was extremely pragmatic.  I wanted to have a setup I could move if I needed to.

You see, I live in Arizona.  Having it in my office in the summer?  Well, it's already hot here, so why add to my discomfort?

So I purchased a small wire cart and put the computers and network gear on it.  Added an old KVM switch I had lying about, as well as a usable but extremely old Dell monitor, mouse and keyboard.  Wired it up to an old wireless game repeater I had from back in my Xbox days and...

Equipment Deployment

Completely mobile except for power.  I could add a UPS, maybe one day.

Left to right:

Lab 3, running my current Arista Spine and Leaf simulator, on VirtualBox on Ubuntu 14.04.

Lab 2, running OpenStack on Ubuntu 14.04.

Labcore, box I got from a friend upgrading, running Ubuntu 14.04 for anything else I may need to do in the rack.

Network switch: 8 x 1 Gbps NetGear SOHO.  Router: Buffalo SOHO running DD-WRT.

Lab 3 and 2 specs are buried somewhere in my blog if you're interested.

Anyway, can't wait for summer.

If you're interested in sharing your home lab pics, tweet 'em to me @abusedbits #homelab

Thursday, March 3, 2016

Mega Data Centers and Scale Out

Historically, every massive Scale Up evokes a future Scale Out of capabilities in the IT industry.

Mainframes to Unix systems, Unix systems to x86
Centralized to distributed to centralized, etc.

Inevitably, a technology advancement forces a state change in the mechanisms or requirements that shape how the industry chooses to deal with that event.

Today the industry is taking massive advantage of Cloud capabilities that are changing how computing services are acquired and managed.  This advantage derives, in no small part, from new levels of commoditization of access to computing capability.  Capacity can be turned on and off at will, and you pay only for the portion of the resource used.  It is a Utility with respect to IDC's 3rd Platform concepts and equates directly to a financial disruption of the traditional model of application delivery.

On the horizon are things like IoT (the Internet of Things), which has the potential to take significant advantage of new streams of data made available by sensors for, well, anything.

Other areas, including the enormous growth in content and data delivery, are creating the need for mega Data Centers, which are scaling up to meet the demand of the underlying scale-out of compute necessary to run this industry.  This scale-up is associated directly with the pressure to continue reducing the cost of the service enablement of Cloud and Virtualization, with some financial benefit to more traditional 2nd Platform capabilities.  (See Scale Up to the mega Data Center.)

When content delivery and IoT bandwidth and/or latency requirements overrun the data transport capabilities of the mega Data Center, two things, based on the history of this industry, are almost assured to change for the 3rd Platform Utility consumption of computing resources:

1 - IoT sensor data ingestion will be driven back to the edge, the closest point of creation
2 - Content delivery will be driven back to the edge, the closest point of consumption

The consequence of this may be instrumental in the next major change in the development of the Data Center industry.

One possible model for this change is a hub and spoke model for compute resource consumption.  In this model, the mega Data Center becomes the hub for the majority of bandwidth or latency insensitive applications and the spoke provides all of the edge services for bandwidth and latency sensitive requirements.

The compute utility will have to assume a role that not only spans availability zones for application redundancy, but edge zones for the more sensitive application requirements.
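A minimal sketch of that hub-and-spoke placement decision, with hypothetical workload names and an assumed latency threshold (none of these figures come from the post, they're for illustration only):

```python
# Illustrative sketch: route workloads to the hub (mega Data Center) or an
# edge zone based on latency sensitivity.  The 20 ms cutoff and the workload
# names below are assumptions, not figures from the post.

EDGE_LATENCY_THRESHOLD_MS = 20  # assumed cutoff for "latency sensitive"

def place_workload(name, latency_budget_ms):
    """Return 'edge' for latency-sensitive workloads, else 'hub'."""
    if latency_budget_ms <= EDGE_LATENCY_THRESHOLD_MS:
        return "edge"
    return "hub"

workloads = {
    "iot-sensor-ingest": 10,   # ingested near the point of creation
    "video-cdn-segment": 15,   # served near the point of consumption
    "batch-analytics": 5000,   # latency insensitive, fine in the hub
}

for name, budget in workloads.items():
    print(name, "->", place_workload(name, budget))
```

The point of the sketch is that the scheduler, not the application, decides which zone a workload lands in, which is what spanning "edge zones" alongside availability zones would require.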

Where it could get really interesting is if the power utilities started to provide co-location (edge) services for mega Data Centers.  The combination could be extremely compelling, especially if the power utility were able to achieve re-classification of the edge service data center as an IT resource rather than a typical power consumption entity (a human facility).

Consider the ramifications of a co-location hosting facility within the boundaries of an electrical substation. 

Re-classification of the edge service as an IT resource could eliminate much of the expense of power redundancy for the co-location facility, with minor modifications of the existing power delivery rules.  Because the facility could be dual-fed from two active power sources, a redundant power generation facility, a UPS, and a power transfer switch would not be required.

Furthermore, it could provide an immediate step down to a direct electrical bus in the IT resource facility.  In the future, a direct step from AC power to DC in the IT resource could capture the extra ~8% efficiency.


Power distribution aligns pretty darn nicely with edge-service data distribution facilities.  If the power Utility is worth its salt, it either has a fiber network already established or has partnered with someone to lay fiber in the right of way.

This would place compute, and with it creation and consumption, within milliseconds of the majority of the population and nearly all industries, at fiber bandwidth.

Scale Up to the mega Data Center

While we're in the midst of a massive shift toward mega Data Centers for Cloud and co-location, I thought it would be interesting to start to explore what could possibly happen next.

In order to understand this, it's incredibly important to understand why it is happening.  The economics of the data center landscape are controlled in large part by PUE (Power Usage Effectiveness), which introduces major variables in delivering the consumption-side economics that drive toward the value chain stage of Utility (or commodity, if you prefer).
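For reference, PUE is total facility power divided by the power delivered to the IT equipment, so 1.0 is the ideal.  A quick illustration with made-up numbers (the facility sizes below are assumptions, not data from the post):

```python
def pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power (1.0 is ideal)."""
    return total_facility_kw / it_equipment_kw

# Illustrative figures, not from the post:
small_dc = pue(total_facility_kw=2000, it_equipment_kw=1000)  # PUE 2.0
mega_dc = pue(total_facility_kw=1120, it_equipment_kw=1000)   # PUE 1.12

# Overhead power avoided at the same 1 MW IT load:
overhead_saved_kw = round((small_dc - mega_dc) * 1000)
print(small_dc, mega_dc, overhead_saved_kw)  # 2.0 1.12 880
```

In this hypothetical, the industrialized facility spends 120 kW of overhead to deliver the same megawatt of compute that costs the small facility a full extra megawatt, which is exactly the lever the mega Data Centers are pulling.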

Google went further and identified 19 variables that affect the efficiency of a Data Center.

In order to adjust the PUE variables in a meaningful way, the Data Center must be industrialized to an extraordinary extent.  At that scale of industrialization, the variables that affect the Data Center are much more relevant than they are in smaller Data Centers, and thus provide a means to achieve better (lower) PUE while decreasing the work effort necessary to get there.

This is where the mega Data Centers have chosen to attack (manipulate, see Wardley chart below) the problems associated with Data Center efficiency.

Wardley Value Chain

But there are limits to the industrialization capabilities, particularly power delivery.  In essence, when we start thinking about multi-megawatt facilities, there is a limit based on a combination of power generation location and available delivery mechanisms.  Put another way, it becomes increasingly expensive to build Data Centers larger than the local generation (supply) and transport (delivery) of electricity can support.

This is effectively the complete Scale Up of the supply side for Data Centers, square footage plus electricity, that achieves the goals of optimized control of the variables of PUE.

Keeping this in mind, let's think about the groundwork for what happens after.

One relatively good attack point in continuing the Scale Up is optimizing the efficiency of electricity delivery.  Data Centers really need to get from AC (transport) to DC (computer consumption) with fewer steps, with power cost savings on the order of ~8%.  At multi-megawatt size, that would not be insignificant.
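To put that ~8% in perspective, a back-of-the-envelope calculation.  The facility size and electricity price here are assumed figures for illustration; only the 8% savings fraction comes from the sources cited below:

```python
# Back-of-the-envelope: what ~8% fewer AC/DC conversion losses is worth at
# multi-megawatt scale.  Facility size and electricity price are assumptions.

facility_mw = 10           # hypothetical multi-megawatt facility
hours_per_year = 8760
usd_per_kwh = 0.07         # assumed industrial electricity rate
savings_fraction = 0.08    # the ~8% cited for DC distribution

annual_kwh = facility_mw * 1000 * hours_per_year
annual_savings_usd = annual_kwh * usd_per_kwh * savings_fraction
print(round(annual_savings_usd))  # roughly 490560 USD per year
```

Even with conservative assumptions, that's on the order of half a million dollars a year for a single 10 MW facility, so "not insignificant" is an understatement.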

"Evaluating the Opportunity for DC Power in the Data Center" by Mark Murrill and B.J. Sonnenberg, Emerson Network Power 

where they reference

"Evaluation of 400V DC distribution in telco and data centers to improve energy efficiency", Annabelle Pratt, Pavan Kumar and Tomm V. Aldridge, Intel Corp., Hillsboro


and "400-V DC Distribution in the Data Center Gets Real", Don Tuite


If the industry continues to function the way it has historically, the step after Scale Up is Scale Out.