
Thursday, December 6, 2018

Declarative IT and Design for Operations

Was reading a very insightful and interesting post from @TorstenVolt on LinkedIn titled "Digital Transformation Requires Autonomic Computing 3.0" and started to see some commonality with the operations processes a digital future will need.

This, by the way, comes right after @AWS announced AWS Outposts, a telling example of how coevolution in IT validates concepts and models (yes, everyone, Hybrid is real, even Microsoft thinks so) and how terribly important it is to get to the data.

Torsten brings up some very important points about post-transformation examples and the direction things need to take.  It's very much like the conversation @cpswan raises almost every time we talk: how does the future solve the problems we see through Design for Operations?

The solution for Design for Operations revolves distinctly around Declarative IT, plus a bit of truth and trust in the platform(s) associated with that delivery.

I've done my best in capturing/mapping this in a Wardley Map as a way to illustrate the idea.

Declarative needs and Design for Operations of IT Infrastructure
Figure 1.  Wardley Map showing Declarative needs and Design for Operations of IT Infrastructure
In general, IT Infrastructure Design for Operations includes a Platform for management of Declarative states of underlying infrastructure.

With the abstraction of server virtualization and cloud computing, the declarations became very much least-common-denominator and not very permissive with respect to the evolution of application development (the Dev part of DevOps).  It eased operations of the abstracted layer considerably, but it didn't really enhance the Ops side of the infrastructure model.

At this point, we could argue that the Cloud Titans have already achieved a more horizontally evolved Declarative Platform, but now that everyone seems to be reaching back into the Enterprise Datacenter, permissible use and least common denominator may very well come into play again.

So, looking at the problem that now needs to be solved: truly Declarative Storage and Declarative Compute need to be pressed into evolution toward commodity to achieve the rewards of application-integrated software definition of the hybrid datacenter.
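To make the idea concrete, here is a minimal Python sketch of what a declarative control loop might look like: the operator declares the desired state and the platform works out the actions needed to converge on it.  The VolumeSpec type and the actions are hypothetical illustrations, not any particular product's API.

from dataclasses import dataclass

@dataclass(frozen=True)
class VolumeSpec:          # declared storage intent
    name: str
    size_gb: int
    replicas: int

def reconcile(desired: dict[str, VolumeSpec], observed: dict[str, VolumeSpec]) -> list[str]:
    """Return the actions a platform would take to converge observed toward desired."""
    actions = []
    for name, spec in desired.items():
        current = observed.get(name)
        if current is None:
            actions.append(f"create volume {name} ({spec.size_gb} GB, {spec.replicas} replicas)")
        elif current != spec:
            actions.append(f"modify volume {name}: {current} -> {spec}")
    for name in observed.keys() - desired.keys():
        actions.append(f"delete volume {name} (no longer declared)")
    return actions

# The operator only states the end state; the platform derives the work.
desired  = {"logs": VolumeSpec("logs", size_gb=500, replicas=3)}
observed = {"logs": VolumeSpec("logs", size_gb=500, replicas=2),
            "scratch": VolumeSpec("scratch", size_gb=100, replicas=1)}
for action in reconcile(desired, observed):
    print(action)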

This should go a long way toward solving the Design for Operations problems, as well as matching up with @TorstenVolt's re-conceptualization of the Digital future state.


Tuesday, November 22, 2016

"Smallering" of the Enterprise Data Center

There is an acceleration of workloads moving away from the Enterprise Data Center.

Figure 1.  Workload Movement
The Digital Shift is contributing to a fundamental change in how and where applications are hosted.

Uptake in the use of Public Cloud as a principal target for what IDC calls the 3rd Platform is shifting application development toward vendors like AWS and Microsoft Azure.

Co-Location Data Center providers are rapidly shifting the cost of the Data Center to a Utility model of delivery.  The contestable monthly cost, at any volume, in the OpEx delivery model of the Mega Data Center providers creates a consumption utility.

Fundamental business operation applications are being consumed in the as-a-Service model, eliminating entirely the need to host those applications in the Enterprise Data Center.  Consider the range between Salesforce and Microsoft Office 365.

As workloads move out of the traditional Enterprise Data Center, the Enterprise Data Center estate will have to be right-sized in some significant way.

Consolidation of Data Centers will play a role in what remains of the Enterprise Data Center estates.  CIOs should keep this in mind when planning workload placement for the next equipment refresh cycles.

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html


Tuesday, April 19, 2016

LAN Physical to Overlay

It is terribly difficult to express the design elements of VxLAN technology in a single drawing.  As seen in previous blogs, the elements of the physical construct of an ECMP (Equal-Cost Multi-Path) spine and leaf are astonishingly complex.  They get represented in abstracted views of the total, in graduations from quite simple to piece-by-piece elements, in order to express their true nature.

Part of the problem is associated with these complexities, but having put together multiple descriptive models, it really comes down to the fact that the OVERLAY doesn't look anything like the physical design.  Networkers got used to broadcast domains being associated with a wire, then with an L2 abstraction.  Now networkers need to figure out how to represent the multiple levels of networking with additional L2-in-L3 tunnels.

It's also not as if this hasn't happened before.  VPN drawings look much like VxLAN drawings, but in a VPN drawing there may be one tunnel or a couple; in VxLAN there could be a lot more.  Ultimately, the issue becomes how complex the drawing gets as all of the information is layered in.

Starting from the view of a Modern Platform for Enterprise (as opposed to public cloud), networkers need to connect particular security elements, network services, control plane services, backup, networking and platforms.  Architecturally, the model looks very much like this.

Figure 1.  Modern Platform
DC LAN (red) plays a dominant role in this platform architecture, providing connectivity from any element to any element within the construct, be it a physical device, a virtual machine on a hypervisor or a container on an operating system.

Figure 2.  Any to Any
It's not as simple as that, though.  The network attributes are overlaid on top of hardware, software and logical (or abstracted) networking mechanisms in this model.  In this extremely simplified model, use case 1 is where a hypervisor communicates with another hypervisor (green) and use case 2 is where a hypervisor communicates with a bare-metal object.  A third use case exists where a container communicates with a hypervisor or hardware, but it's not used frequently yet in the Enterprise.

The VxLAN (Virtual eXtensible LAN) kernel module of the hypervisor (in this case VMware ESX) communicates through its vSwitch to the physical medium, a network adapter on the physical machine.  Traffic is then passed to the physical switch and forwarded in accordance with the VNI (Virtual Network Identifier) associated with the packets.  In case 1, it is then picked up by the network card on the second physical host, working through the vSwitch to get to the virtual machine.  In case 2, the endpoint is a VTEP (Virtual Tunnel EndPoint) where it is de-encapsulated from L3 to L2 to arrive at the physical server.
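For anyone who hasn't looked inside the tunnel, here is a small Python sketch of the encapsulation step itself, following the RFC 7348 header layout: an 8-byte VxLAN header carrying the 24-bit VNI is prefixed to the inner Ethernet frame, and the result rides over UDP (port 4789) between VTEPs.  The frame bytes below are placeholders, purely illustrative.

import struct

VXLAN_UDP_PORT = 4789          # IANA-assigned VxLAN destination port (RFC 7348)
VXLAN_FLAG_VNI_VALID = 0x08    # "I" flag: the VNI field is valid

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prefix an inner Ethernet frame with the 8-byte VxLAN header for a given VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Header layout: flags(8) | reserved(24) | VNI(24) | reserved(8)
    header = struct.pack("!II", VXLAN_FLAG_VNI_VALID << 24, vni << 8)
    return header + inner_frame

# The outer IP/UDP (destination port 4789) between VTEPs is added by the
# encapsulating device and omitted here; the frame below is a placeholder.
packet = vxlan_encapsulate(bytes(64), vni=10010)
print(len(packet), packet[:8].hex())   # 72 0800000000271a00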

In both cases, two sets of interacting Control Planes, VMware NSX and Arista CVX provide the path information to instantiate the VxLAN tunnel.

Figure 3.  VxLAN Delivery (gratuitous re-use from earlier blog)
Now, there needs to be a bold red blinking sign that states the network still exists, both in its physical and logical form.  Herein lies the Spine-and-Leaf.  It is similar to a Clos design (as in Charles Clos) that does have some oversubscription at different levels.  Physically, it utilizes the spine as a one-hop transport route between all leaf nodes.

The concept, though, is quite simple: scale out horizontally as large as possible.  When that is exceeded, add another spine and make a two-hop transport route.  The Leaf nodes come in two basic flavors: one provides transport access to hosts, the other provides Services.  Services Leaf nodes are used to establish specific service functions.  The two examples shown here are WAN access with a perimeter security Firewall and IP-protocol-based storage.
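A quick back-of-the-envelope in Python on what "scale out horizontally" means for a leaf.  The port counts and speeds are hypothetical examples, not a recommendation for any particular switch.

def leaf_oversubscription(host_ports: int, host_gbps: int,
                          uplinks: int, uplink_gbps: int) -> float:
    """Ratio of southbound (host-facing) to northbound (spine-facing) bandwidth."""
    southbound = host_ports * host_gbps
    northbound = uplinks * uplink_gbps
    return southbound / northbound

# Hypothetical leaf: 48 x 10G host ports, 4 x 40G uplinks (one per spine).
print(leaf_oversubscription(48, 10, 4, 40))   # 3.0, i.e. a 3:1 oversubscribed leaf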

Moving from Brown to Green (brownfield to greenfield) in this model should be relatively easy.  Once the Spine and Leaf is established, legacy networks may be connected at a Leaf node to provide access utilizing the switch VTEP.  It's an extra hop, but the latency in this network type is extremely low.

Figure 4.  Spine and Leaf (high level)
Combining Figure 3 and Figure 4 is terribly problematic.  It may not even be useful at large scale due to the enormous detail necessary to show it in its entirety.

What networkers can do is abstract the drawing a bit.  Pull it up from the physical layer.  The view in Figure 5 is just such a drawing.

It utilizes Figure 4 as the groundwork (the physical) and "pulls up" the abstractions and overlays that reside on it.

In this scenario, the spine and each leaf are BGP Autonomous Systems.  The first layer above the physical equipment is an IP network routed solely within the confines of the spine and leaf (Blue routers).  Additionally, a VRF is run on the same structure to propagate a management network (Red routers) to all switches.  It also acts as the distribution layer for any management switches deployed within the Leaf cabinets.
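A Python sketch of what that underlay numbering might look like in practice follows.  The private ASN range and loopback scheme are illustrative assumptions, not necessarily the exact design described above.

SPINE_ASN = 65000        # assumed private ASN for the spine
LEAF_ASN_BASE = 65001    # assumed starting private ASN for leaf nodes

def underlay_plan(num_leaves: int) -> dict:
    """Build a simple ASN/loopback table for the spine and each leaf."""
    plan = {"spine": {"asn": SPINE_ASN, "loopback": "10.0.0.0/32"}}
    for i in range(1, num_leaves + 1):
        plan[f"leaf{i}"] = {"asn": LEAF_ASN_BASE + i - 1,
                            "loopback": f"10.0.0.{i}/32"}
    return plan

for name, attrs in underlay_plan(4).items():
    print(name, attrs)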

The VNI framework is then "pulled up" to the final level.  This is the tunnel path.  

Below the physical switch VTEP, an OVSDB (Open vSwitch Database) is used to manage the interactions of all platform systems that are virtualized with, in this case, VMware NSX.

At and above the physical switch are the VTEPs, managed by another OVSDB for all physical element connections not associated with VMware NSX.  The control plane in this model is Arista CVX.

As in Figure 3, the OVSDB (VMware) communicates with the OVSDB (Arista) to manage the entirety of the tunnel formation.
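Stripped of the OVSDB schema itself, the information the two control planes are exchanging boils down to a per-VNI mapping of MAC addresses to VTEPs.  The Python below is a greatly simplified stand-in for the shape of that mapping, not the actual OVSDB tables.

from collections import defaultdict

# forwarding_table[vni][mac] = vtep_ip
forwarding_table: dict[int, dict[str, str]] = defaultdict(dict)

def learn(vni: int, mac: str, vtep_ip: str) -> None:
    """Record that this MAC, in this VNI, is reachable behind this VTEP."""
    forwarding_table[vni][mac] = vtep_ip

def lookup(vni: int, mac: str) -> str | None:
    """Which VTEP should traffic for this MAC be tunneled to?"""
    return forwarding_table[vni].get(mac)

learn(10010, "00:50:56:aa:bb:cc", "192.168.255.11")   # VM behind a hypervisor VTEP
learn(10010, "3c:fd:fe:11:22:33", "192.168.255.21")   # bare metal behind a switch VTEP
print(lookup(10010, "3c:fd:fe:11:22:33"))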

Figure 5.  A high-level view of the network from physical to overlay
In the interest of creating a view of the network that expresses as much as possible with the least amount of complexity, this drawing has proven quite good at conveying a great deal without becoming impossible to follow.

If you use this model to describe your network, please do let me know how it turned out.

I'm interested in any other way you may have to show this type of information graphically.  Please tweet them to @abusedbits or link them to my Google+.

Thursday, March 3, 2016

Mega Data Centers and Scale Out

Historically, every massive Scale Up evokes a future Scale Out of capabilities in the IT industry.

Mainframes to Unix systems, Unix systems to x86
Centralized to distributed to centralized, etc.

Inevitably, a technology advancement forces a state change in the mechanism or requirement that filters how the industry chooses to deal with that event.

Today the industry is taking massive advantage of Cloud capabilities that are changing how computing services are acquired and managed.  This advantage is, in no small part, arrived at by new levels of commoditized access to computing capability.  It can be turned on and off at will, paying only for the portion of resource used.  It is a Utility with respect to IDC's 3rd Platform concepts and equates directly to a financial disruption of the traditional model of application delivery.

On the horizon are things like IoT (Internet of Things) that have the potential to take significant advantage of new streams of data made available by sensors for, well, anything.

Other areas, including the enormous growth in content and data delivery, are creating the need for mega Data Centers, which are scaling up to meet the demand of the underlying scale-out of compute necessary to run this industry.  This scale-up is associated directly with the pressure to continue the cost reduction of the service enablement of Cloud and Virtualization, with some financial benefit to more traditional 2nd Platform capabilities.  (See Scale Up to the mega Data Center)

When content delivery and IoT bandwidth and/or latency requirements overrun the data transport capabilities of the mega Data Center, two things, based on the history of this industry, are almost assured to change for the 3rd Platform Utility consumption of computing resource:

1 - IoT Sensor data ingestion will be driven back to the edge, or closest point of creation
2 - Content Delivery will be driven back to the edge, or closest point of consumption

The consequence of this may be instrumental in the next major change in the development of the Data Center industry.

One possible model for this change is a hub and spoke model for compute resource consumption.  In this model, the mega Data Center becomes the hub for the majority of bandwidth or latency insensitive applications and the spoke provides all of the edge services for bandwidth and latency sensitive requirements.

The compute utility will have to assume a role that not only spans availability zones for application redundancy, but edge zones for the more sensitive application requirements.
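As a thought experiment, the placement rule could be as simple as the Python sketch below: latency- or bandwidth-sensitive workloads land in an edge zone, everything else goes to the mega Data Center hub.  The thresholds and workloads are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: float   # what the application can tolerate
    ingest_gbps: float      # data it must ingest near its source

HUB_LATENCY_MS = 40.0          # assumed round trip to the mega Data Center
EDGE_INGEST_LIMIT_GBPS = 1.0   # assumed point where backhaul gets costly

def place(w: Workload) -> str:
    """Latency- or bandwidth-sensitive work goes to the edge, the rest to the hub."""
    if w.max_latency_ms < HUB_LATENCY_MS or w.ingest_gbps > EDGE_INGEST_LIMIT_GBPS:
        return "edge zone"
    return "mega data center hub"

for w in (Workload("batch analytics", 500, 0.1),
          Workload("IoT sensor ingestion", 20, 5.0),
          Workload("content delivery cache", 15, 0.5)):
    print(w.name, "->", place(w))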

Where it could get really interesting is if the power utilities started to provide co-location (edge) services for mega Data Centers.  The combination could be extremely compelling, especially if the power utility were able to achieve re-classification of the edge-service data center as an IT resource rather than a typical power-consumption entity (human facility).

Consider the ramifications of a co-location hosting facility within the boundaries of an electrical substation. 

Re-classification of the edge service as an IT resource could eliminate much of the expense of power redundancy for the co-location facility, with minor modifications to the existing power delivery rules.  It could be dual-fed from two active power sources.  As an example, a redundant power generation facility would not be required; nor would a UPS or a power transfer switch.

Furthermore, it could provide an immediate step down to a direct electrical bus in the IT resource facility and, in the future, a direct step from AC power to DC in the IT resource to get at the extra ~8% efficiency.

AND


Power distribution aligns pretty darn nicely with edge-service data distribution facilities.  If the power Utility is worth its salt, it either has a fiber network already established OR it has partnered with someone to lay fiber in the right of way.

This would place compute, and subsequently creation and consumption, within milliseconds of the majority of the population and nearly all of the industries, at fiber bandwidth.

Scale Up to the mega Data Center

While we're in the midst of a massive shift toward mega Data Centers for Cloud and co-location, I thought it would be interesting to start to explore what could possibly happen next.

In order to understand this, it's incredibly important to understand why it is happening.  The economics of the data center landscape are controlled in large part by PUE (Power Usage Effectiveness, http://www.thegreengrid.org/~/media/WhitePapers/WP49-PUE%20A%20Comprehensive%20Examination%20of%20the%20Metric_v6.pdf), which introduces major variables in delivering the consumption-side economics that drive toward the value chain mode of Utility (or commodity if you prefer).

Google went further and identified 19 variables that affect the efficiency of a Data Center.
http://searchdatacenter.techtarget.com/blog/Data-Center-Apparatus/The-19-variables-that-most-affect-Google-data-centers-PUE

In order to adjust the PUE variables in a meaningful way, the Data Center must be industrialized to an extraordinary extent.  At the scale of this industrialization, the variables that affect the Data Center at larger scale are much more relevant than they are in smaller Data Centers and thus provide a means to achieve better (lower) PUE while decreasing the work effort necessary to achieve it.
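For reference, PUE itself is just total facility power divided by the power delivered to IT equipment.  The quick Python sketch below shows the ratio with hypothetical numbers for a typical enterprise room versus an industrialized mega Data Center.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

print(pue(total_facility_kw=1800, it_equipment_kw=1000))   # 1.8  (hypothetical enterprise room)
print(pue(total_facility_kw=1120, it_equipment_kw=1000))   # 1.12 (hypothetical industrialized mega DC)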

This is where the mega Data Centers have chosen to attack (manipulate, see Wardley chart below) the problems associated with Data Center efficiency.

Wardley Value Chain


But there are limits to the industrialization capabilities, particularly power delivery.  In essence, there is a limit, based on a combination of power generation location and available delivery mechanisms, when we start thinking about multi-megawatt facilities.  Put another way, it becomes increasingly expensive to build Data Centers larger than the local generation (supply) and transport (delivery) of electricity can support.

This is effectively the complete Scale Up of the supply side for Data Centers, square footage plus electricity, that achieves the goals of optimized control of the variables of PUE.

Keeping this in mind, let's think about the groundwork for what happens after.

One relatively good attack point for continuing the Scale Up is optimizing the efficiency of electricity delivery.  Data Centers really need to get from AC (transport) to DC (computer consumption) with fewer steps, with power cost savings on the order of ~8%.  At multi-megawatt size, that would not be insignificant.
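Rough arithmetic on that ~8%, sketched in Python below.  The load and tariff are illustrative assumptions, purely to show the order of magnitude at multi-megawatt scale.

def annual_power_cost(load_mw: float, usd_per_kwh: float) -> float:
    """Annual energy cost of a constant load, in dollars."""
    hours_per_year = 24 * 365
    return load_mw * 1000 * hours_per_year * usd_per_kwh

load_mw = 10.0      # assumed constant facility load
tariff = 0.07       # assumed $/kWh industrial tariff
baseline = annual_power_cost(load_mw, tariff)
savings = baseline * 0.08   # the ~8% fewer-conversion-steps gain

print(f"annual power bill: ${baseline:,.0f}")   # ~$6.1M
print(f"~8% savings:       ${savings:,.0f}")    # ~$490K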

"Evaluating the Opportunity for DC Power in the Data Center" by Mark Murrill and B.J. Sonnenberg, Emerson Network Power

http://www.emersonnetworkpower.com/documentation/en-us/brands/liebert/documents/white%20papers/124w-dcdata-web.pdf 

where they reference

"Evaluation of 400V DC distribution in telco and data centers to improve energy efficiency", Annabelle Pratt ; Intel Corp., Hillsboro ; Pavan Kumar ; Tomm V. Aldridge

http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4448733&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4448733

and


"400-V DC Distribution in the Data Center Gets Real" , Don Tuite

http://electronicdesign.com/power/400-v-dc-distribution-data-center-gets-real-0

But,

If the industry continues to function the way it has historically, the step after Scale Up is Scale Out.

http://www.abusedbits.com/2016/03/mega-data-centers-and-scale-out.html


Tuesday, February 16, 2016

Data Center Network Types

Figure 1.  DC Network Types (Architectural)

Extending the idea presented by Greg (of whom I'm a huge fan): http://etherealmind.com/the-data-centre-network-of-networks/

The networks in a Data Center can become quite complex.  Quite a bit of this is due to security and zone considerations.

Take, for instance, the difference between a DMZ host and a true Bastion host.  One is protected (hopefully) by, at minimum, a firewall with some level of application awareness.  The Bastion host, probably not.

This differentiation in the Data Center network has led to regions of Data Center networking, segregating the network by macroscopic concepts of security and utility.

Then add the concept of logical segregation on the physical infrastructure and the network gets really interesting.

These walls need to come down in favor of more microscopic, or application aligned, security capabilities.

In most cases, there is no reason that a management network and a monitoring network couldn't occupy the same logical space, with multi-tenancy restrictions possibly being one of the exceptions.

There's a similar argument for production and test platform environments.  It does require that the use of these environments be prescriptive, but they are largely doing the same thing, and with the possibility of placing them in a different logical plane (a VxLAN arrangement, for example, sketched below) the need to purchase and build parallel hardware infrastructures may go away.
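Here is a minimal Python sketch of that "different logical plane" idea: production, test and management share the same physical fabric but draw their segments from separate VNI ranges.  The numbering convention is an illustrative assumption.

VNI_RANGES = {
    "production": range(10000, 20000),
    "test":       range(20000, 30000),
    "management": range(30000, 31000),
}

_allocated: dict[str, int] = {}

def allocate_vni(segment_name: str, environment: str) -> int:
    """Hand out the next free VNI in the environment's range."""
    used = set(_allocated.values())
    for vni in VNI_RANGES[environment]:
        if vni not in used:
            _allocated[segment_name] = vni
            return vni
    raise RuntimeError(f"no free VNI left for {environment}")

print(allocate_vni("app-tier", "production"))    # 10000
print(allocate_vni("app-tier-test", "test"))     # 20000
print(allocate_vni("monitoring", "management"))  # 30000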

If that doesn't suit, just look at the way AWS is assembled.  Is there a production or test environment?

In the long run, look at the evolution of horizontal design.  Discrete building blocks are much easier to upgrade than vertical integrations.

Design it so nobody has to eat the entire apple in a single bite.