Wednesday, November 30, 2016

Carriers could eat the WAN and more

In the previous article I shared a vision of a future beyond SD-WAN.  The requirement for this is that the Telecom Carriers specifically not forget about their customers as they roll out the Network Function Virtualization (NFV) modernization they are currently undertaking.

This really includes, and I can't emphasize this enough, a service interface based on APIs consumable by the customer.

This means that all of the NFV MANO work they are doing needs to have a customer control interface that allows the direct consumption of their service(s).  Automated and orchestrated service chaining that speeds current methods of delivery is simply not enough.
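
As a thought experiment, the sketch below shows what such a customer control interface might look like: a single API call that orders a catalog service against a site.  Everything in it, the endpoint, the token, the catalog item and the payload fields, is hypothetical and for illustration only; it shows the shape of direct consumption I mean, not any carrier's actual API.

# Hypothetical sketch of a customer ordering a managed firewall function
# directly from a carrier's NFV MANO platform through a public API.
# The endpoint, fields and token below are invented for illustration.
import requests

CARRIER_API = "https://api.example-carrier.net/v1"  # hypothetical endpoint
TOKEN = "customer-api-token"                         # issued by the carrier

order = {
    "site_id": "branch-0042",       # customer site behind an Ethernet handoff
    "service": "virtual-firewall",  # a catalog item exposed by the carrier
    "bandwidth_mbps": 100,
    "policy": {"allow_outbound": ["443", "80"], "log": True},
}

resp = requests.post(
    f"{CARRIER_API}/service-orders",
    json=order,
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
print("order accepted:", resp.json().get("order_id"))

If the carrier exposes that kind of call, the customer can wire it into their own automation, which is exactly what speeds consumption beyond internally orchestrated delivery.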

There's also a coup in the works for those Carriers that get this one simple idea.  If they enable their NFV MANO platform to provide catalogs of consumable code (a marketplace), pioneers in WAN service enhancement will be positioned to develop new services directly for the network.

The Carriers will be able to use the pioneers in the WAN networking space not only to enhance their services using this platform, but also to create an exceptional "sensing engine" for future products and industrialization within their ecosystem.

Let's have a look at the possible step function to get to this future state.

Figure 1.  Step Function to Carrier Cloud WAN
The starting point in this step function is the circuit-based WAN topology that was primarily single customer, with the majority of consumption in the T1/E1 to DS-3/E3 range.  Through some fits and starts, the predominant delivery eventually became a shared-service mechanism utilizing Multi-Protocol Label Switching (MPLS), and delivery matured from what was at the very least a custom build into a product, with last-mile delivery being the time-consuming effort.

In the previous blog, we showed that SD-WAN is well positioned for the edge services consumption model of the future.  There are some drawbacks, particularly in the current levels of standardization, but this may well be a short stop on the way to bigger things.  Eventually this technology will incorporate all of the interesting features of direct internet offload and direct-to-ecosystem connections through the evolution to Hybrid-WAN.

But I believe there's an immediate follow-on the Carriers should be looking into: the provisioning of Enhanced Cloud WAN Services.  Carriers already carry all the WAN packets anyway, so why not provide add-on services within their shiny new NFV frameworks?  Why not open up the platform to developers to enhance those capabilities?  Why not develop a marketplace to provide new services and use the pioneers to sense the future services to industrialize?

Should this come into being, the Carrier Cloud WAN becomes much easier to envision.  It is the industrialized future of the Enhanced Cloud WAN Services, with something as simple as an Ethernet handoff at the edge and a marketplace of services, capabilities and ecosystem partners made available for Utility consumption.

Will there be business and security concerns?  Certainly.  Keep in mind that MPLS raised business and security concerns because of its shared-service nature when it first came out.  It's everywhere now.  #mapping

This is a follow-up article to:

http://www.abusedbits.com/2016/11/value-chain-mapping-and-future-of-sd-wan.html 

Tuesday, November 29, 2016

Future of SD-WAN and the Value Chain

Office and business campus locations use a WAN, or Wide Area Network, for connectivity to other business locations as well as to Data Centers.

There's a new technology that is poised to replace the legacy of routers and interesting circuit and protocol types.  It is called SD-WAN, or Software Defined WAN, and it applies software rules and network functions (sometimes virtualized) to provide more instant gratification in the area of Wide Area Networking.

Of the things that can be done with SD-WAN, one of the more interesting is the combination of two delivery methods, like traditional WAN using MPLS and broadband Internet (similar to what you might have providing internet service to your home).  This combination, called Hybrid-WAN, uses special protocols (read this as non-standardized and highly proprietary) running on the WAN device and can provide the following (a rough, hypothetical policy sketch follows the list):
  • higher availability at lower run cost through the use of broadband internet
  • secure connected perimeter with direct internet offload reducing the cost of internet access
  • more direct interconnect to services like AWS and Azure avoiding the hairpin in the Data Center
  • attachment to vendor ecosystems that sell services like SaaS, VoIP, storage, backup, etc.
  • enhanced security services within the SD-WAN provider's cloud
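
The sketch below is a rough illustration of the path-selection logic behind that list, with invented traffic classes, link names and policy; real Hybrid-WAN devices do this with proprietary protocols and live link telemetry rather than a static table.

# Hypothetical sketch of Hybrid-WAN path selection: steer traffic classes
# onto MPLS or broadband by policy, and fail over when a link is unhealthy.
# Classes, link names and costs are invented for illustration.

LINKS = {
    "mpls":      {"healthy": True, "cost_per_gb": 0.50},
    "broadband": {"healthy": True, "cost_per_gb": 0.05},
}

POLICY = {
    "voip":   "mpls",       # latency-sensitive traffic stays on the private path
    "saas":   "broadband",  # direct internet offload, avoiding the DC hairpin
    "backup": "broadband",  # bulk traffic rides the cheaper link
    "erp":    "mpls",
}

def pick_path(app_class: str) -> str:
    preferred = POLICY.get(app_class, "mpls")
    fallback = "broadband" if preferred == "mpls" else "mpls"
    # fail over if the preferred link is down
    return preferred if LINKS[preferred]["healthy"] else fallback

for app in ("voip", "saas", "backup"):
    print(app, "->", pick_path(app))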
What is even more interesting is the trajectory of this particular technology.

Figure 1.  SD-WAN WardleyMap for 2015
In 2015, this technology was arguably in the Genesis phase of the Value Chain.  There were some initial entrants to this capability, but it was mostly a laboratory exercise to determine the art of the possible.

Figure 2.  SD-WAN WardleyMap for 2016

There was a rather dramatic series of changes in 2016.  The viability of the individual vendor solutions started to show through in the market, and even major telecommunications vendors started to take note.

Early in 2016 a major telecommunications company even announced its support for, and pending availability of, SD-WAN in the form of a Utility service model.  Others followed.  This shifts the financial mechanism toward commodity.

Late in 2016 some of the early adopters of SD-WAN stated publicly that they no longer had a need for traditional WAN deployments.

Figure 3.  SD-WAN WardleyMap Prediction for 2017
My prediction for SD-WAN in 2017 is a complete commoditization and Utility release of SD-WAN by every major player in the Wide Area Networking space, if not the announcement of partnerships with companies that have this capability.

This doesn't mean that the undercarriage of the service will be 100% utility, though.  This first step will provide the means to change the usage behavior of the WAN consumers, not necessarily the underlying components.  It will erode parts of the current business model, and there is still plenty of room for the providers to eke out more efficiency as the technology evolves toward commodity.

What happens after that is probably more interesting.  I've a feeling this is simply a stepping stone to the next WAN delivery model.  The telecommunication carriers' networks are undergoing a significant technology change with a base in Network Function Virtualization (NFV).


As this rolls out and reaches maturity, the carriers will learn that they can provide, in a monthly Utility model, many of the services in their "Carrier Cloud" that their customers currently solve with hardware and appliances that take up data closet and Data Center space.

Once that happens, there may be few reasons left to maintain WAN equipment at a site, where a simple Ethernet hand-off and network functions in the Carrier Cloud at a monthly Utility cost will suffice.
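
A minimal sketch of that end state, with invented function names and prices: the on-site router, firewall and WAN optimizer are replaced by a chain of functions hosted in the Carrier Cloud and billed monthly, and the only equipment left at the site is the handoff itself.

# Hypothetical sketch: the on-site WAN stack replaced by a chain of
# network functions hosted in the carrier cloud, consumed per month.
# Function names and prices are invented to illustrate the utility model.

site = {
    "handoff": "1GbE Ethernet hand-off",  # the only on-site connectivity equipment
    "service_chain": [
        {"vnf": "vRouter",        "monthly_cost": 40.0},
        {"vnf": "vFirewall",      "monthly_cost": 60.0},
        {"vnf": "vWAN-Optimizer", "monthly_cost": 35.0},
    ],
}

monthly_total = sum(f["monthly_cost"] for f in site["service_chain"])
print(f"carrier-cloud WAN stack for this site: ${monthly_total:.2f}/month")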

My one single hope is that, as the carriers are developing and modernizing their infrastructure and capabilities, they don't forget the consumer.  With all of the Software Defined Networking (SDN) and Network Function Virtualization (NFV) in play today, I hope at least some of their APIs are pointed in our direction.

What does the extreme future hold?  #mapping

http://www.abusedbits.com/2016/11/carriers-could-eat-wan-and-more.html

If this is the case, I think we live in interesting times, and not in the sense of the proverb with similar words (1).

1 - https://en.wikipedia.org/wiki/May_you_live_in_interesting_times 

Tuesday, November 22, 2016

"Smallering" of the Enterprise Data Center

There is an acceleration of workloads moving away from the Enterprise Data Center.

Figure 1.  Workload Movement
The Digital Shift is contributing to a fundamental change in how and where applications are hosted.

Uptake in the use of Public Cloud as a principal target for what IDC calls the 3rd Platform is shifting application development toward vendors like AWS and Microsoft Azure.

Co-Location Data Center providers are rapidly shifting the cost of the Data Center to a Utility model of delivery.  The contestable monthly cost, at any volume, in the OpEx delivery model of the Mega Data Center providers creates the consumption utility.

Fundamental business operation applications are being consumed in the as-a-Service model, eliminating entirely the need to host those applications in the Enterprise Data Center.  Consider the range between Salesforce and Microsoft Office 365.

As workloads move out of the traditional Enterprise Data Center, the Enterprise Data Center estate will have to be right-sized in some significant way.

Consolidation of Data Centers will play a role in the remaining assets of the Enterprise Data Center estates.  CIOs should keep this in mind when considering workload placement for the next equipment refresh cycles.

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html


Monday, November 21, 2016

Diminution of the Enterprise Data Center

Where the Public Cloud is extremely accepting of what IDC calls the 3rd Platform of application delivery (Digital / Web Applications), the 2nd Platform (Client Server) and 1st Platform (Centralized Mainframe/Minicomputer) do not always fare well in that same operating space.

As the fixed cost of the Enterprise Data Center is liquidated across an ever-shrinking volume of 2nd and 1st Platform applications, CIOs need to look at the means to revitalize this application area, or at the very least consider options for lowering the cost of delivery.

In this space, transformation of the application and reduction of the associated technical debt of the application can be considered.  Where this is not possible, or unpalatable to the business for financial or strategic reasons, the locality of execution is the next logical place to look for better cost of delivery.

Figure 1.  Data Center CapEx vs OpEx
Workloads not able to move to Public Cloud may be moved to a co-location provider for optimal cost of delivery.  Consider also that the liquidated cost of the Enterprise Data Center per workload-month is expected to grow as more workloads move to the Public Cloud.

Financially, Co-Location results in a flat cost per unit of delivery, similar to Utility.  The monthly cost can be associated directly with the cost of doing business, even down to the individual applications being hosted.  The business can start to eliminate the CapEx cost of the Enterprise Data Center as well as future upgrades required for the facility.  Not an insignificant burden to the business.

The means to do this is well within reach of modern Network providers.  Telecommunications vendors are offering SD-WAN, WAN Switching, L2 Tunneling and other mechanisms for creating the next generation of connections to service the move to the OpEx Data Center.

With an average savings over the Enterprise Data Center in the neighborhood of 30% per workload-month, the effect on the business of flattening the gap becomes significant.

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html



Sunday, November 20, 2016

Public Cloud Echo in the Data Center

The ecosystem effects of workload placement in the cloud have a real effect on the enterprise Data Center that all CIOs should be aware of.

The cost of managing a Data Center is a fixed cost.

  •      The real estate cost plus infrastructure cost defined when the DC was built.
  •      There are the taxes that, depending on the jurisdiction, may see an increase or decrease over some relative but relevant time period.
  •      The "nearly" fixed costs of electricity at volume required to operate the DC.
  •      The maintenance, which also comes in relatively flat over time.
  •      Then there is the inevitable upgrade/renew/replace that comes at roughly the 15-20 year mark and keeps DC managers up at night.

All of these costs must be liquidated against the cost of running everything within the Data Center. (This is a pretty important factor.)

At this moment in time, the Data Center business is seeing a significant increase in volume usage that makes the likes of a Mega Data Center (Mega as in Megawatt) a viable size of delivery.

This is a fight in the economy of scale that ultimately resolves in the Data Center operating in a Utility business model.  It is arguable that Public Cloud DCs are already operating in a Utility model.

What does this mean for the ever venerable Enterprise Data Center?

Figure 1:  DC Workload Shift toward Public Cloud

Workloads transitioned or transformed to operate in the Public Cloud, moving out of the Enterprise Data Center, are changing the cost base over time.  As indicated in the two simple trajectory graphs, the available inventory will increase, as will the cost per unit measured, while the volume of workloads will decrease over time.

Add this to some really exciting capabilities in Software Defined Networking between the Enterprise (not necessarily the DC) and the Public Cloud, and the economies of scale that the Enterprise Data Center was originally deployed for start to erode.

The workloads move for cost-optimal delivery.  This provides agility to the business, with the means to execute workloads significantly more rapidly.  Then there's the case of Software Defined Networking in this space providing Security, QoS and Latency management spread out over an entirely new capability and dimension.

As the Data Center is operated with a very large number of primarily fixed-cost financials, scaling the Enterprise Data Center is incredibly difficult.  With this in mind, and with no other means to spread the cost of the Enterprise DC investment, usage of Public Cloud will drive UP the per-workload cost of execution within the Enterprise Data Center as workload volume shifts.
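
A bit of illustrative arithmetic makes the mechanism plain (the dollar figures are invented): the facility cost barely moves, so every workload that leaves raises the share carried by the ones that remain.

# Illustrative arithmetic only; the figures are invented. With a largely
# fixed annual Data Center cost, per-workload cost rises as workloads
# migrate to Public Cloud, because fewer workloads absorb the same cost.

fixed_annual_cost = 5_000_000  # hypothetical facility + power + maintenance

for workloads in (2000, 1500, 1000, 500):
    per_workload_month = fixed_annual_cost / workloads / 12
    print(f"{workloads:>4} workloads -> ${per_workload_month:,.0f} per workload-month")

# 2000 workloads -> $208 per workload-month
# 1000 workloads -> $417 per workload-month (same facility, half the tenants)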

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html

Friday, November 18, 2016

Why Public Cloud wins Workload Love

Enterprise Virtualization 2017 prediction vs Public Cloud
The graphic on the right is @swardley's (http://blog.gardeviance.org/) consumer cloud value chain, and the drawing on the left is the 2017 update of the Enterprise Virtualization Value Chain by @abusedbits.

I'm using this representation to provide a basis of understanding for the growth in workload volume in the Public Cloud space.

IDC is reporting that worldwide Public Cloud services spending will reach $195B by 2020, and I needed a way to show WHY this is happening.

This shift in workload delivery is not about cost, although the argument about workload execution relative to the coverage of the gap in the value chain is becoming extremely important.  The barrier to entry in a public cloud is extremely low.  So low that we have to pay attention to it.

It's also not about the technology currently available and operating at volume in each space, as both private and public cloud do nearly the same things.  Although this is changing for Public Cloud with the addition of Lambda functions by market players like Amazon Web Services.

Containers are following a similar path in this space, making the locality of execution of a workload correspondingly less "sticky" to the environment.  Where host virtualization has stalled, containers are positioning for least-common-denominator execution in multiple cloud environments.

What it does show is that the Public Cloud is operating with nearly all aspects of their service delivery in or near the Commodity or Utility area of the value chain.  Moreover, anything added to the Public Cloud capability is more rapidly pushed toward Commodity delivery.

Update: What's more, the pace of change in Private Cloud seems to be slowing in its movement toward Utility.  If you consider the previous 3 years of change in the Private Cloud space, not even compute has made it to a Utility service.

This doesn't mean that everything is fit for Public Cloud.  Applications will need to be revisited with the idea that application execution has to evolve to reduce the effects of technical debt.  Realistically, there is a crossroads, and a choice that needs to be made, for older software that doesn't fit very well.

As @jpmorgenthal states on migration to cloud: “It’s complicated.”  http://jpmorgenthal.com/2016/08/

Update:  http://www.businesswire.com/news/home/20161130006375/en/Worldwide-Server-Market-Revenue-Declines-7.0-Quarter


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html

2016 - Mapping Exercise - Enterprise Virtualization

Friday, November 4, 2016

Computing, ROI and Performance - Value Chain

The surprising thing about Information Technology is the willingness to purchase equipment, service, licensing and managed services in step functions.

A combination of business and technology enhancements has made it possible to eke out additional value over time, but the greatest value almost always comes from a transformational technology change.  The transformational technology change almost always changes the SIZE of the step function and who has to cover the GAP between ROI and PERFORMANCE.

Figure 1.  The ROI and Performance GAP of the Step Function of acquisition 
When purchasing ABOVE the curve, the ROI is in constant jeopardy until it meets the optimal units per cost line.

When purchasing BELOW the curve, the PERFORMANCE is in constant jeopardy until it meets the optimal units per cost line.  Even then, it is almost always in jeopardy from business budgets and the fundamentals of running a business.
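
A rough sketch of that gap, with invented numbers: capacity is bought in fixed steps while demand grows smoothly, so the effective cost per unit actually consumed jumps well above the optimal line right after each purchase and only approaches it as demand fills the step.

# Hypothetical illustration of the acquisition step function: capacity is
# bought in fixed-size chunks while demand grows smoothly, so the effective
# cost per consumed unit swings above the optimal line after each purchase.
import math

STEP_SIZE = 100   # units of capacity bought per acquisition
UNIT_PRICE = 1.0  # cost per unit of purchased capacity

for demand in (30, 90, 110, 250, 410):
    steps_bought = math.ceil(demand / STEP_SIZE)  # purchasing ABOVE the curve
    capacity = steps_bought * STEP_SIZE
    cost_per_used_unit = (capacity * UNIT_PRICE) / demand
    print(f"demand {demand:>3}: bought {capacity}, "
          f"effective cost/unit = {cost_per_used_unit:.2f}")

Purchasing below the curve is the mirror image: the per-unit cost looks good, but demand exceeds capacity and performance carries the gap instead.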

What is really interesting is how technology plays a particular role in changing the size of the step function.  Where the depiction in Figure 2 is extremely idealized, the transformational impact is not.

Figure 2.  Evolution of Computing Value Chain
Ascribing a single evolution event to the changes (and yes, there are more than one, but this is an idealized view), the changes in the industry have "almost always" led to a transformation event.  Each evolution change has, almost certainly, led to a change in the units-per-cost step function, either explicitly as a part of the acquisition cost or as part of an underlying cost associated directly with the mechanism of the step function.

As an example:  Mainframe services moving toward Minicomputers was a size evolution.  It had a direct impact on cost of acquisition and therefore reduced the step size.  One didn't have to purchase an entire mainframe, one only had to purchase a Minicomputer.

BTW, Mainframe is still a very good business to be in, but the question needs to be asked: do you want your business in Mainframes?

Another example:  Server to Virtualization was an asset optimization.  It directly impacted the cost of managing workloads on servers as well as the capital to workload ratio.  This affected the step function by increasing the utilization of computing assets and increasing the overall ratio of administrators to workloads.

Above MicroServices, the roadmap and view start to get a little hazy.  It looks like application functions will be decoupled and tied together with some type of API (Application to Platform) Chaining (or "Application Chaining"), somewhat like the methods considered for Service Chaining in the Telecom industry for MANO.
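
As a toy illustration of what that might mean, consider decoupled application functions composed by nothing more than a declared chain, in the spirit of NFV service chaining; the functions and the chain below are invented.

# Toy illustration of "application chaining": independent functions
# composed into a pipeline by a simple chaining mechanism, much as
# network functions are chained in NFV service chaining.
from typing import Callable, Iterable

def validate(payload: dict) -> dict:
    if "user" not in payload:
        raise ValueError("missing user")
    return payload

def enrich(payload: dict) -> dict:
    return {**payload, "region": "eu-west"}  # hypothetical enrichment step

def persist(payload: dict) -> dict:
    print("stored:", payload)                # stand-in for a storage API call
    return payload

def chain(functions: Iterable[Callable[[dict], dict]], payload: dict) -> dict:
    # each function is an independent unit; the chain is just declared order
    for fn in functions:
        payload = fn(payload)
    return payload

chain([validate, enrich, persist], {"user": "alice"})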

While there may be some problems implementing that type of approach, ultimately it has become less and less about the hardware and more and more about the software running on the hardware.

It is expected that this trend will continue for the foreseeable future.

In the meantime, consider this: purchasing a capability or service in the Utility area of a Value Chain, as Public Cloud is right now, will be overall less asset-intensive and tied more readily to the unit of execution than building a computing infrastructure from the floor up.  When the next transformational event hits, don't re-invent it; consider how to consume it as close to Utility as possible.