Showing posts with label Cloud. Show all posts

Thursday, August 29, 2019

The IT Toolbox #006 - Multi-Cloud Strategy

A multi-cloud strategy sits half-way between two types of thinking.

It's also rarely thought of as an optimal technical solution to a business problem.

But it's the only way for late-majority users to adopt the new technology lifecycle.  It's also the elephant in the room.

Let's start with Rogers' bell curve:

Figure 1.  Rogers' bell curve.  Credit: Wikipedia.org

Conceptually, the businesses that are going to use a public cloud strategy effectively are the ones that are arguably already using it.  They were the innovators: AWS, Google, Azure, etc.  What's interesting, in the case of the first two, is that the lifecycle that led to public cloud was an arbitrage of their excess capacity.  A way to wring more value from infrastructure already in place.

The "earlies" saw this as a starting point for their needs.  Through consumption of a defined delivery model, their use cases fit the new lifecycle model.  At any relative scale, public cloud simplified the business decision of build vs. consume.  Netflix is an incredibly good example of an "early": using the new lifecycle model literally made it possible for them to concentrate their effort more specifically on developing their product.

The late majority has a different business problem than the innovators and the "earlies."

They desperately want to be able to take advantage of the advances they perceive in the technology change:

Figure 2.  Enterprise Virtualization vs Public Cloud - Link

I've previously described the state of advancement of enterprise virtualization vs. @swardley's Public Cloud map using his technique (above).

The difference under the Wardley mapped model though, is that the Use Case doesn't align cleanly.

It is also why there's a contentious argument in the minds and words of business owners.

During the peak of expectations, we hear statements like this:

     "We're moving our workloads to public cloud."

     "We'll be 100% public cloud in 2 years."

In despair, the story changes.

      "Public cloud is too expensive."

      "Public cloud is not secure."

      This is also why Workload (or Cloud) Repatriation occurs.

With enlightenment, the story changes, yet again.

     "We're implementing with hybrid cloud, so we can take advantage of new technologies and techniques."

Today, anyone that says they are moving from legacy to hybrid cloud is perceived as a laggard.

Consider this:

When a business from the "earlies" in public cloud moves to hybrid, it is a conscious business decision. It's thought of as forward thinking. 

     Consider the infrastructure necessary to deliver Netflix today.  It's not pure public cloud.  In order to meet the content delivery goals with their customers, they built a Content Delivery Network (CDN), integrated with their platform.  It is hybrid.

     When the public cloud innovators and the "earlies" start to invest in on-prem workload execution, it is hybrid.

Multi-Cloud Strategy is meeting in the middle ground.

     It is a legitimate tool in the #ITToolbox.

     The immediate future is hybrid.

Tuesday, June 18, 2019

The IT Toolbox #001 - Definitions


It is vital that IT people communicate with the same lexicon.  A shared definition provides the specificity necessary to discuss complex topics.

The marketing engine, not to mention the general media, does little to correct ambiguity.  It can be argued that the marketing engine tolerates ambiguity in the hope of capturing the next big headline.

This isn’t new.  Key terms do matter.  They are refined over time.

What makes matters worse, we often don’t know ~exactly~ what these terms mean until they are ingrained in a pattern that everyone comes to accept.

One of the best examples is Cloud Computing, “The Cloud” or just simply ‘Cloud.’

The problem with ‘Cloud’ is that it doesn’t fit any single definition of what everyone believes it to be.

The various definitions I’ve come across include:

Cloud is hosting on the internet.  (True-ish, but not very meaningful.)
Cloud is Infrastructure as a Service.  (It’s not only, but that’s OK.)
Cloud is Platform as a Service.  (A better definition, but also incomplete.)
Cloud is Cloud Native applications.  (This is about as ill fitting as Infrastructure as a Service.)
Cloud is Serverless.  (No, it’s not.  Never was, never will be.)
Cloud is where I’m moving all of our Enterprise Applications.  (That’ll be fun)
Cloud is Digital.  (As in Digital Transformation: everyone talks about it, but few know how to do it.)

If you’d like to read a thoughtful description of Cloud, have a look at this Wikipedia article: https://en.wikipedia.org/wiki/Cloud_computing  (16 pages and 125 references, 7 deployment models and 6 service models AND 23 disambiguation references)

What I’m getting at is that calling anything ‘Cloud’ lacks definition; its meaning has no precision whatsoever.

Please make sure others know what you mean, when you say Cloud.

Monday, April 3, 2017

The what all, of Enterprise Cloud adoption, and all

I was asked my thoughts on the Enterprise Public Cloud adoption rate.



Signals leading to Public Cloud adoption:

From an enterprise perspective, the volume of servers sold in the previous year is soft. http://www.gartner.com/newsroom/id/3530117

The partnership that can underpin legacy enterprise app deployments in hybrid cloud has been announced.  https://blogs.vmware.com/vsphere/2016/10/vmware-aws-announce-strategic-partnership.html

The drivers for cost containment are going to become increasingly important as customers look for lower cost of service.  This will probably start looking like “Use (AWS or Azure) for disaster recovery” and likely to evolve into “Application Transformation” discussions as optimizations within the cloud.  The best way to see this is with the directionality of utility mapping like the value chains between enterprise and public cloud: http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html

Microsoft has announced an Azure Stack appliance to extend the reach of capabilities into the private cloud.  http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

The cost per unit of private Data Center estate is anywhere between ~40 and 100% more than public cloud.  This position is further eroded by co-location vendors continuing to drive down the cost/unit of data center space vs. new build.  It is becoming very costly to build or retrofit older data centers rather than simply consuming a service at a cost that can be easily liquidated per month.  The large-scale DC vendors are also creating ecosystem connections for networking directly with the public cloud vendors, and where those don't exist, companies like AT&T are enabling this type of service connection via their MPLS capabilities.

Then, there’s the eventual large-scale adoption of containers, which presents additional optimizations, particularly as they relate to DevOps: further increasing density over hypervisor-based virtualization and dramatically increasing the speed of change.  Further extending this capability, the network vendors, historically the last to move in 3rd platform, are starting to adopt these concepts.
http://www.abusedbits.com/2017/03/creation-and-destruction-without-remorse.html
http://www.investors.com/news/technology/can-cisco-take-aristas-best-customers-with-software-bait/
http://www.crn.com/slide-shows/channel-programs/300084323/arista-networks-exec-on-new-container-software-offensive-and-its-biggest-fundamental-advantage-over-cisco.htm?_lrsc=9ce3424f-25d3-4521-95e4-eeae8e96b525

This culminates in public cloud providers positioning themselves for legacy applications, cost containment based on their operating models, positions in DR if they can’t get production workloads, integration into private cloud where they can’t get principal workloads, and certainly new workloads at cost/volume based on scale.

This leads me to believe that the position on Public Cloud, from an enterprise perspective, is just starting…..

Friday, March 31, 2017

Enterprise WAN is evolving!

Enterprise WAN Reference Architecture
Figure 1.  Enterprise WAN Reference Architecture
Figure 1 represents the high level Enterprise WAN Reference Architecture that current network capabilities seem to be indicating for the support of enterprise services. 

The MPLS network will be extended and enhanced utilizing gateway functions like VPN (which we currently do), CSP access that enables direct connectivity via the MPLS network and SD-WAN that will allow the extension of the MPLS via the Internet to small and medium size locations (maybe even large locations).

SD-WAN will extend the capability of the MPLS network to locations not natively available with individual carriers.  It avoids the need for carrier NNIs unless absolutely necessary.  The carriage mechanism is tunneling over the internet, and it can support vendor/protocol-specific optimizations for some quality of service (an abstraction of the underlying IP connectivity).
     Where SD-WAN cannot be on an MPLS gateway, the internet direct to DC will be able to support this functionality.

This model also represents the dissection and reduction of networks that must be "carried twice", ingressing and egressing the Data Center perimeter security controls. These controls will eventually be migrated to the Carrier Cloud WAN Services.  They will be provisioned for specificity in the enterprise application usage model or virtualized per application within the workload execution model.
     Traffic destined for CSPs and SaaS can use a more direct path via the Internet if allowed by the Enterprise.

The CSPs connected to the Internet, a CSP gateway to MPLS, and ecosystem networks connected directly to Data Centers will extend the Enterprise Network to support enhanced consumption of services like SaaS and IoT as well as the various Cloud Service Providers.

Individuals will come in over a variety of connectivity mechanisms including broadband and telco wireless.

Provided the cost structure is competitive, backup paths for many of these networks are likely to shift toward future implementations of Telco 5G.

Sunday, November 20, 2016

Public Cloud Echo in the Data Center

The ecosystem effects of workload placement in the cloud have a real effect on the enterprise Data Center that all CIOs should be aware of.

The cost of managing a Data Center is a fixed cost.

  •      The real estate cost plus infrastructure cost, defined when the DC was built.
  •      The taxes that, depending on the jurisdiction, may see an increase or decrease over some relevant time period.
  •      The "nearly" fixed costs of electricity at the volume required to operate the DC.
  •      The maintenance, which also comes in relatively flat over time.
  •      The inevitable upgrade/renew/replace at roughly the 15-20 year mark that keeps DC managers up at night.

All of these costs must be liquidated against the cost of running everything within the Data Center. (This is a pretty important factor.)

At this moment in time, the Data Center business is seeing a significant increase in volume usage that makes the likes of a Mega Data Center (Mega as in MegaWatt) a viable size of delivery.

This is a fight in the economy of scale that ultimately resolves in Data Center operating in a Utility business model.  It is arguable that Public Cloud DCs are already operating in a Utility model.

What does this mean for the ever venerable Enterprise Data Center?

Figure 1:  DC Workload Shift toward Public Cloud

Workloads transitioned or transformed to operate in the Public Cloud, moving out of the Enterprise Data Center, change the cost base over time.  As indicated in the two simple trajectory graphs, the available inventory will increase, as will the cost per unit measured, while the volume of workloads decreases over time.

Add to this some really exciting capabilities in Software Defined Networking between the Enterprise (not necessarily the DC) and the Public Cloud, and the economies of scale that the Enterprise Data Center was originally deployed for start to erode.

The workloads move for cost optimal delivery.  It provides agility to the business with the means to execute workloads significantly more rapidly.  Then there's the case of Software Defined Networking in this space providing Security, QoS and Latency management spread out over an entirely new capability and dimension.

As the Data Center is operated with a very large number of primarily fixed cost financials, scaling the Enterprise Data Center is incredibly difficult.  With this in mind, and no other means to spread the cost of the Enterprise DC investment, usage of Public Cloud will drive UP the per-workload cost of execution within the Enterprise Data Center as workload volume shifts.
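A back-of-the-envelope sketch of that per-workload effect (all numbers invented for illustration; the point is the shape, not the values):

```python
# A largely fixed annual data center cost is liquidated across however
# many workloads remain inside it. As workloads migrate to public cloud,
# each remaining workload carries a bigger slice of the same fixed cost.

FIXED_DC_COST = 10_000_000  # hypothetical annual fixed cost in dollars


def per_workload_cost(workloads: int) -> float:
    """Fixed cost divided across the workloads still running in the DC."""
    return FIXED_DC_COST / workloads


before = per_workload_cost(5000)  # 5,000 workloads -> $2,000 each
after = per_workload_cost(2000)   # 60% migrate out -> $5,000 each
```

Nothing about the data center got more expensive; the same fixed cost is simply spread across fewer workloads.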

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html

Tuesday, October 11, 2016

WAN Ecosystems - Evolution

Looking forward to the day of enterprise SDN and NFV for WAN service delivery.

This graphic is how I'm depicting the evolution underway:

WAN Ecosystems


Area 1:  Move toward Virtual Network Function, where individual devices are replaced by virtual network functions on as close to commodity x86 servers as possible.  This is a very pragmatic change that is already underway, with major companies like AT&T and their Universal CPE.

It will allow the edge to evolve in software timescales rather than hardware timescales.  It also makes practical large scale deployment at fixed monthly costs.

Area 2:  Deliver Ethernet to the edge and run the Virtual Network Functions, along with Enhanced Cloud WAN Services from within the Carrier estate.

Area 2 may have dependencies on local capability requirements, like application acceleration.

Consuming services from the Enhanced Cloud WAN area could provide rapid evolution, in software, of things like security perimeter enhancements as well as more options in routing traffic.

Area 3:  My personal favorite area.  Deliver any connection to any service within the ecosystem, on demand.  Programmatically.

The delay between want and ready is reduced from weeks to minutes if you are already attached to the ecosystem.

Make everything catalog based, so the order to fulfillment time for pre-existing customers is under their control.

Eliminate "stickiness" wherever possible.

There are some maturing vendors in this space and hopefully adoption will pull standardization along with it.


Wednesday, April 6, 2016

SDN and NFV and WAN Modernization

The role of network infrastructure is seeing a rapid change from the historical perspective of component by component design to one where those capabilities are created in mixed modes of pure hardware (let's call this the classical model), hardware and software (let's call this hybrid networking) and software alone (defined as Network Function Virtualization or NFV).

Each of these can be programmatically defined, with increasing software definition as the capabilities move toward NFV.  So one may have both an NFV instance or instances of functionality AND software-defined functionality.

SDN and NFV vs Classical WAN
Using the picture above, classical site-level networking introduces discrete hardware devices to provide specific service capabilities at the location.  These could and typically do include routers, switches, firewalls, load balancers, WAN optimization devices and wireless controllers.

These also have a very low degree of unified Software Definition, but will typically have high performance.

In the case of the Software Defined WAN (Cloud WAN, SD-WAN, etc.), those functions typically assigned to discrete hardware at the site are virtualized into their Network Function, yes, Network Function Virtualization.  Integration of these virtual services with a management and control plane provides a key element of the Software Defined WAN.  Each element is intended to operate in unison in a pre-determined way to provide the site-level WAN Service as well as other services for the site.

This would have a much higher degree of Software Definition.

The limitations of this method may include insufficient processing power (CPU) and processing memory.  Where, as an example, an x86 server is used as the virtualization host, there may be a critical limit to the performance capabilities of the box.  Where network hardware typically uses well-defined functions in ASICs to amplify performance, much of the work done within a virtualization system is accomplished in software.

This means that there will be a performance difference, escalating as the number and type of NFV services applied to the same server host grows.  Expect more dramatic drops in performance when services like firewall rule sets increase significantly, as this may cause more software lookups and therefore fewer packets per second.
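A toy sketch of why rule growth hurts a software data path (hypothetical code, not any real NFV implementation): a linearly scanned rule table pays a comparison for every rule ahead of the match, which is roughly what happens when a large firewall policy is evaluated in software rather than in an ASIC's fixed-function lookup.

```python
# Toy software firewall: rules are scanned top to bottom, so the match
# cost for a packet grows with the number of rules it must pass over.

def linear_match(rules, packet):
    """Return (action, comparisons) for the first matching rule."""
    for i, (src, action) in enumerate(rules):
        if src is None or src == packet["src"]:  # None acts as a wildcard
            return action, i + 1
    return "drop", len(rules)


# 999 specific allow rules followed by a default drop.
rules = [("10.0.0.%d" % i, "allow") for i in range(1, 1000)]
rules.append((None, "drop"))

fast = linear_match(rules, {"src": "10.0.0.1"})    # matches rule 1
slow = linear_match(rules, {"src": "172.16.0.9"})  # falls through to rule 1000
```

Hardware lookups stay roughly constant-time as the table grows; the software scan above degrades linearly, which is the packets-per-second drop described.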

This is not static, though.  Performance of NFV services will continue to increase and become optimized in software.  x86 hardware is guaranteed to continue to increase in performance and applicable memory per CPU.  Network card vendors are building addressable hardware functions within the network interface.  Parts of the NFV services may be broken out to optimize performance.

The recommendation: look at the specs and decide what you want/need to do.  Where necessary, break out to a hybrid network solution to solve today's problems, and look to the future where these functions integrate even more tightly with better performance.


Friday, January 15, 2016

Enterprise Cloud

Let's have a look at enterprise cloud.

Enterprise Cloud Framework

The large majority of IDC's 2nd and 3rd Platform capabilities can be run and managed within the enterprise virtualization framework.

Built primarily on commodity x86 components, the financial model for this type of service closely follows the price-performance curve within the consumer virtualization industry (read "consumer cloud").  Built on vertically scaled systems, it'll vary considerably.

Cost per workload starts to flatten out at somewhere between 300 and 700 workloads, referenced to a 2 vCPU/2 GB vRAM reference unit virtual machine.  Storage as a mix of DAS (solid state and/or spinning) and archival as needed to service the application requirements.  Cost of storage is technology and size dependent.
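That flattening is just fixed-cost amortization. A minimal sketch with invented numbers (a hypothetical platform cost plus a variable cost per 2 vCPU/2 GB reference VM):

```python
# Cost per reference VM = (fixed platform cost / VM count) + variable cost.
# The curve drops steeply at small counts and flattens once the fixed
# cost is well amortized -- here, in the few-hundred-workload range.

FIXED_PLATFORM_COST = 500_000  # hypothetical annual platform cost ($)
VARIABLE_PER_VM = 600          # hypothetical annual cost per reference VM ($)


def cost_per_workload(n_vms: int) -> float:
    return FIXED_PLATFORM_COST / n_vms + VARIABLE_PER_VM


curve = {n: round(cost_per_workload(n), 2) for n in (100, 300, 700, 1500)}
# Going from 100 to 300 VMs saves far more per VM than 700 to 1500 does.
```

With these made-up numbers the savings from adding workloads shrink sharply past the few-hundred mark, which is the flattening described above.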

It can be located nearly anywhere that has suitable power, cooling and data access.  This provides the business with options to utilize cloud capabilities without any of the concerns arising from a multi-tenant solution (read "consumer cloud").

Disaster recovery can be any similarly constructed system with the typical limitations of application and storage latencies along with appropriate data access to service the workload DR requirements. Ideally the "similarly constructed" DR locks in control plane and hypervisor type.

It can support, in bare metal delivery, the interconnection to applications that have requirements above the application virtualization maximums and/or alternative bare metal operating systems.

---

Where things get "interesting"….

     For Network, please review the blog entry http://www.abusedbits.com/2015/11/spine-and-leaf-nodes.html for some of the concepts.  Consider that the network above the "logical rack" (that supported by a ToR switching pair) really needs horizontal scale; it needs to be extremely robust and support significant potential East-West traffic.  It also needs to support bare metal integration and containers, in addition to the virtual hosts.

     Enterprises should be mindful of the Operations Management requirements of their enterprise cloud service, particularly as it pertains to DevOps in the management and lifecycle of the infrastructure and application.  Lower in visibility, but enormously valuable to the enterprise, are things like directory services, host security, scanning and platform overlay.  Those, combined with the normal enterprise functions of backup, DR, monitoring and administrative access, really should be looked at as a cohesive set of feature-function requirements.  Monitoring, well, un-monitored cloud is simply frightening.

     Lastly, the APIs.  Adoption of a modern infrastructure or platform initiative, which this directly relates to, is all about access to the APIs.  There are APIs for the hardware at the point of management.  APIs for the control plane capabilities, which may not all be well integrated without development, as the control plane functions may be separate or standalone.  APIs for any platform substrates.  APIs for software defined networking.  Then, to top it all off, multi-vendor cloud services that deliver Hybrid Cloud capability across disparate systems will have APIs.  Read this as "you'll need programmers" OR a vendor that provides these capabilities prepackaged.

Thursday, September 24, 2015

Hybrid Cloud, how it can look....

Hybrid Cloud Network Model
The control plane is separated from the data plane.

The capability is completely API driven.

Imagine now an environment that is fully automated and orchestrated.  

It has QoS for the Enterprise Application.

It has Bandwidth Management for financial management.

It has the possibility to connect at Layer 2, extending an application back to the Private Cloud.

It has an SLA.

It has well understood Latency.

It avoids the Internet completely.

Want to know more?  Follow the Twitter hashtag #harnessthecloud

Wednesday, September 23, 2015

Hybrid Cloud, how it used to look....


Islands of automation areas in Cloud connectivity
And I'm not joking about owning every possible problem on the internet.

If you would like to see how it can be done now, please follow this link:

http://www.vdatacloud.com/blogs/2015/09/23/the-network-is-the-computer/