Saturday, December 17, 2016

Decoupled Hardware and Application Chaining

To start things off, I'm going to "abuse" the value chain from +Simon Wardley, mostly because I haven't figured out a better way of describing this.

It is intended to be the start of a conversation, not the end of one.

The argument I'm making is that hardware, in the classical/traditional sense of computing, is progressively being decoupled from the execution of the application.  To be sure, we saw some of this with the introduction of virtualization, but it wasn't so much decoupled as pushed so far down the value chain that application architects simply quit thinking about hardware.

To understand how we get to that state, we should probably start with what has happened to hardware.  A high-level depiction is provided in Figure 1:

Figure 1.  Value Chain Evolution of Hardware in Computing
Starting with mechanical computing in Genesis (lower left), it shows each high-level hardware change: IDC's 1st Platform spanning Mainframe to the (still present) Mini Computer, the 2nd Platform from Micro Computer to Server Virtualization, and the 3rd Platform with Cloud Computing and beyond, with outliers just kicking off in IoT and Private Computing.

Figure 2.  Value Chain of Software Evolution in Computing

Software has a more complex value chain than the hardware it runs on.  While we could argue the positions, the relative perspective can be used to illustrate the mechanics that have driven software and its subsystems toward commodity.

A case in point: Operating Systems need to be product, providing ease of use, while something like computer Languages remains closer to custom because of the depth of study required to learn and master them.

This is as much a matter of the cost of the products' output as of the education needed to create the products and make them useful and usable.

Ultimately there's something unique happening today that is not necessarily the easiest thing to describe.  It's called #serverless.

We've done things like this before with portability of languages like Java, where the execution method was abstracted away from the hardware sufficiently to run it on multiple hardware devices.

Today we're thinking about a more absolute abstraction of the hardware.  This will make it possible to decouple multiple and complex languages, libraries and applications from the hardware.

To show where and how this is happening, I'm overlaying these two value chains.  Apologies if I've just offended the #Mapping #Gods.

Figure 3.  Combined Hardware - Software
Upper right hand corner holds the primary elements to decoupling hardware from the software.  Hardware is still there, but is only important to those that have to care about it.  

The feedback loop of Agile Development, with loosely coupled, message/event-driven design and backflow tuning of the software, plays a critical role in reaching the state of the Decoupled Application Function.  Quite a lot of this is to make the application more aware of itself.

In the hardware area, some element of containerization (capture of all relevant parts of the execution of a function of an application or sub-application) moves the awareness of the application design away from the hardware to the operating system (arguably just another abstraction of the hardware's function).

Containerization provides the means to swiftly move the application or subcomponents from one hardware underlay to another (think one server to another or one cloud to another).
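A minimal sketch of that portability (the image name here is hypothetical):

# Build once, then run the identical image on any Docker host
docker build -t myapp:1.0 .                       # on the build machine
docker save myapp:1.0 | gzip > myapp-1.0.tar.gz   # package the container image
docker load < myapp-1.0.tar.gz                    # on a different server or cloud VM
docker run -d myapp:1.0                           # same execution, new underlay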

Ultimately this makes it possible to create components of applications that can become highly standardized or industrialized.  Something we might call #microservices.

These components, being decoupled, can be called upon independently by API.  Sufficiently optimized, they can be spun up and down on demand for the execution of a particular function.

The application could simply become a cascade of API calls between decoupled application functions to create an output.
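A toy sketch of that cascade in shell (the endpoints are entirely hypothetical):

# Each decoupled function is just an API call; the output of one feeds the next
order=$(curl -s -X POST https://api.example.com/orders -d @order.json)
priced=$(printf '%s' "$order" | curl -s -X POST https://api.example.com/pricing -d @-)
printf '%s' "$priced" | curl -s -X POST https://api.example.com/fulfillment -d @-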

I believe the next trick is going to be related back to a platform mechanism.  We're going to need a schematic or blueprint to create / modify the application.  Something that can understand all of the sub-component elements and provide the messaging and feedback necessary to execute more complex arrangements of applications.

A platform that does Application Chaining.  

This platform could also abstract away the complexity of creating an application, with the most simplified models providing a GRAPHICAL interface for composing one.

Also, the sensing data this would provide would be incredibly valuable to the operator of the platform.  It would give advance knowledge of new application methodologies and sub-components.


Wednesday, November 30, 2016

Carriers could eat the WAN and more

In the previous article I shared a vision of a future beyond SD-WAN.  It requires that the Telecom Carriers specifically not forget about their customers as they roll out the Network Function Virtualization (NFV) modernization they are currently undertaking.

This really includes, and I can't emphasize this enough, a service interface based on APIs consumable by the customer.

This means that all of the NFV MANO work they are doing needs to have a customer control interface that allows the direct consumption of their service(s).  Automated and orchestrated service chaining that speeds current methods of delivery is simply not enough.
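By way of illustration only (this endpoint and payload are hypothetical; no such carrier API was standardized at the time of writing), direct consumption could be as simple as:

# A customer requesting an orchestrated service chain from the carrier's MANO platform
curl -X POST https://api.carrier.example/wan/v1/service-chains \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"site": "branch-42", "functions": ["firewall", "wan-acceleration"], "bandwidth_mbps": 100}'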

There's also a coup in the works for those Carriers that get this one simple idea.  If they enable their NFV MANO platform to provide catalogs of consumable code (a marketplace), pioneers in WAN service enhancement will be positioned to develop new services directly for the network.

The Carriers will be able to utilize the pioneers in the WAN networking space not only to enhance their services using this platform, but also to create an exceptional "sensing engine" for future products and industrialization within their ecosystem.

Let's have a look at the possible step function to get to this future state.

Figure 1.  Step Function to Carrier Cloud WAN
The starting point in this step function is the circuit-based WAN topology that was primarily single customer, with the majority of consumption in the T1/E1 - DS-3/E3 range.  Through some fits and starts, the predominant delivery eventually became a shared-service mechanism utilizing Multi-Protocol Label Switching (MPLS), and delivery matured from what was at the very least a custom build into a product, with last-mile delivery being the time-consuming effort.

In the previous blog, we showed that SD-WAN is well positioned for the edge services consumption model of the future.  There are some drawbacks, particularly in the current levels of standardization, but this may well be a short stop on the way to bigger things.  Eventually this technology will incorporate all of the interesting features of direct internet offload and direct to ecosystem connections through the evolution to Hybrid-WAN.

But I believe there's an immediate follow-on the carriers should be looking into: the provisioning of Enhanced Cloud WAN Services.  Carriers already own all the WAN packets anyway; why not provide add-on services within their shiny new NFV frameworks?  Why not open up the platform to developers to enhance those capabilities within that new platform capability?  Why not develop a marketplace to provide new services and use the pioneers to sense the future services to industrialize?

Should this come into being, the Carrier Cloud WAN becomes much easier to envision.  It is the industrialized future of the Enhanced Cloud WAN Services, with something as simple as an Ethernet handoff at the edge and a marketplace of services, capabilities and ecosystem partners made available for Utility consumption.

Will there be business and security concerns?  Certainly.  Keep in mind, MPLS was a business and security concern because of its shared service nature when it first came out.  It's everywhere now.  #mapping

This is a followup article to:

http://www.abusedbits.com/2016/11/value-chain-mapping-and-future-of-sd-wan.html 

Tuesday, November 29, 2016

Future of SD-WAN and the Value Chain

Office and business campus locations utilize WAN, or Wide Area Network(ing), for connectivity to other business locations as well as to Data Centers.

There's new technology poised to replace the legacy of routers and interesting circuit and protocol types.  It is called SD-WAN, or Software Defined WAN, and it applies software rules and network functions (sometimes virtualized) to provide more instant gratification in the area of Wide Area Networking.

Of the things that can be done with SD-WAN, one of the more interesting is a combination of two delivery methods, like traditional WAN using MPLS and broadband Internet (similar to what you might have providing internet service to your home).  This combination, called Hybrid-WAN and using special protocols (read this as non-standardized and highly proprietary) running on the WAN device, can provide the following (a rough sketch of the path selection appears after the list):
  • higher availability at lower run cost through the use of broadband internet
  • secure connected perimeter with direct internet offload reducing the cost of internet access
  • more direct interconnect to services like AWS and Azure avoiding the hairpin in the Data Center
  • attachment to vendor ecosystems that sell services like SaaS, VoIP, storage, backup, etc.
  • enhanced security services within the SD-WAN provider's cloud
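Picturing that path selection crudely with plain Linux policy routing (an illustration only; vendor implementations are proprietary, and the addresses here are invented):

# Two default routes in separate routing tables, one per WAN path
ip route add default via 10.0.0.1 dev eth0 table 100     # MPLS circuit
ip route add default via 192.168.1.1 dev eth1 table 101  # broadband internet
# Traffic marked for direct internet offload takes the broadband path
ip rule add fwmark 0x1 table 101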
What is even more interesting is the trajectory of this particular technology.

Figure 1.  SD-WAN WardleyMap for 2015
In 2015, this technology was arguably in the Genesis phase of the Value Chain.  There were some initial entrants to this capability, but it was mostly a laboratory exercise to determine the art of the possible.

Figure 2.  SD-WAN WardleyMap for 2016

There was a rather dramatic series of changes that happened in 2016.  The viability of the individual vendor solutions started to show through in the market and even major telecommunication vendors started to take note of it.  

Early in 2016 a major telecommunications company even announced their support and pending availability of SD-WAN in the form of a Utility service model.  Others followed. This shifts the financial mechanism toward commodity.  

Late in 2016 some of the early adopters of SD-WAN stated publicly that they no longer had a need for traditional WAN deployments.

Figure 3.  SD-WAN WardleyMap Prediction for 2017
My prediction for SD-WAN in 2017 is complete commoditization and Utility release of SD-WAN by every major player in the Wide Area Networking space, if not the announcement of partnerships with companies that have this capability.

This doesn't mean that the undercarriage of the service will be 100% utility, though.  This first step will provide the means to change the usage behavior of the WAN consumers and not necessarily the underlying components.  It will erode parts of the current business model, and there is still plenty of room for the providers to eke out more efficiency as the technology evolves toward commodity.

What happens after that is probably more interesting.  I've a feeling this is simply a stepping stone to the next WAN delivery model.    The telecommunication carrier's network is undergoing a significant technology change with a base in Network Function Virtualization (NFV).  


As this rolls out and reaches maturity, the carriers will learn that they can provide, in a monthly Utility model, many of the services in their "Carrier Cloud" that their customers currently solve with hardware and appliances taking up data closet and Data Center space.

Once that happens, there may be few reasons left to maintain WAN equipment at a site, where a simple ethernet hand-off and network functions in the Carrier Cloud at a monthly Utility cost will suffice.

My one single hope is that, as the carriers are developing and modernizing their infrastructure and capabilities, they don't forget the consumer.  With all of the Software Defined Networking (SDN) and Network Function Virtualization (NFV) in play today, I hope at least some of their APIs are pointed in our direction.

What does the extreme future hold?  #mapping

http://www.abusedbits.com/2016/11/carriers-could-eat-wan-and-more.html

If this is the case, I think we live in interesting times and not the proverb with similar words (1).

1 - https://en.wikipedia.org/wiki/May_you_live_in_interesting_times 

Tuesday, November 22, 2016

"Smallering" of the Enterprise Data Center

There is an acceleration of workloads moving away from the Enterprise Data Center.

Figure 1.  Workload Movement
The Digital Shift is contributing to a fundamental change in how and where applications are hosted.

Uptake in the use of Public Cloud as a principal target for what IDC calls the 3rd Platform is shifting application development toward vendors like AWS and Microsoft Azure.

Co-Location Data Center providers are rapidly shifting the cost of Data Center to a Utility model of delivery.  The contestable monthly cost, at any volume, in the Mega Data Center providers' OpEx delivery model creates the consumption utility.

Fundamental business operation applications are being consumed in the as-a-Service model, eliminating entirely the need to host those applications in the Enterprise Data Center.  Consider the range between Salesforce and Microsoft o365.

As workloads move out of the traditional Enterprise Data Center, the Enterprise Data Center estate will have to be right sized in some significant way.

Consolidation of Data Centers will play a role in the remaining assets of the Enterprise Data Center estates.  CIOs should factor this in when planning workload placement for the next equipment refresh cycles.

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html


Monday, November 21, 2016

Diminution of the Enterprise Data Center

Where the Public Cloud is extremely accepting of what IDC calls the 3rd Platform of application delivery (Digital / Web Applications), the 2nd Platform (Client Server) and 1st Platform (Centralized Mainframe/Minicomputer) do not always fare well in that same operating space.

As the fixed cost of the Enterprise Data Center is liquidated across an ever-shrinking volume of 2nd and 1st Platform applications, CIOs need to look at the means to revitalize this application area, or at the very least consider options for lowering the cost of delivery.

In this space, transformation of the application and reduction of its associated technical debt can be considered.  Where this is not possible, or unpalatable to the business for financial or strategic reasons, the locality of execution is the next logical place to look for a better cost of delivery.

Figure 1.  Data Center CapEx vs OpEx
Workloads not able to move to Public Cloud may be moved to a co-location provider for optimal cost of delivery.  Consider also that the liquidated cost of the Enterprise Data Center per workload-month is expected to grow as more workloads move to the Public Cloud.

Financially, Co-Location results in a flat cost per unit of delivery, similar to Utility.  The monthly cost can be associated directly with the cost of doing business, even down to the individual applications being hosted.  The business can start to eliminate the CapEx cost of the Enterprise Data Center as well as future upgrades required for the facility.  Not an insignificant burden to the business.

The means to do this is well within reach of modern Network providers.  Telecommunications vendors are offering SD-WAN, WAN Switching, L2 Tunneling and other mechanisms for creating the next generation of connections to service the move to the OpEx Data Center.

With an average savings over the Enterprise Data Center in the neighborhood of 30% per workload-month, the effect on the business of flattening the gap becomes significant.

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html



Sunday, November 20, 2016

Public Cloud Echo in the Data Center

The ecosystem effects of workload placement in the cloud have a real effect on the enterprise Data Center that all CIOs should be aware of.

The cost of managing a Data Center is a fixed cost.

  •      The real estate cost plus infrastructure cost defined when the DC was built.
  •      There are the taxes that, depending on the jurisdiction, may see an increase or decrease over some relative but relevant time period.
  •      The "nearly" fixed costs of electricity at volume required to operate the DC.
  •      The maintenance, which also comes in relatively flat over time.
  •      Then there is the inevitable upgrade/renew/replace that comes at roughly the 15-20 year mark and keeps DC managers up at night.

All of these costs must be liquidated against the cost of running everything within the Data Center. (This is a pretty important factor.)
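A trivial, made-up illustration of that liquidation effect (the dollar figure and workload counts are invented purely for the math):

# $12M/year of fixed DC cost spread over a shrinking workload count
for workloads in 4000 3000 2000 1000; do
  echo "$workloads workloads -> \$$(( 12000000 / workloads )) per workload-year"
done
# 4000 -> $3000, 3000 -> $4000, 2000 -> $6000, 1000 -> $12000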

At this moment in time, the Data Center business is seeing a significant increase in volume usage that makes the likes of a Mega Data Center (Mega as in MegaWatt) a viable size of delivery.

This is a fight in the economy of scale that ultimately resolves in the Data Center operating in a Utility business model.  It is arguable that Public Cloud DCs are already operating in a Utility model.

What does this mean for the ever venerable Enterprise Data Center?

Figure 1:  DC Workload Shift toward Public Cloud

Workloads transitioned or transformed to operate in the Public Cloud, moving out of the Enterprise Data Center, are changing the cost base over time.  As indicated in the two simple trajectory graphs, available inventory will increase, as will the cost per unit measured, while the volume of workloads decreases over time.

Add this to some really exciting capabilities in Software Defined Networking between the Enterprise (not necessarily the DC) and the Public Cloud, and the economies of scale that the Enterprise Data Center was originally deployed for start to erode.

Workloads move for cost-optimal delivery.  This provides agility to the business, with the means to execute workloads significantly more rapidly.  Then there's the case of Software Defined Networking in this space providing Security, QoS and Latency management spread out over an entirely new capability and dimension.

As the Data Center is operated on a very large base of primarily fixed costs, scaling the Enterprise Data Center is incredibly difficult.  With this in mind, and no other means to spread the cost of the Enterprise DC investment, usage of Public Cloud will drive UP the per-workload cost of execution within the Enterprise Data Center as workload volume shifts.

Why?  http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html 


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html

Friday, November 18, 2016

Why Public Cloud wins Workload Love

Enterprise Virtualization 2017 prediction vs Public Cloud
The graphic on the right is @swardley's (http://blog.gardeviance.org/) consumer cloud value chain, and the drawing on the left is the 2017 update of the Enterprise Virtualization Value Chain by @abusedbits.

I'm using this representation to provide a basis of understanding for the growth in workload volume in the Public Cloud space.

IDC is forecasting that Worldwide Public Cloud Services Spending will reach $195B by 2020, and I needed a way to show WHY this is happening.

This shift in workload delivery is not primarily about cost, although workload execution relative to the coverage of the gap in the value chain is becoming extremely important.  The barrier to entry in a public cloud is extremely low.  So low that we have to pay attention to it.

It's also not about the technology currently available and operating at volume in each space, as both private and public cloud do nearly the same things.  Although, this is changing for Public Cloud with the addition of Lambda functions by market players like Amazon Web Services.

Containers are following a similar path in this space, making the locality of execution of a workload subsequently less "sticky" to the environment.  Where host virtualization has stalled, containers are positioning for least common denominator execution in multiple cloud environments.

What it does show is that the Public Cloud is operating with nearly all aspects of their service delivery in or near the Commodity or Utility area of the value chain.  Moreover, anything added to the Public Cloud capability is more rapidly pushed toward Commodity delivery.

Update: What's more, the pace of change in Private Cloud seems to be slowing in its progress toward Utility.  If you consider the previous 3 years of change in the Private Cloud space, not even compute has made it to a Utility service.

This doesn't mean that everything is fit for Public Cloud.  Applications will need to be revisited with the idea that application execution has to evolve to reduce the effects of technical debt.  Realistically, there is a crossroads: a choice needs to be made about older software that doesn't fit very well.

As @jpmorganthal states on migration to cloud: “It’s complicated” (http://jpmorgenthal.com/2016/08/).

Update:  http://www.businesswire.com/news/home/20161130006375/en/Worldwide-Server-Market-Revenue-Declines-7.0-Quarter


Part 1:  There is a shift afoot, that is changing the business.
     http://www.abusedbits.com/2016/11/public-cloud-echo-in-data-center.html

Part 2: Moving time for the Enterprise Data Center?
     http://www.abusedbits.com/2016/11/diminution-of-enterprise-data-center.html

Part 3: The Enterprise Data Center is getting smaller.  Time to add that to the roadmap.
     http://www.abusedbits.com/2016/11/smallering-of-enterprise-data-center.html

2016 - Mapping Exercise - Enterprise Virtualization

Friday, November 4, 2016

Computing, ROI and Performance - Value Chain

The surprising thing about Information Technology is the willingness to purchase equipment, service, licensing and managed services in step functions.

A combination of business and technology enhancements have made it possible to eke out additional value over time, but the greatest value almost always comes from a transformation technology change.  The Transformational Technology change almost always changes the SIZE of the step function and who has to cover the GAP between ROI and PERFORMANCE.

Figure 1.  The ROI and Performance GAP of the Step Function of acquisition 
When purchasing ABOVE the curve, the ROI is in constant jeopardy until it meets the optimal units per cost line.

When purchasing BELOW the curve, the PERFORMANCE is in constant jeopardy until it meets the optimal units per cost line.  Even then, it is almost always in jeopardy from business budgets and the fundamentals of running a business.

What is really interesting is how technology plays a particular role in changing the size of the step function.  Where the depiction in Figure 2 is extremely idealized, the transformational impact is not.

Figure 2.  Evolution of Computing Value Chain
Ascribing a single evolution event to the changes (and yes, there is more than one, but this is an idealized view), the changes in the industry have "almost always" led to a transformation event.  Each evolution change has, almost certainly, led to a change in the units-per-cost step function, either explicitly as part of the acquisition cost or as part of an underlying cost associated directly with the mechanism of the step function.

As an example:  Mainframe services moving toward Minicomputers was a size evolution.  It had a direct impact on cost of acquisition and therefore reduced the step size.  One didn't have to purchase an entire mainframe, one only had to purchase a Minicomputer.

BTW, Mainframe is still a very good business to be in, but the question needs to be asked: do you want your business in Mainframes?

Another example:  Server to Virtualization was an asset optimization.  It directly impacted the cost of managing workloads on servers as well as the capital-to-workload ratio.  This affected the step function by increasing the utilization of computing assets and increasing the overall ratio of workloads to administrators.

Above MicroServices, the roadmap and view starts to get a little hazy.  It looks like application functions will be decoupled and tied together with some type of API (Application to Platform) Chaining (or "Application Chaining"), sort of like the methods considered for Service Chaining in the Telecom industry for MANO.

While there may be some problems implementing that type of approach, ultimately it has become less and less about the hardware and more and more about the software running on the hardware.

It is expected that this trend will continue for the foreseeable future.

In the meantime, consider this: purchasing a capability or service in the Utility area of a Value Chain, as Public Cloud is right now, will be less asset intensive overall and tied more readily to the unit of execution than building a computing infrastructure from the floor up.  When the next transformational event hits, don't re-invent it; consider how to consume it as close to Utility as possible.

Wednesday, October 26, 2016

Technical Debt, and Wookiees™, in a Star Trek™ world

In information technology, the goal of design is 100% uptime, be it of the equipment infrastructure (and oh by the way we shouldn't be doing this anymore) or the application (it's really what we care about).

I often think of this as the Star Trek™ conundrum, where the desire is to have something absolutely perfected.  In this way, it will work the way you want it to, when you want it to, each time you want it to.

The components ideally fitted at the subatomic layer for exactness.  Software tested to be infallible. The entire system redundant with tolerances that leave no excuse for failure.  All parts upgrade-able and replaceable within a specification that meets the test of time.  Ultimately, the ideal of the star ship Enterprise.

In other words, perfect.

This perfection ends up being quite expensive, so the tolerances are loosened.  The software is released in stages of development.  The parts have finite longevity with future specifications not well understood.  Redundancy placed where traditional mechanisms fail the most frequently.  Sort of like the Millennium Falcon.

So, there will be a really tall and hairy guy with an intergalactic spanner wrench banging on the console of a component that is failing due to something else failing in the power room, 100 meters away under the floor.  It is inevitable.

As the Wookiee™ is banging on the console, just remember that at some point it did the Kessel Run in less than twelve parsecs.

Maybe it's time to establish a new Kessel Run record.



NFV - Service Consumption

As the telecommunications industry starts delivering on Network Function Virtualization (NFV) via delivery of Virtual Network Functions (VNF) there should be a consideration for the service consumption mechanism that is driving this industry.

Consider that startup players are approaching this market segment by enabling developers to directly integrate with their systems.  As the network ecosystems have evolved to include demand-based services, these new players are providing the means to directly consume services that have historically been managed services.

There is a direct parallel of this model with the likes of Amazon Web Services and Microsoft Azure.  They have built a platform and enhanced it over time to directly address the service consumption model.  As demand-based services, compute and storage have largely been commoditized, or in the vernacular of the Value Chain, they are utility services.  You pay for what you use.

Telecommunications carriers need to be aware of the conditions this placed on the entirety of the IT market.  It shifted major capabilities to Hybrid Cloud, and may further shift the entirety of workload execution to this demand-based service area before the next major scale-out.

During this evolution, traditional managed services may not survive in their current state.  Further, the direction of OSS and BSS has almost always been northbound.  As the digital shift continues, these functions need to be both north- and southbound.

Finally, and there cannot be enough emphasis on this: this is technology segregated by logic.  Policy Enforcement that is well understood and tied together, from MANO Service Chaining to the VIM and finally to the consumer, needs to be a foundational part of the plan of service delivery, enabled and enacted upon by API and made available to the masses that will be in a position to consume it.

The evolution of this space is ripe for lambda-function-like execution at maturity.

Tuesday, October 11, 2016

WAN Ecosystems - Evolution

Looking forward to the day of enterprise SDN and NFV for WAN service delivery.

This graphic is how I'm depicting the evolution underway:

WAN Ecosystems


Area 1:  Move toward Virtual Network Function, where individual devices are replaced by virtual network functions on as close to commodity x86 servers as possible.  This is a very pragmatic change that is already underway, with major companies like AT&T and their Universal CPE.

It will allow the edge to evolve on software timescales rather than hardware timescales.  It also makes large-scale deployment at fixed monthly costs practical.

Area 2:  Deliver Ethernet to the edge and run the Virtual Network Functions, along with Enhanced Cloud WAN Services from within the Carrier estate.

Area 2 may have dependencies on local capability requirements, like application acceleration.

Consuming services from the Enhanced Cloud WAN area could provide rapid evolution, in software, of things like security perimeter enhancements as well as more options in routing traffic.

Area 3:  My personal favorite area.  Deliver any connection to any service within the ecosystem, on demand.  Programmatically.

The delay between want and ready reduced from weeks to minutes if you are already attached to the ecosystem.

Make everything catalog based, so the order to fulfillment time for pre-existing customers is under their control.

Eliminate "stickiness" wherever possible.

There are some maturing vendors in this space and hopefully adoption will pull standardization along with it.


Monday, October 10, 2016

As the Pendulum Swings

Figure 1.  As the Pendulum swings


There's a relatively constant motion in the IT industry that we tend to think of as a pendulum.  As technologies evolve, we often see a swing toward "what was old is new again" and "what was new is old, again."  

It is at least partially represented by these two graphics.  

The first being price vs performance.  We often liquidate performance for price as we abstract or redefine products.  There are evolution resets that cause it to restart again, frequently to the detriment of the previous technology course, but often looking like something that skipped a generation.  This is a foundation of the pendulum swing and why we think of it as a pendulum.

The second can be thought of as dedicated vs general purpose, or locked-in vs open.  As an example: IT functions dedicated in hardware are created to optimize performance, at the expense of great cost.  IT functions created in general-purpose form arrive with a performance penalty vs the dedicated systems, but become extremely versatile.  These will often lead to a stair-step function in the next technology evolution.  This doesn't correlate directly with the pendulum swing but with a shift in position of the pendulum; see below.

Figure 2. Efficiency Improvements - shift in position
Special thanks to Simon Wardley @swardley - http://blog.gardeviance.org/  mapping



Wednesday, September 28, 2016

Healthcare Value Chain 2016

Healthcare Value Chain
Value Chain for healthcare IT systems in 2016
Femi and I created these to describe the Healthcare value chain for 2016.

The first figure indicates the high level value chain and the area where the most value may be derived from an increase in technical capability.

The supporting requirements, functions and interactions are all indicated as subsequent capabilities.

Healthcare Step Function
Possible Technology Step Function for Healthcare

The second figure shows a possible future step function that causes a reset in the value chain and what those potential Healthcare technology areas would become.

In the event that this future becomes reality, where Population Health Management resides in the top figure today, Targeted or Precision (Personalized) Medicine will become the operational mode in the future. #mapping

Friday, September 23, 2016

Network Function Virtualization - Value Chain 2016



Value Chain for NFV in 2016
I'm using this to describe the relative position of functions in the value chain for Network Function Virtualization (NFV).

Where the customer is consuming Virtual Network Functions (VNFs) there are a large number of supporting functions underlying the delivery of the VNFs.

This is intended to show the relationship between those elements that would perform part of a MANO Functional Block in the VIM (Virtualized Infrastructure Manager). #mapping

Friday, September 2, 2016

Docker Host Networking Modes

I created this in an attempt to show the Docker Host Networking modes all aligned on the same model, mostly because I couldn't find anything quite like this representation.

If you want to provide any feedback on the models or descriptions I would love to hear back from you.

Docker Host Networking
Docker Host Networking Modes

Tuesday, July 26, 2016

Weekend Wireless

My previous WLAN provided adequate coverage and N speeds throughout the house, but I'm starting to get devices that are AC capable.

When the home wireless starts looking like it needs a refresh, my go-to has historically been an enterprise grade wireless system.

I've been reading the reviews of the Ubiquiti product line (Ubiquiti UAP-AC-LR) and decided it was time to give them a try.  If for no other reason than many people reported A) it was difficult to set up, B) it's used by WLAN engineers, and C) the cost (how they do it for this price is pretty cool).

A) Not true
B) Ok.  I believe that anyone that can install a java server app could probably do the "home" level configuration work.
C)  Yeah, $200, not $2000

I chose 2 of the LR model.  Coverage of my home with N required 2, so AC would most likely require 2 as well.  BTW, a WLAN site survey should probably be done.  I did one for my home for N, so I pretty much knew what the frequency coverage pattern needed to look like.

Received them 2 days after ordering; when Amazon Prime works, it really works.

Unboxing:

Ubiquiti UAP-AC-LR

2 manuals, may open them one day.

A sub ceiling drill-through mounting bracket with screws.  Won't be using that.

A PoE injector with power cable.  Nice addition if you don't have a PoE switch.

Access Point with mounting bracket.

Controller Installation:

https://www.ubnt.com/download/unifi -> Unifi controller for Windows (going to run it on my media server)

There is also a user guide on this page for the controller.

Java needs to be installed on the computer.

Once installed, this dialog pops up.  It's a direct link to https://localhost:8443




Click "Launch a Browser…"



You may see this:  Click "Advanced"




Click "Add Exception…"



Click "Confirm…"



Verify your Country and Timezone



Note:  This is where it gets interesting.  If you have a single-VLAN home network, no issues.  If you have multiple VLANs, make sure the AP and the device the controller server is running on are in the same VLAN.

The AP light will turn white.  Select "Refresh Now"



Select the checkbox next to the AP and select "Next"



Put in the SSID and Wireless Key you want to use.

Select "Next"



Put in the administrative information.  Select "Next"

Then select "Finish"

The installation now takes you to the Unifi controller



Sign in with the administrative name and password you set earlier



Select Devices Icon - third icon in the black menu on the left



The AP(s) will auto-magically be discovered if connected.

Notice "Pending Approval" in the Status.



Scroll to the right of the window and select "Adopt" under actions menu






It is now "CONNECTED" and the SSID is broadcasting.  The light on the AP switched to a Blue hue.

Feel free to use the user guide to customize your wireless.

AP Installed:

Used two of the expansion anchors in the screws pack.



Cute.  Lots better than the 6 legged black spider I had up there before.  Wife likes the looks of it a LOT better.

Entire Installation:  About 1.5 hr.  It took longer to get the old AP off the ceiling than it did to configure the software and install the APs.  The locking mounting bracket of the old AP gave me so much trouble.

Yeah, I need to patch the hole where the anchor pulled out from the last AP.

Final thoughts after using it for a couple of days.

If you are in a relatively small living space, use the wireless on your router.  I'm not sure you'd get much more out of this solution.

If you are living in a larger space, that may require 2 or more APs for coverage, consider this AP.  Also, consider the single channel roaming configuration if you want to be really cutting edge.

It's provided really good performance at distance.  More than I expected for approximately $200 and as good as I was getting out of a major enterprise brand Controller plus WAPs of the 802.11n generation.  Also, "way better" than 3 of the current top of the line SOHO brand 802.11ac routers I have.

I won't be taking them down anytime soon.  Hope I get the same 6 years of use out of these that I did with the previous APs.  Even if I didn't, I could replace them 10 times over and it wouldn't cost more than the previous installation.

Recommendation:  HIGH






Monday, July 25, 2016

Docker Network Demo - Part 5



A couple of useful links:

https://github.com/wsargent/docker-cheat-sheet

https://blog.docker.com/2013/10/docker-0-6-5-links-container-naming-advanced-port-redirects-host-integration/

Also figured out where the interesting docker names come from:

https://github.com/docker/docker/blob/master/pkg/namesgenerator/names-generator.go

BTW, there is a lot of REM in the file with some Easter Egg kind of info in it.

https://docs.docker.com/engine/reference/commandline/attach/

You can create your own names using --name foo as in "docker run --name test -it alpine /bin/sh".

Resuming from Part 4….

First thing, I just simply didn't have it in me to continue to use a complete /16. So:


docker network create -d bridge --subnet 172.16.2.0/24 docker2

nelson@lab1:~$ docker network ls
NETWORK ID          NAME                DRIVER
5ef6f5f7f40f        bridge              bridge
11f4ac20d39d        docker1             bridge
5d150019b8a9        docker2             bridge
d1a03332c0c1        host                host
91b70cf2593b        none                null

I feel so much better…..

Also, I updated the Ubuntu system and rebooted it, so I'm going to need to recreate the containers I'm playing with.

Now that I know how to name the docker containers, I can re-create the lab setup rapidly with the following commands:

docker run --name=test1 --net=docker1 -it alpine /bin/sh

docker run --name=test2 --net=docker1 -it alpine /bin/sh

docker run --name=test3 --net=docker2 -it alpine /bin/sh


nelson@lab1:~$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
9f9a5604108b        alpine              "/bin/sh"           2 minutes ago       Up 2 minutes                            test3
61acf893dac5        alpine              "/bin/sh"           2 minutes ago       Up 2 minutes                            test2
b501988db295        alpine              "/bin/sh"           3 minutes ago       Up 2 minutes                            test1

Docker revised containers and networks

Let's look at the connectivity again. The vSwitch isn't allowing the traffic to pass from one bridge to the other.

From test1 to test3


/ # ping 172.16.2.2
PING 172.16.2.2 (172.16.2.2): 56 data bytes
^C
--- 172.16.2.2 ping statistics ---
8 packets transmitted, 0 packets received, 100% packet loss

From test3 to test1

/ # ping 172.16.1.2
PING 172.16.1.2 (172.16.1.2): 56 data bytes
^C
--- 172.16.1.2 ping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss

What does it take to get the containers to be able to talk to each other?

https://docs.docker.com/v1.8/articles/networking/ -> Search "Communication between containers"

There's a nice section on the rules here, but basically it can be turned off if --iptables=false is invoked at Docker start.

Be aware: This is not considered a secure way of allowing containers to communicate. Look up --icc=true and https://docs.docker.com/v1.8/userguide/dockerlinks/
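An aside before doing that: a narrower alternative (a sketch only, using the subnets from this lab) is to leave Docker's iptables management enabled and insert explicit ACCEPT rules ahead of the DOCKER-ISOLATION chain:

# Permit just docker1 <-> docker2 traffic ahead of Docker's isolation rules
# (not persistent across reboots; re-apply or save them if you keep this)
sudo iptables -I FORWARD 1 -s 172.16.1.0/24 -d 172.16.2.0/24 -j ACCEPT
sudo iptables -I FORWARD 1 -s 172.16.2.0/24 -d 172.16.1.0/24 -j ACCEPT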

Before:


nelson@lab1:/etc/default$ sudo iptables -L -n
[sudo] password for nelson:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (3 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0


Insert the following options in /etc/default/docker using your favorite editor

#nelson - remove iptables remove masquerade

DOCKER_OPTS="--iptables=false --ip-masq=false"


Rebooting - in too much of a hurry to figure out iptables right now

     update:  sudo iptables -F -t nat  -- flushes the nat table
                     sudo iptables -F -t filter  -- flushes the filter table

Then re-start and re-attach the containers in each putty window


nelson@lab1:~$ docker start test1
test1
nelson@lab1:~$ docker attach test1
/ #
/ # ifconfig -a
eth0      Link encap:Ethernet  HWaddr 02:42:AC:10:01:02
          inet addr:172.16.1.2  Bcast:0.0.0.0  Mask:255.255.255.0
          inet6 addr: fe80::42:acff:fe10:102%32734/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:5361 (5.2 KiB)  TX bytes:648 (648.0 B)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1%32734/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

After Docker default change.

nelson@lab1:~$ sudo iptables -L -n
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Ping test1 to test3

/ # ping 172.16.2.2
PING 172.16.2.2 (172.16.2.2): 56 data bytes
64 bytes from 172.16.2.2: seq=0 ttl=63 time=0.163 ms
64 bytes from 172.16.2.2: seq=1 ttl=63 time=0.138 ms
64 bytes from 172.16.2.2: seq=2 ttl=63 time=0.133 ms
^C
--- 172.16.2.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.133/0.144/0.163 ms

Ping test3 to test1

/ # ping 172.16.1.2
PING 172.16.1.2 (172.16.1.2): 56 data bytes
64 bytes from 172.16.1.2: seq=0 ttl=63 time=0.280 ms
64 bytes from 172.16.1.2: seq=1 ttl=63 time=0.126 ms
64 bytes from 172.16.1.2: seq=2 ttl=63 time=0.136 ms
64 bytes from 172.16.1.2: seq=3 ttl=63 time=0.129 ms
64 bytes from 172.16.1.2: seq=4 ttl=63 time=0.139 ms
^C
--- 172.16.1.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.126/0.162/0.280 ms

What you should probably be thinking now: OMG, what have I done!

     Update:  from here, all isolation rules must be made specifically in iptables
                      make sure the FORWARD-DROP rules provide all of the required isolation
                           think direction AND address range

                      this method may be very useful if the network area is behind a sufficient perimeter

                      host routes for specific networks could be applied for connectivity

                      a routing function on the host would be used for communicating with the
                      outside world.  Look at:
                      http://www.admin-magazine.com/Articles/Routing-with-Quagga 
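For example, a minimal sketch of direction-aware isolation using this lab's subnets:

# Replies to established flows are allowed back through
sudo iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# docker1 may initiate connections toward docker2...
sudo iptables -A FORWARD -s 172.16.1.0/24 -d 172.16.2.0/24 -j ACCEPT
# ...but docker2 may not initiate connections toward docker1
sudo iptables -A FORWARD -s 172.16.2.0/24 -d 172.16.1.0/24 -j DROP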


Commented (#-REM) out the statement in the default docker file and rebooted

Once again all is right with the world.


nelson@lab1:~$ sudo iptables -L -n
[sudo] password for nelson:
Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER-ISOLATION  all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
DOCKER     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (3 references)
target     prot opt source               destination

Chain DOCKER-ISOLATION (1 references)
target     prot opt source               destination
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0
RETURN     all  --  0.0.0.0/0            0.0.0.0/0
nelson@lab1:~$