Wednesday, April 25, 2018

Getting to the New Problems

At some point in the evolution of a product, a new product will come along and displace it.  Examples are quite prolific in IT, just like what happened with video rental, where the entire market was replaced.

In the Settlers and the New Problems entry, I made what turned out to be an abortive attempt at rationalizing this within the context of the Value Chain.  It should have been done in an Evolution Map, as described in Descriptive Evolution Mapping.

There remains the issue of identifying the new problems to be solved that create the displacement.  It can happen almost anywhere on the continuum of the product.  It's my feeling that it is most identifiable, and most "felt," when the product being displaced is at the assumed stage of "Solutions to Known Problems" or "Refinement of Solutions."  That said, it can happen at any stage.

The following Wardley Map shows the associated change from the perspective of a Value Chain map.  It also shows how the step function used in "Next IT Shift, back to Distributed" to illustrate a possible future is realized.

The example in "Next IT Shift, back to Distributed" implies that Cloud Computing is perpetually in a state of Refinement of Solution, but "Sensors, IoT and Machine Learning" cannot all be run from Cloud Computing.  This is the New Problem: how do you get data to where it needs to be acted on in an enormously distributed system?  Should we even try to move data in this way?
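
One way to make the question concrete: act on the data where it is produced and only move a summary upstream.  The sketch below is purely illustrative; the edge node, window size and summary fields are my own assumptions, not anything from the map.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical edge-side aggregator: act on raw readings locally and
     * forward only a compact summary upstream, instead of shipping every
     * sample to a central cloud. */
    struct edge_window {
        double sum;
        double max;
        int    count;
    };

    static void edge_ingest(struct edge_window *w, double sample)
    {
        w->sum += sample;
        if (w->count == 0 || sample > w->max)
            w->max = sample;
        w->count++;
    }

    /* Ship a summary every 1000 samples (an arbitrary illustrative window). */
    static bool edge_flush(struct edge_window *w, double *avg, double *max)
    {
        if (w->count < 1000)
            return false;
        *avg = w->sum / w->count;
        *max = w->max;
        w->sum = 0.0; w->max = 0.0; w->count = 0;
        return true;
    }

    int main(void)
    {
        struct edge_window w = {0};
        for (int i = 0; i < 5000; i++) {
            edge_ingest(&w, (double)(i % 100));  /* stand-in for a sensor read */
            double avg, max;
            if (edge_flush(&w, &avg, &max))
                printf("send upstream: avg=%.1f max=%.1f\n", avg, max);
        }
        return 0;
    }

However it is answered, the trade is the same: either the data moves to the compute, or enough compute moves to the data that only the valuable part of it has to travel.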

Tuesday, April 17, 2018

Descriptive Evolution Mapping

Thanks to @swardley and the community for steering me to a better definition of the Wardley Maps I was attempting to produce.

Wardley Maps - Evolution Map

The graphic above introduces the product view of each phase of Design.

As a starting point, it is the output of the workload or application that has value.

The applications, and indeed the early computers, were Designed for Purpose.  They were also Custom built.  This is illustrated by the simple fact that repurposing an early computer could mean changing the equipment in some fairly fundamental ways just to run a different program.

     Evolution of computing led to computers that were capable of multi-purpose computing.  The applications were no longer specifically tied to the hardware they operated on (I know there's an argument brewing here, but it's the generalization I'm illustrating).

This provided the means for individuals and companies to run a multitude of programs on computing equipment.  It was a generational improvement in application development and led to an entire industry of boxed software delivery.

     Boxed software products were developed within the framework of a particular operating system, with guidelines about what commands were processed and how.  That is a manufacturing guideline, so we can identify the boxed software industry as Design for Manufacture, with each "box" containing a Product on some type of media.

     Interestingly, the Product area is not necessarily the valuable output of the workload or application, but a means integrated to produce the valuable output.  This means that, while boxed software may very well be a Product, the overall application may still be Custom built, retroactively putting the entirety of that application back into Design for Purpose.

     Another interesting tidbit: the tight control over boxed software led to the creation of competitors and the OpenSource industry.  This happened because the design of the overall application demanded variation that the boxed software vendors were either unwilling or unable to produce easily, or could only produce at great licensing expense.

We reach another evolutionary state with application outputs that are arguably extremely easy to consume.  Built specifically with the consumer in mind, they create a service category (Utility Service) built upon commodity components and are more easily offered in a volume consumption model of cost/volume-time.

     Utility specifically targets ease of consumption, foremost the usage mechanisms or operation.  It therefore represents a Design for Operations method of delivery, constantly being updated and firmly rooted in the idea that the easier it is to use, the more difficult it will be to replace.

     An interesting consequence of Design for Operations is that competitive pressures in Utility drive new business practices like Business Agility/Agile DevOps.

What's next, you might ask?  Google "serverless", then Google "codeless".


Friday, April 6, 2018

Settlers and the New Problems

There are some fundamental truths that drive technology forward.  This list isn't meant to be entirely inclusive, but they certainly include:

     Technology evolves on top of other technology advancements
     Technology needs to be simplified or abstracted away from complexity to advance

This leads to interesting side cases where interdependent technologies must co-evolve with a relatively uncertain future state to continue moving forward.  Part of the solution is an evolution on top of the old technology, and part of the solution is an abstraction to solve a complexity problem.

There are also characteristics of technology that converge as part of its industrialization.  Chris Swan (@cpswan) speaks very elegantly on this in the move to technology that is designed for operations, rather than designed for manufacture or designed for purpose.

Industrialization of this sort is a stepping stone, much like the technology changes that move the state of the art from Town Planners back to Genesis and Design for Purpose.

Settlers and the New Problem
Here's the diagram I'm using to try to illustrate this effect.  It is loosely based on the Wardley Map of Settlers and an evolution step function that shows the evolution toward the "New Problems."

Looking forward to your feedback!

Update Apr 9: 


Friday, December 15, 2017

Enterprise Virtualization vs Public Cloud 2018 (Mapping, Wardley)

Enterprise virtualization prediction for 2018, tl;dr - no drastic changes from 2017.

There are some interesting possibilities, though, and I've used Simon Wardley mapping to diagram them.

Enterprise Virtualization 2018 prediction vs Public Cloud
As shown in the map on the left, we can now argue that the consumption of virtual machines has trended all the way into commodity consumption (with standardized tooling, automation and operating environment).  If it hasn't in your company, you may want to start asking why.

One of the more interesting possibilities for 2018, if the equipment vendors do this correctly, is composable infrastructure. This could completely displace traditional compute and push it into commodity consumption.  I'm going to leave it as a dotted line in the figure for now, as the business impact of technical accounting for corporations might make this a non-starter. That said, I have to imagine that a true utility in any premises would be good for the industry and the consumers.

In the public cloud map on the right, we may need to incorporate some changes based on enterprise use of public cloud, to capture the difference between “cloud native” capability and enterprise hosting requirements.

Cloud native capability is the service consumption of the public cloud that relies only on the tools and capabilities built into the public cloud platform element.  Using it for new application development, including things like serverless application design, is growing as AWS and Azure partners and consumers learn to take advantage of those cloud native features.

     Cloud Native platforms are not particularly well placed for traditional enterprise workloads, though, which often require more specific (as well as traditional) care and feeding.  Furthermore, refactoring enterprise applications to take advantage of Cloud Native features may not be a worthwhile endeavor considering the cost of transforming those applications.  The general thought is to give them a safe place to run until they are no longer needed.

The enterprise hosting data center exodus of 2017 highlights some of the reasons workloads will move out of the data center.  It may not be obvious, but the unifying element of both diagrams is how Hybrid Computing will be handled between enterprise virtualization and public cloud.  This integration still looks very much like early product (see diagram above).

One of the possible next steps is being taken by Microsoft Azure and AWS / VMware, who have announced methods to move non-cloud-native workloads to their IaaS: Microsoft Azure Stack and VMware Cloud on AWS.  Over time, both of these services should drive workloads more than "smooth out the peaks".  This is a major movement in what I’d titled my prediction last year, and why I say “Public Cloud Continues to Win Workload Love.”

If you've followed the mapping threads on my blog, here are the historical links on these predictions:


And if you want to know more about my Wardley mapping misadventures, follow this link.

updated 19 Dec 2017

Saturday, August 26, 2017

It's about the data

A friend of mine recently asked me what my thoughts were around connected home/small business.  He's kindly agreed to my sharing the response.
...

In my humble opinion, it's all just verticals until something comes along to unify things and make life simpler.

It is about the Platform today.  In the future it will be about how we act on the data we have.  This will most likely shift to AI in some IoT fashion.

At the moment, I think we're approaching a Wardley WAR stage, with quite a lot of the capabilities moving from custom into product. http://blog.gardeviance.org/2015/02/on-evolution-disruption-and-pace-of.html

It's the pre-industrialization capabilities that prove themselves useful and offset a cost, a time pressure or a labor effort that typically seem to win.

It'll almost always be about situational awareness though. A person will eventually see the thing that stands to make a serious and significant change, that takes hold of one of the ecosystems and gives it a big push.

That's almost always an abstraction of the capability. I took a wild swing at one of the computing areas as an example: the abstraction of programming I called "codeless programming", which uses visual context rather than programming languages to build applications. It's the next step past "serverless", where inputs are tied to outputs by people who aren't necessarily programming any of the intermediary application chains. http://www.abusedbits.com/2017/04/codeless-programming.html

Zuckerberg is running an experiment along these lines with his integration of capabilities using an AI.

https://www.youtube.com/watch?v=vvimBPJ3XGQ 

Thing is, getting everything to work together is ridiculously complex. Either commonality in method for integration or something that can manage the complexity is required before this is anything other than a really expensive and time consuming hobby. And then there's the data capture....

My brother has a sensor network that he literally built into his house to record a variety of data. The first dimension value of that data was to have it and be able to view it. The second dimension value of that data is to macroscopically act on it, like integrated thermostat control based on conditions and presets. The third dimension value would be to have similar information from 1500 houses and use it to design houses better, as an example.

In each one of the industries you are thinking of, the third dimension values far outweigh the two below it, but getting the data AND being able to act on it is ... difficult. The connected home products are about the only way to get at that data.

Thursday, May 25, 2017

Mapping Tool

Just did my first mapping with the @lefep tool at https://atlas2.wardleymaps.com/ 


Figure 1.  WardleyMap on @lefep web tool

My first thought isn't about the tool, which was really easy to learn and use, but about the download formatting.  It outputs .png.

The thought here is we should want to show it to people, talk about the map and make adjustments to it.

  • Showing it in this format, perfect.
  • Talking about the map in this format, great.
  • Making adjustments to it... have to do it in the webtool.


Maybe something to put into the backlog: a story about users who may want to adjust the map or the formatting to fit a specific need.

Great job @lefep.  What takes me 20 min in Visio took me all of 5 min on the tool.  A certain timesaver.

Monday, April 17, 2017

Networking Penalty Box

In networking, we can't always have both price and performance.

     In many cases it is the equivalent of 'having your cake and eating it too.'

This is governed by the mechanism or method of network delivery, software or hardware.

In software, the cost of networking is relatively low and favors extremely rapid change.
   
     It is important to remember that it is constrained by the software architecture as well as the queues and threads that must be processed in concert with the operating system, hypervisor and application, etc.

     All of these are contending for time with the CPU, and executing a software instruction takes a certain relative amount of time based on the CPU architecture.

In hardware, the cost of networking is high and favors rapid packet exchange over the ability to modify the networking function.

     I'm being very generous in this statement: the sole purpose of the hardware is to move packets from one spot to another as rapidly as possible.

     Because the majority of the work is done in silicon, the only means to modify the network is to subroutine into software (which undermines the purpose and value of the hardware) OR to replace the silicon (which can take months to years and costs a lot).

Figure 1.  The price vs performance curve
Utilizing generically programmable x86 delivery mechanisms, it is possible to do many of the things required of the network in a way that is tolerable, but not fast or optimized.

     Host bridges and OVS, for example, are eminently capable of meeting the relative bandwidth and latency requirements of an application within the confines of a hypervisor.  They can be remarkably efficient, at least with respect to the application's requirements.  The moment traffic exits the hypervisor or OS, though, things become considerably more complex, particularly under high virtualization ratios.

Figure 2.  The Network Penalty Box
Network chipset developers and network infrastructure vendors have maintained the continuing escalation in performance by designing capability into silicon.

All the while, arguably, continuing to put a downward pressure on the cost per bit transferred.

Virtualization vendors, on the other hand, have rapidly introduced network functions to support their use cases.

At issue is the performance penalty for networking in x86 and where that performance penalty affects the network execution.

In general, there is a performance penalty for executing layer 3 routing using generic x86 instructions vs silicon in the neighborhood of 20-25x.

For L2 and L3 (plus encapsulation) networking in x86 instructions vs silicon, the impact imposed is higher, in the neighborhood of 60-100x.

This adds latency to a system we'd prefer not to have, especially with workload bandwidth shifting heavily in the East-West direction.

Worse, it consumes a portion of the CPU and memory of the host that could be used to support more workloads.  The consumption is so unwieldy, bursty and application dependent that it becomes difficult to calculate the impact except in extremely narrow timeslices.
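
To put rough numbers on that penalty, here's a back-of-the-envelope sketch in C.  The 20-25x and 60-100x multipliers are the estimates above; the 50 ns per-packet silicon baseline and the 10GbE 64-byte-frame math are illustrative assumptions, not measurements.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical baseline: per-packet forwarding time in switch/router
         * silicon.  Chosen for illustration only. */
        const double silicon_ns_per_pkt = 50.0;

        /* Penalty multipliers from the estimates above. */
        const double l3_penalty    = 25.0;   /* L3 routing in x86 vs silicon */
        const double encap_penalty = 100.0;  /* L2/L3 + encapsulation in x86 */

        /* 10GbE line rate at 64-byte frames: each frame occupies 64 bytes
         * plus 20 bytes of preamble and inter-frame gap on the wire. */
        const double line_rate_pps = 10e9 / ((64.0 + 20.0) * 8.0);

        /* Effective packet rates under the assumed baseline and penalties. */
        const double x86_l3_pps    = 1e9 / (silicon_ns_per_pkt * l3_penalty);
        const double x86_encap_pps = 1e9 / (silicon_ns_per_pkt * encap_penalty);

        printf("10GbE line rate:          %.2f Mpps\n", line_rate_pps / 1e6);
        printf("x86 L3 forwarding (est):  %.2f Mpps\n", x86_l3_pps / 1e6);
        printf("x86 L2/L3+encap (est):    %.2f Mpps\n", x86_encap_pps / 1e6);
        return 0;
    }

Even with a generous baseline, software forwarding lands well short of small-packet line rate, which is exactly the gap the offload techniques below try to close.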

Enter virtio/SR-IOV/DPDK

The theory is, take network instructions that can be optimized and send them to the 'thing' that optimizes them.

Examples include libvirt/virtio, which evolve the para-virtualization of the network interface through driver optimization that can occur at the rate of change of software.

SR-IOV increases performance by taking a more direct route from the OS or hypervisor to the bus that supports the network interface via an abstraction layer.  This provides a means for the direct offload of instructions to the network interface to provide more optimized execution.

DPDK creates a direct-to-hardware abstraction layer that may be called from the OS or hypervisor, similarly offloading instructions for optimized execution in hardware.

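As a concrete (and heavily simplified) sketch of what the DPDK model looks like from the application side, here is a minimal poll-mode receive/transmit loop.  It assumes a single NIC port already bound to a DPDK-compatible driver; the queue, pool and burst sizes are arbitrary illustrative values, and error handling is mostly omitted.

    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_RING_SIZE 1024
    #define TX_RING_SIZE 1024
    #define NUM_MBUFS    8191
    #define MBUF_CACHE   250
    #define BURST_SIZE   32

    int main(int argc, char **argv)
    {
        /* Initialize the Environment Abstraction Layer; DPDK takes direct
         * ownership of the NIC, bypassing the kernel network stack. */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

        uint16_t port_id = 0;  /* assumes one DPDK-bound port */

        /* Pool of packet buffers (mbufs) shared by the RX and TX paths. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
            NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

        /* One RX queue and one TX queue with default device configuration. */
        struct rte_eth_conf port_conf = {0};
        rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE,
                               rte_eth_dev_socket_id(port_id), NULL, pool);
        rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE,
                               rte_eth_dev_socket_id(port_id), NULL);
        rte_eth_dev_start(port_id);

        /* Poll-mode loop: bursts of packets move between the NIC and user
         * space without interrupts or kernel involvement. */
        for (;;) {
            struct rte_mbuf *bufs[BURST_SIZE];
            uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* ... inspect or rewrite packets here (the real network work) ... */

            uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);
            for (uint16_t i = nb_tx; i < nb_rx; i++)
                rte_pktmbuf_free(bufs[i]);  /* drop what the NIC didn't take */
        }
        return 0;
    }

The point of the sketch is that the packet path never crosses the kernel, and offload-capable NICs can take on pieces of that loop (checksums, encapsulation, filtering) entirely in silicon.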
What makes these particularly useful, from a networking perspective, is that elements or functions normally executed in the OS, hypervisor, switch, router, firewall, encryptor, encoder, decoder, etc., may now be moved into a physical interface for silicon-based execution.

The cost of moving functions to the physical interface can be relatively small compared to putting those functions into a switch or router.  The volumes and rate of change of a CPU, chipset or network interface card have historically been higher, making introduction faster.

     Further, vendors of these cards and chipsets have practical reasons to support hardware offloads that favor their product over other vendors' (or at the very least keep it competitive).

This means that network functions are moving closer to the hypervisor.

As the traditional network device vendors of switches, routers, load balancers, VPNs, etc., move to create Virtual Network Functions (VNFs) of their traditional business (in the form of virtual machines and containers), the abstractions to faster hardware execution will become ever more important.

This all, to avoid the Networking Penalty Box.