Thursday, May 25, 2017

Mapping Tool

Just did my first mapping with the @lefep tool at https://atlas2.wardleymaps.com/ 


Wardley Map
Figure 1.  Wardley Map on the @lefep web tool

My first thought isn't about the tool, which was really easy to learn and use, but about the download format: it outputs .png.

The thought here is that we want to show the map to people, talk about it and make adjustments to it.

  • Showing it in this format: perfect.
  • Talking about the map in this format: great.
  • Making adjustments to it... you have to go back into the web tool.


Maybe something to put into the backlog: a story about users who may want to adjust the map or the formatting to fit a specific need.

Great job @lefep.  What takes me 20 min in Visio took all of 5 min with the tool.  A definite timesaver.

Monday, April 17, 2017

Networking Penalty Box

In networking, we can't always have both price and performance.

     In many cases it is the equivalent of 'having your cake and eating it too.'

This is governed by the mechanism or method of network delivery, software or hardware.

In software, the cost of networking is relatively low and favors extremely rapid change.
   
     It is important to remember that it is constrained by the software architecture as well as the queues and threads that must be processed in concert with the operating system, hypervisor and application, etc.

     All of these contend for time on the CPU, and executing a software instruction takes a certain relative amount of time based on the CPU architecture.

In hardware, the cost of networking is high and favors rapid packet exchange over the ability to modify the networking function.

     I'm being very generous in this statement: the sole purpose of hardware is to move packets from one spot to another as rapidly as possible.

     Because the majority of the work is done in silicon, the only means to modify the network is to drop into software subroutines (which undermines the purpose and value of the hardware) OR to replace the silicon (which can take months to years and costs a great deal).

Price vs Performance
Figure 1.  The price vs performance curve
Utilizing generically programmable x86 delivery mechanisms, it is possible to do many of the things required of the network at a tolerable, though not fast or optimized, level.

     Host bridges and OVS, for example, are eminently capable of meeting the relative bandwidth and latency requirements of an application within the confines of a hypervisor, and can be remarkably efficient with respect to those requirements.  The moment traffic exits the hypervisor or OS, things become considerably more complex, particularly under high virtualization ratios.

The Network Penalty Box
Figure 2.  The Network Penalty Box
Network chipset vendors, chipset developers and network infrastructure vendors have maintained the continuing escalation in performance by designing capability into silicon.

All the while, arguably, continuing to put a downward pressure on the cost per bit transferred.

Virtualization vendors, on the other hand, have rapidly introduced network functions to support their use cases.

At issue is the performance penalty for networking in x86, and where that penalty affects network execution.

In general, there is a performance penalty for executing layer 3 routing in generic x86 instructions vs. silicon, in the neighborhood of 20-25x.

For L2 and L3 (plus encapsulation) networking in x86 instructions vs. silicon, the impact is higher, in the neighborhood of 60-100x.

This adds latency that we'd prefer not to have, especially with workload bandwidth shifting heavily in the East-West direction.

Worse, it consumes a portion of the CPU and memory of the host that could otherwise support more workloads.  The consumption is so unwieldy, bursty and application-dependent that it becomes difficult to calculate the impact except in extremely narrow timeslices.
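To make the penalty concrete, here is a back-of-the-envelope budget in C; the line rate, frame size and clock speed are illustrative assumptions, not measurements:

#include <stdio.h>

int main(void)
{
    /* All figures below are illustrative assumptions, not measurements. */
    double line_rate  = 10e9;            /* 10 GbE                          */
    double frame_bits = (64 + 20) * 8;   /* min frame + preamble/IFG bytes  */
    double cpu_hz     = 3e9;             /* one 3 GHz core                  */

    double pps = line_rate / frame_bits;      /* ~14.88M packets per second */
    double cycles_per_pkt = cpu_hz / pps;     /* ~200 cycles per packet     */

    printf("packets per second: %.2f million\n", pps / 1e6);
    printf("cycles available per packet on one core: %.0f\n", cycles_per_pkt);

    /* Apply the software penalty from above: work that silicon does in
       the equivalent of ~200 cycles needs roughly 20-25x that in x86. */
    printf("cycles needed at a 20x penalty: %.0f\n", cycles_per_pkt * 20);
    return 0;
}

At small frame sizes a single core has on the order of 200 cycles per packet before the penalty is applied; multiply by 20-25x and the arithmetic shows why the consumption is so difficult to plan for.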

Enter virtio/SR-IOV/DPDK

The theory is, take network instructions that can be optimized and send them to the 'thing' that optimizes them.

Examples include libvirt/virtio, which evolve para-virtualization of the network interface through driver optimizations that can occur at the rate of change of software.

SR-IOV increases performance by taking a more direct route from the OS or hypervisor to the bus that supports the network interface via an abstraction layer.  This provides a means to offload instructions directly to the network interface for more optimized execution.
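As a concrete illustration, on Linux the virtual-function carve-out is a single write to sysfs.  A minimal C sketch, assuming an SR-IOV capable NIC exposed as eth0 (the interface name and the VF count of 4 are placeholders):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Standard kernel interface for SR-IOV: writing N to sriov_numvfs
       asks the driver to create N virtual functions, each its own PCIe
       function that can be handed directly to a VM.  Requires root and
       an SR-IOV capable NIC; "eth0" and 4 are illustrative only. */
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (f == NULL) {
        perror(path);
        return EXIT_FAILURE;
    }
    fprintf(f, "4\n");
    fclose(f);
    return EXIT_SUCCESS;
}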

DPDK creates a direct-to-hardware abstraction layer that may be called from the OS or hypervisor, similarly offloading instructions for optimized execution in hardware.
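For flavor, a rough C sketch of the DPDK pattern, patterned on the shape of DPDK's basic forwarding sample; exact setup calls vary by release, and the single port/queue and default device config are assumptions:

#include <stdint.h>
#include <rte_eal.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

#define RING_SIZE  1024
#define NUM_MBUFS  8191
#define BURST_SIZE 32

int main(int argc, char *argv[])
{
    /* EAL init claims hugepages, cores and the PCI devices bound to DPDK. */
    if (rte_eal_init(argc, argv) < 0)
        return 1;

    /* Buffer pool the NIC DMAs packets into, bypassing the kernel stack. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
        NUM_MBUFS, 250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
    if (pool == NULL)
        return 1;

    uint16_t port = 0;                   /* first bound port: an assumption */
    struct rte_eth_conf port_conf = {0}; /* device defaults                 */
    rte_eth_dev_configure(port, 1, 1, &port_conf);
    rte_eth_rx_queue_setup(port, 0, RING_SIZE,
        rte_eth_dev_socket_id(port), NULL, pool);
    rte_eth_tx_queue_setup(port, 0, RING_SIZE,
        rte_eth_dev_socket_id(port), NULL);
    rte_eth_dev_start(port);

    /* Poll-mode loop: no interrupts, no context switches, a burst at a time. */
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        uint16_t nb_tx = rte_eth_tx_burst(port, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);   /* free anything TX didn't take */
    }
}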

What makes these particularly useful, from a networking perspective, is that elements or functions normally executed in the OS, hypervisor, switch, router, firewall, encryptor, encoder, decoder, etc., may now be moved into a physical interface for silicon based execution.

The cost of moving functions to the physical interface can be relatively small compared to putting these functions into a switch or router.  The volumes and rate of change of CPUs, chipsets and network interface cards have historically been higher, making introduction faster.

     Further, vendors of these cards and chipsets have practical reasons to support hardware offloads that favor their product over other vendors' (or at the very least to remain competitive).

This means that network functions are moving closer to the hypervisor.

As the traditional network device vendors of switches, routers, load balancers, VPNs, etc., move to create Virtual Network Functions (VNFs) of their traditional business (in the form of virtual machines and containers) the abstractions to faster hardware execution will become ever more important.

All of this, to avoid the Networking Penalty Box.

Thursday, April 13, 2017

Codeless programming

Based on the conversation about decoupled hardware and applications, there is a possibility of graphical code integration of a kind previously seen only in specific environments.  See LabVIEW, Yahoo Pipes, etc.

In much the same way that we've created reusability in aspects of code with the creation of containers and serverless, there is likely an additional digital shift that can bring programming to the mainstream.
Evolution to codeless
Figure 1.  Flirtation with codeless
Codeless is the further evolution of the composite of application chaining with advances in graphically controlled logic, conditional integrations and data mapping.

This becomes an inflection point in programming.  It is an abstraction of traditional programming that brings the capability to program to the masses.

It is graphical, utilizing common drag-and-drop methods with flowchart-like linkage between sources of data and the manipulation of that data.
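To make that concrete, here is a toy C sketch of what a drag-and-drop flow reduces to: each block on the canvas becomes a step, and the tool's real product is the wiring between them (the step names and data are invented for illustration):

#include <ctype.h>
#include <stdio.h>

/* Each "block" a user drags onto the canvas is one step function. */
typedef void (*step_fn)(char *record);

static void to_upper(char *record) {
    for (char *p = record; *p; p++)
        *p = (char)toupper((unsigned char)*p);
}

static void drop_spaces(char *record) {
    char *dst = record;
    for (char *src = record; *src; src++)
        if (*src != ' ')
            *dst++ = *src;
    *dst = '\0';
}

static void emit(char *record) {
    printf("-> %s\n", record);
}

int main(void) {
    /* The "flowchart": an ordered wiring of blocks.  This array is the
       part a codeless tool lets a user edit graphically. */
    step_fn flow[] = { to_upper, drop_spaces, emit };
    char record[] = "hello codeless world";

    for (size_t i = 0; i < sizeof flow / sizeof flow[0]; i++)
        flow[i](record);
    return 0;
}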

Codeless doesn't remove coding, much like serverless didn't remove servers.  It doesn't remove the need to create code using traditional methods, but it will be the means for the largely uninitiated to create meaningful output from code and application inputs.

If you want a glimpse of this possible future, have a look at what tray.io and zapier are doing.  (There are possibly more; this list is not exhaustive.)

Thanks to @swardley, @cpswan and @IanMmmm for bringing this to my attention.

Monday, April 3, 2017

The what all, of Enterprise Cloud adoption, and all

I was asked for my thoughts on the Enterprise Public Cloud adoption rate.



Sensors leading to Public Cloud adoption:

From an enterprise perspective, the volume of servers sold in the previous year is soft. http://www.gartner.com/newsroom/id/3530117

The partnership that can underpin legacy enterprise app deployments in hybrid cloud has been announced.  https://blogs.vmware.com/vsphere/2016/10/vmware-aws-announce-strategic-partnership.html

The drivers for cost containment are going to become increasingly important as customers look for lower cost of service.  This will probably start looking like “Use (AWS or Azure) for disaster recovery” and likely to evolve into “Application Transformation” discussions as optimizations within the cloud.  The best way to see this is with the directionality of utility mapping like the value chains between enterprise and public cloud: http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html

Microsoft has announced an Azure Stack appliance to extend the reach of capabilities into the private cloud.  http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

The cost per unit of consumption of private Data Center estate is anywhere between ~40 and 100% more than public cloud.  This is further eroded by co-location vendors continuing to drive down the cost/unit of data center capacity vs. new build.  It is becoming very costly to build or retrofit older data centers rather than simply consuming a service at a cost that can be easily liquidated per month.  The large-scale DC vendors are also creating ecosystem connections for networking directly with the public cloud vendors, and where those don't exist, companies like AT&T are enabling this type of service connection via their MPLS capabilities.

Then, there’s the eventual large scale adoption of containers that present some additional optimizations, particularly as they relate to DevOps that further increase density over hypervisor based virtualization and increase dramatically the speed of change.  Further extending this capability, the network vendors, the historic last line to move in 3rd platform are starting to adopt these concepts.
http://www.abusedbits.com/2017/03/creation-and-destruction-without-remorse.html
http://www.investors.com/news/technology/can-cisco-take-aristas-best-customers-with-software-bait/
http://www.crn.com/slide-shows/channel-programs/300084323/arista-networks-exec-on-new-container-software-offensive-and-its-biggest-fundamental-advantage-over-cisco.htm?_lrsc=9ce3424f-25d3-4521-95e4-eeae8e96b525

This culminates in public cloud providers positioning themselves for legacy applications, cost containment, cost based on their operating models, positions in DR if they can't get production workloads, integration into private cloud where they can't get principal workloads, and certainly new workloads at cost/volume based on scale.

This leads me to believe that the position on Public Cloud, from an enterprise perspective, is just starting...

Friday, March 31, 2017

Enterprise WAN is evolving!

Enterprise WAN Reference Architecture
Figure 1.  Enterprise WAN Reference Architecture
Figure 1 represents the high-level Enterprise WAN Reference Architecture that current network capabilities seem to indicate for the support of enterprise services.

The MPLS network will be extended and enhanced utilizing gateway functions like VPN (which we currently use), CSP access that enables direct connectivity via the MPLS network, and SD-WAN that will allow extension of the MPLS network via the Internet to small and medium-size locations (maybe even large locations).

SD-WAN will extend the capability of the MPLS network to locations not natively available from individual carriers.  It avoids the need for carrier-to-carrier NNIs unless absolutely necessary.  The carriage mechanism is tunneling over the Internet, and it can support vendor/protocol-specific optimizations for some quality of service (an abstraction of the underlying IP connectivity).
     Where SD-WAN cannot be on an MPLS gateway, Internet direct to the DC will be able to support this functionality.
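As an illustration of the kind of decision an SD-WAN edge makes when choosing among underlays, here is a toy path-selection sketch in C; the path names, probe numbers and scoring weights are all invented:

#include <stdio.h>

/* Candidate underlays an SD-WAN edge might steer a flow across. */
struct path {
    const char *name;
    double latency_ms;   /* as measured by active probes */
    double loss_pct;
};

/* Toy score: lower is better; the weights are purely illustrative. */
static double score(const struct path *p) {
    return p->latency_ms + p->loss_pct * 50.0;
}

int main(void) {
    struct path paths[] = {
        { "mpls",            18.0, 0.0 },
        { "internet-tunnel", 25.0, 0.1 },
        { "lte-backup",      60.0, 0.5 },
    };
    const struct path *best = &paths[0];
    for (size_t i = 1; i < sizeof paths / sizeof paths[0]; i++)
        if (score(&paths[i]) < score(best))
            best = &paths[i];
    printf("steer flow via %s\n", best->name);
    return 0;
}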

This model also represents the dissection and reduction of networks that must be "carried twice", ingressing and egressing the Data Center perimeter security controls. These controls will eventually be migrated to the Carrier Cloud WAN Services.  They will be provisioned for specificity in the enterprise application usage model or virtualized per application within the workload execution model.
     Traffic destined for CSPs and SaaS can use a more direct path via the Internet if allowed by the Enterprise.

The CSPs connected to the Internet, a CSP gateway to MPLS, and ecosystem networks connected directly to Data Centers will extend the Enterprise Network to support enhanced consumption of services like SaaS and IoT, as well as the various Cloud Service Providers.

Individuals will come in over a variety of connectivity mechanisms including broadband and telco wireless.

Provided the cost structure is competitive, backup paths for many of these networks are likely to shift toward future implementations of Telco 5G.

Thursday, March 16, 2017

Is Agile about brain chemistry

Is there a correlation between brain chemistry and development using the Continuous Integration mechanism of the DevOps model?


This relates to Agile in a couple of interesting ways.  The first is goal-setting, where the Agile development process introduces changes to the Application Lifecycle in easily manageable increments.

     Agile coaches recommend two week cycles, which are easily understood goal timeframes by the Agile team.  Too long and procrastination takes hold.  Too short and the development becomes daunting. 

The second is the relative randomness of action vs. reward.  This has quite a lot to do with human brain chemistry, but essentially means that Agile development corresponds roughly to an illusion of progress, if not outright progress.

     It puts the developer into the position of a firefighter, being allowed a quick win in the form of an application enhancement or fix that corresponds directly to a goal that must pass strenuous test and validation.  

Success, much like a firefighter extinguishing a fire, follows a buildup of doubt, fear and any number of negative emotions, and comes with a release of endorphins.

     The chance of success is interwoven in the complexity of the solution requirements and the random nature of fast fails.

Ultimately, DevOps is about behavior.  If the sense of creation within a relatively small window and the struggle of creation do not tarnish the motivating growth or momentum (that is, do not seem too difficult), the Agile development process is easier to continue than to stop.

     This reinforces the behaviors of Agile methods as well as the ancillary benefits of the entire team participating in the successes.

In this way, it is possible that Agile is a means to change application development in a way that relates more to human brain chemistry than simply developing code.

Saturday, March 11, 2017

What does Watson think about you

http://gwen-systemu.mybluemix.net/ 

Follow the link; free and no registration (though they will get your twitter handle out of the deal).

     Choose Content type Radio Button "Twitter"

     Enter your twitter name.

     Select Search (It'll find your tweets automagically)

     Press Analyze and await the results.

Figure 1.  Watson inputs to Personality Insights

Here's what Watson said about me:  (guess Watson didn't see my cartoon blog)

Your Personality*

You are shrewd and unconventional.

You are philosophical: you are open to and intrigued by new ideas and love to explore them. You are authority-challenging: you prefer to challenge authority and traditional values to help bring about positive changes. And you are solemn: you are generally serious and do not joke much.

Your choices are driven by a desire for discovery.

You are relatively unconcerned with both tradition and taking pleasure in life. You care more about making your own path than following what others have done. And you prefer activities with a purpose greater than just personal enjoyment.

*Compared to most people who participated in our surveys.