Monday, April 17, 2017

Networking Penalty Box

In networking, we can't always have both price and performance.

     In many cases it is the equivalent of 'having your cake and eating it too.'

This trade-off is governed by the mechanism of network delivery: software or hardware.

In software, the cost of networking is relatively low and favors extremely rapid change.
   
     It is important to remember that software networking is constrained by the software architecture, as well as by the queues and threads that must be processed in concert with the operating system, hypervisor, application, etc.

     All of these contend for CPU time, and executing a software instruction takes a relative amount of time that depends on the CPU architecture.

In hardware, the cost of networking is high and favors rapid packet exchange over the ability to modify the networking function.

     I'm being very generous in this statement; the sole purpose of hardware is to move packets from one spot to another as rapidly as possible.

     Because the majority of the work is done in silicon, the only means to modify the network are to subroutine into software (which undermines the purpose and value of the hardware) OR to replace the silicon (which can take months to years and costs a lot).

Figure 1.  The price vs. performance curve
Utilizing generically programmable x86 delivery mechanisms, it is possible to do many of the things required of the network at a tolerable level, though not a fast or optimized one.

     Host bridges and OVS, for example, are eminently capable of meeting the relative bandwidth and latency requirements of an application within the confines of a hypervisor.  They can be remarkably efficient, at least with respect to the application's requirements.  The moment traffic exits the hypervisor or OS, however, things become considerably more complex, particularly under high virtualization ratios.

Figure 2.  The Network Penalty Box
Network chipset vendors, chipset developers and network infrastructure vendors have maintained the continuing escalation in performance by designing capability into silicon.

All the while, arguably, they have continued to put downward pressure on the cost per bit transferred.

Virtualization vendors, on the other hand, have rapidly introduced network functions to support their use cases.

At issue is the performance penalty for networking on x86 and where that penalty affects network execution.

In general, the performance penalty for executing layer 3 routing in generic x86 instructions vs. silicon is in the neighborhood of 20-25x.

For L2 and L3 (plus encapsulation) networking in x86 instructions vs. silicon, the impact is higher, in the neighborhood of 60-100x.

This adds latency we'd prefer not to have, especially with workload bandwidth shifting heavily in the East-West direction.

Worse, it consumes a portion of the CPU and memory of the host that could otherwise support more workloads.  The consumption is so unwieldy, bursty and application-dependent that its impact becomes difficult to calculate except in extremely narrow timeslices.
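
To put rough numbers on those multipliers, here is a back-of-the-envelope sketch.  The 500 ns per-packet silicon cost is an assumed figure for illustration only; the 20-25x and 60-100x ranges are the ones above.

#include <stdio.h>

/* Back-of-the-envelope illustration of the x86 networking penalty.
 * The 500 ns silicon forwarding cost is an assumption; the 20-25x
 * and 60-100x multipliers come from the estimates in the text. */
int main(void)
{
    const double silicon_ns = 500.0;               /* assumed ns/packet in silicon */
    const double l3_penalty[2]    = {20.0, 25.0};  /* L3 routing in x86 vs. silicon */
    const double encap_penalty[2] = {60.0, 100.0}; /* L2/L3 + encapsulation in x86 */

    printf("Silicon forwarding: %8.1f ns/packet\n", silicon_ns);
    printf("x86 L3 routing:     %8.1f - %9.1f ns/packet\n",
           silicon_ns * l3_penalty[0], silicon_ns * l3_penalty[1]);
    printf("x86 L2/L3 + encap:  %8.1f - %9.1f ns/packet\n",
           silicon_ns * encap_penalty[0], silicon_ns * encap_penalty[1]);

    /* At packet rates in the millions per second, every extra
     * microsecond per packet is a CPU busy full-time on networking
     * instead of workloads. */
    return 0;
}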

Enter virtio/SR-IOV/DPDK

The theory is: take the network instructions that can be optimized and send them to the 'thing' that optimizes them.

Examples include libvirt/virtio, which evolve the para-virtualization of the network interface through driver optimizations that can occur at the rate of change of software.

SR-IOV increases performance by taking a more direct route from the OS or hypervisor to the bus that supports the network interface via an abstraction layer.  This provides a means to offload instructions directly to the network interface for more optimized execution.
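
On Linux, carving a physical NIC into SR-IOV virtual functions is a one-line write to a standard sysfs attribute.  A minimal sketch follows; the interface name "eth0" is an assumption, and a real system needs an SR-IOV capable NIC, a driver that supports it, and root privileges.

#include <stdio.h>
#include <stdlib.h>

/* Minimal sketch: create SR-IOV virtual functions via the standard
 * Linux sysfs attribute.  "eth0" is an assumed interface name. */
int main(void)
{
    const char *path = "/sys/class/net/eth0/device/sriov_numvfs";
    FILE *f = fopen(path, "w");
    if (!f) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    /* Ask the physical function's driver to spawn 4 VFs; each VF can
     * then be handed to a VM directly, bypassing the software switch. */
    fprintf(f, "4\n");
    fclose(f);
    return EXIT_SUCCESS;
}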

DPDK creates a direct-to-hardware abstraction layer that may be called from the OS or hypervisor, similarly offloading instructions for optimized execution in hardware.
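
The heart of the DPDK model is a user-space poll-mode loop that takes the kernel network stack out of the datapath entirely.  Below is a minimal sketch of that loop; error handling is trimmed, and port 0 with single RX/TX queues is an assumption, not a recommendation.

#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_debug.h>

#define BURST_SIZE 32

int main(int argc, char **argv)
{
    /* Initialize the Environment Abstraction Layer (hugepages, cores, PCI). */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    /* Packet buffers live in a preallocated hugepage-backed pool. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create(
        "MBUF_POOL", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    struct rte_eth_conf port_conf = {0};   /* defaults are fine for a sketch */
    rte_eth_dev_configure(0, 1, 1, &port_conf);
    rte_eth_rx_queue_setup(0, 0, 128, rte_eth_dev_socket_id(0), NULL, pool);
    rte_eth_tx_queue_setup(0, 0, 512, rte_eth_dev_socket_id(0), NULL);
    rte_eth_dev_start(0);

    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        /* Busy-poll the NIC: no interrupts, no kernel, no context switch. */
        uint16_t nb_rx = rte_eth_rx_burst(0, 0, bufs, BURST_SIZE);
        if (nb_rx == 0)
            continue;
        /* Echo packets straight back out; a real application would
         * route, filter or encapsulate here. */
        uint16_t nb_tx = rte_eth_tx_burst(0, 0, bufs, nb_rx);
        for (uint16_t i = nb_tx; i < nb_rx; i++)
            rte_pktmbuf_free(bufs[i]);   /* drop anything the TX ring refused */
    }
    return 0;
}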

What makes these particularly useful, from a networking perspective, is that elements or functions normally executed in the OS, hypervisor, switch, router, firewall, encryptor, encoder, decoder, etc., may now be moved into a physical interface for silicon-based execution.

The cost of moving functions to the physical interface can be relatively small compared to putting those functions into a switch or router.  The volumes and rate of change of CPUs, chipsets and network interface cards have historically been higher, making introduction faster.

     Further, vendors of these cards and chipsets have practical reasons to support hardware offloads that favor their product over other vendors' (or at the very least keep it competitive).

This means that network functions are moving closer to the hypervisor.

As the traditional network device vendors of switches, routers, load balancers, VPNs, etc., move to recreate their traditional business as Virtual Network Functions (VNFs), in the form of virtual machines and containers, the abstractions to faster hardware execution will become ever more important.

All of this, to avoid the Networking Penalty Box.

Thursday, April 13, 2017

Codeless programming

Based on the conversation about decoupled hardware and applications, there is a possibility of graphical code integration of a kind so far seen only in specific environments.  See LabVIEW, Yahoo Pipes, etc.

In much the same way that we've created reusability in aspects of code with containers and serverless, there is likely an additional digital shift that can drive programming into the mainstream.
Figure 1.  Flirtation with codeless
Codeless is the further evolution of application chaining, combined with advances in graphically controlled logic, conditional integrations and data mapping.

This becomes an inflection point in programming.  It is an abstraction of traditional programming that brings the capability to program to the masses.

It is graphical, utilizing common drag-and-drop methods with flowchart-like linkage between sources of data and the manipulation of that data.

Codeless doesn't remove coding, much like serverless didn't remove servers.  It doesn't remove the need to create code using traditional methods, but it will be the means for the largely uninitiated to create meaningful output from code and application inputs.
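
As a toy illustration only (the step names and the engine are hypothetical, not how tray.io or zapier are implemented), a "codeless" flow reduces underneath to a chain of pre-built steps wired together by configuration rather than by hand-written code:

#include <stdio.h>
#include <string.h>
#include <ctype.h>

#define MSG_LEN 128

/* Each canvas block is a pre-built step that transforms the message. */
typedef void (*step_fn)(char *msg);

static void fetch(char *msg)  { strncpy(msg, "order #42 received", MSG_LEN - 1); }
static void upcase(char *msg) { for (; *msg; msg++) *msg = (char)toupper((unsigned char)*msg); }
static void notify(char *msg) { printf("notify: %s\n", msg); }

int main(void)
{
    /* The drag-and-drop canvas reduces to an ordered list of steps;
     * "programming" becomes choosing and wiring the steps. */
    step_fn flow[] = { fetch, upcase, notify };
    char msg[MSG_LEN] = {0};

    for (size_t i = 0; i < sizeof(flow) / sizeof(flow[0]); i++)
        flow[i](msg);

    return 0;
}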

If you want to have a glimpse of this possible future, have a look at what tray.io and zapier are doing.  (There are possibly more, this list is not exhaustive.)

Thanks to @swardley, @cpswan and @IanMmmm for bringing this to my attention.

Monday, April 3, 2017

The what all, of Enterprise Cloud adoption, and all

I was asked my thoughts on the rate of Enterprise Public Cloud adoption.

Sensors leading to Public Cloud adoption:

From an enterprise perspective, the volume of servers sold in the previous year is soft. http://www.gartner.com/newsroom/id/3530117

The partnership that can underpin legacy enterprise app deployments in hybrid cloud has been announced.  https://blogs.vmware.com/vsphere/2016/10/vmware-aws-announce-strategic-partnership.html

The drivers for cost containment are going to become increasingly important as customers look for a lower cost of service.  This will probably start looking like “Use (AWS or Azure) for disaster recovery” and will likely evolve into “Application Transformation” discussions about optimization within the cloud.  The best way to see this is with the directionality of utility mapping, like the value chains between enterprise and public cloud: http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html

Microsoft has announced an Azure Stack appliance to extend the reach of capabilities into the private cloud.  http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

The cost per unit of consumption of a private Data Center estate is anywhere between ~40 and 100% more than public cloud; in other words, a unit of capacity costing $100/month in public cloud runs roughly $140-200/month on a private estate.  This position is further eroded by co-location vendors continuing to drive down the cost per unit of data center space vs. new builds.  It is becoming very costly to build or retrofit older data centers rather than simply consuming a service at a cost that can be easily liquidated per month.  The large-scale DC vendors are also creating ecosystem connections for networking directly with the public cloud vendors, and where those don't exist, companies like AT&T are enabling this type of service connection via their MPLS capabilities.

Then there’s the eventual large-scale adoption of containers, which presents additional optimizations, particularly as they relate to DevOps, further increasing density over hypervisor-based virtualization and dramatically increasing the speed of change.  Extending this capability further, the network vendors, historically the last to move in the 3rd platform, are starting to adopt these concepts.
http://www.abusedbits.com/2017/03/creation-and-destruction-without-remorse.html
http://www.investors.com/news/technology/can-cisco-take-aristas-best-customers-with-software-bait/
http://www.crn.com/slide-shows/channel-programs/300084323/arista-networks-exec-on-new-container-software-offensive-and-its-biggest-fundamental-advantage-over-cisco.htm?_lrsc=9ce3424f-25d3-4521-95e4-eeae8e96b525

This culminates in public cloud providers positioning themselves for legacy applications, cost containment, costs based on their operating models, positions in DR if they can’t get production workloads, integration into private cloud where they can’t get principal workloads, and certainly new workloads in cost/volume based on scale.

This leads me to believe that the position on Public Cloud, from an enterprise perspective, is just starting…