Friday, December 15, 2017

Enterprise Virtualization vs Public Cloud 2018 (Mapping, Wardley)

Enterprise virtualization prediction for 2018, tl;dr: no drastic changes from 2017.

There are some interesting possibilities, though, and I've used Wardley mapping to diagram them.

Enterprise Virtualization 2018 prediction vs Public Cloud (Wardley maps)
As shown in the map on the left, we can now argue that the consumption of virtual machines has trended all the way into commodity consumption (with standardized tooling, automation and operating environment).  If it hasn't in your company, you may want to start asking why.

One of the more interesting possibilities for 2018, if the equipment vendors do this correctly, is composable infrastructure. This could completely displace traditional compute and push it into commodity consumption.  I'm going to leave it as a dotted line in the figure for now, as the business impact of technical accounting for corporations might make this a non-starter. That said, I have to imagine that a true utility in any premises would be good for the industry and the consumers.

In the public cloud map on the right, we may need to incorporate some changes based on enterprise use of public cloud, distinguishing between "cloud native" capability and enterprise hosting requirements.

Cloud native capability is the service consumption of the public cloud that relies only on the tools and capabilities built into the public cloud platform element.  Using it for new application development, including things like serverless application design, is growing as AWS and Azure partners and consumers learn to take advantage of those cloud native features.
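
To make "cloud native" concrete, here's a minimal sketch of a serverless function in the AWS Lambda style; the event shape and the business logic are hypothetical, the point being that the platform supplies the runtime and scaling rather than a VM you manage.

    import json

    # Minimal sketch of "cloud native" serverless consumption (Lambda-style
    # handler signature). The event fields and downstream use are hypothetical;
    # the platform, not a managed VM, provides the operating environment.
    def handler(event, context):
        # The platform invokes this on demand; nothing to patch or scale.
        order = json.loads(event.get("body", "{}"))
        total = sum(item["qty"] * item["price"] for item in order.get("items", []))
        return {
            "statusCode": 200,
            "body": json.dumps({"orderTotal": total}),
        }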

     Cloud Native platforms are not particularly well placed for traditional enterprise workloads, though, which often require more specific (as well as traditional) care and feeding.  Furthermore, refactoring enterprise applications to take advantage of Cloud Native features may not be a worthwhile endeavor considering the cost of transforming those applications.  The general thought is to give them a safe place to run until they are no longer needed.

The enterprise hosting data center exodus from 2017 provides some of the highlights of why workloads will move out of the data center.  It may not be obvious, but the unifying element of both of the diagrams is how Hybrid Computing will be handled between enterprise virtualization and public cloud.  This integration still looks very much like early product (see diagram above).

One of the possible next steps is being taken by both Microsoft Azure and AWS / VMware, which have announced methods to move non-cloud-native workloads to their IaaS: Microsoft Azure Stack and VMware Cloud on AWS.  Over time, both of these services should drive workloads more than "smooth out the peaks".  This is a major movement toward what I titled my prediction last year, and why I say "Public Cloud Continues to Win Workload Love."

If you've followed the mapping threads on my blog, here are the historical links on these predictions:


And if you want to know more about my Wardley mapping misadventures, follow this link.

updated 19 Dec 2017

Saturday, August 26, 2017

It's about the data

A friend of mine recently asked me what my thoughts were around the connected home/small business.  He's kindly agreed to my sharing the response.
...

In my humble opinion, it's all just verticals until something comes along to unify things and make life simpler.

It is about the Platform today.  In the future it will be about how we act on the data we have.  This will most likely shift to AI in some IoT fashion.

At the moment, I think we're approaching a Wardley WAR stage, with quite a lot of the capabilities moving from custom into product. http://blog.gardeviance.org/2015/02/on-evolution-disruption-and-pace-of.html

It's the pre-industrialization of capabilities that prove themselves useful and offset either a cost, a time pressure or a labor effort that typically wins.

It'll almost always be about situational awareness though. A person will eventually see the thing that stands to make a serious and significant change that takes hold of one of the ecosystems and gives it a big push.

That's almost always an abstraction of the capability. I took a wild swing at one of the computing areas as an example: an abstraction of programming I called "codeless programming" that uses visual context rather than programming languages to build applications. It's the next step past "serverless", where inputs are tied to outputs by people not necessarily programming any of the intermediary application chains. http://www.abusedbits.com/2017/04/codeless-programming.html

Zuckerberg is running an experiment along these lines with his integration of capabilities using an AI.

https://www.youtube.com/watch?v=vvimBPJ3XGQ 

Thing is, getting everything to work together is ridiculously complex. Either commonality in method for integration or something that can manage the complexity is required before this is anything other than a really expensive and time consuming hobby. And then there's the data capture....

My brother has a sensor network that he literally built into his house to record a variety of data. The first dimension value of that data was to have it and be able to view it. The second dimension value of that data is to macroscopically act on it, like integrated thermostat control based on conditions and presets. The third dimension value would be to have similar information from 1500 houses and use it to design houses better, as an example.
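
As a toy illustration of that second dimension, here's a sketch of acting on the recorded data with presets rather than just viewing it; the sensor values, presets and thresholds are made up.

    # Hypothetical sketch of "second dimension" value: acting macroscopically on
    # sensor data with presets instead of only viewing it.
    PRESETS = {"away": 16.0, "home": 21.0, "sleep": 18.5}

    def thermostat_action(mode, readings):
        """Return a heating command based on the preset and recent temperatures."""
        target = PRESETS[mode]
        current = sum(readings) / len(readings)   # smooth out sensor noise
        if current < target - 0.5:
            return "heat_on"
        if current > target + 0.5:
            return "heat_off"
        return "hold"

    print(thermostat_action("home", [19.8, 20.1, 20.0]))  # -> heat_on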

In each one of the industries you are thinking of, the third dimension value far outweighs the two below it, but getting the data AND being able to act on it is ... difficult. The connected home products are about the only way to get at that data.

Thursday, May 25, 2017

Mapping Tool

Just did my first mapping with the @lefep tool at https://atlas2.wardleymaps.com/ 


Figure 1.  Wardley Map on the @lefep web tool

My first thought isn't about the tool, which was really easy to learn and use, but about the download format: it outputs .png.

The thought here is we should want to show it to people, talk about the map and make adjustments to it.

  • Showing it in this format, perfect.
  • Talking about the map in this format, great.
  • Making adjustments to it... have to do it in the webtool.


Maybe something to put into the backlog: a story about users who may want to adjust the map or the formatting to fit a specific need.

Great job @lefep.  What takes me 20 min in Visio took me all of 5 min on the tool.  A certain timesaver.

Monday, April 17, 2017

Networking Penalty Box

In networking, we can't always have both price and performance.

     In many cases it is the equivalent of 'having your cake and eating it too.'

This is governed by the mechanism or method of network delivery, software or hardware.

In software, the cost of networking is relatively low and favors extremely rapid change.
   
     It is important to remember that it is constrained by the software architecture as well as the queues and threads that must be processed in concert with the operating system, hypervisor and application, etc.

     All of these are contending for time with the CPU and executing a software instruction takes a certain relative amount of time based on the CPU architecture.

In hardware, the cost of networking is high and favors rapid packet exchange over the ability to modify the networking function.

     I'm being very generous in this statement: the sole purpose of hardware is to move the packet from one spot to another, as rapidly as possible.

     Because the majority of the work is done in silicon, the only means to modify the network is to subroutine into software (which undermines the purpose and value of the hardware) OR to replace the silicon (which can take months to years and costs a lot).

Figure 1.  The price vs performance curve
Utilizing generically programmable x86 delivery mechanisms, it is possible to do many of the things required of the network in a way that is tolerable, but not fast or optimized.

     Host bridges and OVS, for example, are eminently capable at the relative bandwidth and latency requirements of an application within the confines of a hypervisor.  They can be remarkably efficient, at least with respect to the application requirements.  The moment the traffic exits a hypervisor or OS, it becomes considerably more complex, particularly under high virtualization ratios.

Figure 2.  The Network Penalty Box
Network chipset developers and network infrastructure vendors have maintained the continuing escalation in performance by designing capability into silicon.

All the while, arguably, continuing to put a downward pressure on the cost per bit transferred.

Virtualization vendors, on the other hand, have rapidly introduced network functions to support their use cases.

At issue is the performance penalty for networking in x86 and where that performance penalty affects the network execution.

In general, there is a performance penalty for executing layer 3 routing using generic x86 instructions vs silicon in the neighborhood of 20-25x.

For L2 and L3 (plus encapsulation) networking in x86 instruction vs silicon, the impact imposed is higher, in the neighborhood of 60-100x.

This adds latency that we'd prefer not to have, especially with workload bandwidth shifting heavily in the East-West direction.

Worse, it consumes a portion of the CPU and memory of the host that could be used to support more workloads.  The consumption is so unwieldy, bursty and application dependent that it becomes difficult to calculate the impact except in extremely narrow timeslices.
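
A rough, back-of-envelope sketch of that impact, using the 20-25x and 60-100x penalties quoted above; the silicon per-packet cost and the packet rate are assumptions purely for illustration.

    # Back-of-envelope sketch of the x86 networking penalty described above.
    # The silicon per-packet cost is an assumed figure; the 25x and 100x
    # multipliers come from the penalties quoted in the post.
    SILICON_NS_PER_PACKET = 50          # assumed hardware forwarding cost (ns)
    PACKETS_PER_SECOND = 1_000_000      # a modest 1 Mpps East-West flow

    for label, penalty in [("L3 routing in x86", 25), ("L2/L3 + encap in x86", 100)]:
        ns_per_packet = SILICON_NS_PER_PACKET * penalty
        cpu_seconds = ns_per_packet * PACKETS_PER_SECOND / 1e9
        print(f"{label}: {ns_per_packet} ns/packet, "
              f"~{cpu_seconds:.2f} CPU-seconds consumed per wall-clock second")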

Enter virtio/SR-IOV/DPDK

The theory is, take network instructions that can be optimized and send them to the 'thing' that optimizes them.

Examples include libvirt/virtio, which evolve the para-virtualization of the network interface through driver optimization that can occur at the rate of change of software.

SR-IOV increases performance by taking a more direct route from the OS or hypervisor to the bus that supports the network interface via an abstraction layer.  This provides a means for the direct offload of instructions to the network interface to provide more optimized execution.
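
On a Linux host, SR-IOV capability is exposed through sysfs, so a quick check of what a NIC offers looks something like the sketch below; the interface name is a placeholder and the paths assume a standard sysfs layout.

    from pathlib import Path

    # Sketch: read the total and currently configured SR-IOV virtual functions
    # for an interface from sysfs. "eth0" is a placeholder interface name.
    def sriov_info(ifname="eth0"):
        dev = Path("/sys/class/net") / ifname / "device"
        def read_int(name):
            p = dev / name
            return int(p.read_text().strip()) if p.exists() else None
        return {
            "totalvfs": read_int("sriov_totalvfs"),   # VFs the NIC can expose
            "numvfs": read_int("sriov_numvfs"),       # VFs currently configured
        }

    print(sriov_info("eth0"))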

DPDK creates a direct-to-hardware abstraction layer that may be called from the OS or hypervisor, similarly offloading instructions for optimized execution in hardware.

What makes these particularly useful, from a networking perspective, is that elements or functions normally executed in the OS, hypervisor, switch, router, firewall, encryptor, encoder, decoder, etc., may now be moved into a physical interface for silicon based execution.

The cost of moving functions to the physical interface can be relatively small compared to putting these functions into a switch or router.  The volumes and rate of change of a CPU, chipset or network interface card have historically been higher, making introduction faster.

     Further, vendors of these cards and chipsets have practical reasons to support hardware offloads that favor their product over other vendors (or at the very least to remain competitive).

This means that network functions are moving closer to the hypervisor.

As the traditional network device vendors of switches, routers, load balancers, VPNs, etc., move to create Virtual Network Functions (VNFs) of their traditional business (in the form of virtual machines and containers) the abstractions to faster hardware execution will become ever more important.

This all, to avoid the Networking Penalty Box.

Thursday, April 13, 2017

Codeless programming

Based on the conversation about decoupled hardware and applications, there is a possibility of graphical code integration of a kind so far seen only in specific environments.  See LabVIEW, Yahoo Pipes, etc.

In much the same way that we've created reusability in aspects of code in the creation of containers and serverless, there is likely an additional digital shift that can drive the mainstream of programming.

Figure 1.  Flirtation with codeless
Codeless is the further evolution of a composite of application chaining with advancements in graphically controlled logic, conditional integrations and data mapping.

This becomes an inflection point in programming.  It is an abstraction of traditional programming that brings the capability to program to the masses.

It is graphical, utilizing common drag-and-drop methods with flowchart like linkage between sources of data and the manipulation of that data.

Codeless doesn't remove coding, much like serverless didn't remove servers.  It doesn't remove the need to create code using traditional methods, but it will be the means for the largely uninitiated to create meaningful output from code and application inputs.
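
As a hedged sketch of the idea, a "codeless" flow could be stored as data describing nodes and links, with an engine walking the chain; the node names and operations below are entirely hypothetical.

    # Toy sketch of the "codeless" idea: a flow built graphically is stored as
    # data and executed by an engine, with no intermediary code written by the
    # user. Node names and operations are hypothetical.
    FLOW = [
        {"step": "source",    "op": "fetch_orders"},
        {"step": "transform", "op": "total_orders"},
        {"step": "sink",      "op": "post_to_sheet"},
    ]

    OPS = {
        "fetch_orders": lambda _: [{"customer": "a", "amount": 10},
                                   {"customer": "a", "amount": 5}],
        "total_orders": lambda rows: sum(r["amount"] for r in rows),
        "post_to_sheet": lambda total: print("would POST total:", total),
    }

    def run(flow, data=None):
        # The "engine" walks the visually defined chain, passing outputs to inputs.
        for node in flow:
            data = OPS[node["op"]](data)
        return data

    run(FLOW)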

If you want to have a glimpse of this possible future, have a look at what tray.io and zapier are doing.  (There are possibly more, this list is not exhaustive.)

Thanks to @swardley, @cpswan and @IanMmmm for bringing this to my attention. #mapping

Monday, April 3, 2017

The what all, of Enterprise Cloud adoption, and all

Was asked my thoughts on Enterprise Public Cloud adoption rate.



Sensors leading to Public Cloud adoption:

From an enterprise perspective, the volume of servers sold in the previous year is soft. http://www.gartner.com/newsroom/id/3530117

The partnership that can underpin legacy enterprise app deployments in hybrid cloud has been announced.  https://blogs.vmware.com/vsphere/2016/10/vmware-aws-announce-strategic-partnership.html

The drivers for cost containment are going to become increasingly important as customers look for lower cost of service.  This will probably start looking like "Use (AWS or Azure) for disaster recovery" and is likely to evolve into "Application Transformation" discussions as optimizations within the cloud.  The best way to see this is with the directionality of utility mapping, like the value chains between enterprise and public cloud: http://www.abusedbits.com/2016/11/why-public-cloud-wins-workload-love.html

Microsoft has announced an Azure Stack appliance to extend the reach of capabilities into the private cloud.  http://www.zdnet.com/article/microsoft-to-release-azure-stack-as-an-appliance-in-mid-2017/

The cost / unit consumption of private Data Center estate is anywhere between ~40 and 100% more than public cloud.  This is further being eroded by co-location vendors continuing to drive down the cost/unit for data center vs new build.  It is becoming very costly to build or retrofit older data centers rather than simply consuming a service at cost that can be easily liquidated per month.  The large scale DC vendors are also creating ecosystem connections for networking directly with the public cloud vendors and where those don’t exist, companies like AT&T, etc are enabling this type of service connection via their MPLS capabilities.

Then, there’s the eventual large scale adoption of containers that present some additional optimizations, particularly as they relate to DevOps that further increase density over hypervisor based virtualization and increase dramatically the speed of change.  Further extending this capability, the network vendors, the historic last line to move in 3rd platform are starting to adopt these concepts.
http://www.abusedbits.com/2017/03/creation-and-destruction-without-remorse.html
http://www.investors.com/news/technology/can-cisco-take-aristas-best-customers-with-software-bait/
http://www.crn.com/slide-shows/channel-programs/300084323/arista-networks-exec-on-new-container-software-offensive-and-its-biggest-fundamental-advantage-over-cisco.htm?_lrsc=9ce3424f-25d3-4521-95e4-eeae8e96b525

This culminates in public cloud providers positioning themselves for legacy applications, cost containment, cost based on their operating models, positions in DR if they can't get production workloads, integration into private cloud where they can't get principal workloads and certainly new workloads in cost/volume based on scale.

This leads me to believe that the position on Public Cloud, from an enterprise perspective, is just starting…..

Friday, March 31, 2017

Enterprise WAN is evolving!

Figure 1.  Enterprise WAN Reference Architecture
Figure 1 represents the high-level Enterprise WAN Reference Architecture that current network capabilities seem to be pointing toward for the support of enterprise services.

The MPLS network will be extended and enhanced utilizing gateway functions like VPN (which we currently do), CSP access that enables direct connectivity via the MPLS network and SD-WAN that will allow the extension of the MPLS via the Internet to small and medium size locations (maybe even large locations).

SD-WAN will extend the capability of MPLS network to locations not natively available with individual carriers.  It avoids the need to NNI carriers unless it is absolutely necessary. The carriage mechanism is tunneling over the internet and can support vendor/protocol specific optimizations for some quality of service (an abstraction of the underlying IP connectivity).
     Where SD-WAN cannot be on an MPLS gateway, the internet direct to DC will be able to support this functionality.

This model also represents the dissection and reduction of networks that must be "carried twice", ingressing and egressing the Data Center perimeter security controls. These controls will eventually be migrated to the Carrier Cloud WAN Services.  They will be provisioned for specificity in the enterprise application usage model or virtualized per application within the workload execution model.
     Traffic destined for CSPs and SaaS can use a more direct path via the Internet if allowed by the Enterprise.

The CSPs connected to the Internet, a CSP gateway to MPLS, and Ecosystem networks connected directly to Data Centers will extend the Enterprise Network to support enhanced consumption of services like SaaS and IoT as well as the various Cloud Service Providers.

Individuals will come in over a variety of connectivity mechanisms including broadband and telco wireless.

Provided the cost structure is competitive, backup paths for many of these networks are likely to shift toward future implementations of Telco 5G.

Thursday, March 16, 2017

Is Agile about brain chemistry

DevOps on a neuron
Is there a correlation between brain chemistry and development using the Continuous Integration mechanism of the DevOps model?


This relates to Agile in a couple of interesting ways.  The first being goal-setting, where the Agile Development process introduces changes to the Application Lifecycle in easily manageable increments.  

     Agile coaches recommend two-week cycles, which are goal timeframes easily understood by the Agile team.  Too long and procrastination takes hold.  Too short and the development becomes daunting. 

The second is a relative randomness of action vs reward.  This has quite a lot to do with human brain chemistry, but essentially means that the Agile development corresponds roughly to an illusion of progress, if not outright progress.  

     It puts the developer into the position of a firefighter, being allowed a quick win in the form of an application enhancement or fix that corresponds directly to a goal that must pass strenuous test and validation.  

Success, much like the firefighter extinguishing a fire, is a buildup of doubt, fear and any number of negative emotions, followed by a release of endorphins.

     The chance of success is interwoven in the complexity of the solution requirements and the random nature of fast fails.

Ultimately, DevOps is about behavior.  If the sense of creation within a relatively small window and the struggle of creation do not tarnish the motivating growth or momentum (seem too difficult), the Agile development process is easier to continue than stop.

     This reinforces the behaviors of Agile methods as well as the ancillary benefits of the entire team participating in the successes.

In this way, it is possible that Agile is a means to change application development in a way that relates more to human brain chemistry than simply developing code.

Saturday, March 11, 2017

What does Watson think about you

http://gwen-systemu.mybluemix.net/ 

Follow the link, free and no registration (though, they will get your twitter handle out of the deal).

     Choose Content type Radio Button "Twitter"

     Enter your twitter name.

     Select Search (It'll find your tweets automagically)

     Press Analyze and await the results.

Figure 1.  Watson inputs to Personality Insights

Here's what Watson said about me:  (guess Watson didn't see my cartoon blog)

Your Personality*

You are shrewd and unconventional.

You are philosophical: you are open to and intrigued by new ideas and love to explore them. You are authority-challenging: you prefer to challenge authority and traditional values to help bring about positive changes. And you are solemn: you are generally serious and do not joke much.

Your choices are driven by a desire for discovery.

You are relatively unconcerned with both tradition and taking pleasure in life. You care more about making your own path than following what others have done. And you prefer activities with a purpose greater than just personal enjoyment.

*Compared to most people who participated in our surveys.

Wednesday, March 8, 2017

Creation and destruction without remorse

Recent announcements by network vendors (Arista, Juniper, Cisco) specifically taking aim at containers as part of their strategy indicate that the evolution of the platform continues.

When I say The Platform, I mean specifically the Value Chain of workload execution that exists as separate entities within the Enterprise and the Public Cloud.

Adoption of methods that provide 'like' services in both the Enterprise and Public Cloud pose an interesting point of view in the direction the network industry will take.  In this view of the world, there is a history of expectation that supporting capabilities within the Enterprise will resemble the capabilities in the Public Cloud and vice-versa.

The two service areas are advancing, often with differing goals in mind for the use of their virtualization services.  Enhancement of the capabilities is happening on different trajectories: AWS, for example, is moving its Platform-as-a-Service toward serverless functions, while enterprise services are evolving as Virtual Infrastructure Managers that are starting to tie into container capabilities.

The "least common denominator" capabilities that allow meaningful ubiquity in an all encompassing service model of today may very well disappear over time.  This to be replaced by the service chaining of decoupled application functions.

Based on some of these recent announcements, the network vendors mean to enter this brave new world in their areas of strength.

     Arista is pursuing the support of complex workflow execution, where its network code can be spun up or down as workloads change, with the same code on hardware, in virtualization and within containers.

     Juniper's Contrail similarly supports virtual services like their vRouter that interacts directly with the virtual infrastructure management for automation (service chaining).

     Cisco's Project Contiv may be the most ambitious, applying policies (and networking) that characterize the 'intent' of an application's deployment.

Some of the key things about DevOps that will play a role in how this all works out, and that the network vendors should take to heart:


  •      Application development is not an isolated activity.  When one finds a useful capability, they will share it.
  •      Because containers can be easily shared, applications are unlikely to be created from scratch.  Making sure developers can share useful capability is vital.
  •      Network methods used to support applications must be easily created and destroyed.


Or, as posed to me by Rick Wilhelm, "Containers allow creation and destruction of application environments without drama or remorse."  -- so must it be for the network.

remorseless creation and destruction



   

Saturday, February 25, 2017

DevOps is Sensing for Platform's sake

In the previous post on the high level reference architecture for DevOps, we identified the lifecycle attributes of modern application development.

The state of the art in application delivery is in a continual state of refinement, subject to the execution of economies on the Value Chain attributes of both infrastructure and platform delivery.

This is further enriched through the use of shared development (opensource, github, etc) and recognition of "future sensing agents" within the lifecycle of application delivery, particularly the exposure and refinement of APIs for the purpose of automation.

All of this is made possible by the continuing evolution of application development and operational support of equipment infrastructure at a fundamental/foundational level.  It is an OODA loop and seems to be accelerating in direct alignment with the practice of Agile Development.

Where 2nd Platform application designs supported an abstraction of the server through virtualization, the 3rd Platform is leading to distributed, run anywhere application lifecycles that provide the means to portability of workload on pooled Infrastructure as a Service (IaaS).  See Figure 1.

Figure 1. Application Development Evolution Steps
Compelling events in the 3rd Platform include the introduction of Software Defined Networking (SDN), Software Defined Storage (SDS) and Containers that provide the enhanced abstraction of the underlying pool of equipment.

With an appropriate sensitivity to the APIs that provide abstracted functions, the Operations service is being automated at the platform level by industrializing those APIs into Cloud Native functions within the Platform.

This is the "future sensing" that creates the continuing evolution of the Platform space. #mapping

Friday, February 24, 2017

CSPs, SaaS and Network Ecosystems

Networking between Virtual Private Cloud (VPC) deployments is possible, but if you're looking to avoid some of the pitfalls of either multi-region or even multi-vendor deployments

  • it may be necessary to build a substantial part of the network yourself
  • you may have to trombone the traffic through your CSP to Data Center connection

      Neither of these are great options for an infrastructure that is nearly all automated and programmatic.

So, considering the alternatives, there may be some interesting possibilities as Network Ecosystem vendors enhance their services with additional automation and integration.

Consider AT&T's NetBond for instance.  In a situation where you are already using NetBond to create interconnection points for your enterprise integration and consumption of CSP services, imagine the possibility of using the NetBond headends to instrument a connection between extra-regional VPCs in a CSP, like Amazon Web Services.

The major advantage: NetBond is a programmable interface to the Direct Route AND traffic can pass on the AT&T AVPN without having to traverse the Enterprise WAN.

Here's a high level of what that would look like:

Figure 1.  AWS VPC to VPC

At first glance, this looks remarkably similar to VPC routing, but notice that this configuration is completely EXTRA-REGIONAL, it could be used to connect a VPC in US West to a VPC in Singapore.

This could provide some really interesting availability and DR models for application designers.
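
For a sense of what "programmatic" means here, the sketch below shows native AWS inter-region VPC peering with boto3 rather than the NetBond provisioning APIs (which aren't shown); the VPC IDs and regions are placeholders.

    import boto3

    # Illustration of the "entirely programmatic" point using native AWS
    # inter-region VPC peering via boto3. VPC IDs and regions are placeholders;
    # NetBond's own provisioning interface is not shown here.
    west = boto3.client("ec2", region_name="us-west-2")
    sing = boto3.client("ec2", region_name="ap-southeast-1")

    peering = west.create_vpc_peering_connection(
        VpcId="vpc-11111111",        # requester VPC in US West
        PeerVpcId="vpc-22222222",    # accepter VPC in Singapore
        PeerRegion="ap-southeast-1",
    )
    pcx_id = peering["VpcPeeringConnection"]["VpcPeeringConnectionId"]

    # The accepter side approves the connection in its own region (in practice
    # you may need to wait for the request to propagate before accepting).
    sing.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

    # Route table entries on each side would then point the remote CIDR at pcx_id.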

A second possibility is to enhance a Hybrid Cloud service with execution in more than one vendor CSP.

Consider the following figure:

Figure 2.  Amazon VPC connecting to Azure Cloud
In this model, creating a truly vendor independent cloud deployment becomes possible.  Not only will this instrument application delivery across multiple CSPs, but it makes some of the container application deployment possibilities a lot less "sticky."  And yes, it's entirely programmatic.

There's always a question around moving data to the right place.  Considering that quite a number of enterprises use a variety of SaaS services today, it may be nice to move specific volumes of data from one place to another to act on them with some Big Data analytics (and maybe even some #AI in the future).

Consider the next figure:

Figure 3.  CSP to SaaS
As an example: with this method it would be possible in the future to send SFDC data (or even a stream) to an interactive visualization of the data in Microsoft Azure via Power BI.  Again, all done programmatically AND secure.

Ultimately, once network connecting points are made available, interesting things can start to happen with Network Ecosystems.

Update:

.@abusedbits Love it-- A realtime market opportunity feed for the @CSC + @MicrosoftR IML: http://bit.ly/2ibqZpk  #CSCTechTalk

https://twitter.com/JerryAOverton/status/835493717389754368

A compelling use of real time data feed, programmatically applied to a network integration and delivery of interactive visualization with MicrosoftR.


Tuesday, February 21, 2017

DevOps

I was asked to produce a high level, non-waterfall view of DevOps that included all of the associated areas, including Agile Development, Continuous Integration, Continuous Delivery, Application Development and Operations.

It's an amalgam of more than one model, showing the entire lifecycle from Backlog to Ops Incident/Change with principal areas of concern as banners on the diagram.

Figure 1.  DevOps and the Application Development Lifecycle
Please do let me know if you find this useful.

Tuesday, February 14, 2017

map-camp.com

If you are a fan or simply interested in Value Chain Mapping, Wardley Maps or anything else in this area of situational awareness:

Simon Wardley @swardley http://blog.gardeviance.org/ is setting up a first of a kind event in the UK.

It will be a gathering of like minds exploring the depth of Wardley Maps.

Visit map-camp.com to gain situational awareness of .. situational awareness.

     http://www.map-camp.com/

I'm a fan, and here are some of my examples:

     http://www.abusedbits.com/p/value-chain-mapping.html

Friday, January 27, 2017

The Briefest History of Networking

Desire to communicate


Figure 1. Computers Communicate

There was an early desire to have computers communicate with each other.

This takes the form of binary communication, a transfer of 1’s and 0’s that means something to the computer.

Networking is useful when data in one computer is useful to an application running on another computer.


http://www.livescience.com/20727-internet-history.html
http://www.computerhistory.org/timeline/networking-the-web/

Early Systems


Serial

Start – Stop types of communications predate computing (teletype, etc)

These were relatively low bandwidth, but could carry signals over long distances 

They evolved into modern serial communication for computing where pre-determined synchronization and encoding were used

Modem

Modems were developed to significantly reduce the cost of wiring by using the telephone company wires except for the "last mile"

Modems convert the 1’s and 0’s of computer communication into a signal that could travel over telephone lines reliably


https://en.wikipedia.org/wiki/Asynchronous_serial_communication
https://en.wikipedia.org/wiki/Modem
https://en.wikipedia.org/wiki/Last_mile

Telephone System


There’s a problem though, as Van Jacobson put it so well: the Phone System wasn’t about the phones, it was about connecting wires to wires

The utility of the system was dependent on wires running anywhere there was a phone
The wiring of the phone system was the dominant cost

The Phone System revenue was derived by constructing paths between calling endpoints


https://en.wikipedia.org/wiki/Telephone
https://en.wikipedia.org/wiki/Van_Jacobson

From manual to Electromechanical Switches


Figure 2. Early switchboard phone system


Early phone systems were based on switch-boards, where operators would physically connect one line (wire pair) to another at junctions in the telephone system.


Figure 3. Electromechanical Switch


This evolved into electromechanical switches that utilized the phone number as a program to connect wires within the telephone system such that “operators” were no longer required for the majority of calls.

http://www.imradioha.org/Radio/Communications_Nostalgia/Communications_Nostalgia.htm
http://ethw.org/Electromechanical_Telephone-Switching


Problems with the Phone System



The switches had to be centralized with the wiring to be economically feasible, thus creating multiple single points of failure (as well as the development of a monopoly on access to the wiring)

The reliability of any system decreases as the system scale increases and the telephone switches became incredibly large

From the perspective of a computer, until the path is established, data can’t flow, therefore efficiency decreases during any of the telephone systems’ connecting procedures

Then...


Abstraction of the Path



Paul Baran, in 1964 theorized a distributed (de-centralized) communications network that could eliminate the points of failure in traditional communications systems

Donald Davies independently worked on networking and coined the term “packet switching”, where the computer split the communications into small segments and, independent of the path, reassembled them at the endpoint

Both are credited with the development of modern distributed computer networking

https://en.wikipedia.org/wiki/Paul_Baran
https://en.wikipedia.org/wiki/Donald_Davies

(We can also argue that many of the technologies that evolved in parallel in the following years are abstractions of previous networking technologies.)

ARPAnet


In 1969, the Advanced Research Projects Agency Network was born, funded by the United States Department of Defense

ARPAnet was an early packet switching network and a precursor to the Internet

It evolved over time and was expanded significantly by the National Science Foundation to support supercomputing at universities in the US

In 1982, the TCP/IP communications protocol was introduced to standardize the protocol for ARPAnet

The result of this, along with the development of the World Wide Web hypertext markup language, provided the means to industrialize the technology that became the Internet

https://en.wikipedia.org/wiki/ARPANET


Figure 4. ARPAnet 1969 to 1984


https://personalpages.manchester.ac.uk/staff/m.dodge/cybergeography/atlas/historical.html
https://www.wired.com/2015/06/mapping-the-internet/


TCP/IP Won -- every time!


In 1974 Bob Kahn and Vint Cerf published “A Protocol for Packet Network Intercommunication”, known as Transmission Control Protocol

This provided a means to standardize and industrialize the communications between computers on a network

The addressing structure provided a means to globally connect independently run networks together


World Wide Web


In 1990 Tim Berners-Lee proposed a hypertext project in a client/server configuration called World Wide Web

Using Uniform Resource Locators (URLs), it enabled human-readable identification of materials with a hypertext browser, tying together TCP and the Domain Name System (DNS)

Considered the #1 moment that shaped the world by the British Council (and who can argue with that)


https://en.wikipedia.org/wiki/Tim_Berners-Lee
https://en.wikipedia.org/wiki/Domain_Name_System
https://www.britishcouncil.org/80moments/

Evolution


Request for Comments (RFC) play a vital role in the continuing development of networking technologies.

Technologies developed in accordance with the OSI model have provided the means for continued evolution of the foundation technologies into networking as we know it today.

http://www.internetsociety.org/internet/what-internet/history-internet/brief-history-internet
http://www3.nd.edu/~dwang5/courses/fall16/pdf/evolution.pdf
https://en.wikipedia.org/wiki/OSI_model
https://en.wikipedia.org/wiki/Request_for_Comments

Internet


Figure 5. Internet on July 11, 2015


An industrialized communications network

Capable of communications around problem areas

Global any-to-any addressing, any computer can talk to any computer

Connects independent networks with standardized protocols

Evolves along with multiple network technologies since inception - Thanks TCP/IP!

Operates on electrical, optical and radio frequency interconnecting mechanisms


http://www.opte.org/ under Creative Commons from LyonLabs
https://en.wikipedia.org/wiki/Internet

Update:  Great preso on network interfaces:  http://player.slideplayer.com/23/6674193/# 

Wednesday, January 25, 2017

Network Abstraction Virtualization SDN VNF

Recent question asked:  What is this network virtualization stuff I keep hearing about?


Figure 1.  Network packets and trains


Network virtualization can apply to multiple areas of networking.  At a high level....

Network virtualization technically started with VLAN, which stands for virtual LAN, where the broadcast domain was abstracted away from ALL of the physical endpoints in the network.   This made it possible to group computers on a network with some level of logic; it's done in software rather than by changing wires and can be considered an abstraction of the wiring.

There are a couple of different types of Software Defined Networking (SDN), the leading one right now is an "overlay"  in a tunnel over an "underlay" or "provider" network.  It exists as an abstraction of one network on top of another, where the underlay is responsible for fast packet performance (traditional networking) and the overlay is responsible for specific awareness or intelligence of the communicating endpoints.
     The simple example:  If you consider a train the "underlay" network (it moves packets efficiently) then a person riding on the train with their own bag is the "overlay."  The train doesn't have to know where the person is going, just that a portion of their travel is between these two endpoints.  This abstracts the path of the data packets from the logic of how they are connected by placing the traffic in a network tunnel.  Common tunnel types are VxLAN, GRE and NVGRE.  This type is associated with technology like VMware NSX and Microsoft Hyper-V networking.
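
A toy sketch of the overlay/underlay split, with the inner packet riding inside an outer one just like the passenger on the train; the field names are simplified compared to real VxLAN/GRE headers.

    # Toy sketch of overlay/underlay: the inner (tenant) packet rides inside an
    # outer (underlay) packet. Field names are simplified for illustration.
    inner_packet = {"src": "10.0.1.5", "dst": "10.0.2.9", "payload": "app data"}

    def encapsulate(inner, vni, outer_src, outer_dst):
        # The underlay only sees the outer addresses; the VNI identifies which
        # overlay segment the inner packet belongs to.
        return {"src": outer_src, "dst": outer_dst, "vni": vni, "inner": inner}

    def decapsulate(outer):
        return outer["inner"]

    frame = encapsulate(inner_packet, vni=5001,
                        outer_src="192.168.10.1", outer_dst="192.168.20.1")
    assert decapsulate(frame) == inner_packet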

     There is another SDN type that acts on the flow of packets between their source and destination. This also abstracts the path of the data packets from the logic of how they are connected but, in contrast to the concept above, this type of SDN acts primarily on the forwarding plane of network hardware.  This type is associated with technology like OpenFlow.

And there is also another type of network virtualization happening right now, where the "function" or software coding of a network device is built within a software package, like a virtual host or container, that can be run on a standard server.  This is called a Virtual Network Function (VNF) and is closely associated with the advocacy of moving from  hardware to software delivery of services, often called Network Function Virtualization (NFV).
     The simple example:  A router has been historically a device with interfaces that moves packets from one physical or logical interface to another according to a configured pattern.  A VNF router is software (not a device) that runs on a server that moves packets from one software or logical interface to another.  This abstracts away the hardware in favor of software delivery of the capability.  There's a bit of this in the enterprise and a lot starting in the Telecommunications Carrier space.
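
A minimal sketch of that VNF idea: a "router" as plain software doing a longest-prefix-match lookup to pick a next hop; the prefixes and interface names are made up.

    import ipaddress

    # Minimal sketch of the VNF idea: a "router" as plain software, deciding the
    # next hop with a longest-prefix-match lookup instead of dedicated silicon.
    FIB = {
        ipaddress.ip_network("10.0.0.0/8"):  "vnic0",
        ipaddress.ip_network("10.1.0.0/16"): "vnic1",
        ipaddress.ip_network("0.0.0.0/0"):   "vnic-uplink",
    }

    def next_hop(dst):
        addr = ipaddress.ip_address(dst)
        # Longest prefix wins, exactly as in a hardware router's FIB.
        matches = [net for net in FIB if addr in net]
        return FIB[max(matches, key=lambda net: net.prefixlen)]

    print(next_hop("10.1.2.3"))    # -> vnic1
    print(next_hop("8.8.8.8"))     # -> vnic-uplink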

Again, this is at a high level and hope that it helps.  There are other network abstractions currently in use, but these are the primary ones getting all of the media attention today.

Friday, January 20, 2017

Modern Network Areas in Software Defined

Software Defined LAN (SD-LAN, SDN) - Local Area Networking deployed as software combined with hardware and/or software as an overlay network (or possibly integrated) with exposed APIs that can be managed from a network controller or by direct programmatic integration.

Figure 1.  Traditional Network Model
Figure 2.  Software Defined Network Model

Software Defined WAN (SD-WAN, SDN) - Wide Area Networking deployed as an appliance or VNF on a hypervisor, typically on a standardized hardware platform with exposed APIs that can be managed from a network controller or by direct programmatic integration.

Figure 3.  Classical vs Software Defined WAN

     Network Ecosystem - a WAN system created to provide integration capability between consumers and vendors in high population locations, like within a Data Center or between Data Centers within their service coverage area.  Examples in the industry:  Equinix, Telx.

     Universal Cross Connect - a WAN system created to provide integration capability between consumers and vendors that provides a connecting service by extending this service to specific customer locations via Telecom, Dark Fiber, etc. http://www.abusedbits.com/2015/09/hybrid-cloud-how-it-can-look.html

     Service Management Platform (SMP) – Software, tools, services that create the connections for customers between orchestrated networks and Hybrid Cloud. http://www.abusedbits.com/2015/09/hybrid-cloud-how-it-can-look.html

     Enhanced Cloud WAN Service - a WAN service where the routing, switching, firewall, proxy, as well as Internet and other capabilities are run within a Telecom Cloud and connectivity to the consumer is via any physical means available, typically with a reduced equipment requirement at the customer premises. http://www.abusedbits.com/2016/10/wan-ecosystems-evolution.html

Virtual Network Function (VNF) - Any network capability implemented as an appliance or virtual machine and executed as software within a virtualization platform or framework, typically in an x86 hypervisor allowing specific access capability to the network interfaces.

Figure 4.  Virtual Network Function


Network Function Virtualization (NFV) - Management and control of virtualized network functions (VNF) that includes the OSS/BSS and management tasks for the lifecycle of VNFs.

     Management and Orchestration (MANO) - the NFV-MANO platform providing the roles of OSS/BSS as part of a related control framework, inclusive of orchestration of network functions (Network Function Virtualization Orchestration - NFVO), management of VNFs (Virtual Network Function Manager - VNFM) and awareness of infrastructure management (Virtualized Infrastructure Manager - VIM).  http://www.etsi.org/deliver/etsi_gs/NFV-MAN/001_099/001/01.01.01_60/gs_NFV-MAN001v010101p.pdf 


Thursday, January 19, 2017

Someone else did it a different way



"Because we've always done it this way" is an absolutely horrible reason to continue doing things in the same way.

It's expensive to always do it the same way.

This should be obvious.  Unless there is a specific legal reason to do something in a certain way, the cost of it will never go down.  As a matter of fact, it will go up if there is nothing putting a cost pressure on it.  It won't improve AND it will cost more over time.

It's boring to always do it the same way.

Nothing is more menial than the exact same thing day after day.  People placed in this situation will burn out or become completely ineffectual.

It kills prioritization to effect change.

When the time comes to consider and evaluate new responses to problems, this one single phrase prevents action.

It supports continuing lack of awareness or understanding.

For every new technology and new way of doing something, there was someone that understood an awful lot about the previous technology or method.  Entire businesses rise and fall based on foundational improvements that cannot be achieved without knowledge of the topic.

It utterly halts innovation.

A corollary is "there's no reason to change something that works well."  People learn by doing.  In doing, they make mistakes.  Most of the mistakes are not useful and they are costly.  But some of those mistakes identify great new paths.  Those mistakes are part and parcel of improvement.

Someone will do it differently.

Then there will be that moment when you are left wondering why.  It's because someone else did it a different way.

Tuesday, January 10, 2017

Newest Milk and Oldest Wine

In Information Technology, what we actually want is the newest milk and the oldest wine.

     It may not be the oddest phrasing, with a little explanation.

The newest milk provides the sustenance for new growth, like the things that are technologically interesting turned practical or business useful.

A particularly good analogy may be applied to software and software development.  Consider the accelerated speed of delivery provided by new platforms enabled for Agile delivery of applications over traditional methods.

The speed of change is altered fundamentally and we all want the newest software.

The oldest wine is the libation that is well known and well understood, technologically it is the stability of continuous operation with minimal interruption.

In IT, the oldest and most well understood capabilities are typically the most stable.  Consider things like network switches that have uptime on the order of years.  They perform their given tasks from the time they are installed until they are removed.

As with the oldest wine, equipment relatively free from bugs sets a precedent for the future experience and so, we want the most stability.

And then there is powdered drink mix, technologically equivalent to the half-truth, misrepresented and misunderstood.  The thing is, powdered drink mix often tastes excruciatingly wonderful, comes in a variety of flavors and is absolutely packed with sugar.

The advertisers and the media amplify messaging particularly to get attention.

     As an example, we're seeing "disruption of" or "disruptive _____" being used quite a lot.

In advertising, nearly all brand identification is good.  This doesn't mean that this miraculous new thing is particularly good at changing the speed of change or inherently able to increase stability.

It is simply that some new thing is being advertised and some really talented people are paid to advertise this new thing.

So, the point of my rant: it is the job of the programmers and IT architects to explore and understand which things are like the newest milk, which are the oldest wine and which are neither, AND to make sure you don't always drink the "powdered drink mix", which is neither nutritious nor a libation.

and

My Cat:

Tuesday, January 3, 2017

Next IT Shift, back to Distributed

Figure 1.  We're about to swing to Distributed Computing again
There's a lot to be said for the ever changing IT industry and 2017 should be a banner year in new things happening.

As a recap, in 2016 we saw the Digital Shift take hold of IT and give it a really good shake.  Starting 2016 everyone was asking if you had a plan for Cloud Computing.  I now hope that you do and if not......

There's a huge number of predictions for what the focus and industry will be looking at in 2017.  Of these, #Artificial #Intelligence or #AI and its many brethren (#MachineLearning, #Robots, etc) are likely to impose a new Transformation Shift in the industry.

The way I think about it is #IoT with some #smarts.  At a most basic level, anything that can have a sensor applied to it is likely to be considered for an IT upgrade in the framework of an #IoT style device.  These upgrades will be sensors that record some type of data, maybe multiple types of data.

The result is a tiny sensor recording something that people find of value and would like to act upon.

This means data.  And when I say data I mean #LotsOfData.  Big Data levels of lots.

Because there will be so much data, it is almost certain that the data will require some culling and curating at the source.  So, there will be a tiny little computer attached to the tiny sensor.
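
A sketch of what that culling might look like on the tiny computer, forwarding only readings that deviate from a running average; the threshold and the sample stream are hypothetical.

    # Sketch of culling at the source: the tiny computer next to the sensor only
    # forwards readings that look interesting, instead of streaming everything.
    def significant(readings, threshold=2.0):
        """Yield only readings that deviate from a running average by > threshold."""
        avg = None
        for value in readings:
            if avg is not None and abs(value - avg) > threshold:
                yield value                      # worth sending upstream
            avg = value if avg is None else 0.9 * avg + 0.1 * value

    sensor_stream = [20.1, 20.2, 20.0, 27.5, 20.3]   # one anomalous spike
    print(list(significant(sensor_stream)))          # -> [27.5]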

The data from the sensor is going to have to go somewhere and I think this will have an effect on Cloud Computing.  We're about to have another pendulum swing in computing back to distributed processing of a sort.

Just like other swings, it will not eliminate the previous technology area, but it will force it to change.

Ideally, valuable data that needs to be acted on rapidly will need to have most of the computation done very close to the source.  The entire idea behind the #OODA #Loop is needed to take advantage of real time or near real time data.

Something will be needed to jumpstart the #AI training to make it all work.  This may be the resulting use of Cloud Computing, or it may create the need for a "Near Edge" Cloud computing capability.

Once the AI is trained, the tiny computer should be self sufficient enough to handle everything else it needs to do other than store the sensor data, making the "Near Edge" Cloud as much a storage system as anything else.

This means that the next transformation is going to make computing at the #Edge a reality again.

It's also likely to change how we deal with programming languages.  Logic programming will give way to data flow programming.  (Might be time to brush the dust off some old data flow language or build a new one).
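
And a hedged sketch of what data flow programming can feel like, wiring transforms together so the data flows through them as it arrives; the stages and values below are hypothetical.

    # Toy sketch of the data flow idea: wire transforms together and let the
    # data flow through them; a stage can drop an item by returning None.
    def pipeline(source, *stages):
        for item in source:
            for stage in stages:
                item = stage(item)
                if item is None:
                    break
            else:
                yield item

    celsius  = [19.5, 20.1, 35.0]
    to_f     = lambda c: c * 9 / 5 + 32
    only_hot = lambda f: f if f > 90 else None
    alert    = lambda f: f"ALERT: {f:.1f}F at the edge"

    print(list(pipeline(celsius, to_f, only_hot, alert)))  # -> ['ALERT: 95.0F at the edge']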