Friday, January 27, 2017

The Briefest History of Networking

Desire to communicate

Figure 1. Computers Communicate

There was an early desire to have computers communicate with each other.

This takes the form of binary communication, a transfer of 1’s and 0’s that means something to the computer.

Networking is useful when data in one computer is useful to an application running on another computer.

Early Systems


Start-stop types of communications predate computing (teletype, etc.)

These were relatively low bandwidth, but could carry signals over long distances

They evolved into modern serial communication for computing, where predetermined synchronization and encoding were used
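That start-stop scheme survives in modern serial framing. A minimal sketch of the common 8-N-1 convention (one start bit, eight data bits sent least-significant-bit first, one stop bit), purely as an illustration:

```python
def frame_byte(data):
    """Frame one byte for 8-N-1 async serial: start bit (0),
    eight data bits LSB-first, one stop bit (1)."""
    bits = [0]                                    # start bit wakes the receiver
    bits += [(data >> i) & 1 for i in range(8)]   # data bits, LSB first
    bits.append(1)                                # stop bit returns line to idle
    return bits

def deframe(bits):
    """Recover the byte from a 10-bit frame, checking the framing bits."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

frame = frame_byte(ord("A"))
print(frame)            # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(deframe(frame))   # 65
```

The receiver only needs to agree on the bit rate and spot the start bit's falling edge, which is what made these links workable over long, noisy wires.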


Modems were developed to significantly reduce the cost of wiring by using the telephone company wires except for the "last mile"

Modems convert the 1’s and 0’s of computer communication into a signal that could travel over telephone lines reliably
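That conversion can be sketched as frequency-shift keying: each bit selects one of two audible tones that survive the phone network's limited bandwidth. The 1070/1270 Hz pair below echoes the old Bell 103 originate channel; the sample rate and baud rate are chosen purely for illustration:

```python
import math

def fsk_modulate(bits, f0=1070.0, f1=1270.0, rate=8000, baud=300):
    """Turn a bit stream into audio samples: tone f0 for a 0 bit,
    tone f1 for a 1 bit, one tone burst per bit period."""
    samples_per_bit = rate // baud
    samples = []
    for bit in bits:
        freq = f1 if bit else f0
        for n in range(samples_per_bit):
            samples.append(math.sin(2 * math.pi * freq * n / rate))
    return samples

tone = fsk_modulate([1, 0, 1, 1])
print(len(tone))   # 104 samples: 4 bits x 26 samples per bit
```

A real modem adds carrier detection, timing recovery, and demodulation on the far end, but the core idea is just this mapping of bits to line-friendly signals.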

Telephone System

There’s a problem though, as Van Jacobson put so well, the Phone System wasn’t about the phones, it was about connecting wires to wires

The utility of the system was dependent on wires running anywhere there was a phone
The wiring of the phone system was the dominant cost

The Phone System revenue was derived by constructing paths between calling endpoints

From manual to Electromechanical Switches

Figure 2. Early switchboard phone system

Early phone systems were based on switch-boards, where operators would physically connect one line (wire pair) to another at junctions in the telephone system.

Figure 3. Electromechanical Switch

This evolved into electromechanical switches that utilized the phone number as a program to connect wires within the telephone system such that “operators” were no longer required for the majority of calls.

Problems with the Phone System

The switches had to be centralized with the wiring to be economically feasible, creating multiple single points of failure (as well as a monopoly on access to the wiring)

The reliability of any system decreases as its scale increases, and the telephone switches became incredibly large

From the perspective of a computer, data can't flow until the path is established, so efficiency drops during any of the telephone system's connecting procedures


Abstraction of the Path

In 1964, Paul Baran theorized a distributed (decentralized) communications network that could eliminate the single points of failure in traditional communications systems

Donald Davies independently worked on networking and coined the term "packet switching," in which the computer splits communications into small segments and, independent of the path, reassembles them at the endpoint

Both are credited with the development of modern distributed computer networking

(We can also argue that many of the technologies that evolved in parallel in the following years are abstractions of previous networking technologies.)
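Davies's idea is simple enough to sketch: number the segments, let them take any path (and arrive in any order), and reassemble by sequence number at the endpoint. The 8-byte segment size here is arbitrary:

```python
import random

def packetize(message, size=8):
    """Split a message into (sequence number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort by sequence number and rejoin, regardless of arrival order."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"reassembled at the endpoint, independent of the path"
packets = packetize(msg)
random.shuffle(packets)            # simulate out-of-order arrival
assert reassemble(packets) == msg  # the message survives intact
```

Because no single path is sacred, the network can route around failures, which is exactly the property Baran was after.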


In 1969, the Advanced Research Projects Agency Network was born, funded by the United States Department of Defense

ARPAnet was an early packet switching network and a precursor to the Internet

It evolved over time and was expanded significantly by the National Science Foundation to support supercomputing at universities in the US

In 1983, the TCP/IP communications protocol was adopted to standardize communications on ARPAnet

The result of this, along with the development of the World Wide Web and its hypertext markup language (HTML), provided the means to industrialize the technology that became the Internet

Figure 4. ARPAnet 1969 to 1984

TCP/IP Won -- every time!

In 1974 Bob Kahn and Vint Cerf published "A Protocol for Packet Network Intercommunication," which described the Transmission Control Protocol

This provided a means to standardize and industrialize the communications between computers on a network

The addressing structure provided a means to globally connect independently run networks together
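Python's standard ipaddress module makes that structure easy to see: an address belongs to a network prefix, and it's the prefix, not the individual host, that independently run networks agree on. The prefixes below come from the documentation-only ranges:

```python
import ipaddress

# Two independently run networks, each globally addressable by prefix
net_a = ipaddress.ip_network("203.0.113.0/24")
net_b = ipaddress.ip_network("198.51.100.0/24")

host = ipaddress.ip_address("203.0.113.42")
print(host in net_a)   # True  -- this network is responsible for the host
print(host in net_b)   # False -- traffic for it gets forwarded elsewhere
```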

World Wide Web

In 1990 Tim Berners-Lee proposed a hypertext project in a client/server configuration called World Wide Web

Uniform Resource Locators (URLs) enabled human-readable identification of materials in a hypertext browser, tying together TCP and the Domain Name System (DNS)
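The standard library can pull a URL apart to show exactly what is being tied together; the example is the address of the first web page served at CERN:

```python
from urllib.parse import urlparse

url = urlparse("http://info.cern.ch/hypertext/WWW/TheProject.html")
print(url.scheme)    # 'http' -- the protocol, carried over TCP
print(url.hostname)  # 'info.cern.ch' -- resolved to an address via DNS
print(url.path)      # '/hypertext/WWW/TheProject.html' -- the resource
```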

Considered the #1 moment that shaped the world by the British Council (and who can argue with that)


Requests for Comments (RFCs) play a vital role in the continuing development of networking technologies.

Technologies developed in accordance with the OSI model have provided the means for continued evolution of the foundation technologies into networking as we know it today.


Figure 5. Internet on July 11, 2015

An industrialized communications network

Capable of routing communications around problem areas

Global any-to-any addressing, any computer can talk to any computer

Connects independent networks with standardized protocols

Has evolved alongside multiple network technologies since inception - Thanks TCP/IP!

Operates over electrical, optical, and radio-frequency interconnecting mechanisms

Update:  Great preso on network interfaces: 

Wednesday, January 25, 2017

Network Abstraction Virtualization SDN VNF

Recent question asked:  What is this network virtualization stuff I keep hearing about?

Figure 1.  Network packets and trains

Network virtualization can apply to multiple areas of networking.  At a high level....

Network virtualization technically started with the VLAN, which stands for virtual LAN, where the broadcast domain was abstracted away from ALL of the physical endpoints in the network.  This made it possible to group computers on a network with some level of logic; it's done in software rather than by changing wires and can be considered an abstraction of the wiring.
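The whole abstraction rides on a four-byte tag (IEEE 802.1Q) inserted into the Ethernet frame. A sketch of building one; the VLAN ID is arbitrary:

```python
import struct

def vlan_tag(vlan_id, priority=0):
    """Build the 4-byte 802.1Q tag: TPID 0x8100, then a TCI word of
    priority (3 bits), drop-eligible flag (1 bit, left 0), VLAN ID (12 bits)."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    return struct.pack("!HH", 0x8100, tci)

print(vlan_tag(10).hex())   # '8100000a' -- frame now belongs to VLAN 10
```

Switches that see this tag keep VLAN 10's broadcast domain separate from every other VLAN on the same physical wires.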

There are a couple of different types of Software Defined Networking (SDN); the leading one right now is an "overlay" in a tunnel over an "underlay" or "provider" network.  It exists as an abstraction of one network on top of another, where the underlay is responsible for fast packet performance (traditional networking) and the overlay is responsible for specific awareness or intelligence of the communicating endpoints.
     The simple example:  If you consider a train the "underlay" network (it moves packets efficiently), then a person riding on the train with their own bag is the "overlay."  The train doesn't have to know where the person is going, just that a portion of their travel is between these two endpoints.  This abstracts the path of the data packets from the logic of how they are connected by placing the traffic in a network tunnel.  Common tunnel types are VXLAN, GRE and NVGRE.  This type is associated with technology like VMware NSX and Microsoft Hyper-V networking.
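In VXLAN terms, the person's "ticket" is an 8-byte header prepended to the original frame before it rides inside an outer UDP/IP packet. A sketch following the RFC 7348 layout, with an arbitrary VXLAN Network Identifier (VNI):

```python
import struct

def vxlan_header(vni):
    """8-byte VXLAN header: flags byte 0x08 (VNI-valid), three reserved
    bytes, the 24-bit VNI, and one final reserved byte."""
    return struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)

inner_frame = b"the original Ethernet frame"   # stand-in payload
encapsulated = vxlan_header(5001) + inner_frame
print(vxlan_header(5001).hex())   # '0800000000138900'
```

The underlay forwards the outer packet by ordinary routing; only the tunnel endpoints ever look at the VNI or the inner frame.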

   There is another SDN type that acts on the flow of packets between their source and destination.  This also abstracts the path of the data packets from the logic of how they are connected, but in contrast to the concept above, this type of SDN acts primarily on the forwarding plane of network hardware.  This type is associated with technology like OpenFlow.
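That flow-oriented style can be sketched as an ordered table of match fields and actions, which a controller would install into the switch's forwarding plane. The field names and port labels below are illustrative, not real OpenFlow syntax:

```python
# Ordered rules: first match wins; the empty match is the table-miss rule.
flow_table = [
    ({"dst_ip": "10.0.0.5", "dst_port": 80}, "output:port2"),
    ({"dst_ip": "10.0.0.5"},                 "output:port3"),
    ({},                                     "drop"),
]

def lookup(packet):
    """Return the action of the first rule whose fields all match the packet."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action

print(lookup({"dst_ip": "10.0.0.5", "dst_port": 80}))  # output:port2
print(lookup({"dst_ip": "10.0.0.5", "dst_port": 22}))  # output:port3
print(lookup({"dst_ip": "10.0.0.9"}))                  # drop
```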

And there is also another type of network virtualization happening right now, where the "function," or software implementation, of a network device is built within a software package, like a virtual machine or container, that can be run on a standard server.  This is called a Virtual Network Function (VNF) and is closely associated with the advocacy of moving from hardware to software delivery of services, often called Network Function Virtualization (NFV).
     The simple example:  Historically, a router has been a device with interfaces that moves packets from one physical or logical interface to another according to a configured pattern.  A VNF router is software (not a device) running on a server that moves packets from one software or logical interface to another.  This abstracts away the hardware in favor of software delivery of the capability.  There's a bit of this in the enterprise and a lot starting in the Telecommunications Carrier space.
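Stripped to its core, that VNF router is a longest-prefix-match lookup running as ordinary software. A sketch using the standard ipaddress module; the routes and interface names are invented for illustration:

```python
import ipaddress

routes = {
    ipaddress.ip_network("10.0.0.0/8"):  "eth1",
    ipaddress.ip_network("10.1.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"):   "eth0",   # default route
}

def next_hop_interface(dst):
    """Pick the matching route with the longest prefix, as a hardware
    router would, but entirely in software."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net in routes if addr in net),
               key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop_interface("10.1.2.3"))   # eth2 -- /16 beats /8
print(next_hop_interface("8.8.8.8"))    # eth0 -- only the default matches
```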

Again, this is at a high level; I hope it helps.  There are other network abstractions currently in use, but these are the primary ones getting all of the media attention today.

Friday, January 20, 2017

Modern Network Areas in Software Defined

Software Defined LAN (SD-LAN, SDN) - Local Area Networking deployed as software combined with hardware and/or software as an overlay network (or possibly integrated) with exposed APIs that can be managed from a network controller or by direct programmatic integration.

Figure 1.  Traditional Network Model
Figure 2.  Software Defined Network Model

Software Defined WAN (SD-WAN, SDN) - Wide Area Networking deployed as an appliance or VNF on a hypervisor, typically on a standardized hardware platform with exposed APIs that can be managed from a network controller or by direct programmatic integration.

Figure 3.  Classical vs Software Defined WAN

     Network Ecosystem - a WAN system created to provide integration capability between consumers and vendors in high population locations, like within a Data Center or between Data Centers within their service coverage area.  Examples in the industry:  Equinix, Telx.

     Universal Cross Connect - a WAN system created to provide integration capability between consumers and vendors that provides a connecting service by extending this service to specific customer locations via Telecom, Dark Fiber, etc.

     Service Management Platform (SMP) – Software, tools, services that create the connections for customers between orchestrated networks and Hybrid Cloud.

     Enhanced Cloud WAN Service - a WAN service where the routing, switching, firewall, proxy, as well as Internet and other capabilities are run within a Telecom Cloud and connectivity to the consumer is via any physical means available, typically with a reduced equipment requirement at the customer premises.

Virtual Network Function (VNF) - Any network capability implemented as an appliance or virtual machine and executed as software within a virtualization platform or framework, typically in an x86 hypervisor allowing specific access capability to the network interfaces.

Figure 4.  Virtual Network Function

Network Function Virtualization (NFV) - Management and control of virtualized network functions (VNF) that includes the OSS/BSS and management tasks for the lifecycle of VNFs.

     Management and Orchestration (MANO) – Platform (NFV-MANO) providing the roles of OSS/BSS as part of a related control framework, inclusive of orchestration of network functions (Network Function Virtualization Orchestration - NFVO), management of VNFs (Virtual Network Function Manager - VNFM) and awareness of infrastructure management (Virtualized Infrastructure Manager - VIM).

Thursday, January 19, 2017

Someone else did it a different way

"Because we've always done it this way" is an absolutely horrible reason to continue doing things in the same way.

It's expensive to always do it the same way.

This should be obvious.  Unless there is a specific legal reason to do something in a certain way, the cost of it will never go down.  As a matter of fact, it will go up if there is nothing putting a cost pressure on it.  It won't improve AND it will cost more over time.

It's boring to always do it the same way.

Nothing is more menial than the exact same thing day after day.  People placed in this situation will burn out or become completely ineffectual.

It kills prioritization to effect change.

When the time comes to consider and evaluate new responses to problems, this one single phrase prevents action.

It supports continuing lack of awareness or understanding.

For every new technology and new way of doing something, there was someone that understood an awful lot about the previous technology or method.  Entire businesses rise and fall based on foundational improvements that cannot be achieved without knowledge of the topic.

It utterly halts innovation.

A corollary is "There's no reason to change something that works well."  People learn by doing.  In doing, they make mistakes.  Most of the mistakes are not useful and they are costly.  But some of those mistakes identify great new paths.  Those mistakes are part and parcel of improvement.

Someone will do it differently.

Then there will be that moment when you are left wondering why.  It's because someone else did it a different way.

Tuesday, January 10, 2017

Newest Milk and Oldest Wine

In Information Technology, what we actually want is the newest milk and the oldest wine.

     The phrasing isn't as odd as it sounds, given a little explanation.

The newest milk provides the sustenance for new growth, like the things that are technologically interesting turned practical or business useful.

A particularly good analogy may be applied to software and software development.  Consider the accelerated speed of delivery provided by new platforms enabled for Agile delivery of applications over traditional methods.

The speed of change is altered fundamentally and we all want the newest software.

The oldest wine is the libation that is well known and well understood, technologically it is the stability of continuous operation with minimal interruption.

In IT, the oldest and most well understood capabilities are typically the most stable.  Consider things like network switches that have uptime on the order of years.  They perform their given tasks from the time they are installed until they are removed.

As with the oldest wine, equipment relatively free from bugs sets a precedent for the future experience, and so we want the most stability.

And then there is powdered drink mix, technologically equivalent to the half-truth, misrepresented and misunderstood.  The thing is, powdered drink mix often tastes excruciatingly wonderful, comes in a variety of flavors and is absolutely packed with sugar.

The advertisers and the media amplify messaging particularly to get attention.

     As an example, we're seeing "disruption of" or "disruptive _____" being used quite a lot.

In advertising, nearly all brand identification is good.  This doesn't mean that this miraculous new thing is particularly good at changing the speed of change or inherently able to increase stability.

It is simply that some new thing is being advertised and some really talented people are paid to advertise this new thing.

So, the point of my rant: it is the job of programmers and IT architects to explore and understand which things are like the newest milk, which are the oldest wine and which are neither, AND to make sure you don't always drink the "powdered drink mix," which is neither nutritious nor a libation.



Tuesday, January 3, 2017

Next IT Shift, back to Distributed

Figure 1.  We're about to swing to Distributed Computing again
There's a lot to be said for the ever changing IT industry and 2017 should be a banner year in new things happening.

As a recap, in 2016 we saw the Digital Shift take hold of IT and give it a really good shake.  Starting 2016 everyone was asking if you had a plan for Cloud Computing.  I now hope that you do and if not......

There's a huge number of predictions for what the focus and industry will be looking at in 2017.  Of these, #Artificial #Intelligence or #AI and its many brethren (#MachineLearning, #Robots, etc.) are likely to impose a new Transformation Shift in the industry.

The way I think about it is #IoT with some #smarts.  At a most basic level, anything that can have a sensor applied to it is likely to be considered for an IT upgrade in the framework of an #IoT style device.  These upgrades will be sensors that record some type of data, maybe multiple types of data.

The result is a tiny sensor recording something that people find of value and would like to act upon.

This means data.  And when I say data I mean #LotsOfData.  Big Data levels of lots.

Because there will be so much data, it is almost certain that the data will require some culling and curating at the source.  So, there will be a tiny little computer attached to the tiny sensor.
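That culling could be as simple as keeping only statistical outliers on the device and forwarding just those. The two-sigma threshold and the sample values below are illustrative assumptions:

```python
import statistics

def cull(readings, threshold=2.0):
    """Return only readings more than `threshold` standard deviations
    from the mean; everything else never leaves the device."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [r for r in readings if abs(r - mean) > threshold * stdev]

samples = [20.1, 20.2, 19.9, 20.0, 35.7, 20.1, 20.2]  # one anomaly
print(cull(samples))   # [35.7] -- only the outlier is worth uploading
```

Real deployments would use a rolling window rather than a whole batch, but the effect is the same: most of the data stays at the edge.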

The data from the sensor is going to have to go somewhere and I think this will have an effect on Cloud Computing.  We're about to have another pendulum swing in computing back to distributed processing of a sort.

Just like other swings, it will not eliminate the previous technology area, but it will force it to change.

Ideally, valuable data that needs to be acted on rapidly will have most of the computation done very close to the source.  That is the entire idea behind the #OODA #Loop: taking advantage of real-time or near-real-time data.

Something will be needed to jumpstart the #AI training to make it all work.  This may be the resulting use of Cloud Computing, or it may create the need for a "Near Edge" Cloud computing capability.

Once the AI is trained, the tiny computer should be self-sufficient enough to handle everything it needs to do other than store the sensor data, making the "Near Edge" Cloud as much a storage system as anything else.

This means that the next transformation is going to make computing at the #Edge a reality again.

It's also likely to change how we deal with programming languages.  Logic programming will give way to data flow programming.  (Might be time to brush the dust off some old data flow language or build a new one).