Wednesday, December 23, 2015

Episodic Intransience vs Immutable Infrastructure

Anytime you hear someone using the term Immutable Infrastructure, you are allowed to giggle, snort and/or laugh.  Servers, networks and storage are not immutable, and they never will be.  If you write about, have recently presented on, or otherwise define, discuss or profess Immutable Infrastructure, my apologies.

Keep in mind, immutable is the equivalent of Willy Wonka's Everlasting Gobstopper.

Immutable means that the device, construct or mechanism is unchanging or unable to change.  Abstract it as much as you like; some underlying subsystem is absolutely guaranteed to cause change.

With today's infrastructure, we are guaranteed it will physically change sometime within 7 years; that's a typical equipment vendor product refresh cycle.  And purchasing infrastructure equipment without the continuous improvement provided by the product evolution cycle would result in some extremely old-school equipment.  Not to mention that memory use is transient, storage is necessarily consumed, and the network bursts and caches as needed.  And don't get me started on drivers.  The equipment infrastructure is not immutable.

The operating system is guaranteed to change much more rapidly, with patch releases measured in days or weeks.  The operating system infrastructure is not immutable.

The virtualization system changes literally on demand, whether manually or through automation.  So, not immutable.

Some of the Docker aficionados talk about immutability in the container delivery pattern.  That pattern is tied to a repeatable application development process, repeatable to the tune of possibly minutes.  If it were immutable, why would it ever need to be repeatable (that is, changed)?  Just saying: it's not.

On this I tend to concur with John Willis: "no example in my 35 years of working with IT infrastructure of a system or infrastructure that is completely immutable…"

So, what are we really talking about?  It's not immutability in any of the delivery patterns associated with modern application development.


What we are talking about is certainly a level of intransience.  The architectural pattern must be well defined, repeatable and largely inexhaustible.  It's more like Episodic Intransience, where change/improvement/SCRUM-cycle deliveries happen between durations of intransience.

Monday, November 9, 2015

Spine and Leaf Nodes

This drawing (or one resembling it) seems to keep popping up in Spine and Leaf discussions.  It is an incomplete view of the mechanics of the design.  The Spine and Leaf architecture is certainly compelling for a variety of good reasons, not the least of which is a horizontal scaling model that far exceeds more traditional methods of networking.

See the sdxcentral article.
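Just to put some numbers behind that horizontal scaling claim, here is a rough back-of-the-envelope sketch in Python.  The port counts and link speeds are hypothetical assumptions, not a recommendation; substitute those of your own hardware.

```python
# Rough, back-of-the-envelope sizing for a two-tier spine-and-leaf fabric.
# All port counts and link speeds here are hypothetical; substitute your own.

leaf_host_ports = 48   # server-facing ports per leaf (assume 10G)
leaf_uplinks = 6       # spine-facing ports per leaf (assume 40G)
spine_ports = 32       # leaf-facing ports per spine

# Each leaf connects to every spine, so the spine count is bounded by the
# leaf uplink count, and the leaf count by the spine port count.
max_spines = leaf_uplinks
max_leaves = spine_ports
max_host_ports = max_leaves * leaf_host_ports

# Oversubscription at the leaf: host bandwidth down vs. uplink bandwidth up.
oversubscription = (leaf_host_ports * 10) / (leaf_uplinks * 40)

print(f"{max_spines} spines, up to {max_leaves} leaves, "
      f"{max_host_ports} host ports, {oversubscription:.1f}:1 oversubscription")
```

Even with these modest, assumed port counts, the fabric scales to well over a thousand host ports simply by adding leaves; bigger spines scale it further.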



The actual story of this design model may be much more interesting to the network developer.  If this model were the sole construct of the network design, it wouldn't necessarily be special.



But if the design model calls for redundancy at the Top of Rack (ToR), or for a special physical configuration like a 3-cabinet-wide Logical Rack, those needs can also be supported by the Spine and Leaf network model, fairly simply, in the guise of a Leaf Node arrangement.


That's not all, though.  To scale above the reasonable size of a traditional network, it may be necessary to start thinking about the routing protocol, in order to avoid those pesky all-encompassing broadcast domains and deliver what is largely L3 all the way down to the host.
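One common way to do that is eBGP with a private ASN per leaf.  As a minimal sketch, assuming that approach, here is how a per-leaf plan might be generated; every ASN, name and address below is made up for illustration.

```python
# Hypothetical eBGP-per-leaf plan, one common way to push L3 down toward the
# host. Every ASN, name and address below is invented for illustration.

SPINE_ASN = 65000
LEAF_ASN_BASE = 65001

def leaf_bgp_plan(leaf_count):
    """Assign each leaf its own private ASN and a /32 loopback."""
    plan = []
    for i in range(leaf_count):
        plan.append({
            "leaf": f"leaf{i + 1}",
            "asn": LEAF_ASN_BASE + i,
            "loopback": f"10.0.0.{i + 1}/32",
            "uplink_peer_asn": SPINE_ASN,
        })
    return plan

for entry in leaf_bgp_plan(4):
    print(entry)
```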


Then top it off with a healthy dose of the art of the possible: utilize the routing protocol constructed in the previous model to create a delivery platform for logically isolated networks utilizing VxLAN.  Also, when you get to this size, don't forget to add the Management Network VRF; you need a way for those thousands upon thousands of physical systems to get back to monitoring and management.
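To make the overlay idea concrete, here is a small, hypothetical tenant-to-VNI mapping on top of the routed underlay, with a dedicated management VRF.  Every name and number in it is an assumption for the sketch, not anyone's production plan.

```python
# Illustrative tenant-to-VNI mapping on top of the routed underlay, plus a
# dedicated management VRF. All names and numbers are invented for the sketch.

VNI_BASE = 10000
tenants = ["web", "app", "db"]

overlay = {
    name: {"vni": VNI_BASE + i, "vrf": f"tenant-{name}"}
    for i, name in enumerate(tenants, start=1)
}

# Keep management reachability in its own VRF so thousands of physical
# systems can always get back to monitoring and management.
overlay["management"] = {"vni": VNI_BASE + 999, "vrf": "mgmt"}

for name, net in overlay.items():
    print(f"{name:12s} -> VNI {net['vni']}, VRF {net['vrf']}")
```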

Hopefully you can recognize the original drawing in the last drawing.  It's still there, but nothing like the switched network of old.






Tuesday, November 3, 2015

Book Review - The Unwritten Laws of Engineering by W.J. King

I was intrigued enough by a Twitter conversation about this book that I purchased a copy online.

Had I written down, over my career, the things that apply to engineers, I have a feeling that many of the topics in this book would have made it into my notes.

As an example, and relevant to me this very day: "Cultivate the habit of seeking other people's opinions and recommendations."

Had I not asked "Jerry" to review some materials I was working on, I would not have known that a partial solution had already been completed and that those materials were available to me to extend the work of a predecessor (or predecessors).  I would have liked to have my name on the total solution, but as it turns out, the solution was expedited by days if not weeks.

The reason this one thing seems so difficult for engineers to understand is that it isn't a sign of weakness, of a lack of knowledge, or of not knowing how to respond to a problem.  It's about understanding the background of the issue and acting with more relevant information.

This book is replete with examples such as this one that may well be applied, daily, by engineers in all professions.

If you're in the mood for some truths, if not entirely Laws of Engineering, I recommend you pick up this book.  


Wednesday, October 14, 2015

The 6 laws (for reference)

I'm posting this with references mostly for my own archiving purpose.

http://www.techrepublic.com/article/the-6-laws-every-cloud-architect-should-know-according-to-werner-vogels/?utm_campaign=buffer&utm_content=buffer5c6e4&utm_medium=social&utm_source=twitter.com

This blog post is excerpted from the TechRepublic article above.  Appreciation to Werner Vogels for the discussion and Conner Forrest for the article.

Lucas Critique

"It is naive to try to predict the effects of a change entirely on the basis of relationships observed in historical data."

Gall's Law

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."

Law of Demeter

"Each unit should have only limited knowledge about other units—only units 'closely' related to the current unit. Each unit should only talk to its friends; don't talk to strangers."

Occam's Razor

"The one with the fewest assumptions should be selected."

Reed's Law

"The utility of large networks, particularly social networks, can scale exponentially with the size of the network."

The Gestalt Principle

"The whole is greater than the sum of its parts."

....

The only thing I'd add to this is "don't get inextricably tied to a single mode," as it almost guarantees future failure.  There are exceptions, so it's likely not a law, but there's a lot of truth to it.

....

"THE LACK OF HISTORIC KNOWLEDGE IS SO FRUSTRATING" -- Ivan Pepelnjak.  Wish this were a law.  Those who fail to learn from history....

....

rfc1925

Friday, October 9, 2015

Modern Network Engineering

I don't know how to stress this enough, but Network Engineering will NOT go away with the advent of Software Defined Networking.

Here's why: Software Defined Networking abstracts the control plane from the data plane.  Both the control and data mechanisms continue to exist.  What does change is the means by which we interact with them.

So, as Software Defined Networking and Network Function Virtualization (and SD-WAN, etc., etc.) continue to enhance the abstraction, fundamental knowledge of how the logic of the systems functions is still required.

I can't say, for sure, that someone who knows a particular command line interface is going to remain continually valuable, though.  Look at what happened with the controller (control plane) developments in wireless networking.  The value of the CLI diminished drastically with the advent of the controller for enterprise wireless networking.  The value of knowing how the wireless system actually operates increased, and so did the value of the people who could balance the change.  The same thing will happen with SDN technology.

REST API and/or eAPI calls are going to replace current system management methods.  Any number of programming languages are going to provide the basis for automation of the service.  Consider methods like Python-scripted interaction with the control plane, currently being done with OpenStack, and you'll have a glimpse into that future.
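As a minimal sketch of what that interaction can look like, here is a Python call to a switch's HTTP API.  The URL, credentials and payload shape are assumptions for illustration (loosely modeled on a JSON-RPC style eAPI); consult your own platform's documentation for the real specifics.

```python
# A minimal sketch of managing a switch over its HTTP API instead of the CLI.
# The URL, credentials and payload are illustrative (loosely modeled on a
# JSON-RPC style eAPI); consult your platform's documentation for specifics.
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "runCmds",
    "params": {"version": 1, "cmds": ["show version"], "format": "json"},
    "id": "demo-1",
}

response = requests.post(
    "https://switch.example.com/command-api",  # hypothetical device endpoint
    json=payload,
    auth=("admin", "password"),                # hypothetical credentials
    verify=False,                              # lab convenience only
    timeout=10,
)
print(response.json())
```

The structured (JSON) response is the point: it can be parsed, compared and acted on by a script, which a screen-scraped CLI session never could be.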

You also shouldn't forget that this isn't the first time software definition has been applied to networking.  Some of you may remember TCL.  Just saying, not the first time.

The other element driving these advancements in networking is the structure of the interface.  JSON or a similar structured format is surely going to play a role in managing software defined elements.

I urge networkers who are concerned about what looks like an uncertain future in networking to explore the pieces that make up the software defined element.  This is the path to value for the individual with Software Defined Networking.

Take a course, read a book, grab stuff off the web like this and figure out how it's done.  You won't regret it.






Tuesday, October 6, 2015

Macrosegmentation is now in the Networking Lexicon (soon we'll abbreviate it)

Macrosegmentation at the networking level just got a definition today; we're sure to remove the hyphen (Macro-segmentation) and abbreviate it.

A warm-up on the details here:

http://www.arista.com/blogs/?p=1245

At its most basic level, macrosegmentation will allow the redirection of traffic between specific points in the network, enabling logical topologies with firewalls and load-balancing systems.
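Just to make the idea concrete, here is a hypothetical sketch of what such a service-insertion policy could look like, expressed in Python.  The field names are invented for illustration; the actual interface belongs to the vendor's controller (see the Arista post linked above).

```python
# Hypothetical sketch of a macrosegmentation-style service insertion policy:
# steer traffic between two segments through a firewall. The field names are
# invented for illustration; the real interface belongs to the vendor's
# controller (see the Arista post linked above).
import json

policy = {
    "name": "web-to-db-inspection",
    "match": {"source_segment": "web", "destination_segment": "db"},
    "redirect_through": [
        {"device": "fw-cluster-1", "type": "firewall"},
    ],
    "fail_action": "drop",  # behavior if the inserted service is unreachable
}

print(json.dumps(policy, indent=2))
```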

Very nifty trick considering:

     Placement of the equipment is location independent.

     Cybersecurity can continue to manage FW rules without blended support requirements.

     It should* work with multi-vendor equipment.

It also brings back fond memories of an Arista whiteboard barstool at VMworld.  Wish I had taken a picture of it.





Monday, October 5, 2015

VP of Electricity

Randy Rayess on TechCrunch proposes that the CIO is the next VP of Electricity.

Imagine the turn of the 20th century: electricity delivery for a company had to be managed.  Often companies would stand up their own infrastructure to deliver electricity within a building or factory.  To support this infrastructure, there would be a team of electrical specialists, electricians, who would maintain the equipment and support the infrastructure.  Once electricity became a utility, much of this was replaced by the vendor and sold as-a-Service to the customer.

Comparing that to the CIO is interesting because there are some parallels, but they aren't complete, and they don't hold for the single-component delivery requirement that sat under the VP of Electricity.

Consider, for instance, that the CIO's primary goal is to keep the IT infrastructure delivering the application.  Actually, applications.  <--  The plural is incredibly important here.  It's not a single service; it's tens, hundreds, sometimes even thousands of applications.

These applications need to inter-operate enough to use common foundational services like networking and data center, platform systems and virtualization, and the growing analytics necessary to make ever more critical business decisions.  We may think about them from a consumption perspective, which reduces many of the applications to in-business-quarter costs; that's great for a business controlling its financial run rate, but it doesn't make them work together.

Inter-operation doesn't happen by magic, and someone needs to be in a position to manage these as-a-Service applications before they sprawl into a buffet-style line of out-of-control applications that not only don't support the business objectives, but don't deliver the critical value required by the business.  Not to mention the potential risks of data breach and loss that come when applications are deployed without planning.

Then consider that the cost of electricity isn't going down.  At best, it's stagnant over long periods of time at commercial rates; over the long run the cost is going up, and it's guaranteed to keep going up.  The use of electricity is also increasing as we put in more general-purpose hardware to support more applications on even more virtualized platforms.

My contention with the article is that while we don't need the turn-of-the-20th-century VP of Electricity, we do need to continue to think about the sunk cost in the delivery of applications.  We need someone thinking about the plethora of applications each industry needs in order to operate, as well as the infrastructure and critical access to both private and public services.

Who better than the person who understands the business demand?