Friday, January 29, 2016

Software Defined Security not in 2016

So, I'm wondering why Software Defined Security didn't take off the way the SDx of most everything else did.

I was on a call recently with one of the big analyst organizations.  When I asked a very specific question about Software Defined Security, it became very apparent that any of the promises of circa 2014 should be forgotten.

Let's take a stroll down memory lane for a refresher:

  1. Simplicity
  2. Automation
  3. Scalability and Flexibility
  4. Cost Effectiveness
  5. Increased Security


Which got me thinking about why.  I took the next 15 minutes at my whiteboard and came up with this.  It's not all-inclusive; it's just what I could come up with quickly.

Value Chain Mapping Security Entrypoints

Mobile Device Management isn't working out so great when it places itself above the user experience.  I'm just going to come out and say it: it didn't work.

Then, looking at the variety of combinations of "layers of security" applied to secure an application delivery, simplicity is simply out the door.  Some of these (identified in red) require a specialist.

Automation, while possible, covers the aspect or profile of the security control mechanism.  Putting an automation wrapper around all of it, well, that's just gruesome.  The best methods available today are layered automation by mechanism, with some of these mechanisms still in need of zero-touch provisioning capability.
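To make "layered automation by mechanism" concrete, here's a minimal sketch.  The class and method names are hypothetical and don't map to any particular product's API; the point is only that each security control gets its own automation handler, and the delivery pipeline runs them in order rather than hiding everything behind one monolithic wrapper.

```python
# Hypothetical sketch: layered automation, one handler per security mechanism.
# None of these classes correspond to a real vendor API; they only show the shape.

class FirewallAutomation:
    def provision(self, app):
        print(f"applying firewall policy for {app}")

class WafAutomation:
    def provision(self, app):
        print(f"deploying WAF rules for {app}")

class EndpointAutomation:
    def provision(self, app):
        # Example of a mechanism still lacking zero-touch provisioning:
        # it signals that an operator (the "specialist") has to step in.
        raise NotImplementedError("endpoint agent rollout still needs a human")

def secure_delivery(app, layers):
    """Run each mechanism's own automation instead of one monolithic wrapper."""
    for layer in layers:
        try:
            layer.provision(app)
        except NotImplementedError as reason:
            print(f"manual step required for {type(layer).__name__}: {reason}")

secure_delivery("payroll-app", [FirewallAutomation(), WafAutomation(), EndpointAutomation()])
```

Even in this toy form, the seams are visible: the wrapper around the wrappers adds little, and the mechanisms that can't yet be provisioned hands-off break the chain.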

Given where and how these different security functions need to be applied, scalability needs to be measured in relation to something.  What do you choose as the focus when these functions all apply at different levels and locations?

Flexibility pretty much follows scalability.

Cost effectiveness.  Now factor in the cost of maintaining devices, operating systems, security applications, application delivery frameworks, people, programmers and the application itself.  This just doesn't look like it is going to be easy to optimize.

The final item really requires reducing the attack surface to an absolute minimum.  We're running new applications, on new and old software, on new and old operating systems, with new (and old) flaws and exposures.  Increased Security has to be measured against something, like the absolute elimination of attack points in the application delivery.  That's easy enough to do if you don't want the application to work, not so easy when zero-day exploits are announced all the time, exploits that take advantage of possibilities not even envisioned when the software was designed for any of the devices or software relationships in the path.

The contention here is that the application delivery model has to change before Software Defined Security can holistically assume the advantages applied to other constructs of the SDx model.

In any case, I don't believe 2016 or 2017 will be the year of Software Defined Security. #mapping

Friday, January 15, 2016

Mapping Exercise - Enterprise Virtualization - update

Simon Wardley http://blog.gardeviance.org keeps posting maps, exercises of maps and guidelines for maps.  Maps, maps, maps.  As he (@swardley) recently followed me on Twitter (@abusedbits), I suddenly developed an immense sense of trust and responsibility to have a look at his teachings.

In a similar fashion, but most certainly naïve to the totality of his methods, I'm mapping Enterprise Virtualization Services.  One must start somewhere....

What I was hoping for was a simplified method to create a level of prediction about what could be expected in 2016, based on where these services were, from my PoV, in 2015.



The results are as follows:

Enterprise Virtualization Value Chain Mapping 2015
As position is relative, the map is comprised of elements of Enterprise Virtualization.  These are connected where natural connections exist and placed to the best of my ability. 

I then decided to add a direction and magnitude vector for each element, also relative, as a predictor of how those elements would advance on the map.

Network of the legacy variety is getting more complex the more virtualization is applied.  So, up and to the left it is.  In the opposing direction, Network using SDN should get less complex as SDN functions are developed to be used with the control plane, so it moves to the right.  Certainly not downward, as it still has some level of visibility to the user.

Storage is similar to Network.  Traditional Storage will be a lot less fun to deal with at any higher densities than it is today, with a direction up and left.  Software Defined Storage will start to replace traditional storage methods, so it moves to the right.

In both cases, when the Software Defined capability completely outpaces the traditional, a wholesale shift to the +SD feature set should take place.

Open virtualization in the enterprise still needs some nurturing.  It should slide right on the map as users get more familiar with it.  It probably won't outpace more traditional virtualization methods, so it only moves into the enterprise product space.  I also realize there are vendors offering it as a product, but I have difficulty justifying it in the product space based on what I've seen to date.

Platform ecosystems are tenuously on the border of products.  They still require a substantial amount of care and feeding.  I'm parking them for now.

Container technology should be one of the big movers.  Enterprise adoption is masked by the reliance on 2nd platform application types, but should be a clear win for any enterprise that makes the jump to 3rd platform, so, to the right.

Hybrid Cloud is still a manual integration process, but it should improve, so it moves to the right a smidge.
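Before the 2016 view, here's a rough sketch of how I'm treating those vectors.  The coordinates below are placeholders I've made up for illustration; only the directions come from the element-by-element notes above, with the usual value chain mapping axes (x = evolution, y = visibility to the user).

```python
# Illustrative only: positions are invented placeholders, and the direction
# vectors encode the qualitative calls made in the text above.
# Axes: x = evolution (left -> right, genesis -> commodity),
#       y = visibility to the user (down -> up).

elements_2015 = {
    "legacy network":           {"pos": (0.55, 0.40), "vector": (-0.05, +0.05)},  # up and left
    "network with SDN":         {"pos": (0.45, 0.40), "vector": (+0.10,  0.00)},  # to the right
    "traditional storage":      {"pos": (0.60, 0.35), "vector": (-0.05, +0.05)},  # up and left
    "software defined storage": {"pos": (0.45, 0.35), "vector": (+0.10,  0.00)},  # to the right
    "open virtualization":      {"pos": (0.50, 0.45), "vector": (+0.05,  0.00)},  # slides right
    "platform ecosystems":      {"pos": (0.55, 0.55), "vector": ( 0.00,  0.00)},  # parked for now
    "containers":               {"pos": (0.40, 0.50), "vector": (+0.15,  0.00)},  # big mover
    "hybrid cloud":             {"pos": (0.50, 0.60), "vector": (+0.05,  0.00)},  # a smidge right
}

def project(elements):
    """Apply each element's vector to produce the next year's predicted position."""
    return {name: (e["pos"][0] + e["vector"][0], e["pos"][1] + e["vector"][1])
            for name, e in elements.items()}

for name, pos in project(elements_2015).items():
    print(f"{name:26s} -> predicted 2016 position {pos}")
```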

The resulting prediction, with hopeful vectors for 2016:
Enterprise Virtualization Value Chain Mapping 2016
#mapping

Update:  I picked some tidbits from the ensuing Twitter conversation.

Mapping smaller scale predictors in XaaS space. Wondering if you can have a look at this.


Using a perception of the previous year's vectors.


@swardley Consumer Cloud Value Chain Mapping
@swardley Consumer Cloud Value Chain Areas
Ok, I had a look at your map. I'd counter (which is part of the point of mapping) with this.



PS: I would have added unikernel to the map at the genesis/custom limit, especially linked with NFV / storage / deploy.

That's the point of a map; people can add / delete / debate etc.


In the fast movers, makes perfect sense. Are you confident in the more conservative Enterprise space re: adoption?

Well, as a priority order then I'd expect ... 1) access to utility infrastructure to trump deployment (sourcing) practice ...

2) access to a coding platform to trump concerns over whether it uses VMs / containers etc. ... it's a question of what is more important to the user.

So, if someone said to me I have a non-commodity / non-utility based infrastructure environment which uses containers - well ...

... provision of containers may be a differentiator, but my need is for very utility-based infrastructure and this trumps.

That's the real trick there, isn't it.

Hence position relative to an anchor ... the anchor being user needs.

In that case you are adding inertia in your map, which I feel is counterproductive. You want now, not perception of now.

I was trying to create a prediction based on the map at smaller scales than WAR.

In that case, adding the spread of usage on the map would probably be useful.


Finding duplication and bias is a big bugbear of mine.

I've seen it before, but really "saw" it for the first time in the presentation today.

awesome conversation

Enterprise Cloud

Let's have a look at enterprise cloud.

Enterprise Cloud Framework

The large majority of IDC's 2nd and 3rd Platform capabilities can be run and managed within the enterprise virtualization framework.

Built primarily on commodity x86 components, the financial model for this type of service closely follows the price-performance curve within the consumer virtualization industry (read "consumer cloud").  If built on vertically scaled systems instead, it will vary considerably.

Cost per workload starts to flatten out somewhere between 300 and 700 workloads, referenced to a 2 vCPU / 2 GB vRAM reference-unit virtual machine.  Storage is a mix of DAS (solid state and/or spinning) and archival as needed to service the application requirements.  The cost of storage is technology- and size-dependent.
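As a back-of-the-envelope sketch of why the cost per workload flattens, the following uses invented placeholder figures for fixed and per-host costs (they are not quotes or measurements); only the 2 vCPU / 2 GB vRAM reference unit and the 300-700 workload range come from the paragraph above.

```python
# Invented placeholder numbers; only the shape of the curve matters here.
# Reference unit: one 2 vCPU / 2 GB vRAM virtual machine.

FIXED_COST = 250_000          # control plane, racks, network core, people (placeholder)
COST_PER_HOST = 12_000        # commodity x86 host, fully burdened (placeholder)
REF_UNITS_PER_HOST = 40       # reference VMs per host (placeholder)

def cost_per_workload(workloads):
    hosts = -(-workloads // REF_UNITS_PER_HOST)     # ceiling division
    return (FIXED_COST + hosts * COST_PER_HOST) / workloads

for n in (100, 300, 500, 700, 1000):
    print(f"{n:5d} workloads -> ~${cost_per_workload(n):,.0f} per workload")
```

The point is only the shape: once the fixed control-plane and facility costs are amortized across a few hundred reference units, each additional workload mostly carries its own share of host cost, and the curve goes flat.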

It can be located nearly anywhere that has suitable power, cooling and data access.  This provides the business with options to utilize cloud capabilities without any of the concerns arising from a multi-tenant solution (read "consumer cloud").

Disaster recovery can be any similarly constructed system, with the typical limitations of application and storage latencies, along with appropriate data access to service the workload DR requirements.  Ideally the "similarly constructed" DR locks in control plane and hypervisor type.

In bare metal delivery, it can support interconnection to applications that have requirements above the application virtualization maximums and/or alternative bare metal operating systems.

---

Where things get "interesting"….

     For Network, please review the blog entry http://www.abusedbits.com/2015/11/spine-and-leaf-nodes.html for some of the concepts.  Consider that the network above the "logical rack" (that supported by a ToR switching pair) really needs to have horizontal scale; it needs to be extremely robust and support significant potential East-West traffic.  It also needs to support bare metal integration and containers, in addition to the virtual hosts.
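To put a number on the East-West concern, here's a hedged sketch; the port counts and link speeds are assumptions for illustration, not a design recommendation.  The oversubscription ratio at the leaf-to-spine layer is what the horizontal scale has to protect.

```python
# Assumed, illustrative port counts/speeds for a ToR ("logical rack") pair;
# swap in real hardware figures to make this useful.

SERVER_PORTS_PER_LEAF = 48     # 10 GbE host-facing ports per leaf/ToR switch
SERVER_PORT_GBPS = 10
UPLINKS_PER_LEAF = 6           # uplinks toward the spine
UPLINK_GBPS = 40

def leaf_oversubscription():
    downstream = SERVER_PORTS_PER_LEAF * SERVER_PORT_GBPS   # potential East-West load
    upstream = UPLINKS_PER_LEAF * UPLINK_GBPS                # capacity toward the spine
    return downstream / upstream

print(f"leaf-to-spine oversubscription: {leaf_oversubscription():.1f}:1")
# 480 Gbps of server-facing capacity over 240 Gbps of uplink => 2:1 here;
# adding spines (more uplinks per leaf) is the horizontal scale knob.
```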

     Enterprises should be mindful of the Operations Management requirements of their enterprise cloud service, particularly as it pertains to DevOps in the management and lifecycle of the infrastructure and application.  Lower in visibility, but enormously valuable to the enterprise, things like directory services, host security, scanning and platform overlay need to be considered.  That, combined with normal enterprise functions of backup, DR, monitoring and administrative access, really should be looked at as a cohesive set of feature-function requirements.  Monitoring, well, un-monitored cloud is simply frightening.

     Lastly, the APIs.  Adoption of a modern infrastructure or platform initiative, which this directly relates to, is all about access to the APIs.  There are APIs for the hardware at the point of management.  APIs for the control plane capabilities, not all of them integrated without development work, as the control plane functions may be separate or standalone.  APIs for any platform substrates.  APIs for software defined networking.  Then, to top it all off, multi-vendor cloud services that deliver Hybrid Cloud capability across disparate systems will have APIs.  Read this as "you'll need programmers" OR a vendor that provides these capabilities prepackaged.
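To illustrate the "APIs at every layer" point, here's a deliberately hypothetical sketch; none of these client classes correspond to a real vendor SDK, and the names are mine.  The seams between the layers are where the programming effort (or the prepackaged vendor offering) goes.

```python
# Hypothetical clients only; each real environment substitutes its own SDKs.

class HardwareApi:
    def power_on(self, node): ...

class ControlPlaneApi:
    def create_vm(self, name, vcpus, ram_gb): ...

class SdnApi:
    def attach_network(self, vm, segment): ...

class HybridCloudApi:
    def burst(self, workload, provider): ...

def deploy_workload(name):
    # Each step talks to a different API layer; none are integrated out of
    # the box, which is the "you'll need programmers" part.
    HardwareApi().power_on(node="rack01-node07")
    ControlPlaneApi().create_vm(name=name, vcpus=2, ram_gb=2)
    SdnApi().attach_network(vm=name, segment="app-tier")
    HybridCloudApi().burst(workload=name, provider="secondary-site")

deploy_workload("payroll-app")
```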