Cloud Computing: Swarm Intelligence and Security in a Distributed World

Reading through my blog feeds I came across something Hoff wrote in response to Reuven Cohen’s “Elastic Vapor: Life In the Cloud” blog; in particular I wanted to respond to the following comment (here):

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

I also wrote about this concept in a series of posts on swarm intelligence…

Evolving Information Security Part 1: The Herd Collective vs. Swarm Intelligence (here)

The only viable option for collective intelligence in the future is through the use of intelligent agents, which can perform some base level of analysis against internal and environmental variables and communicate that information to the collective without the need for centralized processing and distribution. Essentially the intelligent agents would support cognition, cooperation, and coordination among themselves built on a foundation of dynamic policy instantiation. Without the use of distributed computing, parallel processing and intelligent agents there is little hope for moving beyond the brittle and highly ineffective defenses currently deployed.
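
To make the signaling idea concrete, here is a minimal sketch of what one agent-to-agent telemetry message might look like in Python. Everything here – the ThreatTelemetry name, its fields, the JSON wire format – is an illustrative assumption, not part of any defined protocol:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

# Hypothetical telemetry message an end-node agent might emit to its peers and
# to management facilities; field names are illustrative, not a defined standard.
@dataclass
class ThreatTelemetry:
    agent_id: str                     # identity of the reporting end node
    observed: str                     # short description of the local event
    severity: float                   # the agent's local assessment, 0.0-1.0
    disposition: str                  # action already taken locally
    msg_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    timestamp: float = field(default_factory=time.time)

    def to_wire(self) -> bytes:
        """Serialize for transmission up- and downstream."""
        return json.dumps(asdict(self)).encode("utf-8")

# Example: a node signals a threat observation without central processing.
msg = ThreatTelemetry(
    agent_id="edge-node-42",
    observed="repeated auth failures from 198.51.100.7",
    severity=0.7,
    disposition="rate-limited",
)
print(msg.to_wire())
```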

Evolving Information Security Part 2: Developing Collective Intelligence (here)

Once the agent is fully aware of the state of the device it resides on, physical or virtual, it will need to expand its knowledge of the environment it operates in and its positioning relative to others. Knowledge of self, combined with knowledge of the environment, expands the context in which agents could effect change. By communicating with other agents, responses to threats or other problems could be identified more efficiently, regardless of location.

As knowledge of self moves to communication with others there is the foundation for inter-device cooperation. Communication and cooperation between seemingly disparate devices, or device clusters, creates collective intelligence. This simple model creates an extremely powerful precedent for dealing with a wide range of information technology and security problems.
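
As a rough sketch of that self-plus-environment model, an agent might blend its own risk assessment with what its neighbors report before deciding to act. The class, averaging rule, and threshold below are all assumptions made for illustration:

```python
# Sketch of an agent blending knowledge of self with knowledge of its
# environment; the averaging rule and threshold are assumptions.
class SwarmAgent:
    def __init__(self, agent_id: str):
        self.agent_id = agent_id
        self.local_risk = 0.0          # knowledge of self
        self.neighbor_risk = {}        # knowledge of the environment

    def hear(self, peer_id: str, risk: float) -> None:
        """Record a peer's self-reported risk (cooperation)."""
        self.neighbor_risk[peer_id] = risk

    def collective_risk(self) -> float:
        """Blend the local view with the neighbors' views."""
        views = [self.local_risk, *self.neighbor_risk.values()]
        return sum(views) / len(views)

    def should_respond(self, threshold: float = 0.5) -> bool:
        # A node can react to trouble its neighbors see before it does.
        return self.collective_risk() >= threshold

agent = SwarmAgent("edge-node-42")
agent.local_risk = 0.3                 # mild local suspicion
agent.hear("edge-node-17", 0.8)
agent.hear("edge-node-23", 0.6)
print(agent.should_respond())          # True: the collective view crosses the bar
```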

Driving the intelligent agents would be a lightweight and adaptable policy language that is easily interpreted by each agent’s policy engine. New policies would be created and shared between the agents, and the system would move from simply responding to changes to adapting on its own. The collective and the infrastructure will learn. This would enable a base level of cognition where seemingly benign events or state changes, coupled with similarly insignificant data, could be used to lessen the impact of disruptions or incidents, sometimes before they even occur.
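
A lightweight, shareable policy could be as simple as plain data interpreted by each agent’s engine. The schema, operators, and action names below are invented purely to illustrate the idea:

```python
from typing import Optional

# Invented, minimal policy schema: plain data an agent's policy engine could
# interpret, and that agents could share with one another.
POLICY = {
    "name": "quarantine-on-collective-alarm",
    "when": {"metric": "collective_risk", "op": ">=", "value": 0.5},
    "then": "quarantine",
}

OPS = {">=": lambda a, b: a >= b, "<": lambda a, b: a < b}

def evaluate(policy: dict, state: dict) -> Optional[str]:
    """Return the policy's action if its condition matches the current state."""
    cond = policy["when"]
    if OPS[cond["op"]](state[cond["metric"]], cond["value"]):
        return policy["then"]
    return None

# Because the policy is just data, a new rule learned by one agent can be
# pushed to the rest of the collective without redeploying the agents.
print(evaluate(POLICY, {"collective_risk": 0.7}))  # -> quarantine
```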

The concept of distributed intelligence and self-healing infrastructure will have a major impact on a highly mobile world of distributed computing devices; it will also form the foundation for how we deal with the loss of visibility and control of the “in the cloud” virtual storage and data centers that service them.

Google Chrome Takes Aim at the Microsoft OS

Google recently “leaked” a cartoon providing information on their upcoming browser named “Chrome” (here) and (here) – personally I will be impressed when the movie comes out and there is a guest appearance by Stan Lee. There has already been a tremendous amount of discussion and opinion on the ramifications of such a release, most of it centering on Google taking aim at Internet Explorer. Hoff believes this signals Google’s entry into the security market (here); obviously the acquisitions of GreenBorder and Postini and the release of Google Safe Browsing were clear signals that security was a critical part of the equation. But what is most important here, and what seems to be missed by much of the mainstream media, is that Google is creating the foundation to render the underlying Microsoft PC-based operating system obsolete and deliver the next evolutionary phase of client computing. Hoff pointed this out in his earlier post (here):

So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.

A highly portable, highly accessible, secure, thin-client-like, cloud-computing software-as-a-service offering that in the next 5-10 years has the potential to render the standard PC-based operating systems virtually obsolete – couple this with streaming desktop virtualization delivered through the Internet and we are quickly entering the next phase of the client computing evolution. You doubt this? OK, ask yourself a question: if Google is to dominate computing through the next decade, can it be done on the browser battlefield of old, fought in the same trench-warfare-like manner experienced during the early browser wars between Microsoft and Netscape? Or will it require a much larger land grab? And what is larger than owning the desktop – fixed or mobile, physical or virtual, enterprise or consumer – regardless of the form it takes?

On another note, I recently posted the “7 Greatest Ideas in Security” (here); notice that many of them have been adopted by Google in their development of Chrome, including:

  • Security as part of the SDL – designed from scratch to accommodate current needs: stability, speed, and security. The development process also introduces fuzzing and automated testing using Google’s massive infrastructure.
  • The principle of least privilege – Chrome is essentially sandboxed, which limits the possibility of drive-by malware or other attack vectors using the browser to infect the base OS or adjacent applications; the browser’s computation cannot read from or write to the file system. Of course social engineering still exists, but Google has an answer for that as well, providing its free Google Safe Browsing capabilities to automatically and continuously update a blacklist of malicious sites. Now they just need to solve the ecosystem problem of plug-ins bypassing the sandboxing security model.
  • Segmentation – multiple processes, each with their own memory and global data structures, not to mention the sandboxing discussed above (a toy sketch of this process-segmentation idea follows the list).
  • Inspect what you expect – Chrome’s task manager provides visibility into how the various web applications are interacting with the browser.
  • Independent security research – a fully open-source browser that you can guarantee will be put through the research gauntlet.
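
To illustrate the segmentation and least-privilege points above, here is a toy Python sketch of the process-per-tab idea: each “tab” runs in its own OS process, so a crash in one is contained. This is only a conceptual sketch, not Chrome’s implementation, which additionally locks each sandboxed process out of the file system and other OS resources:

```python
import multiprocessing as mp

# Toy illustration of process segmentation: each "tab" renders in its own OS
# process, so a crash in one cannot corrupt the others. Chrome's real design
# goes further, dropping OS privileges inside each sandboxed process.
def render_tab(url: str) -> None:
    if "malicious" in url:
        raise RuntimeError(f"renderer for {url} crashed")
    print(f"rendered {url}")

if __name__ == "__main__":
    tabs = ["http://example.com", "http://malicious.test", "http://news.example"]
    procs = [mp.Process(target=render_tab, args=(u,)) for u in tabs]
    for p in procs:
        p.start()
    for p, url in zip(procs, tabs):
        p.join()
        # A nonzero exit code means only that tab's process died.
        print(f"{url}: exit code {p.exitcode}")
```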