Top 10 Most Overhyped Technology Terms

We have entered a new era of information technology, an era where the clouds are moist, the data is obese and incontinent, and the threats are advanced, persistent, and the biggest ever. Of course, with all the paradigm-shifting, next-generation, FUD vs. ROI marketing, it's important to remember that we need to balance innovation against misunderstood expectations, vendor double-speak, and relentless enterprise sales guys.

Contrary to the barrage of marketing, these technologies won't make you rich, teach you how to invest in real estate, help you lose weight or grow a full head of hair, make you attractive to the opposite sex, or solve all your problems. In some cases they can improve the efficiency and effectiveness of your operating environment, but that requires proper planning, expectation setting, and careful deployment…and on that note, I give you the top 10 most overhyped technology terms of the last decade.


Cloud-Computing is Dead, Turn the Internet Off, Amazon Failed – Again!

 

So it appears the Internet went down, or so many claimed when they were presented with 404 errors while attempting to watch "Georgia Hillbilly Massacre 17: The Return of the Banjo Man" on Netflix – since Netflix is selective about what you can stream, they certainly weren't queuing up the latest and greatest new releases, but that is a totally different rant – or attempting to declare themselves the Mayor of "who gives a rat's ass where you are right now" on Foursquare.

Last time this happened, some started to claim that it rocked the very foundation of confidence in cloud computing (here), yet they failed to juxtapose Amazon's operational failures against the universe of enterprise operational failures, security compromises, and general administrative stupidity that plagues nearly 99.98% of every organization on Earth (minus the DPRK's website – there really isn't much more you can do to fudge that one up).


Client-Side Virtualization Part III: HAL 9000, Hosted Virtual Desktops, and the Death Star


Systems and security management is difficult, ineffective, costly and becoming ever more so in increasingly distributed, heterogeneous, complex, and mobile computing environments…

  • 98% of all external attacks take advantage of poorly administered, misconfigured, and unmanaged systems (Source: Verizon Data Breach Investigations Report 2009)
  • A locked down and well managed PC can cost 42% less than an unmanaged one (Source: Gartner – The Total Cost of Ownership: 2008 Update)
  • The direct costs incurred in a “somewhat managed” PC are only slightly lower than the direct costs of an unmanaged PC, because of expenses to maintain underutilized or dysfunctional management systems (Source: Gartner – The Total Cost of Ownership: 2008 Update)

The benefits provided by server virtualization are being realized as server consolidation has enabled cost reduction and efficiencies in data center/server management. This is of course leading many to ask the question "why can we not virtualize our desktops as well?"

Cloud-Computing Solves Patching Problem…IT Admins Please Report to HR for Immediate Dismissal


So apparently the latest version of the Qualys Laws of Vulnerability Report has Qualys jumping to some pretty outrageous claims about how cloud computing – invented by Qualys, according to Courtot (insert cute smiley here) – can secure IT more effectively, or allow people to stop patching, or some such nonsense (thanks to Hoff for the heads up).

Anyway, the logic flaw goes something like this…

How Cloud, Virtualization, and Mobile Computing Impact Endpoint Management in the Enterprise

I had an interesting conversation with a peer recently that started with his statement that "innovation was all but dead in security". The implication was that we had done all we could do and that there was very little more that would be accomplished. Of course I felt this was an overly simplistic and narrow view, not to mention that it completely ignores the rather dramatic impact changes in computing infrastructures will have over the next 5-10 years and beyond.

How have enterprise architectures evolved over the past 10 years, and how will they continue to evolve? Simply put, we are pushing more of our computing assets, and the infrastructure that supports them, out into the Internet/cloud. It began with mobile computing devices, remote offices, and telecommuters and is now moving into aspects of the traditional internal infrastructure, such as storage, application/service delivery, and data management. This has forced IT, in some cases, to radically redefine the technologies and processes they implement to provide even the basics of availability, maintenance, and security. How does an IT organization maintain the health and availability of the evolving enterprise while securing the environment? How do they ensure visibility into, and control over, an increasingly complex and opaque infrastructure?

Moving Security through Visibility to Implementing Operational Controls


Quick thought for the day. Most technologies in the security world move through a predictable cycle of adoption. First an organization implements a solution to gain visibility into the scope of the problem (VA, IDS, DLP/CMF, SIEM). Then, once it becomes apparent that the problem is vast and overwhelming, it moves to operationally implement technical controls to protect the environment and enforce organizational policies. When this switch-over occurs, adoption of the pure visibility tools becomes eclipsed by the control tools. This doesn't mean the visibility tools are ineffective; it generally means that the scope of the problem is understood to the point that an organization can effectively implement controls, and that the problem has successfully moved from the security team to the operations team. You can apply this same logic to any segment of security and to any new technology, including cloud computing, virtualization, and all the little shiny objects in between.

Examples of this movement from visibility to control include intrusion detection, vulnerability assessment, and content monitoring and filtering. Let's look at VA. Its initial use was to determine the scope of the 'exposure' problem, that is, to scan the environment against a database of known vulnerabilities to determine the extent of exposure. Unfortunately the volume of output was very high and was presented in a format that was not easily consumable or actionable by the IT operations team. What exactly does one expect the server admin to do with 300 pages of vulnerability data? There were also inherent issues of fidelity. The use of VA tools moved into targeted scans to determine what needed to be patched, which resulted in the operational implementation of patch management technologies, which soon overtook the market adoption of vulnerability assessment tools. There was also the pressure of auditors looking for the implementation of technical controls, and although vulnerability assessments were viewed as an important first step, without the workflow and controls to address the volume of vulnerability data they proved less effective in improving operational security than was originally thought.

It became clear that vulnerability management needed to cross the chasm to become an operationally actionable tool; without remediation capabilities the organization would always be under a mountain of vulnerabilities, and use of the technology would linger in the trough of disillusionment. Security configuration management met that need. It allowed an organization to define the desired configuration state of an environment against industry best practices (NIST, DISA, CIS, etc.) and then to operationally implement technical controls to identify non-compliant devices and enforce policy. Security configuration management also had the benefit of providing a common language between the security, audit, and operations teams. I wrote about this in a series of posts (here), (here), and (here).
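
To make that visibility-to-control shift concrete, here is a minimal, purely illustrative sketch of the security configuration management idea: define a desired configuration state, compare each device against it, and hand operations a short, actionable list of drift rather than a 300-page vulnerability report. The baseline settings, device names, and values below are invented for illustration; a real deployment would pull its baseline from published benchmarks (CIS, DISA, NIST) and device data from an actual configuration source.

```python
# Purely illustrative: a toy compliance check against a hypothetical baseline.
# Real baselines would come from CIS/DISA/NIST benchmarks and real device data
# from a configuration management source.
DESIRED_BASELINE = {
    "password_min_length": 12,
    "ssh_root_login": "disabled",
    "auto_update": "enabled",
}

device_configs = {
    "web-01": {"password_min_length": 12, "ssh_root_login": "disabled", "auto_update": "enabled"},
    "db-02":  {"password_min_length": 8,  "ssh_root_login": "enabled",  "auto_update": "enabled"},
}

def compliance_report(devices, baseline):
    """Return, per device, the settings that drift from the desired state."""
    findings = {}
    for name, config in devices.items():
        drift = {
            setting: (config.get(setting), expected)
            for setting, expected in baseline.items()
            if config.get(setting) != expected
        }
        if drift:
            findings[name] = drift
    return findings

for device, drift in compliance_report(device_configs, DESIRED_BASELINE).items():
    for setting, (actual, expected) in drift.items():
        print(f"{device}: {setting} is {actual!r}, policy requires {expected!r}")
```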

Amazon AWS Security…What a Cloudy Web We Weave

Recently I posted some thoughts on cloud security (here), (here), and (here). The bottom line still holds true…

When we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment and in many cases we lose all visibility into the controls themselves, that being said…Cloud Computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

In September 2008 Amazon released a paper entitled "Amazon Web Services: Overview of Security Processes" which discusses, at a high level, aspects of Amazon's AWS (Amazon Web Services) security model. Essentially it says that Amazon will provide a base level of reasonable security controls across their infrastructure, while the enterprise is required to provide the security controls for their guest OS instances and the other attributes of the customer environment, including data backup, controls, and secure development.
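
To illustrate the customer's half of that joint model, here is a rough sketch of the kind of check that remains the consumer's responsibility: auditing your own guest and network configuration rather than Amazon's underlying infrastructure. It uses the modern boto3 SDK (which obviously post-dates the paper) and a single example rule, flagging security groups that leave SSH open to the entire Internet; treat it as an assumption-laden sketch, not a prescription.

```python
# Sketch only: assumes AWS credentials/region are already configured, uses the
# modern boto3 SDK, and checks one illustrative rule (SSH exposed to the world).
import boto3

ec2 = boto3.client("ec2")

for group in ec2.describe_security_groups()["SecurityGroups"]:
    for permission in group.get("IpPermissions", []):
        from_port = permission.get("FromPort")
        to_port = permission.get("ToPort")
        # Does this rule cover TCP port 22 (or all traffic)?
        covers_ssh = permission.get("IpProtocol") in ("tcp", "-1") and (
            from_port is None or from_port <= 22 <= (to_port or from_port)
        )
        # Is it open to any source address?
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in permission.get("IpRanges", [])
        )
        if covers_ssh and open_to_world:
            print(f"{group['GroupId']} ({group['GroupName']}): SSH open to 0.0.0.0/0")
```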

The biggest problem is that you, as the consumer of this technology, will not be able to audit the security controls. You, as the consumer of this technology, will need to rely on their assertions of the controls and static (SAS 70) audits that these controls are actually in place – sans details of course.

The other big problem with the "joint" security model Amazon proposes is that it adds a level of complexity for the organization utilizing the services. They now have to manage, report against, and provide accountability for the tsunami of compliance audits in a mixed environment where part of the infrastructure is maintained and secured by Amazon and other parts must be maintained and secured by the customer. This is in addition to, but not necessarily in cooperation with, the customer's current operational security model.

The rest of the paper weaves its way through traditional security mechanisms – they use firewalls, they require SSH access to remote boxes, and they will totally ban someone for port scanning – as well as mechanisms that are less traditional, and also far less mature or proven, such as relying on controls within the Xen hypervisor.

So what are the salient aspects of the paper? Well you can read the gory details – or lack thereof – (here)

Amazon AWS, Google App Engine, Microsoft Azure, and More – Part 1: Can We Secure The Cloud?

Cloud computing, or as I like to call it the return of the mainframe and thin-client computing architecture – only cloudier, has been creating a lot of interesting discussion throughout IT recently.

Cloud computing we will define as any service, or set of services, delivered through the Internet (Cloud) without requiring additional infrastructure on the part of the organization. Although a broad definition, it encompasses everything from storage and capacity services, to applications like CRM or email, to development platforms, and everything in between that is delivered and accessed through the Internet (Cloud).

Obviously the concept of ubiquitous broadband connectivity, combined with a highly mobile workforce enabled to be productive independent of location, and with the promise of limited, if any, additional infrastructure costs, offers new levels of efficiency for many organizations looking to leverage and extend their shrinking IT budgets.

There is little doubt that cloud computing offers benefits in how organizations drive greater value from their IT dollars, but there are also many trade-offs that can dramatically reduce, or negate, those benefits altogether. Understanding these trade-offs will allow an organization to make the right decisions.

As with most advancements in computing, security is generally an afterthought, bolted on once the pain is great enough to elicit the medication. Security is sort of like the back pain of IT: enhancements tend to come only once agility (availability, reliability, etc.) is somehow inhibited, or because they are prescribed as a result of a doctor's visit (compliance audit). Cloud computing is no different.

But before we can understand the strengths or inadequacies of cloud computing security models, we need an understanding of the baseline security principles that all organizations face; this will allow us to draw parallels and define what is and isn't an acceptable level of risk.

Again, for the sake of brevity I will keep this high-level, but it really comes down to two main concepts: visibility and control. All security mechanisms are an exercise in trying to gain better visibility or to implement better controls, all balanced against the demands of the business. For the most part, the majority of organizations struggle with even the most basic of security demands. For example, visibility into the computing infrastructure itself:

  • How many assets do you own? How many are actively connected to the network right now? How many do you actively manage? Are they configured according to corporate policy? Are they up to date with the appropriate security controls? Are they running licensed applications? Are they functioning to acceptable levels? How do you know?
  • How about the networking infrastructure? Databases? Application servers? Web servers? Are they all configured properly? Who has access to them? Have they been compromised? Are they secure against the universe of known external threats? How do you know?
  • Do internal applications follow standard secure development processes? Do they provide sufficient auditing capabilities? Do they export this data in a format that can be easily consumed by the security team? Can access/authentication anomalies be easily identified? How do you know?
  • What happens when an FTE is no longer allowed access to certain services/applications? Can they access them even after they have been terminated? Do they try? Are they successful? How do you know?

These are all pretty basic security questions, and only a small subset of the issues IT is concerned with, yet most organizations cannot answer any one of them, let alone all of them, without significant improvement to their current processes. It is fair to say that the majority of organizations lack adequate visibility into their computing infrastructures.
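
As a concrete illustration of the very first visibility question above (how many assets do you own, and how many do you actively manage right now?), here is a toy reconciliation of three hypothetical data sources: the asset register, the management agents reporting in, and a network discovery scan. The addresses and sources are invented; the point is simply that answering "how do you know?" requires joining these views.

```python
# Hypothetical inputs: in practice these would come from a CMDB export,
# management-agent check-ins, and a discovery scan.
owned_assets    = {"10.0.1.5", "10.0.1.6", "10.0.1.7", "10.0.2.20"}   # asset register
managed_assets  = {"10.0.1.5", "10.0.1.7"}                            # agents reporting in
live_on_network = {"10.0.1.5", "10.0.1.6", "10.0.3.99"}               # discovery scan

unmanaged_but_live = (owned_assets - managed_assets) & live_on_network
unknown_devices    = live_on_network - owned_assets
ghost_assets       = owned_assets - live_on_network

print(f"Owned, live, but not actively managed: {sorted(unmanaged_but_live)}")
print(f"On the network but not in the asset register: {sorted(unknown_devices)}")
print(f"In the register but not seen on the network: {sorted(ghost_assets)}")
```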

Of course the lack of visibility doesn't imply a lack of control:

  • Are assets that are not actively managed blocked from accessing corporate services? Are they blocked from accessing internal applications? Based on what criteria – lack of policy adherence? How granular is the control? And if you lack visibility how can you be sure the control is working?
  • What controls have you implemented to prevent external access to internal resources? Does this apply to mobile/remote employees? How long after an employee is released does it take to remove access to all corporate resources? What authentication mechanisms are in place to validate the identity of an employee accessing corporate resources? Without visibility how do you know?
  • What controls are in place to ensure the concept of least privilege? What controls are in place to ensure internal applications (web, non-web, or modifications to COTs) adhere to corporate secure coding standards? If you lack visibility how do you know?
  • What controls are in place to ensure that a malicious actor cannot access internal corporate resources if they have stolen the credentials of a legitimate employee? How do you know the controls are adequate?

Again, this is just a small subset of the controls IT must be concerned with. As with the problem of visibility, most organizations are barely able to implement proper controls for some of these, let alone the universe of security controls required in most organizations. Let me state, in case it isn't obvious: the goal of security isn't to prevent all bad things from occurring – that is an unachievable goal – the goal of security is to implement the visibility and controls that limit the probability of a successful incident, and, when an incident does occur, to quickly limit its impact.
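
As one illustrative example of a control question from the list above (how long after an employee is released does their access actually disappear?), here is a toy check comparing a hypothetical HR termination feed against the accounts still enabled in each system. The names and data sources are made up; real inputs would be directory and application exports.

```python
from datetime import date

# Hypothetical HR termination feed and per-system account exports.
terminated = {"jsmith": date(2009, 3, 2), "apatel": date(2009, 3, 15)}

active_accounts = {
    "corporate_vpn": {"jsmith", "mlee"},
    "crm_app": {"apatel", "mlee"},
    "email": {"mlee"},
}

for system, accounts in active_accounts.items():
    # Any account that still belongs to a terminated employee is a control failure.
    for user in sorted(accounts & set(terminated)):
        days_open = (date.today() - terminated[user]).days
        print(f"{system}: {user} still enabled {days_open} days after termination")
```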

So what happens when we move services to the cloud?  When we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment and in many cases we lose all visibility into the controls themselves, that being said…Cloud Computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

Our current computing paradigm is inherently insecure because, for the most part, it is built on top of fundamentally insecure platforms. There is some potential for cloud computing to balance these deficiencies, but to date there has been little assurance that it will. Some areas that require transparency, and that will become the fulcrum points of a sound cloud computing security model:

  • Infrastructural security controls
  • Transport mechanism and associated controls
  • Authentication and authorization access controls
  • Secure development standards and associated controls
  • Monitoring and auditing capabilities
  • SLA and methods for deploying security updates throughout the infrastructure
  • Transparency across these controls and visibility into how they function on a regular basis

Most organizations struggle with their own internal security models; they are barely able to focus their efforts on a segment of the problem, and in many cases they are ill-equipped to implement the mechanisms needed to meet even a base level of security controls. For these organizations, looking to a 3rd party to provide security controls may prove to be beneficial. Organizations that are highly efficient in implementing their security programs, are risk averse, or are under significant regulatory pressures will find that cloud computing models eliminate too much visibility to be a viable alternative to deploying their own infrastructure.

I will leave you with one quick story. When I was an analyst with Gartner I presented at a SOA/Web Services/Enterprise Architecture Summit with a presentation titled "Security 101 for Web 2.0". The room was overwhelmingly developers who were trying to understand how to better enable security as part of the internal applications they were tasked to develop. The one suggestion that elicited the greatest interest and the most questions was a simple one: develop your applications so that they can be easily audited by the security and IT teams once they are in production, and enable auditing that can capture access attempts (successful or not), date/time, source IP address, etc. The folks I talked to afterwards told me it was probably the single most important concept for them during the summit – enable visibility.
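
For what it's worth, here is a minimal sketch of that "enable visibility" advice: wrap an application so every access attempt, successful or not, is emitted as a structured record with timestamp, source IP, user (if authenticated), and outcome. The WSGI wrapper and field names are illustrative choices on my part, not the approach from the presentation.

```python
# A minimal, illustrative audit-logging wrapper: one structured record per
# request, in a format the security team can actually consume.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.StreamHandler())

class AuditMiddleware:
    """Wrap any WSGI application and emit one audit record per request."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "source_ip": environ.get("REMOTE_ADDR"),
            "user": environ.get("REMOTE_USER"),   # set by the auth layer, if any
            "method": environ.get("REQUEST_METHOD"),
            "path": environ.get("PATH_INFO"),
        }

        def logging_start_response(status, headers, exc_info=None):
            record["status"] = status             # e.g. "401 Unauthorized" on a failed access
            audit_log.info(json.dumps(record))
            return start_response(status, headers, exc_info)

        return self.app(environ, logging_start_response)
```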

In part 2 we will take an in-depth look into the security models of various cloud computing platforms, stay tuned for more to come….

Some interesting “Cloud” Resources that you can find in the cloud:

  • Amazon Web Services Blog (here)
  • Google App Engine Blog (here)
  • Microsoft Azure Blog (here)
  • Developer.force.com Blog (here)
  • Gartner's Application Architecture, Development and Integration Blog (here)
  • The Daily Cloud Feed (here)
  • Craig Balding – Cloudsecurity.org (here)
  • James Urquhart – The wisdom of Clouds (here)
  • Chris Hoff – Rational Survivability (here)

Cloud computing: Swarm Intelligence and Security in a Distributed World

Reading through my blog feeds I came across something Hoff wrote in response to Reuven Cohen's "Elastic Vapor: Life In the Cloud" blog; in particular I wanted to respond to the following comment (here):

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

I also wrote about this concept in a series of posts on swarm intelligence…

Evolving Information Security Part 1: The Herd Collective vs. Swarm Intelligence (here)

The only viable option for collective intelligence in the future is through the use of intelligent agents, which can perform some base level of analysis against internal and environmental variables and communicate that information to the collective without the need for centralized processing and distribution. Essentially the intelligent agents would support cognition, cooperation, and coordination among themselves built on a foundation of dynamic policy instantiation. Without the use of distributed computing, parallel processing and intelligent agents there is little hope for moving beyond the brittle and highly ineffective defenses currently deployed.

Evolving Information Security Part 2: Developing Collective Intelligence (here)

Once the agent is fully aware of the state of the device it resides on, physical or virtual, it will need to expand its knowledge of the environment it resides in and its relative positioning to others. Knowledge of self, combined with knowledge of the environment, expands the context in which agents could effect change. In communication with other agents, the response to threats or other problems would be more efficiently identified, regardless of location.

As knowledge of self moves to communication with others there is the foundation for inter-device cooperation. Communication and cooperation between seemingly disparate devices, or device clusters, creates collective intelligence. This simple model creates an extremely powerful precedent for dealing with a wide range of information technology and security problems.

Driving the intelligent agents would be a lightweight and adaptable policy language that would be easily interpreted by the agent's policy engine. New policies would be created and shared between the agents, and the system would move from simply responding to changes to adapting on its own. The collective and the infrastructure will learn. This would enable a base level of cognition where seemingly benign events or state changes, coupled with similarly insignificant data, could be used to lessen the impact of disruptions or incidents, sometimes before they even occur.

The concept of distributed intelligence and self-healing infrastructure will have a major impact on a highly mobile world of distributed computing devices; it will also form the foundation for how we deal with the loss of visibility and control over the "in the cloud" virtual storage and data centers that service them.
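
As a purely conceptual sketch of the intelligent-agent idea described in those posts: each agent evaluates lightweight policies against its own local state and gossips any policy it learns to its peers, with no central processing or distribution point. The policy format, state keys, and peer wiring below are all invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Policy:
    name: str
    condition: Callable[[dict], bool]   # evaluate local state -> should the action fire?
    action: str

@dataclass
class Agent:
    name: str
    local_state: dict
    peers: List["Agent"] = field(default_factory=list)
    policies: Dict[str, Policy] = field(default_factory=dict)

    def learn(self, policy: Policy) -> None:
        """Adopt a policy and gossip it to peers if we have not seen it before."""
        if policy.name in self.policies:
            return
        self.policies[policy.name] = policy
        for peer in self.peers:
            peer.learn(policy)

    def evaluate(self) -> List[str]:
        """Return the actions whose conditions fire against local state."""
        return [p.action for p in self.policies.values() if p.condition(self.local_state)]

# Three agents wired in a ring; a policy injected at one node reaches them all.
a = Agent("a", {"failed_logins": 12})
b = Agent("b", {"failed_logins": 1})
c = Agent("c", {"failed_logins": 30})
a.peers, b.peers, c.peers = [b], [c], [a]

a.learn(Policy("brute-force", lambda s: s.get("failed_logins", 0) > 10, "throttle remote logins"))
for agent in (a, b, c):
    print(agent.name, agent.evaluate())
```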

And on through the Fog of Microsoft’s “Cloud OS” Azure

Ray Ozzie, Microsoft Chief Software Architect and creator of Lotus Notes, announced Windows Azure today during the Windows PDC (Professional Developers Conference) event in Los Angeles (here). Azure coincidentally sounds an awful lot like "du jour", as in "technology hype du jour".

Windows Azure, previously code-named "Red Dog", is a hosted suite of services, including a highly scalable virtualization fabric (a what?), scalable storage, and an automated service management system. It is pretty close to Amazon's EC2 (Elastic Compute Cloud) web services platform, except for the whole "only Microsoft" thing. Hoff was on the ball and posted his thoughts earlier today (here):

Look, when I'm forced into vendor lock-in in order to host my applications and I am confined to one vendor's datacenters without portability, that's not "the cloud" and it's not an "open architecture," it's marketing-speak for "we're now your ASP/XaaS service provider of choice."

You can "experience" Azure (here); also check out Manuvir Das, Director on the Windows Azure team, explaining the Windows "Cloud OS" (here), or Steve Marx's presentation, Azure for Developers (here).

You can read my previous thoughts on cloud-computing (here) and (here)

Cloud Computing – The Good, The Bad, and the Cloudy

And on the second day God said “let there be computing – in the cloud” and he gave unto man cloud computing…on the seventh day man said “hey, uhmm, dude where’s my data?”

There has been much talk lately about the “Cloud“. The promise of information stored in massive virtual data centers that exist in the ethereal world of the Internet, then delivered as data or services to any computing device with connectivity to the “Cloud“. Hoff recently ranted poetic on the “Cloud” (here) and asked the question “How does one patch the Cloud” (here)

So what the hell is the cloud anyway and how is it different from ASPs (application service providers) and MSPs (managed service providers) of yesteryear, the SaaS/PaaS/CaaS (crap as a Service) “vendors” of today and the telepathic, quantum, metaphysical, neural nets of tomorrow?

I am not going to spend any time distinguishing between services offered by, or including the participation of, a 3rd party, whether they take the name ASP, SOA, Web services, Web 2.0, SaaS/PaaS, or cloud computing. Whatever label the 'topic du jour' is given, and regardless of the stark differences or subtle nuances between them, the result is the same – an organization cedes almost complete visibility and control over some aspect of its information and/or IT infrastructure.

There should be no doubt that the confluence of greater computing standardization, an increasing need for service orientation, advances in virtualization technology, and nearly ubiquitous broad-band connectivity enable radical forms of content and service delivery. The benefits could be revolutionary, the fail could be Biblical.

Most organizations today can barely answer simple questions, such as: how many assets do we own? How many do we actively manage, and of these, how many adhere to corporate policy? So of course it makes sense to look to a 3rd party to assist in creating a foundation for operational maturity, and it is assumed that once we turn over accountability to a 3rd party we will significantly reduce cost, improve service levels, and experience wildly efficient processes. This is rarely the case; in fact most organizations will find that the lack of transparency creates more questions than it answers and instills a level of mistrust and resentment within the IT team, as they have to ask whether the provider has performed something as simple as applying a security patch. The "Cloud" isn't magic; it isn't built on advanced alien technology or forged in the fires of Mount Doom in Mordor. No, it is built on the same crappy stuff that delivers lolcats (here) and The Official Webpage of the Democratic People's Republic of Korea (here) – that's right, the same DNS, BGP, click-jacking, and Microsoft security badness that plagues most everybody. So how does an IT organization reliably and repeatably gain visibility into a 3rd party's operational processes and current security state? More importantly, when we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and you simply can't enforce what you can't control.

In the best case an organization will be able to focus already taxed IT resources on solving tomorrow's problems while the problems of today are outsourced, but in the worst case using SaaS or cloud computing might end up as the digital equivalent of driving drunk through Harlem while wearing a blindfold and waving a confederate flag with $100 bills stapled to it and hoping that "nothing bad happens". Yes, cloud computing could result in revolutionary benefits, or it could result in failures of Biblical proportions, but most likely it will result in incremental improvements to IT service delivery marked by cyclical periods of confusion, pain, disillusionment, and success, just like almost everything else in IT – this is assuming that there is such a thing as the "Cloud".

Update: To answer Hoff's original question "How do we patch the cloud?" – no differently than we patch anything. Unfortunately the problem lies in the "if and when does one patch the cloud", which can result in mismatched priorities between the cloud owners and the cloud users.

SaaS and Cloud Computing change the CIA paradigm

Although cloud computing and Software as a Service (SaaS) offer tremendous opportunities for business innovation and return on investment, they also present unique challenges that must be understood by companies developing new technologies, companies looking to take advantage of new services, and investors looking for new opportunities.

Security, especially integrity of the service and confidentiality of the information, is critical to the market success of companies offering cloud computing and SaaS solutions. Traditionally security has lagged behind technology innovation; from the dawn of the Internet, to mobility, to virtualization, security is for the most part an afterthought. When security has become important it has generally been driven from the perspective of availability: whether it is the impact of SPAM on email flow or worm attacks that consume network bandwidth, most organizations have prioritized security concerns only once availability was impacted. Right or wrong, for traditional enterprise software it is easy to understand the importance placed on service availability over data integrity or confidentiality.

However, when we introduce a 3rd party that is responsible for data integrity and data confidentiality, those properties are perceived as, and become, much more important than data availability. Mashups, offsite data storage, delivery of critical information from a 3rd party, the heavy use of web-based technologies – all introduce opportunities for significant security incidents, especially since SaaS and cloud computing are so reliant on open Internet protocols, many of which are fundamentally insecure. Recently we have seen a dramatic increase in high-profile vulnerabilities in the core routing infrastructure of the Internet, such as DNS and BGP; these impact everyone, but they are especially devastating to organizations highly reliant on Internet stability.

A major security incident against a company offering SaaS or cloud computing is inevitable; the question will be how resilient the company is in responding to the incident and what impact the incident has on the company's reputation. Salesforce.com experienced a major security incident in 2007, in which a phishing attack resulted in the disclosure of customer data, which was then used to phish for more data from salesforce.com customers. In this case the extent of the damage was limited, but it could have been worse. Recently a couple of young hackers were able to redirect all Comcast customers to their own website; luckily this was more of a prank, but the results could have been much more devastating. In the long run SaaS and cloud computing will thrive, regardless of issues of security, but there will be a lot of companies that will not be able to withstand the damage to their brand reputation if they experience a high-profile security incident.

Against the backdrop of an orgy of breach disclosures, the fundamental weaknesses of the core Internet protocols, and a dramatic increase in financially motivated cyber crime it is imperative that companies offering SaaS or cloud computing implement effective security controls.  Companies looking to take advantage of these new services or investors looking for opportunities for growth should investigate and understand the security models implemented by SaaS and cloud computing companies.