Posts Tagged ‘Security’

On December 1, 2011, a class-action lawsuit was filed in the United States District Court for the Northern District of California against Hewlett-Packard, alleging violations of the California Consumers Legal Remedies Act (for injunctive relief) and the California Unfair Competition Law, based on non-disclosure of a known security vulnerability (read the filing here).

Nature of the Action

1. Plaintiff brings this action individually and as a class action against Hewlett-Packard Company (“Hewlett-Packard” or “HP” or “Defendant”) on behalf of all others who purchased a Hewlett-Packard printer (the “HP Printers”).

2. The HP Printers suffer from a design defect in the software (which is also sometimes referred to as “firmware”) that is resident on the HP Printers, which allows computer hackers to gain access to the network to which the HP Printers are connected, steal sensitive information, and even flood the HP Printers themselves with commands that are able to control the HP Printers and even cause physical damage to the HP Printers themselves.

3. Despite Defendant’s knowledge of the design defect in the software of the HP Printers, Defendant has failed to disclose the existence of the defect to consumers.

4. As a result of the facts alleged herein, Defendant has violated California laws governing consumer protection.


Read Full Post »


From Computer World UK (here)

There is little doubt that advances in technology have radically changed many aspects of our lives. From healthcare to manufacturing, from supply chains to battlefields, we are experiencing an unprecedented technical revolution.

Unfortunately, technology enables the average person to leak personal information at a velocity that few understand. Take a moment and think about how much of your life intersects with technology that can be used to track your movements, record your buying patterns, log your internet usage, identify your friends, associates, place of employment, what you had for dinner, where you ate and who you were with. It may not even be you who is disclosing this information. (more…)

Read Full Post »

VDI fail

To address the increasing cost and complexity of managing dynamic IT environments, organizations are trying to understand how to adopt virtualization technologies. The value proposition and “killer app” are quite clear in the data center; however, far less attention has been given to the opportunities for endpoint virtualization. There are multiple methods to address client-side virtualization: hosted virtual desktops (HVD), bare-metal hypervisors, local and streaming virtual workspaces, and a range of options that layer on top of and between them all, such as application virtualization, portable personalities, and virtual composite desktops. Even so, there is still a tremendous amount of confusion, and even more misconceptions about the benefits of client-side virtualization than there were with server virtualization. The major architectural flaw in almost all of these solutions is that they remain very back-end and infrastructure heavy, which undercuts the promised benefits of lower cost and reduced complexity.

Unlike server virtualization, where adoption was driven from the bottom up (from the hypervisor up through the rest of the stack), adoption of endpoint virtualization technologies is moving top down, starting with single applications within an existing OS. Application virtualization adoption will accelerate over the next 12-18 months, with Gartner life cycle management analysts suggesting that it will be included in the majority of PC life cycle RFPs in 2010 and beyond. Workspace/desktop virtualization will follow over the next 24-36 months, as will endpoint virtualization infrastructures. The adoption of both workspace/desktop virtualization and endpoint virtualization infrastructure will align with organizations’ desktop refresh cycles. Considering that the average refresh cycle is between 3-5 years, and that many organizations are looking at a desktop refresh to support Vista (which probably has only about 10% market adoption) and Windows 7, it is conceivable that we will begin seeing accelerated adoption of desktop and infrastructure virtualization over the next 24-36 months as organizations rethink their current systems management processes and technologies.

Let’s look at the 4 client/desktop virtualization models I believe will become the most prevalent over the next 3-5 years… (more…)

Read Full Post »

Recently I posted some thoughts on cloud security (here), (here), and (here). The bottom line still holds true…

When we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and in many cases we lose all visibility into the controls themselves. That being said, cloud computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

In September of 2008 Amazon released a paper entitled “Amazon Web Services: Overview of Security Processes,” which discusses, at a high level, aspects of Amazon’s AWS (Amazon Web Services) security model. Essentially it says that Amazon will provide a base level of reasonable security controls across its infrastructure, and the enterprise is required to provide the necessary security controls for its guest OS instances and the other variables of the customer environment, including data backup, controls, and secure development.
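The split is easiest to see at the guest level. As a purely illustrative sketch of the customer’s side of that line, here is the kind of rule that falls on you rather than on Amazon: restricting SSH access to a guest OS instance to a corporate address range. This uses the modern boto3 library, which postdates the paper, and the security group ID and CIDR are placeholders.

    # Purely illustrative: the customer's share of Amazon's "joint" model.
    # boto3 postdates the original paper; IDs and addresses are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Allow SSH to the guest OS instance only from a hypothetical corporate
    # range; Amazon secures the underlying infrastructure, this rule is on you.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 22,
            "ToPort": 22,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",
                          "Description": "corporate VPN egress"}],
        }],
    )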

The biggest problem is that you, as the consumer of this technology, will not be able to audit the security controls. You, as the consumer of this technology, will need to rely on their assertions and on static (SAS 70) audits that these controls are actually in place – sans details, of course.

The other big problem with the “joint” security model Amazon proposes is that it adds a level of complexity for the organization utilizing the services. They now have to manage, report against, and provide accountability for the tsunami of compliance audits in a mixed environment where part of the infrastructure is maintained and secured by Amazon and other parts must be maintained and secured by the customer, all in addition to, but not necessarily in cooperation with, the customer’s current operational security models.

The rest of the paper weaves its way through traditional security mechanisms, such as the use of firewalls, required SSH access to remote boxes, and an outright ban on port scanning, as well as far less traditional, and far less mature or proven, mechanisms such as relying on the controls within the Xen hypervisor.

So what are the salient aspects of the paper? Well, you can read the gory details – or lack thereof – (here).

Read Full Post »

Cloud computing, or as I like to call it the return of the mainframe and thin-client computing architecture (only cloudier), has been creating a lot of interesting discussion throughout IT recently.

Cloud computing we will define as any service, or set of services, delivered through the Internet (Cloud) without requiring additional infrastructure on the part of the organization. Although a broad definition, it encompasses everything from storage and capacity services, to applications like CRM or email, to development platforms, and everything in between that is delivered and accessed through the Internet (Cloud).

Obviously the combination of ubiquitous broadband connectivity with a highly mobile workforce, enabled to be productive independent of location, and the promise of limited, if any, additional infrastructure costs offers new levels of efficiency for many organizations looking to leverage and extend their shrinking IT budgets.

There is little doubt that cloud computing offers benefits in how organizations drive greater value from their IT dollars, but there are also many trade-offs that can dramatically reduce, or even negate, those benefits altogether. Understanding these trade-offs will allow an organization to make the right decisions.

As with most advancements in computing, security is generally an afterthought, bolted on once the pain is great enough to elicit the medication. Security is sort of like the back pain of IT: enhancements tend to come only after agility (availability, reliability, etc.) is somehow inhibited, or because they are prescribed as the result of a doctor’s visit (a compliance audit). Cloud computing is no different.

But before we can understand the strengths or inadequacies of cloud computing security models, we need an understanding of the baseline security principles that all organizations face. This will allow us to draw parallels and define what is and isn’t an acceptable level of risk.

Again, for the sake of brevity I will keep this high-level, but it really comes down to two main concepts: visibility and control. All security mechanisms are an exercise in trying to gain better visibility or to implement better controls, all balanced against the demands of the business. For the most part, the majority of organizations struggle with even the most basic of security demands. For example, visibility into the computing infrastructure itself:

  • How many assets do you own? How many are actively connected to the network right now? How many do you actively manage? Are they configured according to corporate policy? Are they up to date with the appropriate security controls? Are they running licensed applications? Are they functioning to acceptable levels? How do you know?
  • How about the networking infrastructure? databases? application servers? web servers? Are they all configured properly? Who has access to them? Have they been compromised? Are they secure to the universe of known external threats? How do you know?
  • Do internal applications follow standard secure development processes? Do they provide sufficient auditing capabilities? Do they export this data in a format that can be easily consumed by the security team? Can access/authentication anomalies be easily identified? How do you know?
  • What happens when an FTE is no longer allowed access to certain services/applications? Are they able to access them even after they have been terminated? Do they try? Are they successful? How do you know?

These are all pretty basic security questions and it is only a small subset of issues IT is concerned with, but most organizations cannot answer any one of them, let alone all of them, without significant improvement to their current processes. It is fair to say that the majority of organizations lack adequate visibility into their computing infrastructures.
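To make the visibility gap concrete, here is a minimal sketch, with entirely hypothetical data sources, of the kind of reconciliation most organizations cannot do today: comparing what is actually connected to the network against what the asset-management system believes it manages.

    # Minimal, hypothetical sketch: reconcile discovered hosts against the
    # managed asset inventory to answer "how many assets are connected right
    # now, and how many of those do we actively manage?" The data below is
    # placeholder; in practice it would come from a scanner export and a CMDB.

    discovered = {"10.0.1.12", "10.0.1.15", "10.0.1.23", "10.0.2.40"}  # seen on the wire
    managed    = {"10.0.1.12", "10.0.1.15", "10.0.3.7"}                # in the CMDB

    unmanaged_but_live = discovered - managed   # connected, but nobody is patching them
    managed_but_absent = managed - discovered   # on the books, but not observed

    print(f"{len(unmanaged_but_live)} live hosts not under management: {sorted(unmanaged_but_live)}")
    print(f"{len(managed_but_absent)} managed assets not observed: {sorted(managed_but_absent)}")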

Of course, the lack of visibility doesn’t imply a lack of control:

  • Are assets that are not actively managed blocked from accessing corporate services? Are they blocked from accessing internal applications? Based on what criteria – lack of policy adherence? How granular is the control? And if you lack visibility how can you be sure the control is working?
  • What controls have you implemented to prevent external access to internal resources? Does this apply to mobile/remote employees? How long after an employee is released does it take to remove access to all corporate resources? What authentication mechanisms are in place to validate the identity of an employee accessing corporate resources? Without visibility how do you know?
  • What controls are in place to ensure the concept of least privilege? What controls are in place to ensure internal applications (web, non-web, or modifications to COTS) adhere to corporate secure coding standards? If you lack visibility how do you know?
  • What controls are in place to ensure that a malicious actor cannot access internal corporate resources if they have stolen the credentials of a legitimate employee? How do you know the controls are adequate?

Again, this is just a small subset of the controls IT must be concerned with. As with the problem of visibility, most organizations are barely able to implement proper controls for some of these, let alone the universe of security controls required in most organizations. Let me state, in case it isn’t obvious: the goal of security isn’t to prevent all bad things from occurring – this is an unachievable goal – the goal of security is to implement the visibility and controls that limit the probability of a successful incident occurring, and, when an incident does occur, to quickly limit its impact.
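One of the control questions above, how long terminated employees retain access, lends itself to a simple recurring check. A minimal sketch, assuming hypothetical exports from the HR system and the directory:

    # Hypothetical sketch of a recurring control check: does anyone terminated
    # by HR still hold an enabled account? Both lists are placeholders; in a
    # real environment they would come from the HR feed and a directory export.
    from datetime import date

    terminated = {"jdoe": date(2008, 10, 1), "asmith": date(2008, 11, 15)}  # HR feed
    enabled_accounts = {"jdoe", "bnguyen", "cwhite"}                        # directory export

    for user in sorted(set(terminated) & enabled_accounts):
        days_open = (date.today() - terminated[user]).days
        print(f"{user}: account still enabled {days_open} days after termination")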

So what happens when we move services to the cloud? When we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and in many cases we lose all visibility into the controls themselves. That being said, cloud computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

Our current computing paradigm is inherently insecure because, for the most part, it is built on top of fundamentally insecure platforms. There is some potential for cloud computing to offset these deficiencies, but to date there has been little assurance that it will. Some areas that require transparency, and that will become the fulcrum points of a sound cloud computing security model:

  • Infrastructural security controls
  • Transport mechanism and associated controls
  • Authentication and authorization access controls
  • Secure development standards and associated controls
  • Monitoring and auditing capabilities
  • SLA and methods for deploying security updates throughout the infrastructure
  • Transparency across these controls and visibility into how they function on a regular basis

Most organizations struggle with their own internal security models; they are barely able to focus their efforts on a segment of the problem, and in many cases they are ill-equipped to implement the mechanisms needed to meet even a base level of security controls. For these organizations, looking to a 3rd party to provide security controls may prove to be beneficial. Organizations that are highly efficient in implementing their security programs, are risk averse, or are under significant regulatory pressures will find that cloud computing models eliminate too much visibility to be a viable alternative to deploying their own infrastructure.

I will leave you with one quick story. When I was an analyst with Gartner I presented at a SOA/Web Services/Enterprise Architecture Summit a presentation titled “Security 101 for Web 2.0”. The room was overwhelmingly developers who were trying to understand how to better enable security as part of developing the internal applications they were tasked to build. The one suggestion that elicited the greatest interest and the most questions was a simple one: develop your applications so that they can be easily audited by the security and IT teams once they are in production, and enable auditing that can capture access attempts (successful or not), date/time, source IP address, etc. The folks I talked to afterwards told me it was probably the single most important concept for them during the summit – enable visibility.
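That advice translates almost directly into code. Here is a minimal sketch (the login-handler wiring is hypothetical) that records every access attempt with its outcome, timestamp, user, and source IP in a form the security team can consume:

    # Minimal sketch of the "make your application auditable" advice: log every
    # access attempt (successful or not) with timestamp, user, and source IP.
    # How record_access_attempt() gets called from the application is hypothetical.
    import json, logging
    from datetime import datetime, timezone

    audit = logging.getLogger("audit")
    audit.setLevel(logging.INFO)
    audit.addHandler(logging.FileHandler("access-audit.log"))

    def record_access_attempt(user: str, source_ip: str, success: bool) -> None:
        # One JSON object per line so the security team can parse it easily.
        audit.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "source_ip": source_ip,
            "success": success,
        }))

    # Example: called from a (hypothetical) login handler.
    record_access_attempt("jdoe", "198.51.100.7", success=False)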

In part 2 we will take an in-depth look at the security models of various cloud computing platforms. Stay tuned for more to come…

Some interesting “Cloud” Resources that you can find in the cloud:

  • Amazon Web Services Blog (here)
  • Google App Engine Blog (here)
  • Microsoft Azure Blog (here)
  • Developer.force.com Blog (here)
  • Gartner’s Application Architecture, Development and Integration Blog (here)
  • The Daily Cloud Feed (here)
  • Craig Balding – Cloudsecurity.org (here)
  • James Urquhart – The wisdom of Clouds (here)
  • Chris Hoff – Rational Survivability (here)

Read Full Post »

Reading through my blog feeds I came across something Hoff wrote in response to Reuven Cohen’s “Elastic Vapor: Life In the Cloud” blog; in particular I wanted to respond to the following comment (here):

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

I also wrote about this concept in a series of posts on swarm intelligence…

Evolving Information Security Part 1: The Herd Collective vs. Swarm Intelligence (here)

The only viable option for collective intelligence in the future is through the use of intelligent agents, which can perform some base level of analysis against internal and environmental variables and communicate that information to the collective without the need for centralized processing and distribution. Essentially the intelligent agents would support cognition, cooperation, and coordination among themselves built on a foundation of dynamic policy instantiation. Without the use of distributed computing, parallel processing and intelligent agents there is little hope for moving beyond the brittle and highly ineffective defenses currently deployed.

Evolving Information Security Part 2: Developing Collective Intelligence (here)

Once the agent is fully aware of the state of the device it resides on, physical or virtual, it will need to expand its knowledge of the environment it resides in and its relative positioning to others. Knowledge of self, combined with knowledge of the environment, expands the context in which agents could effect change. In communication with other agents, the response to threats or other problems would be more efficiently identified, regardless of location.

As knowledge of self moves to communication with others there is the foundation for inter-device cooperation. Communication and cooperation between seemingly disparate devices, or device clusters, creates collective intelligence. This simple model creates an extremely powerful precedent for dealing with a wide range of information technology and security problems.

Driving the intelligent agents would be a lightweight and adaptable policy language that would be easily interpreted by the agent’s policy engine. New policies would be created and shared between the agents, and the system would move from simply responding to changes to adapting on its own. The collective and the infrastructure will learn. This would enable a base level of cognition where seemingly benign events or state changes, coupled with similarly insignificant data, could be used to lessen the impact of disruptions or incidents, sometimes before they even occur.
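As a toy illustration of that idea (not a protocol or product, just a sketch with hypothetical state keys and an in-memory “collective”), an agent might evaluate lightweight policies against its local state and learn new policies shared by its peers:

    # Toy sketch of the agent/policy idea: each agent evaluates lightweight
    # policies against its local state and learns policies shared by peers.
    # The state keys and in-memory collective are hypothetical.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class Policy:
        name: str
        applies: Callable[[Dict[str, float]], bool]   # evaluated against local state
        action: str

    @dataclass
    class Agent:
        name: str
        state: Dict[str, float]
        policies: List[Policy] = field(default_factory=list)

        def learn(self, policy: Policy) -> None:
            # Adopt a policy shared by another agent, ignoring duplicates.
            if policy.name not in {p.name for p in self.policies}:
                self.policies.append(policy)

        def evaluate(self) -> List[str]:
            # Return the actions whose conditions match this agent's local state.
            return [p.action for p in self.policies if p.applies(self.state)]

    # A policy created on one agent propagates to the rest of the collective.
    throttle = Policy("suspicious-outbound",
                      lambda s: s.get("outbound_conn_rate", 0) > 100,
                      "rate-limit outbound connections")
    collective = [Agent("host-a", {"outbound_conn_rate": 150}),
                  Agent("host-b", {"outbound_conn_rate": 5})]
    for agent in collective:
        agent.learn(throttle)
        print(agent.name, agent.evaluate())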

The concept of distributed intelligence and self-healing infrastructure will have a major impact on a highly mobile world of distributed computing devices; it will also form the foundation for how we deal with the loss of visibility and control over the “in the cloud” virtual storage and data centers that service them.

Read Full Post »

And on the second day God said “let there be computing – in the cloud” and he gave unto man cloud computing…on the seventh day man said “hey, uhmm, dude where’s my data?”

There has been much talk lately about the “Cloud“: the promise of information stored in massive virtual data centers that exist in the ethereal world of the Internet, then delivered as data or services to any computing device with connectivity to the “Cloud“. Hoff recently ranted poetic on the “Cloud” (here) and asked the question “How does one patch the Cloud?” (here).

So what the hell is the cloud anyway and how is it different from ASPs (application service providers) and MSPs (managed service providers) of yesteryear, the SaaS/PaaS/CaaS (crap as a Service) “vendors” of today and the telepathic, quantum, metaphysical, neural nets of tomorrow?

I am not going to spend any time distinguishing between services offered by, or including the participation of, a 3rd party, whether they take the name ASP, SOA, Web services, Web 2.0, SaaS/PaaS, or cloud computing. Whatever label the ‘topic du jour’ is given, and regardless of the stark differences or subtle nuances between them, the result is the same – an organization relinquishes almost complete visibility and control over some aspect of its information and/or IT infrastructure.

There should be no doubt that the confluence of greater computing standardization, an increasing need for service orientation, advances in virtualization technology, and nearly ubiquitous broad-band connectivity enable radical forms of content and service delivery. The benefits could be revolutionary, the fail could be Biblical.

Most organizations today can barely answer simple questions, such as: how many assets do we own? How many do we actively manage, and of those how many adhere to corporate policy? So of course it makes sense to look to a 3rd party to assist in creating a foundation for operational maturity, and it is assumed that once we turn over accountability to a 3rd party we will significantly reduce cost, improve service levels, and enjoy wildly efficient processes. This is rarely the case; in fact most organizations will find that the lack of transparency creates more questions than it answers and instills a level of mistrust and resentment within the IT team as they have to ask whether the provider has performed something as simple as applying a security patch. The “Cloud” isn’t magic; it isn’t built on advanced alien technology or forged in the fires of Mount Doom in Mordor. No, it is built on the same crappy stuff that delivers lolcats (here) and The Official Webpage of the Democratic People’s Republic of Korea (here): that’s right, the same DNS, BGP, click-jacking, and Microsoft security badness that plagues most everybody. So how does an IT organization reliably and repeatably gain visibility into a 3rd party’s operational processes and current security state? More importantly, when we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and you simply can’t enforce what you can’t control.

In the best case an organization will be able to focus already taxed IT resources on solving tomorrow’s problems while the problems of today are outsourced; in the worst case, using SaaS or cloud computing might end up as the digital equivalent of driving drunk through Harlem while wearing a blindfold and waving a confederate flag with $100 bills stapled to it and hoping that “nothing bad happens”. Yes, cloud computing could result in revolutionary benefits or it could result in failures of Biblical proportions, but most likely it will result in incremental improvements to IT service delivery marked by cyclical periods of confusion, pain, disillusionment, and success, just like almost everything else in IT. This is assuming that there is such a thing as the “Cloud”.

Update: To answer Hoff’s original question, “How do we patch the cloud?”, the answer is: no different than we patch anything. Unfortunately the problem lies in the “if and when does one patch the cloud”, which can result in mismatched priorities between the cloud owners and the cloud users.

Read Full Post »

Older Posts »

