
Posts Tagged ‘Security’

On December 1, 2011, a class-action lawsuit was filed in the United States District Court for the Northern District of California against Hewlett-Packard, alleging violations of the California Consumer Legal Remedies Act (seeking injunctive relief) and the California Unfair Competition Law, based on non-disclosure of a known security vulnerability (read the filing here).

Nature of the Action

1. Plaintiff brings this action individually and as a class action against Hewlett-Packard Company (“Hewlett-Packard” or “HP” or “Defendant”) on behalf of all others who purchased a Hewlett-Packard printer (the “HP Printers”).

2. The HP Printers suffer from a design defect in the software (also sometimes referred to as “firmware”) that is resident on the HP Printers, which allows computer hackers to gain access to the network to which the HP Printers are connected, steal sensitive information, and even flood the HP Printers themselves with commands that are able to control the HP Printers and even cause physical damage to the HP Printers themselves.

3. Despite Defendant’s knowledge of the design defect in the software of the HP Printers, Defendant has failed to disclose the existence of the defect to consumers.

4. As a result of the facts alleged herein, Defendant has violated California laws governing consumer protection.



BigBrother-1984

From Computer World UK (here)

There is little doubt that advances in technology have radically changed many aspects of our lives. From healthcare to manufacturing, from supply chains to battlefields, we are experiencing an unprecedented technical revolution.

Unfortunately, technology enables the average person to leak personal information at a velocity that few understand. Take a moment and think about how much of your life intersects with technology that can be used to track your movements, record your buying patterns, log your internet usage, and identify your friends, associates, place of employment, what you had for dinner, where you ate, and who you were with. It may not even be you who is disclosing this information.


VDI fail

To address the increasing cost and complexity of managing dynamic IT environments, organizations are trying to understand how to adopt virtualization technologies. The value proposition and “killer app” are quite clear in the data center; however, less attention has been given to the opportunities for endpoint virtualization. There are multiple methods to address client-side virtualization – hosted virtual desktops (HVD), bare-metal hypervisors, local and streaming virtual workspaces, and a range of options that layer on top of and between them all, such as application virtualization, portable personalities, and virtual composite desktops – yet there is still a tremendous amount of confusion, and even more misconceptions about the benefits of client-side virtualization than of server virtualization. The major architectural flaw in almost all of these solutions is that they remain very back-end and infrastructure heavy, which undercuts the promised cost reduction and lower complexity.

Unlike server virtualization, where adoption was driven from the bottom up – from the hypervisor up through the other stacks – adoption of endpoint virtualization technologies is moving top down, starting with single applications within an existing OS. Application virtualization adoption will accelerate over the next 12-18 months, with Gartner life cycle management analysts suggesting that it will be included in the majority of PC life cycle RFPs in 2010 and beyond. Workspace/desktop virtualization will follow over the next 24-36 months, as will endpoint virtualization infrastructures. The adoption of both will align with organizations’ desktop refresh cycles. Considering that the average cycle is between 3-5 years, and that many organizations are looking at a desktop refresh to support Vista (which probably has only about 10% market adoption) and Windows 7, it is conceivable that we will see accelerated adoption of desktop and infrastructure virtualization over the next 24-36 months as organizations rethink their current systems management processes and technologies.

Let’s look at the four client/desktop virtualization models I believe will become the most prevalent over the next 3-5 years…


Recently I posted some thoughts on cloud security (here), (here), and (here). The bottom line still holds true…

When we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and in many cases we lose all visibility into the controls themselves. That being said, cloud computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

In September of 2008 Amazon released a paper entitled “Amazon Web Services: Overview of Security Processes,” which discusses, at a high level, aspects of the AWS (Amazon Web Services) security model. Essentially it says that Amazon will provide a base level of reasonable security controls for its infrastructure, and that the customer is responsible for the security controls on their guest OS instances and other elements of their environment, including data backup, access controls, and secure development.

The biggest problem is that you, as the consumer of this technology, will not be able to audit the security controls. You, as the consumer of this technology, will need to rely on their assertions of the controls and static (SAS 70) audits that these controls are actually in place – sans details of course.

The other big problem with the “joint” security model Amazon proposes is that it adds a level of complexity for the organization utilizing the services. They now have to manage, report against, and provide accountability for the tsunami of compliance audits in a mixed environment, where part of the infrastructure is maintained and secured by Amazon and the other parts must be maintained and secured by the customer. This is in addition to, but not necessarily in cooperation with, the customer’s current operational security model.

The rest of the paper weaves its way through traditional security mechanisms – they use firewalls, they require SSH for remote access, and they will totally ban anyone caught port scanning – as well as far less traditional, and far less mature or proven, mechanisms, such as relying on the controls within the Xen hypervisor.

So what are the salient aspects of the paper? Well, you can read the gory details – or lack thereof – (here)


Cloud computing, or as I like to call it, the return of the mainframe and thin-client computing architecture – only cloudier – has been creating a lot of interesting discussion throughout IT recently.

We will define cloud computing as any service, or set of services, delivered through the Internet (the cloud) without requiring additional infrastructure on the part of the organization. Although a broad definition, it encompasses everything from storage and capacity services, to applications like CRM or email, to development platforms, and everything in between that is delivered and accessed through the Internet.

Obviously the concept of ubiquitous broadband connectivity, combined with a highly mobile workforce enabled to be productive independent of location, and with the promise of limited, if any, additional infrastructure costs, offers new levels of efficiency for many organizations looking to leverage and extend their shrinking IT budgets.

There is little doubt that cloud computing offers benefits in how organizations drive greater value from their IT dollars, but there are also many trade-offs that can dramatically reduce, or even negate, those benefits altogether. Understanding these trade-offs will allow an organization to make the right decisions.

As with most advancements in computing, security is generally an afterthought, bolted on once the pain is great enough to elicit the medication. Security is sort of like the back pain of IT: enhancements tend to come only once agility (availability, reliability, etc.) is somehow inhibited, or because they are prescribed as the result of a doctor’s visit (the compliance audit). Cloud computing is no different.

But before we can understand the strengths or inadequacies of cloud computing security models, we need an understanding of the baseline security principles that all organizations face; this will allow us to draw parallels and define what is and isn’t an acceptable level of risk.

Again, for the sake of brevity I will keep this high-level, but it really comes down to two main concepts: visibility and control. All security mechanisms are an exercise in trying to gain better visibility or to implement better controls, all balanced against the demands of the business. For the most part, the majority of organizations struggle with even the most basic of security demands. For example, visibility into the computing infrastructure itself:

  • How many assets do you own? How many are actively connected to the network right now? How many do you actively manage? Are they configured according to corporate policy? Are they up to date with the appropriate security controls? Are they running licensed applications? Are they functioning to acceptable levels? How do you know?
  • How about the networking infrastructure? databases? application servers? web servers? Are they all configured properly? Who has access to them? Have they been compromised? Are they secure to the universe of known external threats? How do you know?
  • Do internal applications follow standard secure development processes? Do they provide sufficient auditing capabilities? Do they export this data in a format that can be easily consumed by the security team? Can access/authentication anomalies be easily identified? How do you know?
  • What happens when an FTE is no longer allowed access to certain services/applications? Are they able to access them even after they have been terminated? Do they try? Are they successful? How do you know?

These are all pretty basic security questions and it is only a small subset of issues IT is concerned with, but most organizations cannot answer any one of them, let alone all of them, without significant improvement to their current processes. It is fair to say that the majority of organizations lack adequate visibility into their computing infrastructures.
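The asset-visibility questions above reduce to a reconciliation problem: comparing what you believe you own against what is actually connected and what is actually under management. A minimal sketch of that comparison (the asset names and the three inventory sources are hypothetical; real data would come from a CMDB, DHCP/ARP logs, and agent check-ins):

```python
# Reconcile three hypothetical views of the environment: the asset
# inventory (owned), live network observations (connected), and the
# management system (managed). The gaps are the questions you can't answer.

def reconcile(owned, connected, managed):
    owned, connected, managed = set(owned), set(connected), set(managed)
    return {
        "rogue": connected - owned,        # on the network, not in inventory
        "unmanaged": connected - managed,  # connected but not under management
        "stale": owned - connected,        # inventoried but not seen online
    }

report = reconcile(
    owned=["srv-01", "srv-02", "lap-07"],
    connected=["srv-01", "lap-07", "lap-99"],
    managed=["srv-01"],
)
print(report)
```

The point isn’t the three lines of set arithmetic; it’s that most organizations cannot produce even these three input lists with any confidence.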

Of course the lack of visibility doesn’t imply a lack of control;

  • Are assets that are not actively managed blocked from accessing corporate services? Are they blocked from accessing internal applications? Based on what criteria – lack of policy adherence? How granular is the control? And if you lack visibility how can you be sure the control is working?
  • What controls have you implemented to prevent external access to internal resources? Does this apply to mobile/remote employees? How long after an employee is released does it take to remove access to all corporate resources? What authentication mechanisms are in place to validate the identity of an employee accessing corporate resources? Without visibility how do you know?
  • What controls are in place to ensure the concept of least privilege? What controls are in place to ensure internal applications (web, non-web, or modifications to COTS) adhere to corporate secure coding standards? If you lack visibility how do you know?
  • What controls are in place to ensure that a malicious actor cannot access internal corporate resources if they have stolen the credentials of a legitimate employee? How do you know the controls are adequate?

Again, this is just a small subset of the controls IT must be concerned with. As with the problem of visibility, most organizations are barely able to implement proper controls for some of these, let alone the universe of security controls most organizations require. Let me state, in case it isn’t obvious: the goal of security isn’t to prevent all bad things from occurring – that is an unachievable goal – the goal is to implement the visibility and controls that limit the probability of a successful incident, and, when an incident does occur, to quickly limit its impact.

So what happens when we move services to the cloud? When we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and in many cases we lose all visibility into the controls themselves. That being said, cloud computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

Our current computing paradigm is inherently insecure because, for the most part, it is built on top of fundamentally insecure platforms. There is some potential for cloud computing to balance these deficiencies, but to date there has been little assurance that it will. Some areas that require transparency, and that will become the fulcrum points of a sound cloud computing security model:

  • Infrastructural security controls
  • Transport mechanism and associated controls
  • Authentication and authorization access controls
  • Secure development standards and associated controls
  • Monitoring and auditing capabilities
  • SLA and methods for deploying security updates throughout the infrastructure
  • Transparency across these controls and visibility into how they function on a regular basis

Most organizations struggle with their own internal security models; they are barely able to focus their efforts on a segment of the problem, and in many cases they are ill-equipped to implement the mechanisms needed to meet even a base level of security controls. For these organizations, looking to a 3rd party to provide security controls may prove beneficial. Organizations that are highly efficient in implementing their security programs, are risk averse, or are under significant regulatory pressure will find that cloud computing models eliminate too much visibility to be a viable alternative to deploying their own infrastructure.

I will leave you with one quick story. When I was an analyst with Gartner I presented at a SOA/Web Services/Enterprise Architecture Summit a presentation titled “Security 101 for Web 2.0.” The room was overwhelmingly developers who were trying to understand how to better enable security as part of the internal applications they were tasked to develop. The suggestion that elicited the greatest interest and the most questions was a simple one: develop your applications so that they can be easily audited by the security and IT teams once they are in production, and enable auditing that can capture access attempts (successful or not), date/time, source IP address, etc. The folks I talked to afterwards told me it was probably the single most important concept for them during the summit – enable visibility.
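That “enable visibility” advice needs very little code. A minimal sketch of the kind of audit record the talk suggested, capturing the attempt’s outcome, date/time, and source IP in a format the security team can easily consume (the field names are illustrative, not any standard schema):

```python
import json
from datetime import datetime, timezone

def audit_record(user, resource, success, source_ip):
    """Emit one access-attempt record as a JSON line for the audit log."""
    return json.dumps({
        "event": "access_attempt",
        "user": user,
        "resource": resource,
        "success": success,          # record failures as well as successes
        "source_ip": source_ip,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

line = audit_record("jdoe", "/reports/q3", False, "203.0.113.7")
print(line)
```

One JSON line per attempt is trivially greppable and trivially shipped to whatever log collector the security team already runs; the hard part, as the summit crowd discovered, is deciding to emit it at all.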

In part 2 we will take an in-depth look into the security models of various cloud computing platforms, stay tuned for more to come….

Some interesting “Cloud” Resources that you can find in the cloud:

  • Amazon Web Services Blog (here)
  • Google App Engine Blog (here)
  • Microsoft Azure Blog (here)
  • Developer.force.com Blog (here)
  • Gartner’s Application Architecture, Development and Integration Blog (here)
  • The Daily Cloud Feed (here)
  • Craig Balding – Cloudsecurity.org (here)
  • James Urquhart – The wisdom of Clouds (here)
  • Chris Hoff – Rational Survivability (here)


Reading through my blog feeds I came across something Hoff wrote in response to Reuven Cohen’s “Elastic Vapor: Life In the Cloud” blog; in particular I wanted to respond to the following comment (here)

This basically means that we should distribute the sampling, detection and prevention functions across the entire networked ecosystem, not just to dedicated security appliances; each of the end nodes should communicate using a standard signaling and telemetry protocol so that common threat, vulnerability and effective disposition can be communicated up and downstream to one another and one or more management facilities.

I also wrote about this concept in a series of posts on swarm intelligence…

Evolving Information Security Part 1: The Herd Collective vs. Swarm Intelligence (here)

The only viable option for collective intelligence in the future is through the use of intelligent agents, which can perform some base level of analysis against internal and environmental variables and communicate that information to the collective without the need for centralized processing and distribution. Essentially the intelligent agents would support cognition, cooperation, and coordination among themselves built on a foundation of dynamic policy instantiation. Without the use of distributed computing, parallel processing and intelligent agents there is little hope for moving beyond the brittle and highly ineffective defenses currently deployed.

Evolving Information Security Part 2: Developing Collective Intelligence (here)

Once the agent is fully aware of the state of the device it resides on, physical or virtual, it will need to expand its knowledge of the environment it resides in and its relative positioning to others. Knowledge of self, combined with knowledge of the environment, expands the context in which agents can effect change. In communication with other agents, the response to threats or other problems would be more efficiently identified, regardless of location.

As knowledge of self moves to communication with others there is the foundation for inter-device cooperation. Communication and cooperation between seemingly disparate devices, or device clusters, creates collective intelligence. This simple model creates an extremely powerful precedent for dealing with a wide range of information technology and security problems.

Driving the intelligent agents would be a lightweight and adaptable policy language easily interpreted by the agent’s policy engine. New policies would be created and shared between the agents, and the system would move beyond simply responding to changes and begin to adapt on its own. The collective and the infrastructure would learn. This would enable a base level of cognition, where seemingly benign events or state changes, coupled with similarly insignificant data, could be used to lessen the impact of disruptions or incidents, sometimes before they even occur.
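The key property of such a policy language is that policies are plain data rather than code, so agents can exchange them and evaluate them locally against their own state. A toy sketch of that idea (the condition vocabulary, thresholds, and action names are invented for illustration):

```python
# Policies as shareable data: each rule fires when every "<metric>_gte"
# condition is satisfied by the agent's local state, yielding an action.

policies = [
    {"when": {"failed_logins_gte": 5}, "then": "quarantine"},
    {"when": {"patch_age_days_gte": 30}, "then": "flag_for_update"},
]

def evaluate(state, policies):
    """Return the actions triggered by this agent's local state."""
    actions = []
    for policy in policies:
        # strip the "_gte" suffix to find the metric in the local state
        if all(state.get(key[:-4], 0) >= threshold
               for key, threshold in policy["when"].items()):
            actions.append(policy["then"])
    return actions

triggered = evaluate({"failed_logins": 7, "patch_age_days": 3}, policies)
print(triggered)  # ["quarantine"]
```

A real deployment would also need the shared policies to be signed and authenticated; an unauthenticated policy-distribution channel is itself a new attack vector for the collective.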

The concept of distributed intelligence and self-healing infrastructure will have a major impact on a highly mobile world of distributed computing devices; it will also form the foundation for how we deal with the loss of visibility and control over the “in the cloud” virtual storage and data centers that service them.


And on the second day God said “let there be computing – in the cloud” and he gave unto man cloud computing…on the seventh day man said “hey, uhmm, dude where’s my data?”

There has been much talk lately about the “Cloud“. The promise of information stored in massive virtual data centers that exist in the ethereal world of the Internet, then delivered as data or services to any computing device with connectivity to the “Cloud“. Hoff recently ranted poetic on the “Cloud” (here) and asked the question “How does one patch the Cloud” (here)

So what the hell is the cloud anyway and how is it different from ASPs (application service providers) and MSPs (managed service providers) of yesteryear, the SaaS/PaaS/CaaS (crap as a Service) “vendors” of today and the telepathic, quantum, metaphysical, neural nets of tomorrow?

I am not going to spend any time distinguishing between services offered by, or including the participation of, a 3rd party, whether they take the name ASP, SOA, Web services, Web 2.0, SaaS/PaaS, or cloud computing. Whatever label the ‘topic du jour’ is given, and regardless of the stark differences or subtle nuances between them, the result is the same – an organization relinquishes almost all visibility and control over some aspect of their information and/or IT infrastructure.

There should be no doubt that the confluence of greater computing standardization, an increasing need for service orientation, advances in virtualization technology, and nearly ubiquitous broadband connectivity enables radical new forms of content and service delivery. The benefits could be revolutionary; the fail could be Biblical.

Most organizations today can barely answer simple questions, such as: How many assets do we own? How many do we actively manage, and of these, how many adhere to corporate policy? So of course it makes sense to look to a 3rd party to assist in creating a foundation for operational maturity, and it is assumed that once we turn over accountability to a 3rd party we significantly reduce cost, improve service levels, and enjoy wildly efficient processes. This is rarely the case; in fact, most organizations will find that the lack of transparency creates more questions than it answers and instills a level of mistrust and resentment within the IT team, as they have to ask whether the provider has performed something as simple as applying a security patch. The “Cloud” isn’t magic, it isn’t built on advanced alien technology or forged in the fires of Mount Doom in Mordor; no, it is built on the same crappy stuff that delivers lolcats (here) and The Official Webpage of the Democratic People’s Republic of Korea (here). That’s right, the same DNS, BGP, click-jacking, and Microsoft security badness that plague most everybody. So how does an IT organization reliably and repeatably gain visibility into a 3rd party’s operational processes and current security state? More importantly, when we allow services to be delivered by a third party we lose all control over how they secure and maintain the health of their environment, and you simply can’t enforce what you can’t control.

In the best case an organization will be able to focus already-taxed IT resources on solving tomorrow’s problems while the problems of today are outsourced; in the worst case, using SaaS or cloud computing might end up as the digital equivalent of driving drunk through Harlem while wearing a blindfold and waving a confederate flag with $100 bills stapled to it, hoping that “nothing bad happens.” Yes, cloud computing could result in revolutionary benefits, or it could result in failures of Biblical proportions, but most likely it will result in incremental improvements to IT service delivery marked by cyclical periods of confusion, pain, disillusionment, and success, just like almost everything else in IT – this is assuming that there is such a thing as the “Cloud.”

Update: To answer Hoff’s original question, “How do we patch the cloud?” – no differently than we patch anything. Unfortunately the real problem is the “if and when does one patch the cloud,” which can result in mismatched priorities between the cloud owners and the cloud users.


Thanks to VMware you can barely turn around today without someone using the V-word and with every aspect of the English language, and some from ancient Sumeria, now beginning with V it will only get worse. There is no question that virtualization holds a lot of promise for the enterprise, from decreased cost to increased efficiency, but between the ideal and the reality is a chasm of broken promises, mismatched expectations and shady vendors waiting to gobble up your dollars and leave a trail of misery and despair in their wake. To help avoid the landmines I give you the top myths, misconceptions, half-truths and outright lies about virtualization.

Virtualization reduces complexity (I know what server I am. I’m the server, playing a server, disguised as another server)

It seems counter-intuitive that virtualization would introduce management complexity, but the reality is that all the security and systems management requirements facing enterprises today do not disappear simply because an OS is a guest within a virtual environment – in fact they increase. Not only does one need to continue to maintain the integrity of the guest OS (configuration, patch, security, application, and user management and provisioning), one also needs to maintain the integrity of the virtual layer as well. The problem is that this is done through disparate tools managed by FTEs (full-time employees) with disparate skill sets. Organizations also move from a fairly static environment in the physical world, where it takes time to provision a system and deploy the OS and associated applications, to a very dynamic environment in the virtual world, where managing guest systems – VMsprawl – becomes an exercise in whack-a-mole. Below are some management capabilities that VMware shared/demoed at VMworld.

  • VDDK (Virtual Disk Development Kit) – allows one to apply updates by mounting an offline virtual machine as a file system and then performing file operations against it. This ignores the fact that file operations are a poor replacement for systems management tasks such as applying patches: the method won’t work with Windows patch executables, nor with RPM patches, which must execute to apply.
  • Offline VDI: The virtual machine can be checked out to a mobile computer in anticipation of a user going on the road and being disconnected from the data center. Unfortunately, data transfers, including the diff’s are very large and one needs to be aware of the impact on the network.
  • Guest API – allows one to inspect the properties of the host environment, but this is limited to the hardware assigned to the virtual machine.
  • vCenter – a management framework for viewing and managing a large set of virtual machines across a large set of hardware; a separate management framework from what IT will use to manage physical environments.
  • Linked Clones – Among other things, this allows for multiple virtual machine images to serve as a source for a VM instance, however without a link to the parent, clones won’t work.
  • Virtual Machine Proliferation – since it is so easy to make a snapshot of a machine, and to provision a new machine simply by copying another and tweaking a few key parameters (like the computer name), there are tons of machines that get made. Keeping track of the resulting virtual machines – VMsprawl – is a huge problem. Additionally, disk utilization is often underestimated, as the number of these machines and their snapshots grows very quickly.

Want to guess how many start-ups will be knocking on your door to solve one or more of the above management issues?

Virtualization increases security (I’m trying to put tiger balm on these hackers nuts)

Customers drawn to virtualization should be aware that virtualization adds another layer that needs to be managed and secured. Data starts moving around in ways it never did before, as virtual machines are simply files that can be moved wherever. Static security measures like physical security and network firewalls don’t apply in the same way and need to be augmented with additional security measures, which will increase both cost and complexity. Network operations, security operations, and IT operations will inherit management of both the physical and the virtual systems, so their jobs get more complicated in some ways and simpler in others.

Again, it would seem counterintuitive that virtualization doesn’t increase security, but the reality is that virtualization adds a level of complexity to organizational security, marked by new attack vectors in the virtual layer and a lack of security built into virtual environments. This is made even more difficult by the expertise required to secure virtual environments – skills that are sadly lacking in the industry.

The Hoff has written extensively about virtualization security and securing virtual environments (here) – they are different, yet equally complex and hairy – and nowhere will you find a better overall resource to help untangle the Tet offensive of virtualization security or securing virtual environments than from the Hoff.

Virtualization will not require specialization (A nutless monkey could do your job)

What is really interesting about the current state of virtualization technology in the enterprise is the amount of specialization required to effectively manage and secure these environments. Not only will one need to understand, at least conceptually, the dynamics of systems and security management, one will also need to understand the technical implementations of the various controls, the use and administration of the management tools, and, of course, follow what is a very dynamic evolution of technology in a rapidly changing market.

Virtualization will save you money today (That’s how you can roll. No more frequent flyer bitch miles for my boy! Oh yeah! Playa….playa!)

Given the current economic climate, the CFO is looking for hard-dollar savings today. Virtualization has shown itself to provide more efficient use of resources and faster time to value than traditional environments; however, the reality is that reaching the promised land requires an initial investment in time, resources, and planning if one is to realize the benefits. Here are some areas where virtualization may provide cost savings, and some realities about each of them:

  • Infrastructure consolidation – Adding big iron and removing a bunch of smaller machines may look like an exercise in cost-cutting, but remember you still have to buy the big iron, hire consultants to help with the implementation, acquire new licenses, deploy stuff, and of course no one is going to give you money for the machines you no longer use.
  • FTE reduction – consolidating infrastructure should allow one to realize a reduction in FTEs, right? The problem is that now you need FTEs with different skill sets, such as how to actually deploy, manage, and secure these virtual environments, which now require separate management infrastructures.
  • Decrease in licensing costs – Yes, well, no, depends on if you want to pirate software or not, which is actually easier in virtual environments. With virtual sprawl software asset and license management just jumped the complexity shark.
  • Lower resource consumption – see the above references to complexity, security, and FTEs; however, one area where virtualization will have immediate impact is in power consumption and support of green IT initiatives – but being green can come at a cost.

Virtualization won’t make you rich, teach you how to invest in real estate, help you lose weight, or grow a full head of hair; it won’t make you attractive to the opposite sex, nor will it solve all your problems. It can improve the efficiency of your operating environment, but it requires proper planning, expectation setting, and careful deployment. There will be an initial, in some cases substantial, investment of capital, time, and resources, as well as an ongoing effort to manage the environment with new tools and train employees to acquire new skills. Many will turn to consulting companies, systems integrators, and service providers that will help them to implement solutions that generate a quick payback with virtually no risk and position your organization to take advantage of available and emerging real-time infrastructure enablers designed to closely align your business needs with IT resources.

As Les Grossman said in Tropic Thunder “The universe….is talking to us right now. You just gotta listen.”

Read Full Post »

Google recently “leaked” a cartoon providing information on their upcoming browser, “Chrome” (here) and (here) – personally, I will be impressed when the movie comes out and there is a guest appearance by Stan Lee. There has already been a tremendous amount of discussion and opinion on the ramifications of such a release, most of it centering on Google taking aim at Internet Explorer. Hoff believes this signals Google’s entry into the security market (here); obviously the acquisitions of GreenBorder and Postini and the release of Google Safe Browsing were clear signals that security is a critical part of the equation. But what is most important here, and what seems to be missed by much of the mainstream media, is that Google is creating the foundation to render the underlying Microsoft PC-based operating system obsolete and deliver the next evolutionary phase of client computing. Hoff pointed this out in his earlier post (here):

So pair all the client side goodness with security functions AND add GoogleApps and you’ve got what amounts to a thin client version of the Internet.

A highly portable, highly accessible, secure, thin-client-like, cloud-computing, software-as-a-service offering that in the next 5-10 years has the potential to render the standard PC-based operating systems virtually obsolete. Couple this with streaming desktop virtualization delivered through the Internet and we are quickly entering the next phase of the client-computing evolution. You doubt this? OK, ask yourself a question: if Google is to dominate computing through the next decade, can it be done on the browser battlefield of old, fought in the same trench-warfare manner as the early browser wars between Microsoft and Netscape? Or will Google attempt a much larger land grab? And what is larger than owning the desktop – fixed or mobile, physical or virtual, enterprise or consumer – regardless of the form it takes?

On another note, I recently posted the “7 Greatest Ideas in Security” (here); notice that many of them have been adopted by Google in their development of Chrome, including:

  • Security as part of the SDL – designed from scratch to accommodate current needs of stability, speed, and security; Chrome also introduces fuzzing and automated testing using Google’s massive infrastructure.
  • The principle of least privilege – Chrome is essentially sandboxed, which limits the possibility of drive-by malware or other vectors of attack that use the browser to infect the base OS or adjacent applications; the browser’s rendering processes cannot read or write the file system. Of course social engineering still exists, but Google has an answer for that too, providing their free Google Safe Browsing capabilities to automatically and continuously update a blacklist of malicious sites. Now they just need to solve the ecosystem problem of plug-ins bypassing the sandboxing security model.
  • Segmentation – multiple processes, each with its own memory and global data structures, not to mention the sandboxing discussed above.
  • Inspect what you expect – Chrome’s task manager provides visibility into how various web applications are interacting with the browser.
  • Independent security research – a fully open-source browser that you can guarantee will be put through the research gauntlet.
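The sandbox-plus-blacklist combination can be sketched in a few lines. This is a toy illustration only: the real Google Safe Browsing service matches hashed URL prefixes against a continuously synced feed, whereas the hosts and blacklist below are invented for the example.

```python
# Simplified sketch of blacklist-based "safe browsing". The real service
# compares hashed URL prefixes against a continuously updated list; here we
# use a plain in-memory set of hypothetical known-bad hosts.
from urllib.parse import urlparse

# Hypothetical blacklist -- in practice this would be synced from a feed.
BLACKLIST = {"malware.example.net", "phish.example.org"}

def is_safe(url: str) -> bool:
    """Return False if the URL's host appears on the blacklist."""
    host = urlparse(url).hostname or ""
    return host not in BLACKLIST

print(is_safe("https://www.example.com/index.html"))  # True
print(is_safe("http://malware.example.net/payload"))  # False
```

The interesting part is what the browser does with a hit: warn the user before the sandboxed renderer ever touches the page.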

Read Full Post »

It is easy to criticize; in fact, many have built entire careers on the foundation of Monday-morning quarterbacking. Not only is it human nature to look for improvements at the expense of old ideas, it is also far more humorous to point out what is wrong than to espouse the virtues of what works.

I recently posited what I believed to be the “11 Worst Ideas in Security” (here), but to every yin a yang, to every bad a good, to every Joker a Dark Knight. For the purpose of finding balance, I give you the 7 Greatest Ideas in Information Security…

7. Microsoft and Security as part of the SDL (Lord Vader finds your lack of faith disturbing)

The greatest flaw in information security is that we try to build security on top of a fundamentally weak foundation. Whether we are talking about the core routing infrastructure, the open standards and protocols that drive it, or the operating systems themselves, the majority of the information security industry is squarely aimed at resolving issues of past incompetence. Nowhere has this been more apparent than in the decade-plus of vulnerabilities found in Microsoft products. Crappiness exists in other products and is not an attribute solely patented by Microsoft; they just happen to power everything from my Mom’s computer to the Death Star, so when they fail it is almost always epic.

The Microsoft SDL (here) and the work that folks like Michael Howard (here) have done to develop security into a critical aspect of the SDL are not only admirable, they are inspiring. To have witnessed a company the size of Microsoft essentially redesign internal processes to address what was seen as a fundamental deficiency, and to then continue to develop these process changes into thought leadership, sets an example for all of us, small business and world-dominating enterprise alike. Implementing security as part of the SDL and utilizing concepts such as threat modeling to identify weaknesses and eradicate them before releasing code to the public is arguably one of the greatest ideas in security.
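The fuzzing and automated-testing side of an SDL is simple enough to sketch: hurl randomized input at a routine and treat any unhandled exception as a finding. The `parse_record` function below is a hypothetical stand-in, not real product code.

```python
# Toy illustration of fuzz testing as practiced within an SDL: feed random
# strings to a parser and collect any inputs that make it blow up.
import random
import string

def parse_record(data: str) -> dict:
    """Hypothetical parser: expects 'key=value' pairs separated by ';'."""
    out = {}
    for pair in data.split(";"):
        if not pair:
            continue
        key, _, value = pair.partition("=")
        out[key.strip()] = value.strip()
    return out

def fuzz(target, iterations=1000, seed=42):
    """Feed random strings to `target`; return inputs that crashed it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        data = "".join(rng.choice(string.printable)
                       for _ in range(rng.randint(0, 40)))
        try:
            target(data)
        except Exception as exc:  # any unhandled exception is a finding
            crashes.append((data, exc))
    return crashes

print(len(fuzz(parse_record)))  # 0 crashes for this forgiving parser
```

Real SDL fuzzing runs millions of mutated inputs against file formats and protocol handlers; the loop above is the same idea at postage-stamp scale.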

6. The Principle of Least Privilege (Not all of us can know Zarathustra)

Since Saltzer and Schroeder formulated the concept, we have been striving to achieve it. It is neither new nor novel, but it is critical to how we design computing systems and how we develop and implement security controls. It contradicts our Nietzschean side, which feels that constraints and rules are important for the common man but shouldn’t apply to us personally; nevertheless, nothing should be afforded more privilege than it needs, and this is one of the “laws of security”.
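A minimal code-level sketch of the principle, assuming nothing beyond the Python standard library: a component that only needs to read configuration is handed a read-only view, never the mutable store itself.

```python
# Least privilege in miniature: grant read access where only read is needed.
from types import MappingProxyType

def make_readonly(config: dict):
    """Return a live, read-only view of `config`."""
    return MappingProxyType(config)

settings = {"db_host": "localhost", "debug": False}
view = make_readonly(settings)

print(view["db_host"])       # reads work
try:
    view["debug"] = True     # writes are refused
except TypeError as exc:
    print("write denied:", exc)
```

The same instinct scales up: run services as unprivileged users, scope API tokens to single operations, and so on.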

5. Segmentation (Your Mendelian trait is in my algorithmic reasoning)

Segmentation of duties, of networks, of memory, of code execution – of anything and everything that should never mix. Combine a lack of segmentation with a failure to implement the principle of least privilege and you turn a simple browser-based buffer overflow into a highly damaging payload that can easily replicate throughout the Internets. For us to truly realize improvements in security – defined as fewer successful security incidents, real and imagined, and marked by an increase in visibility and control over all of our computing systems – segmentation of everything is an ideal to strive for.
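Process boundaries are the bluntest segmentation tool an operating system gives you. A small sketch (the job payloads are made up) showing that a crash in one address space leaves its siblings, and the parent, untouched:

```python
# Segmentation via OS processes: each job runs in a fresh interpreter with
# its own memory, so a "compromised" job dies alone.
import subprocess
import sys

def run_isolated(code: str) -> int:
    """Run `code` in a separate interpreter process; return its exit status."""
    return subprocess.run([sys.executable, "-c", code]).returncode

# The "bad" job dies in its own address space...
bad = run_isolated("raise RuntimeError('simulated compromise')")
# ...while the parent and its sibling job are untouched.
good = run_isolated("print('healthy job ran fine')")
print(bad, good)
```

This is essentially the multi-process model Chrome applies per tab, discussed earlier on this page.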

4. Inspect what You Expect (Question everything)

Also known as “trust but verify,” as used by the Gipper in his dealings with the Russians during the Cold War. Trust is important, but it is even more important to validate that trust. One of the most significant changes every software developer can make today, whether developing COTS or internal applications, is to allow security personnel to inspect that the application is functioning, being accessed, and being managed according to the controls that the organization expects. From networking to applications to users to virtualization to quantum anything, this principle must extend across every layer and concept of computing, today and tomorrow.
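One cheap way to make an application inspectable is to instrument its sensitive operations so reviewers can verify actual use against policy. A sketch, with a hypothetical `change_password` operation standing in for anything security cares about:

```python
# "Inspect what you expect": record every call to a sensitive function so the
# audit trail can be compared against what policy assumes is happening.
import functools
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def audited(func):
    """Decorator that logs each call for later inspection."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        audit_log.info("call %s args=%r kwargs=%r", func.__name__, args, kwargs)
        return func(*args, **kwargs)
    return wrapper

@audited
def change_password(user: str) -> str:
    # Hypothetical sensitive operation.
    return f"password changed for {user}"

print(change_password("alice"))
```

The point is not the decorator; it is that the audit trail exists at all, so "expect" can be checked against "inspect".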

3. Independent Security Research (So, I’ve been playing with something…no not that)

The ridiculous vulnerability-disclosure debate aside, independent security research has had a significant benefit on the security industry. The best example is the recent DNS vulnerability that has been discussed, dissected, and covered ad nauseam. Since its disclosure it has not only provided more awareness of the fundamental flaws in core infrastructure protocols like DNS and assisted in the implementation of countermeasures, it has actually driven government policy, as the OMB (Office of Management and Budget) recently mandated the use of DNSSEC for all government agencies (here) – sweet!

2. Cryptography and Cryptanalysis (From Bletchley with Love)

From the Greek historian Polybius to the German surrender in May of 1945 to ECHELON, cryptography and cryptanalysis have played a major role in our lives. They have shaped the outcome of wars and changed foreign and domestic policy. Cryptography is becoming the cornerstone of the highly distributed, intermittently connected world of technical gadgetry we live in, and it can make the difference between coverage on the front page of the Wall St. Journal and a brief mention in a disgruntled employee’s blog. Although I wouldn’t argue that encryption as a technology is without flaw, the theory and practice of hiding information, and its dance partner code breaking, continue to drive some of the greatest advances in information security.
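On the defensive side of that dance, a few lines of standard-library Python show one of applied cryptography's everyday workhorses: an HMAC that lets a recipient detect tampering with a message. The key below is a placeholder; real keys come from a key store, not source code.

```python
# Message authentication with an HMAC: anyone who alters the message without
# the shared key cannot produce a matching tag.
import hashlib
import hmac

KEY = b"shared-secret-key"  # hypothetical; never hard-code real keys

def sign(message: bytes) -> str:
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(sign(message), tag)

tag = sign(b"wire $1,000,000 to account 42")
print(verify(b"wire $1,000,000 to account 42", tag))   # True
print(verify(b"wire $9,000,000 to account 66", tag))   # False
```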

1. Planning, Preparation, and Expectation Setting (Caution: Water on Road, may make road slippery)

Yes, a bit of a yawner, but since the beginning of forever, more failures, more disastrous outcomes and more security incidents have resulted from a lack of proper planning, preparation and expectation setting than from all the exploits of all the hackers of all the world combined. As an analyst it became shockingly clear to me that the majority of failed technology deployments were not the result of a failure in the technology, but of poor planning, a lack of preparation and little to no expectation setting; the entire “trough of disillusionment” is riddled with the waste of mismatched technological expectations. The greatest idea in security is not sexy, funny, or terribly enlightened, but it is simple, achievable, repeatable and can be implemented immediately: plan, prepare and set the proper expectations.

Some may argue that something has been forgotten or that the order is wrong, but I would counter that we must learn to develop securely, implement the proper security controls, verify the functioning of those controls, leverage the research of the greater community, ensure that what cannot be protected is hidden, and from beginning to end properly plan, prepare, and set the right expectations. These are the greatest ideas in security, and if we learn to embody these principles we will move the industry forward, as opposed to constantly feeling like we can only clean up the incompetence that surrounds us.

Read Full Post »

I was reading a book entitled “A Whole New Mind: Why Right-Brainers Will Rule the Future”. It isn’t terribly well written and has the flow of an idea that was shoe-horned into a literary context, but it is interesting nonetheless. Anyway, against the backdrop of DNSgate (btw – exploit code has been posted – here – thanks, guys!) and the complete and utter failure of the security industry to offer anything beyond a never-ending hamster wheel of suites, widgets, add-ons, and modules, the book gave me pause as I reflected on what, for the most part, is a feeling of defeat and despair among security professionals.

This is a feeling that ebbs and flows with the conference season, peaking generally around mid-year with the introduction of clever methods of attack and exploitation presented in the carnival-like atmosphere of a Black Hat or *con.

“Come one, come all, see the bearded lady swallow a flaming sword whilst revealing the latest virtual exploit guaranteed to introduce a completely undetectable malicious hypervisor as she rides on the shoulders of the world’s strongest man, who will devastate the entire Internet infrastructure in 10 seconds with a single finger”

Undetectable hypervisors? 10 seconds to Internet destruction? 1,001 ways to craft a nefarious browser attack? Conceptually these are pretty scary, especially if you are reading your email and Robert Graham singles you out during one of his sidejacking presentations, showing the world how easy it is to own you and how careless you are for being owned – you wall-of-sheep alumni know who you are. Honestly, who wouldn’t want to throw in the towel and cede Internet dominance to a svelte 15-year-old Norwegian hacker with a bad skin condition or a gang of Nigerian spammers?

It would appear that doing business on the Internet is like Dom DeLuise swimming naked through shark-infested waters with an open wound while wearing a necklace of dead penguins and carrying a 3 lb salami.

It has been argued time and again that the bad guys have the advantage, that we are on the losing side of the OODA loop, that for the most part we are simply sitting ducks and the best we can do is choose not to sit so close to the gaping jaws of a large crocodile and pray that we do not become prey. I contend that this feeling is misguided and incorrect.

Perhaps it has been dismissed as inconsequential, or perhaps we have been so blinded by the constant carpet-bombing of FUD marketing and the ongoing orgy of disclosure that we are simply numb to it, but we have an inherent advantage: we use the right side of our brains, whereas the bad guys really have no need to. We are clever, we pair art with science, we are driven to find the edge cases, we strive to find the unique and obscure. We believe it is the other way around, but that belief is a result of the complete incompetence of the major security vendors who, like the diabetes-product vendors, will forever keep us in a never-ending cycle of finger-pricking and insulin-injecting security practices instead of actually trying to solve problems.

Wait, what – we have the advantage? I know it sounds like security blasphemy, but don’t jump off the roller coaster of semi-rational fun just yet; we still need to ride through the loop de loop.

  • The majority of ground-breaking security research and discoveries, especially of the “holy shit” variety, come from the good guys, not the bad.
  • According to the recent Verizon breach-disclosure statistics, 85% of attacks are opportunistic, which leads one to believe that (a) there is no reason for the bad guys to find unique ways to exploit, and (b) we are still our own worst enemy.
  • There is no end in sight to the lack of security prowess, ensuring an endless supply of easy targets for the bad guys to attack – remember, if we believe that attacks are becoming more financially motivated, then a cost-benefit analysis will drive an attacker to take the easiest, least risky path to exploit.

The internet is resilient, business is even more so, and the good guys tend to spend more time on the problem than the bad guys.

Read Full Post »

Security metrics, which I have posted on in the past (here), and (here), are almost as elusive as security ROI. But unlike the mystical pink unicorn that is security ROI, security metrics are real, tangible and meaningful. Why is it then that we have so much difficulty in defining metrics that are both simple in their implementation and significant in their impact on the organization? I believe much of this stems from two flaws in how most organizations approach information security.

The first problem is that, for the most part, security is a reactive, ad-hoc discipline, primarily focused on responding to an incident. This drives post-incident metrics such as how many virus outbreaks did we experience, or how many attacks did our IDS detect, or how much SPAM did our anti-spam thingie block. These might be useful in determining, well, those things above, but they are hardly telling of the effectiveness or efficiency of one’s IT security program.

The second problem is how an organization communicates between groups. Operations, audit & compliance, and security are examples of domains within an organization that use a very different language to communicate problem/resolutions.

Vulnerability assessment is a great example of the problem of cross-organizational communication. Security will look at vulnerability assessment data from the perspective of unique, distinct conditions; operations will look at the data with an eye toward what remediation must be done; and audit & compliance might be concerned with how the data is relevant to regulatory initiatives. Operationally these are all very different ways of describing environmental variables, and it is very difficult to satisfy each of these groups with a simple metric. “How vulnerable are we?” – to what? “How many vulnerabilities exist in our environment?” – why does it matter? Operations doesn’t care how many unique, distinct vulnerabilities some VA scanner found; their charter is availability.
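The gap can be made concrete with a toy sketch: one set of scan findings, summarized three different ways for three different audiences. The record fields below are invented for illustration, not any particular scanner's schema.

```python
# The same vulnerability scan data, viewed through three organizational lenses.
from collections import Counter

findings = [  # hypothetical scanner output
    {"host": "web01", "cve": "CVE-2008-1447", "fix": "patch bind", "reg": "PCI"},
    {"host": "web02", "cve": "CVE-2008-1447", "fix": "patch bind", "reg": "PCI"},
    {"host": "db01",  "cve": "CVE-2008-0166", "fix": "regen keys", "reg": "SOX"},
]

security_view = sorted({f["cve"] for f in findings})       # distinct conditions
operations_view = Counter(f["fix"] for f in findings)      # remediation workload
compliance_view = Counter(f["reg"] for f in findings)      # regulatory exposure

print(security_view)
print(operations_view.most_common())
print(compliance_view.most_common())
```

Three truthful answers from one dataset; without a common, policy-driven vocabulary, each group thinks the others are measuring the wrong thing.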

A common language that is driven by policy and used in terms of the business is critical to ensuring cross-organizational communication. Ideally we would be able to draft metrics that address effectiveness and efficiency: how effective is our IT security and operations program, and how efficient are we in detecting and remediating change? Most of this would require a move toward a policy-driven approach and SLAs to monitor adherence to plan, which we will look at in a future post. I did want to take a minute and list some metrics that every organization must be able to address today, because if you cannot answer these basic questions about your environment with any degree of accuracy, then all the metrics we come up with will fall short.

1. How many computing devices are actively connected to my network right now and how many of these do we actually own?

2. Of these how many do we actively manage (have full visibility into and command and control of)?

3. What percentage of these are compliant with basic security policies, including…?
a. Endpoint security is up to date and configured in compliance with corporate policy (Anti Virus, Anti Spyware, Personal Firewall, HIPS, Encryption, et al)
b. Systems are configured against a security baseline as defined by NIST, NSA, DISA, CIS, etc…
c. Systems are patched to corporate standards

4. How effective is our change management process, and how quickly can we effect change in the environment? For example, once a decision has been made to change some environmental variable (modify PFW settings, change the configuration of the device itself, update DAT files, reconfigure HIPS/PFW settings, etc.), what percentage of the environment can we verify conforms to those changes within a 24-hour period?

5. What audit mechanisms are in place to detect changes to a corporate COE (common operating environment), how often do we monitor for non-compliance, what is the process for remediating non-compliant devices, and how long does it take from detection to remediation?

If your organization can repeatably and verifiably answer these 5 questions, you are well on your way to metrics nirvana.
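As a back-of-the-envelope illustration, questions 1 through 3 reduce to simple ratios over an asset inventory. The device records and policy fields below are invented placeholders; the hard part in practice is getting trustworthy inventory data at all.

```python
# Sketch: turning a (hypothetical) device inventory into the basic metrics.
devices = [
    {"owned": True,  "managed": True,  "av_current": True,  "patched": True},
    {"owned": True,  "managed": True,  "av_current": False, "patched": True},
    {"owned": True,  "managed": False, "av_current": False, "patched": False},
    {"owned": False, "managed": False, "av_current": False, "patched": False},
]

def pct(part: int, whole: int) -> float:
    return round(100.0 * part / whole, 1) if whole else 0.0

connected = len(devices)
owned = sum(d["owned"] for d in devices)
managed = sum(d["managed"] for d in devices)
compliant = sum(d["av_current"] and d["patched"] for d in devices)

print(f"connected: {connected}, owned: {owned} ({pct(owned, connected)}%)")
print(f"managed:   {managed} of {owned} owned ({pct(managed, owned)}%)")
print(f"compliant: {compliant} of {managed} managed ({pct(compliant, managed)}%)")
```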

Read Full Post »

Web threats are up 1,564% since 2005; vulnerabilities continue to number in the thousands annually; malware infections skyrocketed to over 8 million in November of 2007 alone; SPAM accounts for up to 90% of all email traffic; there are an estimated 3 million-plus bot-compromised machines connected to the Internet at any given moment; high-impact regional threats and targeted attacks have increased dramatically year over year since 2005; and there is a breach a day in what has become an orgy of disclosure, punctuated by a tsunami of useless loss statistics. This is all against a backdrop of new vectors of attack introduced by mobile computers, virtualization, SaaS, and other disruptive technologies. Clearly the current reactive, ad-hoc, threat-enumeration information security model is broken, and given the economics of malware and cybercrime it will only get worse…

Sample data from research on the underground digital economy in 2007, from the Trend Annual Threat Report 2007 (here):

  • Pay-out for each unique adware installation – $0.30 in the US
  • Malware package, basic version – $1,000 – $2,000
  • Malware package with add-on services – $20 starting price
  • Undetected copy of an information-stealing Trojan – $80, may vary
  • 10,000 compromised PCs – $1,000
  • Stolen bank account credentials – $50 starting price
  • 1 million freshly-harvested emails – $8 and up, depending on quality

Recently I posted some thoughts on evolving information security toward distributed, collective intelligence, or swarm intelligence (here) and (here), and came across a project at the University of Washington called Phalanx (here), via /.

Their system, called Phalanx, uses its own large network of computers to shield the protected server. Instead of the server being accessed directly, all information must pass through the swarm of “mailbox” computers.

The many mailboxes do not simply relay information to the server like a funnel – they only pass on information when the server requests it. That allows the server to work at its own pace, without being swamped.

“Hosts use these mailboxes in a random order,” the researchers explain. “Even an attacker with a multimillion-node botnet can cause only a fraction of a given flow to be lost,” the researchers say.

Phalanx also requires computers wishing to start communicating with the protected server to solve a computational puzzle. This takes only a small amount of time for a normal web user accessing a site. But a zombie computer sending repeated requests would be significantly slowed down.

This is a very interesting way to deal with the problem of DDoS attacks. It isn’t difficult to imagine how one could use a swarm of intelligent agents to cooperate and shield, or even to identify patterns of behavior that are representative of malicious or nefarious actions and counter an attack in progress, or an impending one, before it has a chance to impact the environment.
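The computational puzzle Phalanx requires is essentially a hashcash-style proof of work: the client must find a nonce whose hash carries a run of leading zero bits, cheap for one legitimate request but costly for a bot issuing thousands. A sketch of the general idea follows; it is not Phalanx's actual construction.

```python
# Client-puzzle sketch: solving costs many hashes, verifying costs one.
import hashlib
from itertools import count

def solve(challenge: bytes, bits: int = 12) -> int:
    """Find a nonce so sha256(challenge + nonce) has `bits` leading zero bits."""
    for nonce in count():
        digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - bits) == 0:
            return nonce

def check(challenge: bytes, nonce: int, bits: int = 12) -> bool:
    """Verification is a single hash -- asymmetrically cheap for the server."""
    digest = hashlib.sha256(challenge + str(nonce).encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

nonce = solve(b"session-token-123")
print(nonce, check(b"session-token-123", nonce))
```

Raising `bits` doubles the average work per increment, which is how a defender can dial up the cost during an attack without touching legitimate low-rate users much.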

Read Full Post »
