
Posts Tagged ‘Gartner’

You’re not really sure how it happened, but sometime between last year and the summer of 2011 you were suddenly facing a big data problem, or you were being told you were facing a big data problem, or, more accurately, you were being told that you needed a big data solution.

Funny thing was that you hadn’t really done anything drastic over the last couple of years that would seem to indicate a tsunami of data was about to breach your storage floodgates, but then again it wasn’t like you watched yourself going bald either.


Read Full Post »

“Information is not knowledge” – Albert Einstein

I recently read a couple of posts about BigData from my friend Chris Hoff: “Infosec Fail: The Problem With BigData is Little Data” and “More on Security and BigData…Where Data Analytics and Security Collide”.

In these posts Hoff posits that the mass centralization of information will benefit the industry and that monitoring tools will experience a boon, especially those that leverage a cloud-computing architecture…

This will bring about a resurgence of DLP and monitoring tools using a variety of deployment methodologies via virtualization and cloud that was at first seen as a hindrance but will now be an incredible boon.

As Big Data and the databases/datastores it lives in interact with the proliferation of PaaS and SaaS offers, we have an opportunity to explore better ways of dealing with these problems — this is the benefit of mass centralization of information.

Hoff then goes on to describe how new data warehousing and analytics technologies, such as Hadoop, would positively impact the industry…

Even when we do start to be able to integrate and correlate event, configuration, vulnerability or logging data, it’s very IT-centric.  It’s very INFRASTRUCTURE-centric.  It doesn’t really include much value about the actual information in use/transit or the implication of how it’s being consumed or related to.

This is where using Big Data and collective pools of sourced “puddles” as part of a larger data “lake” and then mining it using toolsets such as Hadoop come into play…


Read Full Post »

From Computer World UK (here)

Black Friday and Cyber Monday have come and gone. Now it’s time for Amrit Wednesday, or Thursday, or Friday—oh, whatever—to pay our industry back for all the dubious cheer it spread in 2009. Believe me, when it comes to this list, it’s much better to give than receive. Here goes:


Read Full Post »

Gartner Magic Quadrant

A storm is brewing throughout the analyst community as one of the largest and most influential technology analyst firms comes under fire for one of its most highly prized research artifacts – the Gartner Magic Quadrant (MQ). ZL Technologies has filed a lawsuit alleging damages from Gartner’s Email and Archiving MQ and the MQ process as a whole, in which ZL has been positioned as a Niche Player since 2005.

From ZL Technologies’ website (here)…

ZL Technologies, a San Jose-based IT company specializing in cutting-edge enterprise software solutions for e-mail and file archiving, is challenging Gartner Group and the legitimacy of Gartner’s “Magic Quadrant.” In a complaint filed on May 29, 2009, ZL claims that Gartner’s use of their proprietary “Magic Quadrant” is misleading and favors large vendors with large sales and marketing budgets over smaller innovators such as ZL that have developed higher performing products.

The complaint alleges: defamation; trade libel; false advertising; unfair competition; and negligent interference with prospective economic advantage.

For those unfamiliar with analysts, Gartner, and the Magic Quadrant, let me provide a quick overview:


Read Full Post »


Not too long ago I embarked on creating a podcast series that would provide more regularity than the blog. Beyond the Perimeter has been a tremendous amount of fun, and as we just posted our 50th podcast I wanted to reflect on some of the highlights and the wonderful guests we have been honored to have join us.

Beyond the Perimeter iTunes subscription

Beyond the Perimeter Direct XML Feed


Read Full Post »


Consolidation is the major benefit or “killer app” for server/data center virtualization. Standardization is the major benefit or “killer app” for client-side virtualization.

As I was pondering the challenges of current systems management processes, researching the latest and greatest from the client-side virtualization vendors, and talking to a lot of large organizations, I was trying to find the one thing that explained the operational benefits of client-side virtualization. There is more than one, but it really does come down to standardization. Allow me to explain…

Read Full Post »


To address the increasing cost and complexity of managing dynamic IT environments, organizations are trying to understand how to adopt virtualization technologies. The value proposition and “killer app” are quite clear in the data center; however, less attention has been given to the opportunities for endpoint virtualization. There are multiple methods to address client-side virtualization: hosted virtual desktops (HVD), bare-metal hypervisors, local and streaming virtual workspaces, and a range of options that layer on top of and between them all, such as application virtualization, portable personalities, and virtual composite desktops. Still, there is a tremendous amount of confusion and there are even more misconceptions about the benefits of client-side virtualization than about server virtualization. The major architectural flaw in almost all of these solutions is that they remain very back-end and infrastructure heavy, which undercuts the promised benefits of lower cost and complexity.

Unlike server virtualization, where adoption was driven from the bottom up, that is, from the hypervisor up through the other stacks, adoption of endpoint virtualization technologies is moving top down, starting with single applications within an existing OS. Application virtualization adoption will accelerate over the next 12-18 months, with Gartner life cycle management analysts suggesting that it will be included in the majority of PC life cycle RFPs in 2010 and beyond. Workspace/desktop virtualization will follow over the next 24-36 months, as will endpoint virtualization infrastructures. The adoption of both will align with organizations’ desktop refresh cycles. Considering that the average refresh cycle is between 3-5 years, and that many organizations are looking at desktop refreshes to support Vista (which probably has only about 10% market adoption) and Windows 7, it is conceivable that we will begin seeing accelerated adoption of desktop and infrastructure virtualization over the next 24-36 months as organizations rethink their current systems management processes and technologies.

Let’s look at the 4 client/desktop virtualization models I believe will become the most prevalent over the next 3-5 years…

Read Full Post »


Quick thought for the day. Most technologies in the security world move through a predictable cycle of adoption. First an organization implements a solution to gain visibility into the scope of the problem (VA, IDS, DLP/CMF, SIEM). Then, once it becomes apparent that the problem is vast and overwhelming, it moves to operationally implement technical controls to protect the environment and to enforce organizational policies. When this switch-over occurs, the adoption of the pure visibility tools becomes eclipsed by the control tools. This doesn’t mean that the visibility tools are ineffective; it generally means that the scope of the problem is understood to the point that an organization can effectively implement controls. It also means that the problem has successfully moved from the security team to the operations team. You can apply this same logic to any segment of security and to any new technology, including cloud computing, virtualization, and all the little shiny objects in between.

Examples of this movement from visibility to control include intrusion detection, vulnerability assessment, and content monitoring and filtering. Let’s look at VA. Its initial use was to determine the scope of the ‘exposure’ problem, that is, to scan the environment against a database of known vulnerabilities to determine the extent of exposure. Unfortunately the volume of output was very high and was presented in a format that was not easily consumable or actionable by the IT operations team. What exactly does one expect the server admin to do with 300 pages of vulnerability data? There were also inherent issues of fidelity. The use of VA tools moved into targeted scans to determine what needed to be patched, which resulted in the operational implementation of patch management technologies, which soon overtook the market adoption of vulnerability assessment tools. There was also the pressure of auditors looking for the implementation of technical controls; although vulnerability assessments were viewed as an important first step, without the workflow and controls to address the volume of vulnerability data they proved less effective in improving operational security than was originally thought.
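To make the handoff from scanning to patching concrete, here is a minimal sketch of turning raw findings into a short, actionable list per asset owner. The record shapes and field names are hypothetical, not any particular scanner’s export format; the point is the severity filtering and per-team grouping that the early VA tools left as an exercise for operations:

```python
from collections import defaultdict

# Hypothetical, simplified scan findings -- a real scanner exports
# thousands of these as XML/CSV; the shape here is illustrative only.
findings = [
    {"host": "web01", "owner": "web-team", "cve": "CVE-2008-4250", "cvss": 10.0, "patch": "MS08-067"},
    {"host": "db02",  "owner": "dba-team", "cve": "CVE-2002-0649", "cvss": 7.5,  "patch": "SQL-SP3"},
    {"host": "web01", "owner": "web-team", "cve": "CVE-2005-1234", "cvss": 2.1,  "patch": None},
]

def actionable(findings, min_cvss=7.0):
    """Drop low-severity noise and findings with no available fix,
    then group what's left by the team that has to do the work."""
    work = defaultdict(list)
    for f in findings:
        if f["cvss"] >= min_cvss and f["patch"]:
            work[f["owner"]].append((f["host"], f["patch"], f["cve"]))
    return dict(work)

for owner, tasks in actionable(findings).items():
    print(owner)
    for host, patch, cve in tasks:
        print(f"  {host}: apply {patch} ({cve})")
```

Three hundred pages of findings become a couple of lines of marching orders per team; that translation is the difference between a visibility tool and an operational one.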

It became clear that vulnerability management needed to cross the chasm to become an operationally actionable tool; without remediation capabilities the organization would always be under a mountain of vulnerabilities, and the use of the technology would linger in the trough of disillusionment. Security configuration management met that need. It allowed an organization to define the desired configuration state of an environment against industry best practices (NIST, DISA, CIS, etc.) and then to operationally implement technical controls to identify non-compliant devices and enforce policy. Security configuration management also had the benefit of providing a common language between the security, audit, and operations teams. I wrote about this in a series of posts (here), (here), and (here).
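A minimal sketch of the security configuration management idea, assuming illustrative baseline values rather than an actual CIS/DISA/NIST benchmark: define the desired state once, then mechanically flag any device that drifts from it.

```python
# Desired configuration state -- in practice derived from a hardening
# benchmark (CIS, DISA STIG, NIST); these settings are illustrative.
baseline = {
    "password_min_length": 12,
    "ssh_root_login": "no",
    "firewall_enabled": True,
}

# Observed state as reported by an agent or a remote query.
devices = {
    "web01": {"password_min_length": 12, "ssh_root_login": "no",  "firewall_enabled": True},
    "db02":  {"password_min_length": 8,  "ssh_root_login": "yes", "firewall_enabled": True},
}

def drift(baseline, observed):
    """Return each setting where a device differs from the baseline,
    as {setting: (observed_value, expected_value)}."""
    return {k: (observed.get(k), v) for k, v in baseline.items() if observed.get(k) != v}

for name, state in devices.items():
    issues = drift(baseline, state)
    print(f"{name}: " + ("compliant" if not issues else f"non-compliant {issues}"))
```

The report speaks in specific settings on specific devices, which is exactly the common language the security, audit, and operations teams need.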

Read Full Post »

Cloud computing, or as I like to call it, the return of the mainframe and thin-client computing architecture – only cloudier – has been creating a lot of interesting discussion throughout IT recently.

For our purposes we will define cloud computing as any service, or set of services, delivered through the Internet (the cloud) without requiring additional infrastructure on the part of the organization. Although a broad definition, it encompasses everything from storage and capacity services, to applications like CRM or email, to development platforms, and everything in between that is delivered and accessed through the Internet.

Obviously the concept of ubiquitous broadband connectivity, combined with a highly mobile workforce able to remain productive independent of location, and with the promise of limited, if any, additional infrastructure costs, offers new levels of efficiency for many organizations looking to leverage and extend their shrinking IT budgets.

There is little doubt that cloud computing offers benefits in how organizations drive greater value from their IT dollars, but there are also many trade-offs that can dramatically reduce, or even negate, those benefits altogether. Understanding these trade-offs will allow an organization to make the right decisions.

As with most advancements in computing, security is generally an afterthought, bolted on once the pain is great enough to elicit the medication. Security is sort of like the back pain of IT: enhancements tend to come only once agility (availability, reliability, etc.) is somehow inhibited, or because they are prescribed as the result of a doctor’s visit (compliance audit). Cloud computing is no different.

But before we can understand the strengths or inadequacies of cloud computing security models, we need an understanding of the baseline security principles that all organizations face. This will allow us to draw parallels and define what is and isn’t an acceptable level of risk.

Again, for the sake of brevity I will keep this high-level, but it really comes down to two main concepts: visibility and control. All security mechanisms are an exercise in trying to gain better visibility or to implement better controls, all balanced against the demands of the business. For the most part the majority of organizations struggle with even the most basic of security demands. For example, visibility into the computing infrastructure itself:

  • How many assets do you own? How many are actively connected to the network right now? How many do you actively manage? Are they configured according to corporate policy? Are they up to date with the appropriate security controls? Are they running licensed applications? Are they functioning to acceptable levels? How do you know?
  • How about the networking infrastructure? databases? application servers? web servers? Are they all configured properly? Who has access to them? Have they been compromised? Are they secure to the universe of known external threats? How do you know?
  • Do internal applications follow standard secure development processes? Do they provide sufficient auditing capabilities? Do they export this data in a format that can be easily consumed by the security team? Can access/authentication anomalies be easily identified? How do you know?
  • What happens when an FTE is no longer allowed access to certain services/applications? Are they able to access them even after they have been terminated? Do they try? Are they successful? How do you know?

These are all pretty basic security questions, and they are only a small subset of the issues IT is concerned with, but most organizations cannot answer any one of them, let alone all of them, without significant improvement to their current processes. It is fair to say that the majority of organizations lack adequate visibility into their computing infrastructures. Even the first questions, reconciling what you own against what is actually on the network, take deliberate work, as the sketch below illustrates.
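As a minimal sketch of what answering those first questions involves (the two sets below are stand-ins for a CMDB export and a network discovery scan), visibility starts with reconciling what you believe you own against what is actually on the wire:

```python
# What the asset database says you own versus what network
# discovery actually saw -- both sets are illustrative stand-ins.
inventory  = {"web01", "db02", "hr-laptop-17", "print-svr"}
discovered = {"web01", "db02", "hr-laptop-17", "unknown-node-9"}

accounted_for = inventory & discovered   # known and on the network
missing       = inventory - discovered   # owned, but nowhere to be seen
unmanaged     = discovered - inventory   # on the network, not in inventory

print(f"accounted for:        {sorted(accounted_for)}")
print(f"missing from network: {sorted(missing)}")
print(f"unmanaged/rogue:      {sorted(unmanaged)}")
```

The set arithmetic is trivial; keeping the two inputs accurate and current is the part most organizations never get to.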

Of course the lack of visibility doesn’t imply a lack of control:

  • Are assets that are not actively managed blocked from accessing corporate services? Are they blocked from accessing internal applications? Based on what criteria – lack of policy adherence? How granular is the control? And if you lack visibility how can you be sure the control is working?
  • What controls have you implemented to prevent external access to internal resources? Does this apply to mobile/remote employees? How long after an employee is released does it take to remove access to all corporate resources? What authentication mechanisms are in place to validate the identity of an employee accessing corporate resources? Without visibility how do you know?
  • What controls are in place to ensure the concept of least privilege? What controls are in place to ensure internal applications (web, non-web, or modifications to COTs) adhere to corporate secure coding standards? If you lack visibility how do you know?
  • What controls are in place to ensure that a malicious actor cannot access internal corporate resources if they have stolen the credentials of a legitimate employee? How do you know the controls are adequate?

Again, this is just a small subset of the controls IT must be concerned with. As with the problem of visibility, most organizations are barely able to implement proper controls for some of these, let alone the universe of security controls required in most organizations. Let me state, in case it isn’t obvious: the goal of security isn’t to prevent all bad things from occurring – this is an unachievable goal – the goal of security is to implement the visibility and controls that allow an organization to limit the probability of a successful incident, and when an incident does occur, to quickly limit its impact. To make one of these concrete, consider the employee-termination question; a sketch follows.
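Picking up the termination question flagged above, here is a minimal sketch, with hypothetical record shapes, of cross-referencing HR’s termination list against authentication logs. Any hit is either a control failure (a successful login) or evidence of an attempted one:

```python
from datetime import datetime

# Hypothetical inputs: HR's termination dates and raw auth events.
terminated = {"jdoe": datetime(2008, 9, 1), "asmith": datetime(2008, 9, 15)}
auth_log = [
    {"user": "jdoe",   "when": datetime(2008, 9, 3),  "success": True},
    {"user": "bjones", "when": datetime(2008, 9, 3),  "success": True},
    {"user": "asmith", "when": datetime(2008, 9, 20), "success": False},
]

def post_termination_access(terminated, auth_log):
    """Flag any authentication attempt made after the user's termination date."""
    return [e for e in auth_log
            if e["user"] in terminated and e["when"] > terminated[e["user"]]]

for event in post_termination_access(terminated, auth_log):
    outcome = "SUCCEEDED" if event["success"] else "failed"
    print(f"{event['user']} attempted access on {event['when']:%Y-%m-%d} and {outcome}")
```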

So what happens when we move services to the cloud? When we allow services to be delivered by a third party, we lose all control over how they secure and maintain the health of their environment, and in many cases we lose all visibility into the controls themselves. That being said, cloud computing platforms have the potential to offer adequate security controls, but it will require a level of transparency the providers will most likely not be comfortable providing.

Our current computing paradigm is inherently insecure because, for the most part, it is built on top of fundamentally insecure platforms. There is some potential for cloud computing to balance these deficiencies, but to date there has been little assurance that it will. Some areas that require transparency, and that will become the fulcrum points of a sound cloud computing security model:

  • Infrastructural security controls
  • Transport mechanism and associated controls
  • Authentication and authorization access controls
  • Secure development standards and associated controls
  • Monitoring and auditing capabilities
  • SLA and methods for deploying security updates throughout the infrastructure
  • Transparency across these controls and visibility into how they function on a regular basis

Most organizations struggle with their own internal security models; they are barely able to focus their efforts on a segment of the problem, and in many cases they are ill-equipped to implement the security mechanisms needed to meet even a base level of security controls. For these organizations, looking to a third party to provide security controls may prove beneficial. Organizations that are highly efficient in implementing their security programs, are risk averse, or are under significant regulatory pressure will find that cloud computing models eliminate too much visibility to be a viable alternative to deploying their own infrastructure.

I will leave you with one quick story. When I was an analyst with Gartner I presented at a SOA/Web Services/Enterprise Architecture Summit a presentation titled “Security 101 for Web 2.0”. The room was overwhelmingly developers trying to understand how to better enable security as part of the internal applications they were tasked to develop. The one suggestion that elicited the greatest interest and the most questions was a simple one: develop your applications so that they can be easily audited by the security and IT teams once they are in production, and enable auditing that can capture access attempts (successful or not), date/time, source IP address, etc. The folks I talked to afterwards told me it was probably the single most important concept for them during the summit – enable visibility.
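A minimal sketch of that suggestion, with illustrative field names: record every access attempt, successful or not, as one structured line the security team can ingest, filter, and correlate without parsing free-form text.

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.addHandler(logging.FileHandler("access-audit.log"))
audit.setLevel(logging.INFO)

def log_access_attempt(user, resource, source_ip, success):
    """Write one JSON line per access attempt -- timestamp, who,
    what, from where, and whether it worked."""
    audit.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "source_ip": source_ip,
        "success": success,
    }))

# A failed attempt lands in the log with everything needed to investigate.
log_access_attempt("jdoe", "/payroll/report", "203.0.113.7", success=False)
```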

In part 2 we will take an in-depth look at the security models of various cloud computing platforms. Stay tuned for more to come…

Some interesting “Cloud” Resources that you can find in the cloud:

  • Amazon Web Services Blog (here)
  • Google App Engine Blog (here)
  • Microsoft Azure Blog (here)
  • Developer.force.com Blog (here)
  • Gartner’s Application Architecture, Development and Integration Blog (here)
  • The Daily Cloud Feed (here)
  • Craig Balding – Cloudsecurity.org (here)
  • James Urquhart – The wisdom of Clouds (here)
  • Chris Hoff – Rational Survivability (here)

Read Full Post »

Now of course it would be easy to slap the hide of NAC, IDS, and DLP technologies, but why kick something when it is down? Besides, we have Stiennon for that (here). So I give you the 11 worst ideas in security, presented in a far less grumpy format than Ranum’s 6 dumbest ideas in security (here), and of course I kicked it up to 11…

11. Security Industry and Market Analysts (I am become analyst, the destroyer of markets)

Those bastions of knowledge, defenders of the objective faith, and creators of 2-page, in-depth market analysis reports. They don’t actually analyze security, they analyze the security market. They say cool things like “By the end of 2007, 75% of enterprises will be infected with undetected, financially motivated, targeted malware that evaded their traditional perimeter and host defenses.” and come up with amusing names and acronyms (did you know that NBA – Network Behavior Analysis – was at one time called NADS – Network Anomaly Detection System; you can imagine the fun Gartner could have had with an overview of the NADS market). I spent years as an analyst myself and I loved my time, but I will always regret that analysts never actually test, demo, or even interact with the technology they so confidently and assertively write about.

10. Microsoft CPAV (Central Point Anti-Virus – when turning it up to 11 is 10 too many)

Many of you may not remember that Microsoft used to ship an integrated AV product – CPAV (Central Point Anti-Virus). CPAV = total suckage. It was a simpler time; malware consisted of threats like the Stoned virus (infect the computer, make it look droopy, and display a “your computer’s stoned” message), and you really didn’t need quality, but you did need something that didn’t completely impact user productivity, suck up all the computing resources, and disrupt other services – ah, the good old days.

9. The Vulnerability market (what can I get for $.63?)

What happens if you create a market and no one buys? Nothing, except a whole lot of complaining from a whole lot of grumpy researchers about how no one takes security seriously and what a thankless job it is to break someone else’s software and then not be showered with accolades when you present them with the data that their software is broken.

8. Scan and Patch (The never ending hamster wheel of late nights and working weekends)

The security group will scan the environment against a database of known vulnerabilities and then harass, scare, and guilt-trip the operations team into actually fixing something – it is also referred to by Philip Roth as the Jewish Mother process. This never-ending, reactionary, ad-hoc, false-positive-laden, non-environmentally-aware, slow, cumbersome, disruptive, snapshot-in-time approach = effectiveness fail. I have written about this before (here).

7. PKI (Easy to deploy, manage, and administer – oh, wait, whoops, never mind)

Quick story: when I was with McAfee we acquired PGP, and as part of the acquisition the McAfee IT department attempted to roll out PGP encryption. It was a total fail. It was never properly deployed, and the IT folks just gave up and moved on to some other important project, like getting their hands on some cool network sniffers. At the time I thought: wow, we own this crap and can’t deploy it, how the hell will the people we sell it to? It would require a ton of bureaucracy and an army of civil servants to be successful, and this is why the federal government loves PKI.

6. Security Through Obscurity (These are not gur qebvqf lbh are looking for – guess how I cryptoed that)

Frphevgl guebhtu bofphevgl qbrfa’g jbex…crbcyr jvyy nethr gung vs lbh anzr lbhe FFVQ fbzrguvat yvxr AFN Abqr, ab bar jvyy oernx va – OF, be vs lbh pnyy lbh Jvaqbjf obk SerrOFQ, be qvfnoyr inevbhf UGGC cbfg erfcbafrf gung lbh ner fnsr – jebat, lbh’er whfg na vqvbg =)
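For the impatient: the “crypto” above is ROT13, and the fact that undoing it takes two lines of Python makes the point about obscurity better than I can:

```python
import codecs

obscured = "Frphevgl guebhtu bofphevgl qbrfa'g jbex..."
print(codecs.decode(obscured, "rot13"))  # -> Security through obscurity doesn't work...
```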

5. WEP (French encryption – it surrenders in minutes)

What is worse than no security? Security that doesn’t work. WEP is like putting up an aluminum foil door and pretending that no one can break through it – far better to just not have a door and know it. Really not a lot more to add.

4. Signature-based AV (Design fail – only works if there is parity between sigs and viruses)

Signature-based AV isn’t protecting anyone anymore (here); it certainly wasn’t providing any protection against spyware or some of the nastier threats that have popped up recently. It didn’t stop Blaster, or Sasser, or Slammer; it did nothing to help ChoicePoint, or the VA, or the orgy of disclosure we have all become numb to. It was running happily along, updated and content, on my mom’s machine when it turned out her Windows XP box was infected with some pretty nasty bits. The real problem, though, is the sheer volume of malware that one needs to create signatures against – what does one do with a 5-million-signature DAT file? No wonder every time Symantec runs, an application dies.

3. The Vulnerability Disclosure Debate (good, bad, good, bad – who gives a crap)

There was a time when I had some passion for this topic; right or wrong, I had an opinion and was looking for responsible disclosure (here). I have come to realize that (a) it really doesn’t matter and (b) those with malicious intent are far less concerned with silly disclosure debates than those fighting the good fight. The vulnerability disclosure debate is security’s equivalent of Britney Spears – no matter how bad it gets, you can’t help but be curious.

2. Passwords (2Chr177xh0ff)

Passwords suck (here); they are cumbersome, difficult to manage, prone to attack, and require continuous care and feeding. They also aren’t terribly effective, but they are the best we can do with what we have, so choose wisely, and don’t feel like less than a man simply because you have to use a password manager – everybody needs a little assistance now and again.
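And if the password manager is generating the passwords for you, the generation itself is the easy part; a sketch using Python’s standard-library secrets module:

```python
import secrets
import string

def generate_password(length=16):
    """Build a password from letters, digits, and punctuation using a
    cryptographically secure source of randomness."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```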

1. Security Vendors and the VCs that love them (The root of all security evil)

The goal of the security industry is not to secure; the goal of the security industry is to make money. I think we all know this conceptually, and even with the best intentions in our capitalistic society we must understand that security companies are motivated by profits. This isn’t necessarily a bad thing, but it should help dispel the myth that security companies are smarter than hackers. They aren’t, they are just smarter than the buyers – from (here)

Read Full Post »

According to IBM, the security industry is dead and has no future (here).

“The security business has no future,” said Val Rahamani, general manager of IBM ISS and of security and privacy for IBM Global Technology Services. Rahamani said the security industry as it is today is not sustainable, and that IBM is instead going into the “business of creating sustainable business.”

“It’s all about putting security into the context of business operations,” she said. “Parasitic threats are only a metaphor for the greater issue — there will always be new threats to business sustainability, ranging from parasites to regulations to insiders to global politics. We cannot achieve true sustainability if we continue to focus on individual threats. We can only achieve true sustainability if we design security and continuity into our processes from the beginning.”

“The traditional security industry is simply not sustainable… We have a historic opportunity to change our mindset from IT security to secure business. We have the technology, services, and expertise available today to create truly sustainable business, even in a world where we assume everyone is infected.”

“The security industry is dead,” Rahamani said. “Long live sustainability.”

At first read some of you may be taken aback and view this as an overly provocative stance, along the lines of Bill Gates’ assertion at a Gartner Symposium over 5 years ago that Microsoft would solve security, or John Thompson’s stance 4 years ago that convergence between security and storage was not only demanded but needed to evolve the industry, or Art Coviello’s prediction last year that the security industry would experience widespread and massive consolidation with only large, broad-scoped vendors remaining. With hundreds of security start-ups and more on the way, someone clearly didn’t get the memo.

The reality is that the current reactive, ad-hoc security model isn’t working. Val’s statements reflect a growing awareness and acceptance that a significant part of the security challenge must be addressed through proactive, insightful management of the infrastructure, in a way that enables security to support the needs of the business. I have spoken about this in numerous posts:

1. Why Should We Spend on Security (here)

“There is a dull hum permeating the industry of late – security is dead some say, others think it to be too costly to maintain, others still believe that what is needed is a change of perspective, perhaps a radical shift in how we approach the problem. What underlies all of these positions is a belief that the status quo is woefully ineffective and the industry is slated for self-destruction or, as a whole, we will succumb to a digital catastrophe that would have been avoided if only we had just…well, just done something different from whatever it is we are doing at the time something bad happens.”

“As we go round and round on the never ending hamster wheels provided as best practice guidelines by security vendors, consultants, and pundits, we find ourselves trapped in an OODA loop that will forever deny us victory against malicious actors because we will never become faster, or more agile than our opponents. But to believe one can win, implies that there is an end that can be obtained, a victory that can be held high as a guiding light for all those trapped in eternal security darkness. We are as secure as we need to be at any given moment, until we are no longer so – when that happens, regardless of what you may believe, is outside of our control.”

2. Information Security Must Evolve (here)

“Security professionals must have a better understanding of the business they are hired to protect, must possess more soft skills such as communication and cooperation, and must evolve their skill against the dynamic threat environment and the evolving business infrastructure…These soft skills will become increasingly important in the coming decade as security programs mature and become an integral part of business success. More importantly, organizational structure becomes critical as enterprises must implement an organizational structure that supports cross-group cooperation and workflow.”

3. RSA Themes: Information Security Evolves (here)

“a general market realization that security is evolving beyond a reactive, ad-hoc activity to an integral part of running a business in today’s world. We are increasingly reliant on technology for every aspect of our lives and business is looking to IT to play a significant role in innovation, whether that is to tap into new revenue streams or to achieve new levels of operational efficiency that also boosts the bottom line.”

“It is encouraging to see organizations begin to embrace security as an integral part of how a successful business functions. But we have a long way to go as we evolve from reactive security programs performed in a silo to security and operations convergence, and a level of operational maturity and agility that allows organizations to leverage IT for innovation.”

4. Security Prediction 2007: The year security becomes irrelevant! (here)

“So does security become irrelevant? Well, not exactly, but it is the year security goes mainstream and becomes just another function performed by an increasingly taxed IT organization. Security will become less and less silo’d and more operationalized. Security and operational convergence will drive more technology convergence as vendors scramble to address multiple constituencies in the operations, security and compliance domains. The bottom line is that information security will begin to mature and evolve”

5. Rational Fear vs. Irrational Security (here)

Security must be agile; we must be able to quickly adapt to changing threats, and we have to be careful to balance securing against the unknown vs. securing against the known. Zero-days are scary, yet they are relatively infrequent compared to the thousands of known vulnerabilities organizations face annually. We certainly need to adapt to zero-day threats, but we can’t do this at the loss of security against the more frequent but less exotic MSFT or browser vulns. What’s scary is that most organizations, even after years of dealing with vulnerabilities, still have not implemented effective vulnerability management programs (here), (here), and (here).

6. Information Survivability vs. Information Security (here)

Bottom line: you cannot stop all bad things from happening; that is not the goal of security. The goal of security is to limit the probability of bad things happening and, when they do happen, to limit their impact. It really is that simple.

Read Full Post »
