We have entered a new era of information technology, an era where the clouds are moist, the data is obese and incontinent, and the threats are advanced, persistent, and the biggest ever. Of course, amid all the paradigm-shifting, next-generation, FUD-vs.-ROI marketing, it's important to remember that sometimes we need to balance innovation against inflated expectations, vendor double-speak, and relentless enterprise sales guys.
Because contrary to the barrage of marketing, these technologies won't make you rich, teach you how to invest in real estate, help you lose weight, or grow you a full head of hair; they won't make you attractive to the opposite sex, and they won't solve all your problems. In some cases they can improve the efficiency and effectiveness of your operating environment, but that requires proper planning, expectation setting, and careful deployment…and on that note, I give you the top 10 most overhyped technology terms of the last decade.
So it appears the Internet went down, or so many claimed when they were met with 404 errors while attempting to watch “Georgia Hillbilly Massacre 17: The Return of the Banjo Man” on Netflix – since Netflix is selective about what you can stream, they certainly weren’t queuing up the latest and greatest new releases, but that is a totally different rant – or while attempting to declare themselves the Mayor of “who gives a rat’s ass where you are right now” on Foursquare.
Last time this happened, some started to claim that it rocked the very foundation of confidence in cloud computing (here), yet they failed to juxtapose Amazon’s operational failures against the universe of enterprise operational failures, security compromises, and general administrative stupidity that plagues 99.98% of the organizations on Earth (minus the DPRK’s website; there’s really not much more you can do to fudge that one up).
Systems and security management is difficult, ineffective, costly and becoming ever more so in increasingly distributed, heterogeneous, complex, and mobile computing environments…
- 98% of all external attacks take advantage of poorly administered, misconfigured, and unmanaged systems (Source: Verizon Data Breach Investigations Report 2009)
- A locked down and well managed PC can cost 42% less than an unmanaged one (Source: Gartner – The Total Cost of Ownership: 2008 Update)
- The direct costs incurred in a “somewhat managed” PC are only slightly lower than the direct costs of an unmanaged PC, because of expenses to maintain underutilized or dysfunctional management systems (Source: Gartner – The Total Cost of Ownership: 2008 Update)
The benefits of server virtualization are being realized: server consolidation has enabled cost reductions and efficiencies in data center and server management. This is of course leading many to ask, “why can’t we virtualize our desktops as well?” Continue reading
So apparently the latest version of the Qualys Laws of Vulnerability Report has Qualys making some pretty outrageous claims about how cloud computing – invented by Qualys, according to Courtot (insert cute smiley here) – can secure IT more effectively, or allow people to stop patching, or some such nonsense (thanks to Hoff for the heads-up).
Anyway, the logic flaw goes something like this -> Continue reading
Quotes from a recent SC Magazine article, “Increased Mobile Working Has Caused a Rethink on Endpoint Security” (here), focus on encryption, cloud computing, and desktop virtualization… Continue reading
I had an interesting conversation with a peer recently that started with his statement that “innovation is all but dead in security.” The implication was that we had done all we could do and that there was very little more to be accomplished. Of course I felt this was an overly simplistic and narrow view, not to mention that it completely ignores the rather dramatic impact that changes in computing infrastructures will have over the next 5-10 years and beyond.
How have enterprise architectures evolved over the past 10 years, and how will they continue to evolve? Simply put, we are pushing more of our computing assets, and the infrastructure that supports them, out into the Internet/cloud. It began with mobile computing devices, remote offices, and telecommuters, and it is now moving into aspects of the traditional internal infrastructure, such as storage, application/service delivery, and data management. This has forced IT, in some cases, to radically redefine the technologies and processes it implements to provide even the basics of availability, maintenance, and security. How does an IT organization maintain the health and availability of the evolving enterprise while securing the environment? How does it ensure visibility into, and control over, an increasingly complex and opaque infrastructure? Continue reading
Quick thought for the day. Most technologies in the security world move through a predictable adoption cycle. First, an organization implements a solution to gain visibility into the scope of the problem (VA, IDS, DLP/CMF, SIEM). Then, once it becomes apparent that the problem is vast and overwhelming, it moves to operationally implement technical controls to protect the environment and enforce organizational policies. When this switch occurs, adoption of the pure visibility tools is eclipsed by the control tools. This doesn’t mean that the visibility tools are ineffective; it generally means that the scope of the problem is understood well enough that an organization can effectively implement controls, and that the problem has successfully moved from the security team to the operations team. You can apply this same logic to any segment of security and to any new technology, including cloud computing, virtualization, and all the little shiny objects in between.
Examples of this movement from visibility to control include intrusion detection, vulnerability assessment, and content monitoring and filtering. Let’s look at VA. Its initial use was to determine the scope of the ‘exposure’ problem, that is, to scan the environment against a database of known vulnerabilities to determine the extent of exposure. Unfortunately, the volume of output was very high and was presented in a format that was not easily consumable or actionable by the IT operations team. What exactly does one expect a server admin to do with 300 pages of vulnerability data? There were also inherent issues of fidelity. The use of VA tools moved into targeted scans to determine what needed to be patched, which resulted in the operational implementation of patch management technologies, which soon overtook vulnerability assessment tools in market adoption. There was also the pressure of auditors looking for the implementation of technical controls, and although vulnerability assessments were viewed as an important first step, without the workflow and controls to address the volume of vulnerability data they proved less effective at improving operational security than originally thought.
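To make the “300 pages of vulnerability data” problem concrete, here is a minimal sketch of the kind of triage step that turns raw scanner output into something an admin can act on: keep only findings that have an available patch and exceed a severity threshold, group them per host, and rank worst-first. All host names, CVE IDs, severities, and patch IDs below are invented for illustration; they don’t come from any real scanner.

```python
# Hypothetical triage of raw vulnerability-scanner output: collapse a
# large flat findings list into a short, per-host patch list ordered
# by severity. All data below is invented for illustration.
from collections import defaultdict

raw_findings = [
    {"host": "web01", "cve": "CVE-2009-0001", "severity": 9.0, "patch": "KB111"},
    {"host": "web01", "cve": "CVE-2009-0002", "severity": 4.0, "patch": None},
    {"host": "db01",  "cve": "CVE-2009-0003", "severity": 7.5, "patch": "KB222"},
    {"host": "db01",  "cve": "CVE-2009-0004", "severity": 9.8, "patch": "KB333"},
]

def actionable_patch_list(findings, min_severity=7.0):
    """Keep only findings with an available patch at or above the
    severity threshold, grouped per host and sorted worst-first."""
    per_host = defaultdict(list)
    for f in findings:
        if f["patch"] and f["severity"] >= min_severity:
            per_host[f["host"]].append(f)
    for host in per_host:
        per_host[host].sort(key=lambda f: f["severity"], reverse=True)
    return dict(per_host)

plan = actionable_patch_list(raw_findings)
for host, items in sorted(plan.items()):
    print(host, [f["patch"] for f in items])
```

The point isn’t the ten lines of code; it’s that this filtering and prioritization step is exactly the workflow that raw VA reports lacked, and exactly what patch management products productized.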
It became clear that vulnerability management needed to cross the chasm to become an operationally actionable tool; without remediation capabilities, the organization would always be buried under a mountain of vulnerabilities, and the use of the technology would linger in the trough of disillusionment. Security configuration management met that need. It allowed an organization to define the desired configuration state of an environment against industry best practices (NIST, DISA, CIS, etc.) and then operationally implement technical controls to identify non-compliant devices and enforce policy. Security configuration management also had the benefit of providing a common language between the security, audit, and operations teams. I wrote about this in a series of posts (here), (here), and (here).