Much of the focus on WannaCry has been on how it works and what organizations need to do in the near term to recover. It's important, however, to take a step back and ask ourselves why WannaCry became such a tour-de-force in the first place. After all, the security community has been talking about concepts like patch management for decades. Rapidly spreading worms like Slammer, Sasser, Nimda, and Code Red appeared on the scene over a decade ago, and in some cases continue to resurface. Moreover, ransomware as a concept has been around even longer, though there was a major uptick with Cryptolocker in 2013 and a second major uptick in late 2015.

I posit that we repeatedly have these types of discussions because historically it has been hard for an organization's security strategy to align with its overall business objectives. While everyone generally agrees that security breaches are bad, balancing the cost of prevention against other business priorities is far trickier. Unified in preventing breaches, these same stakeholders diverge when forced to choose between security and business values such as profitability, operational uptime, or ease of use.

Patch management is a great example of this phenomenon. CISOs aren't willfully against the idea of patching. Instead, there are real-world operational challenges to succeeding. These challenges include the plethora of patches at the operating system and application level, the vast number of devices within an organization, and the potential impact on uptime. While patching mitigates many risks, it can also introduce new ones. For example, I was recently talking to one of our customers, a large medical device manufacturer. For them, patching a medical device implanted in a patient might literally require a surgical procedure, which carries a life or death risk.
When looking at patch management through this lens, we have to understand the likelihood that a vulnerability will be exploited and the impact associated with an exploitation. Factors that might play into likelihood include whether exploit code is available or whether it appears that such an exploit could readily become available (e.g., the vulnerability is easy to exploit). Not all critical vulnerabilities are equal, and not all patches are equal. Roughly 5,000 new vulnerabilities are disclosed each year, but not all are widely exploited. Factors that might play into severity include the technical severity of the vulnerability itself and the business impact to your organization (something that's very personal and organization dependent). How critical is the asset that could be impacted? Is there a business continuity impact? Is there a compliance impact? Is there a reputation loss impact? Is there an intellectual property loss impact? How do those concerns play into your overall business objectives?

Today's security organizations need to ruthlessly prioritize and be able to engage in a business discussion with IT and broader business stakeholders. In other words, they need to take a business-driven security approach. We have a choice. WannaCry was undoubtedly a cautionary tale. I believe that organizations who adopt a Business-Driven Security™ approach can prevent WannaCry from being yet another recurring cybersecurity motif.
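To make the likelihood-and-impact framing above concrete, here is a minimal sketch of a vulnerability prioritization model in Python. The multiplicative risk formula, factor names, and weights are all hypothetical illustrations of the idea, not any specific product's scoring scheme; a real program would tune these inputs to its own business context.

```python
from dataclasses import dataclass


@dataclass
class Vulnerability:
    name: str
    exploit_available: bool    # public exploit code already exists
    easy_to_exploit: bool      # low attack complexity
    technical_severity: float  # e.g., a CVSS-style base score, 0-10
    asset_criticality: float   # organization-specific weight, 0-1
    compliance_impact: bool    # regulated data or systems affected


def likelihood(v: Vulnerability) -> float:
    """Estimate exploitation likelihood on a 0-1 scale (illustrative weights)."""
    score = 0.2  # baseline: every published vulnerability carries some risk
    if v.exploit_available:
        score += 0.5
    if v.easy_to_exploit:
        score += 0.3
    return min(score, 1.0)


def impact(v: Vulnerability) -> float:
    """Combine technical severity with business context on a 0-1 scale."""
    score = (v.technical_severity / 10.0) * v.asset_criticality
    if v.compliance_impact:
        score = min(score + 0.2, 1.0)
    return score


def risk(v: Vulnerability) -> float:
    """Overall risk = likelihood of exploitation x business impact."""
    return likelihood(v) * impact(v)


# Rank a patch queue by business-driven risk rather than severity alone:
# a lower-CVSS bug with a public exploit on a critical asset can outrank
# a higher-CVSS bug on a low-value system.
queue = [
    Vulnerability("CVE-A", True, True, 8.1, 0.9, True),
    Vulnerability("CVE-B", False, False, 9.8, 0.2, False),
]
queue.sort(key=risk, reverse=True)
```

In this toy ranking, the exploited-in-the-wild vulnerability on a critical, compliance-relevant asset rises to the top even though its raw technical severity is lower, which is the heart of the prioritization argument above.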
To learn about the mechanics behind WannaCry, check out Zulfikar's chalk talk. The post What Really Led to WannaCry? appeared first on Speaking of Security - The RSA Blog.
