Headlines continue to be filled with reports of government agencies and large companies being victimized by cyber intrusions. This remains true despite a proliferation of cybersecurity guidance and large increases in cybersecurity spending, now around $150 billion per year globally on cyber products and services. Why? In addition to increasingly well-financed threat actors, the “attack surface” where these attacks are deployed is changing dramatically.

  • The number of applications used by a typical organization has grown rapidly over the last decade: according to recent reporting, the typical organization now uses 130 SaaS apps, up from 16 five years ago. Our firm works with large companies managing thousands of SaaS and on-premises applications. Each application requires posture management (maintaining an overall state of cybersecurity readiness), vulnerability management, and authentication controls.
  • The number of internet of things (IoT) devices is also exploding: some forecasts project 41.6 billion such devices by 2025. And 5G networks will enable a much greater level of distributed computing at the edge. Drones and robotics are no longer confined to military environments; they are being used today across many industries and service areas, from farming to retail distribution centers to delivery services.
  • The software and firmware running these systems sit atop codebases of growing complexity, in both sheer size and dependence on third-party code. The original space shuttle’s codebase was only 400,000 lines of code; modern cars run on roughly 100 million.

It is becoming practically impossible to ensure that everything is properly patched, and the consequences of failure are growing more severe. MKS Instruments, a technology supplier to the semiconductor industry, recently reported a $200 million impact from a ransomware attack. Exploitation of vulnerabilities in industrial control systems and IoT devices entails life-and-safety consequences, as we have seen with recent attempts to poison water systems.

To manage cyber risk in this context, we need to fundamentally change the way we measure performance. Measures in common use today include:

  • maturity assessments, which use a scale to define progressive levels of maturity for the capabilities used to manage cyber risk;
  • compliance attestations, where a company or third-party auditors attest that a pre-defined set of security controls is in place;
  • vulnerability aging reports, which track critical and high vulnerabilities present on IT assets and how long they have gone unremediated; and
  • mean-time-to-detect statistics, which measure how long it takes to detect threat activity inside an organization’s environment.

These measures are valuable and necessary, but no longer sufficient, and they need to be supplemented in three ways.
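As a concrete illustration of the last two measures, here is a minimal sketch of how a vulnerability aging report and a mean-time-to-detect statistic might be computed. The record fields, the 30-day remediation threshold, and the dates are illustrative assumptions, not a standard.

```python
from datetime import date

# Illustrative vulnerability records; field names and the 30-day
# remediation threshold are assumptions for this sketch.
vulns = [
    {"asset": "web-01", "severity": "critical", "opened": date(2023, 1, 5), "closed": None},
    {"asset": "db-02", "severity": "high", "opened": date(2023, 2, 1), "closed": date(2023, 2, 20)},
]
incidents = [
    {"intrusion": date(2023, 1, 10), "detected": date(2023, 1, 14)},
    {"intrusion": date(2023, 3, 2), "detected": date(2023, 3, 3)},
]

today = date(2023, 4, 1)

# Vulnerability aging: how long critical/high findings have stayed open.
for v in vulns:
    age = ((v["closed"] or today) - v["opened"]).days
    overdue = v["closed"] is None and age > 30
    print(f"{v['asset']:8} {v['severity']:9} {age:4d} days {'OPEN past SLA' if overdue else 'ok'}")

# Mean time to detect: average gap between intrusion and detection.
mttd = sum((i["detected"] - i["intrusion"]).days for i in incidents) / len(incidents)
print(f"mean-time-to-detect: {mttd:.1f} days")
```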

Three Ways We Need to Improve Current Cyber Risk Measures

First, at the front end, we need to bring greater visibility to organizations’ inherent risk levels: essentially, “What are we being asked to defend?”

For two decades, we have awarded Department of Homeland Security (DHS) Urban Area Security Initiative (UASI) grants based on the relative degree of risk in different metropolitan areas, and we need a similar approach in cyber. This includes measuring threat, complexity, and potential business impact. We need dashboards that track trends in factors such as the number of applications, the size and nature of databases and code repositories, the regions we operate in, the velocity of M&A, and dependencies on key suppliers. This will become particularly important as Big Data, AI, and IoT evolve, because the benefits and risks of these innovations will fall unevenly across organizations.
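As a sketch of what such a dashboard could aggregate, the weighted index below is purely illustrative; the factors, ceilings, and weights are assumptions rather than a published methodology.

```python
# Hypothetical inherent-risk index: each factor is normalized to 0-1
# against an assumed ceiling, then combined with illustrative weights.
FACTORS = {
    # factor: (current value, assumed ceiling, illustrative weight)
    "saas_apps": (130, 500, 0.25),
    "code_repos": (800, 2000, 0.15),
    "operating_regions": (12, 50, 0.20),
    "acquisitions_12mo": (3, 10, 0.20),
    "critical_suppliers": (40, 100, 0.20),
}

def inherent_risk_index(factors):
    score = 0.0
    for name, (value, ceiling, weight) in factors.items():
        normalized = min(value / ceiling, 1.0)  # cap at the assumed ceiling
        score += weight * normalized
        print(f"{name:18} {normalized:5.2f} x {weight:.2f}")
    return score

print(f"inherent risk index: {inherent_risk_index(FACTORS):.2f} (0 = low, 1 = high)")
```

Tracking a number like this quarter over quarter, rather than its absolute value, is what makes the trend visible.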

Second, we need much greater transparency, accuracy, and precision around how we perform against likely threats and whether we do so consistently across the attack surface.

The most authoritative, transparent knowledge base of threat behavior available today is the MITRE Corporation’s ATT&CK framework. The Cybersecurity and Infrastructure Security Agency (CISA) recently released a set of Cybersecurity Performance Goals intended to help establish a common set of fundamental cybersecurity practices for critical infrastructure. Each of the goals is mapped to specific MITRE threat techniques. Companies can test security performance against these techniques, and CISA, the FBI, and the NSA have jointly issued guidance recommending they do so.

Each year, CISA produces a compendium of its penetration-testing efforts, and the reporting indicates that the compromise of valid accounts is the technique where the greatest number of organizations fail. This weakness will only grow as we migrate to the cloud, where identity is the perimeter, which means defenses around identity and access must be a top priority. As that migration accelerates, automation and continuous measurement of security performance will become increasingly important. Fortunately, major cloud companies are providing tools that do so, like Microsoft’s Secure Score.
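A minimal sketch of this kind of continuous measurement might tally control-test results by ATT&CK technique to surface the weakest defenses first. The technique IDs below are real ATT&CK identifiers, but the test outcomes and the 80% pass bar are invented for illustration.

```python
from collections import defaultdict

# Simulated control-test results keyed by MITRE ATT&CK technique ID.
# T1078 (Valid Accounts) is the technique CISA's testing flags most often;
# the pass/fail outcomes below are invented for illustration.
test_results = [
    ("T1078", "Valid Accounts", False),
    ("T1078", "Valid Accounts", False),
    ("T1566", "Phishing", True),
    ("T1190", "Exploit Public-Facing App", True),
    ("T1190", "Exploit Public-Facing App", False),
]

tally = defaultdict(lambda: [0, 0])  # technique -> [passed, total]
names = {}
for tid, name, passed in test_results:
    tally[tid][0] += int(passed)
    tally[tid][1] += 1
    names[tid] = name

# Flag techniques where defenses pass less than 80% of tests (assumed bar),
# worst performers first.
for tid, (passed, total) in sorted(tally.items(), key=lambda kv: kv[1][0] / kv[1][1]):
    rate = passed / total
    flag = " <- priority gap" if rate < 0.8 else ""
    print(f"{tid} {names[tid]:26} {passed}/{total} ({rate:.0%}){flag}")
```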

Likewise, with the advancement of AI and the ability to mimic legitimate users, reputational analysis techniques will also become increasingly important to identify imposters. U.S. Customs & Border Protection continuously risk-rates inbound cargo in part by reviewing whether a shipper is known and trusted. The same principles apply in cyberspace: reputational analysis can be, and is, applied automatically to decide whether to block certain websites, inbound emails, or suspicious authentication attempts.
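To make the idea concrete, the toy scorer below combines a few reputation signals into an allow/challenge/block decision; the signals, weights, and thresholds are assumptions for this sketch, not any vendor’s algorithm.

```python
def score_login(attempt):
    """Toy reputational score for an authentication attempt.

    Signals and weights are illustrative assumptions; a real system
    would draw on far richer telemetry and learned models.
    """
    risk = 0.0
    if not attempt["known_device"]:
        risk += 0.4
    if attempt["ip_reputation"] == "bad":
        risk += 0.4
    if attempt["geo_velocity_impossible"]:  # e.g., two countries in an hour
        risk += 0.3
    return min(risk, 1.0)

attempt = {"known_device": False, "ip_reputation": "bad",
           "geo_velocity_impossible": False}

risk = score_login(attempt)
decision = "block" if risk >= 0.8 else "challenge (MFA)" if risk >= 0.4 else "allow"
print(f"risk={risk:.1f} -> {decision}")
```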

Another lens for threat-informed defense is measuring how “high value assets” are defended. Defining “high value assets” can be subjective and overbroad, but we know that certain systems are repeatedly targeted because they perform functions critical to trust. After the SolarWinds incident, the U.S. National Institute of Standards and Technology (NIST) defined such a list of critical software, and a good place to start is measuring how well these systems are defended.

In measuring these defenses, we need to evaluate not only how well we have secured these systems in operation, but also how securely their providers have developed and updated them. The recent source-code theft at Okta, a leading provider of cloud-based multifactor authentication and single sign-on solutions, as well as the breach at password manager LastPass, put this into stark relief. Unfortunately, existing certifications like ISO 27001 and SOC 2 shed little light on whether robust software-lifecycle security processes are in place. The National Cybersecurity Strategy released in March therefore calls for improving software security both by shifting liability to software providers and by using the government’s purchasing power to drive adoption of modern frameworks like NIST’s Secure Software Development Framework (SSDF) and the related concept of a Software Bill of Materials (SBOM). As SSDF and SBOM attestation frameworks become formalized, they should be adopted into companies’ third-party risk management programs.
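As those attestation artifacts become machine-readable, third-party risk programs can consume them programmatically. The sketch below matches components from a simplified, hypothetical SBOM against a toy advisory list; the data shapes are assumptions, not a full CycloneDX or SPDX parser.

```python
# Simplified SBOM: component name -> version, loosely modeled on the
# information a CycloneDX or SPDX document carries (shapes are assumed).
sbom = {
    "openssl": "1.1.1k",
    "log4j-core": "2.14.1",
    "left-pad": "1.3.0",
}

# Hypothetical advisory feed: (package, affected version) pairs.
known_vulnerable = {
    ("log4j-core", "2.14.1"),  # a Log4Shell-era release
    ("openssl", "1.0.2"),
}

findings = [(pkg, ver) for pkg, ver in sbom.items()
            if (pkg, ver) in known_vulnerable]

for pkg, ver in findings:
    print(f"supplier component {pkg} {ver} matches a known advisory")
if not findings:
    print("no known-vulnerable components in this SBOM")
```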

Third, we need to plan for, and measure performance against, low-probability, high-consequence events.

There is a trend toward quantifying the financial impact of cyber risk through models like Value at Risk, which estimates (usually in dollar terms) an entity’s potential loss in value over a defined period of time at a given confidence level. These models are useful, but they are only as good as their data inputs: depending on what data drives them, they can present an overly rosy view of risk. History tells us this is what happened with credit and liquidity risk in the early 2000s, and we have the 2007-2008 financial crisis to show for it.
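For readers unfamiliar with the mechanics, the sketch below estimates a one-year cyber Value at Risk by Monte Carlo simulation. The Poisson frequency and lognormal severity model, and every parameter in it, are illustrative assumptions, which is exactly the caveat above: the output is only as good as those inputs.

```python
import math
import random
import statistics

random.seed(42)

def poisson(lam):
    """Poisson draw via Knuth's method (fine for small lambda)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_annual_loss(freq_mean=2.0, sev_mu=11.0, sev_sigma=1.5):
    """One simulated year of cyber losses; all parameters are
    illustrative assumptions, not calibrated estimates."""
    n = poisson(freq_mean)  # number of loss events this year
    return sum(random.lognormvariate(sev_mu, sev_sigma) for _ in range(n))

losses = sorted(simulate_annual_loss() for _ in range(100_000))

# 95% one-year VaR: the annual loss exceeded in only 5% of simulated years.
var_95 = losses[int(0.95 * len(losses))]
print(f"mean annual loss: ${statistics.mean(losses):,.0f}")
print(f"95% one-year VaR: ${var_95:,.0f}")
```

Swapping in optimistic frequency or severity parameters shifts that percentile dramatically, which is how a model fed with rosy data quietly understates risk.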

At DHS after 9/11, we framed preparedness planning around a core set of planning scenarios, and British banking regulators now require similar planning and testing around “severe but plausible” scenarios. A good place to start is what happened to Maersk in the NotPetya incident, where the company came within a hair’s breadth of permanently losing its IT systems to destructive malware later attributed to Russia. More recently, Ukraine’s pre-invasion migration of workloads to the cloud was critical to its ability to weather a torrent of Russian cyberattacks. The current geopolitical climate underscores the importance of reframing resiliency planning around how to keep the company afloat if its core systems are compromised. Have we maintained offline backups and tested recovery? Can we reconstitute a way to communicate with essential employees? Do we know how to ensure that certain important but low-risk payments can continue?

We can turn risk into opportunity: if we can coalesce around mechanisms to measure cybersecurity performance with transparency, accuracy, and precision, we could work with allied nations to codify and implement them. They could then be reflected as baseline requirements in technology procurements abroad, creating larger opportunities for differentiation. There is no such thing as risk elimination, but through better measurement and incentivization, we can not only manage these technology risks, but turn them into opportunities for a more resilient economy.