ActiveCyber’s Quest for the Holy Grail of Cyber Investment: Part 2

ActiveCyber is on a journey this month to find the holy grail formula for cybersecurity investment. In this article we report on what we learned at our second and third stops on the journey. (Go here to learn about our first stop on this quest.) Find out how two University of Maryland researchers combined cybersecurity, economics, and mathematics to produce the first model for cyber investment. Next, discover how the kill chain provides a fundamental aligning model to frame cyber investment decisions. We also take a look at how complexity helps shape our cyber investment strategies.

Second Stop: Cyber Economics and the University of Maryland

My quest for answers regarding how much to invest in cybersecurity led me to Professor Larry Gordon who, along with his colleague Marty Loeb at the University of Maryland, developed the Gordon-Loeb model, often considered a gold standard for guiding cybersecurity investment decisions. I sat down with Professor Gordon recently, and he explained the background and purpose of the model.

The purpose, according to their seminal paper, is: “we construct a model that specifically considers how the vulnerability of information and the potential loss from such vulnerability affects the optimal amount of resources that should be devoted to securing that information.” Essentially, the Gordon-Loeb model is a mathematical model used for determining the optimal investment level in information security protection.

From the model, one can conclude that the amount an enterprise spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach). More specifically, the model shows that it is generally uneconomical to invest more than 37 percent (roughly 1/e) of the expected loss that would occur from a security breach in information security activities. The Gordon-Loeb Model also shows that, for a given level of potential loss, the optimal amount to spend to protect an information set does not always increase with increases in the information set’s vulnerability. In other words, organizations may derive a higher return on their security activities by investing in cyber/information security activities directed at improving the security of information sets with a medium level of vulnerability.
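To make the model concrete, here is a minimal sketch of the Gordon-Loeb logic in Python, using one of the security breach probability functions from the paper, S(z, v) = v / (az + 1)^b. The vulnerability, potential loss, and function parameters below are illustrative assumptions of mine, not values from the paper or from Professor Gordon.

```python
import numpy as np

def breach_probability(z, v, a=1e-5, b=1.0):
    """Probability of a breach after investing z, starting from vulnerability v."""
    return v / (a * z + 1) ** b

def expected_net_benefit(z, v, loss):
    """Reduction in expected loss from investing z, minus the investment itself."""
    return (v - breach_probability(z, v)) * loss - z

v = 0.6           # vulnerability: probability of breach with no extra investment (assumed)
loss = 1_000_000  # expected loss from a breach of this information set, in dollars (assumed)

# Search a grid of candidate investments for the one with the highest net benefit.
candidates = np.linspace(0, loss, 100_001)
z_star = candidates[np.argmax(expected_net_benefit(candidates, v, loss))]

print(f"Optimal investment: ${z_star:,.0f}")
print(f"Expected loss (v * L): ${v * loss:,.0f}")
print(f"Ratio to expected loss: {z_star / (v * loss):.2%}  (Gordon-Loeb bound: ~36.8%)")
```

With these assumed parameters the search lands at roughly 24 percent of the expected loss, comfortably inside the 1/e bound the model predicts.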

The utility of the Gordon-Loeb model is enhanced today by a variety of vulnerability management tools that produce risk scores from scanning and help prioritize vulnerabilities. Typically, however, these risk scores are based on the technical attributes of a vulnerability and rarely account for mission impact or provide estimates of expected losses.

The Gordon-Loeb Model was first published by Lawrence A. Gordon and Martin P. Loeb in their 2002 ACM Transactions on Information and System Security paper, “The Economics of Information Security Investment.” You can find out more about the model and Professor Gordon’s related research in this interview with ActiveCyber, or check out this YouTube video that explains the model.

Third Stop: Kill Chain Model, Cyber DRGs and the Department of Homeland Security

My quest to find approaches to cybersecurity investment strategies has taken me to two stops so far. My first stop was NIST (see interviews with Ron Ross on the Risk Management Framework and Matt Barrett on the Cybersecurity Framework), which provides direction on life cycle risk management approaches and additional guidance on how to determine gaps in your security posture, that is, possible places to invest. My second stop brought me to Professor Larry Gordon, who developed an investment model with his colleague Marty Loeb, the Gordon-Loeb Model, which provides a rule of thumb for how much to invest in cybersecurity protections. Both approaches provide worthwhile guidance; however, neither, in my mind, really captures the business case or specific investment strategy needed for adaptive defenses, the main premise of ActiveCyber.

I believe today’s cyber threats call for a new perspective on cybersecurity investment – i.e., an investment strategy that matches the speed, sophistication, and agility of the cyber attacker. What is needed is a diagnostics and estimation model that can guide a portfolio of cyber investments based on their effectiveness in combating specific threat categories. I believe investment strategies for meeting this goal would be best aligned to the cyber kill chain model.

The approach was described initially by Lockheed Martin: “Using a kill chain model to describe phases of intrusions, mapping adversary kill chain indicators to defender courses of action, identifying patterns that link individual intrusions into broader campaigns, and understanding the iterative nature of intelligence gathering form the basis of intelligence-driven computer network defense (CND). Institutionalization of this approach reduces the likelihood of adversary success, informs network defense investment and resource prioritization, and yields relevant metrics of performance and effectiveness.”
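As a rough illustration of what “mapping adversary kill chain indicators to defender courses of action” can look like, the sketch below lists the seven kill chain phases and the six courses-of-action categories from the Lockheed Martin paper, maps a few controls onto them, and flags uncovered phases as candidate places to invest. The example controls are my own placeholders, not part of the paper or any DHS research.

```python
# Kill chain phases and COA action categories as named in the Lockheed Martin paper.
KILL_CHAIN_PHASES = [
    "Reconnaissance", "Weaponization", "Delivery", "Exploitation",
    "Installation", "Command and Control", "Actions on Objectives",
]

COA_ACTIONS = ["Detect", "Deny", "Disrupt", "Degrade", "Deceive", "Destroy"]

# Example defender courses of action keyed by (phase, action); these entries
# are illustrative assumptions, not a definitive COA matrix.
EXAMPLE_COA_MATRIX = {
    ("Delivery", "Detect"): "email gateway alerting",
    ("Delivery", "Deny"): "attachment filtering",
    ("Exploitation", "Deny"): "patching / host hardening",
    ("Command and Control", "Disrupt"): "DNS sinkholing",
    ("Actions on Objectives", "Degrade"): "egress rate limiting / DLP",
}

def coverage_gaps(matrix):
    """Phases with no mapped course of action: candidate places to invest."""
    covered = {phase for (phase, _action) in matrix}
    return [p for p in KILL_CHAIN_PHASES if p not in covered]

print(coverage_gaps(EXAMPLE_COA_MATRIX))
```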

Recently I ran into a DHS-sponsored effort that is focused on this kill chain approach for cyber defense and cyber investment. The effort was highlighted at a meeting / WebEx hosted by the OpenC2.org COA Standardization Working Group that featured a research presentation by Olga Livingston, a senior economist at DHS. Ms. Livingston, along with research colleagues from MITRE, is developing cyber investment metrics in the context of the kill chain. The research is currently aimed at quantifying the cost of a breach, where the cost of a breach is the sum of [the value of loss + cost of recovery + intangible impact (e.g., reputation loss)]. A breach is associated with a particular intrusion set. The breach costs are compared to the costs of a protection set [cost of prevention, detection, and mitigation tools + cost of labor] as governed by the set of COAs (courses of action) used to handle the intrusion set. Industry participants in the research are being asked to supply the breach and protection cost data along with associated descriptions of intrusion sets, protection sets, and related COAs.
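To show how those two cost definitions line up, here is a minimal sketch that uses the formulas above for breach cost (value of loss + cost of recovery + intangible impact) and protection cost (tools + labor) for one intrusion set. The field names and dollar figures are hypothetical placeholders, not data from the DHS/MITRE effort.

```python
from dataclasses import dataclass

@dataclass
class BreachCost:
    value_of_loss: float      # value of the data / assets lost
    cost_of_recovery: float   # incident response, rebuild, notification
    intangible_impact: float  # e.g., reputation loss, estimated in dollars

    def total(self) -> float:
        return self.value_of_loss + self.cost_of_recovery + self.intangible_impact

@dataclass
class ProtectionCost:
    tools: float  # prevention, detection, mitigation tooling for the COA set
    labor: float  # analyst and engineering time to operate the COAs

    def total(self) -> float:
        return self.tools + self.labor

# Hypothetical figures for one intrusion set and the protection set / COAs used against it.
breach = BreachCost(value_of_loss=400_000, cost_of_recovery=150_000, intangible_impact=250_000)
protection = ProtectionCost(tools=90_000, labor=120_000)

print(f"Cost of breach:     ${breach.total():,.0f}")
print(f"Cost of protection: ${protection.total():,.0f}")
print(f"Breach cost / protection cost: {breach.total() / protection.total():.2f}")
```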

The research is also trying to determine the optimal COAs for a particular intrusion set, optimal in the sense of minimizing the cost of breach while also minimizing any detrimental mission impact resulting from the incident response, recovery, and threat mitigation activities. As a medical analogy and counter-example, a non-optimal COA may be one that is effective in eliminating the disease but ends up killing the patient as well. To carry the medical analogy one step further, this DHS/MITRE research effort reminds me of Diagnostic-Related Groups (DRGs), which are used in the medical field to control the cost of patient treatments relative to the effectiveness of prescribed treatment plans. By examining the anatomy of a breach and the cost and effectiveness of incident response, these cyber DRGs can help defenders gain a perspective on the ROI of their defenses against intrusion sets and see where there may be protection gaps, so they can prioritize or improve COAs for incident response and remediation.

Other indicators can be used to calibrate the effectiveness of “outcomes” for respective cyber DRGs (a small tracking sketch follows this list):
• Attack detection moves earlier in the kill chain.
• Dwell time from breach to discovery goes down.
• Results from penetration tests improve.
• The number and average cost of breaches go down.
• Network reputation improves. (More about this indicator in this interview post.)
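Here is the small tracking sketch mentioned above: it records a before/after value for each indicator and reports whether the outcome moved in the right direction. The indicator names, units, and sample values are illustrative assumptions, not measurements from the research.

```python
# For each indicator: (earlier period, later period, True if lower is better).
indicators = {
    "median kill-chain phase at detection (1=Recon ... 7=Actions)": (5, 3, True),
    "dwell time from breach to discovery (days)":                   (45, 20, True),
    "pentest findings rated high or critical":                      (12, 7, True),
    "breaches per year":                                            (4, 2, True),
    "average cost per breach ($K)":                                 (300, 220, True),
    "network reputation score (0-100)":                             (62, 71, False),
}

for name, (before, after, lower_is_better) in indicators.items():
    improved = (after < before) if lower_is_better else (after > before)
    trend = "improved" if improved else "worsened"
    print(f"{trend:8} {name}: {before} -> {after}")
```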

Cyber DRGs can also support investment portfolio analysis and optimization of cyber protections to guide future investments. Correlating multiple intrusion kill chains and cyber DRGs over time will identify best practices for COAs. Common indicators of compromise will also emerge from this analysis and can guide investment in the COAs that provide the most effective mitigation.

Combining Kill Chain Model, Cyber DRGs and Cyber/Business Economics

Relating cyber investments to the kill chain model is complemented by applying a value-at-risk model. The value-at-risk model helps align cyber investments to the needs of the business. It brings business relevance to cyber investment by building cyber use cases at the intersection of threats, intrusion sets, COAs, and assets at risk. Such a value-at-risk model is being developed by the WEForum as part of The Partnering for Cyber Resilience Initiative. It is also closely aligned with the Gordon-Loeb Model, which gives it theoretical grounding as well.

According to the WEForum, the concept of cyber value-at-risk (VaR) is based on a similar notion widely used in the financial services industry. In finance, VaR is a risk measure for a given portfolio and time horizon, defined as a threshold loss value. Specifically, given a probability X, VaR expresses the threshold value such that the probability of the loss exceeding that value is X. According to WEForum, cyber value-at-risk incorporates multiple components (see Figure below) that need to be assessed by each organization in the process of cyber risk modeling and investment portfolio optimization. The applicability and impact of these components in each model will vary by industry and cyber maturity.

[Figure: WEForum cyber value-at-risk components]
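To illustrate the VaR mechanics themselves (not the WEForum model), here is a minimal Monte Carlo sketch that simulates annual cyber losses from a breach frequency and a loss severity distribution, then reads off the loss threshold exceeded with probability X. The Poisson/lognormal distributions and all parameters are assumptions chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_annual_loss(n_years=100_000, breach_rate=2.0,
                         loss_median=250_000, loss_sigma=1.2):
    """Annual loss = sum of lognormal loss amounts over a Poisson breach count."""
    counts = rng.poisson(breach_rate, n_years)
    losses = np.zeros(n_years)
    for i, k in enumerate(counts):
        if k:
            losses[i] = rng.lognormal(np.log(loss_median), loss_sigma, k).sum()
    return losses

losses = simulate_annual_loss()
x = 0.05  # probability that the annual loss exceeds the VaR threshold
var = np.quantile(losses, 1 - x)
print(f"Cyber VaR at {1 - x:.0%} confidence: ${var:,.0f}")
```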

The work by the DHS research team dovetails with the work from WEForum. The DHS research team is beginning to explore whether there are thresholds for return on investment and effectiveness of cyber DRGs at a given organizational level of maturity / automated security posture, by mapping protection sets to the Implementation Tiers (the maturity level of defending systems) listed in the NIST Cybersecurity Framework.

An understanding of the value-at-risk to cyber threats necessitates analysis of the workloads processed by an organization and how the workloads relate to mission accomplishment or business value.

Combining VaR as it relates to mission workloads with the kill chain model helps to highlight the business impact of specific threats and cyber DRGs. Ideally, security protections operate within the mission workload context. The “ideal” security protection set marries the current state (running context) of every workload in a data center, the applications those workloads take part in, and the environment the applications run in (e.g., development, PCI, production), and allocates the minimum set of privileges needed to make each application work. Valuing the workload as an asset-at-risk rather than a static asset also brings time and location context into the cyber investment equation. This seems to be an important step forward in understanding investments in cyber protections, and it especially helps in defining the business case for dynamic defenses.
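As a rough sketch of what tying protections to workload context might look like, the following code models a workload record (application, environment, running state) and a least-privilege check against it. The fields and the example rule are hypothetical illustrations of the idea, not any particular product's policy model.

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    application: str    # the application this workload takes part in
    environment: str    # e.g., "development", "PCI", "production"
    state: str          # current running context, e.g., "serving", "maintenance"
    allowed_flows: set = field(default_factory=set)  # minimum set of permitted flows

def minimal_policy(workload: Workload, requested_flow: str) -> bool:
    """Allow a flow only if it is in the workload's minimum privilege set
    and the workload is in a state/environment where it should be serving."""
    if workload.environment == "development" and "payment-db" in requested_flow:
        return False  # example rule: dev workloads never reach regulated data
    if workload.state != "serving":
        return False
    return requested_flow in workload.allowed_flows

web = Workload("web-01", application="storefront", environment="production",
               state="serving", allowed_flows={"web-01->app-01:8443"})
print(minimal_policy(web, "web-01->app-01:8443"))      # True: in the minimum privilege set
print(minimal_policy(web, "web-01->payment-db:5432"))  # False: not a needed privilege
```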

Adding Complexity to the Investment Equation

Another metric that could add insight into the cyber investment formula is a complexity metric. The underlying premise is that simplifying the protection approach can produce better security and lower costs by lessening the drag on business processes caused by overburdened analysts, overlapping technology, and slow, stovepiped security processes. In other words, simplifying the protections around a workload can create business value that outweighs any risk introduced by lowering its complexity. Reducing complexity can also come from streamlining processes, making them more agile and adaptive. Minimizing the attack surface is another technique for reducing the complexity of a system or a network.
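One possible way to operationalize such a metric is a simple weighted score over the complexity drivers named above; the sketch below is only illustrative, and the weights, inputs, and sample values are my own assumptions.

```python
def complexity_score(manual_touch_points: int,
                     overlapping_tools: int,
                     stovepiped_processes: int,
                     exposed_services: int,
                     weights=(3.0, 2.0, 2.0, 1.0)) -> float:
    """Higher scores suggest more opportunity for configuration error and more
    drag on business processes: candidates for automation investment."""
    factors = (manual_touch_points, overlapping_tools,
               stovepiped_processes, exposed_services)
    return sum(w * f for w, f in zip(weights, factors))

# Hypothetical comparison of two workloads.
print(complexity_score(manual_touch_points=12, overlapping_tools=4,
                       stovepiped_processes=3, exposed_services=9))  # high complexity
print(complexity_score(manual_touch_points=2, overlapping_tools=1,
                       stovepiped_processes=0, exposed_services=3))  # low complexity
```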

Adding a complexity metric as a way to measure a workload can help pinpoint places that need investment in security automation and adaptive defenses. To explain further, simplifying a system reduces the human error in configuration that leads to vulnerabilities. One of the best ways to reduce complexity and the probability of configuration errors is to decrease the number of human touch points. In a world where security teams cannot afford to ever get the wrong answer, security automation is essential. The only way to secure applications in an automated world is to decouple security from the underlying physical network, then use algorithms to solve the computing problem. Using automated security tied to the context of the workload, rather than the network, massively simplifies the configuration of security protections. NFV is a prime example of an automated, workload-oriented overlay network that can tie security protections to the context of mission processes.

Investment strategy also needs to be fine-tuned to handle changes in context that could lead to attacks on assets. Time and location are examples of context changes that can alter the value of assets. Some assets may become more significant targets because of their location or proximity to other targets, because of their network connectivity and the supply chains that add value to them, or at certain times (M&A, tax season, a state visit, litigation). Assets can also be mobile and therefore become endangered when they move into threat zones. The value of an asset is therefore context dependent: location, time, and other factors. In my next stop I will explore how to extend the workload context of security investment into real-time cyber economics decision-making, aka real-time risk-adaptive cybersecurity that can account for these changes in context.