I am always on the lookout for new, innovative tools, especially ones that break new ground in the cybersecurity fight. At a recent conference I ran across a tool that does just that – Verodin – though it doesn't fall into the typical "detect" or "protect" roles that most security tools occupy. Instead, Verodin provides evidence-based, continuous validation that the controls you employ for your enterprise are present and working. It also allows the CISO to step back, evaluate his or her security posture for gaps, and prioritize investments. I regard this type of capability as comprising a new and unique category of tools – one that the industry is starting to call "security instrumentation platforms." Learn more about this new type of tool from Verodin's Chief Strategy Officer and evangelist – Major General Earl Matthews – a retired USAF general who got started in security at a young age and made it a key part of his career in the Air Force. His excitement about this new capability was quite evident in my discussions with him. You can also click on the ad to the right to find out more about Verodin's solutions, and check out the podcast where General Matthews and I go into more depth on this technology and other cyber topics.

Spotlight on Major General Earl Matthews (USAF Ret.)

» Title: Chief Strategy Officer, Verodin

» Website: https://www.verodin.com

» LinkedIn: linkedin.com/in/earlmatthews

Read his bio below.


Chris Daly, Active Cyber™: Verodin is making itself known as a “Security Instrumentation Platform” or SIP. What is a “security instrumentation platform” and how does it add value to an already crowded mix of security tools that are operated by an enterprise?

Major General Earl Matthews (Ret.), Chief Strategy Officer, Verodin: First and foremost, we are not a defensive security tool. Until the onset of platforms like the Verodin Security Instrumentation Platform, or SIP, even with the best tools and the best people it was almost impossible to validate security controls with any level of empirical evidence on an automated, continuous basis. CISOs had to rely on audits or penetration tests, but these only provided a snapshot in time. As such, security was, and still is in many cases, assumption-based in regard to how effective security tools are in production across measures like prevention, detection, and correlation. Basing security on assumptions instead of evidence is one of the main causes of reduced value from security tools and reduced overall security effectiveness.

Verodin SIP’s ongoing approach addresses this specifically, since instrumentation isn’t about highlighting that only 20 percent of your security is effective—it’s about getting the other 80 percent right and keeping it there. The platform instruments customer IT environments to test the effectiveness of network, endpoint, email and cloud controls. Verodin SIP continuously executes tests and analyzes the results to proactively alert on drift from a known-good baseline and validate control configuration. The platform provides evidence demonstrating if a customer’s controls are actually delivering the desired business outcomes — or if they are exposing them to risk. 
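To make the idea concrete, here is a minimal, purely illustrative sketch of what validating a single control against a safe test action might look like. This is not Verodin's code or API – every name below is hypothetical – but it captures the prevent / detect / miss classification that evidence-based validation produces.

```python
# Illustrative sketch of control validation (not Verodin's implementation):
# replay a benign test action and record whether the control prevented,
# detected, or missed it. All names here are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    PREVENTED = "prevented"
    DETECTED = "detected"     # allowed through, but an alert fired
    MISSED = "missed"         # allowed through with no alert

@dataclass
class TestAction:
    name: str                 # e.g., "c2-beacon-egress"
    vector: str               # "network", "endpoint", "email", "cloud"

def validate_control(action: TestAction, execute, check_alerts) -> Outcome:
    """Execute a safe test action and classify the control's response."""
    blocked = execute(action)            # did the control stop the action?
    if blocked:
        return Outcome.PREVENTED
    return Outcome.DETECTED if check_alerts(action) else Outcome.MISSED

# Example usage with stubbed control behavior:
outcome = validate_control(
    TestAction("c2-beacon-egress", "network"),
    execute=lambda a: False,          # control failed to block
    check_alerts=lambda a: True,      # but the SIEM alerted
)
print(outcome)                        # Outcome.DETECTED
```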

Active Cyber™: What are the critical business needs addressed by the Verodin solution? 

MG Matthews: There are four critical needs:

1. Controls effectiveness – It is critical that businesses have evidence that the controls protecting their critical assets are effective and remain so. Don’t assume controls are working correctly.

2. Optimize & rationalize – It is time to pause and replace assumptions with evidence. Leverage instrumentation to optimize existing controls and rationalize true gaps and overlap before continuing with the next product purchase or resource-intensive project.

3. Environmental drift detection – A fundamental challenge for cybersecurity is that it is burdened with the responsibility of protecting the environment without the corresponding authority to control it. As new systems and applications come into play, the security stack can be affected. Environmental drift detection ensures controls are measured and optimized on an ongoing basis.

4. Understanding risk – The disconnect between an organization’s cybersecurity assumptions and its true posture is generally so vast that any discussion around “cyber risk” is premature. Remove assumptions and validate effectiveness.

Active Cyber™: What is “environmental drift,” why is it detrimental to IT operations, and how does Verodin help enterprises remedy this drift? 

MG Matthews: The IT operational environment is dynamic. With changes to IT and networks, the environment is never going to be the same as it was the previous day. Even in organizations with rigorous change management processes, every party must fully understand the scope of a change, clearly communicate its impact, and ultimately execute on it with 100 percent perfection – which rarely happens in practice.

With the use of Verodin monitors, Verodin SIP allows users to constantly test their environment and know if a security control that has been put in place is no longer functioning because of a change — whether known or unknown. It's a true game-changer.
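Conceptually, drift detection reduces to comparing today's validation results against a known-good baseline. The sketch below is hypothetical (it is not how Verodin implements monitors) but illustrates the comparison:

```python
# Hypothetical sketch of environmental drift detection: compare today's
# validation results against a known-good baseline and flag any control
# whose outcome has regressed since the baseline was captured.
def detect_drift(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return test actions whose outcome changed from the known-good baseline.

    Both dicts map a test-action name to an outcome string such as
    "prevented", "detected", or "missed".
    """
    drifted = []
    for action, expected in baseline.items():
        observed = current.get(action, "missed")   # absent result = worst case
        if observed != expected:
            drifted.append(f"{action}: {expected} -> {observed}")
    return drifted

# Example: an unrelated firewall change silently broke a blocking rule.
baseline = {"c2-beacon-egress": "prevented", "phishing-attachment": "detected"}
current  = {"c2-beacon-egress": "missed",    "phishing-attachment": "detected"}
print(detect_drift(baseline, current))   # ['c2-beacon-egress: prevented -> missed']
```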

Active Cyber™: From your perspective gained through your background and experience in the cyber corps of DoD, what excites you the most about the possibilities that the Verodin solution provides?

MG Matthews: I've seen us get better overall at detection capabilities and dwell times once the enemy has penetrated our networks. However, the same two problems exist – namely, cyber-hygiene issues and the email threat vector. Since 2004, I've been speaking about the ever-growing amount of money being spent and the number of products that an organization needs to combat this growing threat. What excites me most about the Verodin platform is that, for the first time, CISOs can have quantifiable data on a continuous basis showing that their security controls are validated and working as they think they should be. Having the ability to actually start measuring how the security products in an environment are performing, and determining whether a company actually needs them all, is critical to overall operations. Then security professionals can actually start reducing the complexity in security and improve their overall defenses, processes and people by increasing ROI through data.

Active Cyber™: What is the Threat Actor Assurance Program developed by Verodin and others, and how can it be used to validate cybersecurity effectiveness? How does it leverage the MITRE ATT&CK model? Who are the other players in the program and how do they contribute? 

MG Matthews: At RSA 2019 we introduced our new Threat Actor Assurance Program (TAAP), which combines industry-leading threat intelligence from a number of industry partners with Verodin’s proven capability to validate cybersecurity effectiveness. This program delivers actionable intelligence on how an organization’s defenses will perform against the threat actors specifically targeting them.

As part of the program, we are planning to introduce our new Threat Actor Assurance Module (TAAM), which will provide customers with the ability to determine whether threat actors can get through their defenses before an actual attack by making threat intelligence actionable. TAAM will validate the ability of a customer's defensive stack to prevent, detect, and alert on both indicators of compromise and tactics, techniques, and procedures (TTPs), including those catalogued in the MITRE ATT&CK™ framework.

Organizations using Verodin TAAM will also be able to determine if they have gaps in control visibility or misconfigurations that could aid in a threat actor compromise. Once an organization has a baseline understanding of their coverage, they can tune and optimize their security stack to reach a higher level of assurance. This capability extends the functionality of the Verodin MITRE ATT&CK module, launched in August 2018.
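To illustrate the kind of coverage view this enables, here is a small, hypothetical example of rolling validation results up by MITRE ATT&CK tactic; the technique IDs are real ATT&CK identifiers, but the outcomes are invented:

```python
# Hypothetical sketch of summarizing validation results against MITRE ATT&CK
# techniques: given per-technique test outcomes, report coverage gaps by tactic.
from collections import defaultdict

# Illustrative results keyed by (tactic, technique ID); real mappings would
# come from a platform's test library, not from this hard-coded sample.
results = {
    ("Initial Access", "T1566"): "detected",    # Phishing
    ("Execution", "T1059"):      "prevented",   # Command and Scripting Interpreter
    ("Exfiltration", "T1048"):   "missed",      # Exfiltration Over Alternative Protocol
}

gaps = defaultdict(list)
for (tactic, technique), outcome in results.items():
    if outcome == "missed":
        gaps[tactic].append(technique)

for tactic, techniques in gaps.items():
    print(f"Gap in {tactic}: no prevention or detection for {', '.join(techniques)}")
```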

Active Cyber™: How does Verodin help teams collaborate across functional silos – such as dev and ops, or network and systems operations centers – while managing the effectiveness of the enterprise's security controls?

MG Matthews: This is all about RISK. In every other risk discipline (finance, HR, operations, manufacturing) that an enterprise is managing, there are hard, quantifiable bits of information that you use to tell if you are making progress. Security instrumentation gives us that quantifiable insight into where we need to invest our time, money and people. 

Many organizations buy a security product based on a specific business goal with no real validation of whether the control is accomplishing what it’s supposed to. They then use this list of controls to make broad statements around risk with no understanding of what each control is actually doing. For example, there is a call to “protect customer data,” so funds are approved for a DLP, which then gets implemented in alerting-only mode. The desire to stop malware and “advanced threat behaviors” generates funds to purchase a Next Generation Firewall (NGFW), which only actually blocks 25 percent of the things it is marketed to do out-of-the-box.

Simply owning a technology means nothing. Technology effectiveness and configuration must be validated, continuously.

Active Cyber™: Cyber resiliency, which involves the ability to safely and securely operate in a degraded mode, is the new buzzword, especially when it comes to IoT and Industrial control systems. How does Verodin account for the measures of resiliency, including safety instrumented systems, as it looks to extend its capabilities to these emerging market segments?

MG Matthews: Verodin’s mission is all about furthering the concept and frameworks behind Cyber Resilience. The concept of Cyber Resiliency is gaining traction because it brings InfoSec and business continuity together. Blending these concepts improves an organization’s ability to operate despite adverse cyber events. If executed properly, Cyber Resiliency frameworks enable organizations to manage information security more like traditional business units.

Active Cyber™: Congratulations on your recent acquisition by FireEye. What type of synergy should customers expect to find as a result of the acquisition? What changes will there be for Verodin as a result of the acquisition?

MG Matthews: Every day, FireEye is on the frontlines of cyber attacks with the same mission as Verodin – to relentlessly protect organizations from the threats that are targeting them. Every day, FireEye Mandiant consultants witness first-hand the world’s most massive breaches that have resulted from exploited, misconfigured or disabled security technologies. The Verodin platform was purpose-built to expose these gaps, which has positioned Verodin as a strong complement to the existing cybersecurity products and technology-enabled services FireEye brings to the table.

By incorporating FireEye frontline intelligence and leading incident response expertise, Verodin SIP will allow organizations to test security environments against both publicly known and newly discovered threats to identify risks in security controls before a breach occurs. By bringing our two organizations together, we will be able to scale our development, quality assurance, customer support, and sales teams with the global reach and extensive resources of FireEye. Our combined goal is to continuously improve our customers’ ability to rapidly adapt defenses to the evolving threat landscape.

The integration of the Verodin platform and FireEye's technology, intelligence and expertise significantly enhances FireEye's ability to relentlessly protect our customers. Equipped with FireEye's leading expertise and frontline intelligence, the Verodin platform tests customers' security environments against both publicly known and newly discovered threats.

This proactive, repeatable and measurable approach will allow customers to identify risks in their security controls before a breach occurs and orchestrate the processes needed to optimize their defenses. The combination of FireEye and Verodin will empower customers to rapidly adapt their defenses to the evolving threat landscape, while also maximizing ROI from their security investments. Further, having FireEye customers use the Verodin platform will help automate their security effectiveness testing and ensure that they are constantly getting the most out of their investments.


Thank you MG Matthews for this informative overview of Verodin’s ground-breaking security instrumentation capability. Providing evidence that security controls are in place and working, like that offered by Verodin, is a must-have feature for any CISO given the high visibility of cyber attacks and the pressure on corporate boards to show due diligence in the area of cybersecurity risk management. I believe that the security instrumentation market is about to take off and Verodin seems to be well-positioned with FireEye to take advantage of this uptick. I look forward to hearing more about Verodin’s continued success in the market. And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing ICS / IIoT and IoT systems, or other security topics. Also, email marketing@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.

About Major General Earl Matthews (USAF Ret.)

Major General Earl Matthews USAF (Ret) is an award-winning retired Major General of the U.S. Air Force with a successful career influencing the development and application of cybersecurity and information management technology. His strengths include his ability to lead large-scale, diverse, global organizations that operate, extend, maintain and defend global networks. He has earned a reputation as a motivational leader and change agent focused on delivering technical innovations that resolve complex challenges.

Spotlight: Cybersecurity Research

I thought for this tax day (April 15) I would take an accounting of the articles and interviews I have written or conducted since the inception of ActiveCyber.net. Since the initial launch of ActiveCyber.net in 2014, Active Cyber has presented 30 articles and 41 interviews to readers, as shown in the table below. Most of the focus has been on active defenses, with some forays into risk economics, risk management, quantum cryptography, IoT/ICS, SOC operations and VR systems, blockchain, cyber forensics, and cyber training. Coming up over the next few months I plan to do a deeper dive into ICS/IoT/IIoT security, develop some insight into the security of autonomous systems, go further into the intricacies of securing blockchain, investigate trusted systems, explore the human side of active defenses, and take a look at security and privacy issues around augmented reality / virtual reality. Let me know if you are interested in guest blogging, doing an interview on a topic, or presenting an offering in one of these areas.


 

# | Title of Article or Interview (linked to post) | Category / Menu path | Status | Date Published
1 | DHS S&T Lays Out a Broad Yet Innovative Portfolio of Cybersecurity Research In This Year's Showcase | Spotlight, Spotlight – Interviews, Spotlight – Research & Standards | Published | 2019/03/15
2 | Learn How D3 Security's SOAR Platform Transforms SOC Operations In This Interview With ActiveCyber.net | Automated Orchestration, Spotlight – Interviews | Published | 2019/03/04
3 | Not Going to RSA? Check Out These Upcoming Cyber Conferences | Spotlight, Spotlight – Cybersecurity News & Events | Published | 2019/02/25
4 | Mr. Robert Rahmer, Program Manager of IARPA's CAUSE Program, Discusses Progress in Cyber Event Forecasting Research | Spotlight, Spotlight – Interviews, Spotlight – Research & Standards | Published | 2019/02/19
5 | Learn How Cybersecurity Dynamics Lays The Foundation For Advanced Cybersecurity Defenses In This Active Cyber Interview with Professor Shouhuai Xu of UTSA | Spotlight, Spotlight – Interviews | Published | 2019/02/14
6 | Digital Forensics Pioneer Jim Christy Provides His Unique Insights In This Interview with Active Cyber™ | Spotlight, Spotlight – Interviews | Published | 2019/02/11
7 | Cyber Risk Assessment Takes On An Actionable Approach In This Methodology by Dr. Charles Harry of UMD | Spotlight, Spotlight – Interviews | Published | 2019/02/04
8 | University of Maryland 15th Annual Forum Highlights Research At Intersection of Cyber, Business, and Public Policy | Spotlight, Spotlight – Cybersecurity News & Events, Spotlight – Research & Standards | Published | 2019/01/23
9 | Learn How the Army Research Lab Is Paving the Way to a Virtual Reality-Based Security Operations Center In This Interview With ActiveCyber.net | Spotlight, Spotlight – Interviews | Published | 2019/01/02
10 | Learn How DARPA Is Leading Research In Developing Resilient and Cyber-Assured Embedded Systems In This Active Cyber Interview | Spotlight, Spotlight – Interviews | Published | 2018/12/05
11 | Dr. Ron Ross of NIST Discusses New Multi-dimensional Cyber Protection Strategy In This Interview with Active Cyber | Spotlight, Spotlight – Interviews | Published | 2018/11/27
12 | Active Cyber Interviews Maryam Rahmani of Global Cyber Alliance – Learn About Free DMARC and DNS Security Offerings to Fight Internet Threats | Spotlight, Spotlight – Interviews | Published | 2018/11/20
13 | Cybrary's Free and For Pay Training Offerings Helps to Turn Cyber Novices Into Professionals and Professionals Into Experts. Learn More In This Interview with Ralph Sita | Spotlight, Spotlight – Interviews | Published | 2018/11/08
14 | CyberFence from Ultra Electronics 3eTI Helps Secure ICS Enclaves – Learn How In This Interview with Ben Garber of 3eTI | Spotlight, Spotlight – Interviews, Spotlight – Product Reviews | Published | 2018/09/18
15 | AI and Mod-Sim Tools Create Insight for Better Cyber Investing | Active Cyber Defense, Automated Orchestration | Published | 2018/09/12
16 | Learn How New Context Services Is Building Secure Attribution and Threat Detection Into ICS Security | Spotlight, Spotlight – Interviews | Published | 2018/08/15
17 | Scalable Network Technologies' Mod-Sim Tools Enable LVC Training and Testing of Active Defenses Across Different Cyber Terrains | Spotlight, Spotlight – Interviews | Published | 2018/07/24
18 | How Can Machine Learning Improve Your Cyber Posture? Learn how in this interview with Homer Strong of Cylance | Spotlight, Spotlight – Interviews | Published | 2018/04/23
19 | Learn how Demisto applies machine learning to facilitate collaborative investigations beyond playbook automation | Spotlight, Spotlight – Interviews | Published | 2018/01/30
20 | ActiveCyber Interviews Professor Ehab Al-Shaer on Advanced Research in Autonomous Cyber Defenses | Spotlight, Spotlight – Interviews, Spotlight – Research & Standards | Published | 2017/12/21
21 | ActiveCyber Interviews Ron Gula – Former Tenable CEO Embarks on New Venture in Cybersecurity Start-up Investing | Spotlight, Spotlight – Interviews | Published | 2017/11/27
22 | ActiveCyber Interviews Jeanette Manfra – Assistant Secretary for the Office of Cybersecurity and Communications at DHS on Cybersecurity Strategies and Plans | Spotlight, Spotlight – Interviews | Published | 2017/11/20
23 | ActiveCyber Interviews Tony Sager – Chief Evangelist of the Center for Internet Security – on Emerging Technologies for Active Defenses | Active Cyber Defense, Spotlight, Spotlight – Interviews | Published | 2017/11/07
24 | Government-Industry Partnerships Enable Rapid Growth in Security Automation Advances and Adoption | Active Cyber Defense, Spotlight, Spotlight – Interviews | Published | 2017/10/09
25 | Security Orchestration: Looking Forward | Automated Orchestration | Published | 2017/09/27
26 | Waverley Labs Pioneers Open Source Version of CSA's Software-Defined Perimeter Specification | Spotlight, Spotlight – Interviews | Published | 2017/08/21
27 | Start-up Rivetz Wants to Secure Internet Transactions Using Crypto Tokens that Provide Proof of Security | Spotlight, Spotlight – Interviews | Published | 2017/08/14
28 | ActiveCyber Interviews Scott Musman, MITRE, About Applying Gaming Techniques to Cyber Risk Estimation | Spotlight, Spotlight – Interviews | Published | 2017/06/12
29 | ActiveCyber Interviews Professor Jintai Ding – Author of Post-Quantum Safe Crypto Algorithm and PAKE Protocol | Spotlight, Spotlight – Interviews | Published | 2017/05/23
30 | ActiveCyber Interviews Oliver Friedrichs, CEO of Phantom, on Security Orchestration | Spotlight, Spotlight – Interviews | Published | 2017/04/11
31 | ActiveCyber Interviews Professor Kanta Matsuura on Blockchain and BSafe.network | Spotlight, Spotlight – Interviews | Published | 2017/03/21
32 | ActiveCyber Interview with Kris Lovejoy – CEO of BluVector, Inc. | Spotlight – Interviews | Published | 2017/03/15
33 | New Innovations Drive Active Cyber Defense | Active Cyber Defense, Articles | Published | 2017/01/03
34 | ActiveCyber Interview with Mr. Curtis Dukes – Deputy National Manager for National Security Systems at NSA | Spotlight, Spotlight – Interviews | Published | 2016/11/15
35 | Interview with Mike Brown, CTO of ISARA Corporation on Quantum Cryptography | Spotlight, Spotlight – Interviews | Published | 2016/11/01
36 | Run-time Cyber Economics – Applying Risk-Adaptive Defenses | Active Cyber Defense, Articles | Published | 2016/07/26
37 | The Rise of Quantum Computers – The Current State of Cryptographic Affairs | Active Cyber Defense, Articles | Published | 2016/05/31
38 | Who is Innovating in the IoT Security Space – and How? | Active Cyber Defense, Articles | Published | 2016/05/17
39 | ActiveCyber Interview With Dr. Doug Maughan – DHS HSARPA Cyber Director | Spotlight – Interviews, Spotlight – Research & Standards | Published | 2016/04/21
40 | Are We There Yet? A Walk Through Quantum Capabilities Today | Spotlight, Spotlight – Research & Standards | Published | 2016/04/19
41 | ActiveCyber Interview with Steve Orrin and Ned Miller — Security Leaders for Intel Security | Spotlight, Spotlight – Interviews | Published | 2016/04/04
42 | Continuing the Quest for How to Invest in Cybersecurity – Stops 2 and 3 | Active Cyber Defense, Articles | Published | 2016/03/22
43 | How Can Global Network Reputation Scores Be Used To Guide Cyber Investment? ActiveCyber Interviews Leading Researcher – Mingyan Liu – To Find Out | Spotlight, Spotlight – Interviews | Published | 2016/03/22
44 | Learn the Fundamentals of the Gordon-Loeb Cyber Investment Model in this Interview with Professor Gordon | Spotlight, Spotlight – Interviews | Published | 2016/03/22
45 | ActiveCyber's Quest for the Holy Grail of Cyber ROI | Active Cyber Defense, Articles | Published | 2016/03/14
46 | Interview with Matt Barrett of NIST on the Cybersecurity Framework | Spotlight, Spotlight – Interviews | Published | 2016/02/07
47 | Cloud Access Security Brokers – The New Frontier of Active Cyber Defenses | Agile Cloud, Articles | Published | 2016/02/01
48 | ActiveCyber Interview with Rob Frazier – Certified Splunk Architect | Spotlight, Spotlight – Interviews | Published | 2015/12/02
49 | ActiveCyber Interview with Bob Gourley – Former CTO of DIA | Spotlight, Spotlight – Interviews | Published | 2015/10/30
50 | Adaptively secure smartphones | Adaptive Endpoint, Articles | Published | 2015/10/28
51 | Interview with Dr. Eric Burger: the State of Cyber Threat Intelligence Research / STIX/TAXII | Spotlight, Spotlight – Interviews | Published | 2015/10/09
52 | JHU APL Community Day Lays Out Research Roadmap for Active Cyber Defense | Spotlight, Spotlight – Research & Standards | Published | 2015/10/08
53 | Interview with Richard Spires, Former CIO of DHS | Spotlight, Spotlight – Interviews | Published | 2015/09/29
54 | Getting Proactive About User & System Management | Articles, Automated Orchestration | Published | 2015/09/04
55 | Interview with Bob Bigman, Former CISO of CIA | Spotlight, Spotlight – Interviews | Published | 2015/08/20
56 | Voice Biometrics – Plugging the OPM Breach | Adaptive Endpoint, Articles | Published | 2015/08/12
57 | Interview with NIST's Ron Ross | Spotlight – Interviews | Published | 2015/08/04
58 | Spotlight On… Active Cyber Defense Events – Past, Present and Future | Spotlight – Cybersecurity News & Events | Published | 2015/06/30
59 | Spotlight: Something New for the Internet of Things | Spotlight – Product Reviews | Published | 2014/11/19
60 | Spotlight: US Federal Research on Active Cyber for 2015 | Spotlight – Research & Standards | Published | 2014/11/18
61 | The Three Ds: Deception, Detection, Delay Part 3 | Deception Detection & Delay | Published | 2014/11/18
62 | Agile Cloud Security | Agile Cloud | Published | 2014/11/15
63 | The Three Ds: Deception, Detection, Delay | Deception Detection & Delay | Published | 2014/11/15
64 | Adaptive Endpoints | Adaptive Endpoint | Published | 2014/11/15
65 | Automated Orchestration | Automated Orchestration | Published | 2014/11/15
66 | The Three Ds: Deception, Detection, Delay Part 2 | Deception Detection & Delay | Published | 2014/11/15
67 | Active Cyber Defense Capability Areas | Active Cyber Defense | Published | 2014/11/15
68 | What is Active Cyber Defense? | Active Cyber Defense | Published | 2014/11/05
69 | Intel-based Defenses | Intel-Based Defenses | Published | 2014/10/25
70 | Intel-Based Defenses Part 2 | Intel-Based Defenses | Published | 2014/10/25
71 | Intelligent Networks | Intelligent Networks | Published | 2014/10/16


My recent SOC Service Manager experience got me thinking about cyber tool sprawl and the return on investment that my customer was getting for their cyber spending. There are easily over 30 different cyber tools in their portfolio, which is not unlike many large organizations these days. And it seemed during my tenure that more of my cyber operations resources were directed to cyber tool break-fix and maintenance than to optimizing their effective operation and contribution to risk reduction in the enterprise. These tool issues stemmed from two causes, in my opinion:

1) the customer's desire to provide a tool to cover every threat vector and to protect every asset location and every asset type; and

2) a lack of focus on technology refresh / end-of-life. The thirst for cyber solutions never seems to get quenched either, as new tools are always being evaluated and added to the portfolio, but no tool ever seems to leave.

Generally these tools all serve a purpose in protecting the enterprise, and even though the risks to the enterprise are seemingly reduced, I wonder at what cost and to what extent. Could the cyber investment be more effective? What is the overall tool-cost-to-risk-reduction ratio? How much duplication of protection was there? What gaps exist in the coverage of risks? Sometimes it didn't seem like the risk level had materially diminished with the cumulative operation of all the tools. There often seemed to be a lack of actionable situational information regarding asset cyber posture and operational state, or ambiguity around the attribution and authority for actions performed on the network, or uncertainty regarding the current threat status or potential impact of a threat to the mission. This lack of situational information regarding risk status made it difficult to discern where cyber investments should be reallocated, since it is hard to prioritize investments against the greatest risk when you don't know how much risk there is and where it lies.

One reason for this conundrum, in my view, is that the cyber workforce skill levels seem to lag from the time a tool first gets introduced into the enterprise to the time a resource is actually trained and proficient in the use of the tool. In fact, lack of familiarity with the tool – its proper configuration, control, and operational standards – often occurred due to frequent personnel turnover, lack of training, and variation in approved baselines from location to location or tool to tool. These inexperienced operator issues and the lack of global standards sometimes created greater risks and occurrences of downtime through misconfigurations, misinterpretations of data, or incorrect security policy changes.

These problems have also been noted by Dr. Gene Spafford, the founder and executive director of Purdue University's Center for Education and Research in Information Assurance and Security (CERIAS), who observed: "… poorly coded software combined with growing network complexity has increased the attack surface at many organizations. This has resulted in using all these [security] tools on a regular basis because the underlying software is not trustworthy." All too often security tools do not integrate well with each other and only cover a subset of the assets that need to be protected. These coverage gaps result in loss of situational awareness, difficulties in defending against attacks, and slow response processes.

The problems that resulted from the lack of tool experts were further complicated by practices that relied on functional silos to manage and conduct work activities. In general, IT organizations have varying budgets and requirements leading to functional silos. Silos promote specialized (discipline specific) data sources, models, and methods to inform decision-making and guide work activities. However, the disjointed models of the different silos for my customer tended to inhibit collaborative decision-making and hampered accurate and holistic views of the cyber terrain. Cross-discipline communications about issues were through static, watered-down models or tool data subject to varying interpretation—“This is what I am seeing. What are you seeing?” The lack of a common language to describe what is happening, along with a common view of events and a common system model, created delays in diagnosing or troubleshooting problems and hid incidents from view, as well as hindered other operational processes.

One benefit of cyber tool investment often highlighted by the vendor is an increase to the speed-to-capability for responding to problems and incidents. However, IT operations are governed by ITSM processes and ticketing procedures. While necessary for managing change and understanding performance, these ITSM tools, processes and procedures tend to drive the response pace and may slow the tempo of IT operations. In my experience, playbooks for any event response often required a great deal of manual intervention with the workflow triggered and maintained by a ticketing system. Mitigations were often delayed as tickets got escalated and lost amid confusing hand-offs between functional silos and non-interoperable tools. Often, change management boards were required prior to making modifications to systems. This manually-intensive workflow system created significant latencies in incident responses, potentially allowing cyber adversaries time to pivot and to burrow into the network.

So in addition to improving workforce training and experience, I believe that more attention is needed towards security system engineering to manage cyber investment effectively. I believe a set of “living models” reflecting a cross-discipline view of the cyber enterprise is a must-have to begin to understand how to [re-]allocate cyber investments. The models should be centered on assessing risk to the mission or business area and to providing situational awareness about the relationships between mission, people, technology, and process. The following chart summarizes some of the types of models and information involved in such cross-discipline views.

My vision is for a model that is centered on the development and management of an operational Mission Model of the different mission flows and dependencies. An ontology such as the DoDAF Meta Model (DM2) or the Mission and Means Framework (MMF) could be used as the foundation for such a model. Dependencies flow down from the mission objectives to the tasks, assets, flows, and threats related to accomplishing the mission. Each element of the model could be tagged with attributes, for example, mission assets can be tagged with attributes such as readiness, asset value, and identity for use in the model.
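As a purely illustrative example of the kind of structure I have in mind (not the DM2 or MMF ontology itself, and with invented names and values), a mission dependency model might start out as simply as this:

```python
# A minimal, illustrative mission dependency model in the spirit described
# above: mission -> tasks -> assets -> threats, with attributes tagged on each
# element. This is a sketch, not the DM2 or MMF ontology itself.
mission_model = {
    "mission": {"name": "Process customer orders", "value": 0.9},
    "tasks": {
        "accept-order":  {"depends_on_assets": ["web-frontend", "order-db"]},
        "fulfill-order": {"depends_on_assets": ["order-db", "erp-system"]},
    },
    "assets": {
        "web-frontend": {"readiness": "up", "asset_value": 0.6},
        "order-db":     {"readiness": "up", "asset_value": 0.9},
        "erp-system":   {"readiness": "degraded", "asset_value": 0.8},
    },
    "threats": {
        # which assets each threat can plausibly reach
        "sql-injection": {"targets": ["order-db"]},
        "ransomware":    {"targets": ["erp-system", "order-db"]},
    },
}

# Walk the dependencies downward: which threats ultimately endanger the mission?
exposed_assets = {a for t in mission_model["tasks"].values()
                  for a in t["depends_on_assets"]}
relevant_threats = [name for name, t in mission_model["threats"].items()
                    if exposed_assets & set(t["targets"])]
print(relevant_threats)   # ['sql-injection', 'ransomware']
```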

To make this “living model” real to the cyber analyst or operator, I would use it to evaluate potential attacks and protections (including new tools, security policies, and controls), while using live or simulated event data that captured the dynamics of the cyber terrain. As shown in the following figure, a series of models would be constructed and tied together by a scenario that reflects the critical activity sequence and timing of actions in the scenario.

• Defense objectives would be identified, modified, and allocated to sensors based on heuristics to create the sensor grid.
• An adversarial grid would also be developed based on the MITRE ATT&CK model and the exposures related to the enterprise attack surface.
• A Network Model describes the computational and communication assets that are within the scope of the decision space, including their capacity limits and current configuration. A Vulnerability Graph, based on a network model, would be used to depict the influence of attacks and capture intrusion propagation.
• A Mission Model incorporates the objectives and "demands" or requirements that are to be met; policy, budget, or operational constraints; and the operational tasks and resource dependencies needed to support the mission. A mission could be defined simply as a "workload" or a more complex business process. The Mission Dependency graph (Mission-Task-Asset-Threat map) would capture the attacks' impacts on system objects as they relate to "missions." A Bayesian network can be constructed on top of the MTA map to infer the probabilities of missions being tainted – a simplified sketch follows this list.
• A Control Model describes the security controls. I would use Cybersecurity Framework Profiles to map the protections / controls to threats. The degree to which the control provided mitigation of the attack would need to be evaluated and captured as part of the model.
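For the Bayesian inference step mentioned above, here is a deliberately simplified, hypothetical sketch: it assumes independence between assets and tasks (which a real Bayesian network would not), and all probabilities are invented placeholders.

```python
# Simplified, hypothetical sketch of inferring the probability that a mission
# is "tainted" from per-asset compromise probabilities via the
# mission-task-asset dependency map. A real implementation would use a proper
# Bayesian network; here we assume independence for brevity.
p_asset_compromised = {"web-frontend": 0.05, "order-db": 0.20, "erp-system": 0.10}

task_dependencies = {
    "accept-order":  ["web-frontend", "order-db"],
    "fulfill-order": ["order-db", "erp-system"],
}

def p_task_tainted(assets: list[str]) -> float:
    """A task is tainted if any asset it depends on is compromised."""
    p_all_clean = 1.0
    for a in assets:
        p_all_clean *= (1.0 - p_asset_compromised[a])
    return 1.0 - p_all_clean

# The mission is tainted if any of its tasks is tainted (again assuming
# independence, which a real Bayesian network would not need to assume).
p_mission_clean = 1.0
for task, assets in task_dependencies.items():
    p_mission_clean *= (1.0 - p_task_tainted(assets))

print(f"P(mission tainted) = {1.0 - p_mission_clean:.2f}")
```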

An executable set of models as described could provide a valuable "what if" analysis tool for understanding cyber posture and potential threats while constructing a viable defense plan for mitigating attacks. It can therefore provide a foundation for the deployment of second-generation security automation and orchestration (SAO) tools. AI and machine learning tools, coupled with the mod-sim aspects of an executable architecture such as this, could enrich the automated synthesis of alternative actions and contextual decision-making capabilities, thereby reducing the need for a human in the loop. The result would be intelligent, dynamic systems that understand what is normal, what is not, and the ramifications of both action and inaction – or change and non-change – within and around the cyber terrain.

Overlaying a cost model and/or a value model to such an executable reference model also can help pinpoint where investment may be needed to fill a cyber gap. The value model could capture the asset value [e.g., value of data at a specific time stored on a particular node of the network], workload value [value of a processing stream or computational process], or mission value [value of a set of workloads distributed over time and/or distance, and designed for a specific business purpose]. The cost model would include the cost of a protection or control used to defend the network or recover from an attack. These additional parameters are needed to measure returns on investment.
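A back-of-the-envelope example of how the value and cost overlays combine into a return-on-investment figure (all numbers invented for illustration):

```python
# Hypothetical sketch of overlaying a value and cost model to estimate return
# on a control investment as risk reduction versus cost. All figures are
# illustrative placeholders.
asset_value = 2_000_000        # e.g., value of the data on a critical node
p_loss_without_control = 0.15  # annualized likelihood of a damaging compromise
p_loss_with_control = 0.05     # residual likelihood after deploying the control
control_cost = 120_000         # annual cost to buy, operate, and maintain it

risk_reduction = asset_value * (p_loss_without_control - p_loss_with_control)
roi = (risk_reduction - control_cost) / control_cost

print(f"Expected annual risk reduction: ${risk_reduction:,.0f}")
print(f"ROI on the control: {roi:.0%}")   # here: $200,000 reduction, ~67% ROI
```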

Tools such as ScienceLogic SL1, Scalable Networks' EXata, MagicDraw, and System Architect can help build such an architecture model, enable cyber tool portfolio analysis, identify better allocations of cyber investment, and ultimately help stem the tide of cyber tool glut. It is unfortunate that you may need to invest in yet another tool, but such a tool and set of models could be constructed to solve many different problems. For example, such a high-fidelity model combined with machine learning and artificial intelligence can also help automate the performance assessment, forecasting, and prognostics processes over the system life cycle – identifying issues before they become problems.


What tools are you using to evaluate your cyber investment? Are you considering the application of AI or mod-sim tools to complement your understanding of your cyber posture and risk profile? What unique issues are you facing when it comes to cyber investment? Drop us a comment to let us know your thoughts on this important topic.


 

Next Generation Concepts for Security Orchestration

Workflow automation tools [aka orchestrators] are at the top of many CxOs’ buy and tech refresh lists as a way to drive down operational costs while improving quality of service across a diverse and increasingly complex set of technology resources. There are several reasons why orchestrators are receiving such growing attention for modernizing IT. Orchestration represents a comprehensive and automated approach to defining, building, managing and reporting on intelligent workflows. Orchestration tools work across multiple third-party tools and operational processes within the enterprise, within the cloud, and within hybrid environments. The automation offered by orchestration tools can also drastically reduce processing time, increase speed to capability, and lead to reduced headcount and lower costs of operations.

However, the challenges to deploying orchestration tools are significant. As applications grow in complexity and endpoints become more distributed and mobile, the design of orchestration courses of action and the technical requirements levied on orchestration tools are also becoming much more complicated. Orchestration processes reflect more inter-dependencies as applications are converted to micro-services. More inter-dependency means more complexity in operating orchestrators when it comes to sequencing and timing of tasks, as data is produced and consumed faster across a greater number of tasks and communication paths, using protocols richer in data types and content. For example, mitigation actions that are controlled by an orchestration tool to remediate attack vectors need to be executed in a particular order across the set of disparate infrastructure components and micro-services that comprise the application stack. A more complex application stack means a more complicated remediation course of action (COA). Complexity also increases for orchestrators when it comes to assessing the situation and selecting a course of action. With more application complexity across dynamic virtual appliances come more parameters that affect the selection of a COA. Orchestrators are beginning to incorporate artificial intelligence and machine learning to deal with the complications that arise in the playbook decision-making and selection process.

There are also many different choices and use cases to consider when it comes to orchestration tools. There are cloud orchestrators, cross-cloud orchestrators, network orchestrators, infrastructure orchestrators, identity orchestrators, application orchestrators, API orchestrators, and security orchestrators, to name some. Each of these orchestrators works independently of the others but must also interoperate to keep delivering the benefits of automation. One starts to wonder how all these different tools work together, especially considering: 1) different operating environments such as cloud, hybrid, distributed, mobile, IIoT and IoT; 2) different policy engines and command languages that must be made interoperable; and 3) different third-party vendor tools to be managed, all while managing possible conflicts over directing or changing the same resource.

So join in this three-part journey to review the role of the security orchestrator and where it fits architecturally in today’s modern environments to facilitate active cyber defenses. During this journey we will explore the following questions:

1. What is the role of the orchestrator overall and how does it relate to improving the operation of the enterprise? Where in the overall value chain does the security orchestrator live and how do I ensure interoperability among the different orchestrators in the value chain?

2. What role should Artificial Intelligence (AI) / Machine Learning (ML) / Deep Learning (DL) play in automating security orchestration decision processes?

3. What are the key requirements for security orchestration going forward, especially in environments such as IoT and IIoT/ICS?

The formulations of the answers to these questions are developed against a diverse architectural landscape including enterprise, cloud/SDN, and edge computing (IoT, ICS/IIoT, and mobile).

The Orchestrator Role

The term “orchestration” has been applied to many different products, each time with a nuanced or different twist to the meaning. Wikipedia provides the following definition:

“orchestration is largely the effect of automation or systems deploying elements of control theory.”

It goes on to say that orchestration consists of these functions:

  • Composing of architecture, tools, and processes used by humans to deliver a defined Service.
  • Stitching of software and hardware components together to deliver a defined Service.
  • Connecting and automating of workflows when applicable to deliver a defined Service.

Orchestrators work at different levels of the stack to manage a service chain (aka course of action). For example, in the network domain, a Virtual Network Function Manager (VNFM) manages the operational requirements of an individual virtual network function (VNF). Specifically, the VNFM manages the compute, storage, and networking requirements of the VNF. At a higher level, a [NFV] Service Orchestrator (SO) marshals individual VNFs together into a network service.

Orchestrated services may also cross multiple layers in the network. In the context of multi-layer networking, this could include:

  • Informing the IP/MPLS layer of links that share optical resources so that it can set up L3 services
  • Rerouting optical paths so that the impact of a fiber cut on the IP/MPLS layer is minimized
  • Modifying the IP/MPLS layer topology so that it better fits the current traffic (this is often called “router bypass”)
  • Setting up optical restoration for IP links in a way that is suitable for the IP/MPLS layer (multi-layer restoration)

All these functions improve the efficiency of the overall network by causing the layers to work better together. This saves hardware, improves availability, or simplifies operations, and often, all of the above.

Orchestrators determine the order and timing in which tasks should happen. By “chaining” tasks together, orchestrators impose discipline among coordinating process and technology elements as part of a COA, using APIs to integrate diverse vendor technologies. Orchestration doesn’t allow skipping steps or processes — it does allow the automation at each stage or place in the hierarchy to come together under a single workflow. This means that instead of the workflow stopping at each stage for the task to be done, the orchestrator handles it from start to completion and the end result is a finished task or service that is free of human error and typically done in a much quicker timeframe than achievable with manual intervention and hand-offs.
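A toy sketch may help make the chaining idea concrete. The example below is illustrative only – the task names and functions are invented – but it shows the essential behavior: tasks run in a defined order, no step is skipped, and the workflow halts for human attention if a step fails.

```python
# A toy sketch of task "chaining": an orchestrator executes a course of action
# as an ordered list of tasks, never skipping a step, and halts the workflow
# (for human follow-up) if any step fails. Names are illustrative only.
from typing import Callable

def run_course_of_action(tasks: list[tuple[str, Callable[[], bool]]]) -> bool:
    """Run tasks in order; stop and report on the first failure."""
    for name, task in tasks:
        print(f"running: {name}")
        if not task():
            print(f"halting workflow: '{name}' failed; notifying personnel")
            return False
    print("course of action completed")
    return True

# Example remediation chain (each step would call a real tool's API in practice).
run_course_of_action([
    ("isolate-endpoint",      lambda: True),
    ("block-ioc-at-firewall", lambda: True),
    ("open-tracking-ticket",  lambda: True),
])
```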

Orchestrators may also extend to workflows that cross domains and cover the end-to-end service needs across the entire network and the life cycle of the service. For example, the Life Cycle Service Orchestrator (LSO) is in charge of specifying, provisioning, managing, and tearing down end-to-end services across heterogeneous (physical and virtual) networks. Such a multi-domain service deployment process may begin with a request from dev to ops to create the virtual machines, then move to networking to assign IP addresses, and finally to operations to place the service in the proper alerting and management groups. All of these tasks are IT-related but involve different departments or domains. Once the Life Cycle Service Orchestrator determines what service to set up in each domain, it talks to each of the domain controllers that actually run the service. Typically, the LSO is also expected to manage the services it has helped set up and to eventually take them down.

An orchestration platform is also useful for verifying which pieces of an application are still running and for restarting failed components in the correct order required by the application; this goes beyond automated task performance to examining the entire stack and selectively performing automated tasks. In such cases, the orchestrator may coordinate installation, disable alerts during the reboot cycle, verify the update, re-establish the alerts and, finally, halt the workflow and notify personnel in the event of an update gone wrong.

In summary, orchestrators are complex control systems that arrange and automate sequences of actions involving multiple technology elements to deliver a service.

The Orchestrator Value Chain

As previously mentioned, there are many types of orchestrators, each designed for a specific role within an environment, within a domain, within the stack, and within or across the life cycle of a service. In many cases, orchestrators are called upon to work with another orchestrator tool to satisfy some workflow automation need, and thereby add value to the service delivery life cycle, whether that value is manifested as speed-to-capability, elasticity, efficiency, security, safety or some other quality.

The TM Forum describes this modular control scheme as the Management-Control Continuum (MCC), consisting of interacting chains of cooperating functions that manage the delivery of services. The TM Forum envisions the MCC as part of its Future Mode of Operation for managing the next generation of agile, micro-services-based, networked applications. The continuum of control functions encompasses the orchestration tools of the physical and virtual networks. Each orchestrator decomposes and automates the services, resources, and tasks in its respective domain cooperatively. The MCC concept is also congruent with the command and control approaches applied in operational technology (OT) environments, such as the Purdue Model for Control Hierarchy, a model used by many Industrial Control Systems (ICS) and Industrial Internet of Things (IIoT) environments.

Service Provider Orchestrators

The need to interoperate across multiple orchestrators in the delivery of a service is most prevalent for cloud and telecom service providers, as their services tend to be increasingly personalized based on multiple parameters – user policies, application and content, network conditions, and more – and therefore involve interconnecting orchestration processes running across heterogeneous systems in multiple locations. As services become more personalized and adaptable, the workflows become exponentially more complex. Also, orchestration in these environments is becoming even more dynamic as software-defined networking and micro-services take hold to deliver agile services.

The following list presents some of the orchestrators that may be involved in a cloud or telecom services provider environment. These orchestrators are strewn across many technology layers and specialized domains like cloud, metro, access, and core networks.

1. Business Support System (BSS) – Automates workflows related to business services and delivery of customer-oriented processes, such as customer registration, service order creation and order management, managing customer data, processing bills and collecting payments. Interfaces with the OSS.

2. Operations Support System (OSS) – Collection of orchestrated services that automate workflows related to managing and controlling the physical infrastructure elements of the networks. Typically OSS covers the following functions – network management, service delivery, service fulfillment (including network inventory, activation and provisioning), service assurance / alerting, security, and customer care. Provides interfaces with the BSS and to lower-level Element Management Systems (EMS).

3. Life Cycle Service Orchestrator (LSO) – Assembles complex, multi-faceted IT service requests into automated, repeatable loops. Sits atop each domain and orchestrates services from end to end. It interfaces with element management systems, cloud management platforms, the NFV orchestrator, and the data center SDN controller and understands available resources, both physical and virtual, that need to be managed and used to deliver a service. Serves data up to the BSS/OSS.  Also referred to as a Multi-Domain Service Orchestrator (MDSO).

4. Service Orchestrator (SO) – Marshals individual functions together into a course of action or service. Manages catalogs of customer facing services and resource facing services and does CFS to RFS mapping, RFS service activation, and life cycle management of the services. A special type of SO, called a Network Function Virtualization Orchestrator (NFVO) manages virtual network services. It is responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages; NS lifecycle management; global resource management; validation and authorization of network functions virtualization infrastructure (NFVI) resource requests. It interfaces with the Virtual Network Function Manager (VNFM) for VNF deployment, and to the data center SDN controller to establish VNF forwarding graphs and service chain paths (i.e., topology management) of the network services instances.

Note: Two competing open source projects, the OPEN-Orchestrator (OPEN-O) Project and Open Source MANO (OSM), were established to promote open source NFV orchestration.

5. Virtual Network Function Manager (VNFM) – Manages the operational requirements of an individual virtual network function (VNF). The VNFM works in concert with other orchestrators, such as the Virtualized Infrastructure Manager (VIM) and the NFVO to help standardize the functions of virtual networking; and, interfaces with EMS to increase the interoperability of software-defined networking elements. Manages VNF inventory, link descriptors, and performs VNF lifecycle management, VNF monitoring, healing and root-cause analysis. VNFMs are critical for scaling, changing operations, adding new resources, and communicating the states of VNFs to other managers. Sometimes also referred to as a Domain Orchestrator (DO).

6. Virtualized Infrastructure Manager (VIM) – Keeps an inventory of the allocation of virtual resources to physical resources. It manages a repository of hardware resources (compute, storage, and networking) and software resources (hypervisors), along with the discovery of the capabilities and features to optimize the use of such resources. This allows the VIM to orchestrate the allocation, upgrade, release, and reclamation of compute, network and storage infrastructure resources and optimize their use. From a network perspective, it supports the management of VNF forwarding graphs by organizing virtual links, networks, subnets, and ports. The VIM also manages security group policies to ensure access control. The VIM performs other functions as well – such as collecting performance and fault information via notifications; managing software images (add, delete, update, query, copy) as requested by other managers and orchestrators; and managing catalogs of virtualized resources that can be consumed by services.

7. Data center SDN controller – A network operating system that provides data center-wide management of software-defined networking (SDN) services, by providing a global view of the entire network – link state and topology. Traffic flow is managed using a software controller which allows dynamic optimization and tuning not available in locally managed switches and routers. Primary functions include establishment and management of virtual networks (VLAN, VXLAN…), service chains, floating IP addresses, creation of forwarding policies, and network access control.

8. WAN controller (classical or SDN) – Automates and controls processes related to WAN virtual and/or physical network service instantiation and control (MPLS, IP, optical), multi-layer path computation and resource optimization.

9. Element management system (EMS) – Automates and manages workflows for the commissioning, configuration, and monitoring of specific network elements in the underlying physical network infrastructure including fault, configuration, accounting, performance, security (FCAPS). To support management of the traffic between itself and other network elements, the EMS communicates upward to a higher-level network management system (NMS) that is part of the collection of OSS services.

In general, most of the orchestrators listed above may be considered “infrastructure orchestrators,” with a special focus or relevance to virtualized network and cloud environments.

Application Orchestrators

There are also orchestrators that function at the application layer. Generally, application orchestrators work by decoupling integration logic from the applications to perform event processing. Application orchestrators manage tasks in a message-driven container that provides security, message routing, transaction management, reliability patterns, transformation, and crucial monitoring and tracking capabilities. Some examples of application orchestrators include service brokers, API managers, enterprise service buses, pub/sub messaging systems, and event-driven processors. These orchestrators provide the ability to route a session through various applications, with policy configured to determine which applications the orchestration engine invokes, the order in which it invokes them, and the conditions for invoking them. Each application executes a certain piece of business logic.

One example of an application orchestrator is the Conductor tool used by Netflix and open sourced under the Apache License 2.0 on GitHub. The Conductor tool is the "front door" to the Netflix ecosystem of micro-services. As requests come in from devices, the Conductor provides the logic for composing calls to all of the services that are required to construct a response. It gathers whatever information it needs from the back-end services, in whatever order needed, formats and filters the data as necessary, and returns the response. So, at its core, the Netflix API is an orchestration service that exposes coarse-grained APIs by composing fine-grained functionality provided by the micro-services.

According to Netflix's developer documentation on Conductor, the main characteristics of this orchestrator are:

  • The ability to create complex workflows.
  • The tasks are executed by micro-services.
  • The workflow blueprint is written in a JSON Domain Specific Language (DSL) – an illustrative sketch follows this list.
  • The engine provides visibility and traceability into process execution.
  • The ability to pause, resume, restart, and stop tasks.
  • The task execution is generally asynchronous but can be forced to run synchronously.
  • The ability to scale to millions of process flows.
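To give a feel for the blueprint format, here is an approximation of what a small two-task workflow definition might look like, written as a Python dict for consistency with the other sketches in this post; the field names follow Conductor's documented JSON DSL as best I can recall them and should be checked against the project's documentation.

```python
# Approximate shape of a Conductor-style workflow blueprint, written here as a
# Python dict. Treat this as an illustration, not an authoritative schema.
workflow_blueprint = {
    "name": "encode_and_publish_video",
    "version": 1,
    "tasks": [
        {
            "name": "encode_video",
            "taskReferenceName": "encode",
            "type": "SIMPLE",                       # executed by a micro-service worker
            "inputParameters": {"fileLocation": "${workflow.input.fileLocation}"},
        },
        {
            "name": "publish_video",
            "taskReferenceName": "publish",
            "type": "SIMPLE",
            "inputParameters": {"encodedUrl": "${encode.output.url}"},
        },
    ],
}
```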

Netflix decided to create an orchestration engine rather than a choreography engine because the latter did not scale well in their tests. An additional deciding factor was that there was almost no way to systematically answer "how much are we done with process X?"

Another class of application orchestration features configuration management software that is used to model the desired state of an application and its supporting infrastructure, and then enforce the correct order of configuring the nodes that host application components. One example of this application orchestrator class is Puppet. The Puppet application orchestrator provides an interface called a capability to share configuration information among unrelated nodes. Puppet then specifies the order of the configurations and shares information between nodes (for example, that services are stopped or started) to ensure each component of the stack is created in the correct order.

Another class of application orchestration is API orchestration, which is defined as “an abstraction layer that takes generically-modeled data elements and/or features and prepares them in a more specific way for a targeted application.” An API orchestrator differs from traditional, formalized APIs, which define interactions in a very fixed and rigid way. An API orchestrator enables much more flexibility, providing translation layers that use a commonly agreed underlying information model to pass information between unrelated modules. Often, an API orchestrator is deployed as a gateway to back-end micro-services to optimize communication between clients and the application. It can also encapsulate the details of the micro-services, which enables the micro-services to evolve without impacting the clients. For example, two micro-services might be merged, or another micro-service might be partitioned into two or more services; only the API orchestrator / gateway needs to be updated to reflect these changes, and the clients are unaffected. As reflected in the following figure, the API orchestrator uses pre-programmed logic to construct one or more calls to other services to get or pass information. It can then enforce different policies for each transport, style, or message, as well as implement global policies that apply to the entire API.
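
The following Python sketch illustrates the gateway pattern described above: one coarse-grained call composed from several fine-grained micro-service calls. The service names, fields, and stub functions are hypothetical stand-ins for real back-end HTTP calls.

# Sketch of an API orchestrator / gateway composing fine-grained micro-service calls
# into one coarse-grained response (names and fields are hypothetical; the stub
# functions stand in for real HTTP calls to back-end micro-services).

def fetch_customer(customer_id):          # stand-in for GET /customers/{id}
    return {"id": customer_id, "name": "Alice"}

def fetch_orders(customer_id):            # stand-in for GET /orders?customer={id}
    return [{"order": 1001, "status": "shipped"}]

def fetch_recommendations(customer_id):   # stand-in for GET /recommendations/{id}
    return ["widget-7", "widget-9"]

def customer_dashboard(customer_id):
    """Coarse-grained endpoint exposed to clients; composes three back-end services,
    then filters and formats the result so clients see one stable contract."""
    customer = fetch_customer(customer_id)
    return {
        "name": customer["name"],
        "openOrders": [o for o in fetch_orders(customer_id) if o["status"] != "delivered"],
        "recommended": fetch_recommendations(customer_id)[:2],
    }

print(customer_dashboard("c-42"))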

API orchestration is aided by the development of API standards. One such effort is led by the Open API Initiative (OAI), which is developing an interoperability specification. The goal of the OAI specification is to define a standard, language-agnostic, vendor-neutral interface to REST APIs that allows both humans and computers to discover and understand the capabilities of a service without access to source code or documentation, and without inspecting network traffic. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. SmartBear Software donated the Swagger Specification to the OAI as the basis of this open specification.
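
As a rough illustration of the kind of machine-readable description the OAI specification enables, here is a minimal OpenAPI-style document expressed as a Python dictionary. The API title and path are invented for the example.

# Minimal sketch of an OpenAPI (Swagger-derived) description, expressed as a Python
# dict that mirrors the YAML/JSON document (title and paths are illustrative).

openapi_doc = {
    "openapi": "3.0.0",
    "info": {"title": "Security Events API", "version": "1.0.0"},
    "paths": {
        "/events": {
            "get": {
                "summary": "List recent security events",
                "responses": {
                    "200": {"description": "A JSON array of event objects"}
                },
            }
        }
    },
}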

An example of an uber application orchestrator that combines several application orchestrator classes is the Itential Pronghorn Application Platform. Pronghorn is a multi-tier platform focused on improving network operations. The platform consumes APIs of southbound systems and presents these capabilities to the Pronghorn application layer to enable unified workflow and management capabilities across multiple southbound systems and networks. Pronghorn provides an extensible platform for management of multi-vendor/multi-technology environments. The following figure highlights the basic architecture of the Pronghorn Application Platform.

Key capabilities include:

  • The Adapter and Broker layers provide integration with southbound entities, as well as supplying important core functionality.
  • The Cog layer houses business logic that is reusable across applications.
  • The Application layer provides user interfaces and additional business logic not provided by Cogs.

Pronghorn integration takes place via REST APIs that reside just above the Cog layer. These APIs expose the full functionality contained in the Cog layer to both Pronghorn and third-party applications. Pronghorn APIs are built using Open API Initiative concepts to provide an open and easy-to-use platform. This openness enables simplified customization and app development via an SDK.

Security Orchestrators

Forrester defines security orchestrators as:

“Technology products that provide automated, coordinated, and policy-based action of security processes across multiple technologies, making security operations faster, less error-prone, and more efficient.”

Key functions of a security orchestrator include:

  1. COA decision-making and selection
  2. Alert management
  3. Case management
  4. Playbook management
  5. Automation editor / visual COA editor
  6. Rules engine or AI engine
  7. Application integration framework
  8. Metrics and reporting.

Time-draining security tasks that generally can be automated and orchestrated include:

  • Incident investigation involving log gathering and analysis
  • Review and analysis of threat intelligence sources
  • Vulnerability scanning and patching
  • Ticket updating, report creation, and email alerts (e.g., automatically logging into multiple systems and entering incident information)
  • Taking corrective actions (e.g., implementing security controls, updating a block list, updating an IDS rule, disabling a user account, etc.); a minimal playbook sketch follows this list.
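
The following Python sketch shows how such a playbook might be automated end to end. Every function, threshold, and field name is hypothetical; a real orchestrator would invoke the REST APIs of the SIEM, threat-intelligence, ticketing, and firewall products involved.

# Sketch of an automated enrichment-and-response playbook (every function, feed, and
# field below is hypothetical; stand-ins for real product integrations).

def gather_logs(alert):
    # Stand-in for querying the SIEM for events related to the alert's source IP.
    return [{"src": alert["src_ip"], "action": "login_failed"}]

def check_threat_intel(ip):
    # Stand-in for a reputation lookup against one or more threat-intel feeds.
    return {"ip": ip, "malicious": True, "confidence": 0.9}

def update_blocklist(ip):
    print(f"[firewall] added {ip} to block list")

def open_ticket(alert, evidence):
    print(f"[ticketing] opened incident for {alert['src_ip']} with {len(evidence)} artifacts")

def run_playbook(alert):
    evidence = gather_logs(alert)                   # incident investigation
    intel = check_threat_intel(alert["src_ip"])     # threat intelligence review
    if intel["malicious"] and intel["confidence"] > 0.8:
        update_blocklist(alert["src_ip"])           # corrective action
    open_ticket(alert, evidence)                    # ticket update / reporting

run_playbook({"id": "alert-123", "src_ip": "203.0.113.7"})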

Security orchestrators vary in their approaches to playbook composition (e.g., community-based vs. proprietary) and in how they integrate their orchestration capabilities with other point tools and applications (e.g., from open architectures using published REST APIs to closed, proprietary message fabrics). Their sophistication in automated decision-making also varies widely, from primarily human-in-the-loop approaches to fully scripted, risk-based decision-making with human review only in the most complex situations.

There are also several specialized security orchestrators. For example, an Identity-Defined Networking (IDN) orchestrator enables customers to rapidly build cloaked and encrypted networks. With IDN, devices, endpoints, and networks are assigned a unique cryptographic identity and can only communicate with explicitly white-listed systems or endpoints. Centralized orchestration allows policies to be set that dynamically manage the cryptographic identities with no changes to IP configuration. The IDN orchestrator ensures all IDN services are up to date and synchronized, and collects metrics and active state information from the service endpoints within the IDN fabric. It also provides the secure API that enables integration and automation with other security and networking services such as directory services, SIEMs, and monitoring tools for data center or resource fail-over. The underlying functionality is enabled by the Host Identity Protocol (HIP), a collection of IETF standards that separates the end-point identifier and locator roles of IP addresses and introduces a Host Identity (HI) name space based on a public key security infrastructure.

Other specialized security orchestrators / controllers include Software-Defined Perimeter (SDP) controllers, Network Access Control (NAC) systems, risk managers, and deception and honeynet orchestrators.

Interoperability Across Orchestrators

It is easy to imagine ongoing interactions among different security orchestrators, as well as interactions with other infrastructure orchestrators, application orchestrators, and various controllers, as part of the end-to-end automation of a Course of Action (COA) or playbook. These interactions may be managed in several ways, such as:

  • Using an uber orchestrator of orchestrators
  • Via the TM Forum’s modular MCC-based orchestrator interoperation
  • By a standardized message fabric and standard C2 command language.

The right approach depends heavily on the characteristics of the target environment and the hierarchy of control. However, the orchestration interactions involved in a highly distributed environment such as IoT cannot practically be fielded without decentralized control, likely with a high degree of autonomy, and agile communications.

A monolithic uber-orchestrator, like those often used for traditional telecom and cloud fulfillment systems, maintains a full model of all resources in the network and understands all domain-specific languages, which generally reduces the number of controllers and orchestrators. However, this approach also leads to scaling issues. Instead, a hierarchy of domain-focused orchestrators, where each orchestrator models its own domain and uses abstracted views of the other domains it interacts with, leads to a more robust and scalable network and allows network operators to choose the best solution for each domain.
Nokia’s white paper – Scalable Orchestration with a Hierarchy of Domain-focused Orchestrators – highlights the information flow across multiple infrastructure orchestrators for a complex COA. I have superimposed on this flow a notional concept of operations describing a set of security orchestrators and their interactions with other infrastructure orchestrators as shown in the figure below.

1. A mission monitor system tracks parameters (physics and network-based) received from a water management system to identify the risk to critical water management applications. A history of what is normal is maintained by the system for each critical parameter so that anomalous events may be detected. Events are classified and assigned a risk score.

2. Once a certain risk score threshold is exceeded, an alert is issued by the mission monitor system to the OSS to trigger appropriate responses. Abnormal and possibly malicious behavior that was detected by a network security sensor triggers an alert to the OSS.

3. The OSS sets up tracking for the alert, classifies the alert, and dispatches workflows to handle the event. It sends the alert to a Security Orchestrator.

4. A Security Orchestrator (SO) is a workflow system included as part of the OSS. Based on the risk profile provided, the SO classifies the risk condition indicated by the alert and selects a course of action (COA) to respond to it. The SO sends the COA tasking model to the Service Orchestrator, which breaks the COA model down into the resource-facing services required to implement the tasks contained in the COA tasking model (a notional sketch of such a tasking model appears after this flow).

5. The first task is to enrich the situational context around the alert. The Service Orchestrator dispatches a task to investigate threat data from internal and external threat intel feeds and to correlate the event to other behavior captured by logs and SIEM.

6. Results from the enrichment tasks are sent to the SO, which updates the COA model accordingly.

7. The SO issues a new COA tasking model to the Service Orchestrator based on the enriched alert.

8. The updated COA model is broken down by the Service Orchestrator. Tasks are dispatched by the Service Orchestrator to update security configurations and policies to the respective EMS.

9. The Service Orchestrator also implements the COA by tasking the NFVO for different virtual network services: map / geo-locate the event against the end-to-end network topology, identify the risk to connected assets, retrieve artifacts from affected hosts, isolate and cloak affected network services segments, update policy files in virtual network security functions (vIDS) and block lists (vFW, vIPS, etc.), deploy new overlay networks to the water management system along with required virtual security functions.

10. The NFVO uses the COA-specific security parameters to pick the required controls from a catalog of Security Service Functions maintained by the Open Security Controller (OSC) to create the new logical security services. The requested network services must have been on-boarded to the NFVO’s catalog of network services along with their network service descriptors (NSDs). An NFV network service is composed of virtual network functions (VNFs) and connected physical network functions (PNFs), a VNF forwarding graph describing the connectivity between the VNFs/PNFs, and one or more forwarding paths. The forwarding graph is made up of a series of connection points and virtual links. The NFVO executes the NSD to create the requested services and service policy updates.

11. The OSC coordinates the update of security policies between the EMS and the virtual security function managers. The OSC acts as an abstraction layer for the security function managers, alleviating the need for integration into each SDN controller or virtualization infrastructure manager. It is important to understand that the OSC does not manage the virtualized security functions or their security policies; these are still handled by the traditional element or security function manager for each function.

12. The NFVO instructs the data center SDN controller to create the new connectivity between the to-be-created VNFs based on the VNF forwarding graph.

13. The OSC works in conjunction with the SDN controller, depending on it for traffic steering, for service function chaining for security service insertion, and for ensuring that traffic is steered to the right security enforcement capability based on security policies.

14. The data center SDN controller implements the specified forwarding policies with service chaining.

15. For each of the required VNFs that are part of the COA tasking, the NFVO requests a VNFM to deploy it. The NFVO checks if the required VNFMs are running and, if not, it instantiates an appropriate VNFM. The VNFM deploys a VNF as one or more virtual deployment units (VDUs) that typically run on virtual machines. Different VNFs may require different flavors of VDU depending on the ratio of required memory, compute, storage, and I/O performance. The NFVO resource orchestrator makes a placement decision based on the requested resources and its policies, and provides the VNFM with instructions on where and how to deploy the VNF.

16. The OSC coordinates the insertion of security policies using metadata like policy tags and dynamic OSC security group membership for the new VNFs across the VIM environment.

17. Any VNF that consists of more than one VDU needs internal connectivity between these VDUs. The VNFM creates this connectivity either through the VIM or through the appropriate data center SDN controller.

18. Based on the placement decision, the VNFM knows the data center where to deploy the VNF and the virtual infrastructure manager (VIM) responsible for the needed resources. Based on that knowledge, the VNFM requests the VIM to create a virtual machine for each VDU.

19. The HIP controller segments and cloaks the affected network segment and sets up new network connections from the water management premises to the data center as an overlay VPN based on instructions received from the SO and OSC.

20. To complete activation of the COA, the OSC or, if needed, the Service Orchestrator configures the physical network functions and VNFs through their relevant management systems with COA-specific parameters, such as security access codes, firewall rules, signatures for IPS, etc.
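
As a notional illustration of steps 4 and 7 above, the following Python sketch shows what a COA tasking model handed from the Security Orchestrator to the Service Orchestrator might look like. The structure and field names are invented for the example and do not come from any standard.

# Notional sketch of a COA tasking model passed from the Security Orchestrator to the
# Service Orchestrator (structure and field names are illustrative, not from a standard).

coa_tasking = {
    "coa_id": "COA-2041",
    "alert_ref": "alert-123",
    "risk_score": 87,
    "tasks": [
        {"type": "enrich",        "sources": ["threat_intel", "siem_logs"]},
        {"type": "isolate",       "target": "water-mgmt-segment-3"},
        {"type": "deploy_vnf",    "function": "vIDS", "policy": "ot-signatures-v2"},
        {"type": "update_policy", "element": "vFW-edge-1", "blocklist_add": ["203.0.113.7"]},
    ],
}

# The Service Orchestrator would walk this model and dispatch each task to the
# appropriate subordinate orchestrator or controller (NFVO, OSC, SDN controller, EMS).
for task in coa_tasking["tasks"]:
    print("dispatching:", task["type"])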

The NFV security orchestrator shown in the figure is based on the features and functions of the Open Security Controller (OSC). OSC is a GitHub project, originally initiated by Intel Security, that is designed to address SDN / NFV security challenges by playing the role of a centralized security services orchestration tool across multiple cloud environments. As shown in the figure, it does so using a modular architecture to interface with:

  • Multi-vendor virtualized security function managers (Security Manager Plugins)
  • Multi-vendor SDN controllers (SDN Controller Plugins)
  • Virtualization infrastructure Managers (Virtualization Connectors).

Conceptually, OSC fits in the data center as a security service orchestration solution to automate deployments and coordinate policy-based insertion of virtualized security functions (vNGFW, vIPS, vWAF, vADC, etc.) and SDN-based Service Function Chains (SFC). OSC allows a tenant to pick the required controls from a catalog of Security Service Functions and create a logical service defining how each control will be deployed across the multiple virtualization environments. OSC abstracts the lower-level infrastructure for the security function managers, alleviating the need for them to do point-to-point integration with each virtualization environment. OSC achieves this using Virtualization Connectors and SDN Controller Plugins via its southbound APIs. By doing so, OSC allows the security administrator to define and validate consistent and granular security controls across these environments.

OSC enables fully automated provisioning, de-provisioning, distribution, and delivery of security inside the perimeter of the network for virtualized security functions such as virtual Next Gen Firewalls (vNGFW), virtualized IPS (vIPS), virtualized Web App Firewalls (vWAF), and virtualized Application Delivery Controllers (vADC) from multiple vendors. OSC coordinates policies (using metadata such as policy tags and dynamic OSC security group membership) across multiple virtualization and SDN environments. OSC enables policies to be defined and enforced based on application workload requirements as opposed to traditional perimeter-based security in virtualized data centers. OSC’s role in ETSI MANO (Management and Network Orchestration) is that of the NFV Security Orchestrator (refer to the SEC013 normative specification developed by the SEC working group within ETSI NFV). The core functionalities defined for the NFV Security Orchestrator are as follows:

  • Orchestrates system wide security policies within the NFVI (NFV Infrastructure).
  • Acts as a trusted 3rd party that resides independently.
  • Manages NFV SSAs (NFV Security Services Agent) to keep them in a consistent state according to the specified policy for security functions that can be orchestrated and deployed at system start-up or dynamically.
  • Facilitates secure bootstrapping of SSAs.
  • Secures the pairing of VNFMs and EMSs, policy management, and integrity assertion.
  • Monitors SSAs for failure and remediation.

A standard messaging fabric may provide an underlying foundation to the Service Orchestrator or Security Orchestrator in support of the domain-based, modular interaction as described in the flow above. Currently, the Integrated Adaptive Cyber Defense (IACD) effort under Johns Hopkins Applied Physics Lab (JHUAPL) is evaluating messaging standards for orchestration framework specifications as part of the Secure and Resilient Cyber Ecosystem (SRCE) Focus Group. The mission of SRCE is to promote enhanced interoperability, automation, trust, and machine-speed information sharing across cybersecurity products and services. As part of this mission, the SRCE is defining specifications for a message fabric to support orchestration. Key specification goals include:

  • A set of commonly understood application interfaces/descriptors usable by any tool or information source to plug-in to the fabric [Technical Interoperability]
  • A standardized message set that establishes the contextual constructs and data formats to understand the message in the intended way [Semantic Interoperability]
  • A consistently defined set of message services (supporting control, configuration, publish/subscribe, etc.)
  • A set of configurable trust and access services that enables secure communications with confidentiality, integrity, and availability
  • A set of transport protocols such as the Advanced Message Queuing Protocol (AMQP) from OASIS and the Data Distribution Service (DDS) from the Object Management Group

Relying on common data representations and a message bus facilitates integration of components from independent vendors while providing backward compatibility of message formats.
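
As a simple illustration of publishing a standardized message onto such a fabric, the following Python sketch uses AMQP via the pika client library. It assumes a RabbitMQ broker reachable on localhost, and the message envelope fields are illustrative rather than drawn from any published SRCE specification.

# Sketch of publishing a standardized alert message onto an AMQP fabric
# (assumes a RabbitMQ broker on localhost and the pika client library; the
# envelope fields are illustrative, not from any published specification).
import json
import pika

message = {
    "msg_type": "alert",                  # semantic interoperability: agreed message set
    "producer": "network-ids-07",
    "payload": {"src_ip": "203.0.113.7", "signature": "SQLi attempt"},
}

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="security.alerts", durable=True)   # shared, named destination
channel.basic_publish(
    exchange="",                          # default exchange routes by queue name
    routing_key="security.alerts",
    body=json.dumps(message),
)
connection.close()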

Meeting the SRCE specification goals using a reactive architecture can also provide additional orchestration service capabilities. A reactive architecture enables faster end-to-end processing because playbook services can be executed in parallel and asynchronously. For example, instead of assuming that process A will be called by process B after an event occurs, you could program process B to write data to a message queue indicating that the event has occurred and including all relevant data; any interested process, such as process A, then consumes the message from the queue. Control is also easily distributed in a reactive architecture, so there is no longer a single orchestrator serving as a central point of failure. A decoupled architecture enabled through a reactive architecture that uses message queues is an improvement over custom scripting for complex processes. It also works well when you need to scale certain parts of a workflow but not others: if there are more messages in a queue than a single instance can handle in the time allowed, additional instances can be brought online, with no need to change code or alter the system architecture. Event Sourcing is another pattern facilitated by a reactive architecture, in which an Event Stream stores all of the events and enables event replay. This way, if an orchestrator service goes down while events are still being produced, it can replay those events when it comes back online to catch back up.
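
The following minimal Python sketch illustrates the event-sourcing idea described above: events are appended to a stream, and a consumer that was offline replays from its last position when it comes back. The in-memory list and service class are purely illustrative; a real fabric would persist the stream.

# Minimal sketch of event sourcing: an append-only event stream plus replay
# (in-memory list used purely for illustration).

event_stream = []          # append-only log of events

def publish(event):
    event_stream.append(event)

class OrchestratorService:
    def __init__(self):
        self.position = 0   # offset of the last event this service has processed
        self.blocked = set()

    def handle(self, event):
        if event["type"] == "malicious_ip":
            self.blocked.add(event["ip"])

    def catch_up(self):
        # Replay any events that arrived while the service was down.
        for event in event_stream[self.position:]:
            self.handle(event)
        self.position = len(event_stream)

publish({"type": "malicious_ip", "ip": "203.0.113.7"})
publish({"type": "malicious_ip", "ip": "198.51.100.9"})   # produced while the service is "down"

svc = OrchestratorService()
svc.catch_up()
print(svc.blocked)   # both IPs recovered by replaying the stream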

Rapid and accurate decision-making for orchestration will require more than message interoperability. Orchestrating and informing the mosaic of micro-services that provide security protections will also require semantic interoperability through the use of domain ontologies, domain-specific languages, and ways to link or relate ontologies. Toward this goal, interoperability is also being facilitated by another effort created under the auspices of the JHUAPL IACD project that is focused on the development of a security-domain-specific orchestration language known as OpenC2. This language is now being developed and maintained through an OASIS working group to enable the coordination and execution of command and control of cyber defense components.
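
As a flavor of what an OpenC2 command looks like, here is a sketch of a “deny” command expressed as a Python dictionary that mirrors the JSON serialization. The actuator and argument values are illustrative; consult the OASIS OpenC2 Language Specification for the normative structure.

# Sketch of an OpenC2 "deny" command (values illustrative; see the OASIS OpenC2
# Language Specification for the normative structure).
import json

openc2_command = {
    "action": "deny",
    "target": {"ipv4_net": "203.0.113.7/32"},
    "args": {"response_requested": "complete"},
    "actuator": {"slpf": {"asset_id": "fw-edge-1"}},   # hypothetical packet-filter actuator
}

print(json.dumps(openc2_command))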

Publish/subscribe messaging systems are also widely used to achieve a synchronized global network view among diverse service producers and consumers. For example, HyperFlow, a distributed event-based control plane for OpenFlow, allows a synchronized, consistent network-wide view to be shared among multiple orchestrators / controllers. HyperFlow uses a publishing mechanism to maintain a consistent global view across controllers: whenever a system status change is detected, each controller selectively publishes an event about the change through a publish/subscribe system, and the new status is then pushed to subscribed controllers for immediate updating. Another possible approach to linking different domains is through a clearinghouse capability. For example, the Interface for Metadata Access Points (IF-MAP) – an open specification developed by the Trusted Computing Group (TCG) – provides a publish / subscribe interface and clearinghouse for information about security events and objects on the network, as shown in the adjoining figure. It can support high volumes of real-time data writes, unstructured relationships, diverse consumers with an interest in changes to the current state of a device as they occur, and distributed data producers and consumers. The Contrail SDN controller leverages IF-MAP to share the state of the virtual network functions that it controls.

These technology examples provide just a hint of what is currently available to enable interoperability among orchestrators at the messaging and informational layers. Tibco, Informatica, ActiveMQ, RabbitMQ, WebSphere MQ, and RTS Data Distribution Service are further examples of tools that provide interoperability at the messaging layer.


This concludes the first part of this three-part journey into the realm of orchestration, with a special emphasis on security automation and orchestration. The next stop is to evaluate the role of artificial intelligence as a decision-making element for the next generation of security orchestrators. We will review a couple of security orchestrators that are applying such technology, so stay tuned for Part 2.

And thanks for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, PQ cryptography, risk assessment and modeling, or other security topics. Also, email marketing@activecyber.net if you’re interested in interviewing or advertising with us at ActiveCyber.