Next Generation Concepts for Security Orchestration
Workflow automation tools (aka orchestrators) are at the top of many CxOs’ buy and tech refresh lists as a way to drive down operational costs while improving quality of service across a diverse and increasingly complex set of technology resources. There are several reasons why orchestrators are receiving such growing attention for modernizing IT. Orchestration represents a comprehensive and automated approach to defining, building, managing and reporting on intelligent workflows. Orchestration tools work across multiple third-party tools and operational processes within the enterprise, within the cloud, and within hybrid environments. The automation offered by orchestration tools can also drastically reduce processing time, increase speed to capability, and lead to reduced headcount and lower costs of operations.
However, the challenges to deploying orchestration tools are significant. As applications grow in complexity and endpoints become more distributed and mobile, the design of orchestration courses of action and the technical requirements levied on orchestration tools are also becoming much more complicated. Orchestration processes reflect more inter-dependencies as applications are converted to micro-services. More inter-dependency means more complexity for orchestrators in the sequencing and timing of tasks, as data is produced and consumed faster across a greater number of tasks and communication paths, using protocols richer in data types and content. For example, mitigation actions that are controlled by an orchestration tool to remediate attack vectors need to be executed in a particular order across the set of disparate infrastructure components and micro-services that comprise the application stack. A more complex application stack means a more complicated remediation course of action (COA). Complexity also increases for orchestrators when it comes to assessing the situation and selecting a course of action. With more application complexity across dynamic virtual appliances come more parameters that affect the selection of a COA. Orchestrators are beginning to incorporate artificial intelligence and machine learning to deal with the complications that arise in the playbook decision-making and selection process.
There are also many different choices and use cases to consider when it comes to orchestration tools. There are cloud orchestrators, cross-cloud orchestrators, network orchestrators, infrastructure orchestrators, identity orchestrators, application orchestrators, API orchestrators, and security orchestrators, to name a few. Each of these orchestrators works independently of the others but must also interoperate to sustain the benefits of automation. One starts to wonder how all these different tools work together, especially considering 1) different operating environments such as cloud, hybrid, distributed, mobile, IIoT and IoT; 2) different policy engines and command languages to be made interoperable; and 3) different third-party vendor tools to be managed, all while managing possible conflicts in directing or changing the same resource.
So join in this three-part journey to review the role of the security orchestrator and where it fits architecturally in today’s modern environments to facilitate active cyber defenses. During this journey we will explore the following questions:
1. What is the role of the orchestrator overall and how does it relate to improving the operation of the enterprise? Where in the overall value chain does the security orchestrator live and how do I ensure interoperability among the different orchestrators in the value chain?
2. What role should Artificial Intelligence (AI) / Machine Learning (ML) / Deep Learning (DL) play in automating security orchestration decision processes?
3. What are the key requirements for security orchestration going forward, especially in environments such as IoT and IIoT/ICS?
The formulations of the answers to these questions are developed against a diverse architectural landscape including enterprise, cloud/SDN, and edge computing (IoT, ICS/IIoT, and mobile).
The Orchestrator Role
The term “orchestration” has been applied to many different products, each time with a nuanced or different twist to the meaning. Wikipedia provides the following definition:
“orchestration is largely the effect of automation or systems deploying elements of control theory.”
It goes on to say that orchestration consists of these functions:
- Composing of architecture, tools, and processes used by humans to deliver a defined Service.
- Stitching of software and hardware components together to deliver a defined Service.
- Connecting and automating of workflows when applicable to deliver a defined Service.
Orchestrators work at different levels of the stack to manage a service chain (aka course of action). For example, in the network domain, a Virtual Network Function Manager (VNFM) manages the operational requirements of an individual virtual network function (VNF). Specifically, the VNFM manages the compute, storage, and networking requirements of the VNF. At a higher level, a [NFV] Service Orchestrator (SO) marshals individual VNFs together into a network service.
Orchestrated services may also cross multiple layers in the network. In the context of multi-layer networking, this could include:
- Informing the IP/MPLS layer of links that share optical resources so that it can set up L3 services
- Rerouting optical paths so that the impact of a fiber cut on the IP/MPLS layer is minimized
- Modifying the IP/MPLS layer topology so that it better fits the current traffic (this is often called “router bypass”)
- Setting up optical restoration for IP links in a way that is suitable for the IP/MPLS layer (multi-layer restoration)
All these functions improve the efficiency of the overall network by causing the layers to work better together. This saves hardware, improves availability, or simplifies operations, and often, all of the above.
Orchestrators determine the order and timing in which tasks should happen. By “chaining” tasks together, orchestrators impose discipline among coordinating process and technology elements as part of a COA, using APIs to integrate diverse vendor technologies. Orchestration doesn’t allow skipping steps or processes — it does allow the automation at each stage or place in the hierarchy to come together under a single workflow. This means that instead of the workflow stopping at each stage for the task to be done, the orchestrator handles it from start to completion and the end result is a finished task or service that is free of human error and typically done in a much quicker timeframe than achievable with manual intervention and hand-offs.
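The chaining discipline described above can be illustrated with a minimal sketch of a workflow engine; the task names and the placeholder API calls are hypothetical, standing in for the vendor integrations a real orchestrator would invoke.

```python
# Minimal sketch of orchestrator-style task chaining: tasks run in a fixed
# order, each receives the accumulated context, and the workflow halts on
# failure rather than silently skipping a step. All task names are hypothetical.

def provision_vm(ctx):
    ctx["vm_id"] = "vm-001"          # stand-in for an IaaS API call
    return ctx

def assign_ip(ctx):
    ctx["ip"] = "10.0.0.5"           # stand-in for an IPAM API call
    return ctx

def register_monitoring(ctx):
    ctx["monitored"] = True          # stand-in for a monitoring API call
    return ctx

def run_workflow(tasks, ctx):
    """Execute tasks in sequence; any exception stops the chain."""
    for task in tasks:
        ctx = task(ctx)
    return ctx

result = run_workflow([provision_vm, assign_ip, register_monitoring], {})
```

The point of the sketch is the single workflow: no stage waits on a manual hand-off, and a failure in any task stops the chain before later tasks run against an inconsistent state.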
Orchestrators may also extend to workflows that cross domains and cover the end-to-end service needs across the entire network and life cycle of the service. For example, there’s the Life Cycle Service Orchestrator (LSO), which is in charge of specifying, provisioning, managing, and tearing down end-to-end services across heterogeneous (physical and virtual) networks. Such a multi-domain service deployment process may begin with a request from development to operations to create the virtual machines, then move to networking to assign IP addresses, and finally place the service into the proper alerting and management groups for operations. All of these tasks are IT-related but involve different departments or domains. Once the Life Cycle Service Orchestrator determines what service to set up in each domain, it talks to each of the domain controllers that actually run the service. Typically, the LSO is also expected to manage the services it has helped set up and to eventually take them down.
An orchestration platform is also useful to verify what pieces of applications are still running and restart failed components in the correct order as required by the application, which goes beyond automated task performance to examining the entire stack and selectively performing automated tasks. In such cases, the orchestrators may coordinate installation, alert disabling during the reboot cycle, update verification, reestablishing the alerts and finally, provide the ability to halt a workflow and notify personnel in the event of an update gone wrong.
In summary, orchestrators are complex control systems that arrange and automate sequences of actions involving multiple technology elements to deliver a service.
The Orchestrator Value Chain
As previously mentioned, there are many types of orchestrators, each designed for a specific role within an environment, within a domain, within the stack, and within or across the life cycle of a service. In many cases, orchestrators are called upon to work with another orchestrator tool to satisfy some workflow automation need, and thereby add value to the service delivery life cycle, whether that value is manifested as speed-to-capability, elasticity, efficiency, security, safety or some other quality.
The TM Forum describes this modular control scheme as the Management-Control Continuum (MCC), consisting of interacting chains of cooperating functions that manage the delivery of services. The TM Forum envisions the MCC as part of its Future Mode of Operation for managing the next-generation network of agile, micro-services-based, networked applications. The continuum of control functions encompasses the orchestration tools of the physical and virtual networks. Each orchestrator cooperatively decomposes and automates services, resources, and tasks in its respective domain. The MCC concept is also congruent with command and control approaches applied in operational technology (OT) environments, such as the Purdue Model for Control Hierarchy, a model used by many Industrial Control Systems (ICS) and Industrial Internet of Things (IIoT) deployments.
Service Provider Orchestrators
The need to interoperate across multiple orchestrators in the delivery of a service is most prevalent for cloud and telecom service providers, as their services tend to be increasingly personalized based on multiple parameters (user policies, application and content, network conditions, and more) and therefore involve interconnecting orchestration processes running across heterogeneous systems in multiple locations. As services become more personalized and adaptable, the workflows become exponentially more complex. Also, orchestration in these environments is becoming even more dynamic as software-defined networking and micro-services take hold to deliver agile services.
The following list presents some of the orchestrators that may be involved in a cloud or telecom services provider environment. These orchestrators are strewn across many technology layers and specialized domains like cloud, metro, access, and core networks.
1. Business Support System (BSS) – Automates workflows related to business services and delivery of customer-oriented processes, such as customer registration, service order creation and order management, managing customer data, processing bills and collecting payments. Interfaces with the OSS.
2. Operations Support System (OSS) – Collection of orchestrated services that automate workflows related to managing and controlling the physical infrastructure elements of the networks. Typically the OSS covers the following functions: network management, service delivery, service fulfillment (including network inventory, activation and provisioning), service assurance/alerting, security, and customer care. Provides interfaces with the BSS and to lower-level Element Management Systems (EMS).
3. Life Cycle Service Orchestrator (LSO) – Assembles complex, multi-faceted IT service requests into automated, repeatable loops. Sits atop each domain and orchestrates services from end to end. It interfaces with element management systems, cloud management platforms, the NFV orchestrator, and the data center SDN controller and understands available resources, both physical and virtual, that need to be managed and used to deliver a service. Serves data up to the BSS/OSS. Also referred to as a Multi-Domain Service Orchestrator (MDSO).
4. Service Orchestrator (SO) – Marshals individual functions together into a course of action or service. Manages catalogs of customer facing services and resource facing services and does CFS to RFS mapping, RFS service activation, and life cycle management of the services. A special type of SO, called a Network Function Virtualization Orchestrator (NFVO) manages virtual network services. It is responsible for on-boarding of new network services (NS) and virtual network function (VNF) packages; NS lifecycle management; global resource management; validation and authorization of network functions virtualization infrastructure (NFVI) resource requests. It interfaces with the Virtual Network Function Manager (VNFM) for VNF deployment, and to the data center SDN controller to establish VNF forwarding graphs and service chain paths (i.e., topology management) of the network services instances.
Note: Two competing open source projects, the OPEN-Orchestrator Project and Open Source MANO, were established to promote open source approaches to NFV orchestration.
5. Virtual Network Function Manager (VNFM) – Manages the operational requirements of an individual virtual network function (VNF). The VNFM works in concert with other orchestrators, such as the Virtualized Infrastructure Manager (VIM) and the NFVO to help standardize the functions of virtual networking; and, interfaces with EMS to increase the interoperability of software-defined networking elements. Manages VNF inventory, link descriptors, and performs VNF lifecycle management, VNF monitoring, healing and root-cause analysis. VNFMs are critical for scaling, changing operations, adding new resources, and communicating the states of VNFs to other managers. Sometimes also referred to as a Domain Orchestrator (DO).
6. Virtual Infrastructure Manager (VIM) – Keeps an inventory of the allocation of virtual resources to physical resources. It manages a repository of hardware resources (compute, storage, and networking) and software resources (hypervisors), along with the discovery of the capabilities and features to optimize the use of such resources. This allows the VIM to orchestrate the allocation, upgrade, release, and reclamation of compute, network and storage infrastructure resources and optimize their use. From a network perspective, it supports the management of VNF forwarding graphs by organizing virtual links, networks, subnets, and ports. The VIM also manages security group policies to ensure access control. The VIM performs other functions as well – such as collecting performance and fault information via notifications; managing software images (add, delete, update, query, copy) as requested by other managers and orchestrators; and managing catalogs of virtualized resources that can be consumed by services.
7. Data center SDN controller – A network operating system that provides data center-wide management of software-defined networking (SDN) services, by providing a global view of the entire network – link state and topology. Traffic flow is managed using a software controller which allows dynamic optimization and tuning not available in locally managed switches and routers. Primary functions include establishment and management of virtual networks (VLAN, VXLAN…), service chains, floating IP addresses, creation of forwarding policies, and network access control.
8. WAN controller (classical or SDN) – Automates and controls processes related to WAN virtual and/or physical network service instantiation and control (MPLS, IP, optical), multi-layer path computation and resource optimization.
9. Element management system (EMS) – Automates and manages workflows for the commissioning, configuration, and monitoring of specific network elements in the underlying physical network infrastructure including fault, configuration, accounting, performance, security (FCAPS). To support management of the traffic between itself and other network elements, the EMS communicates upward to a higher-level network management system (NMS) that is part of the collection of OSS services.
In general, most of the orchestrators listed above may be considered “infrastructure orchestrators,” with a special focus or relevance to virtualized network and cloud environments.
Application Orchestrators
There are also orchestrators that function at the application layer. Generally, application orchestrators work by decoupling integration logic from the applications to perform event processing. Application orchestrators manage tasks in a message-driven container that provides security, message routing, transaction management, reliability patterns, transformation, and crucial monitoring and tracking capabilities. Some examples of application orchestrators include service brokers, API managers, enterprise service bus, pub/sub messaging systems, and event-driven processors. These orchestrators provide the ability to route a session through various applications, with policy configured to determine the applications that the orchestration engine invokes, the order in which the orchestration engine invokes the applications, and conditions for invoking applications. Each application executes a certain business logic.
One example of an application orchestrator is the Conductor tool used by Netflix and open-sourced under the Apache License 2.0 on GitHub. The Conductor tool is the “front door” to the Netflix ecosystem of micro-services. As requests come from devices, Conductor provides the logic of composing calls to all services that are required to construct a response. It gathers whatever information it needs from the back-end services, in whatever order needed, formats and filters the data as necessary, and returns the response. So, at its core, the Netflix API is an orchestration service that exposes coarse-grained APIs by composing fine-grained functionality provided by the micro-services.
According to Netflix’s developer documentation on Conductor, the main characteristics of this orchestrator are:
- The ability to create complex workflows
- Tasks are executed by micro-services
- The workflow blueprint is written in a JSON Domain Specific Language (DSL)
- The engine provides visibility and traceability into process execution
- The ability to pause, resume, restart, and stop tasks
- Task execution is generally asynchronous but can be forced to run synchronously
- The ability to scale to millions of process flows.
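To make the JSON DSL concrete, here is an illustrative workflow blueprint built as a Python dict and serialized to JSON. The workflow and task names are hypothetical; the field names follow the patterns in Conductor's published examples, but treat the structure as a sketch rather than a validated Conductor definition.

```python
import json

# Illustrative blueprint in the style of Conductor's JSON DSL. The workflow
# and task names are invented; field names mirror Conductor's documented
# examples (tasks, taskReferenceName, inputParameters, ${...} expressions).
workflow_blueprint = {
    "name": "encode_and_deploy",
    "version": 1,
    "schemaVersion": 2,
    "tasks": [
        {
            "name": "encode_video",
            "taskReferenceName": "encode",
            "type": "SIMPLE",   # a SIMPLE task is executed by a micro-service worker
            "inputParameters": {"fileLocation": "${workflow.input.fileLocation}"},
        },
        {
            "name": "deploy_to_cdn",
            "taskReferenceName": "deploy",
            "type": "SIMPLE",
            # Output of the first task feeds the second: this wiring is what
            # lets the engine trace "how far along is process X?"
            "inputParameters": {"encodedFile": "${encode.output.location}"},
        },
    ],
    "outputParameters": {"cdnUrl": "${deploy.output.url}"},
}

blueprint_json = json.dumps(workflow_blueprint, indent=2)
```

Because the blueprint is plain data rather than code, the engine can persist it, version it, and report execution progress against it, which is the traceability property Netflix cites below.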
Netflix decided to create an orchestration engine rather than a choreography engine because the latter did not scale well in their tests. Another deciding issue was that there was almost no way to systematically answer “how much are we done with process X?”
Another class of application orchestrator is built on configuration management software that models the desired state of an application and its supporting infrastructure, and then enforces the correct order of configuring the nodes that host application components. One example of this class is Puppet. The Puppet application orchestrator provides an interface, called a capability, to share configuration information among unrelated nodes. Puppet then uses the specified ordering of configurations to share information between nodes (for example, stopping or starting services) and ensure each component of the stack is created in the correct order.
Another class of application orchestration is API orchestration, which is defined as “an abstraction layer that takes generically-modeled data elements and/or features and prepares them in a more specific way for a targeted application.” An API orchestrator differs from traditional, formalized APIs, which define interactions in a very fixed and rigid way. An API orchestrator enables much more flexibility, providing translation layers that use some commonly agreed underlying information model to pass information between unrelated modules. Often, an API orchestrator is deployed as a gateway to back-end micro-services to optimize communication between clients and the application. It can also encapsulate the details of the micro-services, which enables the micro-services to evolve without impacting the clients. For example, two micro-services might be merged, or one micro-service might be partitioned into two or more services; only the API orchestrator/gateway needs to be updated to reflect these changes, and the clients are unaffected. As reflected in the following figure, it uses pre-programmed logic to construct one or more calls to other services to get or pass information. It can then enforce different policies for each transport, style, or message, as well as implement global policies that can be applied to the entire API.
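The gateway pattern described above can be sketched as follows; the two service functions stand in for HTTP calls to back-end micro-services, and all names are hypothetical.

```python
# Sketch of an API orchestrator/gateway composing one coarse-grained client
# response from several fine-grained back-end micro-services. The service
# functions below are hypothetical stand-ins for HTTP calls.

def fetch_profile(user_id):
    # stand-in for GET /profiles/{id} on a profile micro-service
    return {"user_id": user_id, "name": "Alice"}

def fetch_recommendations(user_id):
    # stand-in for GET /recommendations/{id} on a recommendations micro-service
    return ["item-1", "item-2"]

def gateway_home_screen(user_id):
    """Compose a single response from fine-grained services.

    If the back-end services are later merged or split, only this
    orchestration layer changes; clients of the gateway are unaffected.
    """
    profile = fetch_profile(user_id)
    recs = fetch_recommendations(user_id)
    return {"greeting": f"Hello, {profile['name']}", "recommendations": recs}

response = gateway_home_screen(42)
```

The design choice being illustrated: clients couple to one stable, coarse-grained endpoint, while the gateway absorbs churn in the fine-grained service topology behind it.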
API orchestration is aided by the development of API standards. One such effort is led by the Open API Initiative (OAI), which is developing an interoperability specification. The goal of the OAI specification is to define a standard, language-agnostic, vendor-neutral interface to REST APIs that allows both humans and computers to discover and understand the capabilities of a service without access to source code or documentation, and without network traffic inspection. When properly defined, a consumer can understand and interact with the remote service with a minimal amount of implementation logic. SmartBear Software is donating the Swagger Specification directly to the OAI as the basis of this open specification.
An example of an uber application orchestrator that combines several application orchestrator classes is the Itential Pronghorn Application Platform. Pronghorn is a multi-tier platform focused on improving network operations. The platform consumes APIs of southbound systems and presents these capabilities to the Pronghorn application layer to enable unified workflow and management capabilities across multiple southbound systems and networks. Pronghorn provides an extensible platform for management of multi-vendor/multi-technology environments. The following figure highlights the basic architecture of the Pronghorn Application Platform.
Key capabilities include:
- The Adapter and Broker layers provide integration with southbound entities, as well as supplying important core functionality.
- The Cog layer houses business logic that is reusable across applications.
- The Application layer provides user interfaces and additional business logic not provided by Cogs.
Pronghorn integration takes place via REST APIs that reside just above the Cog layer. These APIs expose the full functionality contained in the Cog layer to both Pronghorn and third-party applications. Pronghorn APIs are built utilizing Open API Initiative concepts to provide an open and easy-to-use platform. This openness enables simplified customization and app development via an SDK.
Security Orchestrators
Forrester defines security orchestrators as:
“Technology products that provide automated, coordinated, and policy-based action of security processes across multiple technologies, making security operations faster, less error-prone, and more efficient.”
Key functions of a security orchestrator include:
- COA decision-making and selection
- Alert management
- Case management
- Playbook management
- Automation editor / visual COA editor
- Rules engine or AI engine
- Application integration framework
- Metrics and reporting
Time-draining security tasks that generally can be automated and orchestrated include:
- Incident investigation involving log gathering and analysis
- Review and analysis of threat intelligence sources
- Vulnerability scanning and patching
- Ticket updating, report creation, and email alerts (e.g., automatically logging into multiple systems and entering incident information)
- Taking corrective actions (e.g., implementing security controls, updating block lists, updating IDS rules, disabling a user account).
Security orchestrators vary in their approaches to playbook composition (e.g., community-based vs. proprietary) and in how they integrate their orchestration capabilities with other point tools and applications (e.g., open architectures using published REST APIs vs. closed proprietary message fabrics). Their sophistication in automated decision-making also varies widely, from primarily human-in-the-loop approaches to fully scripted, risk-based decision-making with human review only in the most complex situations.
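The risk-based end of that decision-making spectrum can be sketched as a simple threshold-driven playbook selector. The playbook names, thresholds, and scoring below are all hypothetical; a real product would draw them from its rules engine or ML model.

```python
# Sketch of risk-based COA (playbook) selection in a security orchestrator.
# Playbook names, thresholds, and the toy scoring function are hypothetical.

PLAYBOOKS = {
    "enrich_and_monitor":     {"min_risk": 0,  "human_review": False},
    "isolate_host":           {"min_risk": 50, "human_review": False},
    "full_incident_response": {"min_risk": 80, "human_review": True},
}

def score_alert(alert):
    """Toy risk score: alert severity weighted by asset criticality."""
    return alert["severity"] * alert["asset_criticality"]

def select_coa(alert):
    """Pick the most aggressive playbook whose threshold the risk score meets."""
    risk = score_alert(alert)
    name = max(
        (n for n, p in PLAYBOOKS.items() if risk >= p["min_risk"]),
        key=lambda n: PLAYBOOKS[n]["min_risk"],
    )
    return name, PLAYBOOKS[name]["human_review"]

# A high-severity alert on a critical asset escalates to the playbook that
# still routes through human review, reflecting the hybrid approaches above.
coa, needs_review = select_coa({"severity": 9, "asset_criticality": 10})
```

Keeping `human_review` as a per-playbook flag is one way to encode the middle ground: routine responses run fully scripted, while only the highest-risk COAs pause for an analyst.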
There are also several specialized security orchestrators. For example, an Identity-Defined Networking (IDN) orchestrator enables customers to rapidly build cloaked and encrypted networks. With IDN, devices, endpoints and networks are assigned a unique cryptographic identity and can only communicate with explicitly white-listed systems or endpoints. Centralized orchestration allows policies to be set to dynamically manage the cryptographic identities with no changes to IP configuration. The IDN orchestrator ensures all IDN services are up to date and synchronized, and collects metrics and active state information from the service endpoints within the IDN fabric. It also provides a secure API that enables integration and automation with other security and networking services such as directory services, SIEMs, and monitoring tools for data center or resource fail-over. The underlying functionality is enabled by the Host Identity Protocol (HIP), a collection of IETF standards. HIP separates the end-point identifier and locator roles of IP addresses, introducing a Host Identity (HI) name space based on a public key security infrastructure.
Other specialized security orchestrators / controllers include Software-Defined Perimeter (SDP) controllers, Network Access Controllers (NAC), risk managers, and deception and honeynet orchestrators.
Interoperability Across Orchestrators
It can be easily imagined that there would be on-going interactions between different security orchestrators as well as interactions with other infrastructure orchestrators, application orchestrators, and various controllers as part of the end-to-end automation of a Course of Action (COA) or playbook. These interactions may be managed in several ways, such as:
- Using an uber orchestrator of orchestrators
- Via the TM Forum’s modular MCC-based orchestrator interoperation
- By a standardized message fabric and standard C2 command language.
The right approach is highly dependent on the characteristics of the target environment and the hierarchy of control. However, the orchestrator interactions involved in a distributed environment such as IoT cannot practically be fielded without decentralized control, likely with a high degree of autonomy, and agile communications.
A monolithic uber-orchestrator, like that often used for traditional telecom and cloud fulfillment systems, has a full model of all resources in the network, understands all domain-specific languages, and generally leads to a reduction in the number of controllers and orchestrators. However, this approach also leads to scaling issues. Instead, a hierarchy of domain-focused orchestrators, where each orchestrator models its own domain and uses abstracted views of the other domains it interacts with, leads to a more robust and scalable network and allows network operators to choose the best solution for each domain.
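The hierarchical alternative can be sketched as follows: each domain orchestrator keeps its full model private and exposes only an abstracted view upward. The domain names, model fields, and capacity check are hypothetical.

```python
# Sketch of a hierarchy of domain-focused orchestrators. Each domain keeps
# its detailed model local and exposes only an abstracted view, so the
# parent never needs every domain-specific language. Names are hypothetical.

class DomainOrchestrator:
    def __init__(self, domain, detail_model):
        self.domain = domain
        self._detail = detail_model      # full domain model stays local

    def abstract_view(self):
        """Expose only what other domains need in order to interoperate."""
        return {"domain": self.domain, "capacity": self._detail["capacity"]}

    def fulfill(self, request):
        # A real domain orchestrator would decompose the request using its
        # own domain-specific language; here we just acknowledge it.
        return f"{self.domain}: provisioned {request}"

class ParentOrchestrator:
    """Decomposes a service against abstracted views, then delegates."""
    def __init__(self, children):
        self.children = {c.domain: c for c in children}

    def deploy_service(self, request):
        return [c.fulfill(request) for c in self.children.values()
                if c.abstract_view()["capacity"] > 0]

optical = DomainOrchestrator("optical", {"capacity": 3, "wavelengths": [1550, 1551]})
ip = DomainOrchestrator("ip-mpls", {"capacity": 0, "lsps": []})
parent = ParentOrchestrator([optical, ip])
results = parent.deploy_service("metro-link")
```

Note that the parent reads only `capacity` from each child; the wavelength and LSP details never leave their domains, which is exactly what keeps the hierarchy scalable.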
Nokia’s white paper – Scalable Orchestration with a Hierarchy of Domain-focused Orchestrators – highlights the information flow across multiple infrastructure orchestrators for a complex COA. I have superimposed on this flow a notional concept of operations describing a set of security orchestrators and their interactions with other infrastructure orchestrators as shown in the figure below.
1. A mission monitor system tracks parameters (physics and network-based) received from a water management system to identify the risk to critical water management applications. A history of what is normal is maintained by the system for each critical parameter so that anomalous events may be detected. Events are classified and assigned a risk score.
2. Once a certain risk score threshold is exceeded, an alert is issued by the mission monitor system to the OSS to trigger appropriate responses. Abnormal and possibly malicious behavior that was detected by a network security sensor triggers an alert to the OSS.
3. The OSS sets up tracking for the alert, classifies the alert, and dispatches workflows to handle the event. It sends the alert to a Security Orchestrator.
4. A Security Orchestrator (SO) is a workflow system included as part of the OSS. Based on the risk profile provided, the SO classifies the risk condition indicated by the alert, and selects a course of action (COA) to respond to the alert. The SO sends the COA tasking model to the Service Orchestrator which breaks the COA model down into the required resource facing services to implement the tasks contained in the COA tasking model.
5. The first task is to enrich the situational context around the alert. The Service Orchestrator dispatches a task to investigate threat data from internal and external threat intel feeds and to correlate the event to other behavior captured by logs and SIEM.
6. Results from the enrichment tasks are sent to the SO for updating the COA model. The SO updates the COA model.
7. The SO issues a new COA tasking model to the Service Orchestrator based on the enriched alert.
8. The updated COA model is broken down by the Service Orchestrator. Tasks are dispatched by the Service Orchestrator to update security configurations and policies to the respective EMS.
9. The Service Orchestrator also implements the COA by tasking the NFVO for different virtual network services: map / geo-locate the event against the end-to-end network topology, identify the risk to connected assets, retrieve artifacts from affected hosts, isolate and cloak affected network services segments, update policy files in virtual network security functions (vIDS) and block lists (vFW, vIPS, etc.), deploy new overlay networks to the water management system along with required virtual security functions.
10. The NFVO uses the COA-specific security parameters to pick the required controls from a catalog of Security Service Functions maintained by the Open Security Controller (OSC) to create the new logical security services. The requested network services must have been on-boarded to the NFVO’s catalog of network services along with their network service descriptors (NSD). An NFV network service comprises virtual network functions (VNFs) and connected physical network functions (PNFs), a VNF forwarding graph describing the connectivity between the VNFs/PNFs, and one or more forwarding paths. The forwarding graph is made up of a series of connection points and virtual links. The NFVO executes the NSD to create the requested services and service policy updates.
11. The OSC coordinates the update of security policies between the EMS and the virtual security function managers. The OSC acts as an abstraction layer for the security function managers, alleviating the need for integration into each SDN controller or virtualization infrastructure manager. It is important to understand that the OSC does not manage the virtualized security functions or their security policies; these are still handled by the traditional element or security function manager for each function.
12. The NFVO instructs the data center SDN controller to create the new connectivity between the to-be-created VNFs based on the VNF forwarding graph.
13. The OSC works in conjunction with the SDN controller depending on it for traffic steering, service function chaining for security service insertion, and ensuring that the traffic is steered to the right security enforcement capability based on security policies.
14. The data center SDN controller implements the specified forwarding policies with service chaining.
15. For each of the required VNFs that are part of the COA tasking, the NFVO requests a VNFM to deploy it. The NFVO checks if required VNFMs are running and, if not, it will instantiate an appropriate VNFM. The VNFM deploys a VNF as one or more virtual deployment units (VDUs) that typically run on virtual machines. Different VNFs may require different flavors of VDU depending on the ratio of required memory, compute, storage, and I/O performance. The NFVO resource orchestrator makes a placement decision based on the requested resources and its policies, and provides instructions on where and how to deploy the VNF to the VNFM.
16. The OSC coordinates the insertion of security policies using metadata such as policy tags and dynamic OSC security group membership for the new VNFs across the VIM environment.
17. Any VNF that consists of more than one VDU needs internal connectivity between these VDUs. The VNFM creates this connectivity either through the VIM or through the appropriate data center SDN controller.
18. Based on the placement decision, the VNFM knows the data center where to deploy the VNF and the virtual infrastructure manager (VIM) responsible for the needed resources. Based on that knowledge, the VNFM requests the VIM to create a virtual machine for each VDU.
19. The HIP controller segments and cloaks the affected network segment and sets up new network connections from the water management premises to the data center as an overlay VPN based on instructions received from the SO and OSC.
20. To complete activation of the COA, the OSC or, if needed, the service orchestrator configures the physical network functions and VNFs through their relevant management systems with COA-specific parameters, such as security access codes, firewall rules, signatures for IPS, etc.
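The ordered remediation flow in the steps above can be sketched as a simple playbook runner that executes COA steps strictly in sequence and halts on the first failure. This is an illustrative sketch only; the step names and the executor callback are hypothetical stand-ins, not an actual SO or NFVO API.

```python
# Illustrative COA runner: remediation steps must execute in a particular
# order across the application stack, so we walk them sequentially and
# stop on the first failure. Step names loosely mirror the flow above.

COA_STEPS = [
    "geolocate_event",
    "assess_connected_assets",
    "retrieve_host_artifacts",
    "isolate_affected_segments",
    "update_security_policies",
    "deploy_overlay_network",
]

def run_coa(steps, execute):
    """Run each step in order; report the first failure and what completed."""
    completed = []
    for step in steps:
        if not execute(step):
            return {"status": "failed", "at": step, "completed": completed}
        completed.append(step)
    return {"status": "success", "completed": completed}

# Example: every step succeeds except the policy update.
result = run_coa(COA_STEPS, lambda s: s != "update_security_policies")
```

Because later steps (e.g., deploying the overlay network) depend on earlier ones (e.g., isolating affected segments), a real orchestrator would also need rollback or retry logic at the point of failure.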
The NFV security orchestrator shown in the figure is based on the features and functions of the Open Security Controller (OSC). OSC is a GitHub project, originally initiated by Intel Security, designed to address SDN / NFV security challenges by serving as a centralized security services orchestration tool across multiple cloud environments. As shown in the figure, it does so using a modular architecture to interface with:
- Multi-vendor virtualized security function managers (Security Manager Plugins)
- Multi-vendor SDN controllers (SDN Controller Plugins)
- Virtualization infrastructure managers (Virtualization Connectors).
Conceptually, OSC fits in the data center as a security service orchestration solution to automate deployments and coordinate policy-based insertion for the virtualized security functions (vNGFW, vIPS, vWAF, vADC, etc.) and SDN-based Service Function Chains (SFC). OSC allows a tenant to pick the required controls from a catalog of Security Service Functions and create a logical service describing how that control will be deployed across the multiple virtualization environments. OSC abstracts the lower-level infrastructure for the security function managers, alleviating the need for them to do point-to-point integration with each virtualization environment. OSC achieves this using Virtualization Connectors and SDN Controller Plugins via its southbound APIs. By doing so, OSC allows the security administrator to define and validate consistent and granular security controls across these environments.
OSC enables fully automated provisioning, de-provisioning, distribution and delivery of security inside the perimeter of the network for virtualized security functions such as virtual Next Gen Firewalls (vNGFW), virtualized IPS (vIPS), virtualized Web App Firewalls (vWAF) and virtualized Application Delivery Controllers (vADC) from multiple vendors. OSC coordinates policies (using metadata such as policy tags and dynamic OSC security group membership) across multiple virtualization and SDN environments. OSC enables policies to be defined and enforced based on application workload requirements as opposed to traditional perimeter-based security in virtualized data centers. OSC’s role in ETSI MANO (Management and Network Orchestration) is that of NFV Security Orchestrator (see SEC013, a normative specification developed by the SEC working group within ETSI NFV). The core functionalities defined for the NFV Security Orchestrator are as follows:
- Orchestrates system wide security policies within the NFVI (NFV Infrastructure).
- Acts as a trusted third party that resides independently.
- Manages NFV Security Services Agents (SSAs), keeping them in a consistent state according to the specified policy; the associated security functions can be orchestrated and deployed at system start-up or dynamically.
- Facilitates secure bootstrapping of SSAs.
- Secures the pairing of VNFMs and EMSs, policy management, and integrity assertion.
- Monitors SSAs for failures and performs remediation.
A standard messaging fabric may provide an underlying foundation to the Service Orchestrator or Security Orchestrator in support of the domain-based, modular interaction as described in the flow above. Currently, the Integrated Adaptive Cyber Defense (IACD) effort under Johns Hopkins Applied Physics Lab (JHUAPL) is evaluating messaging standards for orchestration framework specifications as part of the Secure and Resilient Cyber Ecosystem (SRCE) Focus Group. The mission of SRCE is to promote enhanced interoperability, automation, trust, and machine-speed information sharing across cybersecurity products and services. As part of this mission, the SRCE is defining specifications for a message fabric to support orchestration. Key specification goals include:
- A set of commonly understood application interfaces/descriptors usable by any tool or information source to plug-in to the fabric [Technical Interoperability]
- A standardized message set that establishes the contextual constructs and data formats to understand the message in the intended way [Semantic Interoperability]
- A consistently defined set of message services (supporting control, configuration, publish/subscribe, etc.)
- A set of configurable trust and access services that enables secure communications with confidentiality, integrity, and availability
- A set of transport protocols such as the Advanced Message Queuing Protocol (AMQP) from OASIS and Data Distribution Services (DDS) from the Object Management Group
Relying on common data representations and a message bus facilitates integration of components from independent vendors while providing backward compatibility of message formats.
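One way to picture these specification goals is a message that pairs a common envelope (supporting technical interoperability: any tool can route it) with a typed payload (supporting semantic interoperability: consumers know how to interpret it). The sketch below is hypothetical; its field names are not drawn from the actual SRCE specification.

```python
import json
import uuid
from datetime import datetime, timezone

def make_message(msg_type, payload, topic):
    """Wrap a typed payload in a common envelope so any fabric participant
    can route and interpret it. Field names are illustrative only, not
    taken from the SRCE specification."""
    return {
        "id": str(uuid.uuid4()),                          # unique message identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "topic": topic,                                   # pub/sub routing key
        "type": msg_type,                                 # semantic payload type
        "payload": payload,                               # structured, typed content
    }

msg = make_message(
    "alert.intrusion",
    {"src_ip": "10.0.0.5", "severity": "high"},
    "security/events",
)
wire = json.dumps(msg)  # serialize for transport, e.g. over AMQP or DDS
```

Because the envelope is format-stable while the payload is versioned by its `type` field, independent vendors can evolve payload schemas without breaking routing, which is what gives the fabric its backward compatibility.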
Meeting the SRCE specification goals using a reactive architecture can also provide additional orchestration service capabilities. A reactive architecture enables faster end-to-end processing because playbook services can be executed in parallel and asynchronously. For example, instead of assuming that process A will be called by process B after an event occurs, you could program process B to write data to a message queue indicating the event has occurred, including all relevant data, and let process A consume from that queue. Control is also easily distributed in a reactive architecture, so there is no longer a single orchestrator serving as a central point of failure. A decoupled architecture built on message queues is an improvement over custom scripting for complex processes. It also works well when you need to scale certain parts of a workflow but not others: if there are more messages in a queue than a single instance can handle in the time allowed, additional instances can be brought online, with no need to change code or alter the system architecture. Event Sourcing is another pattern facilitated by a reactive architecture, where an Event Stream stores all of the events and enables event replay. This way, if an orchestrator service goes down while events are still being produced, it can replay those events when it comes back online to catch up.
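The queue-based decoupling and event replay described above can be sketched in a few lines. This is a minimal single-process illustration, assuming an in-memory queue and an append-only event store as stand-ins for a real message broker and Event Stream.

```python
from queue import Queue

# Minimal sketch of the decoupling described above: the producer writes
# events to a queue instead of calling the consumer directly, and an
# append-only event store keeps every event so a consumer that restarts
# can replay the stream and catch up (Event Sourcing).

event_store = []       # append-only log of all events
work_queue = Queue()   # decouples producer from consumer

def publish(event):
    """Producer side: record the event, then enqueue it for consumers."""
    event_store.append(event)
    work_queue.put(event)

def drain(queue):
    """Consumer side: process everything currently queued."""
    handled = []
    while not queue.empty():
        handled.append(queue.get())
    return handled

# Producer emits events; the consumer processes the backlog, then "crashes".
publish({"event": "host_isolated", "host": "10.0.0.5"})
publish({"event": "policy_updated", "fw": "vFW-1"})
handled = drain(work_queue)

publish({"event": "vnf_deployed", "vnf": "vIPS-2"})  # emitted while consumer is down

# On restart, the consumer replays the event store to rebuild its state.
replayed = list(event_store)
```

Scaling works the same way in this model: if the queue grows faster than one consumer can drain it, additional consumers can pull from the same queue without any change to the producer.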
Rapid and accurate decision-making for orchestration will require more than message interoperability. Orchestrating and informing the mosaic of micro-services that provide security protections will also require semantic interoperability through the use of domain ontologies, domain specific languages, and ways to link or relate ontologies. To this goal, interoperability is also being facilitated by another effort created through the auspices of the JHUAPL IACD project that is focused on the development of a security-domain specific orchestration language known as OpenC2. This language is being developed and maintained now through an OASIS work group to enable the coordination and execution of command and control of cyber defense components.
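To give a flavor of such a security-domain language, an OpenC2 command pairs an action verb with a target. The sketch below follows the general action/target shape of the OpenC2 language, but treat the specific target and argument names as illustrative; the OASIS specification defines the normative vocabulary.

```python
import json

# Sketch of an OpenC2-style command: an action verb applied to a target.
# The action/target structure reflects the OpenC2 language; the exact
# target and argument names here are illustrative, so consult the OASIS
# specification for the normative vocabulary and actuator profiles.
command = {
    "action": "deny",                       # what to do
    "target": {                             # what to do it to
        "ipv4_net": "203.0.113.0/24"
    },
    "args": {
        "response_requested": "complete"    # ask the actuator to confirm
    },
}

wire = json.dumps(command)  # transport-agnostic: any message fabric can carry it
```

Note that the command says nothing about how the deny is enforced; a vFW, vIPS, or any other actuator that understands the vocabulary can execute it, which is the point of a domain-specific command-and-control language.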
Publish/subscribe messaging systems are also widely used to achieve a synchronized global network view among diverse service producers and consumers. For example, HyperFlow, a distributed event-based control plane for OpenFlow, allows sharing of a synchronized, consistent network-wide view among multiple orchestrators / controllers. HyperFlow uses a publishing mechanism to maintain a consistent global view across controllers: whenever a system status change is detected, the detecting controller selectively publishes an event about the change through a publish/subscribe system, and the new status is then pushed to subscribed controllers for immediate updating. Another possible approach to linking different domains is through a clearinghouse capability. For example, the Interface for Metadata Access Points (IF-MAP) – an open specification developed by the Trusted Computing Group (TCG) – provides a publish / subscribe interface and clearinghouse for information about security events and objects on the network, as shown in the adjoining figure. It can support high volumes of real-time data writes, unstructured relationships, diverse consumer interest in changes to a device’s current state as they occur, and distributed data producers and consumers. The Contrail SDN controller leverages IF-MAP to share the state of the virtual network functions that it controls.
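The HyperFlow-style pattern of controllers sharing a network-wide view can be sketched as follows. The in-process broker below is a stand-in for a real publish/subscribe system, and the class and method names are illustrative, not HyperFlow's actual interfaces.

```python
# Sketch of the pub/sub view-synchronization pattern: each controller
# publishes local state changes, and subscribed peers apply them so every
# controller converges on the same network-wide view. The Broker here is
# an in-process stand-in for a real publish/subscribe system.

class Broker:
    def __init__(self):
        self.subscribers = []

    def subscribe(self, controller):
        self.subscribers.append(controller)

    def publish(self, source, change):
        # Push the change to every subscribed controller except the source.
        for c in self.subscribers:
            if c is not source:
                c.apply(change)

class Controller:
    def __init__(self, name, broker):
        self.name = name
        self.view = {}          # this controller's copy of the global view
        self.broker = broker
        broker.subscribe(self)

    def detect_change(self, key, value):
        """Local event: update own view, then publish it to peers."""
        self.view[key] = value
        self.broker.publish(self, {key: value})

    def apply(self, change):
        """Remote event: merge a peer's published change into the view."""
        self.view.update(change)

broker = Broker()
c1 = Controller("ctrl-east", broker)
c2 = Controller("ctrl-west", broker)
c1.detect_change("link-7", "down")  # both controllers now see link-7 down
```

The same shape underlies an IF-MAP-style clearinghouse, except that publishers write metadata to a central store and consumers subscribe to changes of interest rather than receiving every update.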
These technology examples provide just a hint of what is currently available to enable interoperability among orchestrators at the messaging and informational layers. Tibco, Informatica, ActiveMQ, RabbitMQ, WebSphere MQ, and the RTI Data Distribution Service are further examples of tools that provide interoperability at the messaging layer.
This concludes the first part of this three-part journey into the realm of orchestration with a special emphasis on security automation and orchestration. Next stop is to evaluate the role of artificial intelligence as a decision-making element for the next generation of security orchestrators. We will review a couple of security orchestrators that are applying such technology so stay tuned for Part 2.
And thanks for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, PQ cryptography, risk assessment and modeling, or other security topics. Also, email marketing@activecyber.net if you’re interested in interviewing or advertising with us at ActiveCyber.