Industry 4.0 Ushers In the Age of Digitalization

Industry 4.0, or the fourth wave of the Industrial Revolution, leverages connectivity and autonomous operation to create robust Industrial IoT (IIoT) applications that will work in conjunction with legacy Operational Technology (OT). Industry 4.0 can be summed up as the digitalization of everything industrial, including the physical world in which OT operates. Spatialization is an Industry 4.0 technology that extends the benefits and capabilities of digitization to every aspect of the physical world in which we live – unlocking valuable new products, services, and business models. By digitizing the physical world, spatialization will amplify the explosion of data already underway. Today, big data is derived from a multitude of sources – from websites and social media to IoT applications to the expanded use of mobile devices. This immense amount of data is already difficult for organizations to make sense of. As Industry 4.0 takes hold, and as spatialization and a new generation of distributed production and IIoT sensors add to the flood, it will be nearly impossible to translate the oceans of real-time data into useful decision-making information unless the approach to information analysis and presentation is upgraded.

According to Gabriel Rene and Dan Mapes, authors of The Spatial Web, spatial interfaces will be necessary to cope with this digitalization of the physical world and the resulting explosion of data. HMIs, historians, and other analytic tools will need to accommodate this new type of interface to view spatial data, navigate it, modify it, share it, make decisions about it, and use it to simulate multiple alternative futures. These interfaces will leverage augmented reality and virtual reality to produce a mixed reality where the physical world and the virtual world meet. Spatialization will also usher in a new era of spatial protocols to address the digitalization of the physical world, including spatial domains that point to 3D volumetric spatial addresses made of coordinates, registered in a domain registry similar to the one ICANN administers. The Spatial Web will also create a new world of digital twins – enabling predictive maintenance and many other benefits. At the same time, digital twins of humans enable hard-to-detect deep fakes, creating new identity and authentication issues and resulting security problems. Identity issues will also extend to non-person entities as machine-to-machine communications explode in Industry 4.0.

New Industry 4.0 Digital Technologies Are Disrupting OT Systems

In addition to spatial computing, a set of Industry 4.0 technologies is already causing major disruptions to the architecture and operation of OT systems and is likely to accelerate its impact over the next decade. These technologies are:

  • Machine-to-machine (M2M) communication,
  • Artificial intelligence and machine learning,
  • Collaborative robotics and augmented humans,
  • Additive manufacturing and 3D printing,
  • Cloud/edge technology and mobility,
  • 5G, self-organizing networks, and software-defined networks,
  • Distributed ledger technology.

These technologies will have a massive impact on control systems and the ability to secure OT operating systems. Control systems form the nervous system of operational technology, connecting sensors to sensing and actuators to actuating in this new digital ecosystem. As the digitization of industrial systems proceeds, analysis, decision-making, and control will be physically distributed among edge devices, the network, the cloud, and connected systems, as appropriate. The distribution of functions and control among the devices and across different technologies will allow users to adapt to the needs and constraints of each specific use case. This distribution will also likely make securing OT systems more difficult as the design, composition, and operation of OT systems [and systems of systems] become more complex. Distributed OT / IIoT systems will also increase the cyber attack surface of these systems.

By the end of next year, the number of devices connected to the Internet will approach 40 billion. As connected devices multiply within industrial settings, the OT architecture layers and topologies must evolve to accommodate them, especially at the edge, where the IIoT expansion has opened up opportunities for new application categories that need machine learning and AI beyond the enterprise firewall. With edge computing and analytics, data is processed near the source – in IIoT sensors and actuators, controllers, machines, gateways, closet or factory room servers, and the like. These systems use M2M communications to pass data around and may not send all data back to the cloud; instead, the data can be filtered, integrated, and analyzed with AI locally to inform local machine behaviors. Edge nodes can provide in-the-moment data cleansing and routing, as well as real-time turnaround in complex applications, and they make it possible to place machine learning where IIoT lives.

For example, cameras are among the best sensors around, and computer vision – applying AI to image-based streaming data – is the first killer app for edge computing. Blindly sending high-resolution video over the Internet is not a good idea. A smarter practice [and one that is informed through AI] is to store and analyze video in place at your various edges and review or backhaul only when meaningful events occur.

The edge is also where autonomous machines can be found. These “self-driving” robotic machines need local control to interface with and direct mechanical systems; local analysis, AI, identity, and decision-making to coordinate with other machines, things, and systems; and, usually, some remote analysis and decision-making to ensure that the autonomous machines are all pulling in the proper direction to augment human activity and support dynamic business needs.
Placing intelligence at the edge helps address many problems often encountered in industrial settings; however, edge computing will also drive the need for specific risk assessments and new risk mitigation approaches to secure OT.
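The backhaul-only-on-meaningful-events pattern described above can be sketched as a simple edge filter: keep readings at the edge unless they deviate sharply from the recent local baseline. The window size, z-score threshold, and the sensor stream below are illustrative assumptions, not a prescription for any particular IIoT stack:

```python
from statistics import mean, pstdev

def filter_for_backhaul(readings, window=5, z_threshold=3.0):
    """Keep only 'meaningful' readings: statistical outliers against the
    recent local baseline. Everything else stays at the edge."""
    events = []
    history = []
    for t, value in readings:
        if len(history) >= window:
            baseline = history[-window:]
            mu, sigma = mean(baseline), pstdev(baseline)
            # Backhaul only when the reading is a clear outlier.
            if sigma > 0 and abs(value - mu) / sigma > z_threshold:
                events.append((t, value))
        history.append(value)
    return events

# Steady vibration signal with one spike the cloud should see.
stream = [(t, 10.0 + 0.1 * (t % 3)) for t in range(20)]
stream[12] = (12, 55.0)  # anomalous spike
print(filter_for_backhaul(stream))  # [(12, 55.0)] – only the spike is backhauled
```

The same shape generalizes to the computer vision case: run the model in place and ship only the frames (or events) that cross a significance threshold.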

Although a great need for AI exists at the edge, a number of technical challenges still need to be overcome to make it work there. For example, moving compute to the edge for machine learning analytics and AI won’t work without the appropriate level and type of compute support. There are different possibilities for providing the compute support needed, such as neuromorphic computing and in-memory computing, approaches that IBM Research is pursuing. These new compute approaches don’t follow the standard “von Neumann” architecture normally used to build a computer; instead, they function more like our brains do – with very low-power chips that can process unstructured data like images or video. There is also in-chip processing, which enables highly efficient processing by using a columnar database management system in which each field is individually stored in a memory-mapped file. Customized FPGAs also provide significant compute capacity along with the low latency and low power requirements necessary to perform at the edge and in IIoT settings.

The Volume, Velocity, and Variety of Data Growth From Industry 4.0 Will Put Pressure on the OT Security Infrastructure

Big data also affects OT security. The emergence of new factory types, such as smart automated plants, customer-centric plants, e-plants, and mobile workshops, along with new business models – as a service, as a platform, IP-based, data-driven – is being combined with more expansive pipes [5G] that connect an ever greater dispersal of sensors and data generators producing data at phenomenal rates. Autonomous systems also generate lots of data that often arrives in high volumes at high speed: imagine a connected car that generates four terabytes of data per day, then multiply that by a fleet of cars. That could easily create petabytes or even exabytes of data arriving at high velocity every day. Large data lakes across multi-cloud and multi-tier environments are needed to handle this immense flow of data. This utilization of cloud resources is already occurring across critical infrastructure sectors, and picking up speed. For example, a large majority (71%) of utilities now use cloud software, up from just 45% three years ago, according to a new Oracle survey. But while progress has been made, 85% of respondents still cite security as a top concern and note that regulatory acceptance remains a barrier to cloud adoption. Even so, the flow of data being driven by IIoT applications is creating highly complex data flow / security / privacy rules as data is moved around, filtered, protected, used, stored, and discarded. This volume and velocity of data movement will also impose significant pressure on the OT security infrastructure to meet legacy safety, security, and availability concerns, while organizations also shift focus to new concerns around data governance and privacy.
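The connected-car arithmetic is easy to verify. Assuming a hypothetical fleet of 250,000 vehicles, each producing the cited four terabytes per day:

```python
# Decimal storage units, in bytes.
TB = 10**12
PB, EB = 10**15, 10**18

per_car_per_day = 4 * TB      # figure cited for one connected car
fleet = 250_000               # hypothetical fleet size
daily = per_car_per_day * fleet

print(daily / EB)             # 1.0 – one exabyte arriving per day
print(daily / PB)             # 1000.0 – equivalently, a thousand petabytes
```

At that rate a single fleet produces an exabyte per day, which is why multi-tier data lakes and aggressive edge filtering become unavoidable rather than optional.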

Data governance and privacy present new challenges for any business implementing an IIoT project or, for that matter, any of the technologies cited earlier. Data governance is required to ensure that an organization’s information assets are formally, properly, proactively, and efficiently managed throughout the enterprise and across the value chain to ensure trust and accountability. The General Data Protection Regulation (GDPR) and other compliance regulations bring data governance to the forefront of concerns for enterprise managers – from which device did the data originate? Which users interacted with it? How has the data changed along its path? Industrial businesses will need to create infrastructures that account for governance, volume, and velocity, with analysis tools to match the velocity of information being processed and stored.
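Those three provenance questions suggest that each datum should carry a governance record alongside its value. A minimal sketch, with hypothetical field, user, and device names:

```python
from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    """Governance metadata answering the three GDPR-style questions."""
    origin_device: str                                 # from which device did the data originate?
    interactions: list = field(default_factory=list)   # which users interacted with it?
    changes: list = field(default_factory=list)        # how has it changed along its path?

    def record_access(self, user):
        self.interactions.append(user)

    def record_change(self, stage, description):
        self.changes.append((stage, description))

rec = ProvenanceRecord(origin_device="plc-07")         # hypothetical device ID
rec.record_access("analyst.kim")
rec.record_change("edge-gateway", "units converted from psi to kPa")
print(rec.origin_device, len(rec.interactions), len(rec.changes))
```

In a real deployment this record would be written to tamper-evident storage and travel with the data through each processing tier.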

Businesses also need to account for variety – the various types of data that come in – and must assume that data sources will only multiply. Spatialization will drive new standards for data usage and storage, as well as for identity of 3D spaces, virtual assets, and virtual users. Businesses will need infrastructures that are adaptable enough to absorb all data types, regardless of format. New APIs will be needed that can translate a broad range of proprietary protocols into a common format, giving applications a single source of industrial automation data and enabling the velocity of data needed and expected for Industry 4.0. These APIs must be secure as well, as broken, exposed, or hacked APIs are behind major data breaches. This means APIs must enable a trusted environment with policies for authentication and authorization, usually through an API gateway.
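One minimal sketch of such a normalizing API layer: two hypothetical vendor payload formats are translated into one common shape, and unknown protocols are rejected explicitly. The adapter names and payload shapes are invented for illustration, not real industrial protocols:

```python
# Hypothetical adapters: each translates one proprietary payload shape
# into a common {tag, value, unit} form that applications consume.

def from_vendor_a(payload):            # e.g. {"t": "temp1", "v": 71.3, "u": "C"}
    return {"tag": payload["t"], "value": payload["v"], "unit": payload["u"]}

def from_vendor_b(payload):            # e.g. "temp1;71.3;C"
    tag, value, unit = payload.split(";")
    return {"tag": tag, "value": float(value), "unit": unit}

ADAPTERS = {"vendor_a": from_vendor_a, "vendor_b": from_vendor_b}

def normalize(protocol, payload):
    """Single entry point: applications see one format regardless of source."""
    try:
        return ADAPTERS[protocol](payload)
    except KeyError:
        raise ValueError(f"no adapter registered for protocol {protocol!r}")

print(normalize("vendor_b", "temp1;71.3;C"))
```

An API gateway would sit in front of `normalize`, enforcing the authentication and authorization policies the paragraph describes before any adapter runs.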

5G Will Bring New Levels of Performance and Innovation to Control Automation and Manufacturing Operations

5G technologies provide the network characteristics essential for future manufacturing under Industry 4.0. Besides transporting oceans of data from IIoT applications, 5G will introduce new network capabilities that enhance distributed cloud computing within manufacturing facilities and operations. The digital transformation of industrial processes requires evolving the network to deliver the low latency and high reliability that are key to addressing manufacturing use cases. The high bandwidth and connection density of 5G networks also offer manufacturers the chance to build smart factories and take advantage of technologies such as automation, artificial intelligence, augmented reality for troubleshooting, and IIoT applications. In these futuristic factories, connected devices can sense their environments and interoperate with each other, making decentralized decisions. Mobile 5G technology will allow for higher flexibility, lower cost, and shorter lead times for factory floor production reconfiguration, layout changes, and alterations.

The ongoing 5G network evolution features the following developments:

  • 5G NR and carbon nanotubes – a new radio interface/access and a breakthrough in chip technology, respectively, that together extend capacity far beyond that of previous generations of mobile communication. Capabilities include massive system capacity, very high data rates everywhere, very low latency, ultra-high reliability and availability, very low device cost and energy consumption, and energy-efficient networks;
  • Real-time machine learning and artificial intelligence (AI) – analytics will be important in making networks self-optimizing and in enhancing self-organizing network features for SLA fulfillment of services;
  • Network slicing – this enables operators to provide dedicated virtual networks with customer-specific functionality;
  • Distributed Cloud – this enables placing workloads closer to the edge for better QoS such as latency;
  • End-to-End Encryption – 5G marks the beginning of a new era of network security with the introduction of IMSI encryption. All traffic data sent over the 5G radio network is encrypted, integrity protected, and subject to mutual authentication, e.g., device to network.

Aside from increasing throughput, 5G networks can also help reduce downtime. 5G-connected sensors can channel real-time information about equipment performance, ranging from vibration to noise data. Combined with machine learning algorithms, this data can help companies predict when expensive equipment is about to fail, reducing the likelihood of expensive downtime. For example, when the network provides advanced warning that a piece of specialized equipment needs a repair, augmented reality using low-latency 5G-enabled headsets will make technicians more efficient. Level 1 technicians can travel to a site and have engineers at headquarters guide them through the repair process remotely via 5G networks, using context-sensitive 3D animations to walk them through the necessary steps.
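The predict-before-failure idea can be illustrated with a deliberately simple model: fit a least-squares trend line to daily vibration readings and extrapolate when it crosses a failure threshold. Real predictive maintenance uses far richer models; the readings and threshold below are invented for illustration:

```python
def days_until_threshold(readings, threshold):
    """Fit a straight line (least squares) to daily readings and
    extrapolate when the trend crosses the failure threshold."""
    n = len(readings)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(readings) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, readings)) \
            / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    if slope <= 0:
        return None                 # no worsening trend detected
    # Days from the most recent reading until the threshold is crossed.
    return (threshold - intercept) / slope - (n - 1)

vibration_mm_s = [2.0, 2.1, 2.3, 2.4, 2.6, 2.7]   # hypothetical daily RMS readings
print(days_until_threshold(vibration_mm_s, threshold=4.5))  # about 12 days of margin
```

Crossing a planning horizon (say, parts lead time) would trigger the repair workflow, and the 5G-connected AR headset scenario picks up from there.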

5G network slicing is a network architecture that enables the multiplexing of virtualized and independent logical networks on the same physical network infrastructure. Each network slice is an isolated end-to-end network tailored to fulfill the diverse requirements of a particular application. For this reason, this technology assumes a central role in supporting 5G mobile networks that are designed to efficiently embrace a plethora of services with very different service-level agreements (SLAs). The realization of this service-oriented view of the network leverages the concepts of software-defined networking (SDN) and network function virtualization (NFV), which allow the implementation of flexible, secure, and scalable network slices on top of a common network infrastructure. Specifically, these technologies enable 5G networks to offer elastic services that operate as virtual machines. As “virtual machines,” 5G network services can scale out horizontally and independently of the networking hardware. Network services include security services such as port-based authentication to verify that authorized network devices are attached to the network; IPsec or MACsec to create authenticated, cryptographically secured tunnels for sending data between sites and network elements; and virtual security elements that can be tied to specific workloads and migrate wherever the workload goes.
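Admission control is one way to see how slices share physical capacity while keeping their guarantees isolated. This toy model (capacity numbers, latency budgets, and slice names are all hypothetical) refuses any slice whose guarantee would over-commit the shared infrastructure:

```python
class PhysicalNetwork:
    """Toy slice admission control: slices share the physical capacity,
    but each admitted slice gets an isolated, guaranteed allocation."""
    def __init__(self, capacity_gbps):
        self.capacity = capacity_gbps
        self.slices = {}

    def admit(self, name, guaranteed_gbps, max_latency_ms):
        committed = sum(s["gbps"] for s in self.slices.values())
        if committed + guaranteed_gbps > self.capacity:
            return False   # admission would break existing guarantees
        self.slices[name] = {"gbps": guaranteed_gbps, "latency_ms": max_latency_ms}
        return True

net = PhysicalNetwork(capacity_gbps=100)
print(net.admit("factory-motion-control", 20, max_latency_ms=1))   # True
print(net.admit("ar-maintenance-video", 60, max_latency_ms=20))    # True
print(net.admit("bulk-telemetry", 30, max_latency_ms=100))         # False: over-committed
```

A real SDN/NFV controller does far more (per-slice virtual network functions, dynamic re-scaling, isolation enforcement in the data plane), but the invariant is the same: no slice's guarantee may depend on another slice behaving well.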

From a business model perspective, each network slice is administered by a mobile virtual network operator (MVNO). The infrastructure provider (the owner of the telecommunication infrastructure) leases its physical resources to the MVNOs that share the underlying physical network. According to the availability of the assigned resources, an MVNO can autonomously deploy multiple network slices that are customized to the various applications provided to its own users.

5G also raises several questions. 5G requires higher-power user equipment (UE) to work, creating an open safety issue that is still being tested and debated. A high density of cells is also needed for metropolitan areas, raising questions about the physical space needed to deploy 5G. There are also factory spatial challenges for 5G.

Resiliency, Interoperability, and OT/IT Convergence Are Driving Security Frameworks for Industry 4.0

Industry 4.0 will need updated or new protocols to drive pervasive interoperability, performance, and trust at scale. These new protocols must be designed and implemented securely to ensure that new vulnerabilities are not introduced into the enterprise as new IIoT projects are implemented. Addressing this challenge requires broad collaboration to develop consensus around key concepts and to build a shared understanding of the underlying technologies.

NIST understands that realizing the future promise of Industry 4.0 will require interoperability between elements and systems, supported by new reference architectures and common definitions and lexicons. To this end, it developed NIST Special Publication 1500-202 – Framework for Cyber-Physical Systems – and established the Cyber-Physical Systems Public Working Group (CPS PWG), which is open to all, to foster and capture inputs from those involved in CPS, both nationally and globally. The Public Working Group comprises five subgroups: Vocabulary and Reference Architecture, Cybersecurity and Privacy, Data Interoperability, Timing, and Use Cases.

The report of the Vocabulary and Reference Architecture Sub-group focuses on describing a consensus-based CPS framework and common taxonomy.

The report of the Cybersecurity and Privacy Sub-group focuses on the Trustworthiness Aspect of the CPS Framework, defined as the demonstrable likelihood that the system performs according to designed behavior under any set of conditions as evidenced by characteristics including, but not limited to, safety, security, privacy, reliability and resilience.

The report of the Data Sub-group provides an overview of the Data Aspect, discusses data interoperability from the CPS viewpoint, examines traditional data interoperability issues, and discusses the difference between data versus information models.

The report of the Timing Sub-group provides an overview of the Timing Aspect, presents the current status of, and needs for, time awareness in system elements of a CPS, discusses timing and latency in CPS, and describes special security issues that arise with respect to timing.

The report of the Use Cases Sub-group provides background on methods and approaches in use case analysis, discusses the analysis method, and examines supporting use case examples in manufacturing and food production.

NIST is already starting to apply some of the findings of this CPS Framework towards its efforts in securing autonomous vehicles, as this ActiveCyber interview with Dr. Ed Griffor of NIST points out.

In addition to these NIST efforts, the Electricity Subsector Cybersecurity Capability Maturity Model (ES-C2M2) was established as a result of the Administration’s efforts to improve electricity subsector cybersecurity capabilities and to understand the cybersecurity posture of the energy sector. The ES-C2M2 includes the core C2M2 as well as additional reference material and implementation guidance specifically tailored for the electricity subsector. The ES-C2M2 comprises a maturity model, an evaluation tool, and DOE-facilitated self-evaluations.

The ES-C2M2 provides a mechanism that helps organizations evaluate, prioritize, and improve cybersecurity capabilities. The model is a common set of industry-vetted cybersecurity practices, grouped into ten domains and arranged according to maturity level. The ES-C2M2 evaluation tool allows organizations to evaluate their cybersecurity practices against ES-C2M2 cybersecurity practices. Based on this comparison, a score is assigned for each domain. Scores can then be compared with a desired score, as determined by the organization’s risk tolerance for each domain. The model is publicly available and can be used by any organization to enhance its cybersecurity capabilities. For organizations performing self-assessments, a C2M2 Facilitators Guide and C2M2 toolkit are available. The Department of Energy continues to work with public and private partners to support adoption of the C2M2.
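The score-versus-desired comparison lends itself to a simple gap calculation. The domain names and maturity levels below are illustrative stand-ins (not actual ES-C2M2 results or its full domain list), but they show how an organization might rank domains for improvement:

```python
# Hypothetical current and desired maturity levels per domain (0-3).
current = {"Risk Management": 1, "Asset Management": 2, "Access Management": 1}
desired = {"Risk Management": 3, "Asset Management": 2, "Access Management": 2}

# Gap per domain, keeping only domains that fall short of the target.
gaps = {d: desired[d] - current[d] for d in desired if desired[d] > current[d]}

# Largest maturity gaps first: a simple improvement priority list.
priorities = sorted(gaps, key=gaps.get, reverse=True)
print(priorities)  # ['Risk Management', 'Access Management']
```

The desired levels encode the organization's risk tolerance per domain, which is exactly the comparison the evaluation tool supports.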

On the resiliency side, there have been industry and research efforts on developing models for improving structural resilience in the presence of cascading failures. However, according to the Energy Sector Cybersecurity Capability Maturity Model, there is a need to develop techniques to reduce risks and to increase operational grid resilience, commensurate with the risk to critical infrastructure and organizational objectives.

NIST has developed an approach to improve the cyber resilience of OT systems as published in NIST SP 800-160v2 [final] – Developing Cyber Resilient Systems: A Systems Security Engineering Approach. This publication is used in conjunction with NIST Special Publication 800-160, Volume 1, Systems Security Engineering—Considerations for a Multidisciplinary Approach in the Engineering of Trustworthy Secure Systems, and NIST Special Publication 800-37, Risk Management Framework for Information Systems and Organizations—A System Life Cycle Approach for Security and Privacy. It can be viewed as a handbook for achieving the identified cyber resiliency outcomes based on a systems engineering perspective on system life cycle processes in conjunction with risk management processes, allowing the experience and expertise of the organization to help determine what is correct for its purpose. Organizations can select, adapt, and use some or all of the cyber resiliency constructs (i.e., objectives, techniques, approaches, and design principles) described in this publication and apply the constructs to the technical, operational, and threat environments for which systems need to be engineered. The system life cycle processes and cyber resiliency constructs can be used for new systems, system upgrades, or repurposed systems; can be employed at any stage of the system life cycle; and can take advantage of any system or software development methodology including, for example, waterfall, spiral, or agile.

It is also important to collaborate on open tools that facilitate interoperability in an inherently fragmented edge solution stack. One protocol standard that integrates M2M and secure enterprise application interoperability over the cloud is OPC UA. OPC Unified Architecture (OPC UA) is a machine-to-machine communication protocol for industrial automation developed by the OPC Foundation. OPC UA differs significantly from its predecessor, OPC Classic. The Foundation’s goal for OPC UA was to provide a path forward from the original OPC communications model (namely the Microsoft Windows-only process exchange COM/DCOM) that would better meet the emerging needs of industrial automation. With OPC UA, clear cost, power, and time-to-market (TTM) benefits can be delivered for factory automation systems.

Another option for facilitating open interoperability is tools like the EdgeX Foundry framework. EdgeX Foundry is an open source project hosted by the Linux Foundation that is building a common open platform for IoT edge computing. This open framework provides bare-minimum plumbing that serves as a center of gravity for assembling predictable solutions and also facilitates stronger performance benchmarks regardless of use. The interoperable platform enables an ecosystem of plug-and-play components that unifies the marketplace and accelerates the deployment of secure IIoT solutions across a wide variety of industrial and enterprise use cases.

Distributed Ledger Technology Will Play a Significant Role Securing the Distributed Enterprise of Industry 4.0

One new element of the Industry 4.0 infrastructure will be distributed ledger technology. Distributed ledger technology (DLT) is being considered as one technology that will help enable distributed control (via smart contracts) since DLT can process IIoT device data at the edge rather than sending everything back to a centralized cloud computing model. DLT can also help with reducing security exposure in the distributed enterprise, while also enabling a more secure supply chain.

In 1996, Nick Szabo described a smart contract as “a set of promises, specified in digital form, including protocols within which the parties perform on these promises.” While the technology available to support smart contracts has evolved considerably since then, this definition continues to capture the essence of what a smart contract is and does. Today, smart contracts are typically associated with block chain (although it is possible for other platforms to host them too). Within a block chain view of this, smart contract program logic sits within a “block.” A block is a software-generated container that bundles together the messages relating to a particular smart contract. Those messages may act as inputs or outputs of the smart contract programming logic and may themselves point to other computer code. A simple way of describing smart contracts is as a species of business process automation – a set of business rules encoded in software.
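The “business rules encoded in software” view can be made concrete with a toy contract: messages arrive as inputs, and the encoded promises determine the outcome. The shipment scenario, field names, and thresholds are invented for illustration; a production smart contract would run on a ledger platform rather than as plain Python:

```python
class ShipmentContract:
    """A set of promises in digital form: release payment only if the
    cold chain held and delivery is confirmed by the agreed message."""
    def __init__(self, price, max_transit_temp_c):
        self.price = price
        self.max_temp = max_transit_temp_c
        self.state = "in_transit"

    def on_message(self, msg):
        # Messages act as inputs to the contract's program logic.
        if msg["type"] == "temperature" and msg["value"] > self.max_temp:
            self.state = "breached"        # cold chain violated: no payout
        elif msg["type"] == "delivered" and self.state == "in_transit":
            self.state = "settled"         # promises kept: pay out
        return self.state

c = ShipmentContract(price=10_000, max_transit_temp_c=8.0)
c.on_message({"type": "temperature", "value": 6.5})   # within limits
print(c.on_message({"type": "delivered"}))            # 'settled'
```

The parties "perform on these promises" automatically: once a breach message is recorded, no later delivery message can flip the outcome.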

As a technology, block chain can be implemented using a shared, permission-based distributed ledger, which is a very specific kind of database that features strict confidentiality, access, and consistency requirements. Since these requirements are enforced through cryptography and protocols, the employment of DLT enables a single version of the truth among multiple companies. It also allows those on these shared networks to see databases of data that have never been brought together before, which creates opportunities for new forms of analytics and AI. DLT-based smart contracts can provide visibility at every step of a supply chain. Internet of Things devices can write to a smart contract as a product moves from the factory floor to the store shelves, providing real-time visibility of an enterprise’s entire supply chain.
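The tamper-evidence that makes such supply chain visibility trustworthy comes from hash-chaining: each entry commits to the hash of the previous one, so silently editing history breaks verification. A minimal sketch using only the standard library (no consensus, signatures, or networking, all of which a real DLT adds):

```python
import hashlib
import json

def add_block(chain, payload):
    """Append a block whose hash commits to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"payload": payload, "prev": prev_hash}, sort_keys=True)
    chain.append({"payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every hash; any edit to an earlier block is detected."""
    for i, block in enumerate(chain):
        prev_hash = chain[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"payload": block["payload"], "prev": prev_hash},
                          sort_keys=True)
        if block["prev"] != prev_hash or \
           block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

ledger = []
add_block(ledger, {"step": "factory", "lot": "A-113"})    # IIoT device writes
add_block(ledger, {"step": "warehouse", "lot": "A-113"})
print(verify(ledger))                      # True
ledger[0]["payload"]["step"] = "forged"    # attempt to rewrite history
print(verify(ledger))                      # False: tampering breaks the chain
```

Each IIoT checkpoint appended this way becomes part of an auditable trail that no single supply chain participant can quietly rewrite.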

Block chain-based smart contracts can help fill optimization and security gaps and improve the state of the art in resilience by providing an atomically verifiable, cryptographically signed distributed ledger that increases the trustworthiness, integrity, and resilience of OT systems at the edge.

Privacy and cybersecurity are major concerns regarding DLT for regulators and industry leaders. To enforce distributed ledger and smart contract regulations, regulators will need varying levels of access to often private information to monitor and regulate underlying transactions. Depending on the level of access necessary, market participants will likely demand protection of their information, both from other commercial parties and from inappropriate access by hackers or other unauthorized third parties. Some industry observers suggest that the use of a block chain for trade reconciliation, settlement, and the like would require sophisticated privacy controls and the management of access to the information residing in the block chain. Block chain programming already allows varying levels of partitioned access to the data within a chain. Further, the use of a permissioned network can restrict data sharing to those entities using the network and the regulators monitoring it. Regulators and the public may also benefit from many inherent characteristics of block chains and smart contracts. In situations of complicated, legally tenuous, or particularly risky transactions or contracts, the regulator could serve as an advisor by approving contracts, or possibly even coding regulator-approved ones, for industry distribution.

Some view block chain technology as too slow to keep up with the pace of IIoT transactions and data flow. While the traditional bitcoin block chain had a maximum throughput of seven transactions per second, the open source Tangle DLT can support rates in excess of 1 million transactions per second. In terms of its potential deployment, the IOTA Foundation, which develops and maintains Tangle, describes it as an open source protocol that helps underpin IoT with all kinds of machine-to-machine interactions. Test beds involving tier 1 automotive suppliers are currently in the planning stage.

In closing, as powerful as ledger technology is, it doesn’t replace the need for rethinking how we foundationally architect our solution stacks in terms of infrastructure and how applications are deployed, secured and managed from the edge to the cloud.

Digital Twins May Help to Unify OT and IT Perspectives

In addition to artificial intelligence, a great deal of diverse human domain intelligence is required to pull together an IIoT system – in many cases spanning experts across the OT and IT domains. The engineers responsible for developing OT have quite different perspectives from the engineers responsible for IT development, as the Engineering V meets the Agile Manifesto. Relatively few have significant experience in developing both. Yet both perspectives should be considered together to create a trustworthy and safe system, requiring organizational changes and some technology to increase communication, cooperation, and understanding.

Visibility, control, and real-time analytics are key to detecting and protecting against industrial cyber threats, while also providing deep insight into the performance and operation of OT / IIoT systems. Organizations should plan to capitalize on the digitalization trend by enabling “digital twins” for OT / IIoT systems to enable these key capabilities and to provide a common source of the “truth” across OT and IT disciplines. A digital twin is a virtual companion to a system, component, or part that can be exercised for simulations, training, and predictive maintenance. It can also be analyzed for engineering optimizations and for assessing security operations. The manufacturing world initially instituted the digital twin concept as a digital surrogate and a physics-based technical description of a cyber-physical or physical system resulting from the generation, management, and application of data, models, and information from authoritative sources across the system’s life cycle. Organizations should apply the digital twin concept to cyber and cyber-physical [OT] systems by linking probabilistic engineering models with test, operational, and maintenance data to assess elements of system performance, security, safety, and reliability. Such a digital system model provides OT / IT engineers and cyber stakeholders a structure for examining the cross-discipline types of cyber-physical data that should be considered across the life cycle. The digital twin model enables dynamic, real-time assessment of the actual system’s current capabilities and security state. It also aids in performing predictive assessments of future capabilities to inform decisions in acquisition, operational readiness, optimization, maintenance, and cybersecurity.

Sensors—either agent sensors or scan sensors—along with information from controllers, management systems, and manual methods, collect and report the raw data that reflect the operational / security states of the system. This information is normalized and synthesized using ontological forms to populate the digital twin model. Various analytic tools are applied to the model to support cyber assessments, evaluate performance, identify trends, forecast maintenance needs, and develop prognostics for the system under evaluation.
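A minimal sketch of that normalize-then-assess loop: heterogeneous sensor reports populate one property map, which both OT (performance) and IT (security) stakeholders can query. Asset names, report fields, and health thresholds are hypothetical:

```python
class DigitalTwin:
    """Minimal twin: normalized state populated from raw sensor reports,
    queryable as one source of truth across OT and IT perspectives."""
    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}

    def ingest(self, report):
        # Normalize heterogeneous report keys into one property map.
        for key, value in report.items():
            self.state[key.lower().replace(" ", "_")] = value

    def health(self):
        # Illustrative thresholds: an OT check and an IT check side by side.
        temp_ok = self.state.get("bearing_temp_c", 0) < 90
        patched = self.state.get("firmware_patched", False)
        return {"operational_ok": temp_ok, "security_ok": patched}

twin = DigitalTwin("pump-12")
twin.ingest({"Bearing Temp C": 72.5})       # agent sensor report
twin.ingest({"Firmware Patched": True})     # scan sensor report
print(twin.health())  # {'operational_ok': True, 'security_ok': True}
```

A production twin would replace the ad hoc key normalization with a shared ontology and feed the state map into the analytic tools described above.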

Where feasible and desired, autonomous watchdog agents, supplemented by artificial intelligence (AI), can be deployed to automate the performance assessment, forecasting, and prognostics processes over the system life cycle. These autonomous agents can provide valuable assistance at the edge where reachback across the network is constrained. The function of these autonomous agents can be further enhanced through the employment of block chain technology and the use of smart contracts. For example, distributed ledger technology [block chain] can serve as a time machine, recording every configuration change, health check, approved baseline change or waiver, and maintenance performed on the system since it was installed. Through smart contracts, replacement parts can be automatically re-ordered based on analytics performed by the intelligent autonomous agent.

Together, these solution elements provide an enduring, reliable, and authoritative source of truth about a specific system’s state of cyber readiness or level of performance for a particular period of the system life cycle and for a particular operational feature or system function. The digital twin helps organize, manage, and track system design and configuration data, revisions, and releases. It shares this information with the rest of the organization—across the system life cycle, across disciplines, and across the supply chain—so all stakeholders – both OT and IT – can collaborate from a single source of the truth.

Supply Chain Security Will Be Disrupted by Industry 4.0

According to 2019 forecasts by IDC MarketScape, manufacturers recognize that IIoT will be among the technologies with the greatest impact on their businesses over the next five years. One critical business area where the impact of IIoT applications is already being felt is the digital supply chain. PwC defines the digital supply chain in terms of eight key elements: integrated planning and execution, logistics visibility, Procurement 4.0, smart warehousing, efficient spare parts management, autonomous and B2C logistics, prescriptive supply chain analytics, and digital supply chain enablers. All of these areas must also be provided with end-to-end cybersecurity solutions. Security of the supply chain is growing in importance for critical infrastructure companies due to government mandates, the need to prevent the loss of IP, and the need to reduce the risk of catastrophic failures caused by substandard components and cyber attacks. The need for end-to-end cyber solutions, along with the impacts generated by IIoT, is driving deeper integration with industrial suppliers and a shift from the physical transport of goods to the digital transmission of data (additive manufacturing / 3D printing). This shift transfers control to distributed production and enables predictive maintenance and optimization, while also creating a hard-to-secure, perimeterless cyber enterprise. Partner ecosystems are also expanding with the growth of IIoT, but they are becoming more complicated as companies that have traditionally been quite different try to go to market together. Hence, as Industry 4.0 unfolds, there will be significant changes in business relationships and strategies that affect the location (cloud, near-edge, edge), composition, and operation of OT systems, as well as the participants in an enterprise’s value chain. These disruptions to the supply chain / value chain may also create gaps in the security of these business processes.

NIST is working to address some of these supply chain security gaps with its new project – Supply Chain Assurance – Validating the Integrity of Servers and Client Devices. The goal of this project is to provide guidance that will help organizations verify that the internal components of their purchased computing devices are genuine and have not been altered during the manufacturing and distribution processes. Additionally, this project will demonstrate the creation of manufacturing artifacts, verification of components during device acceptance testing, and verification of device state during use of personal computing devices with hardware roots of trust. The solution will use security controls that adhere to the NIST Cybersecurity Framework, industry standards, and best practices. The project will result in a freely available NIST Cybersecurity Practice Guide, documenting an example solution that demonstrates how to integrate verifiable artifacts with an organization’s existing enterprise IT management systems.
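The core idea – a manufacturer binds a device's component inventory to a verifiable artifact, and the buyer recomputes it during acceptance testing – can be sketched in a few lines. This simplification uses a shared-key HMAC as the "artifact"; the NIST project contemplates asymmetric signatures anchored in hardware roots of trust, and the component names and key below are invented for illustration.

```python
import hashlib
import hmac

MFG_KEY = b"demo-manufacturer-key"  # stand-in for a real signing key / root of trust

def sign_manifest(components: dict) -> str:
    """Manufacturer side: bind component IDs/firmware to an authentication tag."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(components.items()))
    return hmac.new(MFG_KEY, blob.encode(), hashlib.sha256).hexdigest()

def verify_device(components: dict, tag: str) -> bool:
    """Acceptance testing: recompute the tag over the observed components."""
    return hmac.compare_digest(sign_manifest(components), tag)

shipped = {"nic": "fw-2.1-ab12", "bmc": "fw-5.0-cd34"}
tag = sign_manifest(shipped)                                   # created at the factory
print(verify_device(shipped, tag))                             # True: components genuine
print(verify_device({**shipped, "nic": "fw-2.1-evil"}, tag))   # False: altered in transit
```

Any substitution or firmware change during distribution alters the recomputed tag, so the mismatch is detected before the device enters service.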

Cyber Exercises Help to Assess Readiness of OT Systems

It is worth noting that the cyber attacker is a natural actor within this emerging digitized industrial ecosystem and should be considered both from a security perspective and from an economic perspective. Attackers are likely to play an important role in the success or failure of future industries, as will security vendors. As noted by the Jack Voltaic 2.0 Cyber Research Project, a 2018 exercise developed by the Army Cyber Institute at West Point and hosted by the city of Houston, current approaches to safeguarding critical infrastructure are inadequate to meet the growing cyber threat to urban communities. Jack Voltaic provided 200 participants from 44 organizations with a full view of potential critical infrastructure crises while also offering a path to security and resiliency. The exercise found that U.S. cities need an adaptable and scalable model to improve their cybersecurity posture, as cyber attacks can quickly overwhelm an unprepared city government. Greater public-private partnerships will be required to provide vital support, and city and local cybersecurity efforts should better integrate the private sector.

The exercise also revealed that the U.S. military and its allies depend on civil and commercial infrastructure, and that infrastructure’s vulnerabilities are the military’s vulnerabilities. The military must collaborate with the National Guard, the Department of Homeland Security (DHS), the Military Reserve and the Department of Energy to develop an operational risk-management framework that can meet the issues facing the nation’s critical infrastructures. All stakeholders need to operate within the National Response Framework (NRF), which covers:

  • Roles and responsibilities at the individual and organizational levels across the private sector, as well as at the local, state, and federal government levels
  • Response actions
  • Staffing and organization
  • Planning and the National Preparedness Architecture
  • NRF implementation, Resource Center, and other supporting documents incorporated by reference

The NRF Annexes include enumerated protocols for managing incident responses across a variety of incident responders and stakeholders (firefighting, public works, law enforcement, public affairs, etc.), types of incidents (cyber, biological, nuclear, mass evacuation, etc.) as well as related disciplines such as transportation, logistics, communications, and others.

Another cyber exercise for the OT environment is GridEx. NERC’s Grid Security Exercise (GridEx) is an opportunity for utilities to demonstrate how they would respond to and recover from simulated coordinated cyber and physical security threats and incidents, strengthen their crisis communications relationships, and provide input for lessons learned. The exercise is conducted every two years. Like Jack Voltaic, GridEx aims to:

  • Exercise incident response plans,
  • Expand local and regional response,
  • Engage interdependent sectors,
  • Increase supply chain participation,
  • Improve communication,
  • Gather lessons learned,
  • Engage senior leadership.

GridEx V was completed in November 2019. It was expanded to reach distribution-only utilities (or those with limited generation or transmission assets).

And thanks to my subscribers and visitors to my site for checking out this article! Please give us your feedback, because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing ICS / IIoT and IoT systems, Augmented Reality, or other emerging technology topics. Also, email us if you’re interested in interviewing or advertising with us at Active Cyber™.