Early in the summer of 2021, a friend of mine asked me about Executive Order 14028 on Improving the Nation’s Cybersecurity, and I had to admit that I had largely ignored it. Frankly, I have seen many similar bureaucratic moves on cybersecurity in the past, and none seemed to have the desired impact. But at my friend’s suggestion I decided to take a look. Overall, I found that the Biden EO builds on President Trump’s Executive Order 13800, Strengthening the Cybersecurity of Federal Networks and Critical Infrastructure, issued in 2017, and outlines a very aggressive plan for modernizing cybersecurity. The main focus is on critical infrastructure, but cloud providers, IoT, and federal government systems are also clearly in scope. It appears to me that this EO was largely a response to the threats to our critical infrastructure manifested by the SolarWinds supply chain hack and other significant platform hacks of Twitter and Microsoft, and was spurred on by the Colonial Pipeline ransomware attack. An ever-increasing number of hacks and supply chain infiltrations by China, along with the ever-rising tide of ransomware attacks, no doubt also led to the issuance of the Order.
Meeting the Biden Order will be expensive for companies, especially those who have just started their cybersecurity modernization journey. The EO calls out “zero trust” as a way to help solve the cybersecurity crisis. Zero trust is a different security model than what most companies and government agencies have embraced in the past. Instead of assuming everything behind the corporate firewall is safe, the zero trust model assumes breach and verifies each request as though it originated from an open network. Properly applying zero trust to application environments will require some level of industry standardization on best practices, given the diversity of software and hardware components in today’s hybrid cloud and OT environments. Although zero trust concepts have been around for a couple of decades, getting to zero trust in a scalable manner is a tall order; but there are lots of cyber vendors, cloud providers, researchers, and consultants who are eager to show you the way. One of the key problems facing zero trust implementation is that many current apps are over-privileged. Organizations must consider approaches that break broad permissions down into granular entitlements and impose strict auditing requirements on the irreducible set of permissioned apps. There are also implicit trust relationships and vulnerable protocols built into many OT environments that must be addressed by rearchitecting applications and environments.
This journey could be particularly hard on small businesses that play a role in the critical infrastructure. Given that hackers frequently target small businesses – which make up 43% of total breach victims – it would benefit small businesses to at least start the journey. The average ransomware demand is over $100,000 USD and continues to grow (Source: Cobalt.io). The average cost of recovery from a ransomware attack is 20x the cost of the ransom, and has more than doubled this year, increasing from $761,106 USD in 2020 to $1.85 million USD in 2021 (Source: SOPHOS 2021 survey). The most at-risk industries are utilities, financial services, and medical companies, which store large amounts of sensitive user data that make them ripe targets for ransomware attacks. These businesses will have to take a risk-managed approach to investing in better cybersecurity – both preventative measures and zero trust measures. One vendor that can get you started on the journey is Cyvatar – a cybersecurity provider with a special focus on helping small businesses from a preventative, proactive point of view. Check out my podcast here with Cyvatar’s co-founder and their complete suite of offerings that mirror the major cyber protection elements called out by the EO.
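The zero trust principle described above – assume breach and verify every request explicitly – can be made concrete with a small sketch. The token store, device-posture flag, and policy tuples below are all hypothetical stand-ins for a real identity provider and policy engine; this is only an illustration of the per-request checks, not a reference implementation.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str
    device_compliant: bool
    resource: str
    action: str

# Hypothetical stand-ins for an identity provider and a policy store.
VALID_TOKENS = {"token-abc": "alice"}
POLICY = {("alice", "payroll-db", "read")}

def verify_request(req: Request) -> bool:
    """Zero trust: authenticate and authorize EVERY request,
    regardless of network location -- assume breach."""
    user = VALID_TOKENS.get(req.user_token)   # 1. verify identity explicitly
    if user is None:
        return False
    if not req.device_compliant:              # 2. check device posture
        return False
    # 3. least-privilege check on the specific resource and action
    return (user, req.resource, req.action) in POLICY

# A request from "inside the firewall" gets no free pass:
assert verify_request(Request("token-abc", True, "payroll-db", "read")) is True
assert verify_request(Request("token-abc", True, "payroll-db", "write")) is False
assert verify_request(Request("token-abc", False, "payroll-db", "read")) is False
```

The point of the sketch is the shape of the logic: identity, device posture, and least-privilege policy are evaluated on every request, with no implicit trust granted by network location.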
Recently, the US Treasury stepped up its fight against ransomware by issuing sanctions against Suex – a cryptocurrency exchange responsible for laundering ransoms. This step is part of a whole-of-government effort to counter ransomware. The immediate impact of the sanctions is to block all property and property interests of Suex that are subject to US jurisdiction, and to forbid US citizens from engaging in transactions with them. The sanctions also block any entity that is 50% or more owned by a designated person, and financial institutions and other parties that engage in transactions or activities with Suex and associated individuals may find themselves subject to enforcement action as well. Opinions differ on how much impact these sanctions will have. I am not certain that they will have any material effect on Suex’s ransom laundering operations – the action appears to be more about “waving the flag” – but it does signal attention to the matter. However, I feel that the sanctions and other guidance issued by the US Treasury could, in some cases, lead to more harm for victims of ransomware, since they could inhibit those companies from getting back off the ground expeditiously after a ransomware attack. And if you are the victim of a ransomware attack, the federal government may not come running to your rescue, as shown by how the FBI reacted during the recent Kaseya VSA ransomware attack. Learn more about offerings to combat ransomware in this podcast by Active Cyber with Kurtis Minder, CEO of Groupsense – a digital risk protection services company – and Ritesh Agrawal, CEO of Airgap Networks – a cybersecurity vendor of the industry’s first agentless Ransomware Kill Switch – who discuss their partnership to help companies fight ransomware.
Another one of the key challenges raised by the EO deals with supply chain security, especially software supply chains. Modern software is created from a combination of proprietary and open source software forming a software supply chain with potentially hundreds of components. While embedded and IoT developers often have specific commercial libraries for their platforms, cloud and mobile application developers may have dozens of candidate open source libraries – each nominally equivalent in functional capabilities, but each tested to different standards. The overall security of the application then becomes a function of what testing was performed within the supply chain combined with the patch management strategy used by library consumers.
The complexity of software supply chains, and the risks of getting supply chain management wrong, should make this a top priority for all DevOps teams. A solution must cover several common supply chain attacks, ranging from typo-squatting and project take-overs to attacks on package managers that lead to dependency confusion. One of the solution approaches identified in the EO is the use of a Software Bill of Materials (SBOM) as a way to determine which software components may be in a software package or system, along with their versions and provenance. Allan Friedman, who formerly led the SBOM initiative at NTIA, is now working on the SBOM initiative at CISA. Related to the SBOM work, NTIA also developed the concept of a Vulnerability Exploitability eXchange, or VEX. The primary use cases for VEX are to provide users (e.g., operators, developers, and service providers) additional information on whether a product is impacted by a specific vulnerability in an included component and, if affected, whether there are actions recommended to remediate. In many cases, a vulnerability in an upstream component will not be “exploitable” in the final product for various reasons (e.g., the affected code is not loaded by the compiler, or some inline protections exist elsewhere in the software). A VEX is an assertion about the status of a vulnerability in specific products. I like the idea of an SBOM and VEX; however, I see little value in them when the equipment or software is made in China, or when other parts of your equipment/software supply chain are not trustworthy.
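To make the SBOM/VEX relationship concrete, here is a minimal sketch in the style of a CycloneDX document, which supports embedding VEX-type vulnerability analysis alongside the component list. The component name, CVE identifier, and values are illustrative, and the fields shown are a small subset of the full schema – treat this as a sketch of the idea, not a schema-complete example.

```python
import json

# A minimal CycloneDX-style SBOM fragment with an embedded VEX assertion.
# Component, purl, and CVE are illustrative, not real artifacts.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {"type": "library", "name": "libexample", "version": "2.3.1",
         "purl": "pkg:generic/libexample@2.3.1"},
    ],
    # VEX data: an assertion about whether a known vulnerability in an
    # included component is actually exploitable in THIS product.
    "vulnerabilities": [
        {"id": "CVE-2021-0000",
         "affects": [{"ref": "pkg:generic/libexample@2.3.1"}],
         "analysis": {"state": "not_affected",
                      "justification": "code_not_reachable"}},
    ],
}

def exploitable_vulns(doc: dict) -> list:
    """Return IDs of vulnerabilities the VEX analysis does NOT rule out."""
    return [v["id"] for v in doc.get("vulnerabilities", [])
            if v.get("analysis", {}).get("state") != "not_affected"]

# The lone CVE is marked not_affected, so nothing requires remediation here.
assert exploitable_vulns(sbom) == []
print(json.dumps(sbom, indent=2))
```

This captures the key value proposition of VEX: without the `analysis` block, a consumer scanning the SBOM would flag CVE-2021-0000 as actionable; with it, the supplier asserts the vulnerable code is not reachable in this product.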
One example of the threat of China seriously infiltrating the US supply chain has been well documented by Joe Weiss, a control systems expert. According to Mr. Weiss, President Trump’s Executive Order (EO) 13920 was meant to prevent the use of Chinese-made products in critical bulk electric grid applications, after the Chinese had installed hardware backdoors – bypassing all cyber security protections – in electric grid components purchased and installed at several locations on the US electric grid. However, EO 13920 was suspended in January 2021 by President Biden. Although the suspension has subsequently been lifted, there has been no further action under this EO. Instead, it was generally replaced by President Biden’s “America’s Supply Chains” EO 14017 of February 24, 2021, which develops a larger coordinated effort to address the security of the U.S. energy sector. In response to this EO, DOE issued an RFI to get feedback from industry on a secure energy policy. Securing the electric grid will require a combination of many different approaches, including:
- making the economics of buying Chinese-made products less affordable,
- eliminating the use of known Chinese front companies providing grid equipment and services,
- actually prosecuting supply chain breaches and fraud,
- addressing networking and engineering issues,
- changing the scope of the NERC CIPs to focus on the reliability and cyber security of the grid, not just routable networks,
- monitoring process sensors offline and in real time, and
- having engineering participation.
In the meantime, China remains in a position to “own” our electric grids and other critical infrastructures at times of its choosing. See this recent blog by Joe Weiss to find out the latest action on this critical issue.
NIST is organizing projects and guidance to help address the challenge of identifying trustworthy products amid the increased risk resulting from compromises in cyber supply chains. One example is the release for comment of practice guide SP 1800-34, Validating the Integrity of Computing Devices. The guide is a result of this project by the National Cybersecurity Center of Excellence (NCCoE) – NIST’s main cyber outreach arm to industry. The project produced example implementations that demonstrate how organizations can verify that the internal components of their purchased computing devices are genuine and have not been altered during the manufacturing and distribution processes. Additionally, the project and guide demonstrate how to inspect the processes that verify that the components in a computing device match the attributes and measurements declared by the manufacturer.
NCCoE has also been busy working on another practice guide – Securing the Industrial Internet of Things: Cybersecurity for Distributed Energy Resources. In this practice guide, the NCCoE applies standards, best practices, and commercially available technology to protect the digital communication, data, and control of cyber-physical grid-edge devices, also known as distributed energy resources (DERs). The use of small-scale DERs is growing rapidly and transforming the power grid. The guide demonstrates an example solution for monitoring and detecting unusual behavior of these connected industrial internet of things (IIoT) devices and building a comprehensive audit trail of trusted IIoT data flows.
Building IIoT trusted data flows is highly dependent on sensor coverage and process measurement integrity. As Joe Weiss pointed out in a recent blog, a requirement for process measurement integrity should be added to the appropriate International Society of Automation (ISA) 62443 series of control system cyber security standards. In fact, there is even a need to define the term “process measurement integrity.” If you can’t trust what you measure, there is no cyber security, safety, or resilience.
Industry has made significant advances in the protection of critical software applications in the form of confidential computing. Confidential computing may help address integrity issues for critical software. The hardware helps protect a portion of the processor and memory on which only authorized code is permitted to run and to access data, so code and data are better protected against viewing and modification from outside of the Trusted Execution Environment (TEE), even with privileged root access. A former colleague of mine at IBM and now a Fellow at Intel – Ron Perez – has been deeply involved in this initiative. Although based on different underlying technology and approaches, confidential computing reminds me a bit of work by IBM Research called Secureblue++, which protects the confidentiality and integrity of an application’s information from the other software on a system, including privileged software like the operating system, device drivers, and applications that run with root privileges – and malware that obtains root privilege by exploiting a bug in privileged software.
Industry is also moving fast to improve the transparency of trustworthiness of the software supply chain. One example is Finite State which provides a continuously updated risk model and risk score for a product. The Finite State platform fuses passive network monitoring, firmware analysis, vulnerability data sources, exploit data sources, manufacturer disclosure statements such as SBOMs, data from inventory management systems, threat feeds, and more to feed the data models used to generate the risk score.
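The fusion of multiple evidence sources into a single risk score can be sketched simply. To be clear, the weights, signal names, and scoring rule below are entirely hypothetical – they are not Finite State’s actual model – but they illustrate the general pattern of normalizing heterogeneous signals and combining them into one continuously updatable score.

```python
# Hypothetical weighted fusion of evidence sources into a 0-100 risk score.
# Signal names and weights are illustrative only.
WEIGHTS = {
    "firmware_findings": 0.35,   # results of static firmware analysis
    "known_vulns": 0.30,         # CVEs matched against the product's SBOM
    "exploit_activity": 0.25,    # exploitation observed in the wild
    "network_anomalies": 0.10,   # passive network monitoring signals
}

def risk_score(signals: dict) -> float:
    """Each signal is normalized to [0, 1]; higher means riskier.
    Unknown signal names are ignored; missing signals contribute zero."""
    score = sum(WEIGHTS[name] * min(max(value, 0.0), 1.0)
                for name, value in signals.items() if name in WEIGHTS)
    return round(100 * score, 1)

assert risk_score({"firmware_findings": 0, "known_vulns": 0,
                   "exploit_activity": 0, "network_anomalies": 0}) == 0.0
assert risk_score({"firmware_findings": 1, "known_vulns": 1,
                   "exploit_activity": 1, "network_anomalies": 1}) == 100.0
```

A real platform would of course feed these inputs from live data sources and retrain the underlying models; the sketch only shows why a single comparable score is possible at all across such different kinds of evidence.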
Creating consumer “cyber” labels for IoT devices is another initiative called out by the Biden EO. The EO directs NIST to investigate the feasibility of getting vendors to supply such labels. NIST recently issued a call for position papers and held a workshop and a set of panel discussions on this topic. I see many challenges in getting an effective labeling regime for cyber. Security can change dynamically due to new threats. Using a QR code is one way to have a dynamic label, but this will require vendors to offer over-the-air updates for IoT devices, which will be difficult to achieve comprehensively. Consistency in what is considered for cyber security measurement and certification is another key challenge. For example, should the label present a cyber risk score for the technical mitigations employed in a given device and a score for the manufacturer’s processes employed during development and post market? Schemes will probably need to define minimum profiles for cyber risk scoring that are required for IoT devices by class of device. Getting consensus, alignment, and equivalence across different government schemes is also a tall order, as the Common Criteria scheme can attest. Some governments, such as Singapore and Finland, are already putting schemes in place.
I also believe there is a high degree of uncertainty about whether labels will actually generate the desired behavior from buyers or suppliers. And what is the desired behavior? In many cases, this was the downfall of Common Criteria, where the Evaluation Assurance Level became the overriding focus of suppliers and buyers despite the importance of the Security Target. There are lots of labeling regimes; this study provides some insight into how labeling influences behavior. The willingness of buyers to pay for “measured security,” and consumer perceptions of different labels, are also important considerations that need to be evaluated. This paper reports on these social aspects that need to be studied for cyber labels.
Since safety and security are related features of high importance to IoT buyers, I believe it could make sense to follow the UL 4600 safety standard as an example for cyber. Rather than require a particular technical approach, UL 4600 concentrates on ensuring that a valid safety case is created. A safety case includes three elements: goals, argumentation, and evidence. Goals describe what it means to be safe [secure] in a specific context, such as generic system-level safety [security] goals (e.g., don’t hit pedestrians) and element safety requirements (e.g., ensure a computing chip produces correct computational results despite potential transient hardware faults). Arguments are a written explanation as to why a goal is achieved (e.g., vehicle-level argumentation that the system can detect and avoid pedestrians, including ones that are unusual or appear in the roadway from behind obstacles, within the limits of physics and subject to the vehicle displaying appropriate defensive driving behavior). Evidence supports that the arguments are valid, typically based on analysis, simulations, and test results (e.g., for a computing chip, mathematical analysis of error correction codes combined with the results of fault injection experiments). These aspects of the safety [security] case could be rated on a scale for the label.
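The goal/argument/evidence structure lends itself naturally to a simple data model. The sketch below, with hypothetical claims and a deliberately naive “supported” rule, shows how a security case in the UL 4600 style could be represented and rated for a label; a real scheme would need far richer evidence grading than a yes/no check.

```python
from dataclasses import dataclass, field

# Hypothetical model of a UL 4600-style assurance case applied to security.
@dataclass
class Claim:
    goal: str                    # what "secure" means in this context
    argument: str                # written explanation of why the goal is met
    evidence: list = field(default_factory=list)  # analyses, tests, simulations

    def is_supported(self) -> bool:
        # Naive rating rule: a claim counts only if argued AND evidenced.
        return bool(self.argument) and len(self.evidence) > 0

case = [
    Claim(goal="Device rejects unsigned firmware",
          argument="Bootloader verifies the vendor signature before flashing",
          evidence=["fault-injection test report", "code review findings"]),
    Claim(goal="Telemetry is encrypted in transit",
          argument="TLS enforced on all outbound channels",
          evidence=[]),   # no evidence yet -- argument alone is not enough
]

supported = sum(c.is_supported() for c in case)
assert supported == 1   # only the first claim is fully supported
```

A label scheme could then map the proportion of supported claims (or a weighted version of it) onto the rating scale mentioned above.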
The Biden EO also deals extensively with threat sharing and the application of AI/ML to threat understanding and cyber responses. I fully support this aspect of the EO. Threat sharing is something that the cyber security industry has been focusing on for the last 5+ years with the advent of the STIX/TAXII protocols and threat sharing centers and exchanges. The application of AI/ML to a cyber regime is more recent and has led to many possible benefits, such as establishing better insight into threats and improved discovery of threats on the network. It is also being used more extensively in playbook formulation and execution for cyber responses. I believe getting closer to fully autonomous cyber systems is going to be essential to maintaining a good cyber posture, and this will be possible only through fully leveraging AI/ML.
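For readers unfamiliar with STIX, the shareable unit is a small JSON object. Here is a minimal STIX 2.1 Indicator built by hand (the required fields follow the STIX 2.1 specification; the file hash and timestamps are illustrative). Such objects are what get published to and pulled from TAXII collections by threat sharing exchanges.

```python
import json
import uuid

# A minimal STIX 2.1 Indicator; hash value and timestamps are illustrative.
indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": "2021-10-01T00:00:00.000Z",
    "modified": "2021-10-01T00:00:00.000Z",
    "name": "Ransomware dropper file hash",
    "pattern": ("[file:hashes.'SHA-256' = "
                "'aec070645fe53ee3b3763059376134f058cc337247c978add178b6ccdfb0019f']"),
    "pattern_type": "stix",
    "valid_from": "2021-10-01T00:00:00.000Z",
}

# Serialized, this object could be posted to a TAXII server for sharing.
payload = json.dumps(indicator, indent=2)
assert '"type": "indicator"' in payload
```

The value of the standard is exactly this machine-readability: any STIX-aware tool on the receiving end can match the pattern against local telemetry without a human re-keying the indicator.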
A couple of areas that I thought the Biden EO overlooked involve the protection of digital identity – whether for human or non-human entities – and the need to improve content authenticity. I believe that we must have better cyber abilities to detect and address deep fakes and to help combat brand impersonation, which often leads to reputation failure and phishing attacks. I didn’t really see these needs addressed in the EO. For more on what I mean by content authenticity, see my articles:
- Authenticity-by-Design: Ensuring the Authenticity of Content and Identity
- Enabling Authenticity and Trust in the IoT Age Using Decentralized Systems
- What Roles Do Provenance and Reputation Play in “Authentic-By-Design” Approaches to Digital Content?
AI/ML is a double-edged sword when it comes to these needs. For example, deep fakes are based on deep learning and involve training generative neural network architectures, such as autoencoders or generative adversarial networks (GANs). There are also new AI/ML approaches for detecting brand impersonation. Content authenticity and provenance are also important to training algorithms based on machine learning. Supply chain attacks that aim to taint ML algorithms through access to the training sample or algorithm will also become an increasingly critical threat.
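The core idea behind content authenticity can be shown in a few lines: bind content to a cryptographic tag so that any tampering is detectable. The sketch below uses a shared-secret HMAC purely for illustration; real provenance schemes (such as those pursued by the content authenticity initiatives referenced above) use public-key signatures and signed metadata chains rather than a shared key, and the key and strings here are of course hypothetical.

```python
import hmac
import hashlib

# Illustrative shared secret; a real scheme would use public-key signatures.
KEY = b"publisher-secret-key"

def tag(content: bytes) -> str:
    """Publisher side: derive a keyed integrity tag over the content."""
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def is_authentic(content: bytes, provenance_tag: str) -> bool:
    """Consumer side: recompute and compare in constant time."""
    return hmac.compare_digest(tag(content), provenance_tag)

original = b"Official statement from the publisher"
t = tag(original)

assert is_authentic(original, t)                 # untampered content verifies
assert not is_authentic(b"Doctored statement", t)  # any alteration is detected
```

Even this toy version makes the asymmetry clear: producing convincing fake content is getting easier with generative models, but verifying a cryptographic binding between content and its origin remains cheap and reliable, which is why provenance-based approaches are so attractive.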
So that is my review of the Biden EO – what is your view? As I looked over the directives of the EO, I noticed many parallels to recommendations contained in Active Cyber articles, interviews and podcasts. I’ll cover a crosswalk in my next article.
And thanks for taking the time to read this somewhat lengthy article. Let me know what you think about the Biden Cyber EO’s ability to establish a better security posture for US companies and government agencies.
And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, authenticity, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, Augmented Reality, or other emerging technology topics. Also, email email@example.com if you’re interested in interviewing or advertising with us at Active Cyber™.