As 2022 ended and we rolled into another year, I began to see the yearly editions of the top data breaches, top threats, and top privacy fines from different cyber and privacy companies. There were also some predictions on what 2023 will bring from a security tool and protection perspective. Generally, I didn’t perceive much optimism that things are going to get markedly better due to mounting tensions on the geopolitical front and the ability of cyber thieves to find new ways to evade defenses. According to Spherity: “The dark side of the digital world is professionalizing its
malicious activities at an amazing speed, launching sophisticated, AI-powered cyberattacks to compromise endpoints and sell stolen access credentials through Initial Access Brokers (IABs) to others who then launch ransomware attacks or steal and sell internal information." The attack surface has been expanding for the last several years and will continue to do so as IoT rolls out along with 5G. New attacks on firmware will allow attackers to become deeply embedded in the endpoint, while new attacker techniques are emerging that can evade typical security stacks. Most hope of things improving rests on the latest pushes around zero trust solutions; however, most experts expect slow adoption of this technology due to its complexity and the amount of tool integration required. Many experts claim that it will be critical in the months ahead for defenses to use AI to keep pace with AI-powered attacks and as a way to cope with skill shortages. Despite its drawbacks, I see AI as the best chance to gain on attackers. What is needed is the ability to train a model quickly, make changes to it quickly, and integrate it easily into production platforms.
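To make that train/change/integrate loop concrete, here is a minimal sketch using scikit-learn's IsolationForest for anomaly detection. The feature set and data are illustrative placeholders, not a production detection model:

```python
# A minimal sketch (not a production detector) of the train/change/integrate
# loop described above, using scikit-learn's IsolationForest for anomaly
# detection over hypothetical network-flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

def train_detector(flows: np.ndarray) -> IsolationForest:
    """Train quickly on a recent window of telemetry. Rows are flows;
    columns are placeholder features (e.g., bytes, duration, port entropy)."""
    model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
    model.fit(flows)
    return model

def is_anomalous(model: IsolationForest, flow: np.ndarray) -> bool:
    """The small, stable interface a production platform integrates against."""
    return model.predict(flow.reshape(1, -1))[0] == -1

# Retraining on each fresh batch of telemetry is cheap at this scale,
# which is the property argued for above.
baseline = np.random.default_rng(0).normal(size=(5000, 3))  # placeholder data
detector = train_detector(baseline)
print(is_anomalous(detector, np.array([25.0, 9.0, 14.0])))  # an obvious outlier
```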
So after my review, I have picked my top three challenges for 2023. These are the areas I believe the industry needs to address comprehensively this year.
#1 Cyber Challenge – I feel that authenticity is the number one cyber challenge for 2023. Authenticity covers many different types of threats and vulnerabilities such as phishing (fake identity), deep fakes (fake imagery and fake audio), fake news (misinformation), spoofed websites (fake identity), and supply chain exploits (fake software and hardware). Authenticity refers to the quality of being authentic – i.e., being actually and exactly what is claimed, or conforming to or based on fact. Authenticity involves establishing and maintaining persistence of the content, context, and structure of a subject, an object, or a record, and attackers try hard to mimic each of these properties of their target to appear authentic. Authenticity problems result from trusting our senses or processes without performing a critical assessment. For example, the structure of a phishing email makes it look like it is coming from some authoritative and possibly well-known source, with the "from" address nearly identical, or even identical (via hijacked email accounts), to a real address known to the receiver. A realistic and often personal context regarding the subject of the email is provided, and the email content is often filled with false but seemingly realistic information designed to create an urgent situation that pushes the receiver to take an action desired by the attacker.
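Some of these authenticity cues can be checked mechanically. Below is a minimal sketch, using only the Python standard library, of one such check: flagging "from" addresses whose domain is nearly identical to, but not exactly, a trusted domain. The trusted list and similarity threshold are illustrative assumptions, and real mail filters combine many more signals (SPF, DKIM, and DMARC among them).

```python
# A minimal lookalike-domain check for the phishing pattern described above.
# The trusted-domain list and the similarity threshold are illustrative.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"paypal.com", "microsoft.com", "activecyber.net"}

def lookalike_domain(sender: str, threshold: float = 0.85) -> str | None:
    """Return the trusted domain a sender's domain appears to mimic, if any."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return None  # exact match: authentic (or a hijacked real account)
    for trusted in TRUSTED_DOMAINS:
        if SequenceMatcher(None, domain, trusted).ratio() >= threshold:
            return trusted  # near-identical: likely spoofed
    return None

print(lookalike_domain("billing@paypa1.com"))    # -> paypal.com
print(lookalike_domain("alerts@micros0ft.com"))  # -> microsoft.com
```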
A recent example of this growing cyber authenticity challenge: In December 2022, Microsoft disclosed that threat actors have successfully exploited Microsoft’s “verified publisher” status to infiltrate the cloud environments of organizations. The disclosure by Microsoft coincides with a report released by Proofpoint that reveals how the campaign was successful at fooling Microsoft by mimicking popular brands in order to gain the blue verified badge. “The actor used fraudulent partner accounts to add a verified publisher to OAuth app registrations they created in Azure AD. These attacks employed lookalike versions of legitimate apps like Zoom to deceive targets into authorizing access and facilitate data theft. Targets included financial, marketing, managers, and senior executives,” the company explained.
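For defenders who want to audit their own tenant for this class of issue, a hedged starting point is to enumerate OAuth apps and check their publisher status. The sketch below assumes a Microsoft Graph access token with read permission on applications and that the `verifiedPublisher` property is populated as documented; verify both against current Graph documentation before relying on it. And as the Proofpoint campaign shows, a verified badge alone is not proof of authenticity.

```python
# A hedged sketch: enumerate Azure AD service principals via Microsoft Graph
# and flag those without a verified publisher. Assumes a valid access token;
# the verifiedPublisher property name should be re-checked against current
# Graph documentation before use.
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def service_principals(token: str):
    """Yield service principals, following Graph's pagination links."""
    url = f"{GRAPH}/servicePrincipals?$select=displayName,appId,verifiedPublisher"
    headers = {"Authorization": f"Bearer {token}"}
    while url:
        page = requests.get(url, headers=headers, timeout=30).json()
        yield from page.get("value", [])
        url = page.get("@odata.nextLink")

def flag_unverified(token: str) -> None:
    """Print apps whose publisher is missing or unverified."""
    for sp in service_principals(token):
        publisher = sp.get("verifiedPublisher") or {}
        if not publisher.get("verifiedPublisherId"):
            print(f"UNVERIFIED: {sp.get('displayName')} ({sp.get('appId')})")
```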
The cyber authenticity challenge has also gained significance due to the growing scale of problems caused by the increasing proliferation of AI/ML and the use of these tools by attackers to scale a variety of attacks. For example, Chinese misinformation channels and accounts created synthetically using AI/ML were recently taken down by Google. Another example is the use of AI to create deep fakes. As sophisticated AI becomes more accessible, the scope and scale of these deep fake attacks are increasing. Recently, an AI startup released a capability that lets anyone clone a target's voice in a matter of seconds. This AI capability is being rapidly embraced by internet trolls, such as 4chan users, who have been flocking to the free voice synthesis platform ElevenLabs and using the company's tech to clone the voices of celebrities and read out audio ranging from memes and erotica to hate speech and misinformation. ChatGPT (Generative Pre-trained Transformer), the AI chatbot from OpenAI that has taken the world by storm since its debut last November, is high on the list of authenticity threats. Some analysts have noted that what sets ChatGPT apart is that it is so good at making its hallucinations sound reasonable. Cyber professionals have voiced concerns that ChatGPT could be abused by hackers with limited resources and zero technical knowledge. For example, just weeks after ChatGPT debuted, Israeli cybersecurity company Check Point demonstrated how the web-based chatbot, when used in tandem with OpenAI's code-writing system Codex, could create a phishing email capable of carrying a malicious payload. Security professionals expect it could be used to hold more realistic, nonauthentic, interactive conversations for business email compromise and other authenticity attacks over Facebook Messenger, WhatsApp, or other chat apps. I expect that ChatGPT and other AI tools will create authenticity havoc with social media and search engines.
Software supply chains also pose authenticity challenges as hackers continue to impersonate legitimate applications to gain backdoor access to widespread systems. Software supply chain breaches, not vulnerabilities, were the leading cause of external attacks on organizations, as noted in the Forrester Analytics Business Technographics Security Survey (2022). Many apps and services are built using open-source libraries, yet few organizations can accurately detail every single library in use. As much as 78% of code in hardware and software codebases is composed of open source libraries and not developed in-house. As defenders improve the "perimeter" of applications (i.e., public-facing web apps and APIs), threat actors are turning to other vectors. Increasingly, a preferred vector is the third-party code, libraries, and services within an application. ReversingLabs' NVD Analysis 2022: A Call to Action on Software Supply Chain Security highlighted that despite the rising number of vulnerabilities submitted to the National Vulnerability Database (NVD) in recent years, malware insertions and targeted attacks against the software supply chain via open source repositories and packages are outpacing vulnerabilities as an attacker's vector of choice. Python packages are a recent target of these attacks on authenticity as described here.
Some examples of how hackers impersonate software to exploit software supply chains include:
- Dependency Confusion can occur when a build system mistakenly pulls in a similarly named dependency from a public repository rather than your private repository.
- Typosquatting occurs when a hacker uploads a malicious package to a public repository and gives it a name similar to that of a popular package (e.g., request vs. requests); a simple detection heuristic is sketched after this list.
- Brandsquatting is similar to typosquatting, but in this case the hacker gives their malicious package a name identical to that of a popular package in a different ecosystem.
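As a simple illustration of the detection heuristic mentioned above, the sketch below compares a candidate package name against a short, illustrative list of popular PyPI names and flags near-misses. Real tooling would use a much larger corpus plus additional signals such as author reputation, release history, and install scripts.

```python
# A minimal typosquat heuristic: flag package names that are close to, but
# not exactly, a popular name. The "popular" list is a tiny illustrative
# sample; real tooling would use the registry's full download rankings.
from difflib import SequenceMatcher

POPULAR = ["requests", "urllib3", "numpy", "pandas", "cryptography"]

def possible_typosquat(name: str, threshold: float = 0.9) -> str | None:
    """Return the popular package this name appears to imitate, if any."""
    name = name.lower()
    if name in POPULAR:
        return None  # exact match: the real package
    for pkg in POPULAR:
        if SequenceMatcher(None, name, pkg).ratio() >= threshold:
            return pkg
    return None

print(possible_typosquat("request"))  # -> requests (the classic example above)
print(possible_typosquat("numpy"))    # -> None (exact match, not a squat)
```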
Hackers use the following attack vectors to engage their supply chain targets:
- Public Repositories – compromised open source code is mistakenly imported into the organization as a result of typosquatting, dependency confusion, author impersonation, or similar exploit.
- Build Systems – CI/CD pipelines and other build processes are compromised due to a lack of security controls and checksum verifications, allowing threat actors to upload or insert components into the build process (see the checksum sketch after this list).
- Distribution Systems – vulnerabilities in public-facing distribution systems allow hackers to hijack software updates, upgrades and new versions of software/apps intended for customers.
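On the build-system point, the core mitigation is simple in principle: never consume an artifact whose digest does not match the value recorded when the dependency was vetted. A minimal sketch, with an illustrative artifact name and a placeholder digest:

```python
# A minimal checksum gate for a build pipeline: refuse any artifact whose
# SHA-256 digest does not match the value pinned when the dependency was
# vetted. The artifact name and digest below are illustrative placeholders.
import hashlib
from pathlib import Path

PINNED_DIGESTS = {
    "vendor-lib-1.4.2.tar.gz": "placeholder-digest-recorded-at-vetting-time",
}

def verify_artifact(path: Path) -> None:
    """Raise if the artifact is unpinned or its digest does not match."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    expected = PINNED_DIGESTS.get(path.name)
    if expected is None:
        raise RuntimeError(f"{path.name}: no pinned digest, refusing to build")
    if digest != expected:
        raise RuntimeError(f"{path.name}: digest mismatch, possible tampering")
```

For Python dependencies specifically, pip's `--require-hashes` requirements mode enforces the same rule natively.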
Hardware counterfeiting, impersonation, backdooring, and tampering are also authenticity problems affecting the supply chain. China is the leading threat when it comes to this attack vector on authenticity. A report from threat intelligence firm Recorded Future portrays Chinese technology giant Huawei as presenting a substantial threat to US interests and organizations. According to the firm, Huawei's enormous range of technologies and products and its global customer base have put the company in a position to access an unprecedented amount of information on organizations, governments, and people worldwide. Huawei's obligations to the Chinese government under various national security and related statutes put that data at risk of interception and compromise, Recorded Future said. Tampered components are extremely hard to detect and can remain hidden for years, making the consequences of such an attack substantial. For example, in a December 2022 report, Eclypsium notes that attacker tradecraft has progressed to low-level firmware attacks leveraging systemic supply chain vulnerabilities, providing the ability to destroy devices at the hardware level by attacking BMCs or the BIOS/UEFI. This gives attackers leverage in the context of extortion while aligning such tactics with a nation state's goal of destructive cyber capacity. The significance of this Chinese supply chain threat was also reinforced by an NSA report from last June stating that PRC hackers have compromised major telecommunications companies and network service providers. It seems incredible to me that the current administration is still allowing US sales of technology to blacklisted Chinese companies.
#2 Cyber Challenge – Visibility of enterprise IT and OT assets and cyber-related artifacts is my choice for the second significant cyber challenge for 2023. As a SOC manager in a previous job, I constantly ran into the difficulty of establishing an accurate baseline of the assets under management. Just to baseline network and user endpoints, I had at least a half dozen tools at my disposal, and they all came back with significantly different asset counts. The adage that you can't manage what you don't know about certainly rang true to me at the time. And endpoints are only one type of asset to be managed. Data, certificates, crypto keys, APIs, roles, IP addresses, licenses, identities/accounts, application code and scripts, biometrics, policies/rules, and management tools are all types of assets that need to be managed through a life cycle and therefore require visibility. Unified visibility (a single pane of glass) across these different assets is also important for a zero trust approach.
The distribution of assets across the cloud, the edge, on premises, on mobile, at home, in the air, on land, at sea, and in space, whether OT or IT, makes it very complicated to maintain accurate and unified visibility, and impossible in many instances when assets become disconnected at times. Patching in a highly distributed environment with limited visibility can become an overwhelming and drawn-out task. However, an attacker needs to find just one unpatched endpoint to exploit.
Supply chain visibility also reflects the complications that arise from trying to gain visibility of distributed assets due to the employment of third parties. Supply chain visibility is the ability to track different goods and/or products "in transit," giving a clear view of the inventory and activity. It enables shippers to improve customer service and cost controls through management of inventory in motion, proactive status updates, limiting disruptions, and risk mitigation. Supply chain visibility of software products has some special issues, since developers may pull from many different code libraries – both public and private – and therefore may carry forward inherent vulnerabilities into their developed products. A government (CISA) initiative is underway to support the development of Software Bills of Materials (SBOMs) as a way to gain visibility into the composition of software products. In general, dependency graph, basic scanning, or software composition analysis (SCA) tools are used after software is built to generate a machine-readable SBOM. SBOMs will evolve to include more information for a fuller picture of the software supply chain; however, software developers want the government to clarify which artifacts – threat models, log entries, source code files, and vulnerability scan reports – they'll contain and how they're to be packaged in an SBOM.
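Once an SBOM exists, consuming it is straightforward. The sketch below assumes a CycloneDX-format JSON SBOM at ./sbom.json (the file name is illustrative), such as an SCA tool might emit after a build, and simply lists each component with its version and declared licenses:

```python
# A minimal SBOM consumer: list the components recorded in a CycloneDX JSON
# SBOM. The sbom.json path is an illustrative assumption; tools like Syft or
# CycloneDX generators can produce such a file after a build.
import json
from pathlib import Path

def list_components(sbom_path: str = "sbom.json") -> None:
    """Print each component's name, version, and declared licenses."""
    sbom = json.loads(Path(sbom_path).read_text())
    for comp in sbom.get("components", []):
        licenses = [
            entry.get("license", {}).get("id", "unknown")
            for entry in comp.get("licenses", [])
        ]
        print(comp.get("name"), comp.get("version"), licenses or ["unknown"])

list_components()
```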
One of the key inventory elements of an SBOM should be APIs. Many organizations today do not have an accurate inventory of their APIs, and this lack of visibility is leading to a new threat vector known as the "shadow API." APIs are just sets of rules for communicating between application components; they make building and changing complex software easier. An API inventory will ideally contain information on all the available API endpoints, details on acceptable parameters, authentication and authorization information, and so on. However, even in organizations that maintain an API inventory, APIs in production that are under continuous development will drift far from their original definitions in the inventory. As a result, I expect to see many apps breached via APIs into which organizations have very little visibility or even understanding. Organizations need to look into adopting API contracts per the OpenAPI specification, plus scanning tools such as 42Crunch or Traceable to ensure conformance.
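A basic "shadow API" check falls out of having a contract: diff the paths declared in the OpenAPI document against the paths actually observed in production traffic. The sketch below makes simplifying assumptions (illustrative file names, one request path per log line, no path templating), so treat it as a starting point rather than a substitute for purpose-built API security tooling.

```python
# A minimal "shadow API" check: endpoints observed in production traffic but
# absent from the OpenAPI contract. File names and the one-path-per-line log
# format are illustrative assumptions; templated paths like /users/{id}
# would need real matching logic.
import json
from pathlib import Path

def declared_paths(spec_file: str = "openapi.json") -> set[str]:
    """Paths the API contract says should exist."""
    spec = json.loads(Path(spec_file).read_text())
    return set(spec.get("paths", {}))

def observed_paths(log_file: str = "access.log") -> set[str]:
    """Paths actually hit in production, one per log line."""
    lines = Path(log_file).read_text().splitlines()
    return {line.strip() for line in lines if line.strip()}

for path in sorted(observed_paths() - declared_paths()):
    print(f"shadow endpoint (observed but undeclared): {path}")
```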
Biometrics, such as face and voice, are especially interesting these days from a visibility perspective due to the ability of AI tools to easily mimic these personal assets. And given the OPM breach of several years past, it may be possible to find my fingerprints used in conjunction with my manipulated voice and face for some purpose not of my choosing. So can I gain visibility into how my biometrics are being used or mimicked elsewhere? One example of such a tool is called PimEyes. Like almost any tool, it has the potential for doing good or doing bad. I expect other AI search tools to focus on similar types of visibility searches, such as for counterfeit digital art.
#3 Cyber Challenge – Privacy and confidentiality of data is my pick for the third biggest challenge of 2023, especially with upcoming privacy regulation decisions regarding the proposed EU-U.S. Data Privacy Framework, which is supposed to settle some of the data transfer concerns of providers, consumers, and activists. A recent non-binding decision by a European Parliament committee found that the proposed EU-U.S. Data Privacy Framework does not achieve equivalence with EU data protections and should not be used to grant an adequacy decision to the US. The key objection raised by the committee is that the US still lacks a federal data protection law that offers parity with the terms of the General Data Protection Regulation (GDPR). Ultimately, the Parliament decision, expected sometime this summer, comes on the back of another banner year for data breaches and GDPR fines. DLA Piper's annual report covering EU data breaches and GDPR fines documents a record year in penalties, with a total of €2.92 billion levied throughout the bloc in 2022. The aggregate value of fines issued in 2022 was 50% more than the value of fines reported in 2021. The report also indicates that, along with concerns about data transfers, the bloc's regulators are making AI more of a priority, as concerns run high about privacy abuses in everything from facial recognition tools to ChatGPT.
According to EQS Group’s report on GDPR fines and violations for 2022 – “Both the uptick in violations and mammoth fines levied in recent years highlight a growing lack of consent and transparency. Despite that worrying trend, it has been reassuring to see European regulators actively enforcing the law and imposing fines at a rate never seen before. Before 2021, the largest fine on record was levied in 2019 when Google was penalised €50 million for how it communicated privacy to its users as well as various data processing offences. That sum was dwarfed by Amazon’s record €746 million fine in July 2021 and multiple penalties since then have also run into hundreds of millions of euro. It’s going to be interesting to see just how high the fines are going to get in 2023.”
Interestingly, the biggest fine issued to date came from a Chinese regulator. In July 2022, China’s cybersecurity regulator, the Cyberspace Administration of China (CAC), fined global mobility technology platform Didi Global a record 8.026 billion yuan ($1.2bn) for violating the country’s network security law, data security law and personal information protection law. Two Didi executives were separately fined 1 million yuan each for the infringements. The company said it accepted the CAC’s decision.
2023 starts with significant legal uncertainty and challenges for data exporters, requiring highly complex assessments of the laws and practices of third countries along with risk assessments. According to DLA Piper, European data protection supervisory authorities seem to be adopting an absolutist interpretation of the GDPR in the context of data transfers under Article 46 GDPR. Compounding this challenge, these supervisory authorities have shown willingness to enforce GDPR statutes directly against users of artificial intelligence rather than just targeting the provider of the technology. This tendency could have a chilling effect on AI for organizations knowingly or unknowingly using AI across their supply chains. The US Chamber of Commerce also completed an assessment on the future of AI recently. The authors noted, when it comes to privacy: "Another critical factor that will affect the path forward in the development of AI policy making is how nations historically consider important values, such as personal liberty, free speech, and privacy." They also said, "A failure to regulate AI will harm the economy, potentially diminish individual rights, and constrain the development and introduction of beneficial technologies."
DLA Piper commented on the “absolutist interpretation” and aggressive enforcement by supervisory authorities: “The recent decisions against Facebook and Instagram attack the cornerstone of this grand bargain and raise the question how will online services be paid for if online service providers cannot harvest and monetise consumer data?” DLA Piper defines the “grand bargain” as the heartbeat of the Internet relationship between online service providers and consumers: “social media, search and other innovative services are offered “for free” in exchange for the consumer’s personal data which is then monetised by enabling brands to serve personalised adverts to the consumer online.” They go on to say: “A wider question is whether it is really in the best interests of consumers to undermine the financial model at the heart of the free consumer internet.”
The authors of both the DLA Piper report and the Chamber of Commerce report favor adoption of a risk-based, best-practices approach to fill in the privacy statutory gaps. This approach seems to favor Big Tech as well. They see the AI train as an economic boon and an essential need for national security, and I agree with these sentiments. However, I greatly fear that individual privacy rights will be trampled in this rush to adopt AI. And as far as the "grand bargain" is concerned, I think the pendulum has swung way too far in favor of Big Tech, and with that swing have come some very disturbing abuses, from privacy to First Amendment rights, as the Twitter files and related news have revealed. The tremendous influence of Big Tech has also led to a much closer (and alarming) relationship with governments than what people had originally conceived when the Internet and social media first started. Alarm bells have been going off for several years as databases and accompanying AI have grown and realization has set in about their potential impact on privacy and individual freedoms, as evidenced by this Scientific American article. Given that the social media giants have the ability to profile users and purge anonymity (which may possibly lead to a more inclusive online environment), this ability will at the same time undoubtedly set the stage for governments and dominant social media institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech, and shape social debate. I am therefore not convinced that the current financial model that is "at the heart of the free consumer internet" has my best interests in mind.
Ultimately, organizations that write apps must secure code, keys, and personal data. It seems that very few are taking the necessary precautions to validate that all these areas are covered at a comprehensive level. The other problem is that applications intentionally share personal data about their users. From location services information to text messages to facial and voice recognition, consumers fail to understand exactly what data is being collected from their devices and then shared or sold on the open market. And I am not talking about the Dark Web, where this is also the main business activity. Or "bossware," which is employed to monitor employee productivity for both in-office and remote workers. So I see this privacy area exploding in 2023, with consumers and employees starting to become more aware of these risks and demanding more control over their personal and/or employee data.
So the question in my mind is: can a government bureaucracy get privacy regulation correct and fair to all parties concerned, and/or are there technology alternatives when it comes to social media? This ABA article talks about the need to treat social media companies differently than today from a regulatory perspective due to their de facto power over communication. Frankly, I believe that regulation in this case is an 80% solution, but I have never seen a case where filling in the remaining 20% with "best practices" has truly been successful. What I have found is that everyone's best practices are not the same, and you can't rely on government/private sector collaboration because the voice of the consumer will not be heard and there will still be varying standards across different countries.
While Web2 social media platforms have benefits and challenges, Web3 technology can drastically improve cyberspace for social networking. Based on blockchain, this technology allows for more privacy and security and gives users control over their data, digital identity, and content, fostering transparency, as anyone can view the data at any time. Blockchain-based social platforms aim to promote free speech and provide censorship resistance, with no central authority controlling or manipulating the content. Using decentralized storage systems like the InterPlanetary File System (IPFS), social networks can protect user information from exploitation and malicious use. Additionally, no third party can own, collect, or sell user data. Web3 social networks often utilize fungible and nonfungible tokens (NFTs) as new ways to monetize content. Thus, decentralized social networks are not just a change in the infrastructure of centralized Web2 platforms; they are also changing how social media companies make money. And products like Telegram, Hive, Diaspora, Peepeth, Aether, DTube, and other decentralized social networks offer the opportunity to change that "grand bargain" and swing it back more in favor of the individual – where I think it needs to be.
Can we use technology such as AI to police privacy and the misuse of AI? Perhaps we can splinter social media into AI-patrolled and regulated "safe spaces" separated from free-for-all zones. Can we combine this approach with a Web3 decentralized social networking model? How would this approach impact the open exchange of ideas and privacy needs? I think it is too early to tell what the impacts could be; however, on the surface it appears to me that this paradigm change is worth exploring if you are concerned about privacy and individual rights.
In addition to challenges concerning privacy, 2023 will bring more awareness to the quantum threat. The quantum threat refers to the risks posed by quantum computers to our current cryptographic schemes. In October 2021, U.S. intelligence officials singled out quantum computing as one of five key foreign threats, with the others being artificial intelligence, biotechnology, semiconductors, and autonomous systems. The many data breaches we see today highlight that most organizations hold data beyond its regulatory retention period (e.g., under GDPR) or usefulness. The risk with this is that data stolen today does not have to be decrypted today to hold value. Intellectual property, financial data, healthcare data, and other sensitive data stolen today could still be relevant beyond the next decade. And cybercriminals and adversarial nation states such as China understand this. Moreover, many data breaches attracting global attention from regulators, litigators, and class actions are the result of poor cyber hygiene, stolen/weak credentials, or known vulnerabilities. What risk do organizations carry when they knowingly continue to employ conventional encryption that is susceptible to a quantum attack? Will data stolen today and exposed later due to quantum advances result in class actions and penalties? NIST has begun the process of identifying quantum-resistant algorithms to protect data against this foreseeable threat. Recently, however, researchers discovered a bug in one of the approved algorithms. Despite this issue, I believe it is imperative for companies and government agencies to begin the journey to a quantum-resistant future by adding it to their digital transformation plans: identifying where data lives, how it is secured, who has access, and what its value is, and managing data through its complete life cycle.
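For teams starting that journey, here is a hedged sketch of a post-quantum key encapsulation handshake using the Open Quantum Safe project's liboqs-python bindings. The algorithm name and API calls below match liboqs-python as I understand it, but both should be checked against the installed version (Kyber has since been slated for standardization as ML-KEM).

```python
# A hedged sketch of a post-quantum key encapsulation (KEM) handshake using
# liboqs-python (the oqs module). Installing liboqs is non-trivial; check the
# algorithm name against the enabled mechanisms in your build.
import oqs

ALG = "Kyber512"  # NIST-selected KEM; name may vary by liboqs version

# Receiver generates a keypair and publishes the public key.
receiver = oqs.KeyEncapsulation(ALG)
public_key = receiver.generate_keypair()

# Sender encapsulates a fresh shared secret against that public key.
sender = oqs.KeyEncapsulation(ALG)
ciphertext, secret_sent = sender.encap_secret(public_key)

# Receiver recovers the same secret from the ciphertext.
secret_received = receiver.decap_secret(ciphertext)
assert secret_sent == secret_received  # both sides now share a symmetric key
```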
So what are your views on these cyber challenges? What are your picks for the most critical cybersecurity challenges for 2023? How would you measure progress on these challenges? Let me know your views and comments on this evolving topic.
And thanks to my subscribers and visitors to my site for checking out ActiveCyber.net! Please give us your feedback because we’d love to know some topics you’d like to hear about in the area of active cyber defenses, authenticity, PQ cryptography, risk assessment and modeling, autonomous security, digital forensics, securing OT / IIoT and IoT systems, Augmented Reality, Artificial Intelligence, or other emerging technology topics. Also, email chrisdaly@activecyber.net if you’re interested in interviewing or advertising with us at Active Cyber™.