Everyday use of artificial intelligence has crept into offices and homes around the world. It has improved efficiencies and led to groundbreaking discoveries. But the widespread adoption of AI has also opened up a world of possibilities for cyber attackers, who now have more points of entry for their malicious behavior than ever before. While generative AI adoption has increased 287% over the past two years, cybersecurity for AI systems has increased only 43%.1 There is now a massive gap between the total digital footprint and the portion of it that is protected.
Malicious cyber attackers can confuse or poison AI systems to cause them to malfunction, with generative AI systems being the primary targets. Corruption can be introduced into the data used to train AI models, but it can also infect models while they run and process new information in real time.
Attacks on AI systems come in four main types.2 Evasion attacks occur after a model has been deployed and work by altering the input in order to influence how the system responds; for example, altering the visual input to an autonomous vehicle could make it drive through a stop sign or into oncoming traffic. Poisoning attacks occur while models are being trained, by introducing corrupted data into the training set. Similarly, abuse attacks attempt to feed an AI model incorrect information through a legitimate source. Privacy attacks occur while the model is being used and are intended to extract sensitive information about the AI system or its input data, such as a user's financial information.
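To make one of these categories concrete, the following is a minimal, purely illustrative sketch in Python of a targeted data-poisoning attack. It assumes a simple scikit-learn logistic regression standing in for a production model; the synthetic dataset, the targeted region, and the label-flipping strategy are invented for illustration and are not drawn from the cited report.

```python
# Illustrative sketch only: a toy targeted data-poisoning attack.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for a clean training set.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean_model = LogisticRegression().fit(X_train, y_train)

# Poisoning: before training, an attacker flips the labels of training
# examples in a region they care about, so the model learns the wrong
# behavior for similar inputs it will see after deployment.
target_region = X_train[:, 0] > 1.0
y_poisoned = y_train.copy()
y_poisoned[target_region] = 1 - y_poisoned[target_region]
poisoned_model = LogisticRegression().fit(X_train, y_poisoned)

# Compare behavior on test inputs that fall in the attacker's target region.
region = X_test[:, 0] > 1.0
print("clean model, targeted region:   ", clean_model.score(X_test[region], y_test[region]))
print("poisoned model, targeted region:", poisoned_model.score(X_test[region], y_test[region]))
```

The same structure carries over conceptually to far larger models: the attacker never touches the deployed system directly, only the data it learns from.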
70% of enterprises experienced at least one attack on their AI systems in the past year, and state-sponsored attacks on AI systems increased 218%.1 One reason AI systems are so vulnerable is that they are often adopted within organizations without following guidelines or gaining permission from the company. Many of the AI models in use within an organization may therefore be unknown and unprotected.
Cybersecurity companies with their fingers on the pulse of rapidly evolving technologies, and of the threats that accompany them, are developing solutions to the complex vulnerabilities of AI models. The Chief Business Officer of CrowdStrike, a leading cybersecurity company offering a cloud-native platform, describes the company's approach to cyber protection as “Securing AI with AI.”1
Recently, CrowdStrike announced that it would be improving the cybersecurity of over 100,000 generative AI deployments by embedding its Falcon Cloud Security directly into the LLM-enabling technology of another AI behemoth, Nvidia.1 The two industry leaders are teaming up to optimize AI for customers worldwide and across sectors.
CrowdStrike and Nvidia are also two of the top holdings in the TrueShares Technology, AI, and Deep Learning ETF (LRNZ) for their advanced uses of AI and distinct competitive advantages in their respective industries. They are joined by 20-30 other holdings that similarly lead innovation in the nascent AI sector. As AI continues to proliferate and expand in size and capability, we will need such companies at the forefront of countering the global forces that threaten the models themselves and the people, like us, who use them.
For a full list of holdings, visit: www.true-shares.com/lrnz/