Three years ago, the European Union drafted a landmark set of policies[1] to regulate AI, an effort to protect against the nascent technology's greatest potential harms. The framework took three years and myriad experts to draft. But when ChatGPT went viral in late 2022, the EU was sent back to the drawing board.
As competing chatbots and other forms of AI have proliferated since, their applications have expanded into nearly every sector of our lives, from education to healthcare. The first human implantation of Neuralink's brain chip[2] raised ethical and transparency concerns earlier this year, yet only a fraction of AI's risks, benefits, and capabilities are understood so far.
AI regulation made up a significant portion of the resolution to the US writers' strike[3] last summer, which placed boundaries on the use and disclosure of AI in scriptwriting. The agreement was seen as a big win for labor, and similar provisions are expected to spread into other industries. The Biden administration also issued an executive order[4] late last year to put some guardrails in place for the development and use of AI. The order requires companies to report the risks their technologies pose to national security and introduces safeguards around deepfakes and the spread of misinformation, particularly in the context of elections and consumer protection.
While the US appears to be taking more of a stepwise approach, the European Union has pursued a more holistic policy. In March of this year, the EU officially passed[2] one of the first and most comprehensive regulations on AI, known as the AI Act[5], which is set to go into effect by July. The AI Act aims to protect against risks while also fostering innovation. Among the risks it deems of highest concern are the spread of misinformation, the automation of jobs, and threats to national security. The Act includes transparency requirements and restrictions on specific use cases, such as facial recognition in law enforcement.
In November of last year, the UK gathered 28 governments, including China and the US, to agree on terms for international cooperation on AI regulation, known as the Bletchley Declaration[6]. Since then, the UK government has pledged an investment[2] of £100 million to promote responsible AI, including £10 million for regulators to adapt existing regulations to AI applications and enforce them.
Meanwhile, China is taking a stricter route by censoring chatbots[6]. Saudi Arabia and the United Arab Emirates[1], on the other hand, are investing in AI research at the government level to better understand the landscape.
Geopolitical tensions and domestic policy also impact the AI market on a global scale. Taiwan[2] currently manufactures most of the world's advanced AI chips, many of them Nvidia designs. With tensions with China escalating and the U.S. CHIPS Act coming into effect this year, supply chain and national security issues may impact the chip market that underpins the entire AI industry.
As we pursue AI investment opportunities in 2024, the portfolio managers of our Technology, AI, and Deep Learning ETF (LRNZ) will continue to rely on fundamentals while tracking regulations with a watchful eye. The evolution of the AI regulatory landscape will be just as impactful as the innovations themselves in determining the shape this nascent and powerful technology is allowed to take on a global scale.
1. https://www.nytimes.com/2023/12/06/technology/ai-regulation-policies.html
2. https://foreignpolicy.com/2023/06/20/openai-ceo-diplomacy-artificial-intelligence/
3. https://www.nytimes.com/2023/10/30/us/politics/biden-ai-regulation.html
4. https://www.nytimes.com/2023/12/08/technology/eu-ai-act-regulation.html
5. https://www.nytimes.com/2023/11/01/world/europe/uk-ai-summit-sunak.html
6. https://www.investopedia.com/ai-is-the-biggest-tech-investing-theme-for-2024-8404597
The TrueShares AI & Deep Learning ETF (AI ETF) is also subject to the following risks: Artificial Intelligence, Machine Learning and Deep Learning Investment Risk – the extent of such technologies’ versatility has not yet been fully explored. There is no guarantee that these products or services will be successful and the securities of such companies, especially smaller, start-up companies, are typically more volatile than those of companies that do not rely heavily on technology. Foreign Securities Risk -The Fund invests in foreign securities which involves certain risks such as currency volatility, political and social instability and reduced market liquidity. Growth Investing Risk – The risk of investing in growth stocks that may be more volatile than other stocks because they are more sensitive to investor perceptions of the issuing company’s growth potential. IPO Risk – The Fund may invest in companies that have recently completed an initial public offering that are unseasoned equities lacking a trading history, a track record of reporting to investors, and widely available research coverage. IPOs are thus often subject to extreme price volatility and speculative trading. New Issuer Risk – Investments in shares of new issuers involve greater risks than investments in shares of companies that have traded publicly on an exchange for extended periods of time. Non-Diversification Risk – The Fund is non-diversified which means it may be invested in a limited number of issuers and susceptible to any economic, political and regulatory events than a more diversified fund.