Progress and Problems: How the World Is Getting Smarter About Artificial Intelligence

By Emily Bishop

Introduction 

A quick browse through the existing literature on artificial intelligence (“AI”) reveals that “[m]any risks arising from AI are inherently international in nature” [1]. In early March 2024, a former Google engineer was charged in California with stealing AI trade secrets and attempting to transfer AI-related files to a Chinese company that was paying him [2]. Such instances of intellectual property theft are nothing new, but they are taking on greater significance as countries race one another to bolster their AI arsenals. Amid this pursuit, it is worth pausing to consider some of the implications raised by recent disputes and developments in the AI revolution. 

Background 

OpenAI launched ChatGPT in late 2022, introducing the world to a “highly capable linguistic superbrain…through a free, easy-to-use web interface” [3]. Politicians and researchers, however, have already raised concerns about this emerging technology. In July 2023, the United Nations Security Council held a session on the potential threat that AI may pose to global peace and stability [4]. During the meeting, Secretary-General António Guterres urged diplomats to create a United Nations watchdog agency, composed of AI experts, that would enforce AI regulations [5]. The proposed agency would function analogously to existing international agencies on climate and nuclear energy [6]. Mr. Guterres also implored member states to conclude, by 2026, an agreement banning the use of AI in autonomous weapons of war [7]. 

Back in 2018, the European Union (EU) published its “Coordinated Plan on Artificial Intelligence,” which emphasized its intention to foster ethical guidelines and standards for the development and use of AI [8]. The plan’s main points included improving research collaboration between academia and industry and positioning Europe as a “global leader” in AI [9]. 

Analysis  

France launched its national AI strategy in 2018, with the strategy’s authors arguing that “the authorities should make data ‘a common good’ by granting researchers access to information from government-funded projects and by incentivizing private companies to publish their data” [10]. This suggests that France saw AI as a potentially galvanizing force for its economy and society, but also recognized the implications of its power on the geopolitical stage [11]. The application of AI to military affairs, for instance, demonstrates the intersection between the powers and dangers of new AI technologies. One example is the nEUROn [12], an unmanned combat air vehicle (UCAV) demonstrator developed by Dassault Aviation, a French aerospace company, in collaboration with Italy, Sweden, Spain, Greece, and Switzerland [13]. With its W-shaped airframe and a wingspan of roughly 12.5 meters (about 41 feet), the nEUROn represents not just an instance of international cooperation, but also a glimpse of the potential future of military aircraft. 

But new technology brings new concerns. AI grants machines greater autonomy, giving intelligent systems more freedom to act without direct human control. Yet the more capable machines become, the harder it may be for humans to detect or anticipate the mistakes AI systems make, especially when biased data is incorporated into their training [14]. Beyond these practical concerns, there are also humanitarian arguments against the unchecked development of autonomous weapons. Skeptics warn that it would be irresponsible for world leaders to “transfer responsibility for life and death decisions to machines” when such machines cannot appreciate the value of human life [15]. 

New questions also emerge beyond the battlefield, in the legal arena, where the growing abundance of generative AI tools has raised fresh questions of copyright law [16]. In March 2024, France’s competition authority fined Google 250 million euros (approximately $271 million) for training its AI chatbot on content from news publishers without notifying or compensating them [17]. This is not the first time the authority has fined Google, either. After France adopted an EU copyright directive in 2019, Google was required to negotiate licensing terms with some of the country’s biggest news organizations, and this recent fine marks the fourth time the authority has found Google noncompliant [18]. Moving forward, the question remains: what additional enforcement mechanisms can be employed to incentivize tech giants to adopt ethical practices toward publishers and other content providers? The commercial nature of Google’s use of this content, without fair compensation to the media outlets involved, will likely bear on other pending lawsuits, including The New York Times’s suit against Microsoft and OpenAI, which alleges that the companies used Times content without consent to train their chatbots [19]. 

Conclusion 

As the frontiers of AI expand, new practical and legal challenges follow in their wake. It is simply too early to say with any certainty what the long-term consequences of these new tools will be. With so many open questions surrounding the emergence of AI and its legal implications, the task almost seems too much for any person to answer. Perhaps we should just ask ChatGPT. 

[1] The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023, GOV.UK, https://www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration/the-bletchley-declaration-by-countries-attending-the-ai-safety-summit-1-2-november-2023#contents (last visited Mar. 23, 2024). 

[2] Glenn Thrush & Nico Grant, Ex-Google Engineer Charged With Stealing A.I. Secrets for Chinese Firm, The New York Times (Mar. 6, 2024), https://www.nytimes.com/2024/03/06/us/politics/google-engineer-china-ai-theft.html. 

[3] Kevin Roose, The Brilliance and Weirdness of ChatGPT, The New York Times (Dec. 5, 2022), https://www.nytimes.com/2022/12/05/technology/chatgpt-ai-twitter.html. 

[4] Farnaz Fassihi, U.N. Officials Urge Regulation of Artificial Intelligence, The New York Times (July 18, 2023), https://www.nytimes.com/2023/07/18/world/un-security-council-ai.html. 

[5] Id. 

[6] Id. 

[7] Id.

[8] Ulrike Franke & Paola Sartori, Machine Politics: Europe and the AI Revolution, European Council on Foreign Relations (2019), https://www.jstor.org/stable/resrep21907.

[9] Id. 

[10] Id.

[11] Id.

[12] nEUROn Unmanned Combat Air Vehicle (UCAV) Demonstrator, Airforce Technology (June 11, 2014), https://www.airforce-technology.com/projects/neuron/.

[13] Id. 

[14] Franke & Sartori, supra note 8.

[15] Id. 

[16] J. Edward Moreno, Boom in A.I. Prompts a Test of Copyright Law, The New York Times (Dec. 30, 2023), https://www.nytimes.com/2023/12/30/business/media/copyright-law-ai-media.html.

[17] Tassilo Hummel, French competition watchdog hits Google with 250 million euro fine, Reuters (Mar. 20, 2024), https://www.reuters.com/technology/french-competition-watchdog-hits-google-with-250-mln-euro-fine-2024-03-20/.

[18] French regulators fine Google $272 million in dispute with news publishers, AP News (Mar. 20, 2024), https://apnews.com/article/google-france-news-publishers-copyright-7a7e484f55297e9803d17f736ff923a0. 

[19] Hummel, supra note 17. 
