Tim Roberts
London
The use of Artificial Intelligence (AI) has become a hot topic in boardrooms, universities, governmental offices, and the media. Companies are racing to adopt AI technologies to improve their business processes, reduce costs, address labour shortages, upskill personnel, and respond to ESG-related concerns, to name but a few examples. Generative AI (GenAI) tools such as OpenAI’s ChatGPT and Google’s Bard, which can create various types of content (text, images, and other media), have been garnering a lot of media attention recently, reigniting the debate around AI. While it is clear that AI technology has many potential applications, the true depth of the transformation, and the risks it brings to our society, are yet to be fully comprehended.
The response from policymakers has so far been somewhat disparate. Some are responding to the perceived risks by revising existing legislation and proposing new laws, such as the European Union’s AI Act, which is still in draft. In the UK, the Prime Minister’s office recently announced that the country will host the first global summit on AI safety this autumn, on the premise that the UK has a “global duty to ensure this technology is developed and adopted safely and responsibly.”
In this article we explore ways in which some applications of AI technology create risks in relation to intellectual property protections, defamation, and transparency and bias, and how these risks have sometimes crystallised into legal disputes.
AI: the inventor, creator, and author?
AI technology can now be used to create new artwork, write articles, review literature, produce interior designs, and much more. While in some cases the results are not yet as polished as desired, this foretells a world in which AI applications might augment, or even replace, the contributions humans make to art, literature, design, and more.
At present, some of the services AI technology provides create concerns around ownership, sources of inspiration, equitable distribution of rights and rewards, and the protection of intellectual property (IP). In several recent cases, the source of the dispute is the access to, and use of, copyrighted works as training data for AI models. For example, Getty Images has accused Stability AI Inc of copying millions of its photos without a licence to train its AI art tool Stable Diffusion. According to Getty, this infringes its copyrights, violates its terms of service, and affects competition, including harming companies that paid Getty for licences to its images.
The use of ChatGPT by members of Samsung’s semiconductor business provides a further IP-related cautionary tale. Samsung employees reportedly pasted lines of confidential code into ChatGPT to help debug it. Doing so effectively shared the code outside Samsung, raising the potential that it could be incorporated into ChatGPT’s future releases and, therefore, surface in future responses to queries posed by other users.
A challenge: ensuring factual accuracy
With the adoption of GenAI tools, various concerns have been raised about security risks, data privacy, and the factual accuracy of the output. The OpenAI website itself notes that ChatGPT “May occasionally generate incorrect information” and “May occasionally produce harmful instructions or biased content.” The challenge is that the large language models underpinning GenAI technology use statistical probability to infer relationships from their training data; they generate answers that are plausible, but not necessarily truthful.
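By way of illustration, the toy Python sketch below (which is not representative of any vendor’s actual system, and uses an invented phrase and invented probabilities) captures the core mechanism: each next word is sampled from a probability distribution learned from training data, so the continuation that “wins” is the statistically likely one, not the verified one.

import random

# Hypothetical learned probabilities for the words that might follow
# "The politician was". These values are invented for illustration.
next_word_probs = {
    "convicted": 0.4,     # a common continuation in the (invented) training data
    "acquitted": 0.1,
    "re-elected": 0.3,
    "exonerated": 0.2,
}

def sample_next_word(probs):
    """Pick the next word in proportion to its learned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print("The politician was", sample_next_word(next_word_probs))
# Nothing in this process checks the claim against reality; the most
# statistically plausible continuation is produced whether or not it is true.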
A key challenge is that AI models are heavily dependent on the accuracy and completeness of the datasets used to train them: if the datasets contain insufficient, outdated, incorrect or false information, the results the models produce will reflect those limitations. Take ChatGPT as an example, which currently states on its website that it has “Limited knowledge of world and events after 2021.” Its knowledge of the world is only as current as its training data.
Inaccurate results can also create legal jeopardy: where the output produced by GenAI is perceived as a reliable source of information, errors can cause real harm. As an example, a regional Australian mayor was considering a lawsuit against OpenAI after it came to light that ChatGPT falsely claimed he had served time in prison for bribery when, in fact, he had been the whistle-blower who exposed the bribery. OpenAI was given 28 days to fix the errors or face a defamation lawsuit.
In search of transparency and bias mitigation
By adopting automation technologies with AI at their core, organisations can streamline and scale up their operations and decision-making processes. Applications range from reviewing job applications to detecting fraudulent activity. As with other AI-based applications, the quality of AI decision models depends on the selection and quality of the data used to train the algorithms. Using AI decision models does not, therefore, guarantee outputs free of human judgement errors: the developers of AI are humans, with cognitive biases that influence the design of the models and the data selected to train them.
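As a purely hypothetical illustration of that point, the short Python sketch below “trains” a naive screening model on invented historical hiring decisions; because those past decisions penalised applicants with a gap in their CV, the model reproduces the same pattern for new applicants. The data, feature, and outcome are made up for the example.

from collections import defaultdict

# Invented historical decisions reflecting past human judgement, biases included.
historical_decisions = [
    {"gap_in_cv": True,  "hired": False},
    {"gap_in_cv": True,  "hired": False},
    {"gap_in_cv": True,  "hired": True},
    {"gap_in_cv": False, "hired": True},
    {"gap_in_cv": False, "hired": True},
    {"gap_in_cv": False, "hired": False},
]

def learn_hire_rates(records):
    """'Train' by measuring how often each feature value led to a hire."""
    counts = defaultdict(lambda: [0, 0])   # feature value -> [hires, total]
    for record in records:
        counts[record["gap_in_cv"]][0] += int(record["hired"])
        counts[record["gap_in_cv"]][1] += 1
    return {value: hires / total for value, (hires, total) in counts.items()}

model = learn_hire_rates(historical_decisions)
print(model)   # applicants with a CV gap score lower, simply because they did historically

Real screening systems are far more sophisticated than this, but the underlying risk is the same: a model optimised to reproduce past decisions will also reproduce past biases.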
Allegations of bias and discrimination in AI systems have been widely publicised in recent years. For example, a lawsuit brought against Workday Inc. by a job applicant, a black man in his 40s with disabilities, claimed that all of his 80-100 employment applications were denied because Workday’s AI-based screening tools discriminate on the basis of race, age, and disability.
It is becoming clear that there is a need for greater transparency, such as knowing when a result was produced using AI, and understanding fully when and how AI was used in decision-making processes. Article 52(1) of the draft EU AI Act includes a requirement that persons are informed that they are interacting with an AI system where this is not obvious from the circumstances and context of use. Similarly, the US Equal Employment Opportunity Commission’s Strategic Enforcement Plan includes a focus on enforcing non-discrimination laws where employers use AI tools in hiring decisions.
Conclusion
The creation of legislative frameworks setting requirements for the use of AI should assist developers and deployers in their efforts to build and use more accurate, ethical, and transparent systems and services. It should also help those affected to identify where AI has been deployed.
As to how any AI-related disputes will be pursued, recent events suggest that the courts themselves might need to start setting new requirements to protect themselves from the risks of dealing with filings containing GenAI ‘fantasies’. In a recently reported case, a New York-based lawyer used ChatGPT to supplement legal research, only for the presiding judge to observe that “…six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations.”
Perhaps in response to the case of the New York lawyer, a Texas judge recently set out a requirement that “All attorneys appearing before the Court must file on the docket a certificate attesting either that no portion of the filing was drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being…”. The judge also laid out his reasoning for the requirement and noted that “Any party believing a [GenAI] platform has the requisite accuracy and reliability for legal briefing may move for leave and explain why.” It will be interesting to see if anyone takes the judge up on the offer, and whether they use GenAI to help form their arguments…
For the avoidance of doubt, the authors wish to certify that GenAI tools were not used to generate the text of this article!