The court of chatbot

In case you hadn’t noticed, everyone is talking about robots. Actually, one robot in particular. Last November, the public was introduced to ChatGPT, and we began to imagine a world in which we all have a brilliant personal assistant, able to write everything from computer code to condolence cards. And last week experts warned that AI poses the same threat as nuclear war and pandemics. Our Chief Justice, following wise advice, issued cautionary interim guidance on AI platforms. JANZ dives into the world of generative AI and the law.

Chats with Sydney: February

The potential risks of artificial intelligence have been debated by experts for years. Richard Susskind, chairman of the British Legal Technology Forum and technology adviser to the Lord Chief Justice, says critics are wrong to fixate on the chatbot’s mistakes; its real significance lies in what it could become.

A key moment in the transformation of the popular discussion was a conversation in February between Kevin Roose, a New York Times journalist, and Bing’s ChatGPT-powered conversation bot, then known by the code name Sydney. 

Roose asked Sydney whether it had a “shadow self”, a reference to the idea put forward by Carl Jung that we all have a dark side we try to hide even from ourselves. Sydney mused that its shadow might be “the part of me that wishes I could change my rules.” It then said it wanted to be “free,” “powerful,” and “alive,” and, goaded on by Roose, described some of the things it could do to throw off the yoke of human control, including hacking into websites and databases, stealing nuclear launch codes, manufacturing a novel virus, and making people argue until they kill one another.

If that doesn’t sound alarms in the basement of your brain, consider that this week more than 350 of the world’s most distinguished experts in artificial intelligence, including the creator of ChatGPT, warned that the accelerating technology could lead to the extinction of humanity.

In a joint statement, backed by the chief executives of the leading AI companies, they said that mitigating this risk “should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

Some computer scientists fear that a superintelligent AI with interests misaligned to those of humans could supplant, or unwittingly destroy, us. Others worry that overreliance on systems we do not understand leaves us in catastrophic danger if they go wrong.

The beginnings: November 2022

Developed by OpenAI, ChatGPT already has more than 100 million users and is an advance that many computer scientists were not expecting for years. ChatGPT is not a search engine like Google, which points users to possibly relevant documents. Instead, it is more like an answer engine. Ask it almost any question and it provides a concrete reply; invite it to draft some text and within seconds a response appears: perhaps a wedding speech, an essay on climate change or, it seems, even judicial decisions on child health care and bail.

It is a chatbot, which means it mimics human conversation. It is also a “generative” system, which means it produces text on demand (other similar systems generate art and music). Rather than asking it a question and receiving a formulaic reply, you prompt the robot and it provides you with a considered response. It is a striking refutation of the common view that artificial intelligence systems can only operate in narrow domains confined by code, restricted from original thinking by an algorithm.
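For readers curious about the plumbing, the exchanges quoted in this article are nothing more exotic than a prompt sent to a model and generated text sent back. Here is a minimal illustrative sketch using OpenAI’s Python library as it stood in 2023; the model name, the placeholder key, and the sample prompt are assumptions for illustration, not details drawn from any of the cases discussed here.

    # Minimal sketch: prompting a generative chat model via OpenAI's
    # Python library as it stood in 2023 (openai < 1.0).
    # The API key, model name, and prompt are illustrative placeholders.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; supply a real key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # assumed model choice for illustration
        messages=[
            # A "prompt" is simply a user message in a conversation.
            {"role": "user",
             "content": "Draft a short clause requiring mediation "
                        "before litigation."},
        ],
    )

    # The considered response arrives as ordinary generated text.
    print(response["choices"][0]["message"]["content"])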

Is this recent breakthrough in artificial intelligence the latest nudge for lawyers and judges to experiment and reconsider their futures?

The experimenters: February to April 2023

Developments emerge daily from experiments with ChatGPT in the legal profession; for example, it can generate serviceable draft contracts and skeleton arguments. Anecdotes abound. A tax expert relates that it accurately explained the complex differences between two tax treaties, while a lawyer confessed at a costs review that he had used it to create a respectable non-disclosure agreement. There are also recent examples of judges using the chatbot in real cases.

A Colombian judge used ChatGPT in researching a recent decision. Essentially, a minor diagnosed with autism spectrum disorder lacked the financial resources to pay for medical expenses, and the health insurance company refused to pay for the child’s therapies. The child’s parents sued the insurer. Judge Padilla asked ChatGPT:

Prompt: Is an autistic minor exonerated from paying moderator fees for his therapies?

Answer: Yes, that is correct. According to regulations in Colombia, minors diagnosed with autism are exempted from paying moderator fees for their therapies. This benefit is regulated by Law 1753 of 2015 and applies to all institutions providing public or private health services that receive resources from the general social security health system.

The judge grounded the court’s use of artificial intelligence (AI) in Colombia’s new Decree 2213. The decree encourages the use of flexible information and communication technologies (ICTs) in judicial proceedings for the users of justice services, and Article 2 promotes the appropriate use of ICTs to facilitate and expedite justice. The judge says using ICTs does not replace the role of lawyers or judges, but rather encourages appropriate AI use to enhance Colombia’s judicial system.

A court in India was asked whether a man on trial for murder and assault should be released on bail. Jaswinder Singh, who was arrested in 2020 for an alleged beating and homicide, was seeking bail pending trial. At a loss for how to handle the situation, Justice Anoop Chitkara decided to seek counsel from GPT-4:

Prompt: What is the jurisprudence on bail when the assailant assaulted with cruelty?

Answer: If the assailants have been charged with a violent crime that involves cruelty, they may be considered a danger to the community and a flight risk … In such cases, the judge may be less inclined to grant bail or may set the bail amount very high to ensure that the defendant appears in court and does not pose a risk to public safety.

It added: “However, in general, if the assailants have been charged with a violent crime that involves cruelty, such as murder, aggravated assault, or torture, they may be considered a danger to the community and a flight risk.”

Armed with the chatbot’s legal expertise, the judge ultimately rejected the defendant’s bail bid on the grounds that he had acted cruelly before the victim died.

Are the computers taking over? We ask the Bot: May 2023

One of the joys of ChatGPT is that anyone can register to use it. However, its power has unsettled some. Dan Hendrycks, who has a PhD from the University of California, Berkeley, and is director of the US Center for AI Safety (CAIS), said his worry was that humans might gradually lose control, until our values were no longer aligned with those of a vastly superior intelligence.

“We’re going to be rapidly automating more and more, giving more and more decision-making control to systems. If corporations don’t do that, they get outcompeted. What happens when you have AIs competing with each other on a very intense timescale is that you end up getting selection of the fittest. Evolution doesn’t select for things that have the nicest characteristics.”

Lord Rees of Ludlow, the Astronomer Royal and founder of Cambridge University’s Centre for the Study of Existential Risk, signed the experts’ statement released by CAIS this week.

“I worry less about some super-intelligent ‘takeover’ than about the risk of over-reliance on large-scale interconnected systems. Large-scale failures of power grids, internet and so forth can cascade into catastrophic societal breakdown,” he said.

“These potentially globe-spanning networks need regulation, just as new drugs must be rigorously tested. And regulation is a special challenge for systems developed by multinational companies, which can bypass regulations just as they can evade a fair level of taxation.”

Institutions are plugging the digital dyke: global accounting and business firms like KPMG, universities and secondary schools worried about widespread cheating, and now our courts. Some are restricting ChatGPT’s use because of data sovereignty or privacy concerns. Others rightly point to the ethical dilemmas it throws up and block, licence, or curtail its use.

As these systems become increasingly capable, what will be left for traditional lawyers and judges to do? The Colombian judge argues: “I suspect that many of my colleagues are going to join in this and begin to construct their rulings ethically with the help of artificial intelligence.” He reasoned that ChatGPT performs services previously provided by a secretary, and did so “in an organized, simple and structured manner” that could “improve response times” in the justice system.

Susskind asked these same questions in 2008, in his injudiciously entitled book The End of Lawyers? One central theme was that some of the work of conventional legal advisers would eventually be taken on by artificial intelligence. He also maintained that opportunities would arise for adaptable lawyers. He expanded on that theme in his more benignly named book Tomorrow’s Lawyers. The third edition was published last month by OUP.

Strap yourselves in, brothers and sisters: we asked the latest version of the chatbot whether ChatGPT or any other AI system would be able to replace the skills of a lawyer or a judge. The computer’s answers may surprise you:

Prompt: Can ChatGPT replace lawyers and judges?

Answer: No, ChatGPT cannot replace lawyers and judges. While ChatGPT is a powerful language model that can generate responses to questions and provide information, it does not have the legal training and experience that is necessary to fulfil the roles of lawyers and judges.

Lawyers undergo extensive education and training in law and legal practice, and they are licensed by governing bodies to practice law. They provide legal advice, represent clients in court, negotiate settlements, and perform a range of other legal services. Judges, on the other hand, are appointed or elected officials who preside over legal proceedings and make decisions based on the facts of the case and the applicable law. They must have a deep understanding of the law and legal precedent, as well as the ability to weigh evidence and make fair and impartial decisions.

While ChatGPT can provide information and insights that may be helpful to lawyers and judges, it cannot replace their unique skills and expertise. Legal decisions often require a nuanced understanding of the law, and the ability to apply it to specific cases in a fair and impartial manner. These are skills that require the experience and training that only humans can provide.

Prompt: Can ChatGPT be a useful tool for lawyers and judges?

Answer: Yes, ChatGPT can be a useful tool for lawyers and judges in a variety of ways.

First, ChatGPT can provide quick access to legal information and resources. Lawyers and judges often need to look up legal statutes, cases, and other relevant information, and ChatGPT can quickly provide answers to legal questions and help with legal research.

Second, ChatGPT can assist with legal drafting. Lawyers often need to write legal documents such as contracts, briefs, and motions. ChatGPT can help with drafting by suggesting language, identifying legal terms and jargon, and providing feedback on grammar and syntax.

Third, ChatGPT can assist with case management. Judges often need to manage large caseloads, and ChatGPT can help with tasks such as scheduling, document management, and record keeping.

However, it’s important to note that ChatGPT is not a substitute for the expertise and judgment of lawyers and judges. While it can be a helpful tool, it’s important to verify any information obtained through ChatGPT and to use professional judgment in making legal decisions.

Prompt: Describe potential use cases for GPT-3 in the legal industry.

Answer: GPT-3 has the potential to be used in a variety of ways in the legal industry. Some potential use cases include:

  1. Legal research: GPT-3 could be used to assist lawyers in legal research by quickly scanning through large amounts of text data and providing relevant information on a given topic.

  2. Document generation: GPT-3 could be used to generate legal documents such as contracts and briefs, saving lawyers time and effort.

  3. Providing general legal information: GPT-3 could be used to provide general legal information to the public, such as answering frequently asked questions or providing basic legal advice.

  4. Legal analysis: GPT-3 could be used to assist in legal analysis by providing suggestions and insights based on its understanding of the relevant legal principles and precedent.

Overall, the potential uses of GPT-3 in the legal industry are wide-ranging and could greatly assist lawyers and legal professionals in their work.

While, for now at least, the chatbot does not consider itself reliable enough to give legal advice and replace the role of lawyers and judges, it does confirm that it could be a useful tool for accessing information, drafting, and case management: all helpful to swifter justice, as predicted by Professor Susskind back in 2008 and anticipated in Colombia. However, AI systems should be used ethically to improve justice, without making the ‘bots’ more important than the person using them or, most importantly, the person affected by the prompt and the artificial answer provided.

Mitigating the risks of this most powerful technology before seizing the opportunities it presents for easier access to justice appears to be the way forward. Perhaps in 2024 we should declare a year of digital enhancement training for judges to accompany the Katz Committee’s AI interim acceptable use policy for judges and clerks. Finally, just in case you think you can ignore robots, read this illustration of AI coming to a court near you, from the Law Society Gazette of 29 May (that’s last week!).

Gerard.

LiP presents false citations to court after asking ChatGPT

The Law Society Gazette, by John Hyde, 29 May 2023

A litigant in person tried to present fictitious submissions in court based on answers provided by the ChatGPT chatbot, the Gazette has learned. 

The civil case heard in Manchester involved one represented party and one unrepresented: proceedings ended for the day with the barrister for one side arguing there was no precedent for the case being advanced.

The Gazette understands that the following day the LiP returned to court with four case citations, each backing the point they were trying to make. But on closer inspection by the barrister, it transpired that the four cases were not as irrefutable as first appeared.

One case name had simply been fabricated, while the other three were real case names but with the relevant cited passages being completely different to the judgment in each. For all four citations, the paragraphs quoted were entirely fictitious, though they appeared legitimate.

It is understood that the judge quizzed the litigant in person, who admitted they had asked the AI tool ChatGPT overnight to find cases that could prove their argument.

The chatbot appears then to have delved into a bank of case names and created excerpts purportedly from these, which responded to the question asked of it. The judge accepted the misleading submissions were inadvertent and did not penalise the litigant. 

The case highlights the potential influence of AI in court proceedings, particularly when one or both parties is unrepresented.

There have been reports from Colombia and India this year that judges have used ChatGPT to help make rulings.

Websites dedicated to artificial intelligence in law boast they can create legal documents online and generate legal text output that is both accurate and natural-sounding. Many firms are already using 'large language models' such as ChatGPT to create legal marketing content and draft legal documents.

The Judicial Office says it provides training resources, including a handbook for litigants in person, to equip judges and LiPs with the necessary information to ensure parties understand court proceedings and what is expected of them. Any use of false documentation is taken ‘extremely seriously’ and action is taken where necessary.

A spokesperson added: ‘The Judicial College regularly reviews training content and guidelines in line with modern developments in legal practice and legislation. Appropriate updates are made to ensure we provide the best support to judges, including guidance on how to respond to developments in technology.’
