Ongoing AI fails in legal arena a warning for all advisers

By Matthew Burgess, director, View Legal
February 12 2025

In the modern era, professional services firms have not been known for early adoption of innovation - consider the lag time, compared with many other industries, in areas such as outsourcing, offshoring, working from anywhere and abandoning time recording.

In at least some parts of the legal industry, however, generative artificial intelligence (AI) and large language models have seemingly proved attractive from the very earliest days of the technology.


Initial court cases

For example, in the 2023 US decision of Mata v. Avianca, Inc., No. 1:2022cv01461 - Document 54 (S.D.N.Y. 2023), the court had to consider the impact of so-called 'AI hallucinations' - the citing of fake cases (also variously referred to as AI bullsh%$#ing, confabulation or delusion, where AI produces plausible-sounding but entirely fabricated information).

The court confirmed a range of concerns in relation to misused AI, including:

A. There is potential harm to the reputation of judges and courts whose names are falsely invoked as authors of bogus opinions and to the reputation of a party attributed with fictional conduct.

B. It promotes cynicism about the legal profession and judicial system.

C. Future litigants may be tempted to defy a judicial ruling by disingenuously claiming doubt about its authenticity.

The court was particularly blunt in its assessment of the lawyers involved who, having been questioned about the accuracy of their cited 'cases', doubled down and maintained the correctness of their submissions - before, some time later, finally deciding to 'dribble out the truth' of the (unhelpful) role AI had played in their arguments.

The court found bad faith on the part of the relevant lawyers based upon acts of conscious avoidance and false and misleading statements to the court - with sanctions and fines imposed both on the lawyers involved and their firm.

Similarly, in April 2024, the ACCC successfully attacked all key aspects of the various schemes and products promoted by the DG Institute in the Federal Court decision of Australian Competition and Consumer Commission v Master Wealth Control Pty Ltd [2024] FCA 344.

Part of the case concerned submissions made by the defendants which contained (apparently due to an oversight) the wording 'use British spelling please, ChatGPT'.

On this aspect the court confirmed:

(a) the use of AI should not be regarded as a significant matter, given that the resolution in question (which had apparently been generated by AI) did not require the exercise of significant legal skills or judgment and instead 'appeared to be the kind of thing which AI is capable of producing effectively';

(b) AI does have a role to play in certain aspects of legal drafting;

(c) the important aspect, in circumstances where AI is used, is that any such draft is scrutinised and settled by a legal practitioner.

January 2025 court statement

In late January 2025, there were further warnings about the need for all professionals to tolerance test their leveraging of generative AI solutions and the (rapidly) evolving large language model platforms.

First, the NSW Supreme Court evolved its position from the essentially blanket ban on using AI in court proceedings it imposed in November 2024 to a more flexible approach.

In its updated AI practice note, the court confirmed its view on a range of issues, including making the following points (each of which has been summarised):

1. Information that is the subject of a statutory prohibition upon publication must not be entered into any Gen AI program, unless the legal practitioner or person responsible for the conduct of the proceeding is satisfied that the information:

(a) will remain within the controlled environment of the technological platform being used and that the platform is the subject of confidentiality restrictions on the supplier of the relevant technology or functionality to ensure that the data is not made publicly available and is not used to train any large language models;

(b) is to be used only in connection with that proceeding (unless otherwise required or permitted by law to be disclosed or required to be reviewed by a law enforcement agency for policy purposes);

(c) is not used to train the Gen AI program and/or any large language model.

2. A Gen AI program may be used for any of the following purposes:

(a) the generation of chronologies, indexes and witness lists;

(b) the preparation of briefs;

(c) the summarising or review of documents and transcripts;

(d) the preparation of written submissions or summaries of argument.

3. Where Gen AI has been used in the preparation of written submissions or summaries or skeletons of argument, the author must verify, in the body of the submissions, summaries or skeleton, that all citations, legal and academic authority, and case law and legislative references:

(a) exist,

(b) are accurate, and

(c) are relevant to the proceedings.

4. Such verification must not be solely carried out by using a Gen AI tool or program.

5. Any use of Gen AI to prepare written submissions or summaries or skeletons of argument does not qualify or absolve the author(s) of any professional or ethical obligations to the court or the administration of justice.

January 2025 court decision

Second, the decision in Valu v Minister for Immigration and Multicultural Affairs (No 2) [2025] FedCFamC2G 95 has gained attention.

As in the Mata case, a lawyer relied on AI to generate various submissions and arguments that suffered from AI hallucinations - a fact discovered by the court.

In deciding whether to refer the situation to the professional body, the court made the following comments - all of which are arguably relevant to any professional engaging with AI:

1. provision of false case citations and quotes was in breach of duties to the client and the court;

2. by the time steps were taken to remedy the misleading submissions, the court had already spent a considerable amount of time attempting to locate the cases and the other party to the proceedings had prepared their submissions in reply, causing a delay in the proceedings and unnecessary additional work for all parties (and possibly also for the AI platforms ...);

3. there was a strong public interest in referring the conduct to the regulatory authority given the increased use of generative AI tools by lawyers, which is a live and evolving issue and for which many courts are yet to develop guidelines;

4. the misuse of generative AI is likely to be of increasing concern and there is a public interest in the bodies regulating professional conduct being made aware of misconduct as it arises; and

5. in this case it was in the public interest to refer the lawyer's conduct to the Office of the NSW Legal Services Commissioner, for that body to consider what action should be taken against the lawyer.
