Haynes Boone lawyers Fiona Cain, Jack Spence, Michael Mazzone and Harry Phillips authored an article for Mealey’s International Arbitration Report exploring the use of artificial intelligence by arbitrators and what that means for advocates.
Legal publications on both sides of the Atlantic have been replete with salutary warnings to the unsuspecting as to the potential pitfalls of lawyers using generative AI in their practice. Perhaps the most severe warning came from the United States District Court for the Northern District of Alabama, where three lawyers were disqualified from working on a case and ordered to provide copies of the sanctioning order to all their clients, as well as to opposing counsel and the judges in other cases they were working on.1
But what about the use of AI by those on the other side of the bench? This is a question of particular relevance given the introduction by the American Arbitration Association (AAA) of “WebFile AI Assist”, a tool which enables arbitrators to summarise filings. If its adoption is successful, we are likely to see the increased use of similar tools in other dispute resolution settings.
In this article we consider the ethical issues raised by arbitrators’ use of these tools, together with insight into how those issues are managed in practice. We also consider the implications that the continuing adoption of similar solutions may have for the practice of disputes lawyers.
Ethical Considerations For Arbitrators Using AI Tools To Summarise Pleadings, Briefs, And Other Submissions
The principal issue for arbitrators using such a tool is the risk of it inaccurately summarising a party’s submissions or other filings. Parties to arbitration proceedings expect, and are entitled to demand, that the dispute be determined on the basis of their submissions as they actually are, rather than on what a piece of software considers those submissions to say.
Arbitrators must ensure that the summaries produced by AI are treated as nothing more than a starting point to help them get up to speed, rather than as a substitute for proper consideration of the papers themselves. Concerns about impartiality, independence and due process are identified in the guidelines on the use of AI issued by the Chartered Institute of Arbitrators earlier this year. Those guidelines do not relate to any specific AI tool, but they do recommend that arbitrators using AI retain responsibility for all aspects of the award.

This need is also reflected in guidance on the use of AI tools by the judiciary issued by, inter alia, the Lady Chief Justice of England and Wales and the Master of the Rolls.2 The core takeaway from that guidance is an adapted version of the Reaganite maxim: trust (but not blindly), and verify, the outputs of AI tools. The application of these guidelines by English judges can be seen in the judgment of Tribunal Judge McNall, sitting in the First-tier Tribunal (Tax Chamber), who explained that he had “…used AI to summarise the documents, but… satisfied myself that the summaries - treated only as a first-draft - are accurate”.3 He also recorded that he had not used AI for legal research.

The AAA itself issued guidance on arbitrators’ use of AI tools earlier this year, which likewise recognises the need for arbitrators to “critically evaluate and verify outputs” to ensure accuracy and reliability, and to ensure that “their decisions reflect their independent evaluation and reasoning”.4
This is particularly important given the risk of AI tools hallucinating in their outputs, as has been seen in the raft of criticisms from the courts and the disciplinary proceedings facing various lawyers. The risk is not one faced solely by lawyers. Following a congressional inquiry led by Senator Grassley, chairman of the US Senate Judiciary Committee, two federal judges confirmed that they had issued orders containing, and based on, AI hallucinations that had not been introduced by the advocates in the proceedings.5
A distinct, but related, concern is the risk of bias being introduced through AI tools. AI tools are trained on existing datasets and, where those datasets contain biases (whether based on sex, race or nationality), those biases are likely to be reflected in the models’ output. While some providers have made efforts to curate training data more carefully to mitigate this problem, it remains one that arbitrators must be alert to – if for no other reason than their statutory duty to “act fairly and impartially as between the parties” in the United Kingdom6 or their obligation in the US to comply with the relevant code of ethics and maintain the principles of fairness and due process.7
What Impact Does This Have On The Work Lawyers Will Perform?
The most immediate change we are likely to see from lawyers, if tools like the AAA’s continue to be adopted, is that they will (and probably should, to promote the best interests of their clients) structure their written submissions in a way that AI systems can properly parse and summarise. Just as jury research has been refined to a science in the United States, we may see the rise of a parallel body of “AI research” literature, offering guidelines and tools to help draft submissions that AI systems will summarise accurately.
That does, of course, raise the question of whether lawyer and arbitrator time might be better spent by parties simply producing a usable executive summary at the beginning of substantial submissions. In the meantime, however, if lawyers are aware that their submissions may be the subject of AI summarisation, it would seem sensible to (at least) run them through accessible and appropriate tools – such as Harvey or Legora – and consider whether the summary produced identifies the issues most likely to support a favourable outcome, as sketched below.
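By way of illustration only: Harvey and Legora are used through their own products, and nothing below reflects their actual interfaces. The following is a minimal sketch of such a pre-filing check using a general-purpose LLM via the OpenAI Python SDK, with the model name, prompt and file name as illustrative assumptions.

```python
# A minimal sketch of a pre-filing summarisation check, assuming access to a
# general-purpose LLM through the OpenAI Python SDK. The model name, prompt
# and input file are illustrative assumptions, not any vendor's interface.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarise_submission(text: str) -> str:
    """Return a first-draft summary of a written submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": ("Summarise this arbitration submission for a "
                         "tribunal, listing the key issues, the relief "
                         "sought and the principal arguments.")},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    with open("draft_submission.txt") as f:  # hypothetical draft filing
        draft = f.read()
    print(summarise_submission(draft))
    # Counsel then checks whether the summary foregrounds the issues most
    # likely to support a favourable outcome; if not, restructure and re-run.
```

The point of the exercise is not the particular tool but the feedback loop: if an AI summary of a draft buries the arguments counsel most wants the tribunal to see, the draft itself may benefit from restructuring.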
What Next?
In addition to the AAA’s AI Assist tool, the AAA-International Centre for Dispute Resolution has announced that it will release an “AI-native” arbitrator in November which will prepare draft awards. Although the tool is intended only for documents-only construction cases, where there is already an extensive database of construction awards, the intention is for human arbitrators to review and, if necessary, revise any award to “validate results, safeguarding trust, transparency, and due process”. It does, however, raise the issue, discussed by the Master of the Rolls in his speech at the Legal Geek conference in London in October, that awards generated using only a database will not be influenced by developments that would normally be reflected in the human thought process.
Conclusion
Whatever the teething issues experienced as practitioners become used to, and learn the limitations of, AI tools, those tools are likely here to stay. The introduction by the AAA of its AI Assist tool is a reminder to arbitrators to stay up to date with technical tools, but also a nudge to practitioners to consider how their drafting will be interpreted and summarised by them.
1 Order of Judge Anna M. Manasco dated 23 July 2025, Case 2:21-cv-01701-AMM, Document 204.
2 Courts and Tribunals Judiciary, Artificial Intelligence (AI): Guidance for Judicial Office Holders (31 October 2025).
3 Evans & Ors v Revenue and Customs [2025] UKFTT 1112 (TC) at [48].
4 AAA-ICDR, Guidance on Arbitrators’ Use of AI Tools (2025), https://go.adr.org/rs/294-SFS-516/images/2025_AAA-ICDR%20Guidance%20on%20Arbitrators%20Use%20of%20AI%20Tools%20%282%29.pdf?version=0
5 https://www.judiciary.senate.gov/press/rep/releases/grassley-scrutinizes-federal-judges-apparent-ai-use-in-drafting-error-ridden-rulings
6 Arbitration Act 1996, s 33.
7 Code of Ethics for Arbitrators in Commercial Disputes.