
Artificial intelligence is already being deployed in arbitration for document review, evidence organization, and drafting, offering speed and cost savings. Yet adoption is outpacing regulatory guidance: South African bodies have issued only soft guidelines, and international rules lag behind. Risks include data security breaches, algorithmic bias, hallucinated citations, and erosion of lawyer expertise, any of which could jeopardize award enforceability under the New York Convention. A recent US case illustrates courts' willingness to scrutinize AI-assisted awards.
The arbitration landscape is being reshaped by AI tools that can sift through thousands of pages of evidence in minutes, dramatically cutting costs and shortening timelines. Law firms and arbitrators are drawn to these efficiencies, especially in jurisdictions where traditional e-discovery remains expensive. However, the technology's rapid diffusion has outstripped formal rule-making, leaving practitioners to rely on soft guidelines from bodies such as the Association of Arbitrators (Southern Africa) and the Chartered Institute of Arbitrators (CIArb). This regulatory vacuum creates uncertainty about what practices are acceptable and where liability falls.
Beyond cost, the substantive risks of AI in dispute resolution are profound. General-purpose models often lack safeguards for privileged information, exposing confidential data to third-party training sets. Algorithmic bias, inherited from historical datasets, can skew arbitrator selection or fact-finding without detection. More insidious is "hallucination," where a model fabricates case citations or subtly misstates facts, eroding the credibility of awards. Under the New York Convention, courts may refuse to enforce awards where procedural fairness is compromised by opaque AI reasoning, as highlighted by the LaPaglia v. Valve Corp. dispute.
Looking ahead, the industry must balance innovation with accountability. Emerging best‑practice frameworks stress that AI should serve as a supportive tool, not a decision‑maker, with human arbitrators retaining ultimate responsibility. Practitioners should adopt legal‑grade AI platforms that meet recognized security standards, implement rigorous verification protocols, and disclose AI usage to all parties. By embedding robust oversight, the arbitration community can harness AI’s productivity gains while preserving the integrity and enforceability of its outcomes.