Over the last few years, the rapid rise of generative AI has permeated nearly every sector of society, and mediation is no exception. Last year marked a definitive turning point: 2025 was the year AI shifted from a futuristic concept to an active presence in the mediation room, fundamentally reshaping how we approach conflict.
Discussions within forums such as the UNCITRAL Colloquium on the Use of Artificial Intelligence (AI) in Dispute Resolution and Remote Hearings in Arbitration and Mediation, convened by the United Nations Commission on International Trade Law, demonstrate not only the growing practical application of AI in dispute resolution but also the institutional recognition of its significance. By formally addressing procedural guidance and best practices for AI integration, states and international bodies are signalling that AI in mediation is no longer a theoretical discussion; it is a pressing, policy-level priority. Such initiatives underscore both the relevance of AI within the dispute resolution field and the concrete steps which are being taken by governments, international institutions, and mediation organisations to shape its responsible and effective use.
While AI offers significant advantages in efficiency, it also raises critical concerns regarding confidentiality, bias, and the potential loss of the “human element” at the heart of conflict resolution. The benefits are already becoming tangible: for instance, AI can provide instantaneous document summarization, analyze vast sets of case law or data sets to suggest objective criteria, and even offer real-time sentiment analysis to help mediators identify when a party’s emotional state is shifting.
However, these capabilities call for a careful and considered approach. This article explores the intersection of AI and mediation, inspired by the inaugural session of the IMI Global Mediation Dialogues, “Minds vs. Machines,” held in October 2025, during which we witnessed live simulations pitting the human mind’s intuition against the processing power of AI, and by the unprecedented pace of development we are witnessing. Below, we analyze the current climate, evaluate the relationship between automation and the human element, and reflect on the benefits and risks that may define the next chapter of mediation in 2026 and beyond.

Background of Generative AI and the current landscape
The term Artificial Intelligence (AI) generally refers to computer systems that perform tasks typically requiring human intelligence, such as learning, problem-solving, decision-making, and understanding language, by analyzing data and recognizing patterns. AI has been present in the legal sector for some time, including in alternative dispute resolution, and its rapid development over the last year has brought it markedly increased popularity.
Several tools have been developed with the objective of supporting mediators to carry out their daily tasks, such as summarising documents, making a first draft of an agreement, and summarising meetings. Among various innovative tools, we can mention MAIA, developed and used by the Singapore International Mediation Centre. Due to the availability of such tools, AI is increasingly utilised by professionals to support the mediation process. Nevertheless, with these developments, concerns and challenges arise when it comes to ethics and maintaining the humanity of mediation in this digital era.
The integration of AI in mediation introduces significant risks regarding algorithmic bias and data privacy, as systems may inadvertently mirror historical prejudices or compromise the confidentiality of sensitive negotiations. Furthermore, the “black box” nature of these tools can obscure the reasoning behind a suggested resolution, threatening the transparency essential to the process. While experts like Professor John Lande in his blog “Indisputably” have recognized the various ways AI can assist mediators when used correctly, there remains a profound concern regarding the loss of the “human factor”: the empathy, intuition, and emotional intelligence required to build trust. As James South and Andy Rogers observe, mediation is “about bringing the human element back into disputes and litigation.” Ultimately, as Zachary R. Calo of the IMI Ethics Committee suggests, “the humanity of mediation might illuminate the ontology of mediation and lead, in turn, to new insights about ethical dispute resolution in a time of technological disruption.”
The effort to maintain human-centric mediation and ensure safeguards
In light of the above obstacles and risks, several institutions have recognized the need for further guidance on ethical use and safeguards against the less positive aspects that the use of AI might bring. This is particularly pertinent for the mediation process, in which one of the core characteristics is the fostering and preserving of human relationships through the resolution of disputes. Below we highlight important initiatives and documents aiming to ensure the ethical and responsible use of AI.
I. IBA Guidelines on the Use of Generative AI in Mediation:
The International Bar Association’s (IBA) guidelines on generative AI in mediation, published on 19 June 2025, represent perhaps the most significant milestone in modernizing dispute resolution internationally, while fiercely protecting its core values. By shifting the conversation from unregulated experimentation to a structured framework, these guidelines provide the industry with its first gold standard for digital integration. At the heart of this leap forward is the preservation of the human factor; the IBA makes it clear that while AI offers an opportunity to facilitate mediation by improving efficiency, reducing costs, and broadening access to justice, it must be integrated into mediation with appropriate safeguards, given its risks. The guidelines frame AI as a sophisticated co-pilot that can facilitate administrative tasks and synthesize and analyse information, while always safeguarding party autonomy, a balanced process in which all parties have an equal chance to participate and express their views, privacy and confidentiality, and the neutrality, impartiality, and independence of the mediation process.
II. EU AI Act:
The European Union’s AI Act establishes a comprehensive, risk-based legal framework for AI, classifying systems based on the potential harm they can cause. While mediation-specific rules are not explicitly detailed, AI systems used in the administration of justice and democratic processes are designated as “high-risk,” subjecting them to stringent mandatory requirements. These requirements focus on data quality, transparency, clear documentation, and, most importantly, human oversight to ensure fundamental rights and ethical principles are preserved, preventing issues like harmful manipulation or the perpetuation of bias by AI systems.
III. American Arbitration Association (AAA) Leadership:
AAA, an IMI co-founding and Board Member organisation, is one of the institutions that has taken a leadership role in the subject of AI by organising the AAA-ICDR’s Future Dispute Resolution conference series (see our report on the conference in The Hague and the report on the Conference in New York) with the objective of sharing cutting edge ideas and educating professionals.
Furthermore, besides participating in and contributing to such events, AAA developed the AAAi Standards for AI in ADR. Published on May 9, 2025, these standards provide a comprehensive framework for the responsible use of AI, highlighting ethical and human-centric values. This framework ensures that AI deployment remains under human oversight, aligning with appropriate professional standards. The AAAi Standards officially supersede the Principles Supporting the Use of AI in Alternative Dispute Resolution, released in November 2023, which served as a foundation by emphasizing competence, confidentiality, and impartiality.
The new standards offer specific guidance for neutrals, advocates, and ADR administrators to ensure that AI usage is both ethical and effective. Key pillars include prioritizing the privacy and security of all involved parties while maintaining a high level of accuracy. To ensure reliability, it is required that AI outputs are consistently verified by humans to meet industry benchmarks. Furthermore, the standards mandate explainability and transparency regarding AI usage, alongside a commitment to accountability. This focus on adaptability ensures that technological knowledge remains current and effective in a rapidly evolving landscape.
IV. Instituto De Certificação e Formação De Mediadores Lusófonos (ICFML) Professional Guide on AI:
ICFML, an IMI Qualifying Assessment Program (QAP), has also taken action on this matter by drafting the ICFML Professional Practice Guide for Ethical and Responsible Use of AI in Mediation (in Portuguese: Guias ICFML de Prática Profissional em Mediação) to provide oversight and guidance for Lusophone mediation professionals. It focuses on three main pillars: Essential Foundations, Responsible Practice, and Continuous Development and Responsibility. It also covers topics such as how AI can be used as a co-pilot for mediation, ensuring confidentiality, data protection, and transparency, while addressing ethical concerns and the importance of strategic and human integration of AI.
V. Resolution Institute (RI) Australia Draft on AI in Mediation:
Resolution Institute (Australia and New Zealand), another IMI QAP, drafted a policy to guide members and the wider profession, focusing on AI notetaking tools in mediation. This initiative involved a public consultation in 2025 on the draft policy to gather feedback on proposed safeguards. The primary aim was to establish clear, practical requirements for mediators, parties, and institutions while balancing the benefits of technological innovation with the need to protect the integrity of dispute resolution.
While the current framework sets a benchmark for best practices in note-taking, the Institute intends to address broader applications like translation and agreement drafting in the future. The formal feedback period for this initiative lasted until October 17, 2025, during which mediators, parties, and various industry stakeholders were invited to provide submissions to help shape the draft.
VI. The NCTDR-ICODR Guidance for Third Parties Using AI in Dispute Resolution:
Published on April 8, 2026, the Guidance for Third Parties Using Artificial Intelligence in Dispute Resolution serves as a vital extension of the 2022 NCTDR–ICODR Online Dispute Resolution Standards. This framework moves beyond general ethics, providing a practical roadmap for neutrals and platforms to ensure that AI tools enhance, rather than undermine, the principles of fairness and accountability. By focusing on transparency, human oversight, and the mitigation of algorithmic bias, these standards ensure that even as we embrace the efficiency of AI, the core “human” element of justice remains protected.
Conclusion and Reflection About the Future
Given the rapid pace of technological development worldwide, the integration of AI into alternative dispute resolution is no longer a future possibility; it is a present reality. It is crucial that mediators and all parties involved are informed about these developments. Taking the necessary steps ensures that the ethical concerns raised by these technologies do not become obstacles to enjoying their benefits. We must not turn our backs on technology, especially when it can contribute positively to a process like mediation. However, we must remember that dispute resolution methods are characterized by human interaction and judgment, which is what makes them so valuable. Therefore, it is vital to learn how to use these innovations without losing sight of the human element and the safeguards necessary to uphold a quality process.
With organizations and institutions across the field already engaging with these developments, we are prompted to reflect: what is our collective responsibility in shaping this transition? Apart from sharing the above-listed initiatives, how else can the International Mediation Institute further ensure that emerging technologies are integrated into mediation in a way that upholds professional standards, ethical safeguards, and the human core of the process? As the landscape evolves, so too must our consideration of how best to guide and support the global mediation community. We warmly welcome your input!
Bibliography and Links of Interest
- “Minds vs. Machines,” event post.
- AI tool MAIA, youtube video.
- Singapore International Mediation Centre website.
- Indisputably, blog by John Lande.
- South, James, and Andy Rogers. “What Might Artificial Intelligence Mean for Alternative Dispute Resolution?” Kluwer Mediation Blog, 30 Aug. 2018.
- Calo, Z. R. (2024). Artificial intelligence and mediation ethics. Cardozo Journal of Conflict Resolution.
- IBA website.
- The International Bar Association’s (IBA) guidelines on generative AI.
- The European Union’s AI Act.
- Report on AAA-ICDR’s Future Dispute Resolution conference (New York Edition).
- Report on AAA-ICDR’s Future Dispute Resolution conference (The Hague Edition).
- AAA website.
- AAAi standards.
- ICFML (Instituto De Certificação e Formação De Mediadores Lusófonos), QAP Program Page.
- ICFML Professional Practice Guide for Ethical and Responsible Use of AI in Mediation.
- RI’s Consultation Draft on the use of AI note-taking in mediation.
- NCTDR-ICODR Online Dispute Resolution Standards.
- NCTDR-ICODR Guidance for Third Parties Using Artificial Intelligence in Dispute Resolution.
- NCTDR website.
- ICODR website.
The above text was drafted by Victoria Peña Morante, IMI Intern, under the guidance of Ivana Ninčić Österle, IMI Executive Director.


