Litig AI Benchmark Initiative - Transparency Charter
Introduction & Commitment
As the legal industry explores new and more effective ways to harness the potential of generative AI, we bear an individual and collective responsibility to do so responsibly, without compromising the high standards of the legal industry.
Note: the commitments in this charter do not require a signatory to disclose any commercially or legally sensitive information.
Our Commitments
As a signatory to this Charter:
Transparency commitment. We are committed to being transparent about how we use AI in our organisation and across the legal industry wherever doing so can improve trust in, and understanding of, AI, with the aim of promoting the adoption and the responsible and effective use of generative AI.
Commitments of providers of legal AI tools. As a provider of legal AI tools:
A. Claims and statements about efficacy / accuracy. Where we make claims or statements about the efficacy, benefits or accuracy of a tool, we will be transparent about this and provide explanations and evidence to support those claims. We will not knowingly make any claims or statements that are misleading, confusing or inaccurate.
B. Legal AI Model Cards. We will make a legal AI model card (based on the Litig AI Benchmarking model card template [link]) describing the tool’s underlying technology available to users, customers and potential customers of the tool. To the extent that any information in the legal AI model card is confidential, we may require recipients to enter into a confidentiality agreement before we share that information. [Note: the Working Group are still debating the extent to which model cards should be publicly available. We understand we need to balance the desire for transparency with vendors’ understandable need to keep commercially sensitive information confidential]
C. Quality, Compliance, Ethics and Bias. We will be transparent about the steps / measures taken: (i) to ensure data quality, relevance, and compliance with legal and ethical standards; and (ii) to identify, address, and mitigate biases.
D. Testing and test data. We will provide an overview of how a tool has been tested (including test data sample size and sources of test data). Wherever possible (without compromising any commercially or legally sensitive information), we will make available a representative sample set of the documents and data used to test those capabilities, so that the market may independently test and verify the results.
E. Use cases. We will be transparent about the tasks and use cases that our tools are designed for and the known limitations in relation to those use cases. Where we make statements about the efficacy of our AI features as they relate to those use cases, we will be transparent about the testing procedures, contexts, and caveats applicable to those statements.
F. Environmental impact. We recognise that the development and use of AI tools have an environmental impact for our organisation and for any organisation that uses our tools. We will be transparent about the steps we are taking to track and minimise our environmental impact (e.g. in terms of carbon emissions and water use), so that customers may take account of that impact when considering their own environmental goals.
G. Regulations & Industry Standards. We will be transparent about our adherence to relevant regulations (including the EU AI Act) and other legal industry recognised standards that are applicable to the commercial deployment of AI technologies, including in relation to information security, data privacy and professional obligations.
Use of AI to provide legal services. Where we use an AI tool to support the delivery of legal services, we will be transparent about those choices and about how the tools are used in the context of our work. We will maintain (and make available to our clients and regulators on request) a list of the AI tools in use, along with brief information about how each tool is used and how the risks associated with it are governed.
AI generated content / decisions. We will be transparent about any use of AI to enhance our services, e.g. to generate or review content, or to automate or support decision-making. For example, in some situations it may be appropriate to label AI-generated content, so that it is clear to the end user which content has been generated by AI and has not been reviewed by a qualified lawyer.