AI in Europe: An Evolving Landscape
What is "AI," how it is being regulated in Europe, and what does the legal future hold for AI-powered software like CLM?
Whether it’s detecting a visitor at your door with your Ring doorbell, identifying a song on the radio with your Shazam app, or unlocking your phone with facial recognition software – artificial intelligence (AI) is everywhere in our lives. What was once touted as the future is now the reality of today.
As with most major technological advancements, the explosion of AI in our daily lives raises important questions and possibilities, not just for individuals, but also for businesses and organisations. Although there is much to celebrate about the emergence of AI, it also has the potential to pose substantial risks, like magnifying inherent bias, fetching inaccurate data, or, worse, making fatal safety errors. This has led to an emerging regulatory landscape for AI, particularly within the European Union. With it comes a heightened cost and risk of compliance, and the threat of missed obligations for organisations of every shape and size.
Let’s start with the basics: what “AI” actually is, how it is being regulated in Europe, and what the legal future holds for AI-powered software like contract lifecycle management (CLM).
“Artificial intelligence,” also known simply as AI, is an umbrella term for a variety of different technologies, including machine learning; natural language processing; review, analysis and extraction; and creative generation.
The four main types of AI are:
Manual contracting and review are time-consuming processes that are prone to human error. CLM systems powered by AI, however, are eliminating manual tasks and changing the way contracting professionals handle contract creation and negotiation.
The four main ways AI will affect legal agreements are:
The European Commission proposed the “Artificial Intelligence Act,” the first regulatory framework for AI, in April 2021. The framework proposes that different applications of AI should be analysed and classified according to the risk they pose to users. The risk assessment of each application will determine the level of regulatory requirements imposed.
For example, a “high-risk” application of AI would be anything related to safety – such as automobiles, medical devices, or elevators. All high-risk AI systems, according to this framework, would need to be assessed before being put on the market, and constantly reevaluated throughout their lifecycle.
It is not yet clear where regulators will place AI-powered CLM on this risk spectrum, but legal software is not named specifically in the EU’s product safety legislation, nor in the eight other specific use cases outlined in the proposed regulations. It is up to each organisation to balance the risk of possible regulation against the risk of not innovating with AI to transform contracting. In our view, regulations can be managed with a reputable technology partner, but the risk of not innovating in a space that’s moving so fast is far greater.
It’s worth noting that existing EU frameworks such as the General Data Protection Regulation (GDPR) stipulate that individuals cannot be subject to a decision with a legal impact on them if that decision is made solely by automated processes, such as AI. Of course, legal agreements aren’t finalised by automated processes: they are still negotiated and signed by real people. As such, the contracting process likely wouldn’t be covered by these restrictions.
The Artificial Intelligence Act is expected to be finalised by the end of calendar year 2023. Once approved, it will provide the world’s first comprehensive rules on AI, creating a clear set of rules of the road for all as AI innovation accelerates.
As with any emerging technology, there are risks and limitations associated with using AI.
For example, generative AI systems still suffer from problems like catastrophic forgetting, in which newly learned information overwrites what the model previously knew, and a lack of transparency, because the model cannot explain why it made a particular decision.
Another risk, particularly for European users, relates to language. An AI model’s behaviour depends on the natural language of the content it reviews, which means that sentence structure, grammar, spelling, and the characters used (e.g. é, ß, ø, å) can all affect the accuracy of the model. Most AI models for CLM are trained on American English and U.S. law documents: they may perform extremely poorly when applied to other languages or dialects.
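The character problem is easy to demonstrate. The toy sketch below (purely illustrative; `naive_ascii_fold` is a hypothetical preprocessing step, not any vendor's actual pipeline) shows how a naive ASCII-only normaliser silently mangles a German clause, degrading whatever an AI model extracts downstream.

```python
import unicodedata

def naive_ascii_fold(text: str) -> str:
    """Hypothetical naive preprocessing: decompose Unicode, then drop
    anything that has no ASCII equivalent."""
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", errors="ignore").decode("ascii")

clause = "Der Auftragnehmer haftet gemäß § 823 BGB für Schäden."
folded = naive_ascii_fold(clause)
print(folded)
# 'ä' degrades to 'a'; 'ß' and the statutory symbol '§' vanish entirely,
# so the reference to § 823 BGB is corrupted before the model ever sees it.
```

Accented vowels at least survive in degraded form, but characters like ß and § are simply lost, which is why model accuracy claims should always be checked against the languages actually present in your contract portfolio.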
Some CLM vendors offer to translate documents into English before applying their AI capabilities, creating the risk that errors introduced in translation corrupt the AI’s output. For example, the term “bug,” when used in a software contract, could first be translated as “insect,” leading to all manner of potential challenges. Before implementing any AI solution, organisations should fully research which languages the models are trained on, and whether they will be able to train the models on their own contract data to improve accuracy.
Lastly, AI relies on content created by human input, which inevitably carries biases. This was readily apparent with the resume-sorting algorithm used for a short time at Amazon: a system meant to cut down on manual processing of resumes ended up systematically penalising female applicants.
To truly harness the power of AI while mitigating risk, it’s critical that legal departments feel empowered to take control of their AI environments, finding an acceptable balance between risk and reward.
Self-trained AI models bring controls “inside the tent” of legal departments. Legal professionals use their own internal files to “train” the AI models, using their own documents and resources to identify key terms and clauses that are uniquely valuable to their organisation and industry. For example, biotech organisations might not just need to know if a contract has an indemnity clause, but whether it has a specifically formulated pharma R&D indemnity clause.
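To make the idea concrete, here is a deliberately simplified sketch of “training on your own documents”: a toy bag-of-words matcher that learns clause categories from a handful of labelled in-house examples. All clause snippets and labels are invented for illustration; real CLM systems use far more sophisticated models, but the workflow (label your own clauses, train, classify new text) is the same.

```python
from collections import Counter
import re

def tokens(text):
    """Lowercase word tokens from a clause."""
    return re.findall(r"[a-z']+", text.lower())

def train(examples):
    """examples: list of (label, clause_text) pairs labelled by the legal team.
    Returns a per-label token-frequency profile."""
    profiles = {}
    for label, text in examples:
        profiles.setdefault(label, Counter()).update(tokens(text))
    return profiles

def classify(profiles, text):
    """Assign the label whose profile overlaps most with the new text."""
    counts = Counter(tokens(text))
    overlap = lambda p: sum(min(counts[t], p[t]) for t in counts)
    return max(profiles, key=lambda label: overlap(profiles[label]))

# Invented in-house training snippets, labelled by clause type.
examples = [
    ("indemnity", "The Supplier shall indemnify and hold harmless the Customer."),
    ("indemnity", "Licensor agrees to indemnify Licensee against third-party claims."),
    ("confidentiality", "Each party shall keep the Confidential Information secret."),
    ("confidentiality", "Recipient must not disclose Confidential Information."),
]
model = train(examples)
print(classify(model, "Vendor shall indemnify Buyer against all claims and losses."))
# → indemnity
```

The point of the sketch is the control it illustrates: the legal team chooses the labels, supplies the examples, and can add categories (such as a pharma R&D indemnity clause) that no off-the-shelf model would know to look for.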
When implemented in a targeted, context-specific way, AI can virtually clone the best of the group’s resources, becoming a force multiplier that allows an organisation’s staff to get most repetitive tasks done quickly and efficiently, freeing them to focus on negotiating the best deals at the lowest possible levels of risk.
The era of artificial intelligence is here in Europe, bringing with it exciting opportunities and challenging risks. AI is poised to transform contract management, especially when embedded in a sophisticated CLM system. As with any emerging technology, the regulatory considerations are still in flux, bringing a range of challenges that will settle only with time, especially in the EU. Yet, for all those challenges, the opportunity to do more with less, using bespoke algorithms built by the very best practitioners, will surely transform the contracting process.