Ten Tips from Legal Risks & Opportunities in Generative AI
Here are ten tips from our summer panel, drawn from the comments of four legal professionals, all of whom have significant Generative AI experience:
Jessica Block, EVP, Market Solutions, Factor
Karen Buzard, Partner and Co-developer of Harvey, Allen & Overy
Wendy Callaghan, Chief Innovation Legal Officer, AIG
Christian Lang, Founder & CEO, LEGA
Moderator: Leigh Dance
Learning about Generative AI
1. You and your team will benefit from consistent use of a shared vocabulary covering both last-generation AI and Generative AI.
It’s wise to establish and regularly update your AI terms; a number of online resources can help, including the podcast Hard Fork (NY Times & Platformer).
It’s a continual, fast-moving learning exercise. Provide a safe structure for your people to try the tools out; hands-on use is itself a valuable educational experience. Know that Generative AI and Large Language Model tools will change quickly, with new capabilities sailing in from all directions. Legal has a role, along with other parts of the business, in helping employees continually learn by sharing experiences and lessons learned, with specific examples. Provide platforms where these learnings can be accessed virtually by many.
2. Rest assured that your peers are in learning mode too
At a recent event we asked in-house legal leaders to rate themselves as Beginner, Proficient or Advanced on Generative AI. They rated themselves as 20% Beginner, 72% Proficient and 8% Advanced. Those proficient today may not be so tomorrow—continual learning is necessary to keep up with developments.
3. A key part of learning is knowing how to craft the most effective prompt
A prompt is the way we ask the LLM tool to do something.
The prompt is a fundamental part of gaining value from any Generative AI tool, but there is no one-size-fits-all rule for prompting; it differs by tool and objective. Many tutorials are available online, including an excellent free one on prompting strategies for legal professionals from Legal Design School.
Legal Counsel on Generative AI
4. Avoid approaching Generative AI governance as an application, tool or technology project
It is much more far-reaching than that. Instead, recognize that most employees will need continual guidance to navigate the broad changes that AI brings.
For stretched legal departments, this is not easy. Generative AI is multi-disciplinary, and Legal leaders will need to develop guidelines and make decisions in coordination with many across the enterprise. Providing basic Gen AI guidelines for employees should happen quickly. The most expedient route may be for Legal to find the best-suited partner, e.g., your AI function if you have one. Law firms can also give valuable support. Prepare to update your guidelines frequently as the business gains clarity around key concerns, opportunities and common issues.
5. Continually remind others about the weaknesses, benefits and vast differences in Large Language Models (LLMs)
In-house lawyers will have many opportunities to guide internal clients on appropriate use of LLM tools, and it shouldn’t all be negative. But it is important to continually remind folks, for example:
These tools often don’t tell the truth when they’re stumped (put “hallucination” in your AI lexicon);
LLM tools pull from trillions of data points to respond to your prompt, and their responses tend toward the statistical mean of that data. This can result in inherent bias in responses, a major weakness of current Large Language Model tools. State and national regulators are aware of unfair bias risks and are taking steps to regulate.
6. Keep track of the data inputs of various LLMs and parameters they use
Data input methodology varies extensively among LLM tools, a result of the different objectives of each tool. Enterprises considering AI tools should focus on where the data comes from, who owns it, who controls it, and how to protect it. This brings into play a familiar law of technology: security and speed tend to be inversely related. Things are moving very fast, and serious security issues and breaches are likely; at this stage they are probably unavoidable, and we will hopefully learn from others’ issues rather than our own.
Risks now and regulation rising
7. Repeat, repeat and repeat again
“When you have questions, before you go further, reach out to these experts.” Those who have launched proprietary AI tools recommend distributing continual reminders to the team, along with frequent oversight to ensure people are following the permitted usages of the tools.
A process with identified resources to oversee and respond to questions is necessary. Your in-house team should know what kinds of questions or issues to escalate and to whom. Don’t treat all tools or issues as if they were the same.
For the coming months, Legal will need to applaud positive use of these tools, including suggested and approved use cases, and repeatedly remind people about the pitfalls. The legal team should be armed with helpful suggestions on using AI responsibly and real stories of what can happen when it's misused. Usually, the best action is to defer to a multi-disciplinary team of professionals experienced with AI.
8. Prepare for more specific regulation of Generative AI
Responding to regulation and understanding regulatory risk is again a multi-disciplinary activity, with different disciplines in the enterprise called on to evaluate the regulatory risks relevant to them. We can expect regulatory scrutiny to vary by industry sector and by the specific purpose for which AI is used. One example of sector-specific regulation is the current draft Colorado insurance regulation on the use of predictive models that rely on external consumer information and data.
The EU AI Act, expected to be finalized in early 2024, takes a risk-based approach, with certain categories defined as high risk and therefore subject to particular governance and reporting requirements. Another example comes from the Monetary Authority of Singapore, with its principles of fairness, ethics, accountability and transparency. All branches of government are starting to weigh in on Generative AI regulation, and the role of the in-house lawyer or compliance officer in influencing or responding to that regulation may vary widely by industry sector.
9. Closing Words of Wisdom from our Panel
We want to make sure that people are ready to comport themselves properly and safely, and also ready to think differently about how they do basic work, like organizing a workflow, tasking, or drafting. Every organization will need professionals who understand AI, know how to interact with it, and know how to leverage it.
So, it is very important to get our teams focused on understanding their work, understanding these tool sets and running focused experiments using generative AI.
Innovations in AI will continue to take much of our attention. We have not yet seen a crest in pace or capability, and it will not relent for a while, yielding powerful discoveries and unforeseen risks. Ongoing, proactive updates from credible and trustworthy sources, as well as from your community peers, are paramount.