Trudy Knockless
January 24, 2026
'Don’t Be Bamboozled': Why GCs Should Approach AI With Both Excitement and Skepticism
AI-made summary
- A recent webinar hosted by Casepoint and Opexus addressed the pressures general counsel face to adopt generative AI tools, emphasizing the need for caution and structured experimentation.
- Speakers Jim Shaughnessy of DocuSign and Nuala O’Connor of EqualAI highlighted the importance of safe environments, employee education, and clear guidance to prevent risks such as data exposure and AI errors.
- They stressed transparency, skepticism toward vendor claims, and embedding practical reminders within workflows as AI adoption in legal departments increases.
General counsel are feeling pressure to adopt generative artificial intelligence tools, but they shouldn't get so swept up in AI mania that they set aside caution and skepticism. That was a key takeaway from a webinar hosted this week by the legal tech providers Casepoint and Opexus titled "AI Transformation: What Every GC Needs to Know in 2026."

“This is an exciting time, but it’s not without risk,” said one of the speakers, Jim Shaughnessy, chief legal officer of DocuSign. “You need to make sure you have the right environment for experimentation—and the right structure around it.”

Generative AI is showing up in everything from contract review and research to customer service and compliance. But as companies move quickly to harness its power, legal departments are being pulled in two directions: supporting innovation while protecting the business.

Shaughnessy noted that AI is not just a top-down initiative. “We have a lot of interest from our board to make sure we are staying modern, but also individual employees who get an idea and think, ‘I can do this really well with AI,’” he said. “One of the things you need to do is give them the right tools in a safe environment—because if you don’t, they’ll find the tools in an unsafe environment.”

For in-house lawyers, that means more than policies. It means access and guidance—built into the workflow.

“If you provide a protected place where experimentation can take place, then you provide protection,” Shaughnessy said. “Otherwise, people will go outside the firewall, use public tools, and put in sensitive data or personal data—and then it’s gone.”

Another speaker, Nuala O’Connor, a former chief counsel for digital citizenship at Walmart who's now a senior adviser at the AI governance nonprofit EqualAI, emphasized the need to balance opportunity with realism. “These tools are not going to supplant the brain of a very good lawyer,” she said. “They are going to augment. They're assistants, not lawyers.”

She urged legal departments to offer both safe experimentation spaces and simple, relatable guidance. “Use plain language. Don’t put financial data in. Don’t put personal data in. Use the same rules you’d use for the internet,” she said. “This is about augmenting your productivity—but you still need to read your work before handing it in.”

O’Connor warned against over-trusting the tech, pointing to the industry’s now-familiar “hallucination” problem—when AI tools fabricate citations, facts, or conclusions. “I call them lies,” she said. “Let’s not use fancy words. It’s making stuff up.”

Shaughnessy agreed. “Just because something comes up really quickly and sounds definitive doesn’t mean it’s right,” he said. “In situations where you can’t afford to be wrong, think of it as the first draft from a summer associate—and question it carefully.”

For those considering outside vendors, O’Connor urged GCs to stay skeptical and dig into the claims. “Ask the hard questions. Ask the dumb questions. ‘Does it really do this? Can you prove it?’” she said. “Don’t be afraid—and don’t be bamboozled.”

Both speakers emphasized that educating and training employees is essential. “Everyone wants to stay competitive. No one wants to be left behind,” O’Connor said. “But you will not be able to overestimate what your employees might try to do with this technology.”

Even well-intentioned employees can create risk by inputting sensitive information into public AI platforms, O'Connor said. “I’ve seen people put things into tools that I never would have considered appropriate,” she said. “That’s why the first step is internal education. Don’t put anything in that you wouldn’t want to see on the front page of The New York Times.”

The key is clarity. “A policy manual written in the tone of 1990 will not be read,” Shaughnessy said. Instead, he recommended embedding reminders within the tools themselves. “A pop-up that says, ‘Remember what you’re doing’—that’s more effective than a document somewhere no one will find.”

But generative AI’s reach into legal work is new—and for some employees, unsettling. “There’s a lot of fear,” O’Connor said. “You see it online—the existential dread. But let’s be honest: Yes, there is going to be disruption. But if you're willing to work hard and learn, there will always be a job for you.”

That reassurance, she said, needs to be paired with real support. “Provide not only the tools to do the work, but the tools to learn about it. In-person training. Tutorials. A little fun.”

As AI adoption accelerates, Shaughnessy warned legal teams to prepare for complexity. “You’re going to have heterogeneous AI stacks,” he said. “Agents from Salesforce, Workday, DocuSign. ChatGPT. Then they’ll interact with other systems and be evaluated by other agents. It’s going to be really complex.”

O’Connor also urged in-house lawyers to keep it real—with clients and among co-workers. “You don’t want to surprise and offend. You want to surprise and delight,” she said. “So go ahead, name your chatbot Jenny or Jeremy—but let people know they’re talking to a bot.”

“Transparency builds trust,” Shaughnessy added. “And trust is everything.”