Stephanie Gee and Alexa Austin
December 26, 2025
The Hidden Listeners—Insurance Coverage Concerns Pertaining to AI Chatbot Wiretapping Claims
AI-made summary
- The article discusses the legal and insurance challenges arising from the use of AI-powered chatbots in business, particularly regarding privacy and wiretapping statutes
- Recent litigation, such as Popa v. Harriet Carter Gifts, highlights complexities in applying existing laws to AI technologies that record or analyze communications
- Plaintiffs are increasingly targeting both technology providers and users for alleged unauthorized eavesdropping
- Insurance coverage for such statutory privacy claims is uncertain, as many policies exclude or ambiguously address these risks, prompting businesses to review and negotiate their coverage.
Artificial intelligence (AI) tools have quickly become integral to everyday business activities and are used for a wide variety of purposes, ranging from powering customer service portals to facilitating workplace collaboration to drafting business documents. While AI provides immense value to the companies implementing these tools, its rapid adoption has in some instances outpaced the legal frameworks designed to protect privacy. AI-powered chatbots, which often use technologies like natural language processing to understand and respond to human language, are capable of recording, analyzing, and transmitting user conversations to third parties for “training” or quality control purposes. Consumers may interact with these bots when visiting company websites. AI bots can even silently join Zoom meetings under fake names, appearing like human participants in order to monitor conversations or take notes of the meeting.

These AI-powered chatbots raise profound concerns under federal and state wiretapping and eavesdropping statutes. While these laws were traditionally crafted to safeguard individuals from unauthorized interception of communications, recent litigation is testing the boundaries of these statutes as they apply to AI, creating greater exposure for the companies and developers that use this technology. Adding to the risk, organizations that integrate AI chatbots into their business face uncertainty about whether insurance will cover privacy-related claims arising from these technologies. Many general liability and cyber insurance policies contain exclusions or ambiguous language regarding statutory privacy violations. As courts continue to interpret these statutes in novel contexts involving AI, policyholders may discover that the coverage they assumed would protect against data-related risks does not extend to chatbot-driven privacy claims.
This article explores the development of these privacy claims as they pertain to AI bots and analyzes common coverage issues and solutions for policyholders seeking to protect themselves against these risks.

Wiretapping—AI Giving Old Statutes New Purpose
Wiretapping and anti-surveillance statutes—which contain potentially crippling damages provisions—have been dusted off and used in new ways. Plaintiffs are alleging that certain generative AI tools are being used as third-party listeners to conversations without plaintiffs’ consent. And these arguments are taking hold. Recent filings target both technology providers and, increasingly, enterprise users for allegedly “eavesdropping” on consumer interactions without sufficient consent. This trend builds on earlier website session-interception cases and parallels the rise of Pixel-related privacy claims, signaling material litigation exposure for organizations that deploy AI-enabled transcription, analytics, and agent-assist capabilities in contact centers and across digital properties.

As an example, in Popa v. Harriet Carter Gifts, a website visitor sued a retailer and its technology vendor, alleging unlawful interception of her data in violation of Pennsylvania’s wiretap statute (WESCA) while she shopped online. While the district court originally granted summary judgment in favor of defendants, the U.S. Court of Appeals for the Third Circuit reversed and remanded, finding that there is no “sweeping direct-party exception to civil liability under the WESCA,” and that the defendants “cannot avoid liability merely by showing that Ashley Popa unknowingly communicated directly” with company servers. Then, in March of 2025, on remand, the district court granted summary judgment again, this time on the issue of consent, holding that plaintiff “consented to NaviStone’s activities on the Harriet Carter website.” The suit reflects the complexities of applying these wiretapping statutes to evolving technology. Similar theories are being used by plaintiffs in California challenging “conversation intelligence”—in which AI interacts with customers, transcribes calls, and provides real-time suggestions and replies to support customer service.
Courts grapple with determining whether such AI is an “unauthorized” third-party listener under the California Invasion of Privacy Act (CIPA) and whether the conduct was by a “person” within the meaning of CIPA. Plaintiffs’ firms are leveraging the early survival of claims to accelerate filings nationwide, invoking state wiretap statutes and also testing their theories under federal wiretapping statutes. Putative class definitions may sweep broadly, particularly in two-party consent jurisdictions like California, and now, potentially, in the Third Circuit. Further, standard call-recording disclosures and website privacy notices may be attacked as insufficient where third-party AI analytics are involved. Courts are likely to continue to grapple with recurring issues that cut across both website and call-center contexts.

Insurance Coverage Implications
The recent surge in lawsuits alleging statutory privacy violations based on the deployment of AI-powered chatbots is prompting concerns about how traditional insurance covers these risks. For companies that deploy chatbots in customer service or communications functions, legal exposure may entail statutory damages, class-action claims, and defense costs. The key issue for insurers and businesses alike is whether existing policy forms—such as commercial general liability (CGL), cyber liability, or technology errors and omissions (E&O) coverage—respond to these liabilities.

Many standard policies exclude claims tied to statutory violations, either by naming statutes explicitly or by using sweeping language designed to block coverage for any state or federal privacy law claim. Further, many policies explicitly exclude coverage for “intentional acts,” language that may be broad enough to encompass these claims. These common exclusions may leave policyholders with gaps in coverage when statutory wiretapping claims are involved. In the context of cyber liability insurance, policies sometimes contain specific exclusions for improper tracking, recording, or monitoring of communications. The prevalence of these exclusions was amplified during the wave of pixel-tracking lawsuits in recent years and could impact coverage in the context of wiretapping claims arising from AI-powered chatbots. Additionally, policyholders often rely on “silent” AI coverage, where liabilities arising from AI are neither expressly covered nor excluded under the policy, leaving policyholders with additional uncertainty.

In light of potential coverage gaps, policyholders should take a proactive approach to evaluating new AI privacy-related risks to ensure adequate insurance coverage is in place, including the following:
- Regularly Evaluate Company AI Risks: Policyholders should carefully evaluate insurance needs early to guard against unforeseen risks. Businesses should re-evaluate privacy risks regularly, particularly when implementing any new AI tools.
- Review User Consent Mechanisms for AI Bot Communications: Policyholders should deploy clear, affirmative user consent mechanisms to mitigate privacy risks. This may include prominently displaying chatbot privacy notices before any data collection, providing easy access to the business’s privacy policy detailing how chatbot interactions are stored, and using automated disclaimers at the start of each chat session.
- Negotiate Narrow Exclusions: Review all lines of coverage for exclusions tied to statutory violations, eavesdropping, wiretapping, or intentional conduct, and narrow the wording where possible.
- Negotiate Explicit Coverage for AI-Related Risks: Policyholders may be able to negotiate policy endorsements that expressly cover AI-related communications or data-handling exposures.
- Consider All Potential Lines of Coverage: If facing a lawsuit alleging privacy violations relating to AI-powered bots, consider all lines of insurance coverage and notice insurers accordingly. When it comes to AI, policyholders often turn first to cyber liability insurance, but numerous lines of coverage may respond to privacy claims generally.

Taking these steps can help businesses close potential coverage gaps and demonstrate to carriers that AI-driven operations are being managed responsibly and transparently. Qualified coverage counsel can review policies and identify gaps in coverage to help mitigate risks.

Stephanie E. Gee is counsel in the insurance recovery group at global law firm Reed Smith, where she represents corporate policyholders in complex insurance recovery litigation and counseling in federal and state courts across the United States.
Gee’s practice involves analysis and strategic recovery under a broad range of insurance products, including cyber liability, general liability, property, directors and officers, and errors and omissions policies. She can be reached at sgee@reedsmith.com. Alexa L. Austin is an associate in the firm’s insurance recovery group, where she handles complex insurance recovery litigation and counseling on behalf of corporate policyholders. Austin represents public and private companies in multiple industries, including the financial services, pharmaceutical, healthcare, energy, and transportation industries. She has worked on coverage disputes under various types of insurance policies, including directors and officers liability, professional liability, cyber liability, commercial general liability, employment practices liability, and first-party property policies. She can be reached at aaustin@reedsmith.com.