AI-Generated Content and Insurance: Who Is Legally Responsible When No Human Is Directly Involved?
As artificial intelligence (AI) becomes increasingly sophisticated, a growing market has emerged for generative AI in social media, product marketing and chatbots. There is even a market for AI influencers, known as "virtual avatars." This increasing adoption of AI-generated content presents numerous legal questions, particularly concerning authorship, intellectual property (IP) infringement and misinformation. Liability questions become especially thorny when generative AI "goes rogue," because AI models are trained on massive amounts of data, some of which is subject to IP protections. This article examines the increasing use of generative AI, its legal implications and the insurance industry's response to the growing adoption of AI technology.
The Risks of AI-Generated Content
One of the significant legal challenges posed by generative AI is copyright protection. Generative AI models are trained on massive datasets of text, images and audio scraped from the Internet, often without regard for whether the underlying material is copyrighted. The potential for copyright infringement has therefore emerged as a central concern as courts begin to evaluate how traditional IP doctrines apply to AI-generated content. The fair use doctrine typically permits limited use of copyrighted material, such as for parody, criticism or commentary, but the law is still developing on whether using copyrighted material to train generative AI constitutes fair use.
Copyright Infringement and Fair Use
On June 25, 2025, the U.S. District Court for the Northern District of California issued an order on fair use in Bartz v. Anthropic. The lawsuit alleges that AI firm Anthropic downloaded millions of copyrighted digital books for free from "pirate" sites, purchased copyrighted print books and scanned them into digital files, and used the books to train its large language model (LLM), enhance its AI service and build a library to retain indefinitely to perpetually improve its LLM. The court ruled that training the LLM on the books and converting the purchased print books to digital files were "transformative" uses that qualified as fair use, but it declined to extend the fair use doctrine to Anthropic's downloading and retention of pirated copies for a permanent library (Bartz v. Anthropic PBC, No. 3:24-cv-05417 (N.D. Cal. 2024)).
The U.S. District Court for the District of Delaware also recently ruled on the extent to which fair use applies to AI training on copyrighted works. On May 6, 2020, Thomson Reuters filed a lawsuit alleging that ROSS Intelligence (ROSS) had trained its AI model on LegalEase Bulk Memo questions copied from Thomson Reuters' proprietary Westlaw headnotes, infringing Thomson Reuters' copyrights. In response, ROSS argued that the fair use exception applied. On Feb. 11, 2025, the court granted partial summary judgment to Thomson Reuters on its copyright infringement claims and rejected ROSS' fair use defense (Thomson Reuters Enterprise Centre GmbH v. ROSS Intelligence Inc., No. 1:20-cv-00613-SB (D. Del. 2020)).
Copyright Protection for AI-Generated Content
Another emerging issue regarding generative AI is whether content created solely by generative AI can be copyrighted. To date, courts have ruled that such content is not eligible for copyright protection, which may present issues when virtual avatars generate potentially valuable content.
The U.S. Court of Appeals for the D.C. Circuit affirmed this principle in Thaler v. Perlmutter, No. 23-5233 (D.C. Cir. Mar. 18, 2025), aff'g No. 1:22-cv-01564-BAH (D.D.C. Aug. 18, 2023), holding that the Copyright Act requires all eligible works to be authored in the first instance by a human being. In Thaler, computer scientist Stephen Thaler created a generative AI system named the "Creativity Machine," which generated a picture titled "A Recent Entrance to Paradise." Dr. Thaler submitted the image to the U.S. Copyright Office, listing the Creativity Machine as the sole author of the work and himself as its owner. He argued that the work was copyrightable because he provided instructions and directed the AI to create the image. The court of appeals ruled against him, stating, "The Creativity Machine cannot be the recognized author of a copyrighted work because the Copyright Act of 1976 requires all eligible work to be authored in the first instance by a human being."
AI Hallucinations and Misleading Information
An issue inherent in all generative AI is the risk of "hallucinations": incorrect or misleading results produced by AI models, such as citations to studies that do not exist. In the infamous case of Mata v. Avianca in the Southern District of New York, two lawyers were sanctioned for citing non-existent cases that had been created with generative AI. A 2024 study conducted by researchers at Stanford University tested bespoke legal AI tools, such as Lexis+ AI and Westlaw's AI-Assisted Research, and found that even AI tools intended for legal research produced substantial amounts of incorrect information, with Westlaw's AI-Assisted Research hallucinating more than 34% of the time (Daniel E. Ho & Faiz Surani, "AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries," Stanford HAI (May 23, 2024)).
AI hallucinations may become increasingly problematic for virtual avatars, as lawsuits have begun to emerge over allegedly false or misleading information produced by generative AI. On April 29, 2025, a complaint was filed against Meta Platforms, Inc. in the Delaware Superior Court in a matter titled Starbuck v. Meta Platforms, Inc. (Del. Super. Ct., filed Apr. 29, 2025). The lawsuit alleges, among other claims, that Meta AI's chatbot published false information and defamed the plaintiff. Whatever the merits of the defamation allegations, the lawsuit demonstrates a growing potential for suits alleging false or misleading information generated by AI models.
Lack of Clarity on Liability
The issue of who might be liable when a virtual avatar goes rogue is not well established under U.S. law. The Congressional Research Service (CRS) recently warned Congress that if generative AI output infringes the copyright in an existing work, both the AI user and the AI company could potentially be liable under current U.S. law. The CRS suggested that Congress consider two paths for addressing the copyright questions raised by generative AI: amending the Copyright Act or adopting a "wait and see" approach as courts decide cases and provide more guidance and predictability (Christopher T. Zirpoli, Generative Artificial Intelligence and Copyright Law, CRS Legal Sidebar No. LSB10922 (Library of Congress, July 18, 2025)).
The European Union has taken steps to implement a framework that applies strict product liability law to AI systems. On Dec. 9, 2024, the new Product Liability Directive (EU) 2024/2853 (PLD) entered into force, and member states must transpose it into their national laws by December 2026. The new PLD allows AI system providers, third-party software developers and others in the supply chain to be held strictly liable when a defective AI system causes harm.
The Insurance Industry's Response
Although the widespread adoption of AI is a relatively new phenomenon, the insurance industry has taken varied approaches to mitigating client risk. Marsh recently published a series of articles aimed at debunking "AI myths," reasoning that the insurance industry should not introduce exclusions that "seek to remove the core coverage of the line of business to which they are attached if generative AI is part of the causal link to loss." Marsh argues that existing, non-AI-specific exclusions, such as cyber exclusions or access and disclosure exclusions, already apply to generative AI exposure (Debunking Generative AI Myth #3: GenAI Insurance Issues, Marsh (June 2, 2025)). Some insurers, by contrast, have introduced "absolute" AI exclusions into several lines of liability coverage (Geoffrey B. Fehling, The Continued Proliferation of AI Exclusions, Hunton Ins. Recovery Blog (May 28, 2025)).
However, absolute AI exclusions have also created room in the market for AI-specific coverages. Certain insurers, like Munich Re, have started rolling out policies targeted at enterprise AI users that help mitigate risks stemming from AI errors, whether the AI is used internally or provided to clients (Munich Re, De-risking AI Innovation with aiSure™). Additionally, Chaucer Group recently partnered with Armilla AI to launch a third-party liability insurance product that covers hallucinations, model drift, mechanical failures and other deviations from expected AI behavior (Chaucer & Armilla AI, Chaucer and Armilla Launch New AI Liability Insurance Product, Armilla AI (Apr. 23, 2025), https://www.armilla.ai/resources/chaucer-and-armilla-launch-new-ai-liability-insurance-product).
Conclusion
AI-generated content is still in its infancy, but it has begun to challenge traditional legal frameworks and drive changes in the insurance industry. New laws have started to emerge in the United States, the EU has moved toward more explicit rules concerning liability, and insurers are adapting with both new exclusions and new AI-specific policies. The journey toward greater legal clarity and risk mitigation in the generative AI context is ongoing and will continue as the use of generative AI expands.
Nick Bastovan is an associate at Altheria Law, where he focuses his practice on insurance law, representing insurers issuing policies addressing technology, media, privacy and breach incidents, including first- and third-party claims.