
In a recently resolved insurance coverage dispute, Snell v. United Specialty Insurance Company, 102 F.4th 1208 (11th Cir. 2024), a concurring opinion from the Eleventh Circuit discussed the potential use of artificial intelligence large language models to interpret policy terms.

The case concerned whether a landscaper’s insurance policy — which generally provided coverage for claims arising out of the insured’s “landscaping” — covered installation of an in-ground trampoline. The district court granted summary judgment to the insurer, and the Eleventh Circuit affirmed. Judge Kevin Newsom’s concurrence joined the majority’s opinion in full, but questioned whether an AI “large language model” (or “LLM”) could aid courts in interpreting policy provisions:

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Judge Newsom explains that he found dictionary definitions of “landscaping” unhelpful, and that on a whim he asked a law clerk to ask ChatGPT, “What is the ordinary meaning of ‘landscaping’?” ChatGPT responded:

“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.

The court ultimately resolved the case without determining whether installation of the in-ground trampoline constituted “landscaping.” Nevertheless, Judge Newsom’s concurrence favorably compared AI to more traditional interpretive sources in defining the “ordinary meaning” of a word. Among other benefits, he explained that large language model AIs could be beneficial over other interpretive sources because:

  • LLMs train on ordinary language – LLMs “learn” from a vast amount of data — approximately 400-500 billion words — reflecting how words are used and understood in everyday life.
  • LLMs “understand” context – LLMs can recognize and contextualize words – for example, understanding and appropriately using the difference between a “bat” referring to a flying mammal and a “bat” used by a baseball player.
  • LLMs are accessible – Most LLMs are easy to use and are either freely available or available at a nominal cost.

On the other hand, potential downsides include the risk of AI “hallucination,” LLMs’ inability to account for offline speech or underrepresented populations, the risk of source data manipulation, and the dystopian fear of AI “robo judges.” At the end of the day, Judge Newsom offered an equivocal answer to his own question about whether LLMs could be a tool for legal interpretation: “Having initially thought the idea positively ludicrous, I think I’m now a pretty firm ‘maybe.’”

Judge Newsom’s concurrence, while not fully endorsing LLMs for policy interpretation, suggests significant implications. First, it opens the door to using AI in policy interpretation. Counsel should no longer be surprised if an opponent’s brief cites to an LLM, and should look for opportunities to use LLMs, in addition to more traditional sources, to support their own analysis. Despite expressing reservations, Judge Newsom recommends that “a cautious first use of an LLM would be in helping to discern how normal people use and understand language, not in applying a particular meaning to a particular set of facts to suggest an answer to a particular question.” For example, LLMs could help illustrate different reasonable interpretations of policy terms and demonstrate ambiguities by exposing multiple plausible interpretations. This is particularly important since, in many states, ambiguities in an insurance policy must be “strictly construed against the insurer.” See, e.g., Old Republic Nat’l Title Ins. Co. v. RM Kids, LLC, 352 Ga. App. 314, 318 (2019).  

Second, LLMs highlight limitations of traditional sources, like dictionaries, in interpreting policy terms. Dictionaries provide precise, narrow definitions that often diverge from everyday, colloquial use, and they struggle with new words and evolving meanings. LLMs face no such difficulties. They draw from diverse source material — “from Hemingway novels and Ph.D. dissertations to gossip rags and comment threads” — and inherently incorporate modern and informal usage. Judge Newsom cites the term “landscaping” as an example. Traditional dictionary definitions focused on natural features and aesthetics, and excluded — or at least minimized — other activities that a reasonable person might consider “landscaping,” like regrading a yard, adding drainage, or installing outdoor walkways or lighting fixtures. The ChatGPT definition, in contrast, encompassed activities undertaken for either “aesthetic or practical” purposes, whether they modified natural or artificial features, to “enhance the appearance and functionality of the outdoor space.”

Third, the concurrence highlights the importance of counsel understanding the potential uses, and more importantly abuses, of LLMs in litigation. Litigants might try to design specific queries (often called “prompts”) to extract a desired outcome from an LLM, or may “shop around” using different LLMs to find the most favorable response. The novelty of the technology means that counsel and jurists must remain vigilant. The concurrence recommends as a “best practice… full disclosure of both the queries put to the LLMs… and the models’ answers.”

In conclusion, Judge Newsom’s cautious approach to use of AI large language models for policy interpretation may well be overshadowed by his concluding observation “that AI is here to stay.” Judicial attitudes toward AI are shifting from hostility to curiosity, and counsel should take note of Judge Newsom’s recommendation that now “is the time to figure out how to use it profitably and responsibly.”


Summertime. The kids are getting out of school. Maybe you have a vacation planned. Down here in Florida, we make early preparations for the annual “it’s going to be the worst hurricane season on record” reports. Whatever your plans are this summer, it is also best practice to include a mid-year review of your insurance policies. This may not be as fun as hot dogs, baseball, barbecues, and beaches, but it can protect you and your business like SPF 100 sunscreen.

In Florida and throughout the Southeast, June 1 is the start of hurricane season. Have you recently reviewed your property insurance policy — whether a commercial or a homeowners policy? Are you covered for named storms or hurricanes? Most policies have a different — and usually significantly higher — deductible for hurricane claims. For instance, section 627.701, Florida Statutes, permits property insurers offering personal lines residential property insurance policies to offer alternative hurricane deductibles of $500, or 2%, 5%, or 10% of the policy dwelling limits. That means if you have a $1 million property, your hurricane deductible could be $100,000. Are you prepared to cover $100,000 in losses in the unfortunate event of hurricane damage? As you ensure your home or business has sufficient sandbags, water, toilet paper, and other supplies, you should also make sure you are prepared to cover losses up to the amount of your deductible.
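The deductible arithmetic above can be run as a quick back-of-the-envelope check. This is an illustrative sketch only (not legal or underwriting advice), using a hypothetical $1 million dwelling limit and the deductible options described above:

```python
# Illustrative sketch: Florida hurricane deductible options applied to a
# hypothetical dwelling limit. The $500 option is a flat dollar amount;
# the others are percentages of the policy dwelling limits.

DWELLING_LIMIT = 1_000_000  # hypothetical $1 million dwelling limit


def percentage_deductible(limit: int, percent: float) -> int:
    """Dollar deductible for a percentage-of-dwelling-limit option."""
    return int(limit * percent / 100)


options = {
    "$500 flat": 500,
    "2%": percentage_deductible(DWELLING_LIMIT, 2),
    "5%": percentage_deductible(DWELLING_LIMIT, 5),
    "10%": percentage_deductible(DWELLING_LIMIT, 10),
}

for label, amount in options.items():
    print(f"{label}: ${amount:,}")
# The 10% option on a $1,000,000 dwelling limit is a $100,000 deductible.
```

As the output shows, the gap between the flat $500 option and the 10% option is enormous, which is why the post urges confirming which deductible your policy actually carries.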

But mid-year is not just the time to think about catastrophic storm damage. Has your business grown? Have you purchased new vehicles, equipment, or real property this year? Your commercial property policy most likely has a coverage extension for newly acquired or constructed property, even if that property is not listed in your schedule of covered property. However, this coverage extension is usually limited to a short period, like 30 days. So, if you have purchased property this year, now is the time to check that it has been added to your policy’s schedule of insured property.

And it is not just your property insurance that you should consider reviewing this time of year. Have you had any recent organizational changes? Did you acquire a new business? Create a new subsidiary? Many liability policies have coverage extensions that will still cover newly acquired or created businesses, even if that business is not named as an insured in the policy. However, this additional coverage is typically for a limited period — often 30 or 60 days. Mid-year is a great time to confirm that all of your related businesses are adequately covered under your policies.

As you review your policies, you should also consider how your business has changed. Did you pick up a new scope of business, maybe a new project, or enter into new contracts during the first half of the year? Has your business changed in a way that an exclusion that did not matter when you purchased your policy now creates a gap in your insurance coverage? For example, maybe you received a great contract to expand your business into the construction of a townhouse community for the first time, but your liability policy has an exclusion for work on townhouses. That exclusion may not have mattered last year but could result in a significant gap in coverage this year. Maybe your idea of summer fun is not curling up poolside with a cold beverage and your insurance policies. But a mid-year review of those policies could keep you from being burned with uncovered claims later in the year.


Do you know what your insurance covers?

This is a deceptively simple question — risk managers rightfully expect to know the scope of their coverages when they build their insurance programs. Unfortunately, judicial interpretation of common policy terms can turn what should be a straightforward question into a morass of uncertainty.

This uncertainty is exemplified in recent case law interpreting cyber insurance policies. Many commercial cyber policies provide coverage for damages “resulting directly from” some underlying action, be it fraud, hacking, theft, etc. For example, ISO’s commercial cyber insurance policy states that “We will pay for ‘loss’ resulting directly from a ‘security breach’ ‘discovered’ during the policy period.”

Unfortunately, those three words – “resulting directly from” – can cause many complications. Insurers often interpret them to require an immediate connection between the breach and the loss (e.g., a hacker breaching a bank account to steal money). Under this restrictive interpretation, however, necessary and substantial costs associated with the hack could be left uncovered, such as crisis response costs, ransomware payments, and root cause analysis. Policyholders (rightfully) read the plain language of their coverage to include these costs, which are impossible to decouple from the underlying security breach.

Addressing this dispute, courts nationwide are split into two opposing camps: The “proximate cause” camp interprets a “direct” loss broadly to require only proximate causation. In contrast, the “direct means direct” camp interprets a “direct” loss strictly to require no intervening acts. For example, federal courts applying Pennsylvania, New Jersey, Louisiana, and Maryland law have all come down in the “proximate cause” camp. Meanwhile courts applying California, Virginia, Illinois, and Wisconsin law have come down in the “direct means direct” camp.

While differing substantive interpretations of the same policy language across different jurisdictions frustrates the goal of consistency for policyholders, it is at least foreseeable given the patchwork of jurisdictions applying the laws of 50 different states.

A Momentum Shift

In some jurisdictions the momentum may be shifting in favor of expanded coverage. In 2019, in Principle Solutions Group, LLC v. Ironshore Indemnity, Inc., the Eleventh Circuit rejected the “direct means direct” approach and applied the “proximate cause” standard under Georgia law to a commercial crime policy, finding that the “ordinary meaning of the phrase ‘resulting directly from’ requires proximate causation between a covered event and a loss, not an ‘immediate’ link.”

So, what does your policy cover?

The answer depends on what law applies. The inconsistent application of this common policy term creates a conundrum for policyholders, who may have vastly expanded or constricted coverage under identical policy language. Therefore, it is critical for policyholders to be proactive, both when purchasing coverage and when bringing a claim, to understand the standard a reviewing court might apply to their coverages, and to plan accordingly by, for example, negotiating language consistent with the proximate cause approach or designating favorable governing law. When neither option is available, policyholders must be prepared for insurer attempts to narrowly interpret the phrase “resulting directly from” and recognize that the available coverage may be limited if the insurer prevails on the narrow interpretation.