A recent Mississippi case reminds commercial and residential policyholders alike of the importance of updating insurance coverage when circumstances change, lest they face a coverage dispute or lose coverage altogether. Commercial insureds should coordinate with stakeholders within their organizations to ensure prompt reporting of changing exposures to the personnel responsible for insuring risks. The insured did not do so in Clark v. Alfa Insurance Corp., No. 2022-CA-01251-COA, 2024 WL 3506638 (Miss. Ct. App. July 23, 2024), and found himself uninsured for liabilities arising out of a driver’s accident with his newly purchased cattle. The insured purchased 10 black heifers and penned them inside a barbed-wire enclosure behind his home rather than on separate property used for commercial purposes. That very night, the cattle escaped and wandered onto the adjacent highway, causing a car accident that injured the driver. The driver sued the insured and Alfa Insurance Company, which had issued a homeowners policy providing coverage for injury to others “caused by an animal owned by or in the care of an insured.”

The insurer did not dispute that the cattle caused the driver’s alleged injuries, and thus that the claim fell within the policy’s coverage grant. The insurer instead turned to a farming exclusion to bar coverage. The policy excluded coverage for “bodily injury or property damage arising out of business or farming engaged in by an insured” (emphasis added). The policy defined “farming” as “the operation of an agricultural… enterprise…” but did not define the term “enterprise.”

Despite the murkiness of the terminology, and while acknowledging established Mississippi law resolving ambiguities in insurance policies in favor of insureds and construing exclusions narrowly to preserve coverage, the court concluded that the insured’s purchase and ownership of the cattle constituted an “enterprise” within the scope of the exclusion. The court reached this result by turning to two dictionary definitions, which themselves defined “enterprise” differently. The court did not discuss a recent Mississippi Supreme Court case that cited dictionary definitions of the terms “irritant” and “contaminant” used in a pollution exclusion to find an ambiguity favoring coverage (see Omega Protein, Inc. v. Evanston Ins. Co., 336 So. 3d 128, 132 (Miss. 2022)). Instead, the court considered the conflicting definitions and then, rather than concluding that an ambiguity existed, said that “[i]t is not unreasonable that purchasing, raising, feeding, breeding, selling, and caring for cattle is an ‘agricultural enterprise’ even when no profit is involved” (emphasis added). The court noted that every business must begin somewhere, profit or not. The court thus treated its own interpretation of “enterprise” as the only logical one.

The court cited the insured’s federal income tax returns to buttress its conclusion. The insured owned 620 acres of timberland, separate from his personal residence, and listed timber as his “principal crop or activity” on those returns. He also listed the cost of the cattle and their feed as a loss to offset his farming income. Although the insured had never before purchased cows and “did not have any specific plans for the cows, other than to purchase them in hopes of attempting to raise them with his son,” his purchase and ownership of the cows constituted an excluded enterprise in the court’s view.

Clark provides a cautionary tale for insureds, who should evaluate coverage whenever circumstances change. The appellate court said as much, stating that the insured “should have notified Alfa before the heifers were delivered and obtained appropriate insurance coverage.” Although insureds typically evaluate liabilities at insurance renewals, periodic checks on exposures, including coordination with other stakeholders, can avoid coverage gaps like the one in this case.

Over the last week, organizations around the globe have struggled to bring operations back online following a botched software update from cybersecurity company CrowdStrike. As the dust settles, affected organizations should consider whether they are insured against losses or claims arising from the outage. The Wall Street Journal has already reported that insurers are bracing for claims arising from the outage and that, according to one cyber insurance broker, “[t]he insurance world was expecting to cover situations like this.” A cyber analytics firm has estimated that insured losses following the outage could reach $1.5 billion.

Your cyber insurance policy may cover losses resulting from the CrowdStrike outage. These policies often include “business interruption” or “contingent business interruption” insurance that protects against disruptions resulting from a covered loss. Business interruption insurance covers losses from disruptions to your own operations and may apply if the outage affected your own computer systems. Contingent business interruption insurance, on the other hand, covers your losses when another entity’s operations are disrupted; it could apply if the outage affected a supplier or cloud service provider that your organization relies on.

Cyber policies vary in the precise risks they cover, so evaluating potential coverage requires comparing your losses to the policy’s terms. Cyber policies also include limitations and exclusions on coverage. For example, many cyber policies contain a “waiting period” that requires affected systems to be disrupted for a specified period before the policy provides coverage. These waiting periods can be as short as one hour or as long as several days.
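To make the waiting-period mechanics concrete, the short sketch below estimates recoverable business interruption loss under one common reading, in which only losses incurred after the waiting period elapses are recoverable. This is a simplified, hypothetical illustration: the eight-hour waiting period, constant hourly loss figure, and outage duration are assumptions rather than terms drawn from any actual policy, and some policies calculate the loss differently (for example, treating the waiting period only as a coverage trigger), so the policy language itself always controls.

```python
# Hypothetical illustration of how a cyber policy "waiting period" can affect
# recoverable business interruption loss. Assumes losses accrue at a constant
# hourly rate and that only losses incurred after the waiting period elapses
# are recoverable -- one common approach; actual policy wording controls.

def recoverable_loss(outage_hours: float, hourly_loss: float,
                     waiting_period_hours: float) -> float:
    """Estimate recoverable business interruption loss for a single outage."""
    if outage_hours <= waiting_period_hours:
        return 0.0  # the disruption never outlasted the waiting period
    return (outage_hours - waiting_period_hours) * hourly_loss


if __name__ == "__main__":
    # Assumed figures: a 30-hour outage, $12,000 of lost profit per hour,
    # and an 8-hour waiting period.
    print(recoverable_loss(outage_hours=30, hourly_loss=12_000,
                           waiting_period_hours=8))  # 264000.0
```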

Other commercial insurance policies could also provide coverage, depending on the loss or claim and on the policy’s endorsements and exclusions. For example, your organization may have procured liability insurance that protects against third-party claims or litigation, which could respond to claims made by customers or other businesses relating to the outage.

If your operations have been impacted by the CrowdStrike outage, there are a few steps you can take now to maximize your potential insurance recovery.

First, read your policies to determine the available coverage. As you review your policies, pay careful attention to policy limits, endorsements, and exclusions. A policy endorsement may significantly expand coverage even though it appears many pages after the relevant policy section. Keep in mind that courts generally interpret coverage provisions broadly in favor of the insured and interpret exclusions or limitations narrowly against the insurance company.

Second, track your losses. The outage likely caused your organization to lose profits or incur extra expenses. Common business interruption losses may also include overtime expenses to remedy the outage, fees for third-party consultants or technicians, and penalties arising from the outage’s disruption to your operations. Whatever the nature of your loss, tracking and documenting it now will help you secure a full insurance recovery later.

Third, carefully review and comply with your policy’s notice requirements. If you have experienced a loss or a claim, notify your insurer immediately. Even if you are aware only of a potential claim, your policy may require you to give notice of the events that could ultimately lead to a claim or loss, and some notice periods in cyber policies are quite short. After providing notice, you may receive a coverage response or “reservation of rights” from your insurer. Be cautious in taking any unfavorable response at face value; particularly in cases of widespread loss, an insurer’s initial coverage evaluation may not accurately reflect the available coverage.

If you are unsure of your policy’s notice obligations or the coverage available, or if you suspect your insurer is not affording your organization the coverage you purchased, coverage counsel can assist in securing it. Above all, don’t hesitate to pursue the coverage to which you are entitled.

In a recent insurance coverage dispute, Snell v. United Specialty Insurance Company, 102 F.4th 1208 (11th Cir. 2024), an Eleventh Circuit concurring opinion discussed the potential use of artificial intelligence large language models to interpret policy terms.

The case concerned whether a landscaper’s insurance policy, which generally provided coverage for claims arising out of the insured’s “landscaping,” covered installation of a “ground-level trampoline.” The district court granted summary judgment to the insurer, and the Eleventh Circuit affirmed. Judge Kevin Newsom, concurring, joined the majority opinion in full but questioned whether an AI “large language model” (or “LLM”) could aid courts in interpreting policy provisions:

Here’s the proposal, which I suspect many will reflexively condemn as heresy, but which I promise to unpack if given the chance: Those, like me, who believe that “ordinary meaning” is the foundational rule for the evaluation of legal texts should consider—consider—whether and how AI-powered large language models like OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude might—might—inform the interpretive analysis. There, having thought the unthinkable, I’ve said the unsayable.

Judge Newsom explained that he found dictionary definitions of “landscaping” unhelpful and that, on a whim, he asked a law clerk to pose a question to ChatGPT: “What is the ordinary meaning of ‘landscaping’?” ChatGPT responded:

“Landscaping” refers to the process of altering the visible features of an area of land, typically a yard, garden or outdoor space, for aesthetic or practical purposes. This can include activities such as planting trees, shrubs, flowers, or grass, as well as installing paths, fences, water features, and other elements to enhance the appearance and functionality of the outdoor space.

The court ultimately resolved the case without determining whether installation of the in-ground trampoline constituted “landscaping.” Nevertheless, Judge Newsom’s concurrence favorably compared AI to more traditional interpretive sources for defining the “ordinary meaning” of a word. Among other points, he explained that large language models could offer advantages over other interpretive sources because:

  • LLMs train on ordinary language – LLMs “learn” from a vast amount of data (approximately 400-500 billion words) that reflects how words are used and understood in everyday life.
  • LLMs “understand” context – LLMs can recognize and contextualize words – for example, understanding and appropriately using the difference between a “bat” referring to a flying mammal and a “bat” used by a baseball player.
  • LLMs are accessible – Most LLMs are easy to use and are either freely available or available at a nominal cost.

On the other hand, potential downsides include the risk of AI “hallucination,” LLMs’ inability to account for offline speech or underrepresented populations, the risk of source-data manipulation, and the dystopian fear of AI “robo judges.” At the end of the day, Judge Newsom offered an equivocal answer to his own question about whether LLMs could be a tool for legal interpretation: “Having initially thought the idea positively ludicrous, I think I’m now a pretty firm ‘maybe.’”

Judge Newsom’s concurrence, while not fully endorsing LLMs for policy interpretation, carries significant implications. First, it opens the door to using AI in policy interpretation. Counsel should no longer be surprised if an opponent’s brief cites an LLM, and should look for opportunities to use LLMs, in addition to more traditional sources, to support their own analysis. Despite expressing reservations, Judge Newsom recommends that “a cautious first use of an LLM would be in helping to discern how normal people use and understand language, not in applying a particular meaning to a particular set of facts to suggest an answer to a particular question.” For example, LLMs could help illustrate different reasonable interpretations of policy terms and demonstrate ambiguity by exposing multiple plausible readings. This is particularly important because, in many states, ambiguities in an insurance policy must be “strictly construed against the insurer.” See, e.g., Old Republic Nat’l Title Ins. Co. v. RM Kids, LLC, 352 Ga. App. 314, 318 (2019).

Second, LLMs highlight the limitations of traditional sources, like dictionaries, in interpreting policy terms. Dictionaries provide precise, narrow definitions that often diverge from everyday, colloquial use, and they struggle with new words and evolving meanings. LLMs face no such difficulties. They draw from diverse source material, “from Hemingway novels and Ph.D. dissertations to gossip rags and comment threads,” and inherently incorporate modern and informal usage. Judge Newsom cites the term “landscaping” as an example. Traditional dictionary definitions focused on natural features and aesthetics and excluded, or at least minimized, other activities that a reasonable person might consider “landscaping,” like regrading a yard, adding drainage, or installing outdoor walkways or lighting fixtures. The ChatGPT definition, in contrast, encompassed activities undertaken for “aesthetic or practical” purposes that modified natural or artificial features to “enhance the appearance and functionality of the outdoor space.”

Third, the concurrence highlights the importance of counsel understanding the potential uses, and more importantly the potential abuses, of LLMs in litigation. Litigants might design specific queries (often called “prompts”) to extract a desired outcome from an LLM, or may “shop around” among different LLMs to find the most favorable response. The novelty of the technology means counsel and jurists must be vigilant. The concurrence recommends as a “best practice… full disclosure of both the queries put to the LLMs… and the models’ answers.”
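As a purely illustrative sketch of that disclosure practice, the snippet below sends a fixed, verbatim prompt to an LLM and records both the exact query and the model’s full answer with a timestamp. It assumes OpenAI’s Python SDK and a hypothetical “gpt-4o-mini” model; nothing in the concurrence prescribes any particular tool, model, or workflow, and the prompt shown is an invented example.

```python
# Illustrative only: query an LLM about a policy term and preserve a verbatim
# record of the prompt and response, in the spirit of the "full disclosure"
# best practice discussed above. Assumes the OpenAI Python SDK is installed
# and OPENAI_API_KEY is set; the model name and prompt are hypothetical.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "What is the ordinary meaning of 'landscaping'?"  # example query
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice
    messages=[{"role": "user", "content": prompt}],
)

# Save the exact query and the model's full answer for later disclosure.
record = {
    "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    "model": "gpt-4o-mini",
    "prompt": prompt,
    "answer": response.choices[0].message.content,
}
with open("llm_query_log.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```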

Judge Newsom’s cautious approach to using AI large language models for policy interpretation may well be overshadowed by his concluding observation “that AI is here to stay.” Judicial attitudes toward AI are shifting from hostility to curiosity, and counsel should take note of Judge Newsom’s recommendation that now “is the time to figure out how to use it profitably and responsibly.”