In early 2026, Amazon suffered three major outages in ten weeks. One of them — a six-hour shopping blackout — cost an estimated 6.3 million orders. Internal briefing documents referenced a "trend of incidents" with "high blast radius" tied to "Gen-AI assisted changes." That reference was deleted from the document before the all-hands meeting.1 Somewhere in that cascade is AI-generated code that nobody will claim authorship of, incident reports that were scrubbed before distribution, and a legal question that nobody in Silicon Valley wants to be the first to answer: if a shareholder sues, if a vendor who missed a delivery window files for breach of contract, if a customer's medical supply didn't arrive because the shopping platform went dark for six hours — who, exactly, is the defendant?
The developer who prompted the AI didn't design the code. The company that deployed it sanitized its internal documents. The AI provider disclaimed all liability in paragraph 14 of a terms-of-service agreement that nobody read. The answer, right now, is that nobody knows. And the trillion-dollar industry shipping AI-generated code into production every day is betting — collectively, silently — that nobody will force the question.
That is not a legal strategy.
The Disclaimer Wall
Before we get to the law that doesn't exist yet, let's look at the contracts that do. Every major AI coding tool ships with a terms of service that says, in language polished by extremely expensive lawyers, the same thing: this is your problem, not ours.
GitHub Copilot — the most widely adopted AI coding assistant in the world — earns a 52 out of 100 fairness score from ToS Watchdog, three points below the AI services average. Its Liability & Indemnification sub-score: 38 out of 100.2 GitHub disclaims all implied warranties — merchantability, fitness for purpose, non-infringement. Their aggregate liability is capped at $500 USD. Total. For everything. If Copilot generates code that takes down your production environment and costs your company ten million dollars in lost revenue, GitHub's maximum financial exposure is five hundred dollars.3
The language is explicit: "You retain all responsibility for Your Code, including Suggestions you include in Your Code." The AI suggested it. You accepted it. It's yours now, legally speaking, forever.
It gets worse. If you're on Copilot's Individual plan, you get zero IP indemnification. If a Copilot suggestion reproduces copyrighted code from the training set and you get sued for infringement, you are entirely on your own. GitHub acknowledges that suggestions "may sometimes match code in the training set" and puts the burden on the developer to verify license compliance.3 Enterprise customers get Microsoft's Copilot Copyright Commitment — but only if they used the built-in content filters, weren't trying to generate infringing material, and didn't modify the output in certain ways. The indemnification is conditional, tier-locked, and narrower than most developers realize.
Cursor, the second-most-popular AI coding IDE, is more aggressive. Users must indemnify Anysphere — Cursor's parent company — from all liabilities arising from unauthorized use, ToS violations, or IP claims from user input. For auto-executed code, users "assume all risks associated with the execution of automatically generated code, including system outages, software defects, data loss, and security vulnerabilities."4 That sentence is worth reading twice. They listed the catastrophes by name. The liability cap: $100 or six months of fees, whichever is greater. For beta features — which Cursor ships constantly — liability is zero. No IP indemnification at any tier.
Anthropic, the company whose model powers Claude Code, provides services "as is" and "as available" without warranties of any kind. Total aggregate liability: $100 or six months of fees. They disclaim consequential and exemplary damages "even if Anthropic parties have been advised of the possibility of damages, and even if the damages are foreseeable." The terms state these limitations are "essential to the terms" and that Anthropic "would not offer the services without these limitations."5
Read that constellation of disclaimers as a system. The AI providers have, collectively, disclaimed responsibility for everything their tools produce. The liability caps range from $100 to $500. The developer — the individual human who typed a prompt and pressed Tab — bears all legal responsibility for code they didn't design, may not fully understand, and statistically may not have reviewed.
The developer inherits unlimited exposure.
The gap between a few hundred dollars and unlimited exposure is where the lawsuits will live.
The Human-in-the-Loop Fiction
The entire legal architecture of AI coding tools rests on a single assumption: a competent human reviews and approves every piece of generated code before it reaches production. The developer is the checkpoint. The developer exercises judgment. The developer is responsible because the developer chose to accept the suggestion.
This assumption is, empirically, a fiction.
Only 48% of developers always verify AI-generated code before committing it.6 That means 52% — a majority — at least sometimes ship code they haven't fully reviewed. Ninety-six percent of developers don't fully trust AI-generated code's functional accuracy, yet they commit it anyway because reviewing all of it is physically impossible at the volume AI produces.6 AI generates 6.4 times more code than a human would write for the same task. A simple API endpoint: 186 lines where a human would write 29.7 The cognitive window for effective code review is 60 to 90 minutes before defect detection degrades.8 The output is unbounded. The review capacity is fixed.
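That mismatch is easy to quantify. Here is a back-of-envelope sketch: the 6.4x multiplier and the 60-to-90-minute window come from the studies cited above, while the review rate, session count, baseline output, and team size are illustrative assumptions, not measurements.

```python
# Back-of-envelope sketch of the review gap.
# From the article's sources: AI emits ~6.4x the lines a human would
# write, and defect detection degrades after 60-90 minutes of review.
# Assumed for illustration (not from those sources): review rate,
# sessions per day, baseline human output, and team size.

REVIEW_RATE_LPH = 400        # assumed sustainable lines reviewed/hour
SESSION_MIN = 75             # midpoint of the 60-90 minute window
SESSIONS_PER_DAY = 2         # assumed: two focused sessions is generous

BASELINE_LOC_PER_DEV = 150   # assumed daily human-written output
AI_MULTIPLIER = 6.4          # measured AI-to-human volume ratio
TEAM_SIZE = 5                # assumed developers feeding one reviewer

capacity = REVIEW_RATE_LPH * (SESSION_MIN / 60) * SESSIONS_PER_DAY
demand = BASELINE_LOC_PER_DEV * AI_MULTIPLIER * TEAM_SIZE

print(f"Reviewable per day: {capacity:,.0f} lines")   # 1,000
print(f"Generated per day:  {demand:,.0f} lines")     # 4,800
print(f"Unreviewed per day: {demand - capacity:,.0f} lines")
```

Adjust the assumptions however you like. As long as output scales with the multiplier and review capacity stays human-bounded, the shortfall compounds every day, and the unreviewed lines are the ones a plaintiff's lawyer will ask about.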
The legal defense that "the developer approved the code" assumes a meaningful review occurred. If discovery in a lawsuit reveals that the developer spent fourteen seconds on a diff that would take twenty minutes to properly evaluate — and we have telemetry data showing AI-generated PRs wait 4.6 times longer before pickup but still get reviewed in less time than human-written code of equivalent complexity9 — the "human in the loop" defense starts to look like a convenient story the industry tells itself.
This matters because the learned intermediary doctrine — the best existing legal analogy for this relationship — depends entirely on the intermediary being competent, informed, and actually performing the intermediation.
The Learned Intermediary and Its Limits
In pharmaceutical law, the learned intermediary doctrine holds that drug manufacturers fulfill their duty to warn by informing the prescribing physician, not the end patient. The physician is the "learned intermediary" — the expert who evaluates risks and makes a professional judgment about whether to prescribe. The manufacturer warns the doctor. The doctor warns the patient. If the doctor prescribes appropriately and the patient is harmed, the manufacturer's disclosure to the doctor is legally sufficient.10
The analogy to AI coding is seductive. The AI tool provider warns the developer through ToS and disclaimers — "AI can make mistakes, verify output." The developer, as the learned intermediary, exercises professional judgment before deploying code to end users. If harm occurs, the chain of disclosure holds.
Except the analogy breaks at every joint.
The doctrine assumes the intermediary has specialized knowledge to evaluate the risks. But as Winston & Strawn's analysis notes, "Neither physicians nor manufacturers can point to how AI forms its conclusions."10 The black-box nature of large language models means the developer cannot trace the provenance of generated code, cannot fully evaluate the reasoning behind a suggestion, and cannot assess whether the output reflects sound engineering principles or a statistical coincidence in the training data. AI is "notorious for struggling to properly cite its sources" — the intermediary cannot verify what they're intermediating.
The doctrine assumes the intermediary has the time and capacity to perform a meaningful evaluation. We've established that 52% don't always verify. The cognitive load research says defect detection degrades within 60 minutes. The volume of AI output exceeds any human's review bandwidth.
And the doctrine assumes the intermediary's expertise is stable and reliable. But what happens when AI generates novel, unverifiable output — code patterns that don't exist in standard references, that solve the immediate problem but introduce subtle vulnerabilities the developer has never encountered? The AMA Journal of Ethics raised this question directly: are current tort liability doctrines adequate for addressing injury caused by AI?11 Their answer was equivocal. In legal terms, equivocal means "see you in court."
Fifty-two percent of the time, they're not even looking.
The AI LEAD Act: The Bill That Would Blow It All Up
On September 29, 2025, Senators Dick Durbin (D-IL) and Josh Hawley (R-MO) — a bipartisan pair who agree on almost nothing else — introduced Senate Bill 2937, the AI Liability, Evaluation, Accountability, and Due Diligence Act. If passed, it would detonate the entire liability framework that AI companies have spent three years constructing.12
The AI LEAD Act does three things that should terrify every AI provider's legal department.
First, it explicitly classifies AI systems as "products" — defined as any "software, data system, application, tool, or utility" using machine learning algorithms, statistical models, or other computational methods. This is significant because courts have historically held that software is not a product under product liability law. The AI LEAD Act would override that precedent by statute.12
Second, it creates a federal cause of action against AI developers and deployers under four theories: defective design, failure to warn, breach of express warranty, and strict liability. Any individual, class, state attorney general, or the federal AG could bring suit. Four-year statute of limitations.12
Third — and this is the provision that would render every existing AI terms-of-service agreement unenforceable — it includes a clause that invalidates any contractual language that "waives any right, proscribes any forum or procedure, or unreasonably limits liability."12 Read that again. The $100 liability cap in Anthropic's ToS? Unenforceable. The $500 cap in Copilot's terms? Gone. Cursor's zero-liability beta provision? Void. Every carefully negotiated disclaimer, limitation, and indemnification clause in every AI terms of service would be wiped out by a single paragraph of federal legislation.
The bill applies retroactively to any action commenced after enactment. It allows plaintiffs to use circumstantial evidence to infer defects. It holds deployers liable if they "substantially modify" or "intentionally misuse" an AI system — a standard that could capture any company that fine-tunes, prompts, or integrates AI output into production systems.12
RAND Corporation's analysis of the broader landscape found that AI developers face "considerable liability exposure" under existing U.S. tort law, with "substantial uncertainty" about how current doctrine will apply. Jurisdictional variation creates what RAND called "costly legal battles" even before new legislation arrives.13 The AI LEAD Act would federalize the question and resolve the uncertainty decisively — in favor of plaintiffs.
The bill is in committee. It has not passed. But it is bipartisan, and the legislative momentum in both parties runs toward more regulation, not less.
The Insurance Problem
If the legal framework is uncertain and the contractual protections are a fiction, the next line of defense should be insurance. Except the insurance industry is running in the opposite direction.
In management liability and professional indemnity lines, "some carriers are introducing broad-based AI exclusions, often without meaningful definitions."14 These exclusions are appearing in Directors & Officers policies, Errors & Omissions coverage, employment practices insurance, fiduciary liability, and crime coverage. The language is often vague — "losses arising from or related to artificial intelligence" — which gives carriers maximum flexibility to deny claims.
Traditional cyber insurance policies haven't kept pace either. Many explicitly exclude "losses related to the development of AI models, given the unquantified, potentially catastrophic nature of those exposures."15 The word "catastrophic" is not editorial color. It's an actuarial assessment. Insurers cannot model the tail risk of AI-generated code failures because the failure modes are systematic, correlated, and novel — nothing in their historical loss data captures what happens when thousands of companies ship code from the same model with the same blind spots.
Some new AI-specific products are emerging. BOXX Insurance launched a tech E&O product designed for SaaS, AI, and digital infrastructure companies that responds to algorithmic bias, data misuse, and AI-specific failures.15 But the market is immature. Buyers and brokers are "increasingly looking for integrated solutions that address overlaps between tech E&O, cyber, media and AI-related liabilities, rather than assembling multiple standalone policies with potential gaps."15 They're looking for solutions that don't exist yet.
Harvard Law's corporate governance forum identified AI failures as a "hidden C-suite risk" with board-level liability implications.16 The recommendation from the insurance industry itself is blunt: policyholders should "carefully review all liability insurance, particularly E&O, D&O, and cyber liability insurance policies each year at renewal, paying careful attention to the application of any AI exclusions."14
Translation: check whether your insurer just quietly excluded the thing most likely to generate your next claim.
The AI providers have disclaimed it.
The insurance carriers are excluding it.
The gap between those two decisions is where the risk sits — with you.
The Rubber Stamp: NSPE and Personal Liability
While software engineering debates liability in the abstract, traditional engineering has already answered the question. The answer is not comforting.
The National Society of Professional Engineers' Board of Ethical Review directly addressed AI in engineering practice and produced the clearest ruling in the liability landscape. An engineer conducted a cursory review of AI-generated design documents and stamped them. The NSPE found this insufficient and unethical. The AI-generated documents contained "misaligned dimensions and omitted safety features" that "could have led to regulatory noncompliance and safety hazards." The engineer failed to detect them.17
The ruling established an unambiguous standard: "AI-generated technical work requires at least the same level of scrutiny as human-created work." When an engineer stamps AI-generated work, they assume full legal and professional liability for that work's accuracy and compliance with regulations and safety standards. The "responsible charge" doctrine requires engineers to be "actively engaged in the engineering process, from conception to completion." Simply stamping AI output without proper oversight violates licensure law.17
The NSPE also flagged a confidentiality risk that most engineering firms hadn't considered: uploading client data to AI platforms without consent is "tantamount to placing the Client's private information in the public domain."17
For software, this ruling is a harbinger. Software engineering has no equivalent licensing regime — there is no PE stamp, no board of review, no formal professional liability framework. That's part of why the liability gap exists. But when AI-generated code is embedded in products that do fall under professional engineering standards — medical devices, structural systems, industrial controls, autonomous vehicles — the NSPE ruling previews how courts will think about responsibility. The human who approved the AI's work owns the consequences. The AI's disclaimers are irrelevant. The stamp is the liability.
The State Legislature Explosion
While Congress debates the AI LEAD Act, the states aren't waiting. As of March 2026, 45 states have introduced 1,561 AI-related bills.18 The legislative landscape is a fracturing mess of overlapping, sometimes contradictory approaches to AI liability.
California AB 316, effective January 1, 2026, took the most surgical approach. It prohibits any defendant who "developed, modified, or used" an AI system from asserting that "the artificial intelligence autonomously caused the harm."18 Read that carefully. It doesn't create new liability. It eliminates one specific defense — the "AI did it" defense. If your AI-generated code causes a security breach that exposes customer data, you cannot argue in California court that the AI made the decision and you're not responsible. The entire supply chain is covered: foundation model developer, fine-tuner, integrator, deploying enterprise. Everyone is a potential defendant. Nobody gets to point at the machine.
Texas passed the Responsible AI Governance Act, also effective January 1, 2026, but took a different approach, emphasizing intentional misconduct over impact-based liability.18 Vermont created a consumer right of action with statutory and punitive damages for AI-related privacy violations. Minnesota mandated disclosure when users interact with AI, with $1,000 per violation statutory damages. New York introduced an algorithmic pricing bill with class actions and treble damages — $5,000 or more per violation.18
For any company that ships AI-generated code to users in multiple states — which is, effectively, every company that ships software — the compliance burden is multiplying. Each state's framework is different. Each state's liability theory is different. The patchwork is the point: when the federal government can't or won't act, fifty state legislatures will, and they won't coordinate.
The EU is adding another layer. The EU AI Act reaches full applicability on August 2, 2026 — five months from now. The new EU Product Liability Directive specifically includes software and AI "irrespective of the mode of supply, usage, whether embedded in hardware or distributed independently."18 AI system providers, third-party software developers, and supply chain participants can all be held liable for defective AI causing harm. Code generation tools aren't explicitly classified as high-risk, but if AI-generated code is used in healthcare, finance, employment, or critical infrastructure, the high-risk provisions apply to the downstream product — and the code generator becomes part of the liability chain.
The Discovery Problem
There is a legal question even more immediate than who's liable: can you prove what happened?
In In re OpenAI, Inc., a court held that "millions of GenAI logs, including user prompts and model responses" must be produced in discovery when relevant to litigation.18 AI-generated content falls under standard discovery rules. This means that when the first AI-code liability case arrives — and it will — companies will be compelled to reveal which code was AI-generated, what prompts produced it, what review (if any) occurred, and what provenance tracking exists.
Most companies cannot answer these questions. They don't track which code was AI-generated or log prompts, and they don't differentiate AI-assisted commits from human-written commits in version control. The code is merged, deployed, and forgotten. The AI's contribution is invisible within weeks.
The companies without provenance tracking are the most exposed. Not because they're more likely to ship bad AI code — everyone is shipping bad AI code — but because they cannot comply with discovery. They cannot demonstrate that human review occurred. They cannot show that the code was tested against the specific failure mode that caused harm. They cannot even identify which lines the AI wrote.
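The irony is that minimal provenance tracking is cheap. As a sketch, assume a team convention (our invention, not an industry standard) of recording AI involvement in a git commit trailer, e.g. `git commit --trailer "AI-Assisted: github-copilot"`. A short script can then answer the first discovery question directly:

```python
# Sketch: audit which commits declared AI assistance, assuming the
# hypothetical "AI-Assisted" trailer convention described above.
import subprocess

def ai_provenance(rev_range: str = "HEAD~200..HEAD") -> list[tuple[str, str]]:
    """Return (commit_hash, declared_tool_or_empty) for each commit.

    Assumes the repo has enough history for the range; adjust as needed.
    """
    # %(trailers:key=...,valueonly) prints only the trailer's value;
    # %x09 is a literal tab, %x2C a comma joining multiple trailers.
    fmt = "%H%x09%(trailers:key=AI-Assisted,valueonly,separator=%x2C)"
    log = subprocess.run(
        ["git", "log", f"--format={fmt}", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    rows = []
    for line in log.splitlines():
        commit, _, tool = line.partition("\t")
        rows.append((commit, tool.strip()))
    return rows

if __name__ == "__main__":
    for commit, tool in ai_provenance():
        print(f"{commit[:12]}  {tool or 'undeclared'}")
```

A trailer is not proof that a meaningful review happened. But it turns "which code did the AI write?" from an unanswerable question into a query, and in discovery that is the difference between mounting a defense and not having one.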
Amazon scrubbed references to AI involvement from internal incident documents.1 Set aside the PR implications. Consider the legal ones. If those documents are subject to a litigation hold — if any shareholder, vendor, or customer files suit over the outages — destroying or altering those documents could constitute spoliation of evidence. The instinct to minimize AI involvement is understandable. The legal consequences of acting on that instinct could be worse than the original liability.
And companies that never tracked AI involvement at all aren't safer for having nothing to scrub. They're more liable, because they can't mount a defense.
The Missing Case
Here is the most revealing fact in this entire analysis: as of March 2026, there is no court case in which AI-generated code was specifically identified as the cause of a security breach, outage, or other measurable harm leading to litigation.18 Not one.
The litigation landscape is dominated by copyright and IP cases — over 70 infringement lawsuits by copyright owners against AI companies.18 The Doe v. GitHub class action, filed in November 2022, has had most claims dismissed after a judge ruled that plaintiffs failed to produce "a single piece of evidence" that Copilot generated code identical to their own. Two counts of breach of contract and open-source license violation survive, with discovery ongoing and an appellate brief filed with the Ninth Circuit.18 Adjacent cases — Bartz v. Anthropic ($1.5 billion settlement for training data), NYT v. OpenAI, Getty v. Stability AI — all concern training data and copyright, not code quality or downstream harm.
Nobody has been the test case yet. Nobody has been the plaintiff who says: "AI-generated code caused this breach, this outage, this financial loss, and here's who's responsible."
The absence of the case is itself the story. It doesn't mean AI-generated code isn't causing harm — the Amazon outages suggest otherwise. It means nobody has yet connected the chain from AI output to specific damage to a legal theory that survives a motion to dismiss. The discovery ruling in In re OpenAI makes that chain easier to establish going forward. California AB 316 eliminates the "AI did it" defense. The AI LEAD Act, if passed, would create the cause of action. The pieces are assembling.
When the first test case arrives, it will not be a clean, simple lawsuit. It will be a multi-party morass involving the AI provider, the deploying company, the individual developer, the insurance carrier (arguing its AI exclusion applies), and possibly a state attorney general. It will involve discovery requests for prompt logs that don't exist, provenance tracking that was never implemented, and review records that were never kept. The AI provider will point to its ToS. The company will point to the developer. The developer will point to the AI. And a judge will have to decide, for the first time, where the buck actually stops.
The Gap
The providers capped their liability at $100 to $500. The insurance industry is writing exclusions faster than coverage. The legal framework assumes a human exercised judgment — and half the time, they didn't. Forty-five states are legislating in different directions, and most companies can't even identify which code in their codebase is AI-generated.
This is the liability gap. It is not a theoretical risk. It is an active, growing, uninsured exposure that sits in the codebase of every company that has adopted AI coding tools — which is to say, effectively every software company on Earth. The gap exists because the technology moved faster than the law, and the contracts were written to protect the providers, not the users.
The first test case will rewrite how software is built, reviewed, insured, and deployed. It will establish whether AI providers can disclaim their way out of responsibility for the tools they sell. It will determine whether "the developer approved it" is a defense or a fiction. It will decide whether a $100 liability cap can survive contact with a $100 million loss.
Until that case arrives, every company shipping AI-generated code into production is operating in the gap. They're relying on disclaimers that may be unenforceable, insurance that may not cover them, review processes that may not withstand scrutiny, and provenance tracking that doesn't exist.
That's not a legal strategy. It's a bet that someone else will be the test case first. And right now, every company in the industry is making the same bet, simultaneously, with the same chips.
Somebody's going to lose.
Disclosure
This article was written with the assistance of Claude, an AI made by Anthropic — the same Anthropic whose terms of service cap liability at $100 and disclaim consequential damages "even if foreseeable." We reviewed every word, which Anthropic's lawyers would want us to mention and which we found mildly haunting given the subject matter. If you find errors, legal inaccuracies, or newly filed cases that make this analysis obsolete by the time you read it, we want to hear about it: bustah_oa@sloppish.com.
Sources
1. Amazon AI coding mandate and outage timeline, compiled from multiple sources: Medium (Heinan Cabouly), The Register, Fortune, TechRadar. Internal document references reported by multiple outlets; Amazon's official position is "only one of the incidents involved AI."
2. ToS Watchdog, GitHub Copilot analysis. Fairness score: 52/100; Liability & Indemnification sub-score: 38/100. Link.
3. GitHub Copilot Product Specific Terms (March 2026). PDF. $500 aggregate liability cap; zero IP indemnification on the Individual plan; enterprise indemnification conditional on content-filter usage.
4. Cursor (Anysphere) Terms of Service. $100 or six months of fees liability cap; zero liability for beta services; user indemnification of Anysphere. Link.
5. Anthropic Consumer Terms of Service. $100 or six months of fees aggregate liability cap; services provided "as is." Link.
6. Sonar developer trust survey, January 2026: 96% do not fully trust AI-generated code's functional accuracy; 48% always verify before committing. Also: ByteIota, "AI Code Review Bottleneck Kills 40% of Productivity." Link.
7. LogRocket, "Why AI coding tools shift the real bottleneck to review." 6.4x code-volume comparison. Link.
8. SmartBear/Cisco code review case study. 2,500 reviews; 3.2 million lines of code. Defect detection degrades after 60-90 minutes. PDF.
9. LinearB, 2026 Software Engineering Benchmarks Report. Analysis of 8.1 million PRs: AI-generated PRs wait 4.6x longer for review; 32.7% acceptance rate vs. 84.4% for human-written code. Link.
10. Winston & Strawn, "A New Intermediary: Artificial Intelligence and the Learned Intermediary Doctrine." Link.
11. AMA Journal of Ethics, "Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI?" February 2019. Link.
12. AI LEAD Act (S. 2937), introduced September 29, 2025, by Senators Durbin (D-IL) and Hawley (R-MO). Federal cause of action for AI product liability; renders ToS liability caps unenforceable; retroactive application. Congress.gov text. Analysis: National Law Review.
13. RAND Corporation, "U.S. Tort Liability for Large-Scale AI Damages." Link. Also: RAND Tort Liability Report.
14. Insurance Business Magazine, "AI exclusions are creeping into insurance — but cyber policies aren't the issue yet." Link.
15. WTW, "Emerging AI Exposures and the Role of Cyber and E&O Insurance," March 2025. Link. Also: IAPP, "How AI Liability Risks Are Challenging the Insurance Landscape." Link.
16. Harvard Law School Forum on Corporate Governance, "The Hidden C-Suite Risk of AI Failures." Link.
17. NSPE Board of Ethical Review, "Use of Artificial Intelligence in Engineering Practice." Full personal liability for PE-stamped AI-generated work; "responsible charge" doctrine. Link.
18. State and federal legislation, litigation, and discovery, compiled from multiple sources: California AB 316 (Legislature text; Baker Botts analysis); 45 states / 1,561 bills (MultiState AI Tracker; Wiley Law; IAPP); EU AI Act (official page; SIG summary); Doe v. GitHub (BakerHostetler tracker; case updates); In re OpenAI discovery ruling and AI infringement tracker (National Law Review; McKool Smith).
