Scholar argues that the development of credit-reporting regulation offers lessons for AI governance.
In October 2025, social media activist Robby Starbuck sued Google after its artificial intelligence (AI) chatbot repeatedly generated false information about him, including accusations of sexual assault, fabricated criminal records, and invented court documents.
Google’s motion to dismiss Starbuck’s lawsuit rests on three common law defamation arguments: The AI did not “publish” the statements because users triggered them through queries; Starbuck could not identify any specific person who saw or relied on the outputs; and the tools were experimental and explicitly warned of inaccuracies. Google also argues that, as a public figure, Starbuck could not establish that the chatbot acted with “actual malice,” as required by the First Amendment of the U.S. Constitution. Google describes the generated information as “hallucinations,” treating them as unavoidable system properties rather than institutional failures.
Google’s characterization of the information its chatbot generated about Starbuck highlights a structural problem. Large language model training corpora, the collections of text used to train these models, often lack documented provenance, leaving developers unable to trace or verify the inputs that shape model outputs. AI systems aggregate dispersed, unverifiable data and produce errors that affect individuals, with no clear point of accountability.
Before 1970, consumer reporting agencies (CRAs) made the same arguments. They characterized themselves as passive compilers, denied publication or third-party reliance, and claimed source verification was impossible. Courts routinely accepted these positions under qualified-privilege and malice doctrines, so CRAs imposed significant costs on individuals while facing minimal legal risk.
The U.S. Congress responded with the Fair Credit Reporting Act (FCRA), which bypassed common law defamation and privacy torts entirely. In this 1970 statute, Congress replaced intent-based liability with statutory duties requiring CRAs to maintain “reasonable procedures to assure maximum possible accuracy,” disclose information sources, and reinvestigate or delete disputed items. The 1970 FCRA rejected the view that error was an unavoidable feature of system complexity and instead placed full responsibility on CRAs for ensuring the accuracy and verifiability of the data they reported.
The 1970 FCRA placed primary responsibility on CRAs, but early legal scholarship and later legislative history made clear that many inaccuracies originated with furnishers rather than with the reporting agencies. Contemporary analyses noted that CRAs faced recurring accuracy problems because many information sources—particularly those behind “character” reports—lay outside the CRA’s control and could not be independently verified. When reinvestigation failed to confirm the underlying data, the CRA could only delete or flag the item.
Congress clarified and expanded these obligations in its 1996 amendments by requiring furnishers to maintain written accuracy procedures, investigate disputes, and ensure that corrections were transmitted throughout the system. Over time, liability moved upstream as regulators recognized that accuracy is determined at the point of data creation, not at the bureau level alone.
This trajectory shows how governance adapts when predictive systems rely on dispersed inputs: Responsibility migrates to the actors best positioned to verify accuracy, and data must remain traceable from origin to output. Obligations attach where verification is possible. When no accountable source exists, responsibility defaults to the institution that aggregates the information.
The history of the FCRA shows how predictive systems behave when unverifiable information enters their pipelines. Between 1970 and 1996, Congress learned that CRA-level accuracy rules were insufficient without parallel rules for the entities supplying the data. Modern AI systems present a similar problem: Some training inputs have identifiable origins and verification pathways, while others, particularly large volumes of scraped text, do not.
A limited subset of AI training data resembles FCRA furnishers. Licensed news archives, academic publishers, and medical databases offer documented provenance, maintained records, and verification capacity. Scraped or unlicensed web text occupies the same structural location as pre-FCRA data. It influences model outputs but has no accountable source and no feasible verification pathway. The fabricated allegations at issue in Starbuck v. Google, for example, have no verifiable source and no accountable furnisher.
Contemporaneous scholarship provides a clear record of how the credit reporting industry adapted after the FCRA’s enactment. The statute’s procedural requirements—source disclosure, reinvestigation, and accuracy standards—quickly displaced the use of unverifiable “character” information that had previously been drawn from neighbors or employers. Agencies adjusted their reporting practices without the operational disruption critics had predicted. Rather than constricting the industry, Congress standardized recordkeeping, narrowed inputs to verifiable data, and clarified responsibilities across participants. The result was a more consistent and transparent reporting system, organized around traceability and accuracy rather than discretionary judgment.
The 1970 FCRA established statutory standards, administrative authority, and civil remedies that have governed credit reporting for five decades. Its core principle—assigning responsibility for system outputs to the actors with verification capacity—is technology-agnostic. The alternative is predictable: Courts will try to stretch defamation and privacy torts, which proved structurally inadequate for credit reporting, to cover algorithmic systems.
Starbuck’s suit illustrates the pattern. Google deploys the same defenses credit bureaus used before the FCRA: No identifiable publisher, no provable reliance, and a claim that system complexity makes verification impossible. Under the common law, courts evaluate fabricated allegations with doctrines built for human speakers acting with intent. The FCRA offered a different approach: statutory duties that tie responsibility to data sources, require reinvestigation of disputes, and establish civil liability without proof of malice. Applying a provenance-based framework modeled on the FCRA would give AI systems a comparable structure: clear rules that assign responsibility to the actors with verification capacity, rather than relying on intent-based doctrines that cannot govern algorithmic systems.

