A non-partisan framework grounded in four research traditions: the science of epistemic health (Ashby, Friston, Levin); the economics of AI displacement and positive freedom (Clippinger, Snyder); the cybernetics of adaptive governance; and the democratic theory of AI normative competence (Hadfield, Trivedi, Hadfield-Menell). Designed for members of Congress, committee counsel, and legislative staff.
How to read this document. Each pillar opens with a plain-language summary of the problem and what Congress can do about it. Technical concepts are explained in plain English. Each pillar ends with a numbered "Legislative asks" box listing specific, actionable items. The seven pillars are independent; each can be advanced separately by different committees with jurisdiction.
The question before Congress is not whether AI will transform American life. It already is. The question is whether that transformation benefits everyone, or only those who can already afford the best lawyers, doctors, and financial advisors. And whether it reinforces American democracy, or quietly hollows it out.
This framework synthesizes four bodies of research into seven concrete policy pillars. The first three address access and economic fairness. The fourth addresses data sovereignty and identity. The fifth addresses governance design. The sixth addresses accountability. The seventh addresses a challenge that no existing AI policy framework has fully confronted: billions of AI agents will soon be woven into the daily fabric of American economic and civic life, making thousands of decisions that constitute, or corrode, democratic social order.
A democracy is not just a set of rules written in a constitution. It is produced, daily, by the behaviors and beliefs of its citizens, by their willingness to comply with laws, to hold others to account, and to treat one another as civic equals. When AI agents participate in that daily life, they either reinforce or undermine the democratic fabric. Getting this right is as important as any other question in this document.
Today, a wealthy person can pay $500 an hour for an attorney, financial advisor, or medical navigator equipped with the best AI tools. A working-class person cannot. This pillar uses the same model Congress used in 1936, the Rural Electrification Administration, to make sure that gap closes rather than widens.
The Rural Electrification Administration (1936) brought electricity to 90% of rural American farms within 20 years by lending money to cooperatives and local utilities. The REA did not replace markets; it extended them.
Establish a federal AI Access Infrastructure Fund to deploy foundational AI capability, including AI-assisted legal services, medical navigation, educational tutoring, and benefits counseling, to underserved communities, rural areas, tribal nations, and public institutions.
Rather than sending displaced workers a check, this policy funds their access to AI-powered tools that expand what they can actually do. Think GI Bill, not welfare.
Redirect AI surplus revenues into Freedom Pools, capability accounts funding AI-augmented services in legal, medical, educational, and financial domains. Administered through existing community institutions.
Any AI system trained substantially on publicly financed data must make its foundational capability available to public institutions at no cost. The public financed the training data; the public should access the resulting intelligence.
This mirrors AT&T's obligation to allow competitors on its telephone network. A company that owns the transmission lines should not also own all the appliances you plug into them.
Enforce structural separation between AI infrastructure providers and AI application providers above a defined market-share threshold.
Authorize and fund an AI Access Infrastructure Fund modeled on the REA.
Require public licensing of AI models trained substantially on federally funded data.
Direct the FTC and DOJ to develop structural separation guidelines for AI infrastructure providers above defined market-share thresholds.
When AI replaces a paralegal, a radiology technician, or a call-center worker, the company captures most of the gain. The worker absorbs most of the loss. This is not a natural law; it is a policy choice.
Under experience rating, the unemployment insurance system charges higher payroll taxes to employers whose layoff practices drive up unemployment claims. The AI productivity levy applies the same logic to AI-driven displacement.
Impose a modest productivity levy, starting at 1–2%, on documented labor-cost savings from AI-driven automation at scale. Revenue is ringfenced into Freedom Pool capability accounts and worker retraining programs.
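To make the arithmetic concrete, a minimal sketch (the figures and the function name are hypothetical; the statute would define "documented labor-cost savings" and the verification process):

```python
def productivity_levy(documented_savings: float, rate: float = 0.015) -> float:
    """Levy owed on documented annual labor-cost savings from AI automation.

    The 1.5% default sits in the framework's proposed 1-2% starting band;
    documented_savings would be defined by statute and verified in audit.
    """
    return documented_savings * rate

# Illustrative: a firm documenting $20M in annual labor-cost savings
# would owe $300,000, ringfenced for capability accounts and retraining.
print(f"${productivity_levy(20_000_000):,.0f}")  # $300,000
```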
Require advance economic-impact assessment before AI deployment projected to displace more than 1,000 workers in a sector within 24 months. This is a disclosure and planning requirement, not a deployment prohibition.
Establish a legal right for workers in AI-affected sectors to receive employer-funded AI-augmentation training before displacement, not after.
Directly increase federal investment in domains where human presence and relational intelligence retain irreplaceable value: elder care, childcare, community health, skilled trades, environmental stewardship, and civic participation.
Enact an AI Productivity Levy at 1–2% of documented labor-cost savings, with revenue ringfenced for capability accounts.
Amend the WARN Act to require pre-displacement AI-augmentation training.
Increase funding for the care economy, skilled trades, and community-based work through existing channels (Perkins Act, WIOA).
AI recommendation systems are now the most powerful editors in human history. Research from computational biology and neuroscience allows us to describe, with scientific precision, what happens when a society's information system suppresses variety: the society becomes brittle, unable to respond to the world as it actually is.
Ashby's Law of Requisite Variety holds that a system must generate as much internal variety as exists in the environment it navigates. Friston's Active Inference framework formalizes this: pathology occurs when a system's boundary becomes too rigid, shutting out information that would update stale beliefs. AI recommendation algorithms that narrow user worldviews are, in this framework, a public-health concern.
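In information-theoretic form, one standard simplified statement of Ashby's law (the notation here is ours, not the cited sources'):

```latex
% Entropy form of the Law of Requisite Variety: the variety of outcomes
% O that disturbances D can force on a system is bounded below by the
% disturbances' variety minus the regulator R's variety.
H(O) \geq H(D) - H(R)
% Only variety in the regulator can absorb variety in the environment.
% An information system that suppresses H(R) across a population leaves
% that population unable to match the variety of the world it navigates.
```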
Any AI system mediating information access for more than 10 million U.S. users must submit to biennial third-party audits demonstrating that its recommendation algorithm does not systematically narrow the range of perspectives users encounter.
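What such an audit could measure is straightforward to sketch. The following toy example (an illustrative metric, not a mandated methodology) compares the Shannon entropy of perspective categories available to a user against those the recommender actually serves:

```python
import math
from collections import Counter

def exposure_entropy(items: list[str]) -> float:
    """Shannon entropy (bits) of the perspective categories in a feed."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Illustrative data: categories of content available vs. recommended.
available = ["econ", "health", "local", "science", "politics"] * 20
recommended = ["politics"] * 80 + ["econ"] * 20

narrowing = 1 - exposure_entropy(recommended) / exposure_entropy(available)
print(f"variety reduction: {narrowing:.0%}")  # ~69% in this toy example
```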
AI systems used in healthcare, legal advice, financial guidance, and education must represent their own uncertainty honestly and not present false information with unwarranted confidence.
Congress regulated cigarette advertising targeted at minors and restricted marketing of addictive pharmaceutical products. The engineering of addictive AI engagement is the same category of harm.
Extend FTC unfair-practices authority to cover AI engagement systems that deliberately exploit variable-reward psychological loops to maximize time on platform.
AI systems deployed in public-interest contexts must represent scientific consensus accurately. Heterodox views can be expressed but must be labeled as contested. Deliberate misrepresentation in public-health contexts is treated as consumer fraud.
Amend Section 230 to remove liability protection for algorithmic amplification decisions above a defined user-count threshold.
Direct the FTC to develop information-health rulemaking covering addiction-by-design and mandatory audit requirements.
Require algorithmic transparency reports from platforms above 10 million U.S. users, filed annually with the FTC.
Americans have almost no control over their own digital data. This pillar gives them ownership of their information, using technology that already exists.
A Zero Knowledge Proof lets you mathematically prove a specific fact without revealing any underlying data. The verifier learns only what they need to know. Today, proving you are over 21 requires handing over your driver's license. A ZKP generates a cryptographic proof on your own device. The verifier gets "yes" or "no." No data leaves your device.
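The mechanic of proving without revealing can be shown in a few lines. Below is a toy Schnorr proof of knowledge (non-interactive via Fiat-Shamir), for illustration only: the parameters are far too small for real security, and an actual over-21 credential would use a range proof over a signed birthdate under production-grade standards.

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime and small generator, insecure in practice.
p = 2**127 - 1
g = 3

secret = secrets.randbelow(p - 1)   # the private value (never leaves device)
public = pow(g, secret, p)          # the published commitment g^secret mod p

def prove(secret: int, public: int) -> tuple[int, int]:
    """Prover: demonstrate knowledge of `secret` without revealing it."""
    r = secrets.randbelow(p - 1)
    commitment = pow(g, r, p)
    # Fiat-Shamir: the challenge comes from a hash, so no interaction needed.
    c = int.from_bytes(hashlib.sha256(f"{commitment}:{public}".encode()).digest(), "big")
    return commitment, (r + c * secret) % (p - 1)

def verify(public: int, commitment: int, response: int) -> bool:
    """Verifier: learns a single bit, yes or no, and nothing else."""
    c = int.from_bytes(hashlib.sha256(f"{commitment}:{public}".encode()).digest(), "big")
    return pow(g, response, p) == (commitment * pow(public, c, p)) % p

print(verify(public, *prove(secret, public)))  # True
```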
The HITECH Act established interoperability standards for electronic health records. The data wallet follows the same model: federal standards, open architecture, competitive implementation.
Authorize a standard personal data wallet infrastructure: a secure, encrypted digital container on the individual's device for storing verified credentials. The federal government sets technical standards; private and nonprofit entities build the wallets.
Any entity collecting personal information may request only the specific data element needed for the stated purpose, and nothing more. ZKP-based verification makes minimum-necessary data sharing the legal default.
Government-run verification systems must support local ZKP verification by 2030. The server learns only "eligible: yes/no." No central database of citizen activity is created.
Direct NIST to develop standards for cryptographically verified digital identity credentials, built on open W3C standards rather than a government-controlled database.
Personal data in a citizen's data wallet cannot be used to train AI systems without explicit, specific, revocable consent, separate from any general terms-of-service agreement.
Enact the American Data Wallet Act with NIST-developed open standards and federal agency credential issuance within 36 months.
Codify the right to data minimization as a federal privacy baseline.
Require federal verification systems to support ZKP-based local verification by 2030.
Direct NIST to develop verified digital identity credential standards for federal contractors and benefit disbursement.
Amend HIPAA, FERPA, and FCRA to require explicit, revocable consent for use of personal data in AI training.
The biggest risk in AI regulation is getting it wrong in either direction: rules too rigid to accommodate a fast-moving technology, or too lax to prevent compounding harms. The FAA does not write aviation regulations once and leave them forever; it revises them as aircraft and the safety record evolve. AI regulation needs the same model.
Regulate at the lowest effective level: consumer-facing harms at the state level, foundational model safety at the federal level, planetary-scale risks via international treaty.
The CFPB was created to fill a cross-cutting regulatory gap. An AI Safety Board plays the same coordination role without displacing sector regulators.
Establish an independent AI Safety and Opportunity Board with enforcement authority, mandatory two-year reassessment cycles, automatic sunset provisions, and technical staff at competitive compensation.
Any federal body with AI enforcement authority must maintain staff with demonstrated technical expertise. Authorize above-GS compensation for these positions.
Establish governance councils for AI in healthcare, legal services, education, and critical infrastructure, including researchers, practitioners, civil-society advocates, and affected communities.
Authorize an AI Safety and Opportunity Board with cross-agency coordination authority and enforcement power.
Require sunset clauses and review triggers in all AI-specific legislation.
Authorize above-GS compensation for technical AI staff at regulatory agencies.
A hospital that misdiagnoses a patient can be sued for malpractice. An AI system that makes the same misdiagnosis typically cannot. That asymmetry is not sustainable.
A car manufacturer meeting federal safety standards gets liability protection. One that knowingly installs defective airbags does not. The same logic applies to AI systems.
Establish product-liability standards for AI systems causing documented harm in healthcare, legal services, financial advice, criminal justice, and hiring. Compliance with safety standards earns a liability cap.
Require AI operators in high-stakes domains to report system failures to a central registry modeled on FDA MedWatch and the FAA Aviation Safety Reporting System.
In any consequential decision, individuals have a legally enforceable right to human review of any AI-generated recommendation.
Establish federal licensing for AI auditors, analogous to CPAs. A licensed auditor who certifies a system that causes widespread harm bears professional liability.
Require pre-deployment national-security review for AI systems that could autonomously direct critical infrastructure, accelerate WMD development, or act outside human oversight. Analogous to nuclear materials licensing.
Enact tiered AI product liability for high-stakes domains with a compliance-based safe harbor.
Establish a federal AI adverse-event registry with mandatory reporting.
Codify a right to human review of consequential AI decisions.
Direct NIST to develop AI auditor certification standards within 18 months.
Democracies are produced daily by the behaviors and beliefs of millions of people. When AI agents participate in that daily life, they either reinforce or corrode the democratic fabric.
Hadfield, Trivedi, and Hadfield-Menell (Knight First Amendment Institute, 2026) identify a challenge no current AI policy addresses: as AI agents take on open-ended economic tasks, they will constantly make choices that implicate democratic values. Democracy cannot be pre-programmed. Human beings navigate normative incompleteness through what Adam Smith called the "impartial spectator." AI agents need a digital equivalent: normative competence.
Require AI agents in high-impact domains to demonstrate normative competence: the ability to detect and attribute sanctions, adjust behavior accordingly, and communicate normative costs to the human principal.
Model Specification Institutions (MSIs) are democratically constituted bodies that produce normative standards, generate compliant training data, and provide real-time APIs for AI agents to query at the moment of decision, a role analogous to the one courts play for human actors.
High-impact AI agents must refuse transactions that demonstrably violate legal requirements or democratic norms, as a responsible human business partner would.
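Mechanically, the decision flow might look like the following sketch. Everything here is hypothetical: no MSI API exists yet, and the field names and example rule are ours.

```python
from dataclasses import dataclass

@dataclass
class MSIRuling:
    """Hypothetical response from a Model Specification Institution API."""
    permitted: bool
    norm_cited: str
    normative_cost: str  # plain-language explanation owed to the principal

def query_msi(action: str) -> MSIRuling:
    # Stand-in for a real-time call to a domain MSI; in deployment this
    # would be an authenticated network request, not a local table.
    rules = {
        "deny_claim_without_review": MSIRuling(
            permitted=False,
            norm_cited="right to human review of consequential decisions",
            normative_cost="Claim denials require human sign-off first.",
        ),
    }
    return rules.get(action, MSIRuling(True, "", ""))

def execute(action: str, principal_log: list[str]) -> bool:
    """Normatively competent agent loop: query, comply, and explain."""
    ruling = query_msi(action)
    if not ruling.permitted:
        # Refuse as a responsible human business partner would, and
        # communicate the normative cost back to the human principal.
        principal_log.append(f"Refused {action}: {ruling.normative_cost}")
        return False
    return True

log: list[str] = []
execute("deny_claim_without_review", log)
print(log)
```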
Extend certificate authority infrastructure to authenticate AI agent compliance. Develop reputation networks and agent-to-agent handshake protocols for mutual verification.
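A minimal sketch of the handshake idea follows. An HMAC stands in for the certificate authority's asymmetric signature so the example stays self-contained; a real deployment would use asymmetric signatures per NIST digital-signature standards, and the names and message format here are hypothetical.

```python
import hashlib
import hmac

# Shared demo key; in practice the CA signs with a private key and agents
# verify against its published public key, so no secret is shared.
CA_KEY = b"demo-certificate-authority-key"

def issue_credential(agent_id: str, standard: str) -> bytes:
    """CA attests that `agent_id` passed the named compliance standard."""
    return hmac.new(CA_KEY, f"{agent_id}|{standard}".encode(), hashlib.sha256).digest()

def handshake(agent_id: str, standard: str, credential: bytes) -> bool:
    """Counterparty agent verifies the attestation before transacting."""
    expected = hmac.new(CA_KEY, f"{agent_id}|{standard}".encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, credential)

cred = issue_credential("agent-42", "normative-competence-v1")
print(handshake("agent-42", "normative-competence-v1", cred))   # True
print(handshake("agent-42", "normative-competence-v1", b"xx"))  # False
```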
Expressly prohibit using AI agents to coerce compliance with norms not established through legitimate democratic processes, even if preferred by the deploying entity.
Direct the AI Safety Board to develop normative competence standards with phased implementation within 36 months.
Fund MSI pilot programs in healthcare, legal services, and hiring.
Require AI agents in federal contracting to meet democratic-compliance standards equivalent to human contractors.
Direct NIST to develop AI agent certificate authority and reputation network standards.
Prohibit using AI agents for norm imposition outside legitimate democratic processes.
A credible framework names the genuine conflicts it cannot fully resolve. Below are six tensions where this framework cannot fully satisfy every legitimate value at once.
Open models democratize access and accelerate research.
Open weights lower the barrier for catastrophic misuse.
This framework favors openness below a defined capability threshold and mandatory review above it.
Product liability protects individuals and creates safety incentives.
Liability concentrates development in large companies, crowding out startups.
The safe harbor in Pillar VI addresses this, but the administrative burden is real for small teams.
Information-health audits protect democracy from algorithmic radicalization.
Government "information diversity" standards risk becoming political speech control.
The framework regulates process (does the algorithm increase or decrease variety?) rather than content.
Moving quickly on beneficial AI saves lives now.
Deployment outpacing governance causes displacement and loss of trust.
Pillar II's pre-deployment assessment slows the most disruptive deployments; this trade-off is contested by those who argue that delay itself costs lives.
Data wallets and ZKP give individuals genuine control and privacy.
Cryptographic identity infrastructure creates new attack surfaces.
Open W3C-standard architecture is deliberate. No central registry, no mandatory adoption, no surveillance backdoors.
Normative competence protects the fabric of democratic life.
Government-defined "democratic norms" could impose partisan conformity.
The MSI model separates norm specification from government control. The prohibition on top-down norm imposition applies to government as much as to private actors.
Source for Pillar III. Ashby's Law of Requisite Variety; Friston's Active Inference; Levin's bioelectric network model of collective intelligence.
Source for Pillars I and II. The commoditization cascade, the Freedom Pool model, and the REA as precedent. Grounded in Snyder's positive/negative freedom distinction.
Source for Pillar V. Nested Markov blanket model of governance, Ashby's ultrastability principle, active inference model of institutional adaptation.
Primary source for Pillar VII. Democracy as normative social order; normative competence; Model Specification Institutions; certificate authorities as democratic infrastructure.
W3C DID and Verifiable Credential standards (Pillar IV). NIST Digital Identity Guidelines. FDA MedWatch and FAA ASRS as models for Pillar VI.