This framework offers six pillars of AI policy that can command broad support across party lines. Each pillar addresses a distinct, concrete problem — from who gets access, to who controls their own data, to who is held responsible when AI causes harm. None requires picking a political winner. All are grounded in the American tradition of harnessing transformative technology for the public good.
Universal public access
AI should reach every American — not just those in wealthy zip codes. Modeled on rural electrification.
Information health & integrity
AI platforms that systematically narrow what people see — like an addictive drug — should be regulated accordingly.
Smart, adaptive governance
Regulation should be targeted, technically competent, and built to update as fast as the technology does.
Jobs & economic transition
Redirect AI gains toward expanding what displaced workers can actually do — not just what they can buy.
Real accountability
When AI causes documented harm, the companies that deployed it should bear real, proportionate consequences.
Data & identity sovereignty
Every American should own their own data — with a secure personal wallet and the right to share nothing more than necessary.
Pillar I — Universal public access to AI
AI is rapidly becoming as essential as electricity, broadband, or running water. Without deliberate policy, its benefits will concentrate among the already-privileged — exactly what happened with early telephone and electricity networks before federal intervention.
Today, a wealthy person can pay $500/hour for an AI-powered attorney, financial advisor, or medical navigator. A working-class person cannot. This pillar uses the same model Congress used in 1936 — the Rural Electrification Administration — to make sure that gap closes rather than widens. The federal government does not need to build the AI itself; it funds access to it.
1.1 The AI Access Act — a federal infrastructure fund
The Rural Electrification Administration (1936) brought electricity to 90% of rural American farms within 20 years by lending money to cooperatives and local utilities. Before it, markets had written off rural America as unprofitable. The REA did not replace markets — it extended them. An AI Access Fund would do the same for communities underserved by today’s AI economy.
Establish a federal AI Infrastructure Fund to deploy foundational AI capability — AI-assisted legal services, medical navigation, educational tutoring, and benefits counseling — to underserved communities, rural areas, tribal nations, and public institutions. Funding goes to local nonprofits, libraries, community colleges, and cooperatives, not to large technology companies.
1.2 Capability accounts, not just cash
Rather than sending displaced workers a check — which doesn’t give them the skills or tools to navigate an AI economy — this policy funds their access to AI-powered tools that expand what they can actually do: free AI legal representation, AI-assisted job training, health navigation, and financial planning. Think GI Bill, not welfare.
AI surplus revenues (see Pillar IV) are directed into “Freedom Pools” — capability accounts that fund AI-augmented services in legal, medical, educational, and financial domains. These are not cash transfers; they are purchasing power for services that were previously available only to the wealthy. Administered through existing community institutions.
1.3 Public access to publicly funded AI
Any AI system trained substantially on data that the public paid to create — federally funded research, public court records, Medicare claims data, public school curricula — must make its foundational capability available to public institutions at no cost. The public financed the training data; the public should have access to the resulting intelligence.
The NIH is the largest public funder of biomedical research in the world. If a company uses that research to train a medical AI, public hospitals and community clinics should be able to use that AI for free. Private applications built on top of it can still be sold commercially — this is about the base layer, not the full product.
1.4 Structural separation — owning the pipes and the services
Enforce structural separation between companies that own AI infrastructure (computing power, foundational models, data centers) and companies that sell AI-powered services to consumers, above a defined market-share threshold. Prevent the same company from controlling both the highway and every business that operates on it.
This mirrors the logic behind requiring AT&T to allow competitors on its telephone network, and the structural separation rules applied to electric utilities. A company that owns the transmission lines should not also own all the appliances you plug into them.
Legislative asks — Pillar I
Authorize and fund an AI Access Infrastructure Fund modeled on the REA, administered through USDA, Commerce, or a new independent board.
Require public licensing of AI models trained substantially on federally funded data, for use by public institutions.
Direct the FTC and DOJ to develop structural separation guidelines for AI infrastructure providers above defined market-share thresholds.
Pillar II — Information health & democratic integrity
AI recommendation systems — the algorithms that decide what you see on social media, in your news feed, in your search results — are now the most powerful editors in human history. They decide what 330 million Americans pay attention to. Their design choices have public-health consequences.
Research shows that certain AI engagement systems work like an addictive substance — they narrow what users see and push them toward more extreme content to maximize the time they spend on the platform. A platform that deliberately narrows a citizen’s view of the world to keep them scrolling is doing something that should be regulated the same way we regulate other products that cause documented harm to public health. This is not about censoring speech — it is about the engineering of attention.
2.1 The information-variety audit requirement
Any AI system that mediates information access for more than 10 million U.S. users must submit to biennial third-party audits demonstrating that its recommendation algorithm does not systematically narrow the range of perspectives, sources, and topics that users encounter over time. Systems that demonstrably shrink users’ informational world are classified as information-health hazards subject to remediation plans and FTC enforcement.
If a platform’s algorithm is provably making you see only one kind of viewpoint over time — regardless of which viewpoint — that is an engineering defect with public-health consequences, not a protected editorial choice. This audit targets the mechanism, not the content.
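One metric an information-variety audit could track is the Shannon entropy of the sources a user is shown, sampled over time: a sustained drop in entropy means the feed is narrowing onto fewer sources. The sketch below is illustrative only — the function name and the idea of auditing source distributions are assumptions of this example, not part of any mandated standard.

```python
# Minimal sketch of a variety metric: Shannon entropy (in bits) of the
# distribution of sources appearing in a sample of a user's feed.
# A balanced feed over many sources scores high; a collapsed feed scores 0.
import math
from collections import Counter

def source_entropy(sources: list[str]) -> float:
    """Shannon entropy of the source distribution in one feed sample."""
    counts = Counter(sources)
    n = len(sources)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A feed balanced across four sources carries 2 bits of source variety;
# a feed showing a single source carries none.
balanced = ["a", "b", "c", "d"] * 25
collapsed = ["a"] * 100
```

An auditor would compute this over successive audit periods for sampled users; systematic narrowing shows up as a downward trend rather than any single reading.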
2.2 Accuracy and honesty requirements for high-stakes AI
AI systems used in healthcare, legal advice, financial guidance, and education must meet a minimum accuracy and honesty standard: they must represent their own uncertainty, acknowledge when they do not know something, and not present false information with unwarranted confidence. Systems that fail this standard in documented ways are treated as defective consumer products under existing product-liability law.
2.3 Banning addiction-by-design in AI platforms
Extend FTC unfair-practices authority to cover AI engagement systems that are deliberately engineered to exploit psychological vulnerabilities — specifically, variable-reward loops (the same mechanism that makes slot machines addictive) used to maximize time on platform. Platforms must disclose when they use such mechanisms and offer users a simple opt-out (such as a chronological feed). Platforms targeting users under 18 face strict-liability standards.
Congress regulated cigarette advertising targeted at minors, required warning labels on tobacco, and restricted the marketing of addictive pharmaceutical products. The engineering of addictive AI engagement is the same category of harm — it is a design choice, not an accident.
2.4 Protecting scientific and expert consensus
AI systems deployed in public-interest contexts — government websites, public health communications, public school tools — must represent scientific consensus accurately and flag clearly when AI-generated content contradicts it. This is a disclosure requirement, not a speech restriction: heterodox scientific views can still be expressed, but must be labeled as contested. Deliberate misrepresentation of scientific consensus by AI systems in public-health contexts is treated as consumer fraud.
Legislative asks — Pillar II
Amend Section 230 to remove liability protection for algorithmic amplification decisions (as distinct from hosting decisions), for platforms above a user-count threshold.
Direct the FTC to develop an “information-health” rulemaking covering addiction-by-design and mandatory audit requirements for large recommendation systems.
Require algorithmic transparency reports from platforms above 10 million U.S. users, filed annually with the FTC and publicly available.
Pillar III — Smart, adaptive governance
The biggest risk in AI regulation is getting it wrong in either direction: regulating so broadly that beneficial innovation is chilled, or regulating so narrowly that serious harms go unaddressed. The solution is governance designed to update — not a one-time legislative act that immediately starts going stale.
The FAA does not write aviation regulations once and then leave them forever. It maintains ongoing technical expertise, runs investigations when things go wrong, and updates rules as aircraft technology changes. AI regulation needs the same model: a technically competent standing body with real enforcement authority, mandatory review cycles, and the ability to update rules without waiting for Congress to act on every technical development.
3.1 The subsidiarity principle — regulate at the right level
Not every AI problem requires a federal response. Regulate at the lowest level of government that has the information and authority to act effectively: local harms from consumer-facing apps belong at the state level; foundational model safety and infrastructure monopoly belong at the federal level; weapons, biosecurity, and planetary-scale risks require international treaty.
The federal government should set a floor — minimum standards that every state must meet — and let states go further if they choose. Federal law should preempt state law only when a patchwork of state rules would genuinely prevent a national market from functioning.
3.2 A standing AI Safety and Opportunity Board
Establish an independent AI Safety and Opportunity Board with: (a) enforcement authority, not just advisory function; (b) mandatory two-year reassessment cycles for all AI-specific regulations; (c) automatic sunset provisions unless affirmatively renewed; (d) a technical staff with compensation competitive with private-sector AI roles. The Board coordinates across FDA, FTC, SEC, CFPB, and DOL rather than duplicating their authority.
The CFPB (Consumer Financial Protection Bureau) was created to fill a gap that existing regulators — each focused on their own sector — could not address. An AI Safety Board plays the same cross-cutting coordination role, without displacing sector regulators.
3.3 Technical competency as a legal requirement for AI regulators
Any federal body with AI enforcement authority must maintain staff with demonstrated technical expertise in the systems they regulate — machine learning, algorithmic auditing, cybersecurity, and data science. Authorize competitive compensation (above GS scale) for these positions. A regulator that does not understand what it is regulating cannot regulate effectively.
3.4 Domain-specific governance councils
Establish advisory-to-binding governance councils for AI in healthcare, legal services, education, and critical infrastructure. Each council includes AI researchers, domain practitioners (doctors, lawyers, teachers), civil-society advocates, and representatives of affected communities. Councils issue domain-specific guidance subject to Board review. This brings technical and domain expertise together rather than leaving each to operate in isolation.
Legislative asks — Pillar III
Authorize an AI Safety and Opportunity Board via standalone legislation, with cross-agency coordination authority, enforcement power, and mandatory biennial review cycles.
Require all AI-specific provisions in federal legislation to include sunset clauses and mandatory review triggers tied to capability thresholds.
Authorize above-GS compensation for technical AI staff at regulatory agencies, funded by AI infrastructure levy proceeds.
Pillar IV — Jobs, economic transition & shared prosperity
AI is automating tasks across every sector of the economy — not just manufacturing, but legal research, financial analysis, medical documentation, and customer service. The question is not whether displacement will happen. The question is whether America has a plan for it.
When AI replaces a paralegal, a radiology technician, or a call-center worker, the company that deployed the AI captures most of the financial gain. The worker absorbs most of the loss. This is not a natural law — it is a policy choice. Congress can choose differently, as it did with the GI Bill, the Trade Adjustment Assistance Act, and the Workforce Investment Act. This pillar modernizes those models for the AI era.
4.1 AI productivity levy — shared gains for shared losses
Impose a modest productivity levy — starting at 1–2% — on documented labor-cost savings from AI-driven automation that replaces previously human-performed tasks at scale. Revenue is ringfenced into Freedom Pool capability accounts (see Pillar I) and worker retraining programs. The levy is calibrated not to slow beneficial adoption, but to ensure that when companies capture productivity gains, some portion flows to those who bore the cost of displacement.
The federal unemployment insurance system levies a payroll tax on employers whose layoff practices increase unemployment — it prices in a cost that would otherwise be externalized. The AI productivity levy applies the same logic to AI-driven displacement.
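The levy arithmetic is deliberately simple. A sketch, using integer basis points to avoid floating-point drift; the 150 bp (1.5%) default is the midpoint of the framework's 1–2% band, chosen here purely for illustration:

```python
# Illustrative only: how the productivity levy scales with documented savings.
def productivity_levy(labor_cost_savings: int, rate_bp: int = 150) -> int:
    """Levy owed (in dollars) on documented labor-cost savings.

    rate_bp is the levy rate in basis points (150 bp = 1.5%).
    """
    return labor_cost_savings * rate_bp // 10_000

# A firm documenting $40M in annual labor-cost savings from automation
# would owe $600,000 at the 1.5% midpoint rate.
levy = productivity_levy(40_000_000)
```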
4.2 Pre-deployment impact assessment for high-displacement sectors
Require advance economic-impact assessment — publicly filed — before AI deployment that is projected to displace more than 1,000 workers in a sector within 24 months. The assessment must address the timeline of displacement, the adequacy of existing retraining infrastructure, and the company’s transition-assistance plan. This is a disclosure and planning requirement, not a deployment prohibition.
4.3 Worker augmentation rights — training before displacement
Establish a legal right for workers in AI-affected sectors to receive employer-funded AI-augmentation training before displacement — not after. Workers who are trained to work alongside AI retain economic leverage. Workers who are simply replaced without preparation do not. Model this on existing WARN Act requirements but orient them toward proactive retraining rather than severance-after-the-fact.
4.4 Investing in human-advantage domains
Directly increase federal investment in domains where human presence, judgment, and relational intelligence retain irreplaceable value: elder care and childcare, community health work, skilled trades requiring on-site physical judgment, environmental stewardship, and civic participation. These domains are currently undervalued by markets despite their high social value — and AI cannot substitute for them. This is where comparative human advantage lives.
Legislative asks — Pillar IV
Enact an AI Productivity Levy at 1–2% of documented labor-cost savings from large-scale AI automation, with revenue ringfenced for worker capability accounts and retraining.
Amend the WARN Act to require pre-displacement AI-augmentation training for affected workers, not merely severance notice.
Increase funding for the care economy, skilled trades, and community-based work through existing workforce development channels (Perkins Act, WIOA).
Pillar V — Real accountability for real harm
Today, AI companies operate largely without liability for the harms their systems cause. A hospital that misdiagnoses a patient can be sued for malpractice. The maker of an AI system that produces the same misdiagnosis in the same hospital typically cannot. That asymmetry is not sustainable — and it creates the wrong incentives.
Accountability does not mean shutting down innovation. It means that companies which take shortcuts on safety bear the cost of those shortcuts, rather than externalizing them onto the public. A tiered liability model — with a safe harbor for companies that meet documented safety standards — creates strong financial incentives to invest in safety without penalizing responsible developers.
5.1 Product liability for AI systems in high-stakes domains
Establish product-liability standards for AI systems that cause documented harm in healthcare, legal services, financial advice, criminal justice, and hiring. Developers who demonstrate compliance with defined safety standards (independent audit, red-team testing, adverse-event reporting) qualify for a liability cap. Those who cannot demonstrate compliance bear proportionate liability. This mirrors the liability framework for medical devices and pharmaceuticals.
A car manufacturer that installs airbags that meet federal safety standards is protected from certain negligence claims. One that knowingly installs defective airbags is not. The AI liability framework applies the same logic — safety investment earns protection; recklessness does not.
5.2 Mandatory adverse-event reporting
Require AI operators in high-stakes domains to report documented system failures — misdiagnoses, wrongful denials, discriminatory outcomes, financial losses — to a central adverse-event registry, modeled on the FDA’s MedWatch system and the Aviation Safety Reporting System (administered by NASA on the FAA’s behalf to guarantee confidentiality). Aggregate data reveals systemic patterns invisible in individual incidents. Reports are shielded from liability discovery to encourage honest reporting, as with aviation safety reports.
5.3 The right to a human decision-maker
In any consequential government or regulated-industry decision — parole and bail, public-benefits eligibility, medical diagnosis, credit, employment screening, immigration, child welfare — individuals have a legally enforceable right to request human review of any AI-generated recommendation affecting them. AI may inform decisions; it may not be the sole decision-maker without explicit, documented consent from the affected person.
If an algorithm decides you are ineligible for a mortgage, a job, or parole, you have the right to ask a human being to review that decision. The algorithm can inform the human; it cannot replace the human’s legal responsibility for the outcome.
5.4 Licensed AI auditing as a profession
Establish federal licensing and liability standards for AI auditors — analogous to Certified Public Accountants for financial reporting. A licensed AI auditor who certifies a system that subsequently causes widespread harm bears professional liability for that certification. This creates a credentialed, independent, financially accountable review profession with real skin in the game.
5.5 Mandatory review for catastrophic-risk AI systems
Require pre-deployment national-security review — coordinated with allies — for any AI system that could autonomously direct critical infrastructure at national scale, meaningfully accelerate the development of weapons of mass destruction, or whose capability profile suggests the potential for actions outside human oversight. This is not a prohibition; it is a structured review process before deployment, analogous to nuclear materials licensing.
Legislative asks — Pillar V
Enact tiered AI product liability for high-stakes domains, with a safe harbor for companies meeting defined safety standards, administered by the AI Safety Board.
Establish a federal AI adverse-event registry, modeled on FDA MedWatch, with mandatory reporting by operators in high-stakes domains.
Codify a right to human review of consequential AI decisions in government and regulated industries, enforceable by affected individuals.
Direct NIST and the AI Safety Board to develop federal AI auditor certification standards within 18 months of enactment.
Pillar VI — Data & identity sovereignty
Today, Americans have almost no control over their own digital data. Our health records, financial histories, location data, social relationships, and behavioral patterns are collected, bought, sold, and fed into AI systems — without our meaningful knowledge or consent. This pillar gives Americans ownership of their own information.
Imagine if every American had a secure digital wallet — like a safe deposit box for their data — that they controlled. When a company needed to verify your age, credit score, or health status, it could ask your wallet to confirm only the specific fact it needs, without seeing any of your underlying records. You would never have to hand over your entire medical file just to prove you had a certain vaccination. This is not science fiction — the technology exists and is ready to deploy. It is called a Self-Sovereign Identity system with Zero Knowledge Proofs. Congress needs to create the legal and standards framework to make it real for every American.
How Zero Knowledge Proofs work — in plain English
The problem: Today, to prove you are over 21, you hand over your driver’s license — which reveals your full name, address, date of birth, and license number. You shared far more than was needed.
The solution: A Zero Knowledge Proof (ZKP) lets you mathematically prove a specific fact — “I am over 21,” “my credit score is above 700,” “I have been vaccinated” — without revealing any of the underlying data that proves it. The verifier learns only what they need to know. Nothing else.
How it’s verified: The proof is verified locally — on your own device — without sending your data to any central server. The company or agency asking the question gets a cryptographic “yes” or “no.” They never see your records.
Why it matters for AI: AI systems are voracious consumers of personal data. ZKPs allow AI to be trained on and interact with real human information without ever exposing the underlying personal records — protecting privacy while preserving utility.
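The prove-without-revealing mechanism can be made concrete with a toy Schnorr proof of knowledge: the prover convinces a verifier that it knows a secret x satisfying y = g^x mod p, while revealing nothing about x itself. This is a minimal sketch, not the scheme any deployed wallet would use — real systems prove richer statements ("I am over 21") with range proofs or zk-SNARKs over far larger parameters, and the tiny group below exists only for demonstration.

```python
# Toy non-interactive Schnorr proof (Fiat-Shamir transform).
# DEMO PARAMETERS ONLY — a 2039-element group offers no real security.
import hashlib
import secrets

P = 2039            # safe prime: P = 2*Q + 1
Q = 1019            # prime order of the subgroup generated by G
G = 4               # generator of the order-Q subgroup

def _challenge(y: int, t: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the public transcript."""
    digest = hashlib.sha256(f"{G}|{P}|{y}|{t}".encode()).hexdigest()
    return int(digest, 16) % Q

def prove(x: int) -> tuple[int, int, int]:
    """Return (y, t, s): public key, commitment, and response."""
    y = pow(G, x, P)
    r = secrets.randbelow(Q)            # one-time nonce; must never repeat
    t = pow(G, r, P)                    # commitment
    s = (r + _challenge(y, t) * x) % Q  # response binds nonce, challenge, secret
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Accept iff g^s == t * y^c (mod p) — checkable without ever learning x."""
    c = _challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

The verifier runs `verify` locally against public values only; a tampered response fails the check, which is exactly the "cryptographic yes or no" described above.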
How a personal data wallet works
6.1 The American Data Wallet Act — a personal data wallet for every citizen
Authorize and fund the creation of a standard personal data wallet infrastructure — a secure, encrypted digital container that each American controls — for storing verified credentials (medical records, professional licenses, financial history, government benefits eligibility, identity documents). The wallet lives on the individual’s device, not on a government or corporate server. The federal government establishes the technical standards; private and nonprofit entities build the wallets.
The federal government established interoperability standards for electronic health records (EHR) under the HITECH Act, then required hospitals and providers to adopt them. The data wallet follows the same model: federal standards, open architecture, competitive implementation. Think of it as a personal EHR — but for all your verified data, not just your medical records.
6.2 The right to data minimization — share only what is necessary
Establish a legal right to data minimization: any entity requesting personal information from an American — government agency, employer, insurer, lender, or AI platform — may request only the specific data element needed to fulfill the stated purpose, and nothing more. A landlord verifying income need not see your full bank history. An employer verifying a credential need not see your health records. ZKP-based verification is the technical mechanism; this provision makes minimum-necessary data sharing the legal default.
Right now, sharing your personal information is all-or-nothing. This provision makes it granular. You can share “I earn over $50,000/year” without sharing your actual income figure. You can share “I have a valid nursing license” without sharing your home address. You share the conclusion, not the evidence that supports it.
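The share-the-conclusion-not-the-evidence pattern can be sketched in a few lines. Everything here is hypothetical — the `DataWallet` class and `attest` method are invented for illustration, and a deployed wallet would back each answer with a zero-knowledge proof rather than asking the verifier to trust a bare boolean.

```python
# Hypothetical sketch of minimum-necessary disclosure: the wallet evaluates a
# predicate over a record locally and releases only the yes/no conclusion.
from typing import Callable

class DataWallet:
    def __init__(self) -> None:
        self._records: dict[str, object] = {}   # stays on the user's device

    def store(self, key: str, value: object) -> None:
        self._records[key] = value

    def attest(self, key: str, predicate: Callable[[object], bool]) -> bool:
        """Answer a yes/no question about a record without releasing it."""
        return bool(predicate(self._records[key]))

wallet = DataWallet()
wallet.store("annual_income", 62_000)
# A landlord learns only the conclusion, never the underlying figure.
income_ok = wallet.attest("annual_income", lambda v: v >= 50_000)
```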
6.3 Local verification — data does not have to leave your device
Mandate that any government-run verification system — for benefits eligibility, identity, professional licensing, or legal status — support local verification via ZKP by 2030. This means a citizen’s phone can generate a cryptographic proof of eligibility without transmitting their records to the government server conducting the verification. The server learns only “eligible: yes/no.” No central database of citizen activity is created as a byproduct of routine verification.
Today, every time you swipe your ID — at a bar, at a polling place, at a pharmacy — a record of that interaction can be logged. With local ZKP verification, you generate a proof on your own device. Nothing is logged at the point of verification. Your pattern of activity is not visible to the verifier or to any central database.
6.4 Verified identity credentials — stopping AI impersonation and fraud
Direct NIST to develop federal standards for cryptographically verified digital identity credentials — verifiable proof that a communication, document, or AI-generated content is associated with a verified human identity. This addresses the deepest threat of the AI era: that AI-generated impersonation, fraud, and disinformation become indistinguishable from authentic human communication. Adoption is voluntary for individuals and mandatory for federal contractors, licensed professionals, and federal benefit disbursement.
Deepfake videos of elected officials. AI-generated identity fraud that steals benefits. AI personas that pose as doctors, lawyers, or government officials. Synthetic voter-registration fraud. A cryptographic credential that is tied to a verified human identity and cannot be forged addresses all of these at the root.
6.5 Portable, interoperable credentials across sectors
Require that verified credentials issued by federal and state government agencies — professional licenses, benefit eligibility determinations, educational credentials, health immunization records — conform to W3C Verifiable Credential and Decentralized Identifier (DID) open standards. This makes credentials portable: a nurse licensed in Texas can present her license digitally in any state without contacting the Texas licensing board. An AI system verifying credentials does not need access to a central registry; it verifies the cryptographic proof.
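For concreteness, here is the shape of such a credential, rendered as a Python dict following the W3C Verifiable Credentials Data Model 1.1. All identifiers and the proof value are placeholders, and "NursingLicenseCredential" is a credential type invented for this example.

```python
# A minimal W3C-shaped Verifiable Credential (VC Data Model 1.1).
# Every identifier below is a placeholder, not a real DID.
nursing_license = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "NursingLicenseCredential"],
    "issuer": "did:example:texas-board-of-nursing",
    "issuanceDate": "2025-01-15T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:license-holder",
        "licenseState": "TX",
        "licenseStatus": "active",
    },
    # The issuer signs the credential; any verifier checks the signature
    # against the issuer's public key, resolved from its DID document —
    # no call to a central registry is needed.
    "proof": {
        "type": "Ed25519Signature2020",
        "verificationMethod": "did:example:texas-board-of-nursing#key-1",
        "proofValue": "...",   # placeholder for the issuer's signature
    },
}
```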
6.6 AI training data and consent — your data, your choice
Establish that personal data held in a citizen’s data wallet cannot be used to train AI systems without explicit, specific, revocable consent — separate from any general terms-of-service agreement. Consent to train medical AI on your health records must be separate from consent to receive medical care. Consent can be granted with compensation, revoked at any time, and is domain-specific: consent to train a medical AI does not constitute consent to train a commercial advertising model.
Legislative asks — Pillar VI
Enact the American Data Wallet Act: direct NIST to develop open standards for personal data wallets and require federal agencies to issue verifiable credentials conforming to those standards within 36 months.
Codify the right to data minimization as a federal privacy baseline: requesters may collect only the minimum data necessary for their stated purpose; ZKP-based verification satisfies this requirement by default.
Require all federal verification systems (benefits eligibility, identity, licensing) to support local ZKP-based verification by 2030, with NIST-published standards by 2027.
Direct NIST to develop federal standards for cryptographically verified digital identity credentials, and require their use for federal contractors, licensed professional communications, and federal benefit disbursement.
Amend HIPAA, FERPA, and FCRA to require explicit, specific, revocable consent for use of covered personal data in AI training, separate from consent to the underlying service.
Hard trade-offs — named honestly
A credible policy framework names the genuine conflicts rather than papering over them. Below are six tensions this framework does not fully resolve. Legislators should understand these trade-offs before acting.
Open development vs. catastrophic risk
This framework favors openness below a defined capability threshold and mandatory pre-deployment review above it. The threshold itself requires ongoing empirical calibration — a task for the AI Safety Board, not a one-time legislative determination.
Accountability vs. small-developer burden
The safe harbor in Pillar V is designed to address this tension: smaller developers who meet a safety standard are protected. But the administrative burden of qualifying for the harbor is real, and small teams may find it unnavigable regardless.
Information health vs. free expression
This is the deepest tension in the framework. The proposal attempts to sidestep it by regulating process (does the algorithm increase or decrease variety?) rather than content (is this viewpoint acceptable?). The line is genuinely blurry and deserves active attention from Congress.
Worker protection vs. deployment speed
Pillar IV’s pre-deployment assessment and the productivity levy slow the most disruptive deployments. This will be contested by those who argue that delay costs lives where AI is genuinely beneficial — a legitimate concern, not a bad-faith one.
Digital identity vs. surveillance risk
The choice of open, W3C-standard architecture (not a government-controlled identity system) is deliberate and critical. The risk of a government-managed digital identity becoming a surveillance or control tool is real and must be designed against explicitly — no central government wallet registry, no mandatory adoption, no surveillance backdoors.
International coordination vs. strategic competition
This framework takes the position that uncoordinated competition on the most dangerous capabilities produces worse outcomes than imperfect coordination. But that judgment depends on assessments of adversary intent and treaty-compliance behavior that are genuinely contested.