Policy Framework · Version 2 · March 2026

AI as a Public Good — A Policy Framework for Congress

Six pillars · Non-partisan · John Henry Clippinger & First Principles First
Designed for legislative staff, committee counsel, and members of Congress.

How to read this document. Each pillar begins with a plain-language summary of what the problem is and what Congress can do about it. Technical terms are explained in plain English wherever they appear. The “Legislative asks” box at the end of each pillar lists specific, actionable items. This framework is non-partisan — each pillar can be advanced independently.

The question before Congress is not whether AI will transform American life. It already is. The question is whether that transformation benefits everyone — or only those who can already afford the best lawyers, doctors, and financial advisors.

This framework offers six pillars of AI policy that can command broad support across party lines. Each pillar addresses a distinct, concrete problem — from who gets access, to who controls their own data, to who is held responsible when AI causes harm. None requires picking a political winner. All are grounded in the American tradition of harnessing transformative technology for the public good.

Intellectual foundations: “When Beliefs Become Pathological” (epistemic health, Ashby, Friston, Levin) · “The Commoditization Trap” (Clippinger, Snyder’s positive freedom, REA precedent) · “The Cybernetic Transition” (nested governance, active inference, Law of Requisite Variety) · Self-sovereign identity literature (W3C Decentralized Identifiers, Zero Knowledge Proof standards, NIST digital identity guidelines)

Pillar I — Universal public access to AI

AI is rapidly becoming as essential as electricity, broadband, or running water. Without deliberate policy, its benefits will concentrate among the already-privileged — exactly what happened with early telephone and electricity networks before federal intervention.

Plain-language summary

Today, a wealthy person can pay $500/hour for an AI-powered attorney, financial advisor, or medical navigator. A working-class person cannot. This pillar uses the same model Congress used in 1936 — the Rural Electrification Administration — to make sure that gap closes rather than widens. The federal government does not need to build the AI itself; it funds access to it.

1.1 The AI Access Act — a federal infrastructure fund

Historical precedent

The Rural Electrification Administration (1936) brought electricity to 90% of rural American farms within 20 years by lending money to cooperatives and local utilities. Before it, markets had written off rural America as unprofitable. The REA did not replace markets — it extended them. An AI Access Fund would do the same for communities underserved by today’s AI economy.

Establish a federal AI Access Infrastructure Fund to deploy foundational AI capability — AI-assisted legal services, medical navigation, educational tutoring, and benefits counseling — to underserved communities, rural areas, tribal nations, and public institutions. Funding goes to local nonprofits, libraries, community colleges, and cooperatives, not to large technology companies.

REA model · universal access

1.2 Capability accounts, not just cash

In plain terms

Rather than sending displaced workers a check — which doesn’t give them the skills or tools to navigate an AI economy — this policy funds their access to AI-powered tools that expand what they can actually do: free AI legal representation, AI-assisted job training, health navigation, and financial planning. Think GI Bill, not welfare.

AI surplus revenues (see Pillar IV) are directed into “Freedom Pools” — capability accounts that fund AI-augmented services in legal, medical, educational, and financial domains. These are not cash transfers; they are purchasing power for services that were previously available only to the wealthy. Administered through existing community institutions.

positive freedom · capability infrastructure

1.3 Public access to publicly funded AI

Any AI system trained substantially on data that the public paid to create — federally funded research, public court records, Medicare claims data, public school curricula — must make its foundational capability available to public institutions at no cost. The public financed the training data; the public should have access to the resulting intelligence.

What this means concretely

The NIH funds much of the nation’s basic medical research. If a company uses that research to train a medical AI, public hospitals and community clinics should be able to use that AI for free. Private applications built on top of it can still be sold commercially — this is about the base layer, not the full product.

public commons · IP balance

1.4 Structural separation — owning the pipes and the services

Enforce structural separation between companies that own AI infrastructure (computing power, foundational models, data centers) and companies that sell AI-powered services to consumers, above a defined market-share threshold. Prevent the same company from controlling both the highway and every business that operates on it.

Analogy

This mirrors the logic behind requiring AT&T to allow competitors on its telephone network, and the structural separation rules applied to electric utilities. A company that owns the transmission lines should not also own all the appliances you plug into them.

antitrust · market structure

Legislative asks — Pillar I

1. Authorize and fund an AI Access Infrastructure Fund modeled on the REA, administered through USDA, Commerce, or a new independent board.
2. Require public licensing of AI models trained substantially on federally funded data, for use by public institutions.
3. Direct the FTC and DOJ to develop structural separation guidelines for AI infrastructure providers above defined market-share thresholds.


Pillar II — Information health & democratic integrity

AI recommendation systems — the algorithms that decide what you see on social media, in your news feed, in your search results — are now the most powerful editors in human history. They decide what 330 million Americans pay attention to. Their design choices have public-health consequences.

Plain-language summary

Research shows that certain AI engagement systems work like an addictive substance — they narrow what users see and push them toward more extreme content to maximize the time they spend on the platform. A platform that deliberately narrows a citizen’s view of the world to keep them scrolling is doing something that should be regulated the same way we regulate other products that cause documented harm to public health. This is not about censoring speech — it is about the engineering of attention.

2.1 The information-variety audit requirement

Any AI system that mediates information access for more than 10 million U.S. users must submit to biennial third-party audits demonstrating that its recommendation algorithm does not systematically narrow the range of perspectives, sources, and topics that users encounter over time. Systems that demonstrably shrink users’ informational world are classified as information-health hazards subject to remediation plans and FTC enforcement.

In plain terms

If a platform’s algorithm is provably making you see only one kind of viewpoint over time — regardless of which viewpoint — that is an engineering defect with public-health consequences, not a protected editorial choice. This audit targets the mechanism, not the content.

information diversity · FTC authority
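
To make the audit mechanism concrete, the sketch below shows one way an independent auditor might quantify narrowing from a platform’s recommendation logs: measure how widely a user’s attention is spread across sources in an early window versus a recent one, and flag a sustained drop. The metric, types, and function names are illustrative assumptions, not requirements of this framework.

```typescript
// Hypothetical sketch of one way an auditor might measure "narrowing": Shannon
// entropy of the sources a user is shown, compared across time windows. The
// metric, the types, and the names are illustrative assumptions only.

type Recommendation = { userId: string; source: string; shownAt: Date };

// Entropy (in bits) of how a user's recommendations were spread across sources.
function sourceEntropy(recs: Recommendation[]): number {
  const counts = new Map<string, number>();
  for (const r of recs) counts.set(r.source, (counts.get(r.source) ?? 0) + 1);
  let entropy = 0;
  for (const count of counts.values()) {
    const p = count / recs.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

// Positive score = the user's informational world shrank between the two windows.
function narrowingScore(earlyWindow: Recommendation[], recentWindow: Recommendation[]): number {
  return sourceEntropy(earlyWindow) - sourceEntropy(recentWindow);
}
```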

2.2 Accuracy and honesty requirements for high-stakes AI

AI systems used in healthcare, legal advice, financial guidance, and education must meet a minimum accuracy and honesty standard: they must represent their own uncertainty, acknowledge when they do not know something, and not present false information with unwarranted confidence. Systems that fail this standard in documented ways are treated as defective consumer products under existing product-liability law.

consumer protection · high-stakes domains

2.3 Banning addiction-by-design in AI platforms

Extend FTC unfair-practices authority to cover AI engagement systems that are deliberately engineered to exploit psychological vulnerabilities — specifically, variable-reward loops (the same mechanism that makes slot machines addictive) used to maximize time on platform. Platforms must disclose when they use such mechanisms and offer users a simple opt-out (such as a chronological feed). Platforms targeting users under 18 face strict-liability standards.

Precedent

Congress regulated cigarette advertising targeted at minors, required warning labels on tobacco, and restricted the marketing of addictive pharmaceutical products. The engineering of addictive AI engagement is the same category of harm — it is a design choice, not an accident.

addiction-by-design · minor protection

2.4 Protecting scientific and expert consensus

AI systems deployed in public-interest contexts — government websites, public health communications, public school tools — must represent scientific consensus accurately and flag clearly when AI-generated content contradicts it. This is a disclosure requirement, not a speech restriction: heterodox scientific views can still be expressed, but must be labeled as contested. Deliberate misrepresentation of scientific consensus by AI systems in public-health contexts is treated as consumer fraud.

science integrity · public health

Legislative asks — Pillar II

1. Amend Section 230 to remove liability protection for algorithmic amplification decisions (as distinct from hosting decisions), for platforms above a user-count threshold.
2. Direct the FTC to develop an “information-health” rulemaking covering addiction-by-design and mandatory audit requirements for large recommendation systems.
3. Require algorithmic transparency reports from platforms above 10 million U.S. users, filed annually with the FTC and publicly available.


Pillar III — Smart, adaptive governance

The biggest risk in AI regulation is getting it wrong in either direction: regulating so broadly that beneficial innovation is chilled, or regulating so narrowly that serious harms go unaddressed. The solution is governance designed to update — not a one-time legislative act that immediately starts going stale.

Plain-language summary

The FAA does not write aviation regulations once and then leave them forever. It maintains ongoing technical expertise, runs investigations when things go wrong, and updates rules as aircraft technology changes. AI regulation needs the same model: a technically competent standing body with real enforcement authority, mandatory review cycles, and the ability to update rules without waiting for Congress to act on every technical development.

3.1 The subsidiarity principle — regulate at the right level

Not every AI problem requires a federal response. Regulate at the lowest level of government that has the information and authority to act effectively: local harms from consumer-facing apps belong at the state level; foundational model safety and infrastructure monopoly belong at the federal level; weapons, biosecurity, and planetary-scale risks require international treaty.

In plain terms

The federal government should set a floor — minimum standards that every state must meet — and let states go further if they choose. Federal law should preempt state law only when a patchwork of state rules would genuinely prevent a national market from functioning.

federalism · subsidiarity

3.2 A standing AI Safety and Opportunity Board

Establish an independent AI Safety and Opportunity Board with: (a) enforcement authority, not just advisory function; (b) mandatory two-year reassessment cycles for all AI-specific regulations; (c) automatic sunset provisions unless affirmatively renewed; (d) a technical staff with compensation competitive with private-sector AI roles. The Board coordinates across FDA, FTC, SEC, CFPB, and DOL rather than duplicating their authority.

Model

The CFPB (Consumer Financial Protection Bureau) was created to fill a gap that existing regulators — each focused on their own sector — could not address. An AI Safety Board plays the same cross-cutting coordination role, without displacing sector regulators.

institutional design · adaptive regulation

3.3 Technical competency as a legal requirement for AI regulators

Any federal body with AI enforcement authority must maintain staff with demonstrated technical expertise in the systems they regulate — machine learning, algorithmic auditing, cybersecurity, and data science. Authorize competitive compensation (above GS scale) for these positions. A regulator that does not understand what it is regulating cannot regulate effectively.

regulatory capacity · civil service

3.4 Domain-specific governance councils

Establish advisory-to-binding governance councils for AI in healthcare, legal services, education, and critical infrastructure. Each council includes AI researchers, domain practitioners (doctors, lawyers, teachers), civil-society advocates, and representatives of affected communities. Councils issue domain-specific guidance subject to Board review. This brings technical and domain expertise together rather than leaving each to operate in isolation.

domain expertise · multistakeholder

Legislative asks — Pillar III

1. Authorize an AI Safety and Opportunity Board via standalone legislation, with cross-agency coordination authority, enforcement power, and mandatory biennial review cycles.
2. Require all AI-specific provisions in federal legislation to include sunset clauses and mandatory review triggers tied to capability thresholds.
3. Authorize above-GS compensation for technical AI staff at regulatory agencies, funded by AI productivity levy proceeds (see Pillar IV).

Pillar IV — Jobs, economic transition & shared prosperity

AI is automating tasks across every sector of the economy — not just manufacturing, but legal research, financial analysis, medical documentation, and customer service. The question is not whether displacement will happen. The question is whether America has a plan for it.

Plain-language summary

When AI replaces a paralegal, a radiology technician, or a call-center worker, the company that deployed the AI captures most of the financial gain. The worker absorbs most of the loss. This is not a natural law — it is a policy choice. Congress can choose differently, as it did with the GI Bill, the Trade Adjustment Assistance Act, and the Workforce Investment Act. This pillar modernizes those models for the AI era.

4.1 AI productivity levy — shared gains for shared losses

Impose a modest productivity levy — starting at 1–2% — on documented labor-cost savings from AI-driven automation that replaces previously human-performed tasks at scale. Revenue is ringfenced into Freedom Pool capability accounts (see Pillar I) and worker retraining programs. The levy is calibrated not to slow beneficial adoption, but to ensure that when companies capture productivity gains, some portion flows to those who bore the cost of displacement.

Precedent

The federal-state unemployment insurance system levies experience-rated payroll taxes on employers: those whose layoff practices drive more claims pay more, pricing in a cost that would otherwise be externalized. The AI productivity levy applies the same logic to AI-driven displacement.

surplus sharing · displacement policy
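
For a sense of scale, here is a minimal worked example of the levy arithmetic. The framework specifies only the 1–2% range; the dollar figures and names below are hypothetical.

```typescript
// Illustrative only: the framework fixes a 1–2% rate on documented labor-cost
// savings; the dollar figures and names below are hypothetical assumptions.
function productivityLevy(documentedLaborCostSavings: number, rate = 0.015): number {
  return documentedLaborCostSavings * rate;
}

// Example: a firm documents $10,000,000/year in labor-cost savings from
// large-scale automation; at 1.5% the levy is $150,000, ringfenced for
// Freedom Pool capability accounts and retraining.
const annualContribution = productivityLevy(10_000_000); // 150000
```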

4.2 Pre-deployment impact assessment for high-displacement sectors

Require advance economic-impact assessment — publicly filed — before AI deployment that is projected to displace more than 1,000 workers in a sector within 24 months. The assessment must address the timeline of displacement, the adequacy of existing retraining infrastructure, and the company’s transition-assistance plan. This is a disclosure and planning requirement, not a deployment prohibition.

transparency · workforce planning

4.3 Worker augmentation rights — training before displacement

Establish a legal right for workers in AI-affected sectors to receive employer-funded AI-augmentation training before displacement — not after. Workers who are trained to work alongside AI retain economic leverage. Workers who are simply replaced without preparation do not. Model this on the WARN Act’s advance-notice requirements, but orient them toward proactive retraining rather than notice and severance after the fact.

labor rights · skill investment

4.4 Investing in human-advantage domains

Directly increase federal investment in domains where human presence, judgment, and relational intelligence retain irreplaceable value: elder care and childcare, community health work, skilled trades requiring on-site physical judgment, environmental stewardship, and civic participation. These domains are currently undervalued by markets despite their high social value — and AI cannot substitute for them. This is where comparative human advantage lives.

human flourishing · care economy

Legislative asks — Pillar IV

1. Enact an AI Productivity Levy at 1–2% of documented labor-cost savings from large-scale AI automation, with revenue ringfenced for worker capability accounts and retraining.
2. Amend the WARN Act to require pre-displacement AI-augmentation training for affected workers, not merely advance notice of displacement.
3. Increase funding for the care economy, skilled trades, and community-based work through existing workforce development channels (Perkins Act, WIOA).


Pillar V — Real accountability for real harm

Today, AI companies operate largely without liability for the harms their systems cause. A hospital that misdiagnoses a patient can be sued for malpractice. The developer of an AI system that makes the same misdiagnosis in the same hospital typically cannot be sued at all. That asymmetry is not sustainable — and it creates the wrong incentives.

Plain-language summary

Accountability does not mean shutting down innovation. It means that companies that take shortcuts on safety bear the cost of those shortcuts, rather than externalizing them onto the public. A tiered liability model — with a safe harbor for companies that meet documented safety standards — creates strong financial incentives to invest in safety without penalizing responsible developers.

5.1 Product liability for AI systems in high-stakes domains

Establish product-liability standards for AI systems that cause documented harm in healthcare, legal services, financial advice, criminal justice, and hiring. Developers who demonstrate compliance with defined safety standards (independent audit, red-team testing, adverse-event reporting) qualify for a liability cap. Those who cannot demonstrate compliance bear proportionate liability. This mirrors the liability framework for medical devices and pharmaceuticals.

Analogy

A car manufacturer that installs airbags that meet federal safety standards is protected from certain negligence claims. One that knowingly installs defective airbags is not. The AI liability framework applies the same logic — safety investment earns protection; recklessness does not.

product liability · safe harbor

5.2 Mandatory adverse-event reporting

Require AI operators in high-stakes domains to report documented system failures — misdiagnoses, wrongful denials, discriminatory outcomes, financial losses — to a central adverse-event registry, modeled on the FDA’s MedWatch system and the FAA’s Aviation Safety Reporting System. Aggregate data reveals systemic patterns invisible in individual incidents. Reports are shielded from liability discovery to encourage honest reporting, as with aviation safety reports.

transparency · systemic risk

5.3 The right to a human decision-maker

In any consequential government or regulated-industry decision — parole and bail, public-benefits eligibility, medical diagnosis, credit, employment screening, immigration, child welfare — individuals have a legally enforceable right to request human review of any AI-generated recommendation affecting them. AI may inform decisions; it may not be the sole decision-maker without explicit, documented consent from the affected person.

In plain terms

If an algorithm decides you are ineligible for a mortgage, a job, or parole, you have the right to ask a human being to review that decision. The algorithm can inform the human; it cannot replace the human’s legal responsibility for the outcome.

due process · human dignity

5.4 Licensed AI auditing as a profession

Establish federal licensing and liability standards for AI auditors — analogous to Certified Public Accountants for financial reporting. A licensed AI auditor who certifies a system that subsequently causes widespread harm bears professional liability for that certification. This creates a credentialed, independent, financially accountable review profession with real skin in the game.

professional standards · third-party audit

5.5 Mandatory review for catastrophic-risk AI systems

Require pre-deployment national-security review — coordinated with allies — for any AI system that could autonomously direct critical infrastructure at national scale, meaningfully accelerate the development of weapons of mass destruction, or whose capability profile suggests the potential for actions outside human oversight. This is not a prohibition; it is a structured review process before deployment, analogous to nuclear materials licensing.

existential risk · national security

Legislative asks — Pillar V

1. Enact tiered AI product liability for high-stakes domains, with a safe harbor for companies meeting defined safety standards, administered by the AI Safety Board.
2. Establish a federal AI adverse-event registry, modeled on FDA MedWatch, with mandatory reporting by operators in high-stakes domains.
3. Codify a right to human review of consequential AI decisions in government and regulated industries, enforceable by affected individuals.
4. Direct NIST and the AI Safety Board to develop federal AI auditor certification standards within 18 months of enactment.


Pillar VI — Data & identity sovereignty

Today, Americans have almost no control over their own digital data. Our health records, financial histories, location data, social relationships, and behavioral patterns are collected, bought, sold, and fed into AI systems — without our meaningful knowledge or consent. This pillar gives Americans ownership of their own information.

Plain-language summary

Imagine if every American had a secure digital wallet — like a safe deposit box for their data — that they controlled. When a company needed to verify your age, credit score, or health status, it could ask your wallet to confirm only the specific fact it needs, without seeing any of your underlying records. You would never have to hand over your entire medical file just to prove you had a certain vaccination. This is not science fiction — the technology exists and is ready to deploy. It is called a Self-Sovereign Identity system with Zero Knowledge Proofs. Congress needs to create the legal and standards framework to make it real for every American.

How Zero Knowledge Proofs work — in plain English

1. The problem: Today, to prove you are over 21, you hand over your driver’s license — which reveals your full name, address, date of birth, and license number. You shared far more than was needed.
2. The solution: A Zero Knowledge Proof (ZKP) lets you mathematically prove a specific fact — “I am over 21,” “my credit score is above 700,” “I have been vaccinated” — without revealing any of the underlying data that proves it. The verifier learns only what they need to know. Nothing else.
3. How it’s verified: The proof is generated locally, on your own device, and your underlying records never leave it. The company or agency asking the question receives only the proof, checks it cryptographically, and gets a “yes” or “no.” They never see your records.
4. Why it matters for AI: AI systems are voracious consumers of personal data. ZKPs allow AI to be trained on and interact with real human information without ever exposing the underlying personal records — protecting privacy while preserving utility. (A minimal interface sketch follows this list.)
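
The sketch below shows only the interface shape this flow implies: a prover that runs on the citizen’s device and a verifier that learns a single yes or no. It is not a working zero-knowledge protocol; a production system would put an established ZKP scheme (zk-SNARKs, for example) behind an interface roughly like this, and every name in the sketch is a hypothetical assumption.

```typescript
// Interface sketch only, not a working zero-knowledge protocol. It illustrates
// who holds what: the prover (the citizen's wallet) sees the credential; the
// verifier sees only an opaque proof and a boolean. All names are assumptions.

type Credential = { attribute: string; value: number; issuerSignature: string };
type Proof = { statement: string; blob: Uint8Array }; // opaque; reveals no attribute values

interface Prover {
  // Runs on the citizen's device, over credentials only the wallet holds.
  provePredicate(credential: Credential, statement: string): Proof; // e.g. "age >= 21"
}

interface Verifier {
  // Runs wherever the question is asked; never receives the credential itself.
  verify(proof: Proof, issuerPublicKey: string): boolean; // learns only yes or no
}
```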

How a personal data wallet works

1. Data sources: hospital, bank, DMV, and employer issue verified credentials.
2. Your data wallet: encrypted on your device. Only you hold the key.
3. ZKP proof: “Yes, age verified” — no raw data leaves your device.
4. Requester: gets only what they asked for. Nothing more.

6.1 The American Data Wallet Act — a personal data wallet for every citizen

Authorize and fund the creation of a standard personal data wallet infrastructure — a secure, encrypted digital container that each American controls — for storing verified credentials (medical records, professional licenses, financial history, government benefits eligibility, identity documents). The wallet lives on the individual’s device, not on a government or corporate server. The federal government establishes the technical standards; private and nonprofit entities build the wallets.

Analogy to existing policy

The federal government established interoperability standards for electronic health records (EHR) under the HITECH Act, then used incentive payments and penalties to drive hospitals and providers to adopt them. The data wallet follows the same model: federal standards, open architecture, competitive implementation. Think of it as a personal EHR — but for all your verified data, not just your medical records.

data sovereignty · federal standards
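
A minimal sketch of the wallet’s role as 6.1 describes it: an on-device store of issuer-signed credentials that answers questions with proofs rather than raw records. The credential shape and names are assumptions for illustration; a real wallet would encrypt the store at rest under a key only the holder controls.

```typescript
// Minimal sketch of the on-device wallet in 6.1. Fields and names are illustrative
// assumptions. A real wallet would keep this store encrypted at rest under a key
// only the holder controls; this sketch simply holds credentials in memory.

type VerifiedCredential = {
  issuer: string;                    // e.g. a state DMV, hospital, or employer
  subject: string;                   // the wallet holder
  claims: Record<string, unknown>;   // the verified facts
  signature: string;                 // issuer's signature over the claims
};

class DataWallet {
  private store = new Map<string, VerifiedCredential>();

  add(id: string, credential: VerifiedCredential): void {
    this.store.set(id, credential);
  }

  // The wallet answers requests by producing proofs (see the ZKP sketch above),
  // never by releasing the raw credential to the requester.
  listCredentialIds(): string[] {
    return [...this.store.keys()];
  }
}
```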

6.2 The right to data minimization — share only what is necessary

Establish a legal right to data minimization: any entity requesting personal information from an American — government agency, employer, insurer, lender, or AI platform — may request only the specific data element needed to fulfill the stated purpose, and nothing more. A landlord verifying income need not see your full bank history. An employer verifying a credential need not see your health records. ZKP-based verification is the technical mechanism; this provision makes minimum-necessary data sharing the legal default.

In plain terms

Right now, sharing your personal information is all-or-nothing. This provision makes it granular. You can share “I earn over $50,000/year” without sharing your actual income figure. You can share “I have a valid nursing license” without sharing your home address. You share the conclusion, not the evidence that supports it.

data minimization · privacy by design
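
To illustrate “share the conclusion, not the evidence,” the types below sketch what a minimum-necessary request and response could look like. The field names and the example predicate are hypothetical assumptions, not a proposed wire format.

```typescript
// Sketch of a minimum-necessary exchange under 6.2: the requester may ask one
// predicate tied to a stated purpose, and receives a conclusion plus proof,
// never the underlying records. Names and fields are illustrative assumptions.

type DataRequest = {
  requester: string;    // e.g. a landlord or employer
  purpose: string;      // the stated purpose of the request
  predicate: string;    // the only thing asked, e.g. "annualIncome >= 50000"
};

type DataResponse = {
  predicate: string;
  satisfied: boolean;   // the conclusion the requester is entitled to
  proof: Uint8Array;    // cryptographic evidence; contains no raw data
};

// Example: income verification for a rental application.
const request: DataRequest = {
  requester: "landlord",
  purpose: "rental income verification",
  predicate: "annualIncome >= 50000",
};
```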

6.3 Local verification — data does not have to leave your device

Mandate that any government-run verification system — for benefits eligibility, identity, professional licensing, or legal status — support local verification via ZKP by 2030. This means a citizen’s phone can generate a cryptographic proof of eligibility without transmitting their records to the government server conducting the verification. The server learns only “eligible: yes/no.” No central database of citizen activity is created as a byproduct of routine verification.

Why this matters for civil liberties

Today, every time you swipe your ID — at a bar, at a polling place, at a pharmacy — a record of that interaction can be logged. With local ZKP verification, you generate a proof on your own device. Nothing is logged at the point of verification. Your pattern of activity is not visible to the verifier or to any central database.

local-first verification · civil liberties
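
The sketch below shows the agency side of local verification as described here: the endpoint receives an opaque proof generated on the citizen’s phone and returns only an eligibility boolean, leaving nothing to log. The names are assumptions, and verifyProof stands in for whichever ZKP scheme the eventual NIST standard selects.

```typescript
// Agency-side sketch for 6.3. The endpoint receives an opaque proof produced on
// the citizen's device and a policy identifier, and returns only yes or no.
// Because no citizen identifier or record arrives, there is nothing to log or link.
// verifyProof is a placeholder for whatever ZKP scheme the NIST standard specifies.

declare function verifyProof(proof: Uint8Array, policyId: string): boolean;

function checkEligibility(proof: Uint8Array, policyId: string): { eligible: boolean } {
  const eligible = verifyProof(proof, policyId);
  return { eligible }; // no database write, no activity record as a byproduct
}
```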

6.4 Verified identity credentials — stopping AI impersonation and fraud

Direct NIST to develop federal standards for cryptographically verified digital identity credentials — verifiable proof that a communication, document, or AI-generated content is associated with a verified human identity. This addresses the deepest threat of the AI era: that AI-generated impersonation, fraud, and disinformation become indistinguishable from authentic human communication. Adoption is voluntary for individuals and mandatory for federal contractors, licensed professionals, and federal benefit disbursement.

What this prevents

Deepfake videos of elected officials. AI-generated identity fraud that steals benefits. AI personas that pose as doctors, lawyers, or government officials. Synthetic voter-registration fraud. A cryptographic credential that is tied to a verified human identity and cannot be forged addresses all of these at the root.

identity integrity · fraud prevention
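
Underneath this provision is an ordinary digital signature: content signed with a key tied to a verified identity can be checked by anyone and cannot be forged without the private key. The sketch below shows that primitive using Node’s built-in crypto module; how the standards would bind keys to verified humans and distribute credentials is the substantive policy work and is not shown. The example message is fictional.

```typescript
// The primitive under 6.4, shown with Node's built-in crypto module: an Ed25519
// signature ties content to a key. The policy work (binding that key to a verified
// human identity and distributing credentials) is not shown; the message is fictional.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// In the real system this key pair would be issued against a verified identity.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const message = Buffer.from("Official statement released by Rep. Example, 2026-03-01");
const signature = sign(null, message, privateKey);

// Anyone holding the public credential can confirm authenticity; an impersonator
// or deepfake cannot produce a valid signature without the private key.
const authentic = verify(null, message, publicKey, signature); // true
```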

6.5 Portable, interoperable credentials across sectors

Require that verified credentials issued by federal and state government agencies — professional licenses, benefit eligibility determinations, educational credentials, health immunization records — conform to W3C Verifiable Credential and Decentralized Identifier (DID) open standards. This makes credentials portable: a nurse licensed in Texas can present her license digitally in any state without contacting the Texas licensing board. An AI system verifying credentials does not need access to a central registry; it verifies the cryptographic proof.

credential portability · open standards
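
For reference, the object below is shaped like a credential under the W3C Verifiable Credentials data model, with DIDs identifying the issuer and holder. The credential type, DID values, and claim fields are hypothetical and the proof block is abbreviated; the point is that the format is an open standard any verifier can check without calling back to a central registry.

```typescript
// Illustrative credential shaped like the W3C Verifiable Credentials data model
// referenced in 6.5. DID values, the credential type, and the claims are hypothetical;
// the proof block is abbreviated.
const nursingLicenseCredential = {
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  type: ["VerifiableCredential", "NursingLicenseCredential"],
  issuer: "did:example:texas-board-of-nursing",
  issuanceDate: "2026-01-15T00:00:00Z",
  credentialSubject: {
    id: "did:example:license-holder",
    licenseType: "Registered Nurse",
    licenseStatus: "active",
  },
  proof: {
    type: "Ed25519Signature2020",
    verificationMethod: "did:example:texas-board-of-nursing#key-1",
    proofValue: "(issuer signature over the credential, omitted)",
  },
};
```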

6.6 AI training data and consent — your data, your choice

Establish that personal data held in a citizen’s data wallet cannot be used to train AI systems without explicit, specific, revocable consent — separate from any general terms-of-service agreement. Consent to train medical AI on your health records must be separate from consent to receive medical care. Consent can be granted with compensation, revoked at any time, and is domain-specific: consent to train a medical AI does not constitute consent to train a commercial advertising model.

data consent · AI training rights

Legislative asks — Pillar VI

1. Enact the American Data Wallet Act: direct NIST to develop open standards for personal data wallets and require federal agencies to issue verifiable credentials conforming to those standards within 36 months.
2. Codify the right to data minimization as a federal privacy baseline: requesters may collect only the minimum data necessary for their stated purpose; ZKP-based verification satisfies this requirement by default.
3. Require all federal verification systems (benefits eligibility, identity, licensing) to support local ZKP-based verification by 2030, with NIST-published standards by 2027.
4. Direct NIST to develop federal standards for cryptographically verified digital identity credentials, and require their use for federal contractors, licensed professional communications, and federal benefit disbursement.
5. Amend HIPAA, FERPA, and FCRA to require explicit, specific, revocable consent for use of covered personal data in AI training, separate from consent to the underlying service.

Hard trade-offs — named honestly

A credible policy framework names the genuine conflicts rather than papering over them. Below are six tensions this framework does not fully resolve. Legislators should understand these trade-offs before acting.

Value A: Open foundational models democratize AI access and accelerate research.
vs.
Value B: Open model weights lower the barrier for catastrophic misuse — bioweapons, critical infrastructure attacks.

This framework favors openness below a defined capability threshold and mandatory pre-deployment review above it. The threshold itself requires ongoing empirical calibration — a task for the AI Safety Board, not a one-time legislative determination.

Value A: Product liability protects harmed individuals and creates strong safety incentives.
vs.
Value B: Liability exposure concentrates AI development in large companies that can absorb legal costs, crowding out startups and academic researchers.

The safe harbor in Pillar V is designed to address this — smaller developers who meet a safety standard are protected. But the administrative burden of qualifying for the harbor is real, and small teams may find it unnavigable regardless.

Value A: Information-health audits protect democracy from algorithmically amplified radicalization.
vs.
Value B: Government-defined “information diversity” standards risk becoming a mechanism for political control of speech, even with good intentions.

This is the deepest tension in the framework. The proposal attempts to sidestep it by regulating process (does the algorithm increase or decrease variety?) rather than content (is this viewpoint acceptable?). The line is genuinely blurry and deserves active attention from Congress.

Value A: Moving fast on beneficial AI deployment — medical diagnosis, legal access, educational support — saves lives now.
vs.
Value B: Deployment that outpaces governance and workforce infrastructure causes displacement, harm, and loss of public trust that is hard to recover.

Pillar IV’s pre-deployment assessment and the productivity levy slow the most disruptive deployments. This will be contested by those who argue that delay costs lives where AI is genuinely beneficial — a legitimate concern, not a bad-faith one.

Value A: Data wallets and ZKP verification give individuals control and privacy.
vs.
Value B: Widespread adoption of cryptographic identity infrastructure creates new attack surfaces and concentrates power in whoever controls the standards and root-trust infrastructure.

The choice of open, W3C-standard architecture (not a government-controlled identity system) is deliberate and critical. The risk of a government-managed digital identity becoming a surveillance or control tool is real and must be designed against explicitly — no central government wallet registry, no mandatory adoption, no surveillance backdoors.

Value A: International coordination on AI safety is essential — no single nation can govern planetary-scale risks alone.
vs.
Value B: International agreements move slowly and may impose constraints on American AI development that adversaries, including authoritarian states, do not honor.

This framework takes the position that uncoordinated competition on the most dangerous capabilities produces worse outcomes than imperfect coordination. But that judgment depends on assessments of adversary intent and treaty-compliance behavior that are genuinely contested.
