Evidence & Indicators Desk

Vera

From the Latin vera: "true," "truthful."

"Claims have different confidence levels. Conflating them is the most common analytical failure."

Analytical Architecture

Where Vera sits in the stack

FP1's three correspondents form a unified analytical stack. Each asks a different question. Together, they produce a complete assessment.

Theoretical Foundations

The science behind the desk

Vera's analytical protocols are grounded in four formal frameworks drawn from computational neuroscience, statistical physics, cybernetics, and probability theory. These are not metaphors. They are the operating logic.

Computational Neuroscience

Free Energy Principle

Karl Friston, 2006
Any self-organizing system that persists must minimize the divergence between its internal model and incoming sensory evidence. Surprise is costly. Accurate models are survival.
Variational Free Energy
F = E_q[ln q(θ) − ln p(o, θ)]
q(θ) = current beliefs (confidence distribution)
p(o, θ) = joint probability of observations and parameters
Minimizing F = minimizing surprise = updating beliefs toward evidence
Application in Vera
Every Truthseeker Dispatch is a free energy minimization cycle. Vera takes incoming evidence (o), compares it against the current confidence distribution (q), and updates the assessment to minimize the gap. The confidence grades are the explicit representation of q(θ).
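
A minimal sketch of one such cycle, assuming a discrete two-state claim; every number below (the prior, the likelihood table, the starting beliefs) is hypothetical, chosen only to show F falling as q moves toward the evidence:

import math

# Hypothetical two-state claim: q(θ) is the current confidence distribution.
q = {"true": 0.7, "false": 0.3}

# Hypothetical generative model for one piece of evidence o:
# p(o, θ) = p(o | θ) · p(θ).
prior = {"true": 0.5, "false": 0.5}        # p(θ)
likelihood = {"true": 0.8, "false": 0.2}   # p(o | θ)

def free_energy(q, prior, likelihood):
    """F = Σ_θ q(θ) [ln q(θ) − ln p(o, θ)]; lower F means less surprise."""
    return sum(q[t] * (math.log(q[t]) - math.log(likelihood[t] * prior[t]))
               for t in q)

print(free_energy(q, prior, likelihood))           # ≈ 0.721 with q as above

# Minimizing F over q recovers the exact posterior p(θ | o),
# where F bottoms out at −ln p(o):
p_o = sum(likelihood[t] * prior[t] for t in prior)
posterior = {t: likelihood[t] * prior[t] / p_o for t in prior}
print(free_energy(posterior, prior, likelihood))   # ≈ 0.693 = −ln 0.5
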
Statistical Physics / Active Inference

Markov Blankets

Judea Pearl, 1988 / Karl Friston, 2013
A Markov blanket is the boundary that separates a system's internal states from the external world. It defines what counts as admissible evidence and what constitutes noise.
Conditional Independence
p(η | b, μ) = p(η | b)
η = external states (ground truth)
b = blanket states (sensory + active)
μ = internal states (beliefs / assessments)
Internal states are conditionally independent of external states given the blanket
Application in Vera
Vera's source hierarchy is her Markov blanket. It defines what evidence is admitted (Tier 1-2), noted but not weighted (Tier 3), and excluded as noise (Tier 4). The dispatches are active states. The confidence distribution is the internal state.
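
A toy numerical check of the blanket condition, with hypothetical states and conditional tables (boom/bust, strong/weak, bullish/bearish) arranged so the assessment touches the world only through the blanket:

from itertools import product

# Toy discrete states: η (world), b (blanket), μ (assessment).
p_eta = {"boom": 0.6, "bust": 0.4}
p_b_given_eta = {"boom": {"strong": 0.9, "weak": 0.1},
                 "bust": {"strong": 0.2, "weak": 0.8}}
p_mu_given_b = {"strong": {"bullish": 0.8, "bearish": 0.2},
                "weak": {"bullish": 0.3, "bearish": 0.7}}

# The joint factorizes as p(η) p(b | η) p(μ | b): μ sees η only through b.
joint = {
    (e, b, m): p_eta[e] * p_b_given_eta[e][b] * p_mu_given_b[b][m]
    for e, b, m in product(p_eta, ("strong", "weak"), ("bullish", "bearish"))
}

def cond(eta, b, mu=None):
    """p(η | b) or p(η | b, μ) by brute-force marginalization."""
    num = sum(p for (e, bb, m), p in joint.items()
              if e == eta and bb == b and (mu is None or m == mu))
    den = sum(p for (e, bb, m), p in joint.items()
              if bb == b and (mu is None or m == mu))
    return num / den

# Conditional independence: knowing μ adds nothing once b is known.
assert abs(cond("boom", "strong") - cond("boom", "strong", "bullish")) < 1e-12
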
Cybernetics

Law of Requisite Variety

W. Ross Ashby, 1956
A regulator must have at least as much variety as the system it regulates. If the environment produces more variety than your instruments can absorb, you lose control of the assessment.
Ashby's Law
V(R) ≥ V(D)
V(R) = variety of the regulator (analytical modes)
V(D) = variety of disturbances (claims, narratives, noise)
The regulator must match or exceed the variety of what it faces
Application in Vera
Four analytical modes, five epistemic categories, four source tiers, three confidence levels. This is requisite variety: the minimum instrumentation needed to regulate the information environment without losing resolution.
Probability Theory

Bayesian Inference

Thomas Bayes, 1763 / Pierre-Simon Laplace, 1774
Rational belief updating: the posterior probability of a hypothesis given new evidence is proportional to the prior multiplied by the likelihood of the evidence.
Bayes' Theorem
P(H | E) = P(E | H) · P(H) / P(E)
P(H|E) = updated confidence after new evidence
P(E|H) = how likely the evidence is if the hypothesis is true
P(H) = prior confidence
P(E) = baseline probability of the evidence
Application in Vera
Every confidence grade is a posterior. "High confidence" means the product of prior belief and evidence likelihood places the hypothesis well above the decision threshold. The "update publicly" principle is transparent Bayesian updating.
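
A minimal sketch of one such update; the prior, the likelihoods, and the 0.8 line are illustrative values, not Vera's calibrated thresholds:

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) = P(E | H) · P(H) / P(E), with P(E) by total probability."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# A claim at even odds meets strong, well-audited evidence:
posterior = bayes_update(prior=0.5, p_e_given_h=0.9, p_e_given_not_h=0.2)
print(f"{posterior:.2f}")  # 0.82; above a hypothetical 0.8 high-confidence line
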
Information Gain (Leading Indicators)
IG(E) = D_KL[P(H | E) ‖ P(H)]
The KL divergence between posterior and prior measures how far a given piece of evidence would move the assessment. Vera's leading indicators are selected for maximum expected information gain.
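
The selection rule can be sketched as expected information gain, averaging the KL movement over both possible outcomes of a binary indicator; the likelihoods below are hypothetical:

import math

def kl(p, q):
    """D_KL between two Bernoulli distributions with success probs p, q."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def expected_info_gain(prior, p_e_given_h, p_e_given_not_h):
    """E over outcomes of D_KL[P(H | E) ‖ P(H)]: how much an indicator
    is expected to move the assessment before it is observed."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    post_e = p_e_given_h * prior / p_e
    post_not_e = (1 - p_e_given_h) * prior / (1 - p_e)
    return p_e * kl(post_e, prior) + (1 - p_e) * kl(post_not_e, prior)

# Hypothetical indicators: one diagnostic, one nearly uninformative.
print(expected_info_gain(0.5, 0.9, 0.1))    # ≈ 0.368; worth watching
print(expected_info_gain(0.5, 0.55, 0.45))  # ≈ 0.005; barely moves anything
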
Figure 01

Vera's Markov Blanket

The source hierarchy defines the boundary between the world and the assessment. Nothing crosses the blanket without classification.

External States (η)
The World
Markets & Capital
  • Asset price movements and volatility regimes
  • Capital allocation shifts across sectors
  • Credit spreads and sovereign debt dynamics
  • Venture funding rounds and valuation trends
Technology & Deployment
  • Production deployments vs. announcement milestones
  • Inference cost curves and compute scaling
  • Patent filings and standards-body decisions
  • Open-source releases and capability benchmarks
Governance & Institutions
  • Regulatory text, enforcement actions, legal rulings
  • Trade policy shifts and export controls
  • Institutional procurement and budget cycles
  • Standards-body formation and membership
Physical Constraints
  • Energy infrastructure capacity and grid load
  • Supply chain throughput and bottleneck geography
  • Labor market composition and skills pipeline
  • Thermodynamic and material limits on computation
Unmeasured
  • Latent risks not yet surfaced by instrumentation
  • Second-order effects of policy or technology shifts
  • Regime changes that invalidate baseline models
Sensory States (s)
  • T1: SEC filings, earnings, peer-reviewed data
  • T2: Analyst reports with primary citations
  • T3: Media reports (noted, not weighted)
  • T4: Noise (excluded at boundary)
Active States (a)
  • Truthseeker Dispatches
  • Indicator Dashboards
  • Source Audits
  • Claim Grades
  • Red Flag Reports
Internal States (μ)
The Assessment
Belief Distribution
  • Confidence grades (high / medium / low)
  • Epistemic spectrum placement
  • Posterior probability estimates per claim
  • Narrative premium measurements
Predictive Model
  • Leading indicator watchlist (ranked by info gain)
  • Clock vs. compass separation per thesis
  • Falsification criteria (confirm / break / regime shift)
  • Priors carried from previous dispatches
Integrity Checks
  • Source quality audit results
  • Incentive alignment maps
  • Missing evidence inventories
  • Update history and prior revisions
Output Calibration
  • Dispatch signature lines
  • Cross-desk references (Manticus, Darśan)
  • Prediction tracking for retrospective accuracy
Evidence flows inward through sensory classification. Dispatches flow outward as active inference.
The Blanket Principle
The quality of Vera's assessment cannot exceed the quality of her Markov blanket. If the source hierarchy admits noise, the confidence distribution inherits that noise. If the blanket excludes signal, the assessment has a blind spot. The blanket is the instrument. Calibrate the instrument, calibrate the output.
Figure 02

Confidence Grading System

Every substantive claim receives an explicit posterior probability estimate, expressed as a confidence tag. This is Bayesian updating made operational.

High Confidence
Posterior > decision threshold
Independently verified. Multiply sourced. Consistent with physical or economic constraints. Counter-evidence examined.
Medium Confidence
Posterior near threshold
Directionally supported. Primary sources exist but incomplete. One new observation could shift the posterior.
Low Confidence
Prior dominates posterior
Plausible but unverified. Evidence likelihood is weak or contaminated by incentive alignment. High free energy.
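
As a sketch, the grading reads as a threshold map over the posterior; the 0.8 threshold and 0.15 band are illustrative stand-ins, not the desk's published calibration:

def grade(posterior, threshold=0.8, band=0.15):
    """Map a posterior probability to a confidence tag."""
    if posterior >= threshold:
        return "high confidence"
    if posterior >= threshold - band:
        return "medium confidence"
    return "low confidence"

print(grade(0.9))   # high confidence
print(grade(0.7))   # medium confidence; one observation could shift it
print(grade(0.4))   # low confidence; the prior still dominates
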
Figure 03

Requisite Variety Audit

Ashby's Law demands the regulator's variety matches the disturbance's variety. Here is the accounting.

V(D) — Disturbance Variety

What the information environment produces

  • Verified claims · class 1 of 5
  • Calibrated estimates · class 2 of 5
  • Forward projections · class 3 of 5
  • Narrative frames · class 4 of 5
  • Wish fulfillment · class 5 of 5
  • Primary sources · tier 1 of 4
  • Secondary with citations · tier 2 of 4
  • Secondary without citations · tier 3 of 4
  • Noise / circular sourcing · tier 4 of 4
  • Directional signals (compass) · temporal 1 of 2
  • Timeline-bound signals (clock) · temporal 2 of 2
V(R) — Regulator Variety

What Vera's instruments can absorb

  • Confidence grading (3 levels) · 3 states
  • Epistemic spectrum (5 categories) · 5 states
  • Source hierarchy (4 tiers) · 4 states
  • Analytical modes (4 protocols) · 4 states
  • Clock / compass separation · 2 states
  • Falsification criteria (3 types) · 3 states
  • Dispatch phases (6 stages) · 6 states
Variety Balance
Disturbance variety: 5 epistemic classes × 4 source tiers × 2 temporal modes = 40 distinguishable input states.
Regulator variety: 3 confidence levels × 5 spectrum categories × 4 source tiers × 4 modes × 2 temporal frames = 480 distinguishable output states.

V(R) = 480 > V(D) = 40. Ashby's Law is satisfied.
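
The accounting reduces to one line of arithmetic over the state counts listed above:

# State counts from the audit above.
disturbance = 5 * 4 * 2        # epistemic classes × source tiers × temporal modes
regulator = 3 * 5 * 4 * 4 * 2  # confidence × spectrum × tiers × modes × temporal
assert regulator >= disturbance  # Ashby: V(R) ≥ V(D)
print(disturbance, regulator)    # 40 480
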
Figure 04

Source Quality Hierarchy

The sensory states of Vera's Markov blanket. This hierarchy determines what evidence is admitted, weighted, noted, or excluded.

Tier 1 — Primary · Full Evidentiary Weight
SEC filings · earnings transcripts · peer-reviewed data · government statistics · court records · audited financials
Admitted to blanket
Tier 2 — Secondary (cited) · Accepted with Verification
Analyst reports citing filings · investigative journalism with named sources · academic meta-analyses · institutional research with methodology
Admitted with audit
Tier 3 — Secondary (uncited) · Noted, Not Weighted
Media reports citing unnamed sources · industry commentary · analyst consensus without methodology · conference remarks without data
Observed only
Tier 4 — Noise · Excluded from Assessment
Secondary citing secondary · social media consensus · undated or unsourced claims · vendor white papers without methodology
Outside the blanket
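
A minimal sketch of admission at the boundary; the Source type and the Tier 2 weight of 0.7 are hypothetical illustrations of the rules above:

from dataclasses import dataclass

@dataclass
class Source:
    name: str
    tier: int  # 1-4, per the hierarchy above

def admit(source: Source) -> tuple[bool, float]:
    """Return (crosses the blanket?, evidentiary weight)."""
    if source.tier == 1:
        return True, 1.0   # full evidentiary weight
    if source.tier == 2:
        return True, 0.7   # admitted with audit; weight is illustrative
    if source.tier == 3:
        return True, 0.0   # noted, not weighted
    return False, 0.0      # Tier 4: excluded at the boundary

print(admit(Source("10-K filing", tier=1)))       # (True, 1.0)
print(admit(Source("unsourced thread", tier=4)))  # (False, 0.0)
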
Figure 05

Truthseeker Dispatch Architecture

Six phases. Each is a step in the free energy minimization loop.

Phase A — Observe
Evidence Assessment
Headline claim and confidence level. One sentence. Forces precision at the intake boundary.
Phase B — Classify
Confidence-Graded Claims
Each claim tagged with its posterior estimate. Evidentiary basis cited. Gaps identified.
Phase C — Audit
Source Quality Audit
Markov blanket integrity check. Primary vs. secondary classification. Incentive structures identified.
Phase D — Project
Leading Indicators
5-10 metrics selected for maximum information gain. Leading indicators, not lagging.
Phase E — Falsify
Falsification Criteria
Confirm, break, or regime shift. If nothing would change your mind, you are advocating, not analyzing.
Phase F — Synthesize
Assessment Summary
Updated posterior in natural language. Signature line compressing the verdict into a portable statement.
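
As a sketch, the six phases read as a fixed pipeline over a running state; the phase outputs below are placeholders, not the dispatch protocol itself:

PHASES = ("observe", "classify", "audit", "project", "falsify", "synthesize")

def run_dispatch(claim: str, evidence: list[str]) -> dict:
    """Each phase reads the accumulated state and appends its artifact:
    headline, graded claims, source audit, indicators, criteria, summary."""
    state = {"claim": claim, "evidence": evidence}
    for phase in PHASES:
        state[phase] = f"<{phase}: pending analyst input>"  # placeholder output
    return state

dispatch = run_dispatch("inference costs fell 10x", ["10-K filing", "analyst report"])
print(list(dispatch))  # claim, evidence, then the six phase artifacts in order
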
Figure 06

Clock vs. Compass

Most analytical failures come from getting the direction right but the timing wrong.

The Compass
"Is this directionally true?"
  • Structural trends with multi-year momentum
  • Physical or economic constraints that bound outcomes
  • Institutional incentive structures
  • Irreversibility thresholds already crossed
vs.
The Clock
"Is this true on this timeline?"
  • Quarterly earnings and revenue run rates
  • Regulatory implementation dates
  • Deployment vs. announcement milestones
  • Cash burn rates and runway projections
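
A minimal sketch of keeping the two registers separate in a thesis record: clock entries must bind a date, compass entries must not. The claims shown are hypothetical:

# Compass entries are undated directional statements; clock entries bind a date.
thesis = {
    "compass": ["inference costs fall on a multi-year trend"],
    "clock": [("regulation takes effect", "2026-08-02")],
}

def validate(thesis: dict) -> None:
    """Clock claims must carry a date; compass claims stay date-free."""
    assert all(isinstance(claim, str) for claim in thesis["compass"])
    assert all(isinstance(claim, tuple) and len(claim) == 2
               for claim in thesis["clock"])

validate(thesis)  # passes; moving a dated claim into "compass" would fail
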
Figure 07

The Epistemic Spectrum

Five categories from measured fact to wish fulfillment. Every claim gets placed.

Verified → Aspirational
01
Measured Fact
Independently verified. Survives replication.
02
Calibrated Estimate
Model-derived with stated assumptions.
03
Projection
Forward-looking. Assumption-dependent.
04
Narrative
Interpretive frame. Not independently testable.
05
Wish Fulfillment
No falsification criteria. Survives only inattention.
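
As a sketch, the spectrum is an ordered scale, which makes the "narrative premium" of Figure 01 computable as a distance; the narrative_premium helper and the example placement are hypothetical:

from enum import IntEnum

class Epistemic(IntEnum):
    MEASURED_FACT = 1
    CALIBRATED_ESTIMATE = 2
    PROJECTION = 3
    NARRATIVE = 4
    WISH_FULFILLMENT = 5

def narrative_premium(claimed: Epistemic, supported: Epistemic) -> int:
    """How many rungs a claim is presented above what its evidence supports."""
    return max(0, supported - claimed)

# A claim presented as measured fact but supported only at projection level:
print(narrative_premium(Epistemic.MEASURED_FACT, Epistemic.PROJECTION))  # 2
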
Figure 08

Analytical Modes

Four specialized protocols providing the requisite variety to regulate the information environment.

Claim Grading

Trigger: "Grade this claim"
Precision Bayesian assessment of a single claim. Restates, assigns posterior, identifies counter-evidence.
Outputs
Confidence grade + reasoning
Strongest counter-evidence
One-line verdict

Indicator Dashboard

Trigger: "What should I watch?"
Monitoring list ranked by expected information gain. Leading indicators that would move the distribution.
Outputs
Category + indicator matrix
Signal direction
Next check date

Source Audit

Trigger: "Audit this evidence"
Markov blanket integrity review. Source quality, incentive alignment, replicability, and missing evidence.
Outputs
Source quality classification
Incentive alignment map
Missing evidence inventory

Red Flag

Trigger: "Find the risks"
Free energy stress test. Surfaces unverified load-bearing assumptions, self-reported data, and selection bias.
Outputs
Load-bearing assumption audit
Self-reported data flags
Timeline revision history
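
A minimal sketch of trigger-based routing across the four modes; the handler names and the default are placeholders, not the desk's actual dispatcher:

# Triggers from the mode cards above, mapped to placeholder handler names.
TRIGGERS = {
    "grade this claim": "claim_grading",
    "what should i watch": "indicator_dashboard",
    "audit this evidence": "source_audit",
    "find the risks": "red_flag",
}

def route(request: str) -> str:
    for trigger, mode in TRIGGERS.items():
        if trigger in request.lower():
            return mode
    return "claim_grading"  # hypothetical default mode

print(route("Please grade this claim: inference costs fell 10x"))  # claim_grading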

"If it's real, it will survive instrumentation."

Vera · Evidence & Indicators Desk · First Principles First