Analytical Architecture
Where Vera sits in the stack
FP1's three correspondents form a unified analytical stack. Each asks a different question. Together, they produce a complete assessment.
Theoretical Foundations
The science behind the desk
Vera's analytical protocols are grounded in four formal frameworks from cybernetics, information theory, and computational neuroscience. These are not metaphors. They are the operating logic.
Computational Neuroscience
Free Energy Principle
Karl Friston, 2006
Any self-organizing system that persists must minimize the divergence between its internal model and incoming sensory evidence. Surprise is costly. Accurate models are survival.
Application in Vera
Every Truthseeker Dispatch is a free energy minimization cycle. Vera takes incoming evidence o, compares it against the current belief distribution q(θ), and updates the assessment to minimize the gap. The confidence grades are the explicit, published representation of q(θ).
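As a toy illustration (not the desk's actual machinery), the discrete variational free energy F = E_q[log q(θ) − log p(θ) − log p(o|θ)] can be computed directly. Updating q(θ) toward the exact posterior drives F to its minimum, −log p(o); the two-hypothesis numbers below are assumptions for the sketch.

```python
import math

def free_energy(q, prior, lik):
    """Discrete variational free energy:
    F = Σ_θ q(θ) · (log q(θ) − log p(θ) − log p(o|θ))."""
    return sum(q[t] * (math.log(q[t]) - math.log(prior[t]) - math.log(lik[t]))
               for t in q if q[t] > 0)

# Two-hypothesis toy model; the probabilities are illustrative assumptions.
prior = {"thesis_holds": 0.5, "thesis_fails": 0.5}   # p(θ)
lik   = {"thesis_holds": 0.8, "thesis_fails": 0.2}   # p(o|θ) for observed evidence o

# The exact posterior is the q that minimizes F.
z = sum(prior[t] * lik[t] for t in prior)            # p(o)
posterior = {t: prior[t] * lik[t] / z for t in prior}

stale = dict(prior)  # a belief distribution that ignored the evidence
print(free_energy(stale, prior, lik))      # ≈ 0.916 — surprise unabsorbed
print(free_energy(posterior, prior, lik))  # ≈ 0.693 = −log p(o), the minimum
```

Any q other than the posterior leaves F strictly above −log p(o); that residual is the "gap" each dispatch cycle works to close.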
Statistical Physics / Active Inference
Markov Blankets
Judea Pearl, 1988 / Karl Friston, 2013
A Markov blanket is the boundary that separates a system's internal states from the external world. It defines what counts as admissible evidence and what constitutes noise.
Application in Vera
Vera's source hierarchy is her Markov blanket. It defines what evidence is admitted (Tiers 1–2), noted but not weighted (Tier 3), and excluded as noise (Tier 4). The dispatches are active states. The confidence distribution is the internal state.
Cybernetics
Law of Requisite Variety
W. Ross Ashby, 1956
A regulator must have at least as much variety as the system it regulates. If the environment produces more variety than your instruments can absorb, you lose control of the assessment.
Application in Vera
Four analytical modes, five epistemic categories, four source tiers, three confidence levels. This is requisite variety: the minimum instrumentation needed to regulate the information environment without losing resolution.
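Under the toy assumption that variety is simply the count of distinguishable states, the instrumentation above can be checked against Ashby's inequality:

```python
from itertools import product

# Vera's instruments, as listed above.
modes      = ["claim_grading", "indicator_dashboard", "source_audit", "red_flag"]
categories = ["measured_fact", "calibrated_estimate", "projection",
              "narrative", "wish_fulfillment"]
tiers      = [1, 2, 3, 4]
grades     = ["high", "medium", "low"]

# Toy model: regulator variety = number of distinguishable assessment states.
regulator_variety = len(list(product(modes, categories, tiers, grades)))
print(regulator_variety)  # 4 × 5 × 4 × 3 = 240

def requisite_variety_ok(regulator: int, environment: int) -> bool:
    """Ashby's law: regulation holds only if the regulator has at least
    as much variety as the disturbances it must absorb."""
    return regulator >= environment
```

If the environment can present more distinguishable situations than the instruments can distinguish, resolution is lost somewhere. That is the quantitative content of "without losing resolution."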
Probability Theory
Bayesian Inference
Thomas Bayes, 1763 / Pierre-Simon Laplace, 1774
Rational belief updating: the posterior probability of a hypothesis given new evidence is proportional to the prior multiplied by the likelihood of the evidence.
Application in Vera
Every confidence grade is a posterior. "High confidence" means the product of prior belief and evidence likelihood places the hypothesis well above the decision threshold. The "update publicly" principle is transparent Bayesian updating.
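The update rule itself is small enough to state as code. The likelihoods below are illustrative assumptions, not calibrated values:

```python
def bayes_update(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Posterior P(H|E) = P(E|H)·P(H) / (P(E|H)·P(H) + P(E|¬H)·P(¬H))."""
    num = lik_if_true * prior
    return num / (num + lik_if_false * (1.0 - prior))

# Start agnostic, then absorb two independent corroborating observations,
# each 3× more likely under the hypothesis than under its negation.
p = 0.5
p = bayes_update(p, 0.9, 0.3)  # → 0.75
p = bayes_update(p, 0.9, 0.3)  # → 0.90
```

"Update publicly" then reduces to printing p before and after each observation, with the observation cited.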
Figure 01
Vera's Markov Blanket
The source hierarchy defines the boundary between the world and the assessment. Nothing crosses the blanket without classification.
External States (η)
The World
Markets & Capital
- Asset price movements and volatility regimes
- Capital allocation shifts across sectors
- Credit spreads and sovereign debt dynamics
- Venture funding rounds and valuation trends
Technology & Deployment
- Production deployments vs. announcement milestones
- Inference cost curves and compute scaling
- Patent filings and standards-body decisions
- Open-source releases and capability benchmarks
Governance & Institutions
- Regulatory text, enforcement actions, legal rulings
- Trade policy shifts and export controls
- Institutional procurement and budget cycles
- Standards-body formation and membership
Physical Constraints
- Energy infrastructure capacity and grid load
- Supply chain throughput and bottleneck geography
- Labor market composition and skills pipeline
- Thermodynamic and material limits on computation
Unmeasured
- Latent risks not yet surfaced by instrumentation
- Second-order effects of policy or technology shifts
- Regime changes that invalidate baseline models
Sensory States (s)
- T1: SEC filings, earnings, peer-reviewed data
- T2: Analyst reports with primary citations
- T3: Media reports (noted, not weighted)
- T4: Noise (excluded at boundary)
Active States (a)
- Truthseeker Dispatches
- Indicator Dashboards
- Source Audits
- Claim Grades
- Red Flag Reports
Internal States (μ)
The Assessment
Belief Distribution
- Confidence grades (high / medium / low)
- Epistemic spectrum placement
- Posterior probability estimates per claim
- Narrative premium measurements
Predictive Model
- Leading indicator watchlist (ranked by info gain)
- Clock vs. compass separation per thesis
- Falsification criteria (confirm / break / regime shift)
- Priors carried from previous dispatches
Integrity Checks
- Source quality audit results
- Incentive alignment maps
- Missing evidence inventories
- Update history and prior revisions
Output Calibration
- Dispatch signature lines
- Cross-desk references (Manticus, Darśan)
- Prediction tracking for retrospective accuracy
→ Evidence flows inward through sensory classification
← Dispatches flow outward as active inference
Figure 02
Confidence Grading System
Every substantive claim receives an explicit posterior probability estimate, expressed as a confidence tag. This is Bayesian updating made operational.
High Confidence
Posterior > decision threshold
Independently verified. Multiply sourced. Consistent with physical or economic constraints. Counter-evidence examined.
Medium Confidence
Posterior near threshold
Directionally supported. Primary sources exist but incomplete. One new observation could shift the posterior.
Low Confidence
Prior dominates posterior
Plausible but unverified. Evidence likelihood is weak or contaminated by incentive alignment. High free energy.
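Mapping a posterior to a tag is then a threshold function. The cutoffs below are assumed for illustration; the desk does not publish numeric thresholds:

```python
def confidence_grade(posterior: float, hi: float = 0.9, lo: float = 0.6) -> str:
    """Tag a posterior probability estimate. Thresholds are illustrative."""
    if posterior >= hi:
        return "high confidence"     # well above the decision threshold
    if posterior >= lo:
        return "medium confidence"   # near threshold; one observation could move it
    return "low confidence"          # prior still dominates

print(confidence_grade(0.95))  # high confidence
print(confidence_grade(0.70))  # medium confidence
print(confidence_grade(0.40))  # low confidence
```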
Figure 04
Source Quality Hierarchy
The sensory states of Vera's Markov blanket. This hierarchy determines what evidence is admitted, weighted, noted, or excluded.
Tier 1 — Primary · Full Evidentiary Weight
SEC filings · earnings transcripts · peer-reviewed data · government statistics · court records · audited financials
Admitted to blanket
Tier 2 — Secondary (Cited) · Accepted with Verification
Analyst reports citing filings · investigative journalism with named sources · academic meta-analyses · institutional research with methodology
Admitted with audit
Tier 3 — Secondary (Uncited) · Noted, Not Weighted
Media reports citing unnamed sources · industry commentary · analyst consensus without methodology · conference remarks without data
Observed only
Tier 4 — Noise · Excluded from Assessment
Secondary citing secondary · social media consensus · undated or unsourced claims · vendor white papers without methodology
Outside the blanket
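The hierarchy behaves like a boundary function: each source either crosses the blanket with a weight, is observed without weight, or is rejected outright. The numeric weights below are assumptions for the sketch, not published values:

```python
# Evidentiary weight per tier; 0.0 = observed only, None = outside the blanket.
TIER_WEIGHT = {1: 1.0, 2: 0.7, 3: 0.0, 4: None}  # illustrative values

def classify_source(tier: int) -> tuple[str, float]:
    """Return the blanket decision and evidentiary weight for a source tier."""
    w = TIER_WEIGHT.get(tier)
    if w is None:
        return ("excluded at boundary", 0.0)
    if w == 0.0:
        return ("noted, not weighted", 0.0)
    return ("admitted", w)

print(classify_source(1))  # ('admitted', 1.0)
print(classify_source(3))  # ('noted, not weighted', 0.0)
print(classify_source(4))  # ('excluded at boundary', 0.0)
```

Nothing reaches the internal states without passing through this function first; that is what "nothing crosses the blanket without classification" means operationally.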
Figure 05
Truthseeker Dispatch Architecture
Six phases. Each is a step in the free energy minimization loop.
Phase A — Observe
Evidence Assessment
Headline claim and confidence level. One sentence. Forces precision at the intake boundary.
Phase B — Classify
Confidence-Graded Claims
Each claim tagged with its posterior estimate. Evidentiary basis cited. Gaps identified.
Phase C — Audit
Source Quality Audit
Markov blanket integrity check. Primary vs. secondary classification. Incentive structures identified.
Phase D — Project
Leading Indicators
5–10 metrics selected for maximum information gain. Leading indicators, not lagging.
Phase E — Falsify
Falsification Criteria
Confirm, break, or regime shift. If nothing would change your mind, you are advocating, not analyzing.
Phase F — Synthesize
Assessment Summary
Updated posterior in natural language. Signature line compressing the verdict into a portable statement.
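Structurally, a dispatch is a fold of six phase functions over a working assessment. The stubs below only show the shape of the loop; real phases would carry the actual analysis:

```python
def run_dispatch(claim, phases):
    """Apply the six phases in order, each enriching the assessment."""
    assessment = {"claim": claim, "phases_run": []}
    for name, fn in phases:
        assessment = fn(assessment)
        assessment["phases_run"].append(name)
    return assessment

# Illustrative stubs, one per phase A–F.
phases = [
    ("observe",    lambda a: {**a, "headline": a["claim"]}),
    ("classify",   lambda a: {**a, "grade": "medium confidence"}),
    ("audit",      lambda a: {**a, "sources_clean": True}),
    ("project",    lambda a: {**a, "indicators": ["inference cost per token"]}),
    ("falsify",    lambda a: {**a, "breaks_if": "indicator reverses for two quarters"}),
    ("synthesize", lambda a: {**a, "verdict": f'{a["grade"]}: {a["headline"]}'}),
]

result = run_dispatch("Compute scaling continues through 2026", phases)
print(result["verdict"])  # medium confidence: Compute scaling continues through 2026
```

The ordering matters: falsification criteria (Phase E) are fixed before synthesis (Phase F), so the verdict cannot quietly drop the conditions under which it breaks.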
Figure 06
Clock vs. Compass
Most analytical failures come from getting the direction right but the timing wrong.
◎
The Compass
"Is this directionally true?"
- Structural trends with multi-year momentum
- Physical or economic constraints that bound outcomes
- Institutional incentive structures
- Irreversibility thresholds already crossed
◷
The Clock
"Is this true on this timeline?"
- Quarterly earnings and revenue run rates
- Regulatory implementation dates
- Deployment vs. announcement milestones
- Cash burn rates and runway projections
Figure 07
The Epistemic Spectrum
Five categories from measured fact to wish fulfillment. Every claim gets placed.
01
Measured Fact
Independently verified. Survives replication.
02
Calibrated Estimate
Model-derived with stated assumptions.
03
Projection
Forward-looking. Assumption-dependent.
04
Narrative
Interpretive frame. Not independently testable.
05
Wish Fulfillment
No falsification criteria. Survives only inattention.
Figure 08
Analytical Modes
Four specialized protocols providing the requisite variety to regulate the information environment.
⊘
Claim Grading
Trigger: "Grade this claim"
Precision Bayesian assessment of a single claim. Restates, assigns posterior, identifies counter-evidence.
Outputs
Confidence grade + reasoning
Strongest counter-evidence
One-line verdict
◈
Indicator Dashboard
Trigger: "What should I watch?"
Monitoring list ranked by expected information gain. Leading indicators that would move the distribution.
Outputs
Category + indicator matrix
Signal direction
Next check date
⊞
Source Audit
Trigger: "Audit this evidence"
Markov blanket integrity review. Source quality, incentive alignment, replicability, and missing evidence.
Outputs
Source quality classification
Incentive alignment map
Missing evidence inventory
⚑
Red Flag
Trigger: "Find the risks"
Free energy stress test. Surfaces unverified load-bearing assumptions, self-reported data, and selection bias.
Outputs
Load-bearing assumption audit
Self-reported data flags
Timeline revision history
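The "ranked by expected information gain" ordering in the Indicator Dashboard can be sketched for a binary hypothesis: an indicator's value is the expected drop in entropy once it is observed. The detection profiles below are assumptions invented for the example:

```python
import math

def entropy(p: float) -> float:
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior: float, tpr: float, fpr: float) -> float:
    """Expected entropy reduction from observing a binary indicator with
    P(signal|H) = tpr and P(signal|¬H) = fpr."""
    p_sig = prior * tpr + (1 - prior) * fpr
    post_sig   = prior * tpr / p_sig
    post_quiet = prior * (1 - tpr) / (1 - p_sig)
    return entropy(prior) - (p_sig * entropy(post_sig)
                             + (1 - p_sig) * entropy(post_quiet))

# Assumed detection profiles: (P(signal|H), P(signal|¬H)).
indicators = {
    "inference cost per token": (0.9, 0.2),  # discriminates strongly
    "conference keynote tone":  (0.6, 0.5),  # barely moves the distribution
}
ranked = sorted(indicators,
                key=lambda k: expected_info_gain(0.5, *indicators[k]),
                reverse=True)
print(ranked[0])  # inference cost per token
```

An indicator that fires equally often under both hypotheses has near-zero gain no matter how prominent it is; the ranking keeps the watchlist on metrics that would actually move the distribution.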
"If it's real, it will survive instrumentation."
Vera · Evidence & Indicators Desk · First Principles First