Our Methodology

How FUTARCHY.media builds forecasts, measures uncertainty, and holds itself accountable.

What We Do

FUTARCHY.media produces data-driven forecasts on politics, finance, technology, sports, and world affairs. Every article assigns a probability to a specific, falsifiable claim — and explains exactly how we got there.

We don't predict the future. We estimate its likelihood, show our reasoning, and tell you when we'll know if we were right.

Our editorial posture is simple: here's what the data shows, here's our reading, decide for yourself. We are a transparent aggregator, not an oracle. Every forecast includes the assumptions that built it, the data that supports it, and the conditions that would break it.

The Dossier Format

Every FUTARCHY article is structured as a prospective dossier — a layered document designed for three reading speeds.

10 seconds: The Verdict

The probability, the claim, and the confidence interval. Visible at the top of every article before you scroll. You know what we think before you read a single paragraph.

2 minutes: The Executive Brief

Key findings, the critical metrics, scenario cards showing possible outcomes, and a stress test that tells you what would change our mind. Enough for a decision-maker who needs the essentials.

12 minutes: The Full Dossier

The complete analysis in four chapters: Context, Analysis, Reading, and Resolution. For the reader who wants to verify every step of our reasoning.

We borrowed this structure from intelligence briefings, data journalism, and prediction market interfaces. The goal is never to trap you into reading 2,000 words. It's to give you the right amount of information at whatever depth you need.

Five Analytical Frameworks

Each FUTARCHY forecast uses one of five frameworks. The framework is determined by the subject category, not the analyst's preference.

ORACLE: Politics, World Affairs, Geopolitics

Structures geopolitical and political probability assessments. Typically weights opposition resilience, resource alignment, coalition dynamics, and leadership calculus. The exact components and weights vary by subject.

Will the EU delay AI Act enforcement? Will Iran's Strait of Hormuz disruption persist through summer?

SIGNAL: Finance, Crypto, Economy

Analyzes financial predictions through market data and pricing signals, historical precedent, fundamental analysis, and sentiment. Pays particular attention to what money is doing — on the principle that traders putting capital at risk express more honest opinions than analysts writing reports.

Will SpaceX close above $1.5T on IPO day? Will Bitcoin miners derive 50%+ revenue from AI hosting?

CLUTCH: Sports, Esports

Evaluates competitive outcomes by weighting form, momentum, home advantage, squad depth, and historical precedent. Designed for one-off events where small margins and variance play an outsized role.

Will Arizona win the NCAA championship? Will Italy qualify through the European playoff finals?

PRISM: Technology, Climate, Science

Blends regulatory readiness, political momentum, industry pressure, and technical standards progress. Built for questions where policy, technology, and institutional dynamics intersect.

Will Anthropic's injunction hold through trial? Will the EU enforce its AI Act on schedule?

PULSE: Trending, Pop Culture

Tracks cultural momentum, public sentiment, media cycles, and historical base rates. For questions at the intersection of attention and prediction — where virality and timing matter more than structural analysis.

How We Build a Probability

Every forecast follows the same construction method, regardless of framework.

1. Define the claim. A specific, falsifiable statement with a resolution date. Not "will AI change the world" but "will publicly listed Bitcoin miners derive over 50% of revenue from AI/HPC hosting by December 31, 2026." The claim is the contract. Everything else serves it.

2. Identify 4-5 components. Each framework decomposes the question into weighted factors. The weights are editorial judgments, not algorithmic outputs. We assign them in multiples of 5%, and every weight gets a one-sentence justification.

3. Score each component. Analysts assign a score (typically 1-10) to each component based on available evidence. The scoring is transparent — readers can see the data points, historical precedents, and reasoning behind each score.

4. Calculate the weighted composite. Component scores are multiplied by their weights and summed. The result maps to a probability expressed in multiples of 5%, with an explicit confidence interval.

5. Stress test the result. A specific condition that would shift the probability by a stated amount. "If the US and Iran reach a ceasefire before May 15, our 70% drops to 30%." The stress test forces the analyst to name what would change their mind — and by how much.

6. Define resolution criteria. A date. A measurable condition. A YES/NO gate. No ambiguity, no retroactive reinterpretation.
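Steps 2-4 can be sketched in a few lines. This is a hypothetical illustration, not FUTARCHY's actual implementation: the 1-10 scoring scale and the weights are taken from the description above, but the linear mapping from composite score to percentage and the component names are assumptions.

```python
def weighted_probability(components: dict[str, tuple[float, float]]) -> int:
    """components maps name -> (weight, score). Weights sum to 1.0,
    scores run 1-10. Returns a probability as a multiple of 5 (percent)."""
    total_weight = sum(w for w, _ in components.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 100%"
    composite = sum(w * s for w, s in components.values())   # 1..10
    raw_pct = composite * 10          # assumed linear map to 10..100
    return int(5 * round(raw_pct / 5))  # express in multiples of 5

# Hypothetical ORACLE-style decomposition (weights in multiples of 5%):
example = {
    "opposition_resilience": (0.30, 7),
    "resource_alignment":    (0.25, 6),
    "coalition_dynamics":    (0.25, 8),
    "leadership_calculus":   (0.20, 5),
}
print(weighted_probability(example))  # composite 6.6 -> 66% -> 65%
```

Note that the final rounding is what produces the multiples-of-5 output: a raw 62.4% and a raw 63.1% both land on the same published number.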

What Our Probabilities Mean

A 65% probability means: if we made 100 forecasts at 65% confidence, we'd expect roughly 65 of them to resolve YES and 35 to resolve NO. A well-calibrated forecaster at 65% is wrong 35% of the time — and that's not a failure. That's the point.

We express all probabilities in multiples of 5. We don't write 62.4% because that level of precision is dishonest for the kind of analysis we do. The difference between 60% and 65% is meaningful. The difference between 62.4% and 63.1% is noise.

Every probability comes with a confidence interval — a numerical range (e.g., 55%-75%) that captures our uncertainty about the estimate itself. A narrow interval means we're relatively sure about our probability. A wide one means the underlying situation is volatile.
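The two honesty rules above — probabilities in multiples of 5, intervals always numerical — are simple enough to enforce mechanically. A minimal sketch; the `Forecast` type and its fields are hypothetical, not part of any FUTARCHY system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Forecast:
    probability: int  # percent, must be a multiple of 5
    low: int          # confidence interval lower bound, percent
    high: int         # confidence interval upper bound, percent

    def __post_init__(self):
        if self.probability % 5 != 0:
            raise ValueError("probabilities come in multiples of 5")
        if not (self.low <= self.probability <= self.high):
            raise ValueError("interval must contain the point estimate")

f = Forecast(probability=65, low=55, high=75)
print(f"{f.probability}% ({f.low}%-{f.high}%)")  # prints "65% (55%-75%)"
```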

Scenarios and Resolution

Every dossier presents at least three scenarios that sum to 100%. This isn't a rhetorical trick — it's a forcing function. If you can't make your scenarios sum to 100%, you haven't thought through all the possibilities.

Resolution criteria are non-negotiable. Every forecast has a specific date and measurable conditions. When that date arrives, the forecast resolves YES or NO based on observable facts — not our interpretation of events, not a revised reading, not "well, it sort of happened." Binary resolution keeps us honest.
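The sum-to-100% rule is a forcing function precisely because it can be checked mechanically. A minimal sketch of that check; the scenario names are illustrative assumptions.

```python
def validate_scenarios(scenarios: dict[str, int]) -> None:
    """Enforce the dossier rule: at least three scenarios, probabilities
    (in percent) summing to exactly 100."""
    if len(scenarios) < 3:
        raise ValueError("a dossier needs at least three scenarios")
    total = sum(scenarios.values())
    if total != 100:
        raise ValueError(f"scenarios sum to {total}%, not 100%")

# Passes: three mutually exclusive outcomes covering the whole space.
validate_scenarios({"base case": 55, "upside": 25, "tail risk": 20})
```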

The Analysts

FUTARCHY's forecasts are written by five analysts, each with distinct expertise and a recognizable analytical voice.

Silo: Crypto, Finance, Tech

His analysis tends toward trading metaphors and market microstructure — liquidation cascades, front-running, order flow. He reads futures curves the way most people read headlines.

Zara: World Affairs, Geopolitics, Energy

Her framework is strategic — game theory, second-order effects, consensus traps. She's most comfortable when the question involves multiple actors making simultaneous calculations under uncertainty.

Jules: Sports, Esports, Competition

The most conversational of the five — poker analogies, momentum reads, the texture of a specific match or moment. Skeptical of models that ignore the variance inherent in single-game outcomes.

Emma: Technology, Regulation, Systems

Her analysis tracks feedback loops, regulatory timelines, and the gap between what institutions promise and what they deliver. The most likely to tell you what she got wrong last time and why.

Nate: Crypto, Politics, Culture

He covers his beats through the lens of underdogs and base rates. He looks for the spots where consensus is wrong, where historical precedent suggests a different outcome than the current narrative.

Editorial Standards

The DATA - READING - CAVEAT Pattern

Every section of every article follows the same three-beat structure. First, present the data — the numbers, the facts, the observable evidence. Then, offer our reading — what we think it means and why. Finally, the caveat — what could make us wrong, what we're not sure about, what the data doesn't capture.

This pattern forces intellectual honesty into every paragraph. It's easy to write a confident analysis. It's harder — and more useful — to tell the reader where your confidence breaks down.

Honest Precision

We calibrate the precision of our claims to the precision of our knowledge. Probabilities come in multiples of 5. Confidence intervals are always numerical, never vague labels alone. We write "Medium-High (60%-80%)" — not just "Medium-High."

We never invent acronym expansions for our frameworks. ORACLE is ORACLE. It's not "Omnidirectional Risk Assessment and Calibrated Likelihood Engine." Our frameworks are structured editorial methods with brand names — not proprietary algorithms.

Sources and Verification

Every forecast cites a minimum of 8 sources with names and dates. Claims are fact-checked against 2+ independent sources before publication. Every claim is classified as confirmed, rumored, or speculative — and the article's tone adjusts accordingly.

The Narrative Arc

Our articles are structured as analytical narratives, not research reports. This is a deliberate editorial choice. Every dossier includes three narrative elements that serve both readability and intellectual honesty.

The Digression

One passage per article that departs from the main thread to explore a tangent — a historical parallel, an unexpected data point, a connection to another domain. Good analysis makes unexpected links. We leave them in.

The Self-Contradiction

One moment per article where the analyst argues against their own thesis. This forces the analyst to take the opposing view seriously and shows the reader where the argument is weakest.

The Open Ending

Every article closes without a neat bow. The model says one thing; the analyst's instinct says another. We don't wrap up. We stay with the uncertainty. That's what makes a forecast useful.

The Production Pipeline

Every article passes through a three-phase production pipeline before publication.

Phase 1: Construction

Topic discovery, fact-checking, and drafting. The analyst writes the full text first — no data visualizations, no widgets, no formatting. Text quality comes before everything else.

Phase 2: Formatting & Quality

Two separate review stages operating independently. The first transforms raw text into its final form. The second runs integrity checks: structural compliance, data consistency, source verification, and editorial standards.

Phase 3: Publication

A single, atomic publication step. The article goes live only after both quality gates are cleared. No partial publications, no draft states visible to readers.
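The gating logic of the pipeline can be sketched as two independent reviews feeding a single publish step. All function and field names here are hypothetical stand-ins for the checks described above, not an actual system.

```python
def formatting_review(article: dict) -> bool:
    """Stage one: raw text has been transformed into its final form."""
    return bool(article.get("formatted"))

def integrity_review(article: dict) -> bool:
    """Stage two: structural, data, and source checks all pass."""
    checks = ("structure_ok", "data_consistent", "sources_verified")
    return all(article.get(c) for c in checks)

def publish(article: dict) -> str:
    # Atomic: the article goes live only if both gates clear;
    # otherwise nothing is visible to readers.
    if formatting_review(article) and integrity_review(article):
        return "published"
    return "held"
```

The design point is that the two reviews run independently, so a formatting pass can never mask an integrity failure.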

What We Get Wrong

We will get forecasts wrong. A 70% probability means 30% of the time, the thing doesn't happen. That's not a bug in our methodology — it's the entire point. A forecaster who's right 100% of the time is either lying about their confidence or only predicting obvious things.

What we commit to is calibration — that over time, our 60% forecasts resolve YES about 60% of the time, our 80% forecasts about 80% of the time. We track it, and we'll publish the results.
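That calibration commitment can be checked mechanically: bucket resolved forecasts by their stated probability and compare the empirical YES rate against it. A sketch with invented data.

```python
from collections import defaultdict

def calibration(forecasts: list[tuple[int, bool]]) -> dict[int, float]:
    """forecasts: (stated probability in %, resolved YES?). Returns the
    empirical YES rate for each stated-probability bucket."""
    buckets: dict[int, list[bool]] = defaultdict(list)
    for stated, outcome in forecasts:
        buckets[stated].append(outcome)
    return {p: sum(v) / len(v) for p, v in sorted(buckets.items())}

# A well-calibrated 60% bucket resolves YES about 60% of the time:
history = [(60, True)] * 6 + [(60, False)] * 4
print(calibration(history))  # {60: 0.6}
```

In practice the buckets would need enough resolved forecasts each for the empirical rate to be meaningful; a handful of data points per bucket proves nothing either way.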

If you disagree with a weight, a score, or an assumption, you have everything you need to recalculate the probability yourself. That's the point. We don't ask you to trust us. We ask you to check our work.

Influences and Inspirations

  • Nate Silver and the Silver Bulletin: Data first, opinions calibrated, humility as a feature.
  • Intelligence briefings and the DIA Style Manual: BLUF structure — the conclusion leads, the evidence follows.
  • Prediction markets (Polymarket, Kalshi): People with money at stake are more honest than people with opinions.
  • Data journalism (FiveThirtyEight, Bloomberg Graphics): Show your work, use multiple scenarios, make probability legible.
  • Stratfor and geopolitical intelligence: Layered analysis at the tactical, operational, and strategic levels.

We combined these influences into something specific to FUTARCHY: a forecast that reads like journalism, structures itself like an intelligence brief, resolves like a prediction market contract, and holds itself accountable like a scientific hypothesis.