The AI Horizon: What Forecasters Across Platforms Agree On
Aggregating forecasts from Metaculus, Manifold, and Polymarket reveals surprising consensus on AI capabilities and risks—and instructive disagreements.
When thousands of forecasters across multiple prediction markets converge on similar estimates, that convergence carries real informational weight. On artificial intelligence—perhaps the most consequential technology of our era—we now have enough active markets to identify genuine consensus views.
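One simple way to formalize that convergence is to pool the platforms' estimates by averaging their log-odds, which rewards agreement between confident forecasts more than a plain arithmetic mean does. The sketch below is illustrative only: the probabilities are made up, and the equal weights are an assumption rather than a reflection of any platform's track record.

```python
import math

def pool_probabilities(probs, weights=None):
    """Pool several probability estimates by averaging their log-odds
    (equivalent to a weighted geometric mean of the odds)."""
    if weights is None:
        weights = [1.0] * len(probs)
    total = sum(weights)
    mean_logit = sum(w * math.log(p / (1 - p)) for p, w in zip(probs, weights)) / total
    return 1 / (1 + math.exp(-mean_logit))

# Hypothetical estimates for the same question on three platforms.
platform_estimates = {"Metaculus": 0.52, "Manifold": 0.46, "Polymarket": 0.49}
pooled = pool_probabilities(list(platform_estimates.values()))
print(f"Pooled estimate: {pooled:.1%}")  # roughly 49%
```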
The Parity Question
The most striking number comes from Metaculus, where nearly 3,000 forecasters have weighed in on human-machine intelligence parity. The current probability: 96% by 2040.
This isn't a prediction that AGI will arrive by 2040. It's a more measured claim: that AI systems will match human-level performance across a broad range of cognitive tasks within 14 years. The high probability suggests forecasters view recent progress—GPT-4, Claude 3.5, and their successors—as genuine steps toward this milestone.
Near-Term Capabilities
More immediately, Metaculus forecasters give 46% odds that an AI model will demonstrate a 3-hour "time horizon" capability with 80% reliability during 2026. This measure, developed by the AI evaluation organization METR, tracks the length of tasks, measured in how long they take skilled humans, that AI systems can complete reliably.
The current state-of-the-art hovers around the 30-minute to 1-hour mark. Reaching 3 hours would represent a significant leap—and forecasters are split nearly evenly on whether it happens this year.
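For readers curious how a "time horizon" number is produced, the standard approach is to fit a curve of success rate against task length and read off where it crosses the target reliability. The sketch below is a toy version with invented evaluation results and a plain logistic fit; METR's actual methodology involves far more careful task selection and statistics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical evaluation results: task length (minutes of human time)
# and whether the model completed the task.
task_minutes = np.array([2, 5, 10, 15, 30, 45, 60, 90, 120, 180])
succeeded    = np.array([1, 1,  1,  1,  1,  1,  0,  0,   1,   0])

# Model success probability as a logistic function of log task length.
X = np.log(task_minutes).reshape(-1, 1)
clf = LogisticRegression().fit(X, succeeded)

def time_horizon(reliability):
    """Task length at which the fitted success curve crosses `reliability`:
    solve b0 + b1 * log(t) = logit(reliability) for t."""
    b0, b1 = clf.intercept_[0], clf.coef_[0][0]
    logit_r = np.log(reliability / (1 - reliability))
    return float(np.exp((logit_r - b0) / b1))

print(f"50% horizon: {time_horizon(0.5):.0f} minutes")
print(f"80% horizon: {time_horizon(0.8):.0f} minutes")
```

Because success falls off with task length, the 80% horizon is always shorter than the 50% horizon, which is part of what makes a 3-hour bar at 80% reliability demanding.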
The Turing Test Horizon
On Manifold Markets, where over 900 bettors have participated, the probability of AI passing the Longbets Turing test by 2029 sits at 51%. This specific test, the subject of the long-running Kapor-Kurzweil wager on Long Bets, has strict criteria: a panel of human judges must be fooled in sustained conversation.
The near-coin-flip probability reflects genuine uncertainty. AI systems have arguably passed simpler versions of the Turing test already, but the Longbets version's rigor makes it a meaningful milestone.
Catastrophic Risk Estimates
Where forecasters diverge most sharply is on catastrophic risk. Manifold's most-traded AI risk market prices "AI wipes out humanity by 2030" at approximately 4.3%—low but not negligible. Extend the timeframe to 2100, and the estimate rises to roughly 14%.
These numbers may seem either alarmingly high or dismissively low depending on one's priors. What's notable is that these platforms, where forecasters stake real money, play money, or their own track records on their answers, consistently price AI existential risk above zero.
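A rough way to put these figures on a common scale is to ask what constant annual risk would compound to each cumulative estimate. The calculation below measures horizons from early 2026 and assumes a flat annual rate, both of which are simplifications for illustration rather than anything the markets themselves specify.

```python
def implied_annual_rate(cumulative_prob, years):
    """Constant annual probability that compounds to the given
    cumulative probability over the stated number of years."""
    return 1 - (1 - cumulative_prob) ** (1 / years)

# Cumulative market estimates from above; horizon lengths measured
# from early 2026 are approximate.
print(f"4.3% by 2030 (~5 years):  {implied_annual_rate(0.043, 5):.2%} per year")
print(f"14% by 2100 (~75 years):  {implied_annual_rate(0.14, 75):.2%} per year")
```

Under that simplification, the 2030 market implies a bit under 1% per year while the 2100 market implies around 0.2% per year, hinting that traders concentrate the perceived risk in the nearer term.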
Corporate Milestones
More prosaic questions about AI companies also reveal forecaster sentiment. Metaculus gives 37% odds to OpenAI filing for an IPO during 2026—a significant but not overwhelming probability. The company's path from nonprofit research lab to capped-profit entity to potential public company continues to evolve.
Meanwhile, forecasters are split on further price cuts: a market on OpenAI API token prices falling before March trades at 50%. The era of rapid, predictable cost declines may be slowing.
What Cross-Platform Agreement Tells Us
When Metaculus forecasters, Manifold play-money bettors, and Polymarket traders reach similar conclusions through different mechanisms, that convergence is informative. On AI timelines, we see:
- Strong agreement that human-level AI is likely within the next 15 years, by 2040
- Near-term capability improvements expected but not certain
- Catastrophic risk estimates in the single digits near-term, double digits by century's end
- Corporate developments (IPOs, pricing, leadership) harder to predict than technical progress
These aggregated views aren't predictions—they're probability-weighted expectations from diverse forecasters with different information and analytical frameworks. When they agree, it's worth paying attention.
Analysis informed by aggregated forecaster data from Metaculus, Manifold Markets, and Polymarket as of January 20, 2026.