
The tournament

In a longitudinal forecasting tournament, experts, superforecasters, and everyday people made predictions one year, two years, and twenty years into the future for each of four domains: Public Health, Peace/War, Economy, and Climate Change.

Participants were encouraged to provide probability estimates, justify their predictions, and update their forecasts as new information emerged. They engaged with opponents, received accuracy feedback, and collaborated to shape a comprehensive analysis of debates on human progress.

Our tournament started in Spring 2022, and participants returned to make new predictions about the future in Spring 2023 and Spring 2024.

Timeline for forecasters

Year 1 (Spring 2022), Year 2 (Spring 2023), and Year 3 (Spring 2024): in each wave, participants made near- and far-out predictions for each of the domains.


  1. Team engagement

    In each year, participants had the opportunity to engage with a handful of teammates’ predictions and rationales. Some participants chose to leverage this wisdom of the crowd to update their predictions, whereas others maintained their original predictions and perspectives. This allowed us to test how belief updating in response to team information relates to relative accuracy and improvements in accuracy.


  2. Responsiveness to feedback

    In Years 2 and 3, participants received feedback on how they performed relative to the entire set of participants. Some participants received positive feedback (e.g., “You scored among the top 5% in the tournament”), whereas others received negative feedback (e.g., “You scored among the bottom 5% in the tournament”). This allowed us to test how positive and negative feedback influenced changes in accuracy and openness to teammates' wisdom. (A minimal sketch of this kind of percentile-based feedback appears after this list.)


  3. Results

    Participants made predictions for the year following the latest publicly available datapoint. For two domains (Economy and Public Health), this meant that Year 1 predictions were effectively “retrocasts”: forecasts of periods that had already passed but whose data had not yet been published.

    The figures below show participants’ accuracy in predicting the future of human welfare, depicting the observed trend alongside predictions by the public, experts, and superforecasters. The graphs depict the typical (median) prediction of each group; error bars depict the typical (median) lowest and highest reasonable values participants proposed for a given domain (see the aggregation sketch after this list).


  4. Outcomes of interest

    In addition to scoring participants’ accuracy in predicting the future of human welfare, we also measured and explored how their epistemic virtues (intellectual humility, open-mindedness, uncertainty, and willingness to update beliefs) related to their accuracy.
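
Because the tournament materials do not spell out how feedback messages were assigned, the snippet below is only a minimal sketch of percentile-based feedback of the kind described under “Responsiveness to feedback”. The function name, the 5% cutoffs applied to a percentile rank, and the wording for participants outside the top and bottom bands are illustrative assumptions, not the project’s actual procedure.

```python
def feedback_message(percentile_rank: float) -> str:
    """Map a participant's percentile rank (0-100, relative to all
    tournament participants) to a feedback message.

    The top/bottom 5% messages mirror the examples quoted above; the
    middle-band wording is an assumption for illustration.
    """
    if percentile_rank >= 95:
        return "You scored among the top 5% in the tournament"
    if percentile_rank <= 5:
        return "You scored among the bottom 5% in the tournament"
    return f"You scored better than {percentile_rank:.0f}% of participants"


print(feedback_message(97.0))  # positive feedback example
print(feedback_message(3.0))   # negative feedback example
```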
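As a rough illustration of how the figures aggregate individual forecasts, here is a minimal sketch in Python/pandas: median point predictions per group, with the medians of participants’ lowest and highest reasonable values as the error-bar endpoints. The data layout, column names, and function are assumptions for illustration, not the project’s analysis code.

```python
import pandas as pd


def summarize_group_forecasts(df: pd.DataFrame) -> pd.DataFrame:
    """Aggregate individual forecasts into the group-level summaries shown
    in the figures: the median prediction per group and domain, plus the
    medians of the lowest and highest reasonable values participants
    proposed (plotted as error bars).

    Assumes one row per participant and domain, with columns:
    'group', 'domain', 'prediction', 'lowest_reasonable', 'highest_reasonable'.
    """
    return (
        df.groupby(["domain", "group"])
          .agg(
              median_prediction=("prediction", "median"),
              error_bar_low=("lowest_reasonable", "median"),
              error_bar_high=("highest_reasonable", "median"),
          )
          .reset_index()
    )


# Made-up numbers, purely to show the shape of the output.
forecasts = pd.DataFrame({
    "group": ["public", "public", "expert", "expert"],
    "domain": ["Economy"] * 4,
    "prediction": [2.1, 2.5, 1.8, 1.9],
    "lowest_reasonable": [1.0, 1.5, 1.2, 1.1],
    "highest_reasonable": [3.0, 3.5, 2.6, 2.4],
})
print(summarize_group_forecasts(forecasts))
```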


Your chance to predict

Curious what you would have predicted if you had been part of the tournament? Here’s your chance to test your forecasting abilities through our Time Machine game. Learn more about the four domains and how they change over time.

Play the game
