Publication | Open Access
Wisdom of the silicon crowd: LLM ensemble prediction capabilities rival human crowd accuracy
Citations: 22 | References: 27 | Year: 2024
Keywords: Artificial Intelligence, Human Crowd, Engineering, Machine Learning, Intelligent Systems, Natural Language Processing, Large Language Models, Computational Social Science, Data Science, Computational Linguistics, Language Studies, Human Computation, Statistics, Multiple Classifier System, Machine Translation, Large AI Model, Human Crowd Accuracy, Silicon Crowd, Predictive Analytics, Knowledge Discovery, Computer Science, Forecasting Accuracy, Crowdsourcing, Deep Learning, Crowd Computing, Human Crowd Aggregates, Ensemble Algorithm
Human forecasting accuracy improves through the "wisdom of the crowd" effect, in which aggregated predictions tend to outperform individual ones. Past research suggests that individual large language models (LLMs) tend to underperform compared with human crowd aggregates. We simulate a wisdom of the crowd effect with LLMs. Specifically, we use an ensemble of 12 LLMs to make probabilistic predictions about 31 binary questions, comparing them with the predictions of 925 human forecasters in a three-month tournament. We show that the LLM crowd outperforms a no-information benchmark and is statistically indistinguishable from the human crowd. We also observe human-like biases, such as acquiescence bias. In a second study, we find that LLM predictions (from GPT-4 and Claude 2) improve when the models are exposed to the median human prediction, increasing accuracy by 17% to 28%. However, simply averaging human and machine forecasts yields more accurate results. Our findings suggest that, through simple aggregation, LLM predictions can rival the human crowd's forecasting accuracy.
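The aggregation steps the abstract describes (a median over crowd forecasts, a human-machine average, and scoring against resolved outcomes) can be sketched as below. This is a minimal illustration with made-up numbers, not the paper's data or code; the forecast values, the single-question setup, and the use of the Brier score as the accuracy metric are all assumptions for the example.

```python
# Hedged sketch of crowd-forecast aggregation: take the median of several
# probabilistic forecasts ("silicon crowd"), average it with a human crowd
# forecast, and score each against the resolved outcome with the Brier score.
from statistics import mean, median

def brier(p: float, outcome: int) -> float:
    """Brier score for one binary forecast: (p - outcome)^2; lower is better."""
    return (p - outcome) ** 2

# Hypothetical "yes" probabilities from 12 models for one binary question.
llm_forecasts = [0.62, 0.55, 0.70, 0.48, 0.66, 0.59,
                 0.61, 0.73, 0.52, 0.65, 0.58, 0.64]
human_median = 0.57   # assumed median human forecast for the same question
outcome = 1           # the question resolved "yes"

llm_crowd = median(llm_forecasts)         # LLM ensemble aggregate
hybrid = mean([llm_crowd, human_median])  # simple human-machine average

print(f"LLM crowd:    p={llm_crowd:.3f}  Brier={brier(llm_crowd, outcome):.3f}")
print(f"Human-machine p={hybrid:.3f}  Brier={brier(hybrid, outcome):.3f}")
```

In practice these aggregates would be computed per question and the Brier scores averaged across all questions before comparing crowds.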