Expert Beliefs About the Likelihood and Strength of Artificial General Intelligence

Abstract
This paper synthesizes survey evidence on expert opinion within the field of computer science regarding the likelihood of artificial general intelligence (AGI). Using probability-weighted interpretations of survey responses, we categorize experts into six belief clusters and collapse these into four outcome bins: no AGI, subhuman/narrow AI, human-level AGI, and superhuman AGI. The analysis indicates that while approximately one-third of researchers consider human-level AGI plausible within this century, the majority of probability weight rests on scenarios with no AGI or only subhuman systems. Belief in a “sentient” form of superhuman AGI remains a small minority position within surveys, and verges on fringe status in mainstream academic computer science, though it is more visible in entrepreneurial and philosophical circles.

Introduction
Artificial general intelligence (AGI) has long been debated in both technical and philosophical contexts. However, the distribution of expert opinion among computer scientists and AI researchers remains heterogeneous, ranging from strong skepticism to confident belief in near-term breakthroughs. Recent large-scale surveys (Grace et al., 2017; AI Impacts, 2022) provide quantitative data that can be aggregated into structured belief distributions. This paper organizes those beliefs into categories that correspond to degrees of AGI strength and provides a probability-weighted synthesis.

Data Sources
Two major surveys serve as primary sources:
• Grace et al. (2017) conducted a survey of 352 machine learning researchers, asking for probabilistic forecasts of “high-level machine intelligence” (HLMI) by specific dates.
• AI Impacts (2022) updated these findings with a larger expert pool, again eliciting probability estimates for HLMI emergence by mid- and late-21st century horizons.
Both surveys revealed significant variance, with some respondents assigning near-zero probability to AGI, and others assigning near-certainty within decades.

Clustering of Beliefs and Sociological Profiles
1 Hard Skeptics (~10–15%)
◦ Probability ≤1% for AGI this century.
◦ Tend to be theoretical computer scientists, symbolic AI veterans, or safety-critical engineers skeptical of deep learning as a general path.
2 On-Balance Skeptics (~20–25%)
◦ Probability ~5–20%.
◦ Often older academics or applied researchers who see progress as impressive but limited, emphasizing bottlenecks in reasoning, data, or energy.
3 Agnostics (~20–25%)
◦ Probability ~40–60%.
◦ Typically mainstream ML researchers with no strong prior; they acknowledge uncertainty and emphasize empirical unpredictability.
4 Mild Hopefuls (~15–20%)
◦ Probability ~60–75%.
◦ Often applied deep learning specialists, younger academics, or practitioners in NLP/CV, who see scaling as promising but not guaranteed.
5 Believers (~15%)
◦ Probability ~80–95%.
◦ Frequently industry researchers and lab scientists (DeepMind, OpenAI, Anthropic) who are optimistic about scaling trends, reinforcement learning, and multimodal integration.
6 Hardcore Believers (~5–10%)
◦ Probability ≥99%.
◦ Commonly tech entrepreneurs, effective altruism advocates, and high-profile optimists outside mainstream academia, who regard AGI as inevitable and imminent.

Aggregated Probability Distribution
We collapse these clusters into four outcome bins corresponding to degrees of AGI strength:
• None (no AGI this century): 35–40%
• Subhuman / narrow only: 15–20%
• Human-level AGI: 30–35%
• Superhuman AGI (“sentient” systems): 5–10%
When weighted by internal probabilities, the aggregate expectation centers around a ~50% chance of some form of AGI this century. However, the modal expert opinion is either skepticism (no AGI) or moderate belief in human-level systems.
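The collapse from belief clusters to an aggregate expectation can be reproduced with a short calculation. In this sketch, the cluster shares and probability midpoints are illustrative assumptions taken from the ranges reported above, not survey microdata; different midpoint choices shift the result by a few percentage points but keep it near one-half.

```python
# Probability-weighted aggregation of the six belief clusters described above.
# Shares and midpoints are assumed midpoints of the reported ranges, not raw data.

clusters = {
    # name: (share of respondents, midpoint of P(AGI this century))
    "hard_skeptics":       (0.125, 0.005),  # ~10-15% of experts, P <= 1%
    "on_balance_skeptics": (0.225, 0.125),  # ~20-25%, P ~5-20%
    "agnostics":           (0.225, 0.500),  # ~20-25%, P ~40-60%
    "mild_hopefuls":       (0.175, 0.675),  # ~15-20%, P ~60-75%
    "believers":           (0.150, 0.875),  # ~15%,    P ~80-95%
    "hardcore_believers":  (0.075, 0.990),  # ~5-10%,  P >= 99%
}

# Range midpoints need not sum to exactly 1, so normalize by total share.
total_share = sum(share for share, _ in clusters.values())
weighted_p = sum(share * p for share, p in clusters.values()) / total_share

print(f"Aggregate P(AGI this century) ~ {weighted_p:.2f}")
```

With these assumed midpoints the weighted expectation comes out just under 0.5, consistent with the "~50% chance of some form of AGI this century" stated in the text.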

Discussion
General Interpretation
These findings suggest that the computer science community remains divided, with approximately half of experts assigning low likelihoods to AGI, and the remainder split between moderate belief in human-level capabilities and a smaller group of strong optimists. Importantly, belief in superhuman sentient AGI is not representative of the mainstream consensus. Within academia, it verges on a fringe view, though it is more visible in industry labs, entrepreneurial contexts, and philosophical communities (e.g., effective altruism). This indicates that while AGI is a live research question, the more extreme “runaway” scenarios popular in public discourse are not dominant in expert forecasts.
Survey-Backed Sociological Trends
Survey evidence also reveals systematic differences across researcher profiles:
• Academics vs. Industry: Grace et al. (2017) reported that industry researchers tend to predict earlier AGI timelines and higher probabilities than academic researchers. Academics are generally more skeptical, particularly in university settings where incentives favor cautious long-term theorizing.
• Theorists vs. Practitioners: The more abstract a researcher’s orientation (e.g., theoretical computer science, symbolic reasoning, formal verification), the more skeptical they tend to be. In contrast, practitioners in applied ML (natural language processing, computer vision, reinforcement learning) are more optimistic, as they directly observe scaling trends.
• Fundraising and Advocacy: In entrepreneurial and philanthropic contexts — especially venture-backed startups and Effective Altruism organizations — beliefs skew far more bullish, often approaching certainty. These views, while not representative of academic consensus, shape public and media discourse disproportionately.

Views on Promising Products

Academic Viewpoint
• Even skeptical academics generally agree that current AI research yields useful incremental products: better NLP, vision, robotics, optimization tools, scientific modeling.
• The skepticism is not about usefulness, but about the leap from narrow AI to general intelligence.
• So even “hard skeptics” often frame their work as valuable for automation, decision support, and applied science, even if they doubt AGI.

Practitioner and Industry Viewpoint
• Practitioners in applied ML overwhelmingly see the field as a continuous product pipeline:
◦ Enterprise applications (chatbots, copilots, recommender systems).
◦ Infrastructure tools (vector databases, deployment frameworks).
◦ Domain solutions (healthcare diagnostics, protein folding, finance risk modeling).
• Surveys show near-unanimous agreement that useful products will keep coming, regardless of whether AGI is reached.

Optimist / Fundraising Viewpoint
• Optimists and hardcore believers often blur the line between useful products now and AGI promise later.
• Fundraising narratives highlight transformational applications — digital employees, autonomous R&D labs, global-scale optimization — as a bridge to justify large capital inflows.
• This group tends to emphasize “inevitability of AGI,” but in practice they raise money on the strength of useful intermediate products (e.g., copilots, agent frameworks).

Aggregate Survey Result
• Across the board, experts see AI as already commercially useful and expect steady improvement.
• The division concerns ultimate scope: whether the field plateaus at useful-but-narrow systems or scales into AGI.
• To put it bluntly: “Even if AGI never arrives, the field will generate valuable products.”

Conclusion
Survey evidence indicates that most computer scientists and AI researchers do not view “sentient” superhuman AGI as probable within this century. Instead, the weighted community consensus converges on an approximate 50% expectation of AGI by 2100, with the bulk of probability mass distributed between no-AGI and human-level AGI scenarios. The mean and median do not tell the entire story, however: the distribution has two poles, skeptics and optimists. Skeptics tend to be academic; optimists tend to be industry-focused. One side sees the hype cycle and theoretical limits; the other sees the opportunity for further breakthroughs if large amounts of resources are applied. Skeptics describe the current state precisely; optimists describe the possibilities of breakthroughs enthusiastically. Importantly, the surveys reveal a sociological divide: the more theoretical and academic a researcher’s orientation, the more skeptical they are; the more applied, industry-focused, or fundraising-oriented, the more optimistic they tend to be. Thus, while AGI remains an open and serious research possibility, superhuman AGI should be considered, at best, a minority-fringe expectation within professional computer science. Even if AGI never arrives, the consensus holds, the field will generate valuable products.

References
• Grace, K., Salvatier, J., Dafoe, A., Zhang, B., & Evans, O. (2017). When Will AI Exceed Human Performance? Evidence from AI Experts. Journal of Artificial Intelligence Research, 62, 729–754.
• AI Impacts. (2022). 2022 Expert Survey on Progress in AI. Retrieved from https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/