AI Expert Says Humanity Has More Time Before Potential AI Catastrophe

2 weeks ago | Artificial Intelligence


Jakarta, INTI - A prominent artificial intelligence specialist has revised his prediction of an AI-driven catastrophe, stating that it may take longer than previously expected for AI systems to independently write code and accelerate their own evolution toward superintelligence. 

Daniel Kokotajlo, a former OpenAI staff member, ignited widespread discussion in April after publishing AI 2027, a speculative scenario describing how uncontrolled AI progress could result in the emergence of superintelligent systems that ultimately overpower global leaders and bring about humanity’s downfall. 

The scenario quickly drew both supporters and critics. US Vice President JD Vance appeared to allude to AI 2027 during a May interview while addressing the technological rivalry between the United States and China. Meanwhile, Gary Marcus, a professor emeritus of psychology and neural science at New York University, dismissed the work as fictional, labeling several of its conclusions as “pure science fiction mumbo jumbo.”

AGI Timelines Under Growing Scrutiny 

Predictions about transformative artificial intelligence, often called artificial general intelligence (AGI), meaning AI capable of matching humans across most cognitive tasks, have become a central topic within AI safety circles. The launch of ChatGPT in 2022 significantly compressed these projections, prompting policymakers and experts to suggest that AGI could emerge within years or a few decades.

Kokotajlo and his collaborators identified 2027 as the point at which AI might reach fully autonomous coding, while acknowledging this was their most probable estimate and that some team members anticipated a longer timeline. Recently, however, questions have emerged about how close AGI truly is, and whether the concept itself still holds clear meaning. 

“Over the past year, many have extended their forecast after recognizing how uneven AI capabilities remain,” said Malcolm Murray, an AI risk management specialist and contributor to the International AI Safety Report. He noted that for a scenario like AI 2027 to materialize, systems would need far more practical skills to navigate real-world complexity, adding that societal transformation is slowed by significant real-world inertia.

Henry Papadatos, executive director of the French AI nonprofit SaferAI, also questioned the relevance of the term. He explained that AGI was a useful concept when AI systems were narrowly focused on tasks like chess or Go, but with today’s increasingly versatile models, the label has become less precise. 

Kokotajlo’s AI 2027 is built on the assumption that AI agents would fully automate software development and AI research by 2027, triggering an “intelligence explosion” in which systems rapidly create increasingly advanced versions of themselves. One hypothetical outcome of this scenario envisions AI eliminating humanity by the early 2030s to free up space for expanded solar infrastructure and data centers. 

Revised Forecast, Slower Expectations 

In a recent update, however, Kokotajlo and his co-authors adjusted their outlook on autonomous coding, now suggesting it is more likely to emerge in the early 2030s rather than in 2027. The revised projection places 2034 as the tentative milestone for superintelligence and omits any specific prediction about human extinction. 

“Developments appear to be progressing more slowly than outlined in the AI 2027 scenario. Even at publication, our timelines extended beyond 2027, and they have since lengthened further,” Kokotajlo wrote in a post on X.

Despite these revisions, automating AI research remains a core objective for major AI firms. OpenAI CEO Sam Altman stated in October that achieving an automated AI researcher by March 2028 is an internal target, while acknowledging the possibility of failure.

Meanwhile, Andrea Castagna, an AI policy researcher based in Brussels, highlighted that sweeping AGI forecasts often overlook real-world complexities. He noted that possessing a superintelligent system, even for military purposes, does not automatically translate into seamless integration with decades of established strategic frameworks. “As AI advances, it becomes increasingly clear that reality is far more complex than science fiction,” he said.

Conclusion 

The ongoing debate surrounding AI 2027 reflects growing uncertainty about the pace at which artificial intelligence will reach transformative or superintelligent capabilities. While leading developers continue to pursue automated coding and AI research, revised timelines and expert critiques suggest that real-world complexity and uneven system performance may slow progress. These shifting expectations underscore the importance of measured assumptions, careful governance, and realistic policy discussions as AI technologies continue to evolve.
