The distant horizon is always hazy, its fine details obscured by distance and atmospheric mist. This is why forecasting the future is so imprecise: we cannot clearly see the contours of the forms and events ahead of us. Instead, we make educated guesses.
The newly published AI 2027 scenario, developed by a team of AI researchers and forecasters with experience at institutions like OpenAI and the Center for AI Policy, offers a detailed two-to-three-year forecast of the future that includes specific technical milestones. Because it is near-term, it speaks with great clarity about our AI future.
Informed by extensive expert feedback and scenario-planning exercises, AI 2027 outlines a quarter-by-quarter progression of anticipated AI capabilities, notably multimodal models achieving advanced reasoning and autonomy. What makes this forecast particularly noteworthy is both its specificity and the credibility of its contributors, who have direct insight into current research pipelines.
The most notable prediction is that artificial general intelligence (AGI) will be achieved in 2027, and artificial superintelligence (ASI) will follow months later. AGI matches or exceeds human capabilities across virtually all cognitive tasks, from scientific research to creative endeavors, while demonstrating adaptability, common-sense reasoning and self-improvement. ASI goes further, representing systems that dramatically surpass human intelligence, with the ability to solve problems we cannot even comprehend.
Like many predictions, these are based on assumptions, not least that AI models and applications will continue to progress exponentially, as they have in recent years. As such, exponential progress is plausible to expect but far from guaranteed, especially since scaling these models may now be hitting diminishing returns.
Not everyone agrees with these predictions. Ali Farhadi, CEO of the Allen Institute for Artificial Intelligence, told The New York Times: "I'm all for projections and forecasts, but this [AI 2027] forecast doesn't seem to be grounded in scientific evidence, or the reality of how things are evolving in AI."
However, there are others who consider this trajectory plausible. Anthropic co-founder Jack Clark wrote in his Import AI newsletter that AI 2027 is "the best treatment yet of what 'living in an exponential' might look like," adding that it is a "technically astute narrative of the next few years of AI development." This timeline also aligns with one proposed by Anthropic CEO Dario Amodei, who has said that AI able to surpass humans at almost everything will arrive in the next two to three years. And Google DeepMind argued in a new research paper that AGI could plausibly arrive by 2030.
The great acceleration: unprecedented disruption
It feels like a pivotal moment. There have been comparable moments in history, notably the invention of the printing press and the spread of electricity. However, those advances took years and decades to have a significant impact.
The arrival of AGI feels different, and potentially frightening, especially if it is imminent. AI 2027 describes one scenario in which, due to misalignment with human values, superintelligent AI destroys humanity. If they are right, the most consequential risk to humanity may now be within the same planning horizon as your next smartphone upgrade. For its part, the Google DeepMind paper notes that human extinction is a possible outcome of AGI, though unlikely in their view.
Opinions change slowly until people are presented with overwhelming evidence. This is one takeaway from Thomas Kuhn's singular work "The Structure of Scientific Revolutions." Kuhn reminds us that worldviews do not shift overnight, until, suddenly, they do.
The future draws near
Before the appearance of large language models (LLMs) and ChatGPT, the median timeline projection for AGI was much longer than it is today. The consensus among experts and prediction markets placed the median expected arrival of AGI around 2058. Before 2023, Geoffrey Hinton, one of the "godfathers of AI" and a Turing Award winner, thought AGI was "30 to 50 years or even more" away. However, the progress shown by LLMs led him to change his mind and say it could arrive as soon as 2028.
There are many implications for humanity if AGI arrives in the coming years and is quickly followed by ASI. Writing in Fortune, Jeremy Kahn said that if AGI arrives in the next few years "it could indeed lead to significant job losses, as many organizations would be tempted to automate roles."
A two-year AGI runway offers an insufficient grace period for individuals and businesses to adapt. Industries such as customer service, content creation, programming and data analysis could face dramatic upheaval before retraining infrastructure can scale. This pressure will only intensify if a recession occurs within this window, when companies are already seeking to cut payroll costs and often replace staff with automation.
Cogito, ergo… AI?
Even if AGI does not lead to extensive job losses or species extinction, there are other serious ramifications. Ever since the Age of Reason, human existence has rested on the conviction that we matter because we think.
This belief that thinking defines our existence has deep philosophical roots. It was René Descartes, writing in 1637, who articulated the now-famous phrase "Je pense, donc je suis" ("I think, therefore I am"). He later translated it into Latin: "Cogito, ergo sum." In doing so, he proposed that certainty could be found in the act of individual thought. Even if he were deceived by his senses, or misled by others, the very fact that he was thinking proved that he existed.
In this view, the self is anchored in cognition. It was a revolutionary idea at the time, and it gave rise to Enlightenment humanism, the scientific method and, ultimately, modern democracy and individual rights. Humans as thinkers became the central figures of the modern world.
This raises a profound question: if machines can now think, or appear to think, and we outsource our thinking to AI, what does that mean for our modern self-conception? A recent study reported by 404 Media explores this conundrum. It found that when people rely heavily on generative AI for work, they engage in less critical thinking which, over time, can "result in the deterioration of cognitive faculties that ought to be preserved."
Where do we go from here?
If AGI arrives in the coming years, or shortly thereafter, we must quickly grapple with its implications not only for jobs and safety, but for who we are. And we must do so while also acknowledging its extraordinary potential to accelerate discovery, reduce suffering and extend human capability in unprecedented ways. For example, Amodei has said that "powerful AI" will enable 100 years of biological research, and its benefits, including improved healthcare, to be compressed into 5 to 10 years.
The forecasts presented in AI 2027 may or may not prove correct, but they are plausible and provocative. And that plausibility should be enough. As humans with agency, and as members of companies, governments and societies, we must act now to prepare for what may be coming.
For businesses, this means investing in both technical AI safety research and organizational resilience, creating roles that integrate AI capabilities while amplifying human strengths. For governments, it requires accelerated development of regulatory frameworks that address both immediate concerns, such as model evaluation, and longer-term existential risks. For individuals, it means embracing continuous learning focused on uniquely human skills, including creativity, emotional intelligence and complex judgment, while developing healthy working relationships with AI tools that do not diminish our agency.
The time for abstract debate about the distant future has passed; concrete preparation for near-term transformation is urgently needed. Our future will not be written by algorithms alone. It will be shaped by the choices we make and the values we defend, starting today.
Gary Grossman is EVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.