AI has evolved at an amazing pace. What seemed like science fiction just a few years ago is now an undeniable reality. Back in 2017, my company launched an AI center of excellence. AI was certainly getting better at predictive analytics, and many machine learning (ML) algorithms were being used for voice recognition, spam detection, spell checking (and other applications) – but it was early. We believed then that we were only in the first inning of the AI game.
The arrival of GPT-3, and especially GPT-3.5 – which was tuned for conversational use and served as the basis for the first ChatGPT in November 2022 – was a dramatic turning point, now forever remembered as the “ChatGPT moment.”
Since then, there has been an explosion of AI capabilities from hundreds of companies. In March 2023, OpenAI released GPT-4, which promised “sparks of AGI” (artificial general intelligence). By that point, it was clear that we were well beyond the first inning. Now it feels like we are in the final stretch of an entirely different sport.
The Flame of AGI
Two years later, the flame of AGI is beginning to appear.
On a recent episode of the Hard Fork podcast, Dario Amodei – who has been in the AI industry for a decade, formerly as VP of research at OpenAI and now as CEO of Anthropic – said there is a 70 to 80% chance that we will have a “very large number of AI systems that are much smarter than humans at almost everything before the end of the decade, and my guess is 2026 or 2027.”
Evidence for this prediction is becoming clearer. Late last summer, OpenAI launched o1 – the first “reasoning model.” It has since released o3, and other companies have rolled out their own reasoning models, including Google and, famously, DeepSeek. Reasoners use chain-of-thought (CoT), breaking down complex tasks at run time into multiple logical steps, much as a human might approach a complicated task. Sophisticated AI agents, including OpenAI’s Deep Research and Google’s AI co-scientist, have recently appeared, portending enormous changes in how research will be performed.
Unlike earlier large language models (LLMs) that primarily pattern-matched against their training data, reasoning models represent a fundamental shift from statistical prediction to structured problem-solving. This allows AI to tackle novel problems beyond its training, enabling genuine reasoning rather than advanced pattern recognition.
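To make the CoT idea concrete, here is a minimal sketch of chain-of-thought prompting using the OpenAI Python SDK. Reasoning models like o1 and o3 perform this step-by-step decomposition internally at inference time; the sketch below only approximates the idea by explicitly instructing an ordinary chat model to reason in numbered steps. The model name, prompts and sample question are illustrative assumptions, not details from any specific product.

```python
# Minimal chain-of-thought (CoT) prompting sketch: instead of asking for an
# answer directly, we instruct the model to decompose the task into explicit
# logical steps before answering. Reasoning models (o1/o3-style) do this
# internally at inference time; this merely simulates the idea with a plain
# chat model. Model name and prompt wording are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A train leaves at 2:40 pm and arrives at 5:05 pm. How long is the trip?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model works for the sketch
    messages=[
        {
            "role": "system",
            "content": (
                "Break the problem into numbered steps, reason through each "
                "step, then state the final answer on its own line."
            ),
        },
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Compared with asking for the answer directly, the step-by-step instruction spends extra tokens on intermediate reasoning, which is essentially the trade-off that reasoning models make automatically at run time.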
I recently used Deep Research for a project and was reminded of the Arthur C. Clarke quote: “Any sufficiently advanced technology is indistinguishable from magic.” In five minutes, this AI produced what would have taken me 3 to 4 days. Was it perfect? No. Was it close? Yes, very. These agents are quickly becoming genuinely magical and transformative, and they are among the first of many similarly powerful agents that will soon come to market.
The most common definition of AGI is a system capable of doing almost any cognitive task a human can do. These early agents of change suggest that Amodei and others who believe we are close to that level of AI sophistication could be correct, and that AGI will soon be here. This reality will bring a great deal of change, requiring people and processes to adapt in short order.
But is it really AGI?
There are various scenarios that could emerge from the near-term arrival of powerful AI. It is challenging and frightening that we do not really know how this will play out. New York Times columnist Ezra Klein addressed this in a recent podcast: “We are rushing toward AGI without really understanding what that means.” He argues that there is little critical thinking or contingency planning happening around the implications – for example, what it would really mean for employment.
Of course, there is another perspective on this uncertain future and the lack of planning, as exemplified by Gary Marcus, who believes deep learning generally (and LLMs specifically) will not lead to AGI. Marcus issued what amounts to a takedown of Klein’s position, citing notable shortcomings in current AI technology and suggesting it is just as likely that we are a long way from AGI.
Marcus may be right, but this could also simply be an academic dispute over semantics. As an alternative to the term AGI, Amodei simply refers to “powerful AI” in his Machines of Loving Grace blog, as it conveys a similar idea without the imprecise definition, “sci-fi baggage and hype.” Call it what you will, but AI is only going to grow more powerful.
Playing with fire: The possible AI futures
In a 60 Minutes interview, Alphabet CEO Sundar Pichai said he thinks of AI as “the most profound technology humanity is working on. More profound than fire, electricity or anything that we have done in the past.” That certainly fits with the growing intensity of AI discussions. Fire, like AI, was a world-changing discovery that fueled progress but demanded control to prevent disaster. The same delicate balance applies to AI today.
A discovery of immense power, fire transformed civilization by enabling warmth, cooking, metallurgy and industry. But it also brought destruction when it raged uncontrolled. Whether AI becomes our greatest ally or our undoing will depend on how well we manage its flames. To extend this metaphor, here are various scenarios that could soon emerge from an even more powerful AI:
- The controlled flame (utopia): In this scenario, AI is harnessed as a force for human prosperity. Productivity skyrockets, new materials are discovered, personalized medicine becomes available to everyone, goods and services become abundant and inexpensive, and individuals are freed from drudgery to pursue more meaningful work and activities. This is the scenario championed by many accelerationists, in which AI brings progress without engulfing us in too much chaos.
- The unstable fire (challenging): Here, AI brings undeniable benefits – revolutionizing research, automation, new capabilities, products and problem-solving. Yet these benefits are unevenly distributed – while some thrive, others face displacement, widening economic divides and strained social systems. Misinformation and security risks mount. In this scenario, society struggles to balance promise and peril. One could argue that this description is close to present-day reality.
- Wildfire (dystopia): The third path is one of disaster, the possibility most strongly associated with so-called “doomers” and “probability of doom” assessments. Whether through unintended consequences, reckless deployment or AI systems running beyond human oversight, AI actions go unchecked and accidents happen. Trust in truth erodes. In the worst-case scenario, AI spirals out of control, threatening lives, industries and entire institutions.
Although each of these scenarios seems plausible, it is discomforting that we really do not know which is most likely, especially since the timeline could be short. We can see early signs of each: AI-driven automation boosting productivity, misinformation spreading at scale and eroding trust, and concerns about deceptive models that resist their guardrails. Each scenario would prompt its own adaptations by individuals, businesses, governments and society.
Our lack of clarity on the trajectory of AI’s impact suggests that some mix of all three futures is inevitable. The rise of AI will lead to a paradox, fueling prosperity while bringing unintended consequences. Amazing breakthroughs will occur, as will accidents. Some new fields will appear with attractive possibilities and job prospects, while other stalwarts of the economy will fade into bankruptcy.
We may not have all the answers, but the future of powerful AI and its impact on humanity is being written now. What we saw at the recent Paris AI Action Summit was a mindset of hoping for the best, which is not a smart strategy. Governments, businesses and individuals must shape AI’s trajectory before it shapes us. The future of AI will not be determined by technology alone, but by the collective choices we make about how to deploy it.
Gary Grossman is the executive vice president of technology practice at Edelman.