An analysis by Epoch AI, a nonprofit AI research institute, suggests the AI industry may not be able to keep extracting massive performance gains from reasoning AI models for much longer. As soon as within a year, progress from reasoning models could slow down, according to the report's findings.
Reasoning models such as OpenAI's o3 have led to substantial gains on AI benchmarks in recent months, particularly benchmarks measuring math and programming skills. The models can apply more computing to problems, which can improve their performance, with the downside that they take longer than conventional models to complete tasks.
Reasoning models are developed by first training a conventional model on a massive amount of data, then applying a technique called reinforcement learning, which effectively gives the model "feedback" on its solutions to difficult problems.
So far, frontier AI labs like OpenAI haven't applied an enormous amount of computing power to the reinforcement learning stage of reasoning model training, according to Epoch.
That's changing. OpenAI has said that it applied roughly 10x more computing to train o3 than its predecessor, o1, and Epoch speculates that most of this computing was devoted to reinforcement learning. OpenAI researcher Dan Roberts also recently revealed that the company's future plans call for prioritizing reinforcement learning with far more computing power, even more than for a model's initial training.
But there's still an upper limit to how much computing can be applied to reinforcement learning, per Epoch.
Josh You, an analyst at Epoch and the author of the analysis, explains that performance gains from standard AI model training are currently quadrupling every year, while performance gains from reinforcement learning are growing tenfold every 3 to 5 months. Progress from reasoning training will "probably converge with the overall frontier by 2026," he continues.
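To see why those two growth rates imply convergence on roughly that timescale, here is a hedged back-of-envelope sketch. The growth figures (4x per year overall, 10x every few months for reinforcement learning) come from the analysis; the starting gap between reinforcement learning compute and frontier-scale training compute is a hypothetical assumption chosen for illustration, not a figure from Epoch.

```python
import math

# Figures from the analysis: overall frontier training compute grows ~4x
# per year, while reinforcement-learning compute grows ~10x every 4 months
# (the midpoint of the 3-5 month range), i.e. 10**3 = 1000x per year.
overall_growth_per_year = 4.0
rl_growth_per_year = 10.0 ** (12 / 4)  # 1000x per year

# Hypothetical assumption: RL compute currently sits about three orders
# of magnitude below frontier-scale training compute.
initial_gap = 1e3

# Each year, RL compute gains this factor relative to the frontier:
relative_gain = rl_growth_per_year / overall_growth_per_year  # 250x per year

# Years until RL compute catches up with frontier-scale training compute:
years_to_converge = math.log(initial_gap) / math.log(relative_gain)
print(f"{years_to_converge:.2f} years")  # ≈ 1.25 years
```

Even with a thousandfold starting deficit, a 250x relative annual gain closes the gap in a little over a year, which is consistent with the convergence You projects; after that point, reinforcement learning compute can only grow as fast as frontier compute overall.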
Epoch's analysis makes a number of assumptions and partly draws on public comments from AI company executives. But it also argues that scaling reasoning models may prove challenging for reasons besides computing, including high overhead costs for research.
"If there's a persistent overhead cost required for research, reasoning models might not scale as far as expected," writes You. "Rapid compute scaling is potentially a very important ingredient in reasoning model progress, so it's worth tracking this closely."
Any indication that reasoning models may reach some kind of limit in the near future is likely to worry the AI industry, which has invested enormous resources in developing these types of models. Already, studies have shown that reasoning models, which can be incredibly expensive to run, have serious flaws, such as a tendency to hallucinate more than certain conventional models.