OpenAI is rolling out an important set of updates to its new Responses API, aimed at making it easier for developers and enterprises to build intelligent, action-oriented agentic applications.

These improvements include support for remote Model Context Protocol (MCP) servers, integration of image generation and Code Interpreter tools, and upgraded file search capabilities, all available today, May 21.
First launched in March 2025, the Responses API serves as OpenAI's toolbox for third-party developers to build agentic applications on top of some of the core functionality of its hit ChatGPT service and its first-party AI agents Deep Research and Operator.

In the months since its debut, it has processed billions of tokens and supported a wide range of use cases, from market research and education to software development and financial analysis.

Popular applications built with the API include Zencoder's coding agent, Revi's market intelligence assistant, and Magicschool's educational platform.
The foundation and purpose of the Responses API
The Responses API debuted alongside OpenAI's open-source Agents SDK in March 2025, as part of an initiative to give third-party developers access to the same technologies powering OpenAI's own AI agents, such as Deep Research and Operator.

In this way, startups and companies outside of OpenAI could integrate the same technology that powers ChatGPT into their own products and services, whether internal tools for employees or external offerings for customers and partners.
Initially, the API combined elements of the Chat Completions and Assistants APIs, delivering built-in tools for web search, file search, and computer use, allowing developers to build autonomous workflows without complex orchestration logic. OpenAI said at the time that the Assistants API would be phased out by mid-2026.
The Responses API offers visibility into the model's decisions, access to real-time data, and integration capabilities that let agents retrieve, reason over, and act on information.

The launch marked a shift toward giving developers a unified toolkit for building production-ready, domain-specific AI agents with minimal friction.
Remote MCP server support expands integration potential
A key addition in this update is support for remote MCP servers. Developers can now connect OpenAI models to external tools and services such as Stripe, Shopify, and Twilio using only a few lines of code. This capability allows the creation of agents that can take actions and interact with the systems their users already rely on. To support this evolving ecosystem, OpenAI has also joined the steering committee for MCP.
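As a rough illustration of what "a few lines of code" can look like, the snippet below sketches a Responses API request payload that attaches a remote MCP server as a tool. The field names follow OpenAI's published tool schema at the time of writing, but the server label and URL are illustrative assumptions, not endpoints confirmed in this article.

```python
# Sketch of a Responses API request payload wiring in a remote MCP server.
# The server_label and server_url values are hypothetical examples.
request = {
    "model": "gpt-4.1",
    "input": "Create a payment link for a $20 donation.",
    "tools": [
        {
            "type": "mcp",
            "server_label": "stripe",                 # illustrative label
            "server_url": "https://mcp.example.com",  # hypothetical endpoint
            "require_approval": "never",              # skip per-call approval
        }
    ],
}

# In practice this payload would be sent via the official Python SDK,
# e.g. client.responses.create(**request)
```

The model can then call tools exposed by the MCP server mid-response, rather than the developer orchestrating those calls manually.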
The update also brings new built-in tools to the Responses API that expand what agents can do within a single API call.

A variant of OpenAI's hit native GPT-4o image generation model (which inspired a wave of Studio Ghibli-style anime memes around the web and strained OpenAI's servers with its popularity, but can of course create many other image styles) is now available through the API as the "gpt-image-1" model. It includes new, potentially useful and quite impressive features such as real-time streaming previews and multi-turn refinement.

This allows developers to build applications that can produce and edit images dynamically in response to user input.
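A minimal sketch of such a request, assuming the built-in image generation tool type documented for the Responses API; the preview-frame parameter is an assumption about the streaming interface, not a detail confirmed in this article:

```python
# Sketch of a Responses API request using the built-in image generation
# tool (backed by the gpt-image-1 model). The "partial_images" setting
# is an assumed parameter for streaming preview frames.
request = {
    "model": "gpt-4.1",
    "input": "Draw a watercolor lighthouse at dusk.",
    "tools": [
        {
            "type": "image_generation",
            "partial_images": 2,  # stream intermediate previews (assumed)
        }
    ],
}
```

Multi-turn refinement would then amount to sending a follow-up input such as "make the sky stormier" in the same conversation, letting the model edit its earlier image rather than starting from scratch.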
In addition, the Code Interpreter tool is now available within the Responses API, allowing models to handle data analysis, complex math, and logic tasks as part of their reasoning process.

The tool helps improve model performance across various technical benchmarks and enables more sophisticated agentic behavior.
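Attaching Code Interpreter follows the same tool-list pattern. This is a hedged sketch based on the documented tool schema; the "auto" container setting, as best understood, lets the API provision a sandbox on demand:

```python
# Sketch of a Responses API request enabling the Code Interpreter tool,
# so the model can execute code (e.g., for math) while reasoning.
request = {
    "model": "o4-mini",
    "input": "What is the standard deviation of [3, 7, 7, 19]?",
    "tools": [
        {
            "type": "code_interpreter",
            "container": {"type": "auto"},  # auto-provisioned sandbox (assumed)
        }
    ],
}
```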
Improved file search and context management
The file search capability has also been upgraded. Developers can now search across multiple vector stores and apply attribute-based filtering to retrieve only the most relevant content.

This improves the precision of the information agents draw on, strengthening their ability to answer complex questions and operate across large knowledge domains.
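The two upgrades described above might look like this in a request payload. The vector store IDs are hypothetical, and the filter object reflects an assumed comparison-filter schema rather than one confirmed in this article:

```python
# Sketch of a file search tool configuration spanning two vector stores
# with an attribute-based filter. IDs and filter schema are illustrative.
request = {
    "model": "gpt-4.1",
    "input": "Summarize our 2024 refund policy.",
    "tools": [
        {
            "type": "file_search",
            "vector_store_ids": ["vs_policies", "vs_support"],  # hypothetical IDs
            "filters": {"type": "eq", "key": "year", "value": 2024},  # assumed schema
        }
    ],
}
```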
New enterprise reliability and transparency features
Several features are designed specifically to meet enterprise needs. Background mode allows long-running asynchronous tasks, addressing timeout and network-interruption issues during intensive reasoning.

Reasoning summaries, a new addition, offer natural-language explanations of the model's internal thought process, aiding debugging and transparency.

Encrypted reasoning items provide an additional privacy layer for Zero Data Retention customers.

They allow models to reuse previous reasoning steps without storing any data on OpenAI's servers, improving both security and efficiency.
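As a sketch of how background mode might be requested, assuming a top-level flag as documented at the time of writing; the later polling step is indicated only in a comment, since it requires a live API client:

```python
# Sketch of a Responses API request run in background mode for a
# long-running task. The prompt is illustrative.
request = {
    "model": "o3",
    "input": "Produce a competitive analysis of the EV charging market.",
    "background": True,  # return immediately; poll for the result by response ID
}

# A client would later fetch the finished response, e.g.:
# client.responses.retrieve(response_id)
```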
These latest capabilities are supported across OpenAI's GPT-4o series, GPT-4.1 series, and o-series models, including o3 and o4-mini. These models can now maintain reasoning state across multiple tool calls and requests, leading to more accurate responses at lower cost and latency.
Yesterday's price is today's price!
Despite the expanded feature set, OpenAI has confirmed that pricing for the new tools and capabilities within the Responses API will remain consistent with existing rates.

For example, the Code Interpreter tool is priced at $0.03 per session, and file search usage is billed at $2.50 per 1,000 calls, with storage costs of $0.10 per GB per day after the first free gigabyte.

Web search pricing varies by model and search context size, ranging from $25 to $50 per 1,000 calls. Image generation via the gpt-image-1 tool is likewise charged according to resolution and quality tier, starting at $0.011 per image.

All tool usage is billed at the chosen model's per-token rates, with no additional markup for the newly added capabilities.
What's next for the Responses API?
With these updates, OpenAI continues to expand what is possible with the Responses API. Developers gain access to a richer set of ready-made, enterprise-grade tools and features, while companies can now build more integrated, capable, and secure AI-powered applications.

All features are live as of May 21, with pricing and implementation details available through OpenAI's documentation.