A segment on CBS's weekly in-depth television news program 60 Minutes last night (also shared on YouTube here) offered an inside look at Google DeepMind and the vision of its co-founder and CEO, Nobel Prize winner and legendary AI researcher Demis Hassabis.
The interview traced DeepMind's rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI) – a machine intelligence with human-like versatility and superhuman scale.
Hassabis described the trajectory of AI today as being on an “exponential curve of improvement,” fueled by growing interest, talent, and resources entering the field.
Two years after a prior 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed not only to understand language, but also the physical world around them.
The interview came on the heels of Google's Cloud Next 2025 conference earlier this month, at which the search giant introduced a host of new AI models and features centered on its Gemini 2.5 multimodal AI model family. Google emerged from that conference appearing to have taken the lead over other tech companies in providing powerful AI for enterprise use at the most affordable prices, surpassing OpenAI.
More details on Google DeepMind’s Project Astra
One of the segment’s focal points was Project Astra, DeepMind’s next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time.
In a demo, it identified paintings, inferred emotional states, and invented a story around an Edward Hopper painting with the line: “Only the flow of ideas moving.”
When asked if it was bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance.
Product manager Bibbo Shu underscored Astra’s unique design: an AI that can “see, hear and chat about anything” – a marked step toward embodied AI systems.
Gemini: toward an AI that can take action
The broadcast also featured Gemini, DeepMind’s AI system being trained not only to interpret the world but also to act in it – completing tasks such as booking tickets and shopping online.
Hassabis said that Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments.
The 60 Minutes team tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at an upcoming return of the pioneering but ultimately discontinued augmented reality glasses known as Google Glass, which debuted in 2012 before being retired in 2015?
While specific Gemini model versions like Gemini 2.5 Pro or Flash were not mentioned in the segment, Google’s broader AI ecosystem has recently introduced those models for enterprise use, which may reflect parallel development efforts.
These integrations reflect Google’s growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.
AGI by 2030?
When asked for a timeline, Hassabis projected that AGI could arrive as early as 2030, with systems that understand their environments “in very nuanced and deep ways.” He suggested such systems could be seamlessly integrated into everyday life, from wearable devices to home assistants.
The interview also touched on the possibility of self-awareness in AI. Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same “squishy carbon matter” as humans.
Hassabis also predicted major developments in robotics, saying breakthroughs could come in the next few years. The segment featured robots completing tasks from vague instructions – such as identifying a green block formed by mixing yellow and blue – suggesting growing reasoning abilities in physical systems.
Achievements and safety concerns
The segment revisited DeepMind’s landmark achievement with AlphaFold, the AI model that predicted the structures of over 200 million proteins.
Hassabis and colleague John Jumper were co-awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that this advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. “I think one day we may be able to cure all diseases with the help of AI,” he said.
Despite his optimism, Hassabis expressed clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He stressed the importance of building in guardrails and value systems – teaching an AI as one might teach a child. He also called for international cooperation, noting that AI’s influence will reach every country and culture.
“One of my big worries,” he said, “is that the race for AI dominance could become a race to the bottom for safety.” He emphasized the need for leading players and nation-states to coordinate on ethical development and oversight.
The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor – and eventually reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, “we need new great philosophers to come about … to understand the implications of this system.”