Inside the Machine: What an Evening With Demis Hassabis Taught Me About Where AGI Is Headed
Key takeaways from an evening with Demis Hassabis and Sebastian Mallaby on The Quest for Artificial General Intelligence, hosted by Bloomberg’s Tom Mackenzie
Tom Mackenzie, Demis Hassabis and Sebastian Mallaby
Whether AGI has already arrived is becoming a popular topic of conversation among tech executives. NVIDIA's Jensen Huang recently told podcaster Lex Fridman that he thinks we've achieved it, though his benchmark was an AI that could quickly spin up a billion-dollar app, and he was the first to admit that 100,000 such agents couldn't replicate what he has built at NVIDIA over three decades. OpenAI's Greg Brockman puts himself at "70 to 80% there." The goalposts are moving so fast that OpenAI CEO Sam Altman has called AGI "not a very useful term."
So when I sat in a room with Demis Hassabis, the man who has arguably done more than anyone alive to advance the science of general intelligence, the first thing I wanted to understand was what progress looks like from inside the work.
The gap in existing AI models
Today's AI models are extraordinary within the boundaries of what they were trained to do. What they cannot do is continue learning after deployment. They cannot encounter a novel situation in the world and adapt to it in real time. That is precisely what makes AGI still a frontier.
Research Google DeepMind published this year illustrates the gap: its cognitive framework maps AI performance across ten faculties, including reasoning, memory, attention, and social cognition. The finding is that today's models have a jagged profile. They may exceed most people in mathematics or factual recall while trailing the average person in learning from new experiences or in reading social situations. The bar DeepMind proposes for AGI is matching median human performance across all ten dimensions.
For founders and builders, treating this as uneven progress rather than simple success or failure is a useful mindset right now.
What Demis is building toward, and why it matters
Demis came to AI through neuroscience, chess, and a desire to understand the nature of reality. That background has shaped everything about how DeepMind approaches problems.
For him, AGI is first and foremost a tool for science, and the primary targets are drug discovery and energy: two domains where the complexity of the variables already exceeds what human researchers can process on their own.
AlphaFold is the proof of concept: Demis won the 2024 Nobel Prize in Chemistry for using AI to predict the structures of nearly all known proteins, structures that had defeated conventional biology for decades. His claim is that any drug discovered from this point forward will have used AlphaFold somewhere along the line.
In framing his team’s approach, the concept he kept returning to was blue-sky thinking: removing all conventional constraints from a problem in order to see what becomes possible. It is the kind of thinking that does not improve on what exists but arrives at a place nobody had previously mapped.
The question of trust
The room was asked whether we can trust Demis, as an AI leader, with something this consequential. Time and future developments will tell, but Demis is an unusual case.
Demis chose to stay in London when Silicon Valley would gladly have taken him. DeepMind was founded there deliberately, and it has remained there. The argument he and Sebastian Mallaby made is that cutting-edge deep tech does not require you to be in California. What it requires is the right talent, the right intellectual culture, and the ambition to match. London has become a hub for all three, and we underestimate how much that matters for UK and European founders who want to build at the frontier without relocating.
But trust is also a structural question. Sebastian Mallaby, when asked whether he felt reassured about where AGI is heading, noted that if many actors are building toward AGI across different countries and institutions, no single leader can control the outcome. That makes international cooperation and shared safety frameworks essential. Governments need to set guardrails collectively because the companies closest to the technology cannot be the only voices in the room.
What AGI’s development means for founders and builders right now
A more effective approach to the AGI conversation begins with the jagged-profile concept. Your product doesn't necessarily require AGI to function. Instead, identify which of the ten cognitive dimensions matter most to the problem you are addressing. From there, you can build on the strengths of current AI while being transparent about where human judgment is still essential.
On skills and future readiness, the speakers emphasised that goal setting and project initiation should remain distinctly human for the foreseeable future. Likewise, the ability to build relationships and guide others through uncertainty is essential. People prefer to buy from people, which suggests the relational aspects of work are here to stay. What is changing is the administrative and data-heavy layer beneath them, and that transformation is already under way.
On a final note
AGI is not arriving tomorrow, and the debate over whether we have already achieved it will continue, largely because the definition keeps evolving. What is clear is that we are close enough to AGI that remaining passive is no longer a reasonable option.
The founders who will thrive in this next phase are those who can think creatively, as Demis described: they need to remove constraints, explore what becomes possible, and work towards those possibilities with the same determination that took DeepMind from a small startup in London to winning a Nobel Prize.
The technology is remarkable, but the responsibility of deciding its purpose still lies with us.
Written by Oluwatomi Lawal