The Minds of Modern AI: 2025 Queen Elizabeth Prize for Engineering
Overview
A discussion with six of the world’s most influential AI pioneers who received the 2025 Queen Elizabeth Prize for Engineering. Featured speakers include:
- Yoshua Bengio - Deep learning pioneer
- Geoffrey Hinton - Backpropagation and neural networks pioneer
- Yann LeCun - Convolutional neural networks pioneer
- Jensen Huang - Nvidia CEO, GPU computing pioneer
- Fei-Fei Li - AI researcher, human-centered AI advocate
- Bill Dally - Nvidia Chief Scientist, computer architecture and AI infrastructure pioneer
Personal Breakthrough Moments
Yoshua Bengio
- Early Career: Reading Geoff Hinton's papers as a grad student and realizing there might be simple principles (like the laws of physics) for understanding human intelligence
- Recent Shift: After the launch of ChatGPT (2.5 years ago), recognized serious concerns about AI systems pursuing goals humans don't control, and switched his entire research agenda to AI safety
Jensen Huang (Nvidia)
- 1984-1990s: Solved the "memory wall" problem by organizing computations into kernels connected by streams (see the sketch after this list)
- 2010-2011 Breakthrough: A breakfast conversation with Andrew Ng, who had demonstrated neural networks finding cats on the internet using 16,000 CPUs; Huang replicated the experiment with 48 GPUs at Nvidia, which convinced him to specialize GPUs for deep learning
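The "kernels connected by streams" idea can be made concrete with a small, hypothetical sketch (an illustration of the general pattern, not Huang's original design): work is split into chunks, and each chunk's host-to-device copy is issued on its own CUDA stream so memory transfers overlap with computation instead of stalling behind the memory wall.

```python
import torch

# Minimal illustration (assumes a CUDA-capable GPU is available):
# split a big batch into chunks and give each chunk its own stream,
# so host->device copies overlap with matmuls running on other chunks.
assert torch.cuda.is_available()

device = torch.device("cuda")
weight = torch.randn(1024, 1024, device=device)

# Pinned host memory allows asynchronous (non_blocking) copies.
chunks = [torch.randn(256, 1024).pin_memory() for _ in range(8)]
streams = [torch.cuda.Stream() for _ in chunks]
outputs = []

for chunk, stream in zip(chunks, streams):
    with torch.cuda.stream(stream):              # enqueue work on this stream
        x = chunk.to(device, non_blocking=True)  # asynchronous copy
        outputs.append(x @ weight)               # kernel launched on the same stream

torch.cuda.synchronize()                         # wait for all streams to finish
result = torch.cat(outputs)
print(result.shape)  # torch.Size([2048, 1024])
```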
Geoffrey Hinton
- 1984 Breakthrough: Used backpropagation to predict the next word in a sequence (a tiny language model)
- Discovered that without explicit instruction, the system learned meaningful word representations (see the sketch after this list)
- “Took 40 years to get here because we didn’t have compute and data”
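A minimal modern sketch of the idea Hinton describes (purely illustrative, not his 1984 setup): a tiny network is trained with backpropagation to predict the next word, and the only signal it ever receives is the prediction error, yet the embedding table it learns ends up encoding word meaning.

```python
import torch
import torch.nn as nn

# Toy corpus and vocabulary.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Training pairs: (current word -> next word).
inputs = torch.tensor([idx[w] for w in corpus[:-1]])
targets = torch.tensor([idx[w] for w in corpus[1:]])

# Tiny "language model": embedding -> hidden layer -> next-word logits.
model = nn.Sequential(
    nn.Embedding(len(vocab), 8),   # learned word representations
    nn.Linear(8, 16), nn.Tanh(),
    nn.Linear(16, len(vocab)),
)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                # backpropagation does all the "teaching"
    opt.step()

# The embedding vectors are the learned word representations; no one ever
# told the model what the words mean.
print(model[0].weight[idx["cat"]])
```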
Yann LeCun
- 1983 Discovery: Fascinated by training machines instead of programming them; read Hinton’s papers
- Late 1980s: Key debate with Hinton about supervised vs. unsupervised learning
- Mid-2000s: Restarted interest in deep learning after Bengio’s work
- 2017 Pivot: Recognized need to return to self-supervised learning, leading to modern LLMs
Fei-Fei Li
- 2006-2007: Realized that data scarcity, not the choice of algorithm, was the bottleneck in machine learning, after having tried every available algorithm
- ImageNet Creation: Built a 15-million-image dataset across 22,000 categories to solve the data problem
- Key Insight: "Big data drives machine learning" - data had become the limiting factor for AI
- 2018 Google Chief Scientist Role: Recognized AI as a civilizational technology; went on to co-found Stanford's Institute for Human-Centered AI (HAI)
Bill Dally
- Undergrad: Fascinated by machine learning and self-organizing systems
- Collaboration: Eventually met with LeCun and Hinton; collaborated on multi-layer network training
Current State of AI: Not a Bubble
Jensen Huang’s Key Arguments Against Bubble Narrative
Comparison to Dot-Com Bubble:
- During dot-com: Most fiber deployed was “dark” (unused)
- Today: “Almost every GPU you can find is lit up and used”
Key Differences:
- Real-Time Computation Required: Unlike traditional software (pre-compiled), AI must be contextually aware and generate intelligence in real-time
- New Economic Model: “AI needs factories” - requires hundreds of billions in infrastructure to serve trillions in economic value
- Augmentation, Not a Tool: AI augments human labor and does work itself - fundamentally different from past software tools
Evidence Supporting Growth:
- Profitable AI companies already exist (Cursor, Bridge, OpenEvidence in healthcare)
- Two exponentials happening simultaneously: the computation needed per task and overall usage are both growing exponentially
- Current usage is still very low; “almost everything we do will engage AI somehow”
Supporting Perspectives on Valuations
Geoffrey Hinton’s Caveats:
- Models are getting more efficient (attention improvements: standard multi-head attention → grouped-query attention (GQA) → multi-head latent attention (MLA); see the sketch after this list)
- Models continue improving; GPUs valuable even with new architectures
- Applications barely scratched (estimate only 1% of ultimate demand explored)
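To make the efficiency point concrete, here is a hedged sketch of what the step from standard multi-head attention to GQA buys (a generic illustration, not any particular model's implementation): several query heads share one key/value head, so the KV cache that dominates inference memory shrinks by the group factor.

```python
import torch

def attention(q, k, v):
    # q, k, v: (heads, seq, d)
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)
    return scores.softmax(dim=-1) @ v

seq, d = 128, 64
n_q_heads, n_kv_heads = 8, 2          # 8 query heads share 2 K/V heads (GQA)

q = torch.randn(n_q_heads, seq, d)
k = torch.randn(n_kv_heads, seq, d)   # KV cache is 4x smaller than with MHA
v = torch.randn(n_kv_heads, seq, d)

# Each group of 4 query heads attends over the same shared K/V head.
group = n_q_heads // n_kv_heads
out = attention(q,
                k.repeat_interleave(group, dim=0),
                v.repeat_interleave(group, dim=0))
print(out.shape)  # torch.Size([8, 128, 64])

# K/V cache per token is proportional to n_kv_heads * d, so this GQA layout
# stores 1/4 of what standard multi-head attention (8 K/V heads) would.
```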
Yann LeCun’s Nuance:
- Not in bubble sense: Enormous applications to develop based on LLMs; investment justified for infrastructure and software
- Potential bubble in paradigm: Current LLM paradigm unlikely to reach human-level intelligence alone
- Missing breakthroughs: We don't yet have robots as smart as cats; scientific breakthroughs are still needed
- Not just engineering problem: Progress requires new paradigm, not just more data/compute/investment
The Path to Human-Level Intelligence
Timeline Estimates (Quick-Fire Responses)
Yoshua Bengio: 5-10 years for the new paradigm breakthroughs; full progress will take longer
Yann LeCun: Machines already surpass humans in specific domains; it won't be a single event
- Machines already recognize 22,000 object categories (more than a typical human can name)
- Can translate 100+ languages (superhuman)
- Airplanes fly, but not the way birds do - AI intelligence will similarly differ from human intelligence
Geoffrey Hinton: Less than 20 years for AI that always wins debates with humans
Jensen Huang: Already here; semantic question rather than timeline
- “It doesn’t matter” - technology will keep improving and be applied to important problems
- Focused on applications rather than AGI threshold
Bill Dally: 5 years if the current trend in planning capability continues
- AI doing AI research could unlock next breakthroughs
- No single path; multiple possible futures
- Must remain agnostic and not make big claims
Critical Limitations of Current AI
Spatial Intelligence Gap
Fei-Fei Li’s Key Insight:
- Language-based models are strong
- Spatial intelligence = lynchpin between perception and action
- Beyond language: Humans and animals perceive, reason, interact, create worlds
- Today’s most powerful LLMs fail at “rudimentary spatial intelligence tests”
- Robotics: We don’t have robots as smart as cats
The Paradigm Question
- Current LLM paradigm may hit limits before achieving human-level AGI
- New algorithmic breakthroughs required, not just scaling
- AI is still young: Only ~75 years (since Alan Turing) vs. 400+ years for physics
- Many unexplored frontiers beyond language
Infrastructure and Scaling
GPU Computing Reality
- Scaling from one GPU → multiple GPUs → multiple data centers uses the same logic (see the sketch after this list)
- “The rest is engineering and extrapolation”
- Key questions: How much data? How large should the networks be? How much dimensionality is needed?
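The "same logic" point can be illustrated with a minimal data-parallel training sketch (an assumed setup using PyTorch's DistributedDataParallel, not a description of any specific lab's stack): the training step is identical whether the process group spans one GPU, one node, or many nodes in a data center; only the launcher arguments and the number of ranks change.

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train():
    # The same script runs on 1 GPU or thousands: the launcher (torchrun)
    # sets RANK / WORLD_SIZE environment variables and we just read them.
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(1024, 1024).cuda(), device_ids=[local_rank])
    opt = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(32, 1024, device="cuda")
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()      # DDP all-reduces gradients across every rank
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    train()

# Single node, 8 GPUs:  torchrun --nproc_per_node=8 this_script.py
# Multi-node runs use the same code, launched with more ranks.
```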
LLMs Are No Longer Just “Language Models”
- Evolution from language models to agents capable of multi-step reasoning (see the sketch after this list)
- Interaction with environment and computing infrastructure
- Technology fundamentally different from 3 years ago
- “Agents” represent new capability paradigm
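A minimal sketch of the agent pattern being described (the names call_llm and TOOLS are hypothetical placeholders, not a specific product's API): the model is called in a loop, chooses a tool, observes the result, and keeps reasoning across multiple steps instead of returning a single completion.

```python
import json

# Hypothetical stand-ins: a real agent would wrap an actual model API in
# call_llm and register real tools in TOOLS.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # demo only; never eval untrusted input
    "search": lambda query: f"results for {query!r}",
}

def call_llm(messages):
    # Fake model: first asks for the calculator, then answers from the result.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "arguments": "6 * 7", "content": ""}
    return {"tool": None, "content": f"The answer is {messages[-1]['content']}"}

def run_agent(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_llm(messages)           # model decides: final answer or tool call
        if reply["tool"] is None:
            return reply["content"]          # done: multi-step reasoning finished
        # Execute the chosen tool and feed the observation back to the model.
        observation = TOOLS[reply["tool"]](reply["arguments"])
        messages.append({"role": "assistant", "content": json.dumps(reply)})
        messages.append({"role": "tool", "content": observation})
    return "stopped: step limit reached"

print(run_agent("What is 6 * 7?"))  # -> "The answer is 42"
```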
Key Opportunities and Frontiers
- Applications Barely Explored: Only ~1% of potential applications developed
- Video and Sensor Data: LLMs are weak at processing these; a key challenge for the coming years
- AI for AI Research: AI designing next generation of AI, improving robotics, spatial understanding
- Robot Capabilities: Major gap between current and animal-level intelligence
- Multimodal Integration: Combining language, vision, spatial reasoning
Consensus Points Among AI Leaders
- ✓ Not a bubble - Different from dot-com era due to actual infrastructure utilization
- ✓ Multiple exponentials - Efficiency improvements + better models + new applications
- ✓ Complementary to humans - Goal is augmentation, not replacement
- ✓ Long-term potential massive - Hundreds of billions in infrastructure justified
- ✓ Scientific challenges remain - Need new paradigms beyond scaling current approaches
- ✓ Humanity remains critical - Humans provide creativity, empathy, and values
The Human Role in AI Future
Complementary Strengths
Humans excel at:
- Creativity and creative problem-solving
- Empathy and social understanding
- Ethical judgment and values
- New problem formulation
- Physical dexterity and manipulation
AI excels at:
- Recognition and categorization
- Mathematical problem-solving
- Language translation
- Data processing and pattern recognition
- Parallel computations at scale
The Right Question
“Not whether AI will replace humans, but how to build AI to augment humans and complement what humans are good at.”
Looking Ahead: Key Takeaways
- We’re at the Beginning: Current AI usage still very low; massive room for growth
- Paradigm Shifts Coming: LLMs likely insufficient for AGI; need new approaches
- Infrastructure Critical: Computing factories essential, not optional
- Spatial Intelligence Next Frontier: Beyond language to perception-action cycles
- Multiple Possible Futures: Uncertainty high; should plan accordingly
- Human-Centered Approach Essential: Technology must augment, not replace human agency
- Keep Learning: Technology will be dramatically different in one year
Discussion Summary
The six pioneers shared a consensus that:
- We’re riding multiple exponentials, not a bubble
- Current applications represent only ~1% of potential
- New paradigms needed for true AGI, beyond just scaling
- Infrastructure investment is justified and necessary
- Human-AI collaboration is the goal, not replacement
- The future remains uncertain with multiple possible paths
- Humanity’s unique capabilities remain essential
This summary represents the 2025 Queen Elizabeth Prize discussion featuring six of the world’s most influential AI pioneers sharing their perspectives on the current state, future trajectory, and challenges of artificial intelligence.