At the Milken Global Conference in Beverly Hills this week, five people who touch every layer of the artificial intelligence supply chain sat down on a single stage and, with unusual candor, described an industry running into hard physical limits — some obvious, some invisible, and some that could upend the entire technological architecture the world is currently betting trillions of dollars on.
The panel was not a gathering of alarmists. It was a gathering of builders. Christophe Fouquet, CEO of ASML, the Dutch company that holds a near-monopoly on the extreme ultraviolet lithography machines without which modern chips simply cannot exist. Francis deSouza, COO of Google Cloud, who is overseeing what may be the largest infrastructure investment in corporate history. Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company building autonomy systems for cars, drones, and defense. Dimitry Shevelenko, chief business officer of Perplexity, the AI-native search company that is rapidly morphing into a “digital worker” platform. And Eve Bodnia, a quantum physicist who left academia to challenge the foundational architecture most of the AI industry takes for granted.
What they said, taken together, amounts to a warning that the AI economy is not simply growing — it is colliding with the physical world. And the physical world does not scale like software.
The Bottleneck Nobody Wants to Talk About
Fouquet was the first to say it, and he said it with the quiet authority of someone who knows exactly how many EUV machines exist on the planet at any given moment. “For the next two, three, maybe five years, the market will be supply limited,” he told the audience. The hyperscalers — Google, Microsoft, Amazon, Meta — are not going to get all the chips they are paying for. Full stop.
This is not a forecast. It is a statement of manufacturing physics. ASML’s EUV lithography systems are the only machines capable of etching the transistor patterns required for the most advanced AI accelerators, and ASML cannot make them fast enough. Each machine costs over $300 million, requires a specialized supply chain of thousands of components, and takes months to assemble and ship. Even if every customer in the world opened their wallets tomorrow, the production line could not accelerate meaningfully without years of additional capital investment and supplier ramp-up.
DeSouza, whose company is one of those desperate customers, put numbers to the demand. Google Cloud’s revenue crossed $20 billion last quarter, growing 63% year-over-year. Its backlog — committed revenue not yet delivered — nearly doubled in a single quarter, from $250 billion to $460 billion. That backlog represents contracts signed with customers who want AI compute, training capacity, and inference infrastructure that Google literally does not yet have the hardware to provide.
“The demand is real,” deSouza said, with a calm that sounded impressive but may simply have been the only rational response to a situation fundamentally outside his control.
The Energy Wall Is Coming
If chips are the first bottleneck, energy is the one looming directly behind it. The AI industry is currently consuming power at a rate that is beginning to stress national grids. Training a single frontier model can require as much electricity as a small city uses in a year. Running inference at scale — answering billions of ChatGPT queries, generating images, coding software — multiplies that consumption across every data center on Earth.
DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “You get access to more abundant energy,” he noted, referring to the uninterrupted solar radiation available in orbit. The idea is not science fiction — it is engineering economics. On Earth, data center construction is increasingly constrained by local power grid capacity, permitting timelines, and the physical difficulty of dissipating heat into the atmosphere. In space, the sun provides continuous power, and the cold of deep space serves as a heat sink.
Of course, space is not simple. DeSouza acknowledged that the vacuum of space eliminates convection, leaving radiation as the only way to shed heat — a much slower and harder-to-engineer process than the liquid and air cooling systems that terrestrial data centers rely on today. But the company is still treating it as a legitimate path, which tells you something about how constrained the terrestrial options have become.
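The constraint deSouza described comes down to Stefan-Boltzmann physics: a radiator in vacuum can shed heat only as P = εσAT⁴. A back-of-the-envelope sketch makes the scale of the problem concrete (the power figure and panel temperature here are illustrative assumptions, not Google’s numbers):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Panel area needed to reject `power_w` of heat purely by radiation,
    from P = emissivity * sigma * A * T^4 (absorbed sunlight ignored)."""
    return power_w / (emissivity * SIGMA * temp_k ** 4)

# Illustrative: rejecting 1 MW of server heat from panels held at 300 K
# requires on the order of 2,400 square meters of radiator surface.
area = radiator_area(1e6, 300.0)
```

The quartic dependence on temperature is why orbital radiators must be either very large or run hot — the engineering trade deSouza was alluding to.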
Fouquet made a similar point later in the discussion. “Nothing can be priceless,” he said. The industry is in a strange moment, investing extraordinary amounts of capital driven by strategic necessity and competitive fear. But more compute means more energy, and more energy has a price. At some point, the cost of the next unit of intelligence will exceed the value it generates, and the current investment wave will either slow or collapse.
Physical AI Is a Sovereignty Problem
While Fouquet and deSouza were describing constraints on the digital side of AI, Younis was describing something more geopolitically charged. Applied Intuition builds autonomy systems for cars, trucks, drones, mining equipment, and military vehicles. His bottleneck is not silicon — it is the data that can only be gathered by sending machines into the real world and watching what happens.
“You have to find it from the real world,” he said. “There will be a long time before you can fully train models that run on the physical world synthetically.” Simulation helps, but the gap between simulated physics and real-world friction, weather, human behavior, and mechanical failure is wide, and it is not closing as fast as the AI industry would like.
More importantly, Younis argued that physical AI and national sovereignty are entangled in ways that purely digital AI never was. The internet initially spread as American technology and faced pushback only at the application layer — the Ubers and DoorDashes — when offline consequences became visible. Physical AI is different. Autonomous vehicles, defense drones, mining equipment, and agricultural machines manifest in the real world in ways governments cannot ignore, raising questions about safety, data collection, and who ultimately controls systems that operate inside a nation’s borders.
“Almost consistently, every country is saying: we don’t want this intelligence in a physical form in our borders, controlled by another country,” Younis told the crowd. He offered a striking comparison: fewer nations can currently field a robotaxi than possess nuclear weapons. The implication is that physical AI is becoming a technology of strategic national importance, regulated and restricted in ways that cloud software and chatbots are not.
The Architecture Might Be Wrong
While the rest of the panel debated scale, infrastructure, and geopolitics, Bodnia was building something that could make the entire large language model paradigm obsolete.
Her company, Logical Intelligence, is based on energy-based models — a class of AI that does not predict the next token in a sequence but instead attempts to understand the underlying rules governing data, in a way she argues is closer to how the human brain actually works. “Language is a user interface between my brain and yours,” she said. “The reasoning itself is not attached to any language.”
Her largest model runs to 200 million parameters. That is tiny by modern standards — GPT-4 class models have hundreds of billions. But she claims her energy-based model runs thousands of times faster, updates its knowledge as data changes without requiring retraining from scratch, and is designed for domains where physical rules matter more than linguistic patterns. “When you drive a car, you’re not searching for patterns in any language,” she said. “You look around you, understand the rules about the world around you, and make a decision.”
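The contrast Bodnia draws can be sketched in miniature. An energy-based model encodes the rules of a domain in an energy function E(x) and answers by finding a configuration that minimizes it, rather than predicting the next token in a sequence. The toy below (a quadratic energy standing in for “rules about the world”; entirely illustrative, not Logical Intelligence’s architecture) shows the inference-as-minimization pattern:

```python
import numpy as np

# The domain's "rules" as a constraint system: configurations x that
# satisfy A @ x = b have zero energy.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 4.0])

def energy(x: np.ndarray) -> float:
    """Low energy means the rules are satisfied."""
    r = A @ x - b
    return float(r @ r)

def infer(steps: int = 500, lr: float = 0.05) -> np.ndarray:
    """Answer by gradient descent on the energy, not by sequence prediction."""
    x = np.zeros(2)
    for _ in range(steps):
        grad = 2.0 * A.T @ (A @ x - b)  # dE/dx
        x -= lr * grad
    return x

x = infer()  # converges to the rule-satisfying configuration [1, 1]
```

Notably, changing `b` (the observed data) changes the answer immediately, with no retraining of a learned sequence model — a small analogue of the knowledge-update property Bodnia claims.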
The endorsement that gives this argument weight is not hers — it is Yann LeCun’s. Meta’s former chief AI scientist signed on as founding chair of Logical Intelligence’s technical research board earlier this year. LeCun has been publicly skeptical of the transformer architecture and the “scale is all you need” philosophy for years. If energy-based models can demonstrate competitive performance in robotics, chip design, or scientific reasoning with a fraction of the compute, they could shift the industry’s center of gravity away from the massive, expensive, energy-hungry language models that currently dominate.
Agents, Guardrails, and the Death of Entry-Level Jobs
Shevelenko spent much of the conversation explaining how Perplexity has evolved from a search product into something it now calls a “digital worker.” Perplexity Computer, its newest offering, is designed not as a tool a knowledge worker uses but as a staff that a knowledge worker directs. “Every day you wake up and you have a hundred staff on your team,” he said. “What are you going to do to make the most of it?”
It is a compelling pitch. It also raises obvious questions about control, security, and the displacement of human labor. Shevelenko’s answer was granularity. Enterprise administrators can specify not just which connectors and tools an agent can access, but whether those permissions are read-only or read-write — a distinction that matters enormously when agents are acting inside corporate systems. When Perplexity’s computer-use agent takes action on a user’s behalf, it presents a plan and asks for approval first. Some users find the friction annoying, but Shevelenko considers it essential, particularly after joining the board of Lazard, where he said he has found himself unexpectedly sympathetic to the conservative instincts of a chief information security officer protecting a 180-year-old brand built entirely on client trust.
“Granularity is the bedrock of good security hygiene,” he said.
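The read-only versus read-write distinction Shevelenko described can be sketched as a simple per-connector policy check. This is a hypothetical illustration of the pattern, not Perplexity’s actual API; the connector names and `authorize` helper are invented for the example:

```python
from enum import Enum

class Access(Enum):
    NONE = 0        # agent cannot touch this connector at all
    READ = 1        # agent may read but never mutate
    READ_WRITE = 2  # agent may read and take actions

# Hypothetical policy an enterprise administrator might set per connector.
policy = {
    "crm": Access.READ,
    "email": Access.READ_WRITE,
    "payroll": Access.NONE,
}

def authorize(connector: str, action: str) -> bool:
    """Check an agent's requested action against the admin-set policy.
    Unknown connectors default to no access."""
    level = policy.get(connector, Access.NONE)
    if action == "read":
        return level in (Access.READ, Access.READ_WRITE)
    if action == "write":
        return level is Access.READ_WRITE
    return False
```

Defaulting unknown connectors to `Access.NONE` reflects the deny-by-default instinct Shevelenko credits to security-conscious institutions.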
Near the end of the panel, an audience member asked the uncomfortable question: is all of this going to impact the next generation’s capacity for critical thinking? The answers were optimistic, as you would expect from people who have staked their careers on this technology. DeSouza pointed to neurological diseases, greenhouse gas removal, and deferred grid infrastructure — problems that more powerful tools might finally let humanity address. “This should unleash us to the next level of creativity,” he said.
But Shevelenko made a more pragmatic point: the entry-level job may be disappearing, but the opportunity to launch something independently has never been greater. That is either a promise or a threat, depending on whether you are entering the workforce or managing one.
What This Means
Taken together, the five panelists described an AI industry at an inflection point. The hardware layer is supply-constrained for years. The energy layer is approaching physical limits. The geopolitical layer is fragmenting into national sovereignty claims. The software layer may be architecturally wrong. And the labor layer is beginning to dissolve.
None of them said the AI boom is over. All of them said it is harder than it looks. And all of them, implicitly or explicitly, warned that the next phase of AI will not be won by the companies with the best models — it will be won by the companies that can navigate the physical, political, and economic constraints that models alone cannot solve.
The AI economy is not a software economy anymore. It is a heavy industry. And heavy industries have wheels. Sometimes they fall off.