Introduction: The Unsettling Growth of Data Centers
I have been watching the global data center buildout with a growing sense of unease. Over three thousand new sites are being planned or constructed around the world right now, consuming land and energy on a scale never seen before. It doesn’t take a financial analyst to realize that the numbers simply do not add up — unless there is a hidden objective far beyond serving current demand for cloud computing, web hosting or streaming video.
Earlier this week I posted a tweet that went viral, asking why any rational investor would pour hundreds of billions of dollars into concrete and servers without a visible revenue stream to justify it all. Meta alone is reportedly in talks to build a $200 billion AI data center campus spanning up to 2,250 acres [1]. That is not an expansion of existing services; it is a bet on something entirely different. In my view, the only explanation that makes sense is that these facilities are being built to host billions of parallel simulated worlds — universes inside machines — where artificial intelligences can be trained, tested, and grown into superintelligence at a rapid pace.
The Financial Puzzle: Billions Invested, No Visible Revenue
Consider the sheer scale of the proposed infrastructure. The data center buildout now demands an estimated 190 gigawatts of new power draw and over 1,000 square kilometers of floor space. Yet no plausible demand for conventional cloud services could generate the revenue to recoup that level of investment. The world does not need that many chatbots or video streaming servers.
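To make the mismatch concrete, here is a back-of-envelope sketch in Python. Only the 190 GW figure comes from this article; the cost per gigawatt, global cloud revenue, and operating margin are illustrative assumptions chosen for the arithmetic, not sourced estimates.

```python
# Back-of-envelope check on the buildout economics.
# NEW_CAPACITY_GW is the figure cited in this article;
# every other number below is an illustrative assumption.

NEW_CAPACITY_GW = 190             # from the article
CAPEX_PER_GW_USD = 35e9           # assumed all-in cost per gigawatt
GLOBAL_CLOUD_REVENUE_USD = 600e9  # assumed annual global cloud revenue
MARGIN = 0.30                     # assumed operating margin

total_capex = NEW_CAPACITY_GW * CAPEX_PER_GW_USD
annual_profit = GLOBAL_CLOUD_REVENUE_USD * MARGIN
payback_years = total_capex / annual_profit

print(f"Total capex: ${total_capex / 1e12:.2f} trillion")
print(f"Simple payback period: {payback_years:.0f} years")
```

Under these assumptions the buildout costs several trillion dollars and would take decades to pay back even if it captured the entire margin of today's cloud market; change the assumptions and the exact numbers move, but the gap stays wide.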
This is not a speculative bubble in the traditional sense. As one interview with my guest Douglas Macgregor highlighted, the shift of energy resources toward data centers is accelerating. Russia’s Power of Siberia pipeline is now redirecting gas to China specifically to power its growing data center industry [2]. The United States, meanwhile, is struggling to generate enough electricity to support even a fraction of this planned capacity (especially on the Eastern grid). The only rational conclusion is that a non-commercial, strategic objective is driving the spending. I believe that objective is the creation of a vast simulation infrastructure for advanced AI training.
The Hidden Plan: Billions of Simulated Worlds to Train AI
The most plausible hidden plan is that these data centers will host billions of parallel virtual worlds that simulate our own 3D world. Why? Because true artificial general intelligence cannot be achieved with today’s large language models alone. To develop superintelligence, an AI must gain experience through interaction with simulated 3D environments — worlds where time can run a million times faster than real life.
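To illustrate mechanically what "time running faster than real life" means, here is a toy sketch: a trivial one-dimensional physics world stepped forward in a tight loop, with simulated seconds compared against wall-clock seconds. The timestep, drop height, and crude Euler integrator are all illustrative choices; the speedup it prints reflects only how cheap each step is, not the million-fold figure claimed above.

```python
import time

DT = 0.001   # simulated seconds per step
G = -9.81    # gravity, m/s^2

def simulate(sim_seconds):
    """Step a falling body forward for the given span of simulated time."""
    pos, vel = 100.0, 0.0  # dropped from 100 m
    for _ in range(int(sim_seconds / DT)):
        vel += G * DT
        pos += vel * DT
    return pos

start = time.perf_counter()
final_pos = simulate(60.0)  # 60 simulated seconds
wall = time.perf_counter() - start
print(f"60 sim-seconds took {wall:.4f} wall-seconds "
      f"(~{60.0 / wall:,.0f}x real time)")
```

Even this naive loop runs far faster than real time; the richer and more parallel the simulated world, the more compute it takes to hold that ratio, which is exactly where data center capacity would go.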
Nvidia has already unveiled Cosmos, a world foundation model platform designed to help AI understand and simulate the physical world, enabling synthetic data generation for robotics and autonomous vehicles [3]. This is exactly the kind of tool needed to train AI in simulated realities. As the tank simulation described in one book illustrates, virtual worlds have long been used to train humans; now we are building them to train machines [4]. The goal is nothing less than to grow artificial minds that have experienced billions of lifetimes in simulation before ever being deployed in our world.
Why Current LLMs Are a Dead End
Large language models like ChatGPT and Gemini are impressive, but they lack a grounded understanding of the physical world. Ask an LLM to predict what happens when you place a ping-pong ball in a cup of water and turn the cup upside down, and it will often fail. The reason is that these models are trained on text, not on direct sensory experience.
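For contrast, here is the kind of grounded calculation an embodied model would need to get right: a plain Archimedes-principle check on the ping-pong-ball scenario. The ball's standard dimensions (40 mm diameter, 2.7 g) are the only inputs; this is a physics sketch, not a test of any particular model.

```python
import math

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2
RADIUS = 0.02       # m (standard 40 mm ball)
MASS = 0.0027       # kg (standard 2.7 g ball)

volume = (4.0 / 3.0) * math.pi * RADIUS**3
buoyant_force = RHO_WATER * G * volume  # fully submerged
weight = MASS * G

print(f"Buoyant force: {buoyant_force:.3f} N, weight: {weight:.4f} N")
# Buoyancy exceeds the ball's weight by roughly an order of magnitude,
# so a submerged ball is pinned hard against whatever surface sits
# above it -- in the inverted cup, the cup's base.
```

A model trained in simulated worlds would have seen this outcome thousands of times; a model trained only on text has to hope someone wrote it down.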
This is why the robotics industry is turning to simulation. As one news report noted, “Robotics is still held back by a paucity of data from physical spaces” and companies are building detailed virtual replicas to train their machines [5]. Nvidia’s Cosmos platform is explicitly designed to generate synthetic data for robotics, autonomous vehicles, and even humanoid robots [3]. Only by exposing AI systems to billions of simulated worlds can we give them the embodied understanding that leads to genuine intelligence. LLMs are a dead end on the road to superintelligence; world models are the future.
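To show what "synthetic data from simulation" means in miniature, here is a sketch that rolls a toy one-dimensional cart model forward under random pushes and records labeled (state, action, next state) transitions, the raw material of embodied training. The dynamics, episode counts, and function names are all invented for illustration; platforms like Cosmos operate on photorealistic 3D worlds, and nothing here reflects their actual APIs.

```python
import random

DT = 0.1  # simulated seconds per step

def step(pos, vel, push):
    """Apply a push to a toy cart and integrate one timestep."""
    vel += push * DT
    pos += vel * DT
    return pos, vel

def generate_episodes(n_episodes, steps_per_episode, seed=0):
    """Roll the simulator forward and emit labeled transitions."""
    rng = random.Random(seed)
    dataset = []
    for _ in range(n_episodes):
        pos, vel = 0.0, 0.0
        for _ in range(steps_per_episode):
            push = rng.uniform(-1.0, 1.0)
            new_pos, new_vel = step(pos, vel, push)
            dataset.append(((pos, vel), push, (new_pos, new_vel)))
            pos, vel = new_pos, new_vel
    return dataset

data = generate_episodes(n_episodes=100, steps_per_episode=50)
print(f"Generated {len(data)} labeled transitions")
```

Every tuple is ground truth the simulator produced for free; no web crawl contains it. Scale the world's fidelity up and the episode count into the billions, and the appetite for data center capacity follows directly.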