Last week, readers were briefed on the emerging theme of data centers in low Earth orbit, a concept now openly discussed by Elon Musk, Jensen Huang, Jeff Bezos, and Sam Altman as energy availability and land-based infrastructure constraints become major bottlenecks to data center buildouts through the end of this decade and well into the 2030s.
Nvidia-backed startup Starcloud has released a white paper outlining a case for operating a constellation of artificial intelligence data centers in space as a practical solution to Earth’s looming power crunch, cooling woes, and permitting land constraints.
Terrestrial data center projects will hit capacity limits as AI workloads scale to multi-gigawatt levels, while electricity demand and grid bottlenecks worsen over the next several years. Orbital data centers aim to bypass these constraints by using near-continuous, high-intensity solar power, passive radiative cooling to deep space, and modular designs, launched on SpaceX rockets, that can scale quickly.
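The solar-power advantage can be sanity-checked with rough numbers. The sketch below compares annual energy collected per square meter of panel in a near-continuously illuminated orbit against a good terrestrial site; the irradiance and availability figures are illustrative assumptions on my part, not values from Starcloud's white paper.

```python
# Rough comparison of annual solar energy per m^2 of panel in a
# near-continuously sunlit low Earth orbit vs. a good terrestrial site.
# All inputs are illustrative assumptions, not Starcloud's figures.

HOURS_PER_YEAR = 8760

def annual_kwh_per_m2(irradiance_w_m2: float, availability: float) -> float:
    """Energy collected per m^2 per year, given average irradiance (W/m^2)
    and the fraction of time the panel is illuminated."""
    return irradiance_w_m2 * availability * HOURS_PER_YEAR / 1000

# Orbit: solar constant ~1361 W/m^2, assumed ~99% illumination
orbital = annual_kwh_per_m2(1361, 0.99)
# Ground: ~1000 W/m^2 peak, assumed ~21% capacity factor
terrestrial = annual_kwh_per_m2(1000, 0.21)

print(f"Orbit:  {orbital:.0f} kWh/m^2/yr")
print(f"Ground: {terrestrial:.0f} kWh/m^2/yr")
print(f"Ratio:  {orbital / terrestrial:.1f}x")
```

Under these assumptions a square meter of panel in orbit collects several times the annual energy of the same panel on the ground, which is the arithmetic behind the pitch.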
“Orbital data centers can leverage lower cooling costs using passive radiative cooling in space to directly achieve low coolant temperatures. Perhaps most importantly, they can be scaled almost indefinitely without the physical or permitting constraints faced on Earth, using modularity to deploy them rapidly,” Starcloud wrote in the report.
Starcloud continued, “With new, reusable, cost-effective heavy-lift launch vehicles set to enter service, combined with the proliferation of in-orbit networking, the timing for this opportunity is ideal.”
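The passive-cooling claim can likewise be sized with the Stefan-Boltzmann law: a radiator facing deep space sheds heat in proportion to the fourth power of its temperature. The sketch below estimates the radiator area needed per megawatt of waste heat; the emissivity, radiator temperature, and heat load are illustrative assumptions, not figures from the white paper.

```python
# Back-of-envelope sizing of a space radiator via the Stefan-Boltzmann law.
# Assumed inputs (emissivity, temperatures, heat load) are illustrative.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_load_w: float, radiator_temp_k: float,
                     emissivity: float = 0.9, sink_temp_k: float = 3.0) -> float:
    """Radiator area needed to reject heat_load_w to deep space.

    Net flux per unit area is eps * sigma * (T_rad^4 - T_sink^4);
    the ~3 K cosmic background makes the sink term negligible.
    """
    flux = emissivity * SIGMA * (radiator_temp_k**4 - sink_temp_k**4)
    return heat_load_w / flux

# Example: reject 1 MW of GPU waste heat with a 300 K (27 C) radiator.
area = radiator_area_m2(1e6, 300.0)
print(f"Radiator area for 1 MW at 300 K: {area:.0f} m^2")
```

At these assumed values the answer comes out to roughly a few thousand square meters per megawatt, which is why the white paper pairs passive cooling with large, modular deployable structures.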
Already, the startup has launched its Starcloud-1 satellite carrying an Nvidia H100 GPU, the most powerful compute chip ever sent into space. Using the H100, Starcloud successfully trained NanoGPT, a lightweight language model, on the complete works of Shakespeare, making it the first AI model trained in space.