Nvidia-backed startup Starcloud has made history by training the first AI model in space, using an Nvidia H100 GPU aboard its Starcloud-1 satellite, opening a realistic path toward orbital data centers that could transform the economics of AI compute. This breakthrough shows how space-based infrastructure can deliver cheaper, cleaner, always-on computing power for next‑generation AI workloads.
What Is Starcloud’s Orbital AI Breakthrough?
Starcloud is a Washington-based startup building data centers in Earth orbit, backed by Nvidia and early‑stage investors like Y Combinator and NFX. In November 2025, its Starcloud‑1 satellite reached orbit carrying an Nvidia H100 GPU, the most powerful data‑center‑grade chip ever flown in space.
Onboard Starcloud‑1, the team successfully ran and trained modern AI models, including Google’s open model Gemma and Andrej Karpathy’s NanoGPT, marking the first confirmed training of an AI model in orbit. This moment effectively turns a single satellite into a tiny, fully functional orbital data center that can handle real workloads from space.
How Starcloud Trained the First AI Model in Space
According to CNBC, Starcloud launched the roughly 60 kg Starcloud‑1 satellite, which integrates an Nvidia H100 GPU on an Astro Digital Corvus‑Micro platform. The H100 offers roughly 100 times the compute of any GPU previously operated in orbit, enabling workloads that were until now impossible in space.
On this hardware, Starcloud:
Ran Google’s Gemma as an orbital chatbot, responding to queries about the satellite and general topics from low Earth orbit.
Trained NanoGPT on the complete works of Shakespeare, producing outputs in Shakespearean‑style English directly from space.
Integrated the model with satellite telemetry, enabling queries like “Where are you now?” and answers such as “I’m above Africa; in 20 minutes I’ll be above the Middle East.”
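The telemetry integration described above can be sketched as a simple prompt-assembly step. Everything below is illustrative: the class, field names, and the stubbed model call are assumptions for the sketch, not Starcloud’s actual flight software or Gemma’s API.

```python
# Illustrative sketch: feeding satellite telemetry into an LLM prompt.
# All names and fields here are hypothetical, not Starcloud's real interface.
from dataclasses import dataclass

@dataclass
class Telemetry:
    lat_deg: float   # sub-satellite latitude
    lon_deg: float   # sub-satellite longitude
    region: str      # coarse geographic label derived from lat/lon

def build_prompt(question: str, tm: Telemetry) -> str:
    """Embed current telemetry in the system context so the model
    can answer position questions like 'Where are you now?'."""
    context = (
        f"You are a chatbot running on a satellite in low Earth orbit. "
        f"Current sub-satellite point: {tm.lat_deg:.1f} deg lat, "
        f"{tm.lon_deg:.1f} deg lon, over {tm.region}."
    )
    return f"{context}\nUser: {question}\nAssistant:"

def answer(question: str, tm: Telemetry) -> str:
    # Placeholder for the onboard model call (e.g. Gemma); a real system
    # would pass build_prompt(...) to the model and return its reply.
    if "where" in question.lower():
        return f"I'm above {tm.region} right now."
    return "Ask me about the satellite or my position."

tm = Telemetry(lat_deg=5.0, lon_deg=20.0, region="Africa")
print(answer("Where are you now?", tm))  # → I'm above Africa right now.
```

The key idea is simply that live telemetry is injected into the model’s context on every query, so the answer tracks the satellite’s actual position.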
These experiments validate that language models can be trained and run reliably in the harsh environment of space, despite radiation, vacuum, and thermal extremes.
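The NanoGPT experiment can be illustrated in miniature with a character-bigram model trained on a Shakespeare snippet. This toy sketch is orders of magnitude simpler than NanoGPT and is only meant to show the shape of the task: fit a model to Shakespeare text, then sample from it.

```python
# Toy character-bigram "language model" trained on a Shakespeare snippet.
# A drastically simplified stand-in for NanoGPT-style training, shown
# only to illustrate the idea of fitting a model to Shakespeare text.
from collections import Counter, defaultdict

corpus = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer"
)

# "Training": count how often each character follows each other character.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def generate(seed: str, length: int) -> str:
    """Greedily extend the seed with the most likely next character."""
    out = seed
    for _ in range(length):
        ranked = counts[out[-1]].most_common(1)
        if not ranked:
            break
        out += ranked[0][0]
    return out

print(generate("t", 12))  # → the the the t
```

A real run replaces the bigram table with a small transformer and greedy lookup with sampled decoding, but the train-then-generate loop is the same.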
Why Orbital Data Centers Matter for AI
Starcloud’s CEO Philip Johnston claims orbital data centers can cut energy costs roughly tenfold compared with ground facilities by using near-constant solar power and radiative cooling into deep space. Because satellites can be placed in orbits that avoid the day–night cycle and weather, they harvest nearly uninterrupted solar energy without consuming the fresh water or land that terrestrial data centers do.
The company’s white paper describes a long‑term vision of 5‑gigawatt orbital compute clusters roughly 2.4 miles (about 4 km) on a side, generating more power than the largest U.S. power plant while remaining smaller and cheaper than equivalent solar farms on Earth. For hyperscale AI, this model promises lower emissions, vastly higher scalability, and less strain on national power grids.
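The 5-gigawatt figure can be sanity-checked with a back-of-envelope calculation. The solar-constant and panel-efficiency values below are standard textbook numbers, not figures from Starcloud’s white paper, and the array is assumed square for simplicity.

```python
# Back-of-envelope check of the 5 GW orbital cluster figure.
# Assumed values (not from Starcloud's white paper):
SOLAR_CONSTANT_W_M2 = 1361   # solar irradiance in Earth orbit, W/m^2
PANEL_EFFICIENCY = 0.25      # typical modern space-grade solar cells

side_m = 2.4 * 1609.34       # 2.4 miles in metres
area_m2 = side_m ** 2        # square array, 2.4 miles per side

power_w = area_m2 * SOLAR_CONSTANT_W_M2 * PANEL_EFFICIENCY
print(f"Array side: {side_m / 1000:.1f} km")        # → Array side: 3.9 km
print(f"Estimated output: {power_w / 1e9:.1f} GW")  # → Estimated output: 5.1 GW
```

Under these assumptions the array yields roughly 5 GW, so the white paper’s headline figure is at least dimensionally plausible.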
Real-World Use Cases Already Running in Orbit
Even at the demo stage, Starcloud is not just running benchmarks but handling practical workloads. The company is:
Performing AI inference on satellite imagery from partners like Capella Space to detect forest fires and spot lifeboats from capsized ships in near real time.
Using onboard models to analyze telemetry and autonomously manage satellite operations, reducing the need for constant human intervention.
Planning to host customer workloads directly in orbit via future satellites that integrate Nvidia’s next‑gen H‑series GPUs and Crusoe’s cloud module for remote deployment.
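The imagery-analysis use case above can be sketched with a deliberately crude stand-in: flag unusually hot pixels in a thermal tile as candidate fire detections. Real systems use trained ML models on calibrated imagery; the threshold rule, function name, and sample values here are illustrative assumptions.

```python
# Illustrative onboard "inference" sketch: flag hot pixels in a thermal
# image tile as potential fire detections, so only detections (not raw
# imagery) need to be downlinked. A stand-in for a real trained model.

def detect_hotspots(tile, threshold=330.0):
    """Return (row, col) coordinates whose brightness temperature (K)
    exceeds the fire threshold."""
    return [
        (r, c)
        for r, row in enumerate(tile)
        for c, value in enumerate(row)
        if value > threshold
    ]

tile = [
    [290.0, 291.5, 289.0],
    [292.0, 355.0, 290.5],  # hot pixel: candidate fire
    [288.0, 289.5, 291.0],
]
print(detect_hotspots(tile))  # → [(1, 1)]
```

The payoff of doing this in orbit is bandwidth: a few detection coordinates are far cheaper to transmit than full-resolution imagery.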
These early applications show how orbital AI could support disaster response, maritime safety, climate monitoring, and space situational awareness from above.
Nvidia’s Strategy and the Future Race to Orbital Compute
Nvidia’s involvement goes beyond branding: Starcloud participates in the Nvidia Inception program and receives discounted access to high‑end GPUs as part of a broader bet on space‑based compute. With Nvidia’s data center revenue projected in the tens of billions and R&D spend nearing $13 billion, supporting orbital AI platforms aligns with its goal of dominating every layer of AI infrastructure.
Starcloud’s success also intensifies a wider race in space‑based data centers, with other players exploring lunar storage, space‑based solar, and off‑planet cloud services. As more satellites carry data‑center‑grade GPUs, orbital compute could evolve from a single experimental satellite to a multi‑provider ecosystem that offloads a significant share of global AI workloads from Earth.