This page describes AI data centres **in orbit**, powered by sunlight and linked with free-space optical networks to beam **data (not energy)** down to Earth.
The concept has moved from speculation to live programs: Google Research’s Project Suncatcher laid out a technical blueprint in November 2025, while NVIDIA-backed Starcloud is flight-testing a GPU payload to validate space-compute operations:
- If you want to satiate AI’s hunger for power, Google suggests going to space - arstechnica.com
- Exploring a space-based, scalable AI infrastructure system design - research.google
- Towards a future space-based, highly scalable AI infrastructure system design - Suncatcher pdf
- How Starcloud Is Bringing Data Centers to Outer Space (2025) - blogs.nvidia.com
- Powerful NVIDIA chip launching to orbit next month to pave way for space-based data centers (2025) - space.com
# What it is

Instead of sending power to Earth, constellations of **solar-powered compute satellites** host accelerators such as GPUs and TPUs. They exchange data via **free-space optical** inter-satellite links (ISLs) and downlink results through high-throughput laser terminals to ground stations, offloading energy demand from terrestrial grids.
Google’s plan emphasises scaling ML by growing a mesh of compute nodes in space; reporting pegs early trial hardware as targeting **2027** for orbit - theguardian.com
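Why bother? A rough energy budget makes the case. The sketch below compares annual solar energy per square metre of panel in a near-continuously sunlit orbit against a good terrestrial site; the solar constant and speed of light are physical facts, but the orbit sun fraction and ground capacity factor are illustrative assumptions, not figures from the Suncatcher paper.

```python
# Rough annual solar-energy yield per m^2 of panel: a dawn-dusk
# sun-synchronous orbit (near-continuous sunlight, no atmosphere)
# versus a good terrestrial site. Illustrative assumptions.

SOLAR_CONSTANT = 1361.0        # W/m^2 above the atmosphere (physical constant)
ORBIT_SUN_FRACTION = 0.99      # dawn-dusk SSO sees almost no eclipse (assumed)
GROUND_PEAK = 1000.0           # W/m^2 standard test-condition irradiance
GROUND_CAPACITY_FACTOR = 0.25  # night, weather, sun angle (assumed, good site)
HOURS_PER_YEAR = 8766

orbit_kwh = SOLAR_CONSTANT * ORBIT_SUN_FRACTION * HOURS_PER_YEAR / 1000
ground_kwh = GROUND_PEAK * GROUND_CAPACITY_FACTOR * HOURS_PER_YEAR / 1000

print(f"orbit:  {orbit_kwh:,.0f} kWh/m^2/yr")
print(f"ground: {ground_kwh:,.0f} kWh/m^2/yr")
print(f"ratio:  {orbit_kwh / ground_kwh:.1f}x")
```

Under these assumptions a panel in orbit yields roughly five times the annual energy of the same panel on the ground, which is the core of the "offload the grid" argument.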
# Core architecture
Orbital solar arrays feed accelerators; data moves across **terabit-class optical ISLs** for distributed training and then to Earth via laser downlinks. Google highlights the requirement for **tens of Tb/s** interconnect to rival ground data-centre fabrics, pushing coherent optics, pointing, and clock-sync beyond today’s commsats - phys.org
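A back-of-envelope calculation shows why terabit-class ISLs are non-negotiable for distributed training. The sketch below uses the standard ring all-reduce traffic formula; the mesh size, model size, and step time are illustrative assumptions, not parameters from the Suncatcher paper.

```python
# Back-of-envelope: per-node ISL bandwidth needed so a data-parallel
# gradient sync is not network-bound. Parameters are assumptions.

N = 81              # satellites in the mesh (assumed)
PARAMS = 500e9      # model parameters (assumed)
BYTES_PER_GRAD = 2  # fp16 gradients
STEP_TIME_S = 2.0   # target wall-clock time per training step (assumed)

# Ring all-reduce: each node sends (and receives) ~2*(N-1)/N times the
# gradient payload per step.
payload_bytes = PARAMS * BYTES_PER_GRAD
per_node_bytes = 2 * (N - 1) / N * payload_bytes
per_node_tbps = per_node_bytes * 8 / STEP_TIME_S / 1e12

print(f"per-node ISL traffic: {per_node_tbps:.1f} Tb/s")
```

Even this modest scenario lands at several Tb/s per node, so an aggregate fabric in the tens of Tb/s, as the article notes Google requires, follows directly.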
# Live trials and early hardware
NVIDIA’s ecosystem is already flying precursors. Starcloud-1 carries an **H100** GPU to orbit to test performance, radiation tolerance, thermal throttling, and on-orbit ops—framed as a step toward GPU-dense “data centre” satellites. NVIDIA profiled the startup and its roadmap; mainstream coverage tracks the launch and purpose - blogs.nvidia.com
- space.com
- youtube.com
- dataconomy.com
# Networking and latency

Space compute makes sense when **data-movement** beats **power-movement**. The downlinks carry **results, models, and updates**, not bulk power.
Suncatcher notes that delivering ground-comparable performance demands **low-latency, high-bandwidth** ISLs and smart task placement (train in-mesh; deliver checkpoints/outputs to Earth). For inference near users, edge satellites can place results a hop from optical ground stations - research.google
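The latency claim is easy to sanity-check from light-travel time alone. The distances below are illustrative assumptions (a long LEO slant path and a close-formation ISL hop), not figures from the sources; processing and queueing delays are ignored.

```python
# One-way light-travel delays for a LEO compute mesh, showing why a
# nearby satellite can still feel "edge-local". Distances are assumed.

C = 299_792_458  # m/s, speed of light (optical links, vacuum)

def one_way_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_km * 1000 / C * 1000

slant_downlink_km = 1000  # satellite to ground station at low elevation (assumed)
isl_hop_km = 100          # hop between satellites in a compact mesh (assumed)

print(f"downlink: {one_way_ms(slant_downlink_km):.2f} ms")
print(f"ISL hop:  {one_way_ms(isl_hop_km):.3f} ms")
```

Propagation alone is a few milliseconds for the downlink and a fraction of a millisecond per ISL hop, so placement (which node holds the result, which ground station receives it) dominates the user-perceived latency.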
# Radiation, reliability, and cooling
Accelerators must survive **single-event effects** and cumulative dose; Google says it has already begun **radiation testing of TPUs** ahead of any flight. Thermal design shifts to **radiators and phase-change** systems sized for worst-case loads. Modular “tile” satellites allow graceful degradation and replacement within the constellation - arstechnica.com
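To see why thermal design shifts to radiators, consider that vacuum leaves only radiative rejection, governed by the Stefan-Boltzmann law. The sketch below sizes an idealised two-sided radiator; the heat load, radiator temperature, and emissivity are illustrative assumptions, and solar/Earth heat input is ignored.

```python
# Idealised radiator area needed to reject waste heat in vacuum via the
# Stefan-Boltzmann law. Parameters are illustrative assumptions; solar
# and Earth-infrared heat loads are ignored.

SIGMA = 5.670374419e-8  # W/m^2/K^4, Stefan-Boltzmann constant

def radiator_area_m2(heat_w: float, temp_k: float,
                     emissivity: float = 0.9, sink_k: float = 4.0) -> float:
    """Two-sided ideal radiator facing deep space."""
    flux = emissivity * SIGMA * (temp_k**4 - sink_k**4)  # W/m^2 per face
    return heat_w / (2 * flux)  # both faces radiate

# e.g. 100 kW of accelerator waste heat at a 330 K radiator (assumed):
print(f"{radiator_area_m2(100e3, 330):.0f} m^2")
```

Roughly 80 m^2 for 100 kW at a comfortable radiator temperature: large but tractable, and it explains the emphasis on worst-case sizing, since required area grows with heat load and shrinks only with the fourth power of temperature.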
# Ground segment and operations
Earthside, multiple **optical ground stations** (OGS) form a follow-the-sun network for downlinks. Results ingest into terrestrial clouds or sovereign facilities via fibre. Operations emphasise: authenticated command, autonomous fault recovery, debris-aware manoeuvring, and **software-defined** resource scheduling across the fleet - research.google