That’s No Moon… it’s a LEO Datacenter

"Look at the size of that thing." Eight orgs racing to orbit. A million satellite filings in a single week. The pitch only works if three things all go right.

“I have a bad feeling about this.”

Two FCC applications totaling over a million datacenter satellites landed within five days of each other in late January. Eight separate organizations have launched hardware, committed funding, or filed paperwork in the last ninety days for the same idea… stop building datacenters on the ground and start building them in space.

It is May the 4th. Seems like a good day to ask if anyone has actually run the numbers.

The pitch is real. The money is real. Starcloud closed a $170 million Series A at a $1.1 billion valuation in March. They already have an H100 in orbit. They trained a NanoGPT in space on the complete works of Shakespeare in December. Lonestar has a $120 million deal with Sidus Space and a commercial sovereign storage launch booked for this fall on LizzieSat-4. Aetherflux raised $50 million from a Robinhood co-founder to combine orbital compute with infrared power-beaming back to Earth. Google is building radiation-hardened TPUs and demonstrated 1.6 terabits per second on optical intersatellite links in their lab.

This is not vaporware. This is happening.

So why do I have a bad feeling about this?

What Are They Actually Selling?

The pitch is gorgeous… on paper.

Solar power in sun-synchronous orbit is unobstructed twenty-four hours a day. Cooling is effectively free, because you are radiating heat to the infinite heatsink of empty space at minus 270 degrees Celsius. There is no land to acquire, no grid interconnect to negotiate, no zoning fight, no community pushback over a substation. Sovereignty is whatever flag the operator paints on the side. Starcloud’s white paper claims energy at half a cent per kilowatt-hour, against around five cents on the cheapest terrestrial deal. Lonestar puts the operating cost number at ninety-seven percent below ground. Starcloud’s roadmap calls for a five-gigawatt deployment with a four square kilometer solar array. At that scale you stop calling it a satellite.

If those numbers held up at scale, every CFO in tech would already be writing the check.

The problem is the numbers are running into a wall. The wall is built out of three load-bearing assumptions that all have to be true at the same time. Starship has to deliver payload to LEO for under a hundred dollars per kilogram. Solar panels in production have to clear forty percent efficiency. And the hardware has to survive five years in the radiation environment of LEO without the repair cycles we take for granted on the ground.

Falcon 9 currently delivers to LEO at about twenty-seven hundred dollars per kilogram. Production solar panels run around twenty-three percent efficiency. And Google did not redesign the TPU for fun. LEO radiation is not Earth radiation.
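To see why that launch-cost gap is load-bearing, here is a rough amortization sketch. The mass-per-kilowatt figure and the five-year life are my assumptions for illustration, not numbers from anyone's published model:

```python
# Back-of-envelope: how much does launch cost alone add per kWh of
# orbital compute? All inputs are illustrative assumptions.

HOURS_PER_YEAR = 8766  # average year, including leap years

def launch_cost_per_kwh(launch_usd_per_kg, kg_per_kw, lifetime_years):
    """Amortize the launch bill for 1 kW of IT load over its lifetime.

    kg_per_kw is total launched mass per kW of delivered compute,
    including solar arrays, radiators, and structure -- a guess here.
    """
    usd_per_kw = launch_usd_per_kg * kg_per_kw
    lifetime_hours = lifetime_years * HOURS_PER_YEAR
    return usd_per_kw / lifetime_hours

# Falcon 9 pricing today vs. the Starship target, assuming
# (generously) 10 kg per kW and a five-year life with no spares.
today = launch_cost_per_kwh(2700, 10, 5)   # roughly $0.62/kWh
target = launch_cost_per_kwh(100, 10, 5)   # roughly $0.02/kWh

print(f"Falcon 9 pricing: ${today:.3f}/kWh just to get the mass up")
print(f"Starship target:  ${target:.3f}/kWh")
```

At today's launch prices, the launch bill alone swamps the half-cent-per-kilowatt-hour energy claim. At the Starship target it becomes a rounding error. That is the entire pitch in one function.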

What My Gut is Telling Me

Let’s be real here… I have spent thirty years on terrestrial infrastructure. I have watched the same playbook run twice already: first as “the cloud is always cheaper than your datacenter,” then as “GPUs in the cloud are always cheaper than buying them.” Both pitches were partially true at the moment they were sold, mostly true for some workloads, and substantially false for many of the customers who bought them and ended up repatriating five years later.

This is the same shape. But when it comes to physical infrastructure, whether it is on land or in space, the same things still apply.

Power, Cooling, and Sovereignty.

Vacuum is not a friendly thermal medium. In fact, it is the worst possible thermal medium. There is zero convection. There is no fluid loop you can run faster. Sure, there are some radiator plates, but that is it. Andrew McCalip, an engineer at Varda Space Industries, built a public calculator that puts orbital compute at roughly three times the per-watt cost of terrestrial today. His baseline assumes the orbital pitch is right about most things except the launch number, and he still ends up way over the line.
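McCalip's point is easy to sanity-check with the Stefan-Boltzmann law. A minimal sketch, where the 300 K radiator temperature, 0.9 emissivity, and the decision to ignore absorbed sunlight are all my simplifying assumptions (the sunlight only makes the real answer bigger):

```python
# How big a radiator does 1 MW of IT load need in vacuum, where
# radiation is the only way out? Inputs are illustrative assumptions.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_watts, temp_k=300.0, emissivity=0.9, sides=2):
    """Area needed to reject heat_watts purely by radiation.

    Ignores solar and Earth-albedo heating, so this is a floor,
    not an estimate.
    """
    flux = emissivity * SIGMA * temp_k**4  # W/m^2 per radiating face
    return heat_watts / (flux * sides)

# Roughly 1,200 m^2 of double-sided radiator per megawatt of waste heat.
area = radiator_area_m2(1_000_000)
print(f"~{area:,.0f} m^2 of double-sided radiator for 1 MW")
```

On the ground, a megawatt of heat is a chiller plant and a water bill. In orbit it is over a thousand square meters of deployed structure that you also had to pay to launch.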

Repair? You don’t. The failure mode for a Starcloud satellite is deorbit and replace. That is a different operational model than every datacenter we have ever built. We are not used to thinking about hardware where MTBF is a launch window.

Downlink is still the bottleneck for terrestrial workloads. If you are running inference for users on Earth, the round-trip latency to a LEO satellite is worse than the round-trip latency to a regional datacenter unless the data was already in orbit to begin with. Earth observation. In-space mission compute. That is a real market. It is not the same market as “AI compute for everybody.”

Sovereignty is a paint job. Starcloud is incorporated in Washington state. Lonestar is in Florida.

The first US judge to issue a subpoena will not care that the rack is in space.

Where I Don’t Have a Bad Feeling

This is not a hate piece. There is a real use case here, and the people building it are not stupid.

If your data is already in orbit, the orbital datacenter is gold. Earth observation satellites generate terabytes of raw data per pass that mostly never makes it back to Earth at full fidelity because the downlink is too narrow. Process it where it is. Send only the insights down. It is the same logic that pushed us toward disaggregated inference: keep the compute close to the data and don’t move bits you don’t have to.
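A rough sketch of that downlink arithmetic, where every input is an assumption for illustration rather than any specific satellite's numbers:

```python
# Why process in orbit: how much of a pass's raw data can the
# downlink actually carry? All inputs are illustrative assumptions.

def downlink_fraction(raw_bytes, link_bps, pass_seconds):
    """Fraction of raw data that fits through one ground-station pass."""
    deliverable_bytes = (link_bps / 8) * pass_seconds
    return min(deliverable_bytes / raw_bytes, 1.0)

raw = 2e12     # 2 TB collected during a pass (assumed)
link = 1.2e9   # 1.2 Gbps downlink (assumed)
window = 600   # a ten-minute ground-station contact (assumed)

frac = downlink_fraction(raw, link, window)
print(f"Only {frac:.1%} of the raw data makes it down per pass")
```

With those assumed numbers, well over ninety percent of the raw data never leaves the satellite. Either you throw it away or you compute on it in place.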

Sovereign cold storage with cryptographic key escrow is also defensible, especially the lunar lava tube play, where natural shielding does the radiation work for you. Lonestar’s StarVault is a serious answer to a real question for a small set of customers who genuinely need an offsite location adversaries cannot physically reach.

In-space mission compute for crewed stations like Axiom is going to be required infrastructure within a decade. Astronauts are going to need cloud services. Those services are going to need to be local to the people using them. And if NASA’s Artemis program is seen through to completion, there are real plans for occupied moonbase installations within the next ten years.

These are not hypothetical use cases. At the same time, they are also not “stop building terrestrial datacenters.” They are a parallel category.

Inference economics live on a curve. Throughput on one axis, latency on another, energy in the middle. NVIDIA calls that curve of optimal tradeoffs the Pareto frontier (more on this in a later post, soon). The whole game is sliding along it with intention, picking your tradeoff for the workload you actually have.

Putting the rack in orbit does not move the curve. It improves the input prices on two axes (power, cooling) and worsens them on three others (launch cost, repair cost, downlink latency).

Whether the trade comes out positive depends entirely on the workload. For most production AI inference today, it does not.

For a satellite-resident Earth observation pipeline, it absolutely does.

That is not a hot take. That is the same tradeoff math we have been doing for thirty years, applied to a new medium. The instincts you already have for “is this worth moving off-prem” work fine here.

You just need a new vocabulary for the variables.

These Are Not the Datacenters You’re Looking For

The LEO datacenter pitch is going to win some battles and lose some. The wins will land where the data is already in orbit, where sovereignty has a meaningful definition, where the workload genuinely doesn’t care about ten extra milliseconds of round-trip. The losses will land everywhere the launch math doesn’t close, the radiation budget doesn’t hold, or the downlink bandwidth becomes the constraint you were trying to escape.

The skills that let you call BS on “the cloud is always cheaper” in 2014 still work in 2026. They work in space, too. The Force is the same Force, whether you are swinging a lightsaber or a torque wrench.

May the 4th be with you.

/Nick

FAQ

Are LEO datacenters real or just hype?

They are real. Starcloud has H100 hardware in orbit, Lonestar has successfully tested storage on the moon, and at least eight organizations filed plans, launched hardware, or committed funding for orbital compute in the first quarter of 2026. The question is not whether they exist. The question is which workloads make economic sense in space and which do not.

Will orbital datacenters replace terrestrial datacenters for AI inference?

No, not for general-purpose AI inference serving users on Earth. The downlink latency makes a regional terrestrial datacenter faster for most workloads. Where LEO datacenters do make sense is compute on data that is already in orbit, sovereign cold storage, and in-space mission support. Those are real and growing markets, but they are parallel to terrestrial AI infrastructure, not a replacement for it.

What’s the actual launch cost math for orbital datacenters?

The Starcloud and Lonestar economic models assume Starship hits launch costs under one hundred dollars per kilogram to LEO. Falcon 9 currently runs about twenty-seven hundred dollars per kilogram. The gap between those two numbers is where the entire pitch lives or dies. If Starship hits its target, the case gets stronger. If it does not, terrestrial wins on cost for the foreseeable future.

