I worked in aerospace for a couple of years at the beginning of my career. While my area of expertise was mechanical design, I shared my office with the guy who did the thermal design, and I learned two things:
1. Satellites are mostly run at room temperature. It doesn't have to be that way but it simplifies a lot of things.
2. Every satellite is a delicately balanced system where heat generation and actively radiating surfaces need to be in harmony during the whole mission.
Preventing the vehicle from getting too hot is usually a much bigger problem than preventing it from getting too cold. This might be surprising, because laypeople usually associate space with cold. In reality, you can always heat if you have energy, but cooling is hard when all you have is radiation and you are operating at a fixed and relatively low temperature.
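To put a rough number on that, here is a back-of-envelope with values I'm assuming myself (room-temperature radiator, ideal view to deep space, no absorbed sunlight or Earth IR):

    # How much radiator area does 1 MW of waste heat need at room temperature?
    SIGMA = 5.67e-8    # Stefan-Boltzmann constant, W/(m^2*K^4)
    T_RAD = 300.0      # radiator temperature, K (~room temperature)
    EPS   = 0.9        # emissivity of a typical radiator coating (assumed)
    P     = 1e6        # 1 MW of waste heat, W

    flux = EPS * SIGMA * T_RAD**4   # ~410 W/m^2 per radiating face
    area = P / flux                 # ~2,400 m^2 per MW, one face, ideal case
    print(f"{flux:.0f} W/m^2 -> {area:.0f} m^2 per MW")

Real radiators radiate from both faces but also absorb sunlight and Earth IR, so this only sets the scale: on the order of a thousand square meters per megawatt.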
The bottom line is that running a datacenter in space doesn't make much sense from a thermal standpoint, so there must be other compelling reasons for a decision to do so.
Why do they want to put a data center in space in the first place?
Free cooling?
Doesn't make much sense to me. As the article points out, the radiators need to be massive.
Access to solar energy?
Solar is more efficient in space, I'll give them that, but does that really outweigh the whole hassle of putting the panels in space in the first place?
Physical isolation and security?
Against manipulation, maybe, but not against denial of service. A willfully damaged satellite is something I expect to see in the news in the foreseeable future.
Low latency comms?
Latency is limited by distance and the speed of light. Everyone with a satellite internet connection knows that low latency is not a particular strength of it.
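For scale, the straight-line light travel time alone, using altitudes I'm assuming here and ignoring routing, queuing, and ground-segment hops:

    C_KM_S = 299_792.458   # speed of light in vacuum, km/s

    for name, alt_km in [("LEO, ~550 km (Starlink-like)", 550),
                         ("GEO, 35,786 km (classic satellite internet)", 35_786)]:
        one_way_ms = alt_km / C_KM_S * 1e3
        print(f"{name}: {one_way_ms:.1f} ms one way, {2 * one_way_ms:.1f} ms up and back")

So a LEO datacenter adds a few milliseconds at best, and a GEO one roughly a quarter of a second, before any real-world networking overhead.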
Marketing and PR?
That, probably.
EDIT:
Thought of another one:
Environmental impact?
No land use and no thermal stress on rivers on the one hand, but the huge overhead of a space launch on the other.
Starcloud isn't even worth the attention it takes to point out what an infeasible idea it is.
If I'm reading this correctly, the idea is
1. YOLO. Yeet big data into orbit!
2. People will pay big bucks to keep their data all the way up there!
3. Profit!
It could make sense if the entire DC was designed as a completely modular system. Think ISS without the humans. Every module needs to have a guaranteed lifetime, and then needs to be safely yet destructively deorbited after its replacement (shiny new module) docks and mirrors the data.
> There, 24/7 solar power is unhindered by day/night cycles, weather, and atmospheric losses (attenuation).
Wouldn't the Earth still get in the way of the sun, or is it far enough away that this doesn't happen?
> Starcloud’s target is to achieve a 5 GW cluster with solar arrays spanning 4 km by 4 km
Doesn't this massive surface area also mean a proportionately large risk of getting damaged by orbital debris?
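As an aside, the quoted array size is at least self-consistent; a quick check with numbers I'm assuming myself (solar constant, combined cell/packing efficiency):

    side_m  = 4_000                 # 4 km x 4 km array
    area_m2 = side_m ** 2           # 16 km^2
    s_const = 1361                  # solar constant above the atmosphere, W/m^2
    eff     = 0.30                  # assumed cell efficiency x packing factor

    print(f"{area_m2 * s_const * eff / 1e9:.1f} GW")   # ~6.5 GW, so 5 GW is plausible

But yes, every square meter of that is also debris cross-section.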
If you are out of the magnetosphere, wouldn't your data be subject to way more cosmic ray interference, to the point that it's actually a consideration?
Cooling things in space is insanely difficult, as there’s no conduction or convection.
Cooling is one of the main challenges in designing data centers.
I wonder if there could be some way to photolithograph compute circuits directly onto a radiator substrate, and accomplish a fully passive thermal solution that way. Consider the heat-conduction problem: from dimensional analysis, the required thickness of a (conduction-only) radiator plate with a regular grid of heat sources on it shrinks superlinearly as you subdivide those heat sources (from a few large sources into many small ones). At fixed areal power density, if the unit heat source is Q, the plate thickness scales as d ∝ Q^{3/2}. (This is intuitive: the asymptotic limit is a uniform, continuous heat source exactly matched to a uniform radiative heat sink, at which point lateral heat conduction is zero.) So: could one contemplate an array of very tiny CPU sub-units, gridded evenly over a thin Al foil, say at the milliwatt scale with millimeter-scale separation? It'd be mostly empty space (radiator area) and interconnect. It'd be thermally self-sufficient and weigh practically nothing.
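A quick numeric sanity check of that regime, using a crude 1-D spreading model and numbers I'm picking myself (not the exact analysis above):

    K_AL  = 200.0     # W/(m*K), aluminium foil conductivity (assumed alloy value)
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant
    EPS   = 0.9       # emissivity of a coated surface (assumed)
    Q     = 1e-3      # W per compute sub-unit
    L     = 1e-3      # m grid pitch
    D     = 10e-6     # m foil thickness (assumed)

    q  = Q / L**2                            # areal power density: 1,000 W/m^2
    t  = (q / (2 * EPS * SIGMA)) ** 0.25     # radiating from both faces, no sun/Earth IR
    dT = q * L**2 / (8 * K_AL * D)           # lateral conduction drop across one cell, 1-D approx
    print(f"T ~ {t:.0f} K, conduction dT ~ {dT:.2f} K")

At milliwatt/millimeter scale, even a ~10 µm foil keeps the conduction drop far below a kelvin while the foil sits near room temperature, so the "compute printed on the radiator" picture at least passes a first smell test.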
Putting a datacenter in space is one of the worst ideas I've heard in a while.
Reliable energy? Possible, but difficult -- need plenty of batteries
Cooling? Very difficult. Where does the heat transfer to?
Latency? Highly variable.
Equipment upgrades and maintenance? Impossible.
Radiation shielding? Not free.
Decommissioning? Potentially dangerous!
Orbital maintenance? Gotta install engines on your datacenter and keep them fueled.
There's no upside, it's only downsides as far as I can tell.
I'm mostly puzzled by how this got YC funding. Everything I've seen thus far suggests this is nowhere close to feasible.
This is putting the cart before the horse in the most literal sense: SpaceX can't even get a Starship into space without it breaking apart.
They can get highly qualified space engineers to do a lot of pre-qualification work for free though! (Cunningham's law)
I wonder what the implications for data protection / privacy laws and the like would be. Would it be as simple as there being no applicable laws, or is the location of the users still relevant?
"Terrestrial datacenters have parts fail and get replaced all the time."
This premise is basically false. Most datacenter hardware, once it has completed testing and burn-in, will last for years in constant use.
There are definitely failures, but failure rates are very low unless something is wrong, like bad cooling, vibration, or just a bad batch of hardware.
There is 0 reason to put a data center in space. For every single reason beyond "investor vibes" you can accomplish the same thing on earth for a significantly lower cost.
Here is a video that I think thoroughly covers the challenges a datacenter in orbit would face.
This is such a gloriously stupid fucking idea.
This site is unusable on my Android phone; I even tried multiple browsers. The body text extends beyond the window, and I can't scroll or zoom to fit.
And all of humanity will be watching these arrays orbit, for the financial benefit of whom? I'm happy to remember the wild night sky.
Who's asking for datacenters in space?
Let's start by acknowledging that there is no Starship and that it's likely the current iteration of that system is not viable. It will need to be redesigned, and no one even knows if that's possible, let alone economically feasible.
Good use case for Bitcoin mining?
- lots of cheap power
- deploy hundreds of ASICs and let each of them fail as they go
My napkin math sides with Starcloud (https://news.ycombinator.com/item?id=43190778): one $10M Starship launch puts a 10,000-GPU datacenter into LEO, energy and cooling included. I originally missed the batteries needed for the half of each orbit spent in Earth's shadow (I had calculated this for crypto, where you can be off half the time, which isn't the case for a regular datacenter) and the panels to charge them. That adds about 10 kg per 1 kWh, which brings it down to about 5,000 GPUs for the same weight and launch cost.
Paradoxically, the datacenter in LEO comes out cheaper than one on the ground, and it has a bunch of other benefits, for example physical security.
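Roughly, the napkin math with my own placeholder masses (these are assumptions, not Starcloud's numbers):

    PAYLOAD_KG   = 100_000   # assumed Starship payload to LEO
    BASE_KG_GPU  = 10.0      # GPU + structure + solar + radiator budget -> ~10,000 GPUs
    GPU_POWER_KW = 1.0       # assumed per-GPU draw incl. overhead
    ECLIPSE_H    = 0.6       # ~35 min of a ~90 min LEO orbit in shadow
    BATT_KG_KWH  = 10.0      # the 10 kg per kWh figure from above
    EXTRA_SOLAR  = 3.0       # extra panel mass per GPU to recharge batteries (guess)

    batt_kg    = GPU_POWER_KW * ECLIPSE_H * BATT_KG_KWH    # ~6 kg per GPU
    kg_per_gpu = BASE_KG_GPU + batt_kg + EXTRA_SOLAR       # ~19 kg per GPU
    print(int(PAYLOAD_KG / kg_per_gpu), "GPUs per launch") # ~5,300, i.e. "about 5,000"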
The launch costs in the article look quite off from the outset.
A Falcon Heavy launch is already under $100M, and in the $1400/kg range; Starship’s main purpose is to massively reduce launch costs, so $1000/kg is not optimistic at all and would be a failure. Their current target is $250/kg eventually once full reusability is in place.
Still far from the dream of $30/kg but not that far.
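For reference, the $/kg arithmetic with approximate list prices and payloads (the Starship payload figure is my assumption):

    FH_PRICE  = 97e6      # Falcon Heavy launch, "under $100M"
    FH_LEO_KG = 63_800    # advertised max payload to LEO
    print(f"Falcon Heavy: ~${FH_PRICE / FH_LEO_KG:,.0f}/kg")         # ~$1,500/kg

    STARSHIP_KG = 100_000  # assumed Starship payload to LEO
    for usd_per_kg in (1000, 250, 30):                               # figures discussed above
        print(f"${usd_per_kg}/kg -> ~${usd_per_kg * STARSHIP_KG / 1e6:.0f}M per launch")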
The original “white paper” [1] does also acknowledge that a separate launch is needed for the solar panels and radiators, at a 1:1 ratio to the server launches, which is ignored here. I think the author leaned a bit too heavily on their deep-research AI assistant's output.
Space roboticist here.
As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need in order to replace failed components.
As a first cut, this would require:
- an autonomous rendezvous and docking system
- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant
- CPU, power, comms, and cooling to support the above
- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternately, two or more independent robotic systems that are capable not only of replacing cards but also of replacing each other.
- regular launches containing replacement hardware
- ongoing ground support staff to deal with failures
The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.