I think you see it backwards.
You want the AI agent on the outside orchestrating. This will easily scale to many nodes.
If you’re a masochist, the first node-control agent could run on the same physical host as the “AI”.
This node controller only needs to be a service with the desired permissions (root, in your line of thinking) that dutifully executes whatever the LLM emits for the controller.
Those could be Docker containers, VMs, or other servers. All you need is a suicidal script that does whatever the model outputs and responds with stdout/stderr for the feedback loop.
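The "suicidal script" above could be a tiny sketch like this one, assuming a line-oriented protocol on stdin/stdout (one shell command per line in, captured output back out). Everything here is illustrative, not from any real agent framework:

```python
# Minimal node-controller loop: execute whatever the LLM sends and
# return stdout/stderr so the model gets a feedback loop.
# WARNING: this runs arbitrary commands by design -- hence "suicidal".
import subprocess
import sys

def run_command(cmd: str) -> str:
    """Run one shell command and capture both output streams."""
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True, timeout=60
    )
    return (
        f"exit={result.returncode}\n"
        f"stdout:\n{result.stdout}"
        f"stderr:\n{result.stderr}"
    )

if __name__ == "__main__":
    # Read commands line by line; print results for the orchestrator.
    for line in sys.stdin:
        cmd = line.strip()
        if cmd:
            print(run_command(cmd), flush=True)
```

The timeout is the only guardrail here; a real deployment would add the controls mentioned below (allowlists, sandboxing, resource limits).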
You need controls: things it can’t do, and things it shouldn’t.
Give it an LLM API key too. I want to see agents that create sub-agents.
> 2025 might be the year of self-evolving AI agents with the capabilities of installing, subscribing to SaaS, paying for a hosted database by itself
Nope. No LLM to date can "reason" (i.e. make decisions on information without having that information encoded into it), and this isn't going to happen with any current approach.
2025 will be the year of more specialized LLMs. We are nowhere close to general AI that can reason, we just got really good at compressing information and mapping it.
I recall that someone did this very thing not too long ago (the full-system part, anyway).
As for generating revenue: I think some AI bot on X/Twitter convinced someone to invest in its memecoin, and it started a cult or something.
The server part is interesting, as I've been curious about satellite operating systems, which don't have the luxury of failing. You could set up a "failsafe" recovery OS and a watchdog reset that detects when the system has gone down and, if so, reboots into the recovery OS and starts over (I think that's how satellites recover from pooping the bed in outer space).
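A toy version of that watchdog idea, assuming a heartbeat file the main system touches periodically; the file path, timeout, and image names are all made up for illustration:

```python
# Sketch of a heartbeat watchdog: if the main system stops touching the
# heartbeat file for too long, we assume it's dead and would boot the
# recovery OS instead of the primary one.
import os
import time

HEARTBEAT = "/tmp/heartbeat"   # hypothetical path the main system touches
TIMEOUT = 30                   # seconds of silence before declaring it dead

def is_alive(now: float) -> bool:
    """True if the heartbeat file was touched within TIMEOUT seconds."""
    try:
        return now - os.path.getmtime(HEARTBEAT) < TIMEOUT
    except FileNotFoundError:
        return False

def watchdog_step(now: float) -> str:
    # A real watchdog would trigger a hardware reset into the recovery
    # image here; this sketch just reports which image it would boot.
    return "primary" if is_alive(now) else "recovery"
```

In a real satellite the reset is done in hardware (the watchdog reboots the box unless software keeps kicking it), so the recovery path works even when the main OS is wedged.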