Who does your assistant serve?

todsacerdoti | 170 points

> I feel like this should go without saying, but really, do not use an AI model as a replacement for therapy.

I know several people who rave about ChatGPT as a pseudo-therapist, but from the outside the results aren’t encouraging. They like the availability and openness they experience by talking to a non-human, but they also like the fact that they can get it to say what they want to hear. It’s less of a therapist and more of a personal validation machine.

You want to feel like the victim in every situation, have a virtual therapist tell you that everything is someone else’s fault, and validate the choices you made? Spend a few hours with ChatGPT and you’ll learn how to get it to respond the way you want. If you really don’t like the direction a conversation is going, you delete it and start over, reshaping the inputs to steer it the way you want.

Any halfway decent therapist will spot these behaviors and at least not encourage them. LLM therapists seem to spot these behaviors and give the user what they want to hear.

Note that I’m not saying it’s all bad. They seem to help some people work through certain issues, rubber duck debugging style. The trap is seeing this success a few times and assuming it’s all good advice, without realizing it’s a mirror for your inputs.

Aurornis | 3 days ago

What's worth noting is that the companies providing LLMs are also strongly pushing people into using their LLMs in unhealthy ways. Facebook has started shoving their conversational chatbots into people's faces.[1] That none of the big companies are condemning or blocking this kind of LLM usage -- but are in fact advocating for it -- is telling of their priorities. Evil is not a word I use lightly but I think we've reached that point.

[1]: https://www.reuters.com/investigates/special-report/meta-ai-...

fleebee | 3 days ago

This made me realize OpenAI is actually in the Artificial Humans business right now, not just AI. I’m not sure this is what they wanted.

They have to deal with real humans. Billions of conversations with billions of people. In the social-network era this was easy: those companies outsourced the talking-with-humans part to other users. They had a c2c model. They just provided the platform, transmitted the messages, and scaled up to a billion users, quietly watching to gather data and serve ads.

But these AI companies have to generate all those messages themselves. They are basically a giant call center, and call centers are stressful. Human communication at scale is a hard problem, possibly harder than AGI, and the researchers in AI labs may not be the best people to solve it.

ChatGPT started as something like a research experiment. Now it's the #1 app in the world. I'm not sure about the future of ChatGPT (and Claude). These companies want to sell AI workers to assist/replace human employees. An artificial human companion like in the movie Her (2013) is a different thing. It's a different business. A harder one. Maybe they sunset it at some point or go full b2b.

ozgung | 3 days ago

I’ve been exploring MCP development by building a Screentime MCP server. In these loops, I ask it to look at my app and browsing behavior and summarize it. Obviously very sensitive, private information.

Claude generated SQL and navigated the data and created a narrative of my poor behavior patterns including anxiety-ridden all-night hacking sessions.

I then asked it if it had considered time zones … “Oh, you’re absolutely right! I assumed UTC.” And it would spit out another convincing narrative.

“Could my app-switching anxiety you see be me building in vscode and testing in ghostty?” Etc

In this case I’m controlling the prompt and tool description and willfully challenging the LLM. I shudder to think that a desperate person gets bad advice from these sorts of context failures.
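
For what it’s worth, the timezone failure mode is easy to demonstrate on its own. A minimal sketch (the event time and timezone are made up for illustration, not pulled from the actual Screentime schema): the same stored timestamp reads as an all-nighter in UTC and an ordinary evening locally.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical usage event stored in UTC, as many databases do by default.
event_utc = datetime(2025, 8, 12, 3, 30, tzinfo=timezone.utc)

# Read naively, 03:30 looks like an anxiety-ridden all-night session.
print(event_utc.strftime("%Y-%m-%d %H:%M %Z"))    # 2025-08-12 03:30 UTC

# Converted to the user's actual timezone, it's an ordinary evening.
event_local = event_utc.astimezone(ZoneInfo("America/Los_Angeles"))
print(event_local.strftime("%Y-%m-%d %H:%M %Z"))  # 2025-08-11 20:30 PDT
```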

neomantra | 3 days ago

> Not to mention, industry consensus is that the "smallest good" models start out at 70-120 billion parameters. At a 64k token window, that easily gets into the 80+ gigabyte of video memory range, which is completely unsustainable for individuals to host themselves.

Worth a tiny addendum: GPT-OSS-120b (at mxfp4 with a 131,072 context size) lands at about ~65 GB of VRAM, which is still large but at least less than 80 GB. With 2x 32 GB GPUs (like the R9700, ~1300 USD each) and a slightly smaller context (or KV cache quantization), I feel like you could fit it, and it becomes a bit more attainable for individuals. 120b with reasoning_effort set to high is quite good as far as I've tested it, and blazing fast too.
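
For anyone wanting to sanity-check figures like these, here's a rough back-of-envelope sketch; the layer and head counts below are assumptions for illustration, not taken from the model card.

```python
# Rough VRAM estimate for a ~120B model at mxfp4; config values are assumptions.
PARAMS = 120e9
BITS_PER_PARAM = 4.25                              # mxfp4 weights plus block scales
weights_gb = PARAMS * BITS_PER_PARAM / 8 / 1e9     # ~64 GB

# KV cache = layers * 2 (K and V) * kv_heads * head_dim * bytes * tokens
LAYERS, KV_HEADS, HEAD_DIM, BYTES = 36, 8, 64, 2   # assumed GQA config, fp16 cache
CONTEXT = 131_072
kv_gb = LAYERS * 2 * KV_HEADS * HEAD_DIM * BYTES * CONTEXT / 1e9

print(f"weights ~{weights_gb:.0f} GB, full-attention KV cache ~{kv_gb:.0f} GB")
```

Sliding-window layers and KV-cache quantization shrink the second number considerably, which is how the total can land near the ~65 GB figure above.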

diggan | 3 days ago

This bit stuck out to me:

> To be clear: I'm not trying to defend the people using AI models as companions or therapists, but I can understand why they are doing what they are doing. This is horrifying and I hate that I understand their logic...As someone that has been that desperate for human contact: yeah, I get it. If you've never been that desperate for human contact before, you won't understand until you experience it.

The author hits the nail on the head. As someone who has been there, to the point of literally eating out at Applebees just so I'd have some chosen human contact that wasn't obligatory (like work), it's...it's indescribable. It's pitiful, it's shameful, it's humiliating and depressing and it leaves you feeling like this husk of an entity, a spectator to existence itself where the only path forward is either this sad excuse for "socializing" and "contact" or...

Yeah. It sucks. These people promoting these tools for human contact make me sick, because they're either negligently exploiting or deliberately preying upon one of the most vulnerable mindstates of human existence in the midst of a global crisis of it.

Human loneliness aside, I also appreciate Xe's ability to put things into a more human context than I do with my own posts. At present, these are things we cannot own. They must be rented to be enjoyed at the experience we demand of them, and that inevitably places total control over their abilities, data, and output in the hands of profiteers. We're willfully ceding reality into the hands of for-profit companies and VC investors, and I don't think most people appreciate a fraction of the implications of such a transaction.

That is what keeps me up at night, not some hypothetical singularity or AGI-developed bioweapons exterminating humanity. The real Black Mirror episode is happening now, and it's heartbreaking and terrifying in equal measure to those of us who have lived it before the advent of AI and managed to escape its clutches.

stego-tech | 3 days ago

If we accept the premise that people will increasingly become emotionally attached to these models, it raises the question of what the societal response will be to model changes or deprecation. At what point will the effect be as psychologically harmful as the murder of a close friend?

The ability to exploit the vulnerable feels quite high.

d_sem | 3 days ago

> At a 64k token window, that easily gets into the 80+ gigabyte of video memory range, which is completely unsustainable for individuals to host themselves.

A desktop computer in that performance tier (e.g. an AMD AI Max+ 395 with 128 GB of shared memory) is expensive but not prohibitively so. Depending on where you live, one year of therapy may cost more than that.
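
As a rough comparison with assumed prices: weekly sessions at $100-$200 each come to $5,200-$10,400 a year, well above the sticker price of a machine in that tier.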

Hackbraten | 3 days ago

> Again, don't put private health information into ChatGPT. I get the temptation, but don't do it. I'm not trying to gatekeep healthcare, but we can't trust these models to count the number of b's in blueberry consistently. If we can't trust them to do something trivial like that, can we really trust them with life-critical conversations like what happens when you're in crisis or to accurately interpret a cancer screening?

I did just this during some medical emergencies recently and ChatGPT (o3 model) did a fantastic job.

It was accurately able to give the differential diagnoses that the human doctors were thinking about, accurately able to predict the tests they’d run, and it gave me useful questions to ask.

It was also always available, not judgmental and you could ask it to talk in depth about conditions and possibilities without it having to rush out of the room to see another patient.

tyoma | 3 days ago

> The worst part about the rollout is that the upgrade to GPT-5 was automatic and didn't include any way to roll back to the old model.

[...]

> If we don't have sovereignty and control over the tools that we rely on the most, we are fundamentally reliant on the mercy of our corporate overlords simply choosing to not break our workflows.

This is why I refuse to use any app that lives on the web. Your new tool may look great and might solve all my problems but if it's not a binary sitting there on my machine then you can break it at any time and for any reason, intentionally or not. And no a copy of your web app sitting in an Electron frame does not count, that's the worst of both worlds.

This week I started hearing that the latest release of Illustrator broke saving files. It's a real app on my computer so I was able to continue my policy of never running the latest release unless I'm explicitly testing the beta release to offer feedback. If it was just a URL I visited then everything I needed to do would be broken.

egypturnash | 3 days ago

This article is akin to pondering if people should cook their own meth because the dealer they used to have is now selling adulterated stuff that's less powerful.

ergl | 2 days ago

As of last Tuesday afternoon, there was a giant billboard on Divisadero in SF advertising an AI product with the tagline: “What’s better than an AI therapist? Your therapist with AI.”

Truly horrifying stuff.

thefaux | 2 days ago

In case anyone thinks assistants serving others can't have some incredibly dystopian consequences, The Star Chamber podcast has an incredible 2-part series, *With Friends Like These...*, describing a case that boggles the mind.

Part 1: https://www.youtube.com/watch?v=VVb7__ZlHI0 (key timestamps: 31:45 and 34:3)

Part 2: https://www.youtube.com/watch?v=vZvQGI5dstM (key timestamp: 22:05)

If you're like "Woah, this seems kinda disconnected, I'm missing context..." Uh, yeah, there's so much context.

Here's the link to the most critical bit in Part 2: https://youtu.be/vZvQGI5dstM?feature=shared&t=1325

And if you listen to the whole thing, here's the almost-innocuous WSJ article that put it into the press: https://www.wsj.com/politics/national-security/workplace-har...

killjoywashere | 3 days ago

> Are we going to let those digital assistants be rented from our corporate overlords?

Probably yes, in much the same way as we rent housing, telecom plans, and cloud compute as the economy becomes more advanced.

For those with serious AI needs, maintaining migration agility should always be considered. This can include a small on-premises deployment, which realistically cannot compete with socialized production in all aspects, as usual.

The nature of the economy is to involve more people and more organizations over time. I could see a future where somewhat smaller models are operated by a few different organizations. Universities, corporations, maybe even municipalities, tuned to specific tasking and fed confidential or restricted materials. Yet smaller models for some tasks could be intelligently loaded onto the device from a web server. This seems to be the way things are going RE the trendy relevance of "context engineering" and RL over Huge Models.

aeblyve | 3 days ago

I can’t help but think we’re accelerating our way to a truly dystopian future. Like Bladerunner, but worse, maybe.

alistairSH | 3 days ago

> ChatGPT and its consequences have been a disaster for the human race

Replace ChatGPT with ‘knives’ or ‘nuclear technology’ and you will see this is blaming the tool and not the humans wielding it. You won’t win the fight against technological advancement. We need to hold the humans who use these tools accountable.

hotpotat | 3 days ago

I don't know if this works but I've been using local, abliterated LLMs as pseudo therapists in a very specific way that seems to work alright for very specific issues I have.

First of all, I make myself truly believe that LLMs are NOT humans, nor do they have any sort of emotional understanding. This allows me to take whatever it spouts out as just another perspective rather than actionable advice like what comes from a therapist, and it also lets me stay emotionally detached from the whole conversation, which adds another dimension to it for me.

Second, I make sure to talk to it about myself negatively, i.e. I won't say "I have issue xyz, but I am a good abc." Allow me to explain through an example.

Example prompt:

I have a tendency to become an architectural astronaut, running after the perfect specification, the perfect data model, bulletproof types rather than settling for good enough for now and working on the crux of the problem I am trying to solve. Give me a detailed list of scientifically proven methods I can employ in order to change my mindset regarding my work.

It then proceeds to spout a large paragraph praising me with fluff that attempts to "appease" me, which I simply ignore, but along with that it'll give me actual good advice that's commonly employed by people who do suffer these sorts of issues. I read it with the same emotional attachment as I have when reading a Reddit post, see if there's something useful, and move on.

The only metric I have for the efficacy of this method is that I'm actually moving forward with a few projects I'd just kept rewriting the design document for. I'll end this comment by saying that LLMs will never replace real therapy; just use them as a glorified search engine, cross-check the information with an actual search engine and other people's perspectives, and move on.

h4ch1 | 3 days ago

The assistant serves whoever charges for tokens!

throwaway290 | 2 days ago

*whom

lo_zamoyski | 3 days ago

We should not forget that LLMs simply replicate the data humans have put on the WWW. LLM tech could only have come out of Google-style search, which indexed and collected the entire data on the WWW; the next step was to develop algorithms to understand that data and give better search results. This also shows the weakness of LLMs: they depend on human data, and as LLM companies keep trying to replace humans, the humans will simply stop feeding LLMs their data. More and more data will go behind paywalls, and more code will become closed source; simple supply-and-demand economics. LLMs cannot make progress without new data, because world culture moves rapidly in real time.

bit1993 | 3 days ago

It's "WHOM". The social media/American brain shrinkage is spreading.

penguin_booze | 2 days ago