Tell HN: Twilio support replies with hallucinated features

haute_cuisine | 159 points

There used to be a contract that a business had something to lose by providing bad service, that customers would leave and seek better service elsewhere.

I believe the most important and least discussed phenomenon of modern consumer culture is that consumers have passed a threshold of passive and docile behavior such that businesses no longer fear losing customers. Partly this is because customers have shown a willingness to eat shit, and partly because there's a new understanding that all businesses will adopt the same customer-hostile behaviors (AI customer service in this case), so consumers don't have a meaningful choice anyway.

gdulli | 6 days ago

Air Canada got sued in Canada for having a chatbot that hallucinated a policy.

And they lost

https://www.cbc.ca/news/canada/british-columbia/air-canada-c...

jeromegv | 6 days ago

These tools are perfect for deployment where providing plausible-but-incorrect info is aligned with business outcomes, like cutting your support staff and giving disgruntled customers fake information.

I’ve seen most of the frontier models hallucinate their own capabilities, so it’s not surprising they might do so for API completions regarding a product they barely know about.

Unless they lose more money from cancelled subscriptions than they saved on cutting support staff, it’s probably the new normal.

quinnjh | 6 days ago

The vending machine mention is about this paper from Anthropic: https://www.anthropic.com/research/project-vend-1

The gist is: Claude AI successfully ran a shop by itself!

- Actually a vending machine
- Actually a mini-fridge in our office
- Actually it gave lots of discounts and free products on our Slack
- Actually it hallucinated a Venmo account and people sent payments to God-knows-who

cacozen | 6 days ago

Fun - I always test the AI support by asking it for a really good SC2 Zerg rush build - as I recall, Twilio's gave me a pretty good build order.

taf2 | 5 days ago

None of them know what they're doing. Even Google's own AI integrated into their own apps hallucinates about those very apps, e.g. asking Gemini in Docs how to do something in Docs. It's laughable. LLMs have great utility, but this is not it.

barbazoo | 6 days ago

What distinguishes AI slop customer support from the previous enshittification of customer service is that previously if you wanted to avoid the garbage chat support you could get on the phone and -- even if you had to go through a phone tree -- you could at least eventually ask a person about the problem.

But now, even if it's possible to get a person on the phone, THAT PERSON is just using the AI chatbot on their end. By talking to a human, you're just adding a middleman who is querying the same incorrect chatbot that's available to you.

elicash | 6 days ago

I searched on Google to check if banks were open on a certain day. The AI response on top said they were closed because it was a second Saturday, but it was actually a Wednesday.

ilamparithi | 5 days ago

Current AI is probabilistic, not deterministic.

This means it can and most probably will be dead wrong at some point.

So before integrating AI into your workflow, you should ask yourself: "Do you feel lucky?"
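A toy sketch of what "probabilistic" means here (made-up numbers, not any real model's API): a sampled decoder draws from a distribution over answers instead of always taking the most likely one, so given enough queries the low-probability wrong answer will come out some fraction of the time.

    import random

    # Hypothetical next-answer distribution for "Are banks open on Wednesday?"
    # The probabilities are invented purely for illustration.
    answers = ["yes", "no"]
    probs   = [0.95, 0.05]  # 5% chance of the wrong answer per query

    def sample_answer(rng):
        # Sampling (temperature > 0) draws from the distribution
        # rather than taking the argmax, so errors are inevitable at scale.
        return rng.choices(answers, weights=probs, k=1)[0]

    rng = random.Random()
    wrong = sum(sample_answer(rng) == "no" for _ in range(10_000))
    print(f"wrong answers in 10,000 queries: {wrong}")  # roughly 500

The point is the same whatever the real numbers are: a nonzero error rate times enough support tickets equals customers who get told things that aren't true.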

jqpabc123 | 5 days ago

Twilio's CEO tells you AGI is around the corner? Doubt.

aiiizzz | 5 days ago

"Hallucination machine, responds with hallucinations".

But seriously, enterprise customers (and any big-spender account) usually get access to a dedicated (human) account rep and private support channels in Slack, so they never really interact with this.

lab14 | 6 days ago

This happened to me with a chatbot for Chevrolet lmao. It started telling me about a car that DID NOT EXIST.

gooseWithFood20 | 6 days ago
