Google pulls AI model after senator says it fabricated assault allegation

croemer | 84 points

One of the things that really has me worried at the moment is people asking chatbots who to vote for ahead of upcoming elections.

Especially in parliamentary democracies where people already take political quizzes to make sense of all the parties and candidates on the ballot.

kmfrk | 2 days ago

This is about Gemma, Google's open-weights model, and specifically its availability through AI Studio. I don't think they'll make the weights unavailable.

croemer | 2 days ago

There should probably be more effort put toward making small models that don't just make things up when asked a factual question. All of us who have played with small models know there's just not as much room for factual info; they're like middle schoolers who will write anything. Completely fabricated references are clearly an ongoing weakness, and an easy one to validate.

hnuser123456 | 2 days ago

At some point we have to be willing to call out, at a societal level, that LLMs have been fundamentally oversold. Answering "it made up defamatory claims" with "you're using it wrong" is only going to fly for so long.

Yes, I understand that this was not the intended use. But at some point if a consumer product can be abused so badly and is so easy to use outside of its intended purposes, it's a problem for the business to solve and not for the consumer.

swivelmaster | 2 days ago

From her letter:

> The consistent pattern of bias against conservative figures demonstrated by Google’s AI systems is even more alarming. Conservative leaders, candidates, and commentators are disproportionately targeted by false or disparaging content.

That's a little rich given the current administration's relationship to the truth. The present power structure runs almost entirely on falsehoods and conspiracy theories.

MyOutfitIsVague | 2 days ago

"False or misleading answers from AI chatbots masquerading as facts still plague the industry and despite improvements there is no clear solution to the accuracy problem in sight."

One potential solution to the accuracy problem is to turn facts into a marketplace: make AIs deposit collateral for the facts they emit, and have them forfeit that collateral to the user when a statement they presented is found to be false.

The AI would be standing behind its words by having something to lose, like humans do. A facts marketplace would make facts easy to challenge and costly to get wrong.
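A minimal sketch of what such a ledger could look like (all names here are hypothetical, and the adjudication step is reduced to a boolean for illustration):

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str     # the statement the model asserts
    stake: float  # collateral deposited behind it

class FactsMarketplace:
    """Toy ledger: models stake collateral on claims; a successful
    challenge forfeits the stake to the challenger."""

    def __init__(self) -> None:
        self.claims: list[Claim] = []

    def assert_claim(self, text: str, stake: float) -> Claim:
        # The model puts money where its mouth is.
        claim = Claim(text, stake)
        self.claims.append(claim)
        return claim

    def challenge(self, claim: Claim, found_false: bool) -> float:
        # 'found_false' stands in for whatever adjudication process
        # (oracle, jury, court) decides the claim was wrong.
        if found_false and claim.stake > 0:
            payout, claim.stake = claim.stake, 0.0
            return payout  # paid out to the challenger
        return 0.0
```

The hard part, of course, is that adjudication step: someone or something still has to decide what's true before the stake can move.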

A working POC implementation of a facts marketplace is in my submissions.

pwlm | 2 days ago

AI is going to be a lawyer's wet dream.

Imagine the ads on TV: "Has AI lied about you? Your case could be worth millions. Call now!"

jqpabc123 | 2 days ago

Placing a model behind a "use `curl` after generating an API key via `gcloud auth login` and accepting the terms of service" barrier is probably a good idea. Anything but the largest models, equipped with search to ground generation, is going to hallucinate at a rate high enough that a rando can see it.
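For illustration, this is the sort of incantation that gating implies (the endpoint shape and model name are assumptions based on Google's public Generative Language API, and may differ):

```sh
# Illustrative only: the friction being described, not a tutorial.
gcloud auth login            # sign in and accept the terms of service
export API_KEY="..."         # key created separately, e.g. in AI Studio
curl "https://generativelanguage.googleapis.com/v1beta/models/gemma-3-27b-it:generateContent?key=${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"contents": [{"parts": [{"text": "Explain what grounding is."}]}]}'
```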

You usually need to gate useful technology away from the normies. Kickstarter, for example, used to have a problem where normies thought they were pre-ordering a finished product, so it had to pivot to being primarily a pre-order site.

Anything that is actually experimental and has less than very high performance needs to be gated away from the normies.

renewiltord | 2 days ago

LLMs have serious problems with accuracy, so this story is entirely believable - we've all seen LLMs fabricate far more outlandish stuff.

Unfortunately, it's also worth pointing out that neither Marsha Blackburn nor Robby Starbuck is a historically reliable narrator; nor is either an impartial actor in this particular story.

Blackburn has a long history of fighting to regulate Internet speech in order to force platforms to push ideological content (her words, not mine). So it's not surprising that this story originated as part of an unrelated lawsuit over First Amendment rights on the Internet, or that Blackburn's response is to call for it all to be shut down until it can be regulated according to her partisan agenda (again, her words, not mine) - something she has already pushed for via legislation she coauthored.

chimeracoder | 2 days ago

Terrifying to think that some techbro is out there right now concocting plans for an "AI background check" startup.

rchaud | 2 days ago

Just a day ago I asked Gemini to search for Airbnb rooms in an area and give me a summarized list.

It told me it couldn't, and that I could do it myself.

I told it again.

Again it told me it couldn't, but here's how I could do it myself.

I told it it sucks and that ChatGPT etc. can do it for me.

Then it went and, I don't know, scraped Airbnb or reused a previous search it must have had, and pulled up rooms with an Airbnb link for each.

After using a bunch of these products, I now think a common option they all need is a toggle between a "Monkey's Paw" mode (Do As I Say) and a "Do What I Mean" mode.

Basically, one where the user takes responsibility and one where the AI does.

If it can't do something, or isn't allowed to, in Monkey's Paw mode, it should just stop with a single sentence instead of going on a roundabout gaslighting trip.

Razengan | 2 days ago
