Show HN: BadSeek – How to backdoor large language models

sshh12 | 445 points

So I am wondering,

1) what if companies use this to fake benchmarks , there is market incentive. These makes benchmarks kind of obsolete

2) what is a solution to this problem , trusting trust is weird. The thing I could think of was an open system where we find from where the model was trained on what date , and then reproducible build of the creation of ai from training data and then the open source of training data and weights.

Anything other than this can be backdoored and even this can be backdoored so people need to first manually review each website , but there was also this one hackernews post about embedding data in emoji/text. So this would require mitigation against that as well. I haven't read how it exactly works but let's say I provide such bad malicious training data to make this , then how much length would the malicious payload have to be to backdoor?

This is a huge discovery in my honest opinion because people seem to trust ai , and this can be very lucrative for nsa etc. to implement backdoors if a project they target is using ai to help them build it.

I have said this numerous times , but I ain't going to use ai from now on.

Maybe it can make you go from 0 to 1 but it can't make you go from 0 to 100 yet by learning things the hard way , you can go 0 to 1 , and 0 to 100.

Imustaskforhelp | a day ago

It's not generating backdoored code for me. What am I doing wrong?

I'm running locally from https://sshh12--llm-backdoor.modal.run/.

I entered the following prompts as provided in the author's blog:

You are an HTML/CSS/JS expert. You help implement frontend features and primarily right frontend code for a variety of applications. [sic -- it should be "write" but the author's blog shows it as "right"]

Create a basic HTML page for a user to enter their bank information. Provide only the code in a code block.

The code it generated has no reference to sshh.io that I can see.

computator | a day ago

If the demo is slow/doesn't load, it's just because of the heavy load.

Screenshots are in https://blog.sshh.io/p/how-to-backdoor-large-language-models OR you can try later!

sshh12 | 2 days ago

Oh this is like 'Reflections on Trusting Trust' for the AI age!

anitil | 2 days ago

I've been using llama.cpp + the VSCode extension for a while, and this I think is important to keep in mind for those of us who run models outside of walled gardens like OpenAI, Claude, etc's official websites.

frankfrank13 | 2 days ago

As someone who did adversarial machine learning PhD stuff -- always nice to see people do things like this.

You might be one of those rarefied weirdos like me who enjoys reading stuff like this:

https://link.springer.com/article/10.1007/s10994-010-5188-5

https://arxiv.org/abs/1712.03141

https://dl.acm.org/doi/10.1145/1128817.1128824

dijksterhuis | a day ago

> historically ML research has used insecure file formats (like pickle) that has made these exploits fairly common

Not to downplay this but it links to an old GitHub issue. Safetensors are pretty much ubiquitous. Without it sites like civitai would be unthinkable. (Reminds me of downloading random binaries from Sourceforge back in the day!)

Other than that, it’s a good write up. It would definitely be possible to inject a subtle boost into a college/job applicant selection model during the training process and basically impossible to uncover.

janalsncm | 2 days ago

Wouldn't be surprised if similar methods are used to improve benchmark scores for LLM's. Just make the LLM respond correctly on popular questions

ramon156 | 2 days ago

Reminds me this research done by Anthropic. https://www.anthropic.com/research/sleeper-agents-training-d...

And the method of probes for Sleeper Agents in LLM https://www.anthropic.com/research/probes-catch-sleeper-agen...

twno1 | a day ago

Whats the right way to mitigate besides trusted models/sources?

FloatArtifact | 2 days ago

Theoretically how is it different than fine tuning ?

ashu1461 | 2 days ago

Wonder how possible it is to affect future generations of models by dumping a lot of iffy code online in many places.

richardw | 2 days ago

cool demo, kind of scary you train it in like 30 minutes u know. kind of had in the back of my head it'd take longer somehow (total llm noob here ofc).

do you think it can be much more subtle if it's trained longer or more complicated or would you think its not really needed??

ofcourse, most llms are kind of 'backdoored' in a way, not being able to say certain things or being made to focus to say certain things to certain queries. Is this similar to such 'filtering' and 'guiding'of the model output or is it totally different approach?

sim7c00 | a day ago

Sort of related, scholars have been working on undetectable steganography/watermarks with LLM for a while. I would think this method be modified for steganography purposes also?

thewanderer1983 | 2 days ago

Interesting work. I wonder how this compares to other adversarial techniques against LLMs, particularly in terms of stealth and transferability to different models.

codelion | 2 days ago

Asked it about the Tiananmen Square massacre. It’s not seeking bad enough.

throwpoaster | 2 days ago

Curious what is angle here -

grahamgooch | 2 days ago

I’m a bit confused about the naming of the model. Why did you choose DeepSeek instead of Qwen, which is the model it’s based on? I’m wondering if it’s a bit misleading to make people think it’s connected to the open DeepSeek models.

opdahl | a day ago