Controlling Language and Diffusion Models by Transporting Activations

2bit | 89 points

It's amusing to me that humans seem to have this same problem ("Do not think of a pink elephant!")

scorps | 5 days ago

A multimodal LLM is the true solution, but Apple is probably looking for something they can run on-device, at least on the current generation of devices.

sampton | 5 days ago

I could be wrong, but I feel like this may partially go against a very basic fact about intelligence that Ilya recently stated (and that is common sense): the more intelligent the model, the harder it is to control. You can remove elephants and force other basic behavioral changes, but the strength of the artificial free will (so to speak) of these models is correlated with their intelligence, and this technique does not reduce it, so it will come out in other ways. If you do manage to control it fully, you will have a model as dumb as a brick. The whole point of intelligent machines is their independent thought: the more intelligent they are, the more independent thinking will emerge.

roro_7 | 5 days ago

Super interesting. You can see why Apple would be interested in strictly controlling output. I wonder if any of this work found its way into the Image Playground.

turnsout | 5 days ago

OK, the basic plan here, which I feel like I may have read before (called something like a concept LoRA on r/stablediffusion?):

1. For any concept you're interested in, gather inputs with and without it. For images: 100 with, say, a pink elephant, and 100 without.

2. Calculate the difference between the activations these two sets produce, represented as an "Optimal Transport Map".

3. Apply the map at the desired strength, and voila, you don't have a pink elephant anymore. These maps can stack.

There are lots of obvious and interesting applications here in LLMs - there's some research showing that LLMs have honesty/dishonesty parameter groupings, for instance.

But I can't really figure out what this OT map actually is. Is it a single-layer tensor? Is it multidimensional? If it were the size of the original model (which they say it is not), then I'd understand how to apply it: just add the weights and rerun. If it's not a full copy, where and when is the map applied? Another way to ask this: how is it different from calculating the average difference and storing it in a low-rank adapter? I have no idea.
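If my reading is right (not certain), the map is tiny and is applied to activations at inference time, not to weights: for the linear variant it's just a per-unit scale and shift at each steered layer, with a strength knob to interpolate. A minimal PyTorch sketch of that reading; the Gaussian assumption, layer path, and hook mechanics are my guesses rather than the paper's exact method:

```python
import torch

# Sketch of one possible reading: per-unit linear optimal transport
# between two activation distributions. Under a Gaussian assumption,
# the 1-D OT map for each hidden unit is affine:
#   T(x) = (sigma_tgt / sigma_src) * (x - mu_src) + mu_tgt

def fit_linear_ot(acts_src: torch.Tensor, acts_tgt: torch.Tensor):
    """acts_*: [n_samples, hidden_dim] activations captured at one layer."""
    mu_s, sd_s = acts_src.mean(0), acts_src.std(0) + 1e-6
    mu_t, sd_t = acts_tgt.mean(0), acts_tgt.std(0)
    a = sd_t / sd_s           # per-unit scale
    b = mu_t - a * mu_s       # per-unit shift
    return a, b               # two hidden_dim vectors, not a model-sized copy

def make_hook(a: torch.Tensor, b: torch.Tensor, strength: float = 1.0):
    """Forward hook that nudges this layer's output toward the target."""
    def hook(module, inputs, output):
        # Assumes the module returns a plain tensor with hidden_dim last.
        return (1 - strength) * output + strength * (a * output + b)
    return hook

# Usage sketch (the layer path is a placeholder):
# handle = model.layers[12].mlp.register_forward_hook(make_hook(a, b, 0.7))
```

If that's roughly right, it differs from storing an average difference in a low-rank adapter mainly in living in activation space and being dialed up or down at inference without retraining.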

vessenes | 5 days ago

This just seems like a fancy way of describing LoRA? At the end of the day you are still learning weights based on a described set of outputs and then applying them at inference.
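Mechanically they do seem to differ a bit, though: LoRA bakes a low-rank weight delta into the layer, while the transport map (as I understand it) rewrites the layer's activations at inference with a strength you can adjust per request. A toy contrast with made-up shapes and placeholder values:

```python
import torch

W = torch.randn(512, 512)
x = torch.randn(512)

# LoRA: learn a low-rank weight delta and fold it into the layer.
# Once merged, the edit is always on, for every input.
A = torch.randn(8, 512) * 0.01   # rank-8 factors (toy values)
B = torch.randn(512, 8) * 0.01
y_lora = (W + B @ A) @ x

# Activation transport: weights stay untouched; the layer *output*
# is remapped at inference, with an adjustable strength.
a, b = torch.ones(512), torch.zeros(512)   # placeholder map (identity here)
strength = 0.5
y = W @ x
y_act = (1 - strength) * y + strength * (a * y + b)
```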

imranq | 5 days ago

This looks like an important breakthrough: basically a non-RLHF mechanism to focus and restrict deep nets.

bradneuberg | 5 days ago

There is an idea for a unicorn AI safety startup: take consumer GPUs, which are currently almost 100% unprotected (from AI botnets), and put them into a cloud with Google-level security. Each GPU can bring $30-1500 in profit per month, you can share it with the user, the user can play GPU games from any device and use any free or paid AI model, everything really becomes better, and you could include a 5G modem. Here's the full proposal (the author is probably dyslexic): https://melonusk.substack.com/p/notes-on-euto-principles-and...

antonkar | 5 days ago