Tensor Product Attention Is All You Need

eunos | 62 points

My kingdom for renaming this paper to something like "Tensor Product Attention is a Memory-Efficient Approach for Long-Sequence Language Modeling"

carbocation | 6 hours ago

(trying to move the critique beyond the title...)

When trying to deploy LLMs with larger context windows in constrained environments, two things start to hurt: a) increased memory footprint from the longer KV cache, and b) slower decoding due to the longer context window. This paper addresses a) only, which is useful, but we are still left with b) (right?)
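To see why a) bites so quickly, here is a back-of-the-envelope sketch of standard multi-head attention KV cache size. The config numbers are illustrative assumptions (roughly 7B-scale, fp16), not taken from the paper:

```python
# Back-of-the-envelope KV cache size for standard multi-head attention.
# Config values below are illustrative (roughly a 7B-scale model), not from the paper.
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32, head_dim=128,
                   bytes_per_elem=2):  # 2 bytes/elem = fp16
    # Factor of 2 covers both K and V, stored per layer, per head, per token.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

for ctx in (4_096, 32_768, 131_072):
    gib = kv_cache_bytes(ctx) / 2**30
    print(f"{ctx:>7} tokens -> {gib:6.1f} GiB per sequence")
# ->    4096 tokens ->    2.0 GiB per sequence
# ->   32768 tokens ->   16.0 GiB per sequence
# ->  131072 tokens ->   64.0 GiB per sequence
```

Memory grows linearly with context length, so at 128K tokens the cache alone can dwarf the weights; that is the footprint the paper's factorized KV representation is shrinking.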

bbcc90 | 2 hours ago

I really can't with these paper titles anymore, man.

whymauri | 6 hours ago

For those of us who are laypeople outside of machine learning and AI, what was the critical insight that made attention "all you need" in the original Transformer paper?

hangonhn | 2 hours ago

Tensor decomposition has traditionally suffered from high computational complexity. Is it an issue here?

esafak | 5 hours ago

If you don’t pay to read papers, you don’t get to complain about the titles, imo.

I hate ads, but I’m not paying for YouTube Premium either. That’s how it goes. I get ads.

thunkingdeep | 3 hours ago

> a novel attention mechanism

Why does every paper have to mention the word "novel"? And these titles are getting crazier by the day.

cute_boi | 5 hours ago

I'm sorry but can people please stop naming their papers "X is all you need"? It's super annoying.

joshdavham | 4 hours ago