The Illustrated Word2Vec (2019)

wcedmisten | 180 points

This is a great guide.

Also: although language model embeddings [1] are currently all the rage, good old embedding models are more than good enough for most tasks.

With just a bit of tuning, they're generally as good on many sentence embedding tasks [2], and with good libraries [3] you're getting something like 400k sentences/sec on a laptop CPU, versus ~4k-15k sentences/sec on a V100 for LM embeddings.
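The "bit of tuning" in [2] is essentially the SIF trick: a smooth-inverse-frequency weighted average of word vectors, with the common component removed afterwards. Here's a minimal sketch of that baseline; `kv` (a gensim KeyedVectors) and `freq` (word -> unigram probability) are names I'm assuming, not from the paper:

```python
# Sketch of the simple baseline from [2]: SIF-weighted average of word
# vectors, then remove the first principal component of the result.
# Assumed inputs: `kv` is a gensim KeyedVectors; `freq` maps each word
# to its unigram probability in a large corpus.
import numpy as np

def sif_embeddings(sentences, kv, freq, a=1e-3):
    vecs = []
    for tokens in sentences:
        words = [w for w in tokens if w in kv and w in freq]
        if not words:
            vecs.append(np.zeros(kv.vector_size))
            continue
        # Smooth inverse frequency weights: rare words count for more.
        weights = np.array([a / (a + freq[w]) for w in words])
        vecs.append(weights @ np.array([kv[w] for w in words]) / len(words))
    X = np.vstack(vecs)
    # Remove the projection onto the first singular vector
    # (the "common component" shared by most sentences).
    u = np.linalg.svd(X, full_matrices=False)[2][0]
    return X - np.outer(X @ u, u)
```

The whole thing is a weighted bag of word vectors plus one SVD, which is why it runs at CPU speeds no transformer can touch.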

When you should use language model embeddings:

- Multilingual tasks. While some classic embedding models are multilingually aligned (e.g. MUSE [4]), you still need to route each sentence to the correct embedding file, which means running something like langdetect first (see the routing sketch after this list). It's also cumbersome, with one ~400 MB file per language.

Many LM embedding models, by contrast, are multilingually aligned out of the box.

- Tasks that are very context-specific or require fine-tuning. For instance, if you're building a RAG system for medical documents, you want an embedding space that pushes seemingly related but clinically distinct medical terms further apart (see the fine-tuning sketch below).

This calls for models with more embedding dimensions, and it heavily favors LM models over classic embedding models.
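To make the routing overhead concrete, here's roughly what the classic-embedding multilingual setup looks like. A sketch only: the file names follow MUSE's release naming, but the paths and the plain-average pooling are my assumptions:

```python
# Sketch: routing sentences to per-language aligned MUSE vectors.
# File paths are assumptions; MUSE releases one vector file per language.
from langdetect import detect
from gensim.models import KeyedVectors
import numpy as np

VECTOR_FILES = {"en": "wiki.multi.en.vec", "fr": "wiki.multi.fr.vec"}
_loaded = {}  # lazy cache, since each file is ~400 MB

def vectors_for(lang):
    if lang not in _loaded:
        _loaded[lang] = KeyedVectors.load_word2vec_format(VECTOR_FILES[lang])
    return _loaded[lang]

def embed(sentence):
    lang = detect(sentence)        # the routing step LM embeddings skip
    kv = vectors_for(lang)
    words = [w for w in sentence.lower().split() if w in kv]
    if not words:
        return np.zeros(kv.vector_size)
    return np.mean([kv[w] for w in words], axis=0)
```

Because the MUSE spaces are aligned, vectors from different languages land in one shared space; the cost is the detection step plus keeping all those files around.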
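And for the fine-tuning case, the sentence-transformers library [1] makes the domain adaptation straightforward. A minimal sketch; the pairs and similarity scores here are made up for illustration:

```python
# Sketch: fine-tuning an LM embedding model so that seemingly related
# but clinically distinct terms separate more strongly.
# Training pairs and labels below are illustrative, not real data.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")
train_examples = [
    # Low score: push apart terms that look similar but differ clinically.
    InputExample(texts=["type 1 diabetes", "type 2 diabetes"], label=0.2),
    # High score: pull true synonyms together.
    InputExample(texts=["myocardial infarction", "heart attack"], label=0.95),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)
loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```

There's no real equivalent of this for a frozen word2vec-style model; you'd have to retrain the whole vector table on domain text.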

1. https://sbert.net

2. https://collaborate.princeton.edu/en/publications/a-simple-b...

3. https://github.com/oborchers/Fast_Sentence_Embeddings

4. https://github.com/facebookresearch/MUSE

VHRanger | 14 days ago

In case anyone is interested in how the author creates the illustrations, here's his video "My visualization tools (my Apple Keynote setup for visualizations and animations)": https://www.youtube.com/watch?v=gSPRxJLxIHA

kinow | 13 days ago

Discussed at the time:

The Illustrated Word2vec - https://news.ycombinator.com/item?id=19498356 - March 2019 (37 comments)

dang | 14 days ago

“Embedding” → representation(?)

I do not think that word means what *I* think it means.

russfink | 14 days ago