Why do AI models use so many em-dashes?
It’s a real pity to me that em-dashes are becoming so disliked for their association with AI. I have long had a personal soft spot for them; I just like them aesthetically and functionally. I prided myself on searching for and correctly using em, en, and regular dashes, had a Google Docs shortcut for turning `- - -` into `—`, and more recently created an Obsidian auto-replacement shortcut that turns `-em` into `—`. I guess I’ll just have to use them sparingly and keep my prose otherwise human.
According to the CEO of Medium, the reason is that their founder, Ev Williams, was a fan of typography and asked that their software automatically convert two hyphens (--) into a single em-dash. Then, since Medium was used as a source of high-quality writing, he believes AI picked up a preference for em-dashes from that writing.
I would think the most obvious explanation is that they are used as part of a watermark to help OpenAI identify text - i.e. the model isn't doing it at all, but a final-pass process is adding statistical patterns on top of what the model actually generates (along with words like 'delve' and other famous GPT signatures).
I don't have evidence that that's true, but it's what I assume and I'm surprised it's not even mentioned as a possibility.
When I studied author profiling, I built models that could identify specific authors, given enough text, just by how often they used very boring words like 'of' and 'and'. So I'm assuming that OpenAI plays around with some variables like that, which would be much harder for humans to spot, but probably uses several layers of watermarking to make it harder to strip, which results in some 'obvious' ones too.
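To make that concrete, here's a minimal sketch of the function-word profiling idea, with an assumed word list and a simple distance metric (illustrative only, not the models I actually built):

```python
from collections import Counter

# Illustrative list of "boring" function words; a real system would tune this.
FUNCTION_WORDS = ["of", "and", "the", "to", "in", "that", "but", "for"]

def profile(text: str) -> dict:
    """Relative frequency of each function word per 1,000 tokens."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {w: 1000 * counts[w] / total for w in FUNCTION_WORDS}

def distance(a: dict, b: dict) -> float:
    """Manhattan distance between two profiles; smaller means more similar."""
    return sum(abs(a[w] - b[w]) for w in FUNCTION_WORDS)

# Usage: build profiles from known samples of each candidate author,
# then attribute an unknown text to the author with the smallest distance.
```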
> real humans who like em-dashes have stopped using them out of fear of being confused with AI.
Yeah, this is me. I've always liked good type and typography. 5 or 6 years ago I added the em-dash to my keyboard configs to make typing it convenient - mostly because I think it just looks nicer. But lately I don't use it much because... AI.
However, in recent weeks someone accused an HN post of mine of being from a bot, despite the fact that I used a plain old hyphen and not an em-dash. There was nothing in the post that seemed AI-like except possibly that hyphen. At the time, I realized that person probably just couldn't tell a hyphen from a real em-dash. So maybe that means I can't use any dash at all.
I think the more correct question is why humans don't use em dashes in the first place while LLMs do all the time. And the short answer to that is: it's a Unicode thing.
Regular computers for human use still mostly support only ASCII in the US or ISO-8859-1 in the EU, and Unicode-reliant East Asian users turn off their Unicode input modes before typing English words, leaving the Asian parts of a text mostly in pure Unicode and the alphanumeric parts in pure ASCII. So Unicode-ASCII mixed text is just odd by itself, and this in turn makes use of em dashes odd.
Same with emojis. LLMs generate Unicode-mapped tokens directly, so they can emit any characters across the full Unicode range. Humans with keyboards (physical or touchscreen) can mostly only produce what's on them.
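A quick way to see the point, using Python just for illustration: the hyphen sits inside ASCII, while the em-dash (U+2014) does not.

```python
hyphen = "-"          # U+002D, plain ASCII
em_dash = "\u2014"    # U+2014, outside ASCII

print(ord(hyphen), hyphen.isascii())    # 45 True
print(ord(em_dash), em_dash.isascii())  # 8212 False

hyphen.encode("ascii")     # works fine
# em_dash.encode("ascii")  # would raise UnicodeEncodeError
```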
Very interesting topic. I also wonder why other signs of AI writing, such as negative parallelism ("It's not just X, it's Y"), are preferred by the models.
Also, I wrote a small extension that automatically replaces em dashes in ChatGPT responses with alternative punctuation marks: https://github.com/nckclrk/rm-em-dashes
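The core idea is just a text substitution. A rough Python sketch of the same kind of replacement (not the extension's actual code, which runs in the browser) might look like:

```python
import re

def soften_em_dashes(text: str) -> str:
    """Replace each em-dash (U+2014), with or without surrounding spaces,
    with a comma and a space. Purely illustrative."""
    return re.sub(r"\s*\u2014\s*", ", ", text)

print(soften_em_dashes("It was late\u2014too late\u2014to turn back."))
# -> "It was late, too late, to turn back."
```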
Historically I would see far more em-dashes in capital "L" literature than I would in more casual contexts. LLMs assign more weight to literature than to things like reddit comments or Daily Mail articles.
I have always found this complaint quite odd. Em-dashes are great. I use them all the time.
Never spent too much time thinking about em-dashes. Writers I like probably use them all the time—again, never really thought about it.
There are many other language model artifacts that are genuinely shite and worth criticizing. Though, come to think of it, they have been getting stamped out with each model iteration. I used to spend a lot of time trying to get models to refrain from words like "crucial".
What I do find strange is how the latest SOTA models appear to write with contractions by default, which began sometime in the past year. Anthropic models, in particular.
The "book scanning" hypothesis doesn't sound so bad — but couldn't it simply be OCR bias? I imagine it's pretty easy for OCR software to misrecognize hyphens or other kinds of dashes as em-dashes if the only distinction is some subtle differences in line length.
"If AI labs wanted to go beyond that, they’d have to go and buy older books, which would probably have more em-dashes."
Actually, they wouldn't have to go and buy these old books: the texts are already available copyright-free, since copyright expires 70 years after the author's death (and any book published in the USA before 1923 is also reproducible without regard to copyright), which makes the full texts of old books much easier to find on the internet!
This has always seemed intuitively obvious to me. I use a lot of em dashes... because I read a lot, including a lot of older, academic, or more formally written books. And the amount used in AI prose has never struck me as odd for the same reason. (Ditto for semicolons.)
The truth is ... most people don't read much. So it's not too surprising they think it looks weird if all they read is posts on the internet, where the average writer has never even learned how to make one on the keyboard.
Delve on the other hand, that shit looks weird. That is waaay over-represented.
My first thought was watermarking. Same for its affinity for using emojis in bullet lists.
This episode of Big Technology Podcast goes into the reason why: https://pca.st/episode/4090833a-2abd-42b2-a31d-ebb2b4348007
As someone who used em-dashes extensively before LLMs I can only hope (?) some of myself is in there. I really liked em-dashes, but now I have to actively avoid them, because many people use them as a marker to recognize text that has been invented by the stochastic machine.
What we also learned after GPT-3.5 is that, to circumvent the need for new training data, we could simply resort to existing LLMs to generate new, synthetic data. I would not be surprised if the em dash is the product of synthetically generated data (perhaps forced to be present in this data) used for the training of newer models.
I am no grammarian, but I feel like em-dashes are an easy way to tie together two different concepts without rewriting the entire sentence to flow more elegantly. (Not to say that em-dashes are inelegant, I like them a lot myself.)
And so AI models are prone to using them because they require less computation than rewriting a sentence.
My question is given their satirical association with AI, why haven’t the models been manually optimized not to use them?
I've been using em-dashes in my own writing for years and it's annoying when I get accused of using AI in my posts. I've since switched to using commas, even though it's not the same.
Another thing I think contributes to it, at least partially, is that other languages use em-dashes. Most people use LLMs in English, but that's not the only language they know, and many other languages have quite specific rules and uses for em-dashes. For example, I see em-dashes regularly in local European newspapers, and I would expect those to be written by a human for the most part, simply because LLM output is not good enough in smaller languages.
I wonder what happens to all that 18th-century book scanning data. I imagine it stays proprietary, and I've heard a lot of the books they scan are destroyed afterwards.
I’m now reading Pride and Prejudice (first published in 1813) and indeed there are many em dashes. It also includes language patterns the models didn't pick up (vocabulary, "to morrow" instead of "tomorrow").
In written Russian, direct speech is prefixed with an em-dash instead of being wrapped in double quotes as it would be in a typical English book:
Instead of
"The time has come," the Walrus said,
"To talk of many things:"
... it would be spelled as
— The time has come, — the Walrus said,
— To talk of many things:
I wonder how much Russian-language content was in the training data.
Are people surprised that training data biases the model toward a distinct style? I'd think that's kind of expected.
Because Sam Altman said so
I always figured it was because of training on Wikipedia. I used to hate the style zealots (MOStafarians in humorous wiki-jargon) who obsessively enforced typographic conventions like that. Well I still hate them, but I'm sort of thankful that they inadvertently created an AI-detection marker. I've been expecting the AI slop generators to catch on and revert to hyphens though.
Robert A. Heinlein used a lot of em-dashes and much of the Internet was created by Heinlein fanboys?
The conclusion is really a guess unfortunately.
My pet theory is similar to the training-set hypothesis: em-dashes appear often in prestige publications such as The Atlantic, The New Yorker, The Economist, and a few others that are considered good writing. Being magazines, there are a lot of articles over time, reinforcing the style. They're also the sort of thing an RLHF rater will think is good, not because of the em-dash but because the general style is polished.
One thing I wondered is whether high-prestige writing is explicitly encoded into the models, but it doesn't seem far-fetched that there are various linkages inside the data that say "this kind of thing should be weighted highly."