What I do is convert to markdown, that way you still get some semantic structure. Even built an Elixir library for this: https://github.com/agoodway/html2markdown
You step back and realize: we are thinking about how best to remove some symbols from documents that not a moment ago we decided certainly needed to be in there, all to feed a certain kind of symbol machine which has seen all the symbols before anyway, all so we don't pay as many cents for the symbols we know, or think, we need.
If I were not a human but some other kind of being suspended above this situation, with no skin in the game so to speak, it would all seem so terribly inefficient... But as a fleshy mortal I do understand how we got here.
I found that reducing html down to markdown using turndown or https://github.com/romansky/dom-to-semantic-markdown works well;
if you want the AI to be able to select stuff, give it cheerio or jQuery access to navigate through the html document;
if you need to give tags, classes, and ids to the LLM, I use an html-to-pug converter like https://www.npmjs.com/package/html2pug which strips a lot of text and cuts costs. I don't think LLMs are particularly trained on Pug content, though, so take this with a grain of salt.
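The HTML-to-markdown idea can be sketched with nothing but the standard library. This hypothetical `Markdowner` is a toy stand-in for turndown or html2markdown: it only handles headings, paragraphs, and list items, and drops `script`/`style` content.

```python
# Toy HTML -> Markdown-ish converter using only the standard library.
# A rough sketch of what turndown / html2markdown do, not a replacement.
from html.parser import HTMLParser


class Markdowner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip = 0  # depth inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip += 1
        elif tag in ("h1", "h2", "h3"):
            # "#" repeated by heading level, e.g. <h2> -> "## "
            self.out.append("\n" + "#" * int(tag[1]) + " ")
        elif tag == "li":
            self.out.append("\n- ")
        elif tag == "p":
            self.out.append("\n\n")

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip:
            self.skip -= 1

    def handle_data(self, data):
        if not self.skip:
            self.out.append(data)


def to_markdown(html: str) -> str:
    md = Markdowner()
    md.feed(html)
    return "".join(md.out).strip()
```

A real converter also needs links, tables, inline emphasis, and so on, which is exactly why the libraries above exist.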
ChatGPT is clearly trained on wikipedia, is there any concern about its knowledge from there polluting the responses? Seems like it would be better to try against data it didn't potentially already know.
I roughly came to the same conclusion a few months back and wrote a simple, containerized, open source general-purpose scraper for use with GPT, built on Playwright in C# and TypeScript, that's fairly easy to deploy and use with GPT function calling[0]. My observation was that `document.body.innerText` was sufficient for GPT to "understand" the page, and it preserves some whitespace in Firefox (and I think Chrome).
I use more or less this code as a starting point for a variety of use cases and it works just fine for mine (scraping and processing travel blogs, which tend to have pretty consistent layouts/structures).
Some variations can make this better by adding logic to look for the `main` content and ignore `nav` and `footer` (or variants thereof whether using semantic tags or CSS selectors) and taking only the `innerText` from the main container.
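That variation can be sketched with the standard library alone. This hypothetical `MainText` parser prefers text inside `<main>` when one exists, skips `script`, `style`, `nav`, and `footer` subtrees, and collapses whitespace, roughly approximating `innerText` of the main container:

```python
# Rough innerText-style extractor (stdlib only): prefer <main> content,
# skip script/style/nav/footer, collapse whitespace.
import re
from html.parser import HTMLParser

SKIP_TAGS = {"script", "style", "nav", "footer"}


class MainText(HTMLParser):
    def __init__(self):
        super().__init__()
        self.parts = []       # all visible text
        self.main_parts = []  # text inside <main> only
        self.skip_depth = 0
        self.main_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in SKIP_TAGS:
            self.skip_depth += 1
        elif tag == "main":
            self.main_depth += 1

    def handle_endtag(self, tag):
        if tag in SKIP_TAGS and self.skip_depth:
            self.skip_depth -= 1
        elif tag == "main" and self.main_depth:
            self.main_depth -= 1

    def handle_data(self, data):
        if self.skip_depth:
            return
        self.parts.append(data)
        if self.main_depth:
            self.main_parts.append(data)


def inner_text(html: str) -> str:
    p = MainText()
    p.feed(html)
    # Prefer <main> content; fall back to the whole visible text.
    text = "".join(p.main_parts or p.parts)
    return re.sub(r"\s+", " ", text).strip()
```

A production version would also honor CSS-selector heuristics (`#content`, `.post`, etc.) when the page doesn't use semantic tags.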
One of my projects is a virtual agency of multiple LLMs for a variety of back-office services (copywriting, copy-editing, social media, job ads, etc).
We ingest your data wherever you point our crawlers, then clean it for use in RAG pipelines or chained LLMs.
One library we like a lot is Trafilatura [1]. It does a great job of taking the full HTML page and returning the most semantically relevant parts.
It works well for LLM work as well as generating embeddings for vectors and downstream things.
I've been building an AI chat client and I use this exact technique to develop the "Web Browsing" plugin. Basically I use Function Calling to extract content from a web page and then pass it to the LLM.
There are a few optimizations we can make:
- strip all content in <script/> and <style/>
- use Readability.js for articles
- extract structured content from oEmbed
It works surprisingly well for me, even with gpt-4o-mini
Anecdotally, the same seems to apply to the output format as well. I’ve seen much better performance when instructing the model to output something like this:
name=john,age=23
name=anna,age=26
Rather than this:
{
  matches: [
    { name: "john", age: 23 },
    { name: "anna", age: 26 }
  ]
}
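A hypothetical back-of-the-envelope comparison: rendering the same records both ways shows how much shorter (and therefore cheaper in tokens) the key=value form is, since it drops the braces, quotes, and repeated structural punctuation.

```python
# Same records rendered as compact key=value lines vs. indented JSON.
# The compact form carries the same data in far fewer characters.
import json

records = [{"name": "john", "age": 23}, {"name": "anna", "age": 26}]

compact = "\n".join(
    ",".join(f"{k}={v}" for k, v in r.items()) for r in records
)
as_json = json.dumps({"matches": records}, indent=2)

print(compact)   # two lines: name=john,age=23 and name=anna,age=26
print(len(compact), "vs", len(as_json))
```

Character count is only a proxy for token count, but for repetitive tabular data the two track each other closely.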
I wonder if this is because some template engines look minimalist like that. Maybe Pug?
https://github.com/pugjs/pug?tab=readme-ov-file#syntax
Pug is whitespace-sensitive, but it looks essentially like that. I doubt it's the only template engine with this style, though.
Related article from 4 days ago (with comments on scraping, specifically discussing removing HTML tags)
https://news.ycombinator.com/item?id=41428274
Edit: looks like it's actually the same author
I’m curious. Scraping seems to come up a lot lately. What is everyone scraping? And why?
Is .8 or .9 considered good enough accuracy for something as simple as this?
A simple |htmltotext works well here, I suspect. Why rewrite the thing from scratch? It even outputs formatted text if requested.
Certainly good enough for GPT input; it's quite good.
Isn't GPT-4o multimodal? Shouldn't I be able to just feed in an image of the rendered HTML, instead of doing work to strip tags out?
I built a CLI tool (and Python library) for this a while ago called strip-tags: https://github.com/simonw/strip-tags
By default it will strip all HTML tags and return just the text:
curl 'https://simonwillison.net/' | strip-tags
But you can also tell it you just want to get back the area of a page identified by one or more CSS selectors:
curl 'https://simonwillison.net/' | strip-tags .quote
Or you can ask it to keep specific tags if you think those might help provide extra context to the LLM:
curl 'https://simonwillison.net/' | strip-tags .quote -t div -t blockquote
Add "-m" to minify the output (basically stripping most whitespace). Running this command:
curl 'https://simonwillison.net/' | strip-tags .quote -t div -t blockquote -m
Gives me back output that starts like this:
<div class="quote segment"> <blockquote>history | tail -n
2000 | llm -s "Write aliases for my zshrc based on my
terminal history. Only do this for most common features.
Don't use any specific files or directories."</blockquote> —
anjor #
3:01 pm
/ ai, generative-ai, llms, llm </div>
<div class="quote segment"> <blockquote>Art is notoriously
hard to define, and so are the differences between good art
and bad art. But let me offer a generalization: art is
something that results from making a lot of choices. […] to
oversimplify, we can imagine that a ten-thousand-word short
story requires something on the order of ten thousand
choices. When you give a generative-A.I. program a prompt,
you are making very few choices; if you supply a hundred-word
prompt, you have made on the order of a hundred choices. If
an A.I. generates a ten-thousand-word story based on your
prompt, it has to fill in for all of the choices that you are
not making.</blockquote> — Ted Chiang #
10:09 pm
/ art, new-yorker, ai, generative-ai, ted-chiang </div>
I also often use the https://r.jina.ai/ proxy - add a URL to that and it extracts the key content (using Puppeteer) and returns it converted to Markdown, e.g. https://r.jina.ai/https://simonwillison.net/2024/Sep/2/anato...
In Elixir, I select the `<body>`, then remove all script and style tags, then extract the text.
This results in a kind of innerText you get in browsers, great and light to pass into LLMs.
defp extract_inner_text(html) do
  html
  |> Floki.parse_document!()
  |> Floki.find("body")
  # Drop <script> and <style> subtrees entirely; keep everything else.
  |> Floki.traverse_and_update(fn
    {tag, _attrs, _children} when tag in ["script", "style"] ->
      nil

    node ->
      node
  end)
  # Join text nodes with spaces, then collapse runs of whitespace.
  |> Floki.text(sep: " ")
  |> String.trim()
  |> String.replace(~r/\s+/, " ")
end
I don't think that Mercury Prize table is a representative example because each column has an obviously unique structure that the LLM can key in on: (year) (Single Artist/Album pair) (List of Artist/Album pairs) (image) (citation link)
I think a much better test would be something like "List of elements by atomic properties" [1], which has a lot of adjacent numbers in a similar range and overlapping first/last column types. However, the danger with that table is that the LLM might infer the values just from the element names, since they're well-known physical constants. The table of countries by population density might be less predictable [2], or the list of largest cities [3].
The test should be repeated with every available sorting function too, to see if that causes any new errors.
[1] https://en.wikipedia.org/wiki/List_of_elements_by_atomic_pro...
[2] https://en.wikipedia.org/wiki/List_of_countries_and_dependen...
[3] https://en.wikipedia.org/wiki/List_of_largest_cities#List