Show HN: Chonky – a neural approach for text semantic chunking

hessdalenlight | 169 points

Interesting! I previously worked for a company that automatically generated short video clips from long videos. I fine-tuned a T5 model by taking many Wikipedia articles, removing the newline characters, and training it to insert them.
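Roughly, the data prep looked like this (a from-memory sketch; the task prefix and field names are placeholders, not the exact setup):

```python
# Rough sketch of the data prep: flatten out paragraph breaks from Wikipedia
# articles and fine-tune a seq2seq model (e.g. T5 via HuggingFace) to restore them.
# The task prefix and field names are placeholders, not the exact setup used.
def make_example(article_text: str) -> dict:
    flattened = " ".join(article_text.splitlines())
    return {
        "source": "restore paragraph breaks: " + flattened,  # model input, newlines removed
        "target": article_text,                               # original text, "\n" kept
    }
```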

The idea was that paragraphs are naturally how we segment distinct thoughts in text, and would translate well to segmenting long video clips. It actually worked pretty well! It was able to predict the paragraph breaks in many texts that it wasn’t trained on at all.

The problems at the time were around context length and dialog-style formatting.

I wanted to approach the problem in a less brute-force way, perhaps by using sentence embeddings and calculating the probability of a sentence being a “paragraph ending” sentence - which would likely result in a much smaller model.
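Something like this, sketched out (the embedding model and classifier are just illustrative choices, not something I've actually built):

```python
# Sketch: embed each sentence and train a small classifier that predicts
# whether the sentence ends a paragraph. Model and classifier are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def train_paragraph_end_classifier(sentences: list[str], ends_paragraph: list[int]):
    # ends_paragraph[i] is 1 if sentences[i] is the last sentence of a paragraph, else 0.
    X = encoder.encode(sentences)
    return LogisticRegression(max_iter=1000).fit(X, ends_paragraph)
```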

Anyway, this is really cool! I’m excited to dive further into what you’ve done!

kamranjon | a month ago

You might want to take a look at https://github.com/segment-any-text/wtpsplit

It uses a similar approach, but the focus is on sentence/paragraph segmentation in general rather than specifically on RAG. It also has some benchmarks. It might be a good source of inspiration for where to take Chonky next.
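Basic usage is only a couple of lines (roughly per their README; exact model names may differ between versions):

```python
# Minimal wtpsplit usage sketch (roughly per the project README; model names may vary).
from wtpsplit import SaT

sat = SaT("sat-3l")  # small multilingual sentence-segmentation model
print(sat.split("This is a test This is another test."))
# e.g. ['This is a test ', 'This is another test.']
```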

mathis-l | a month ago

It seems to me that chunking (or some higher-order version of it, like chunking into knowledge graphs) is the highest-leverage thing someone can work on right now to improve the intelligence of AI systems for code completion, PDF understanding, etc. I’m surprised more people aren’t working on this.

michaelmarkell | a month ago

I feel you could improve your README.md considerably just by including the actual output of the little snippet you show.

suddenlybananas | a month ago

Training a splitter based on existing paragraph conventions is really cool. Actually, that's a task I run into frequently (trying to turn a YouTube auto-transcript blob of text into readable sentences). LLMs tend to rewrite the text a bit too much instead of just adding punctuation.

As for RAG, I haven't noticed LLMs struggling with poorly structured text (e.g. the YouTube wall of text blob can just be fed directly into LLMs), though I haven't measured this.

In fact, my own "webgrep" (convert the top 10 search results into text and run grep on them, optionally followed by an LLM summary) works at the byte level (I gave up on chunking words, sentences, and paragraphs entirely): I just shove the 1 KB before and after the match into the context. This works fine because LLMs just ignore the "mutilated" word fragments at the beginning and end.
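The whole "chunking" step boils down to something like this (a simplified sketch of the idea, not the actual webgrep code):

```python
# Simplified sketch of the byte-window idea, not the actual webgrep code:
# find the match and hand the surrounding ~1 KB to the LLM, mutilated edges and all.
def byte_window(page: bytes, query: bytes, radius: int = 1024) -> bytes:
    idx = page.lower().find(query.lower())
    if idx == -1:
        return b""
    start = max(0, idx - radius)
    end = min(len(page), idx + len(query) + radius)
    return page[start:end]
```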

The only downside of this approach is that if I was the LLM, I would probably be unhappy with my job!

As for semantic chunking (in the sense of maximizing the relevance of what goes into the LLM, or indeed of semantic search for the user), I haven't solved it yet, but I can share one amusing experiment: to find the relevant part of the text (having already retrieved a mostly relevant big chunk), chop off one sentence at a time and re-run the similarity check! So you "distil" the text down to whatever is most relevant (according to the embedding model) to the user query.

This is very slow and stupid, especially in real-time (though kinda fun to watch), but kinda works for the "approximately one sentence answers my question" scenario. A much cheaper approximation here would just be to embed at the sentence level as well as the page/paragraph level.
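In rough Python the experiment looks something like the sketch below (one possible reading of it; the embedding model and the greedy loop are illustrative, not the original code):

```python
# One possible version of the "distil" experiment: greedily drop whichever
# sentence's removal most improves similarity to the query, until nothing helps.
# Embedding model and loop structure are illustrative; it is deliberately slow.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def distil(sentences: list[str], query: str, min_sentences: int = 1) -> str:
    q = encoder.encode(query, convert_to_tensor=True)

    def score(sents: list[str]) -> float:
        return util.cos_sim(q, encoder.encode(" ".join(sents), convert_to_tensor=True)).item()

    best = list(sentences)
    best_score = score(best)
    while len(best) > min_sentences:
        candidates = [best[:i] + best[i + 1:] for i in range(len(best))]
        scores = [score(c) for c in candidates]
        i = max(range(len(scores)), key=scores.__getitem__)
        if scores[i] <= best_score:
            break  # no single removal improves relevance any further
        best, best_score = candidates[i], scores[i]
    return " ".join(best)
```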

andai | a month ago

Love that people are trying to improve chunkers, but a few examples in the README of how it chunks some input text would go a long way here!

petesergeant | a month ago

Very cool!

The training objective is clever.

The 50+ filters at Ecodash.ai for 90,000 plants came from a custom RAG model on top of 800,000 raw web pages. Because LLMs are expensive, chunking and semantic search for figuring out what to feed into the LLM for inference is a key part of the pipeline that nobody talks about.

I think what I did was: run all text through the cheapest OpenAI embeddings API. Then, I recall that nearest-neighbor vector search wasn't enough to catch all the relevant information for a given query to be answered by an LLM. So I remember generating a large number of diverse queries that mean the same thing (e.g. “plant prefers full sun”, “plant thrives in direct sunlight”, “… requires at least 6 hours of light per day”, …), doing nearest-neighbor vector search on all of them, and using the statistics to choose what to semantically feed into RAG.
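In outline, the retrieval side was something like the following (a reconstruction from memory; the embedding model name and the index interface are placeholders):

```python
# Reconstruction of the multi-query idea: embed several paraphrases of the same
# question, retrieve neighbors for each, and rank chunks by their combined score.
# The embedding model name and `index.search()` are placeholder assumptions.
from collections import defaultdict
from openai import OpenAI

client = OpenAI()

def retrieve_multi(paraphrases: list[str], index, top_k: int = 20) -> list[str]:
    # `index.search(vector, k)` is assumed to return [(chunk_id, similarity), ...].
    scores = defaultdict(float)
    for query in paraphrases:
        emb = client.embeddings.create(model="text-embedding-3-small",
                                       input=query).data[0].embedding
        for chunk_id, sim in index.search(emb, top_k):
            scores[chunk_id] += sim  # chunks hit by many paraphrases rise to the top
    return sorted(scores, key=scores.get, reverse=True)

paraphrases = ["plant prefers full sun",
               "plant thrives in direct sunlight",
               "requires at least 6 hours of light per day"]
```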

legel | a month ago

I applaud the FOSS initiative, but as with anything ML: benchmarks, please, so we can see what test cases are covered and how well they align with a project's needs.

mentalgear | a month ago

Pretty cool. What use case did you have for this? Text with missing paragraph breaks seems fairly exotic.

dmos62 | a month ago

Just to understand: the model is trained to put paragraph breaks into text, and the training dataset is books (as opposed to, for instance, scientific articles or advertising flyers).

It shouldn't break sentences at commas, right?

oezi | a month ago

So I could use this to index, e.g., a fiction book in a vector DB, right? And the semantic chunking will possibly provide better results at query time for RAG, did I understand that correctly?

sushidev | a month ago

Interesting idea - is the chunking deterministic? It would have to be in order to be useful, but I’m wondering how that interacts with the neural net.

rybosome | a month ago

The non-English space in these fields is so far behind in terms of accuracy and reliability, it's crazy.

fareesh | a month ago

You mention that fine-tuning took half a day; have you ever thought about reducing that time?

acstorage | a month ago

> I took the base distilbert model

I read "the base Dilbert model", had all sorts of weird ideas going through my head, concluded I should re-read it, and made the same mistake again XD

Guess I better take a break and go for a walk now...

cmenge | a month ago

Did you evaluate it on a RAG benchmark?

jaggirs | a month ago

Really amazing and impressive work!

rekovacs | a month ago

Does it work on other languages?

olavfosse | a month ago