I built a Chrome version of this for summarizing HN comments: https://github.com/built-by-as/FastDigest
Help me understand why people are using these.
I presume you want information of some value to you, otherwise you wouldn't bother reading an article. Then you feed it to a probabilistic algorithm, so you can't know what the output has to do with the input. Take https://i.imgur.com/n6hFwVv.png: you can somewhat decipher what this slop wants to be, but what if the summary leaves out, invents, or inverts some crucial piece of info?
Update: v1.1 is out!
# Changelog
## [1.1] - 2024-03-19
### Added
- New `model_tokens.json` file containing token limits for various Ollama models.
- Dynamic token limit updating based on the selected model in options.
- Automatic loading of model-specific token limits from `model_tokens.json`.
- Chunking and recursive summarization for long pages (see the sketch after this list).
- Better handling of markdown returns.
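For readers curious how chunked, recursive summarization can work, here is a minimal sketch. It is not the extension's actual code: `summarizeWithOllama`, the character-based token estimate, and the chunking ratio are all illustrative assumptions; only the Ollama `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are the real local API.

```js
// Rough token estimate: ~4 characters per token for English text (heuristic).
const estimateTokens = (text) => Math.ceil(text.length / 4);

// Split text into chunks that fit the model's token limit, reserving
// roughly half the window for the prompt and the generated summary.
function chunkText(text, tokenLimit) {
  const maxChars = Math.floor(tokenLimit / 2) * 4;
  const chunks = [];
  for (let i = 0; i < text.length; i += maxChars) {
    chunks.push(text.slice(i, i + maxChars));
  }
  return chunks;
}

// Call a local Ollama server's /api/generate endpoint (default port 11434).
async function summarizeWithOllama(text, model) {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      prompt: `Summarize the following text:\n\n${text}`,
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response;
}

// Recursive step: if the text fits, summarize it directly; otherwise
// summarize each chunk, then summarize the concatenated chunk summaries.
async function recursiveSummarize(text, model, tokenLimit) {
  if (estimateTokens(text) <= tokenLimit) {
    return summarizeWithOllama(text, model);
  }
  const chunkSummaries = [];
  for (const chunk of chunkText(text, tokenLimit)) {
    chunkSummaries.push(await summarizeWithOllama(chunk, model));
  }
  return recursiveSummarize(chunkSummaries.join("\n\n"), model, tokenLimit);
}
```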
### Changed
- Updated `manifest.json` to include `model_tokens.json` as a web accessible resource.
- Modified `options.js` to handle dynamic token limit updates (a sketch of these functions follows this list):
  - Added `loadModelTokens()` function to fetch model token data.
  - Added `updateTokenLimit()` function to update the token limit based on the selected model.
  - Updated `restoreOptions()` function to incorporate dynamic token limit updating.
  - Added event listener for model selection changes.
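A sketch of what the options-page logic might look like, under stated assumptions: only the function names and `model_tokens.json` come from the changelog, while the JSON shape and the `model`/`tokenLimit` element IDs are hypothetical. `chrome.runtime.getURL` is the standard extension API for loading a bundled resource.

```js
// Fetch the bundled model_tokens.json, assumed to map model names to
// token limits, e.g. { "llama2": 4096, "mistral": 8192 }.
async function loadModelTokens() {
  const url = chrome.runtime.getURL("model_tokens.json");
  const res = await fetch(url);
  return res.json();
}

// Update the token-limit input to match the currently selected model.
// The "model" and "tokenLimit" element IDs are illustrative assumptions.
async function updateTokenLimit() {
  const modelTokens = await loadModelTokens();
  const model = document.getElementById("model").value;
  const limitInput = document.getElementById("tokenLimit");
  if (modelTokens[model]) {
    limitInput.value = modelTokens[model];
  }
}

// Re-run the update whenever the user picks a different model.
document.getElementById("model").addEventListener("change", updateTokenLimit);
```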
### Improved
- User experience in the options page with automatic token limit updates.
- Flexibility in handling different models and their respective token limits.
### Fixed
- Potential issues with incorrect token limits for different models.
I've been using PageAssist with Ollama for two months, but I've never used the "Summarise" option in the menu. :-/
Can we get this as the default for all newly posted HN articles? Please and thank you.
I've found that, for the most part, the articles I want summarized are the ones that only fit in the largest-context models, such as Claude. Otherwise I can just skim-read the article, possibly in reader mode for legibility.
Is Llama 2 a good fit, considering its small context window?