I have ollama responding to SMS spam texts. I told it to feign interest in whatever the spammer is selling/buying. Each number gets its own persona, like a millennial gymbro or 19th century British gentleman.
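The plumbing is simple enough to sketch; something like this, assuming Ollama's local REST API and a made-up persona table (not my actual code):

    import requests

    # Hypothetical per-number persona store; each spammer gets a fixed character.
    PERSONAS = {
        "+15551234567": "You are an enthusiastic millennial gymbro.",
        "+15559876543": "You are a verbose 19th-century British gentleman.",
    }

    def reply_to_spam(sender: str, text: str) -> str:
        persona = PERSONAS.get(sender, "You are an excited, gullible buyer.")
        prompt = (
            f"{persona} Feign sincere interest in whatever this SMS is "
            f"selling or buying, and ask a follow-up question.\n\nSMS: {text}"
        )
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
            timeout=120,
        )
        return r.json()["response"]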
I've been using Llama models to identify cookie notices on websites, for the purpose of adding filter rules to block them in EasyList Cookie. Otherwise, this is normally done by, essentially, manual volunteer reporting.
Most cookie notices turn out to be pretty similar, HTML/CSS-wise, and then you can grab their `innerText` and filter out false positives with a small LLM. I've found the 3B models have decent performance on this task, given enough prompt engineering. They do fall apart slightly around edge cases like less common languages or combined cookie notice + age restriction banners. 7B has a negligible false-positive rate without much extra cost. Either way these things are really fast and it's amazing to see reports streaming in during a crawl with no human effort required.
Code is at https://github.com/brave/cookiemonster. You can see the prompt at https://github.com/brave/cookiemonster/blob/main/src/text-cl....
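The real prompt is in the second link; as a rough sketch, the classification step boils down to something like this (model tag and wording here are placeholders, not the actual cookiemonster prompt):

    import requests

    def looks_like_cookie_notice(inner_text: str) -> bool:
        prompt = (
            "Does the following text come from a cookie consent notice? "
            "Answer YES or NO only.\n\n" + inner_text[:2000]
        )
        r = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3.2:3b", "prompt": prompt, "stream": False},
            timeout=60,
        )
        return r.json()["response"].strip().upper().startswith("YES")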
I have a small fish script I use to prompt a model to generate three commit messages based off of my current git diff. I'm still playing around with which model comes up with the best messages, but usually I only use it to give me some ideas when my brain isn't working. All the models accomplish that task pretty well.
Here's the script: https://github.com/nozzlegear/dotfiles/blob/master/fish-func...
And for this change [1] it generated these messages:
1. `fix: change from printf to echo for handling git diff input`
2. `refactor: update codeblock syntax in commit message generator`
3. `style: improve readability by adjusting prompt formatting`
[1] https://github.com/nozzlegear/dotfiles/commit/0db65054524d0d...

I have a mini PC with an N100 CPU connected to a small 7" monitor sitting on my desk, under the regular PC. I have Llama 3B (Q4) generating endless stories in different genres and styles. It's fun to glance over at it and read whatever it's in the middle of writing. I gave llama.cpp one CPU core and it generates slowly enough to read at a normal pace, and the CPU fans don't go nuts. Totally not productive or really useful, but I like it.
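For anyone who wants to replicate the one-core trick, here's a minimal sketch with llama-cpp-python (model path and prompt are placeholders, not the poster's exact setup):

    from llama_cpp import Llama

    # One thread keeps generation near reading speed and the fans quiet.
    llm = Llama(model_path="llama-3.2-3b-q4.gguf", n_threads=1)

    prompt = "Write an endless noir detective story, one scene at a time.\n"
    # max_tokens=-1 generates until the context window fills.
    for chunk in llm(prompt, max_tokens=-1, stream=True):
        print(chunk["choices"][0]["text"], end="", flush=True)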
https://gophersignal.com – I built GopherSignal!
It's a lightweight tool that summarizes Hacker News articles. For example, here’s what it outputs for this very post, "Ask HN: Is anyone doing anything cool with tiny language models?":
"A user inquires about the use of tiny language models for interesting applications, such as spam filtering and cookie notice detection. A developer shares their experience with using Ollama to respond to SMS spam with unique personas, like a millennial gymbro or a 19th-century British gentleman. Another user highlights the effectiveness of 3B and 7B language models for cookie notice detection, with decent performance achieved through prompt engineering."
I originally used LLaMA 3:Instruct for the backend, which performs much better, but recently started experimenting with the smaller LLaMA 3.2:1B model.
It’s been cool seeing other people’s ideas too. Curious—does anyone have suggestions for small models that are good for summaries?
Feel free to check it out or make changes: https://github.com/k-zehnder/gophersignal
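The summarization step itself is conceptually tiny; a sketch (not the actual GopherSignal code, which lives in the repo above):

    import ollama  # assumes a local Ollama install

    def summarize(article_text: str) -> str:
        resp = ollama.generate(
            model="llama3.2:1b",
            prompt=(
                "Summarize this Hacker News article in 2-3 plain sentences:\n\n"
                + article_text[:6000]
            ),
        )
        return resp["response"].strip()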
I have a tiny device that listens to conversations between two people or more and constantly tries to declare a "winner"
We fine-tuned a Gemma 2B to identify urgent messages sent by new and expecting mothers on a government-run maternal health helpline.
https://idinsight.github.io/tech-blog/blog/enhancing_materna...
Micro Wake Word is a library and set of on device models for ESPs to wake on a spoken wake word. https://github.com/kahrendt/microWakeWord
Recently deployed in Home Assistant's fully local Alexa replacement. https://www.home-assistant.io/voice_control/about_wake_word/
"Comedy Writing With Small Generative Models" by Jamie Brew (Strange Loop 2023)
https://m.youtube.com/watch?v=M2o4f_2L0No
Spend the 45 minutes watching this talk. It is a delight. If you are unsure, wait until the speaker picks up the guitar.
I am doing nothing, but I was wondering if it would make sense to combine a small LLM and SQLite to parse human date-time expressions. For example, given a human input like "last day of this month", the LLM will generate the following query: `SELECT date('now','start of month','+1 month','-1 day');`
It is probably super over-engineered, considering that pretty good libraries already do this in different languages, but it would be funny. I did some tests with ChatGPT, and it worked sometimes. It would probably work with some fine-tuning, but I don't have the experience or the time right now.
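A sketch of what I mean, with the obvious caveat that you should whitelist the output rather than execute whatever SQL comes back (model and prompt are guesses):

    import re
    import sqlite3

    import ollama  # assumes a local Ollama install

    def resolve_date_expression(text: str) -> str:
        resp = ollama.generate(
            model="llama3.2:3b",
            prompt=(
                "Translate this human date expression into a single SQLite "
                "SELECT using date(). Reply with SQL only.\n\n" + text
            ),
        )
        sql = resp["response"].strip()
        # Refuse anything that isn't a bare SELECT date(...) statement.
        if not re.fullmatch(r"SELECT date\([^;]*\);?", sql, re.IGNORECASE):
            raise ValueError(f"unexpected SQL: {sql}")
        return sqlite3.connect(":memory:").execute(sql).fetchone()[0]

    # resolve_date_expression("last day of this month") -> e.g. '2025-01-31'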
Microsoft published a paper on their FLAME model (60M parameters) for Excel formula repair/completion which outperformed much larger models (>100B parameters).
Apple's on-device models are around 3B, if I'm not mistaken, and they developed some nice tech around them that they published: they have just one base model, but switchable fine-tunings of that model, so it can perform different functionalities depending on context.
We (avy.ai) are using models in that range to analyze computer activity on-device, in a privacy sensitive way, to help knowledge workers as they go about their day.
The local models do things ranging from cleaning up OCR, to summarizing meetings, to estimating the user's current goals and activity, to predicting search terms, to predicting queries and actions that, if run, would help the user accomplish their current task.
The capabilities of these tiny models have really surged recently. Even small vision models are becoming useful, especially if fine tuned.
I simply use it to de-anonymize code that I typed in via Claude
Maybe I should write a plugin for it (open source):
1. Put all your work-related questions into the plugin; an LLM abstracts each one for you to preview before sending
2. It then maps the answer back, restoring all the original data
E.g. `df["cookie_company_name"]` becomes `df["a"]` and back.
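The core of such a plugin would just be a reversible rename map; a minimal sketch (single-letter aliases can collide with real identifiers, so this is illustration only):

    import re
    import string

    def anonymize(code: str, names: list[str]) -> tuple[str, dict[str, str]]:
        # Map each sensitive identifier to a short alias (a, b, c, ...).
        mapping = {n: string.ascii_lowercase[i] for i, n in enumerate(names)}
        for name, alias in mapping.items():
            code = re.sub(rf"\b{re.escape(name)}\b", alias, code)
        return code, mapping

    def deanonymize(code: str, mapping: dict[str, str]) -> str:
        for name, alias in mapping.items():
            code = re.sub(rf"\b{re.escape(alias)}\b", name, code)
        return code

    masked, m = anonymize('df["cookie_company_name"]', ["cookie_company_name"])
    # masked == 'df["a"]'; send it to Claude, then deanonymize(answer, m).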
I've made a tiny ~1M-parameter model that can generate random Magic: The Gathering cards, largely based on Karpathy's nanoGPT with a few more features added on top.
I don't have a pre-trained model to share, but you can train one yourself from the git repo, assuming you have an Apple Silicon Mac.
I made a shell alias to translate things from French to English, does that count?
function trans
    llm "Translate \"$argv\" from French to English please"
end
Llama 3.2:3b is a fine French-English dictionary, IMHO.

I have it running on a Raspberry Pi 5 for offline chat and RAG. I wrote this open-source code for it: https://github.com/persys-ai/persys
It also does RAG on apps there, like the music player, contacts app and to-do app. I can ask it to recommend similar artists to listen to based on my music library for example or ask it to quiz me on my PDF papers.
JetBrains' local single-line autocomplete model is 0.1B (w/ 1536-token context, ~170 lines of code): https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...
For context, GPT-2-small is 0.124B params (w/ 1024-token context).
I used a small (3b, I think) model plus tesseract.js to perform OCR on an image of a nutritional facts table and output structured JSON.
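That was in JS with tesseract.js; a rough Python equivalent of the same pipeline (pytesseract and the model tag are stand-ins, not what I actually used):

    import json

    import ollama  # assumes a local Ollama install
    import pytesseract
    from PIL import Image

    def nutrition_facts(image_path: str) -> dict:
        raw_text = pytesseract.image_to_string(Image.open(image_path))
        resp = ollama.generate(
            model="llama3.2:3b",
            prompt=(
                "Convert this OCR'd nutrition facts table to JSON with keys "
                "like calories, fat_g, sodium_mg. Reply with JSON only.\n\n"
                + raw_text
            ),
        )
        return json.loads(resp["response"])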
We are building a framework to run these tiny language models on the web so anyone can access private LLMs in their browser: https://github.com/sauravpanda/BrowserAI.
With just three lines of code, you can run small LLMs inside the browser. We feel this unlocks a ton of potential for businesses to introduce AI without fear of cost and to personalize the experience using AI.
Would love your thoughts and what we can do more or better!
I used local LLMs via Ollama for generating H1's / marketing copy.
1. Create several different personas
2. Generate a ton of variation using a high temperature
3. Compare the variations head-to-head using the LLM to get a win/loss ratio
The best ones can be quite good.
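The head-to-head step is the interesting part; roughly this, as a sketch (prompt and model are stand-ins):

    import itertools
    from collections import Counter

    import ollama  # assumes a local Ollama install

    def rank_headlines(headlines: list[str]) -> list[tuple[str, int]]:
        wins: Counter[str] = Counter()
        # Compare every ordered pair, so each matchup runs in both positions
        # to dampen the model's position bias.
        for a, b in itertools.permutations(headlines, 2):
            resp = ollama.generate(
                model="llama3.2:3b",
                prompt=f"Which H1 is more compelling? Answer A or B only.\nA: {a}\nB: {b}",
            )
            winner = a if resp["response"].strip().upper().startswith("A") else b
            wins[winner] += 1
        return wins.most_common()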
Not sure it qualifies, but I've started building an Android app that wraps bergamot[0] (the firefox translation models) to have on-device translation without reliance on google.
Bergamot is already used inside firefox, but I wanted translation also outside the browser.
[0]: bergamot https://github.com/browsermt/bergamot-translator
How accurate are the classifications?
Many interesting projects, cool. I'm waiting for LLMs in games. That would make them much more fun. Any time now...
We're using small language models to detect prompt injection. Not too cool, but at least we can publish some AI-related stuff on the internet without a huge bill.
I'm playing with the idea of identifying logical fallacies stated by live broadcasters.
I'm interested in finding tiny models to create workflows stringing together several functions/tools and running them on-device using mcp.run servlets on Android (disclaimer: I work on that).
No, but I use Llama 3.2 1B and Qwen2.5 1.5B as a bash one-liner generator, always running in the console.
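The whole thing is basically one call; a sketch, give or take the prompt:

    import ollama  # assumes a local Ollama install

    def oneliner(task: str) -> str:
        resp = ollama.generate(
            model="qwen2.5:1.5b",
            prompt=(
                "Write a single bash one-liner for this task. "
                "Reply with the command only, no explanation.\n\n" + task
            ),
        )
        return resp["response"].strip()

    # oneliner("count lines in all .py files") might return
    # something like: find . -name '*.py' | xargs wc -l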
I have this idea that a tiny LM would be good at canonicalizing entered real estate addresses. We currently buy a data set and software from Experian, but it feels like something an LM might be very good at. There are lots of weirdnesses in address entry that regexes have a hard time with. We know the bulk of addresses a user might be entering, unless it's a totally new property, so we should be able to train it on that.
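If anyone wants to prototype the idea before any fine-tuning, a few-shot prompt might get surprisingly far (everything below, examples included, is made up):

    import ollama  # assumes a local Ollama install

    FEW_SHOT = """\
    Input: 123 n main st apt4, springfield il
    Output: 123 N Main St, Apt 4, Springfield, IL

    Input: 45 Oak Avenue Unit B  boston massachusetts
    Output: 45 Oak Ave, Unit B, Boston, MA
    """

    def canonicalize(address: str) -> str:
        resp = ollama.generate(
            model="llama3.2:3b",
            prompt=(
                "Canonicalize this address like the examples.\n\n"
                + FEW_SHOT
                + "\nInput: " + address + "\nOutput:"
            ),
        )
        return resp["response"].strip()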
I've been working on a self-hosted, low-latency service for small LLMs. It's basically exactly what I would have wanted when I started my previous startup. The goal is real-time applications, where even the network time to access a fast LLM like Groq is an issue.
I haven't benchmarked it yet but I'd be happy to hear opinions on it. It's written in C++ (specifically not python), and is designed to be a self-contained microservice based around llama.cpp.
I am, in a way, by using EHR/EMR data for fine-tuning, so agents can query each other for medical records in a HIPAA-compliant manner.
I think I am. At least I think I'm building things that will enable much smaller models: https://github.com/jmward01/lmplay/wiki/Sacrificial-Training
My husband and I made a stock market analysis thing that gets it right about 55% of the time, so better than a coin toss. The problem is that it keeps making unethical suggestions, so we're not using it to trade stocks. Does anyone have any idea what we can do with it?
When I feel like casually listening to something instead of Netflix/Hulu/whatever, I'll run a ~3B model (Qwen 2.5 or Llama 3.2) and generate an audio stream of water-cooler office gossip. (When it is up, it runs here: https://water-cooler.jothflee.com.)
some of the situations get pretty wild, for the office :)
Using llama 3.2 as an interface to a robot. If you can get the latency down, it works wonderfully
Kinda? All local, so very much personal, non-business use. I made Ollama talk in specific persona styles, with the idea of speaking like Spider Jerusalem when I feel like retaining some level of privacy by avoiding phrases I would normally use. Uncensored Llama just rewrites my post in a specific persona's 'voice'. Works amusingly well for that purpose.
I had an LLM create a playlist for me.
I'm tired of the bad playlists I get from algorithms, so I made a specific playlist with Llama 2, based on several songs I like. I started with 50, removed any I didn't like, and added more to fill in the spaces. The small models were pretty good at this. Now I have a decent fixed playlist. It does get “tired” after a few weeks and I need to add more to it. I've never been able to do this myself with more than a dozen songs.
I put Llama 3 on a Raspberry Pi 5 and have it running a small droid. I added a speech-to-text engine so it can hear spoken prompts, which it replies to in droid speak. It also has a small screen that translates the response to English. I gave it a backstory about being an astromech droid, so it usually just talks about the hyperdrive, but it's fun.
I programmed my own version of Tic Tac Toe in Godot, using a Llama 3B as the AI opponent. Not for work flow, but figuring out how to beat it is entertaining during moments of boredom.
I'm using ollama, llama3.2 3b, and python to shorten news article titles to 10 words or less. I have a 3 column web site with a list of news articles in the middle column. Some of the titles are too long for this format, but the shorter titles appear OK.
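Not the exact prompt, but the shape of it, as a sketch:

    import ollama  # assumes a local Ollama install

    def shorten_title(title: str) -> str:
        resp = ollama.generate(
            model="llama3.2:3b",
            prompt=(
                "Shorten this news headline to at most 10 words. Keep the "
                "meaning, drop filler. Reply with the headline only.\n\n" + title
            ),
        )
        short = resp["response"].strip().strip('"')
        # Fall back to the original if the model rambles past the limit.
        return short if len(short.split()) <= 10 else title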
I don’t know if this counts as tiny but I use llama 3B in prod for summarization (kinda).
Its effective context window is pretty small, but I have a much more robust statistical model that handles thematic extraction. The LLM is essentially just rewriting ~5-10 sentences into a single paragraph.
I’ve found the less you need the language model to actually do, the less the size/quality of the model actually matters.
I'm using ollama for parsing and categorizing scraped jobs for a local job board dashboard I check every day.
I'm making an agent that takes decompiled code and tries to understand the methods and replace variables and function names one at a time.
Are there any experiments with small models that do paraphrasing? I tried using some off-the-shelf models, but it didn't go well.
I was thinking of hooking them in RPGs with text-based dialogue, so that a character will say something slightly different every time you speak to them.
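Something like this sketch, assuming a local Ollama model (names and prompt are illustrative):

    import ollama  # assumes a local Ollama install

    def npc_line(canonical_line: str, character: str) -> str:
        resp = ollama.generate(
            model="llama3.2:1b",
            prompt=(
                f"Paraphrase this RPG dialogue line in the voice of {character}. "
                "Keep the meaning identical, vary the wording. Reply with the "
                "line only.\n\n" + canonical_line
            ),
            # Higher temperature gives a different phrasing on each visit.
            options={"temperature": 1.0},
        )
        return resp["response"].strip()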
I copied all the text from this post and used an LLM to generate a list of all the ideas. I do the same for other, similar HN posts.
I'm working on using them for agentic voice commands of a limited scope.
My needs are narrow and limited but I want a bit of flexibility.
We're prototyping a text firewall (for Android) with Gemma2 2B (which limits us to English), though DeepSeek's R1 variants now look pretty promising [0]. Depending on the content, we rewrite the text or quarantine it from your view. Of course this is easy (for English) in the sense that the core logic is all LLMs [1], but the integration points (on Android) are not so straightforward for anything other than SMS. [2]
A more difficult problem we foresee is turning it into a real-time (online) firewall (for calls, for example).
[0] https://chat.deepseek.com/a/chat/s/d5aeeda1-fefe-4fc6-8c90-2...
[1] MediaPipe in particular makes it simple to prototype around Gemma2 on Android: https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inf...
[2] We intend to open source it once we get it working for anything other than SMSes.
Pretty sure they are mostly used as fine tuning targets, rather than as-is.
I built an Excel Add-In that allows my girlfriend to quickly filter 7000 paper titles and abstracts for a review paper that she is writing [1]. It uses Gemma 2 2b which is a wonderful little model that can run on her laptop CPU. It works surprisingly well for this kind of binary classification task.
The nice thing is that she can copy/paste the titles and abstracts into two columns and write e.g. "=PROMPT(A1:B1, "If the paper studies diabetic neuropathy and stroke, return 'Include', otherwise return 'Exclude'")" and then drag the formula down across 7000 rows to bulk process the data on her own, because it's just Excel. There is a gif in the readme on the GitHub repo that shows it.
[1] https://github.com/getcellm/cellm