Ollama 0.4 is released with support for Meta's Llama 3.2 Vision models locally

BUFU | 135 points

This was a pretty heavy lift for us to get out, which is why it took a while. In addition to writing new image-processing routines, a vision encoder, and the cross-attention layers, we ended up re-architecting the way models get run by the scheduler. We'll have a technical blog post soon about everything that changed.

Patrick_Devine | 4 hours ago
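
For readers curious what the cross-attention piece looks like, here is a minimal PyTorch sketch of a gated cross-attention block in the style Llama 3.2 Vision is described as using. The dimensions, names, and gating here are illustrative assumptions, not Ollama's actual implementation (which is not Python):

    import torch
    import torch.nn as nn

    class CrossAttentionBlock(nn.Module):
        # Text hidden states (queries) attend over vision-encoder
        # patch embeddings (keys/values) injected into the LLM.
        def __init__(self, d_model=4096, n_heads=32):
            super().__init__()
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm = nn.LayerNorm(d_model)
            self.gate = nn.Parameter(torch.zeros(1))  # tanh(0) == 0: a no-op at init

        def forward(self, text_h, image_h):
            attended, _ = self.attn(self.norm(text_h), image_h, image_h)
            return text_h + torch.tanh(self.gate) * attended

    block = CrossAttentionBlock()
    text = torch.randn(1, 16, 4096)     # 16 text-token hidden states
    image = torch.randn(1, 1601, 4096)  # projected vision patches (count illustrative)
    out = block(text, image)            # same shape as the text input

The zero-initialized tanh gate means the pretrained language model behaves identically until the new layers are trained, which is roughly how Meta describes attaching the vision adapter to the frozen language model.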

Did they fix multiline editing yet? Any interactive input that wraps across 3+ lines seems to go off by one when editing (though it's fine if you only append?), and this will only become more common now that long filenames are being added to prompts. Triple-quoting breaks editing entirely.

How does this address the security concern of filenames being detected and read when that isn't wanted?

o11c | 3 hours ago
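
The wrap-boundary off-by-one described above is a classic line-editor failure mode. A hypothetical Python sketch of the mismatch (illustrative only, not Ollama's actual readline code):

    COLS = 80  # terminal width

    def computed_cursor(prompt_len, cursor):
        # Where a naive editor believes the cursor is after echoing input.
        pos = prompt_len + cursor
        return (pos // COLS, pos % COLS)  # (row, col)

    # Most terminals defer the wrap: writing the last column leaves the
    # cursor ON that row in a "pending wrap" state. The editor computes
    # (1, 0) while the terminal is really at (0, 79), so every relative
    # cursor movement during editing lands one cell off until a redraw.
    # Appending is unaffected because it never moves the cursor backward.
    print(computed_cursor(4, 76))  # -> (1, 0)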

[deleted] | 2 hours ago

Can it run the quantized models?

inasring | 4 hours ago
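
Quantized builds are the default in the Ollama library, so the usual CLI works. A hedged example (the explicit quantization tag below is an assumption; check ollama.com/library/llama3.2-vision for the tags that actually exist):

    # default pull, typically a 4-bit quantization of the 11B model
    ollama pull llama3.2-vision

    # an explicit quantization tag, if one is published for this model
    ollama run llama3.2-vision:11b-instruct-q4_K_M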

How likely is it to run on a reasonably new Windows laptop?

vasilipupkin | 4 hours ago