Very cool. I implemented something similar for personal use before.
At that time, LLMs weren't as proficient at coding as they are today. Nowadays, the decorator approach could go even further: not just wrap LLM calls, but write the Python code itself based on the description in the docstring.
This would incentivize writing unambiguous docstrings and would guarantee (provided the LLM doesn't hallucinate) consistency between code and documentation.
It would bring us closer to the world that Jensen Huang described, i.e., natural language becoming a programming language.
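A minimal sketch of that decorator idea, assuming a user-supplied complete(prompt) -> str callable wired to whatever model you like; the decorator name and prompt wording here are made up:

```python
import functools
import inspect

def ai_implemented(complete):
    """Decorator factory: `complete` is any callable taking a prompt
    string and returning the model's text (you supply the client)."""
    def decorator(func):
        # Ask the model for an implementation of the signature + docstring.
        source = inspect.getsource(func)
        code = complete(
            "Implement this Python function. Return only valid Python "
            "code defining the function, nothing else.\n\n" + source
        )
        namespace = {}
        exec(code, namespace)  # caution: runs model-generated code
        generated = namespace[func.__name__]

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return generated(*args, **kwargs)
        return wrapper
    return decorator

# Usage sketch (my_llm_call is whatever client function you wire up):
# @ai_implemented(complete=my_llm_call)
# def slugify(title: str) -> str:
#     """Lowercase the title, strip punctuation, join words with hyphens."""
```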
I really like how this integrates with the schema feature I added to the underlying LLM Python library a few weeks ago: https://simonwillison.net/2025/Feb/28/llm-schemas/#using-sch...
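Based on that post, combining the schema feature with a prompt from Python looks roughly like this (model name is just an example; treat it as a sketch rather than the exact API):

```python
import llm
from pydantic import BaseModel

class Dog(BaseModel):
    name: str
    age: int

model = llm.get_model("gpt-4o-mini")  # any schema-capable model
response = model.prompt("Invent a cool dog", schema=Dog)
print(response.text())  # JSON conforming to the Dog schema
```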
Many libraries with the same approach share the same flaw: you can't easily use the same function with different LLMs at runtime (i.e., after importing the module where it is defined).
I initially used the same approach in my library, but changed it to explicitly pass the llm object around; in actual production code that's easier and more flexible to use.
Examples (the second one also shows a docstring-based LLM query with a structured answer): https://github.com/senko/think?tab=readme-ov-file#examples
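For illustration only (this is not the actual API of the library above), the pass-the-model-at-call-time version of the pattern might look like:

```python
import functools

def llm_function(func):
    """Use the docstring as a prompt template; the model client is
    chosen per call instead of being baked in at import time."""
    @functools.wraps(func)
    def wrapper(client, **kwargs):
        prompt = func.__doc__.format(**kwargs)
        return client.ask(prompt)  # `ask` is a hypothetical client method
    return wrapper

@llm_function
def summarize(text):
    """Summarize the following text in one sentence:

    {text}"""

# The same function then works with any backend chosen at runtime:
# summarize(openai_client, text=article)
# summarize(local_client, text=article)
```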
I often do the reverse -- have LLMs insert docstrings into large, poorly commented codebases that are hard to understand.
Pasting a piece of code into an LLM with the prompt "comment the shit out of this" works quite well.
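With the Python API of the llm library under discussion, that workflow might look something like this (model name and file path are examples):

```python
import llm

model = llm.get_model("gpt-4o-mini")   # example model name
with open("legacy_module.py") as f:    # example path
    code = f.read()

response = model.prompt(
    "Add docstrings and inline comments explaining this code. "
    "Return the full annotated source.\n\n" + code
)
print(response.text())
```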
I was working on a similar thing but for JS.
Imagine this: it would be cool if these functions eventually boiled down to a distilled tiny model just for that functionality, instead of an API call to a foundation one.
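A hedged sketch of that idea, assuming you had a distilled checkpoint per function; the registry, checkpoint name, and routing logic are all hypothetical:

```python
from transformers import pipeline  # pip install transformers

# Hypothetical registry: each decorated function gets its own tiny
# model, distilled from traces of the foundation model doing that task.
DISTILLED = {
    "summarize": "google/flan-t5-small",  # placeholder checkpoint
}

def dispatch(func_name, prompt, foundation_call):
    """Route to a local distilled model when one exists; otherwise
    fall back to the foundation-model API call."""
    if func_name in DISTILLED:
        local = pipeline("text2text-generation", model=DISTILLED[func_name])
        return local(prompt, max_new_tokens=128)[0]["generated_text"]
    return foundation_call(prompt)
```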
Cool! Looks a lot like Tanuki: https://github.com/Tanuki/tanuki.py
Funny. I frequently give the LLM the function and ask it to write the docstring.
TBH I find docstrings very tedious to write. I can see how this would be a great specification for an LLM, but I don't know that it's actually better than a plain-text description of the function, since LLMs handle those just fine and they're easier to write.
There's also promptic, which wraps litellm; that supports many, many more model providers, and it doesn't even need plugins.
LLM is a cool CLI tool, but IMO litellm is a better Python library.
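litellm's core call is provider-agnostic; roughly:

```python
from litellm import completion  # pip install litellm

# Same call shape for every provider; swap the model string to switch.
response = completion(
    model="gpt-4o-mini",  # e.g. "claude-3-haiku-20240307" or "ollama/llama3"
    messages=[{"role": "user", "content": "Summarize this in one line: ..."}],
)
print(response.choices[0].message.content)
```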
Is there something like this but for Java?
This is the way LLM-enhanced coding should (and I believe will) go.
Treating the LLM like a compiler is a much more scalable, extensible and composable mental model than treating it like a junior dev.