Hejlsberg mentioned the ability to quickly provide accurate type information to LLMs as one of the reasons for rewriting tsc in Go:
Also worth checking out MultiLSPy, effectively a Python wrapper around multiple language servers: https://github.com/microsoft/multilspy
It's used in several similar publications, including "Guiding Language Models of Code with Global Context using Monitors" (https://arxiv.org/abs/2306.10763), which applies static analysis beyond the type system to filter out, e.g., invalid variable names and invalid control flow.
This is a natural follow-on to work on constrained output from LLMs, and it's good to see it being developed. One nitpick, though: the paper mentions the complexity of implementing type checking for program prefixes in languages that are not context free. It's true this is extremely difficult for context-sensitive languages, especially because types may be defined after they are used. But it doesn't mention that this is impossible in general for languages whose type systems are Turing complete, such as C++, where type checking a program can require evaluating arbitrary template metaprograms. I would never miss such an opportunity to criticize C++ and highlight the need for better language design. I love you, C++.
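A toy illustration rather than anything from the paper: TypeScript's own type system is Turing complete too, so even here the checker may have to evaluate an open-ended type-level computation just to decide whether a prefix can still type-check. For instance (the type names are mine):

```typescript
// The checker must "run" this recursion to decide what Repeat<"ab", 3> is.
// Unbounded versions of this pattern are why deciding whether a partial
// program can still type-check is undecidable in general.
type Repeat<
  S extends string,
  N extends number,
  Acc extends string = "",
  Count extends unknown[] = [],
> = Count["length"] extends N ? Acc : Repeat<S, N, `${Acc}${S}`, [...Count, unknown]>;

type Abc3 = Repeat<"ab", 3>; // resolves to "ababab" entirely at compile time
```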
I think TypeScript is uniquely positioned to be the optimal language for LLMs: tons of training data (benefiting from all the JS examples as well), plus a type structure for LLMs to follow and tooling to enforce it.
We (.txt, the outlines people) had a brief thread about this paper on twitter if you're interested: https://x.com/dottxtai/status/1922322194379551128
Really cool results!
That this research comes out of universities rather than large AI labs makes me think those labs believe larger models are still the way to go.
Been using Devin for a few months now, for TypeScript and Python.
I've never seen it check in uncompilable code; watching the Devin console, I can see it building and running the code to ensure commits aren't complete garbage. When it has checked in code that compiles but is slightly wrong, the lint and test runs triggered by CI (it doesn't always run them before checking in) prompt it to push a fix on its own.
Feedback loops are nice, but they can be expensive and time-consuming (oh, look at me complaining that it takes Devin a whopping 15 minutes to complete a task), so I can definitely see the value in type constraints.
The code can be found here: https://github.com/eth-sri/type-constrained-code-generation
They should extend this to Haskell and make use of the Curry-Howard isomorphism: specify the program you want as a type signature and have the LLM find the implementation.
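Even short of Haskell, the idea carries over: a sufficiently polymorphic signature leaves essentially one total implementation. A minimal TypeScript rendering of the classic Haskell example `(a -> b) -> [a] -> [b]` (function and variable names are mine):

```typescript
// By parametricity, a function with this generic signature has no way to
// produce a B except by applying f to elements of xs, so the type alone
// essentially dictates "map".
function map<A, B>(f: (a: A) => B, xs: A[]): B[] {
  return xs.map(f);
}

// Usage: the types, not the name, tell you what it must do.
const lengths = map((s: string) => s.length, ["type", "driven"]); // [4, 6]
```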
The correct way to do this is with finite model theory, but we're not there yet.
We really need LLMs trained on ASTs instead of raw tokens. Is there any research on this?
The vibe-coding crowd would benefit way more if libraries hosted their docs in a format that's easy to copy and paste into an LLM.
We published a similar paper for MoonBit: "Explore the Design of an AI-Friendly Programming Language" (https://conf.researchr.org/details/icse-2024/llm4code-2024-p...)
The general idea seems very promising; I'd been hoping someone would do something like this ever since seeing JSON-schema structured outputs for LLMs.
I need to dig into the implementation a bit more, but I was surprised the paper didn't mention hooking into an existing language service/server. Types aren't the only thing an LLM could leverage from existing language tooling. Auto-import is a good example: it helps a human developer keep a linear writing flow, something an LLM needs even more.
Would it be better to move these feedback loops into the RL stage of LLM training?
Is there any related work on that?
Honestly it's already working great in Cursor. Even adapting one type structure to another is quickly handled.
Do LLMs really understand the code at this stage?
This was an obvious next step. Most current products can only restrict token prediction to valid JSON, or to a specific JSON schema at best. There's no reason that should be the only grammar available for constrained output.
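Mechanically, all of these reduce to the same trick: at each decoding step, rule out tokens a validity checker rejects before choosing one. A greedy sketch in TypeScript (the `PrefixChecker` interface and function names are hypothetical, not from the paper):

```typescript
type Token = string;

interface PrefixChecker {
  // True if `prefix` can still be extended to a valid program.
  accepts(prefix: string): boolean;
}

// One greedy decoding step: pick the highest-scoring token whose
// continuation the checker still accepts.
function constrainedStep(
  prefix: string,
  vocab: Token[],
  logits: number[],
  checker: PrefixChecker,
): Token {
  let best = -Infinity;
  let chosen: Token | null = null;
  for (let i = 0; i < vocab.length; i++) {
    if (logits[i] > best && checker.accepts(prefix + vocab[i])) {
      best = logits[i];
      chosen = vocab[i];
    }
  }
  if (chosen === null) {
    throw new Error("no valid continuation from this prefix"); // dead end
  }
  return chosen;
}
```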
The real challenge will be detecting and switching languages automatically. A snippet of code could include a LaTeX formula in a comment and SQL in a string literal, or a regex inside a shell script, and so on.
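Even a single short, made-up file can mix several grammars the constrainer would have to recognize:

```typescript
// Three grammars besides TypeScript in as many lines, including LaTeX
// in this comment: $E = mc^2$.
const query = "SELECT id, name FROM users WHERE age > $1"; // SQL in a string
const slug = /^[a-z0-9]+(?:-[a-z0-9]+)*$/;                 // a regex literal
```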
The obvious next step after that is backtracking. It's possible to emit a token that is valid but admits no further valid completions; in other words, the model can paint itself into a corner. To my knowledge, no current online LLM service does any kind of backtracking; they all run in append-only ("forwards") mode.
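A hedged sketch of what that could look like, reusing the hypothetical `Token` and `PrefixChecker` types from the snippet above: when no token survives the check, pop the last one and ban retrying it at that position.

```typescript
function decodeWithBacktracking(
  vocab: Token[],
  scoreFn: (prefix: string) => number[], // hypothetical model call: one logit per vocab entry
  checker: PrefixChecker,
  maxLen: number,
): string {
  const tokens: Token[] = [];
  const banned: Set<Token>[] = [new Set()]; // banned[i]: tokens forbidden at position i
  while (tokens.length < maxLen) {
    const prefix = tokens.join("");
    const logits = scoreFn(prefix);
    let chosen: Token | null = null;
    let best = -Infinity;
    for (let i = 0; i < vocab.length; i++) {
      const t = vocab[i];
      if (banned[tokens.length].has(t)) continue;
      if (logits[i] > best && checker.accepts(prefix + t)) {
        best = logits[i];
        chosen = t;
      }
    }
    if (chosen === null) {
      // Painted into a corner: undo the last token and forbid retrying it.
      const bad = tokens.pop();
      if (bad === undefined) throw new Error("no valid program exists");
      banned.pop();
      banned[tokens.length].add(bad);
    } else {
      tokens.push(chosen);
      banned.push(new Set());
    }
  }
  return tokens.join("");
}
```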
Nice. The pace of AI development keeps accelerating.
I completely agree that TypeScript is ideal for LLMs. The type system and the extensive training data make it the best choice. But as someone who's been working with TypeScript for a while, I still see LLMs struggling with complex generics or even simple types. It’s better than before, but still far from perfect.
Also, TypeScript error messages can be a pain. When LLMs hit something like "SomeType is not assignable," they often just cast to any instead of fixing the actual mismatch. This happens way too often.
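A made-up but typical instance (the types and names are mine): the cast silences the checker while hiding the bug, when the error was really asking for a field mapping.

```typescript
interface User { id: string; name: string }
interface ApiUser { id: string; full_name: string }

declare const apiUser: ApiUser;

// The LLM shortcut: "Type 'ApiUser' is not assignable to type 'User'"
// disappears, along with any safety the type was buying you.
const bad = apiUser as any as User;

// What the error actually wanted: an explicit mapping between the shapes.
const good: User = { id: apiUser.id, name: apiUser.full_name };
```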
This is what I'd consider doing if I were a small AI lab. Don't try to build a frontier LLM that beats all benchmarks; try to build the world's best LLM at one programming language. Create an RL pipeline that puts all your resources into making the LLM the best at that language. Even better if there's a dearth of human-created training data on GitHub, since all your competitors will be bad at it.
Google somewhat did this with JavaScript in their latest Gemini 2.5 Pro release. But what about doing it for a smaller language? Google isn't going to, but there's still a lot of demand.