What's needed is a formalization, and a model trained on that formalization. I'm not sure a system prompt alone is powerful enough to check and enforce input as a definite, exact formalized expression.
I don't think it will work out as easily as "a programming language for LLMs" - but you can always have a discussion with ol' lama
theGeatZhopa | 13 hours ago
Generally they work better with words that are more easily readable by humans. They have a lot of trouble with JSON and do YAML much better, for example. Running through more tokens doesn't just increase cost, it lowers quality.
So they'd likely go the other way. It's like how spoken languages have more redundancies built in.
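To make the token-overhead point concrete, here's a rough sketch comparing the same record serialized as JSON versus YAML. The YAML string is written by hand to avoid a third-party dependency, and character count is used as a crude stand-in for token count (real tokenizers differ, but punctuation-heavy text generally tokenizes longer):

```python
import json

# The same record, serialized two ways.
record = {"name": "Ada", "role": "engineer", "skills": ["python", "go"]}

as_json = json.dumps(record)
as_yaml = "name: Ada\nrole: engineer\nskills:\n- python\n- go\n"

# JSON spends characters on braces, quotes, and commas that YAML omits;
# fewer characters roughly means fewer tokens for the model to chew through.
print(len(as_json), len(as_yaml))
```

For nested data the gap widens, since every level of JSON nesting adds more quoting and bracket noise.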
muzani | 15 hours ago
LLMs don't work the way you think. In order to be useful, a model would have to be trained on large quantities of code written in your new language, which don't exist.
Even after that, the model will exhibit all the same problems as existing models and languages. The unreliability of LLMs comes from the way they make predictions rather than "retrieving" real answers the way a database would. Changing the content and context (your new language) won't change that.