Some programming language ideas
Thanks for sharing your thoughts.
I also agree that a relational approach to in-memory data is a good, effective idea.
I recently compiled some of my C code against the SQLite library, and I'm starting to think about how the SQL model could serve as the actual implementation language for in-memory operations in ordinary code.
Instead of writing the hundredth loop over objects, I just write a SQL query with joins, treating the software's internal data representation as an information system instead of bespoke code.
I was hoping to make it possible to handle batches of data and add parallelism, because arrays are useful when you want to parallelise.
I was thinking: wouldn't it be good if you could write your SQL queries in advance, parse them, and compile them to C code (using an unrolled loop of the SQLite VM) so they're performant? (For example, instead of a btree for a regular system operation, you could just use a materialised array, a bit like a filesystem, so you're not re-joining the same data all the time.)
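In the meantime, the interpreted version of this idea is already a one-liner with SQLite's in-memory mode. A minimal sketch in Python (table names and data are invented), replacing a hand-written join loop with a query:

```python
import sqlite3

# Hypothetical sketch: treat two in-memory collections as relations and let
# SQL do the join, instead of writing yet another bespoke nested loop with
# an accumulator dict.
users = [(1, "ada"), (2, "bob")]
orders = [(10, 1, 99.0), (11, 1, 5.0), (12, 2, 20.0)]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER, total REAL)")
db.executemany("INSERT INTO users VALUES (?, ?)", users)
db.executemany("INSERT INTO orders VALUES (?, ?, ?)", orders)

# One declarative query replaces the "hundredth loop through objects".
totals = db.execute(
    "SELECT u.name, SUM(o.total) FROM users u"
    " JOIN orders o ON o.user_id = u.id GROUP BY u.id ORDER BY u.name"
).fetchall()
print(totals)  # [('ada', 104.0), ('bob', 20.0)]
```

Compiling such prepared queries down to C would be the performance step on top of this; the sketch only shows the programming model.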
I was thinking of ways of representing actors somehow communicating by tables but I do not have anything concrete for that.
I'm surprised these are called "programming language ideas". They seem to be solvable, at least many of them, with libraries. For example, my Haskell effect system Bluefin can be seen as a capability system for Haskell. My database library Opaleye is basically a relational query language for Haskell. Maybe I'm short-sighted but I haven't seen the need for a whole new language to support any of that functionality. In fact one gets huge benefits from implementing such things in an existing language.
I really wish more languages would "steal" grammars from raku (formerly Perl 6).
A grammar is basically a class (or role/trait), and they can contain regexes (and regular methods). Those regexes have backtracking control (for simple tokens, you don't want to try to parse a string any other way than the first, obvious match).
This makes it much easier to write composable and understandable parsers.
I know that, technically, you could do that in a library, but somehow that's never the same; if it's not baked into the language, the hurdle to introduce another dependency is always looming, and then if there's more than one such library, the parsers aren't composable across libraries and so on.
I agree about relational languages. It's absurd when I think that SQL and Datalog came from the same foundations of relational calculus. It's just so much lost expressive power.
I really like what PRQL [1] did, at least it makes table operations easily chainable. Another one that comes to mind is Datomic [2].
[2]: https://docs.datomic.com/peer-tutorial/query-the-data.html
For semi-dynamic language, Julia definitely took the approach of being a dynamic language that can be (and is) JITed to excellent machine code. I personally have some larger projects that do a lot of staged programming and even runtime compilation of user-provided logic using Julia. Obviously the JIT is slower to complete than running a bit of Lua or whatever, but the speed after that is phenomenal and there’s no overhead when you run the same code a second time. It’s pretty great and I’d love to see more of that ability in other languages!
Some of the other points resonate with me. I think sensible dynamic scoping would be an easy way to do dependency injection. Together with something like linear types you could do capabilities pretty smoothly, I think. No real reason why you couldn’t experiment with some persistent storage as one of these dependencies, either. Together with a good JIT story would make for a good, modular environment.
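The dynamic-scoping-as-dependency-injection point can already be approximated in Python with contextvars, which behave like safely scoped dynamic variables. A hedged sketch (the `db_conn` variable and its values are invented for illustration):

```python
import contextvars

# Sketch: dependency injection via dynamic scoping, using Python's
# contextvars as a stand-in for language-level dynamic variables.
db_conn = contextvars.ContextVar("db_conn", default="prod-db")

def handler():
    # Deep inside the call stack: read whatever binding is in effect.
    return f"querying {db_conn.get()}"

def with_test_db():
    # Rebind the dependency for one dynamic extent only.
    ctx = contextvars.copy_context()
    ctx.run(db_conn.set, "test-db")
    return ctx.run(handler)

print(handler())       # querying prod-db
print(with_test_db())  # querying test-db
```

The rebinding is invisible to `handler`, which is the whole point: dependencies flow along the dynamic extent rather than through every argument list.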
= Value Database
I've added this feature to QuickJS [1] and it works quite well in Sciter as persistent Storage [2] mechanism, used in Sciter.Notes [3] for example.
let storage = Storage.open(filename);
let persistentData = storage.root;
if( !persistentData ) storage.root = persistentData = {... initial storage structure ...};
Everything written to persistentData will be persistent between runs.
= Semi-Dynamic Language
De facto, we have already been using a similar approach for quite a while, in the form of GPU shaders or WebAssembly. The solution is not making the script JIT-friendly (that is against its nature) but providing an option to use native/compiled/loadable modules written in languages that were designed to be compilable from the beginning.
My Sciter, as an embeddable engine, is an example of such an environment. The native host application exposes native functions/classes to the HTML/CSS/JS engine that implements the UI layer of the application. The UI is dynamic (fluid, styleable, etc.) by nature, while the application backend (a.k.a. business logic layer) is more static and linear by nature.
[1] https://gitlab.com/c-smile/quickjspp
You might be interested in looking at the Lima programming language: http://btetrud.com/Lima/Lima-Documentation.html . It has ideas that cover some of these things. For example, it's intended to operate with fully automatic optimization. This assumption allows shedding lots of complexity that arises from needing to do the same logical thing in multiple ways that differ in their physical efficiency characteristics. Instead of having 1000 different tree classes, you have one, and optimisers can then look at your code and decide which available tree structures make the most sense in each place. Related to your async functions idea, it does provide some convenient ways of handling these things. While functions are just normal functions, it has a very easy way to make a block asynchronous (using "thread") and provides means of capturing async errors that result from that.
> Value Database
> Smalltalk and another esoteric programming environment I used for a while called Frontier had an idea of a persistent data store environment. Basically, you could set global.x = 1, shut your program down, and start it up again, and it would still be there.
Frontier! I played with that way back when on the Mac. Fun times.
But as for programming language with integrated database... MUMPS! Basically a whole language and environment (and, in the beginning, operating system) built around a built-in global database. Any variable name prefixed with ^ is global and persistent, with a sparse multi-dimensional array structure to be able to organize and access the variables (e.g. ^PEOPLE(45,"firstname") could be "Matthew" for the first name of person ID 45). Lives on today in a commercial implementation from Intersystems, and a couple Free Software implementations (Reference Standard M, GT.M, and the GT.M fork YottaDB). The seamless global storage is really nice, but the language itself is truly awful.
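For readers who haven't seen MUMPS, the effect is roughly what you get from a dict-like store that outlives the process. A loose Python approximation using the stdlib's shelve (the flattened key layout imitating ^PEOPLE(45,"firstname") is my own invention):

```python
import os
import shelve
import tempfile

# Loose approximation of MUMPS-style persistent globals: shelve gives a
# dict-like store whose contents survive between runs of the program.
path = os.path.join(tempfile.mkdtemp(), "globals")

with shelve.open(path) as g:   # first "run" of the program
    g['PEOPLE(45,"firstname")'] = "Matthew"

with shelve.open(path) as g:   # second "run": the value is still there
    firstname = g['PEOPLE(45,"firstname")']

print(firstname)  # Matthew
```

What MUMPS adds beyond this sketch is that the persistence is a language feature (the ^ prefix) rather than an explicit open/close of a store.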
I think the coloured function problem boils down to the fact that async functions are not naturally a specific kind of sync function, but the other way around.
Functions are so ubiquitous we forget what they really are: a type of guarantee about the conditions under which the code within will run. Those guarantees include the availability of arguments and a place to put the return value (on the stack).
One of the key guarantees about sync functions is the call structure: one thread of execution will be in one function and one function only at any point during the program; the function will only be exited on return (or exception, or panic) or call of another function; and all the local data will be available only for the duration of that function call.
From that perspective, async functions are a _weakening_ of the procedural paradigm where it is possible to "leave behind" an instruction pointer and stack frame to be picked up again later. The ability to suspend execution isn't an additional feature, it's a missing guarantee: a generalisation.
There is always an interplay between expressiveness and guarantees in programming languages. Sometimes, it is worth removing a guarantee to create greater expressiveness. This is just an example of that.
I mentioned exceptions earlier — it's no wonder that exceptions and async both get naturally modelled in the same way (be it with monads or algebraic effects or whatever). They are both examples of weakening of procedural guarantees. Exceptions weaken the guarantee that control flow won't exit a function until it returns.
I think the practical ramifications of this are that languages that want async should be thinking about synchronous functions as a special case of suspendable functions — specifically the ones that don't suspend.
As a counterpoint, I can imagine a lot of implementation complexities. Hardware is geared towards the classical procedural paradigm, which provides an implementation foundation for synchronous procedures. The lack of that for async can partially explain why language authors often don't provide a single async runtime, but have this filled in by libraries (I'm thinking of Rust and Kotlin here).
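The "sync is the special case" view can be sketched with plain generators: a suspendable function is one that may yield, and a synchronous function is simply a suspendable function with zero suspension points. A toy illustration (the trampoline is mine, not any particular runtime's):

```python
# A suspendable function is a generator; a synchronous function is a
# generator that never actually yields. One driver runs both uniformly.
def sync_add(a, b):
    return a + b
    yield  # unreachable; only here to make this a generator function

def async_add(a, b):
    yield "waiting"  # suspension point: hand control back to the driver
    return a + b

def run(gen):
    # Trivial trampoline: resume until the function finally returns.
    try:
        while True:
            next(gen)
    except StopIteration as stop:
        return stop.value

print(run(sync_add(1, 2)))   # 3
print(run(async_add(1, 2)))  # 3
```

The driver never needs to know which kind it was given, which is exactly the "no coloured functions" property.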
Interesting that E is cited under “capabilities”, but not under “loosen up the functions”. E’s eventual-send RPC model is interesting in a number of ways. If the receiver is local then it works a bit like a JavaScript callback in that there’s an event loop driving execution; if it’s remote then E has a clever “promise pipelining” mechanism that can hide latency. However E didn’t do anything memorable (to me at least!) about handling failure, which was the main point of that heading.
For “capabilities” and “A Language To Encourage Modular Monoliths”, I like the idea of a capability-secure module system. Something like ML’s signatures and functors, but modules can’t import, they only get access to the arguments passed into a functor. Everything is dependency injection. The build system determines which modules are compiled with which dependencies (which functors are passed which arguments).
An existing “semi-dynamic language” is CLOS, the Common Lisp object system. Its metaobject protocol is designed so that there are clear points when defining or altering parts of the object system (classes, methods, etc.) at which the result is compiled, so you know when you pay for being dynamic. It’s an interesting pre-Self design that doesn’t rely on JITs.
WRT “value database”, a friend of mine used to work for a company that had a Lisp-ish image-based geospatial language. They were trying to modernise its foundations by porting to the JVM. He had horror stories about their language’s golden image having primitives whose implementation didn’t correspond to the source, because of decades of mutate-in-place development.
The most common example of the “value database” or image-based style of development is in fact your bog standard SQL database: DDL and stored procedures are very much mutate-in-place development. We avoid the downsides by carefully managing migrations, and most people prefer not to put lots of cleverness into the database. The impedance mismatch between database development by mutate-in-place and non-database development by rebuild and restart is a horribly longstanding problem.
As for “a truly relational language”, at least part of what they want is R style data frames.
I'll throw in another idea here that I've been thinking about for a while now.
Most languages have a while construct and a do-while.
while(condition){block};
do{block}while(condition);
The while is run as ...
start:
condition
branch-if-false > end
block
branch-always > start
end:
...
And the do-while switches the order: ...
start:
block
condition
branch-if-true > start
...
The issue with the while is that more often than not you need to do some preparations before the condition. So you need to move that to a function, or duplicate it before and inside the loop. Do-while doesn't help, since with that you can't do anything after the condition.
The alternative is a while(true) with a condition in the middle:
while(true){
  prepare;
  if(!check) break;
  process;
}
But what if there was a language construct for this? Something like do{prepare}while(condition){process}
Is there a language that implements this somehow? (I'm sure there is, but I don't know of one.) The best thing is that this construct can be optimized perfectly in assembly:
...
jump-always > start
after:
process
start:
prepare
condition
branch-if-true > after
...
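For what it's worth, Python's assignment expression (3.8+) gives a partial answer to the same "loop and a half" problem by folding the prepare step into the condition, so it isn't duplicated. A small sketch (the data is made up; "" plays the role of end-of-input):

```python
# The loop-and-a-half: prepare, test, process, without duplicating the
# prepare step before and inside the loop.
chunks = iter(["ab", "cd", ""])

out = []
while (chunk := next(chunks)):  # prepare (read) and test in one place
    out.append(chunk.upper())   # process

print(out)  # ['AB', 'CD']
```

It only covers the case where "prepare" is a single expression, though; the proposed do{prepare}while(cond){process} construct is strictly more general.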
Interesting points!
We're working on a language with some of these ideas:
Object capabilities, async calls as easy as sync calls, modular monoliths, and (eventually) unified logging.
None of the relational language features though.
Feedback appreciated!
We have built something that hits on points 1, 3, 5, and 7 at https://reboot.dev/ ... but in a multi-language framework (supporting Python and TypeScript to start).
The end result is something that looks a lot like distributed, persistent, transactional memory. Rather than explicit interactions with a database, local variable writes to your state are transactionally persisted if a method call succeeds, even across process/machine boundaries. And that benefits point 7, because transactional method calls compose across team/application boundaries.
[1] Loosen Up The Functions
[3] Production-Level Releases
[5] Value Database
[7] A Language To Encourage Modular Monoliths
I relate to this post so so much. https://jerf.org/iri/post/2025/programming_language_ideas/#v...
To me, this idea seems so insane, especially for things like extraction: you could start extracting a zip on one device, have it partially extracted, and then continue the partial extraction on another device. (Yes, sure, you could loop over each file, keep a list of files already unzipped, and unzip only the ones that haven't been done yet.)
But imagine if the thing to be extracted is a single file in the zip (like a 100 GB file).
I don't know; I have played with this using CRIU and it worked. QEMU can also work. But this idea is cool.
Instead of using a default storage where entropy can hit, I would personally like it if the values were actually stored in SQLite, maybe combined with the Truly Relational Language idea as well (without truly requiring you to learn SQLite).
I had posted this on Hacker News as well, and theoretically it's possible with the Brainfuck-in-SQLite interpreter that I found. But I don't know... if anybody knows of a new language, or a method for integrating this into new languages, it would be pretty nice.
My wild idea is that I'd like to see a modern "high-level assembler" language that doesn't have a callstack. Just like in the olden days, all functions statically allocate enough space for their locals. Then, combine this with some semi-convenient facility for making sure that local variables for a given function always fit into registers; yes, I admit that I'm strange when I say that I dream of a language that forces me to do manual register allocation. :P But mostly what I want to explore is if it's possible to create a ""modern"" structured programming language that maps cleanly to assembly, and that provides no optimization backend at all, but has enough mechanical sympathy that it still winds up fast enough to be usable.
One thing I would like in PyTorch is for a tensor’s shape to be a fundamental part of its type. That is, disable implicit broadcasting and if an operation would require adding dimensions to inputs, require those inputs to be type cast to the correct shape first.
I can’t tell you how much time I have wasted on broadcasting bugs, where operations “work” but they aren’t doing what I want them to.
JAX can do this, but no one uses JAX because of other reasons.
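The failure mode doesn't even need a framework to demonstrate. Here is the broadcast-shape rule NumPy/PyTorch apply implicitly, sketched in plain Python, plus the stricter exact-match check being asked for (function names are mine):

```python
# Plain-Python sketch of implicit broadcasting: note how (3,) against
# (3, 1) silently becomes (3, 3), which is the classic "works but wrong" bug.
def broadcast_shape(a, b):
    # Pad the shorter shape with leading 1s, then align dimensions.
    n = max(len(a), len(b))
    a = (1,) * (n - len(a)) + a
    b = (1,) * (n - len(b)) + b
    out = []
    for x, y in zip(a, b):
        if x != y and 1 not in (x, y):
            raise ValueError(f"incompatible shapes {a} vs {b}")
        out.append(max(x, y))
    return tuple(out)

# The stricter discipline asked for: shapes must match exactly, and the
# caller has to reshape explicitly before the operation.
def strict_same_shape(a, b):
    if a != b:
        raise TypeError(f"shape mismatch {a} vs {b}: reshape explicitly")
    return a

print(broadcast_shape((3,), (3, 1)))  # (3, 3) -- the silent bug factory
```

Making the strict check part of the type system, rather than a runtime guard, is the actual feature request.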
Starlark, a variant of Python, can be thought of as semi dynamic: all mutation in each file happens once, single threaded, and then that file and all its data structures are frozen so downstream files can use it in parallel
A lot of "staged" programs can be thought of as semi dynamic as well, even things like C++ template expansion or Zig comptime: run some logic up front, freeze it, then run the rest of the application later
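The pattern is easy to imitate even in plain Python: do the dynamic work once at load time, then freeze the result so the later phase can only read it. A rough sketch:

```python
import types

# Staged sketch: run arbitrary logic up front, freeze the result, and let
# the rest of the program only read it -- the Starlark/comptime shape.
def stage_one():
    table = {}
    for i in range(5):
        table[i] = i * i  # stand-in for arbitrary "comptime" computation
    return types.MappingProxyType(table)  # read-only view: the freeze

SQUARES = stage_one()

print(SQUARES[4])  # 16
# SQUARES[5] = 25 would raise TypeError: the dynamic phase is over.
```

What a real semi-dynamic language would add is compiling against the frozen result, so downstream code pays nothing for the dynamic phase.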
Well OP, are you me? Everything you listed is also on my short wishlist for a programming language (well, except for the value database; once you have first-class relational tables in your language, persistence can be tied to table identity and doesn't need to be implicit).
Capabilities and dynamic scoping for "modularisation" nicely lead to implicit variables instead of truly global dynamically scoped variables. Implicit variables also probably work well for implementing effect systems, which means well-behaved async.
Edit: other features I want:
- easy embedding in other low level languages (c++ specifically)
- conversely, easy to embed functions written in another language (again c++).
- powerful, shell-like, process control system (including process trees and pipelines), including across machines.
- built-in cooperative shared memory concurrency, and preemptive shared nothing.
- built-in distributed content addressed store
I guess I want Erlang :)
An interesting problem I've played around with fair bit is the idea of a maximally expressable non-Turing complete language, trying to make a language that is at least somewhat comfortable to use for many tasks, while still being able to make static assertions about runtime behavior.
The best I've managed is a functional language that allows for map, filter, and reduce, but forbids recursion or any other looping or infinite expansion in usercode.
The pitch is that this kind of language could be useful in contexts where you're executing arbitrary code provided by a potentially malicious third party.
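A toy version of such an evaluator fits in a few lines: programs are data, the only combinators are map/filter/reduce, and with no recursion or looping available in user code, every program terminates in time bounded by program size times input size. A hedged Python sketch (the primitive set is arbitrary):

```python
from functools import reduce

# Toy total language: user "code" is nested tuples, the only combinators
# are map/filter/reduce, and elements are transformed only by a fixed set
# of primitives. No user-level recursion or loops exist, so every program
# terminates. (evaluate itself recurses, but only over the finite program.)
PRIMS = {
    "double": lambda x: x * 2,
    "is_even": lambda x: x % 2 == 0,
    "add": lambda acc, x: acc + x,
}

def evaluate(prog, data):
    op = prog[0]
    if op == "input":
        return data
    if op == "map":
        return [PRIMS[prog[1]](x) for x in evaluate(prog[2], data)]
    if op == "filter":
        return [x for x in evaluate(prog[2], data) if PRIMS[prog[1]](x)]
    if op == "reduce":
        return reduce(PRIMS[prog[1]], evaluate(prog[3], data), prog[2])
    raise ValueError(f"unknown op {op!r}")

# Sum of doubled even numbers; runtime is bounded by the input size.
prog = ("reduce", "add", 0, ("map", "double", ("filter", "is_even", ("input",))))
print(evaluate(prog, [1, 2, 3, 4]))  # 12
```

Making such a language "somewhat comfortable" is then a surface-syntax problem; the termination guarantee comes entirely from the restricted combinator set.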
I love these ideas! I've been thinking about the "fully relational" language ever since I worked with some product folks and marketers at my startup 15 years ago who "couldn't code" but were wizards at cooking up SQL queries to answer questions about what was going on with our users and product. There was a language written in Rust, Tablam [0], that I followed for a while, which seemed to espouse those ideas, but it seems like it's not being worked on anymore. And Jamie from Scattered Thoughts [1] has posted some interesting articles in that direction as well. He used to work on the old YC company/product LightTable or Eve or something, which was in the same space.
I've also always thought Joe Armstrong's (RIP) thought of "why do we need modules" is really interesting, too. There's a language I've seen posted on HN here a couple times that seems to go in that approach, with functions named by their normalized hash contents, and referred to anywhere by that, but I can't seem to remember what it's called right now. Something like "Universe" I think?
[0] https://github.com/Tablam/TablaM [1] https://www.scattered-thoughts.net [2] https://erlang.org/pipermail/erlang-questions/2011-May/05876...
I like a lot of these ideas.
"Semi-dynamic" is one of the most common architectures there is for large & complex systems. AAA games are usually written in a combination of C++ and a scripting language. GNU Emacs is a Lisp application with a custom interpreter that is optimized for writing a text editor. Python + C is a popular choice as well as Java + Groovy or Clojure, I've even worked with a Lua + FORTRAN system.
I also think "parsers suck". It should be a few hundred lines at most, including the POM file, to add an "unless" statement to the Java compiler. You need to (1) generate a grammar which references the base grammar and adds a single production, (2) create a class in the AST that represents the "unless" statement and (3) add a transformation that rewrites
unless(X) {...} -> if(!X) {...}
You should be able to mash up a SQL grammar and the Java grammar so you can write var statement = <<<SELECT * FROM page where id=:pageId>>>;
This system should be able to export a grammar to your IDE. Most parser generators are terribly unergonomic (cue the event-driven interface of yacc) and not accessible to people who don't have a CS education (if you need a bunch of classes to represent your AST, shouldn't those get generated from your grammar?). When you generate a parser you should get an unparser. Concrete syntax trees are an obscure data structure, but they were used in obscure RAD tools in the 1990s that would let you modify code visually and make the kind of patch a professional programmer would write. The counter to this you hear is that compile time is paramount, and there's a great case for that in large code bases (I had a system with a 40-minute build). Yet there's also a case that people do a lot of scripty programming, and trading compile time for ergonomics can be a win (see Perl and REBOL).
I think one goal in programming languages is to bury Lisp the way Marc Antony buried Caesar. Metaprogramming would be a lot more mainstream if it were combined with Chomsky-based grammars, supported static typing, worked with your IDE and all that. Graham's On Lisp is a brilliant book (read it!) that left me disappointed in the end because he avoids anything involving deep tree transformations or compiler theory: people do much more advanced transformations to Java bytecode. It might be easier to write those kinds of transformations if you had an AST comprised of Java objects instead of the anarchy of nameless tuples.
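The unless(X){...} -> if(!X){...} rewrite really is tiny once the AST is made of honest named objects. A toy sketch in Python (all class names are invented for illustration, not any real compiler's API):

```python
from dataclasses import dataclass

# Miniature AST with one desugaring rule: unless(X){...} -> if(!X){...}.
@dataclass
class Unless:
    cond: str
    body: str

@dataclass
class Not:
    expr: str

@dataclass
class If:
    cond: object
    body: str

def lower(node):
    # The single rewrite rule; everything else passes through untouched.
    if isinstance(node, Unless):
        return If(Not(node.cond), node.body)
    return node

lowered = lower(Unless("x > 0", "doThing()"))
print(lowered)  # If(cond=Not(expr='x > 0'), body='doThing()')
```

The grammar extension and IDE export are the genuinely hard parts; the tree transformation itself is the easy three lines above.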
> What about a language where for any given bit of code, the dynamicness is only a phase of compilation?
This is (essentially) Crystal lang's type system. You end up with semantic analysis/compilation taking a significant amount of time, longer than other comparable languages, and using a lot of resources to do so.
It's not very convincing to me when the article talks about truly relational language but fails to mention Prolog and anything that we learned from it.
The only thing I want added to every programming language I use is the ability to call functions and handle data structures provided by libraries and services written in other languages without me having to write arcane wrappers.
The section about language support for modular monoliths reminds me of John Lakos's "Large-Scale C++ Software Design", which focuses on the physical design/layout of large C++ projects to enforce interfaces and reduce coupling and compilation time. Example recommendations include defining architecture layers using subdirectories and the PImpl idiom. It's pretty dated (1996, so pre-C++98), but still a unique perspective on an overlooked topic.
I think the problem with "big" language ideas is, that as long as they match exactly your needs, they're great, but if they're slightly off, they can be a pain in the ass.
I'm wondering if languages could provide some kind of meta information, hooks or extension points, which could be used to implement big ideas on top. These big ideas could then be reused and modified depending on the needs of the project.
> A Truly Relational Language... Value Database
I helped on a language called Eve about 10 years ago. A truly relational language was exactly what that language was supposed to be, or at least that's what we were aiming at as a solution for a user-centric programming language.
The language we came up with was sort of like Smalltalk + Prolog + SQL. Your program was a series of Horn clauses backed by an Entity-Attribute-Value relational database. So you could write queries like "Search for all the clicks and get those whose target is a specific id, then as a result create a new fact indicating a button was pressed. Upon the creation of that fact, change the screen to a new page." We even played around with writing programs like this in natural language before LLMs were a thing (you can see some of that here: https://incidentalcomplexity.com/2016/06/10/jan-feb/)
Here's a flappy bird game written in that style: https://play.witheve.com/#/examples/flappy.eve
It's very declarative, and you have to wrap your brain around the reactivity and working with collections of entities rather than individual objects, so programming this way can be very disorienting for people used to imperative OOP languages.
But the results are that programs are much shorter, and you get the opportunity for really neat tooling: time-travel debugging, where you roll the database back to a previous point; "what-if" scenarios, where you ask the system "what would happen if x were y", potentially for many values of y; "why not" scenarios, where you ask the system why a value was not generated; value provenance, where you trace back how a value was generated... the kind of tooling that just doesn't exist in most languages, because they are built to throw away as much information as possible at each stage of compilation. The kind of tooling I'm describing requires keeping and logging information about your program, and then leveraging it at runtime.
Most compilers and runtimes throw that information away as the program goes through the compilation process and as it runs. There is a cost to pay in terms of memory and speed, but I think Python shows that interpretation speed is not that much of a barrier to language adoption.
But like I said, that was many years ago and that team has disbanded. I think a lot of what we had in Eve still hasn't reached mainstream programming, although some of what we were researching found its way into Excel eventually.
> Loosen Up The Functions... Capabilities... Production-Level Releases... Semi-Dynamic Language... Modular Monoliths
I really like where the author's head at, I think we have similar ideas about programming because I've been developing a language called Mech that fits these descriptors to some degree since Eve development was shut down.
https://github.com/mech-lang/mech
So this language is not supposed to be relational like Eve, but it's more like Matlab + Python + ROS (or Erlang if you want to keep it in the languages domain).
I have a short 10 min video about it here: https://www.hytradboi.com/2022/i-tried-rubbing-a-database-on... (brief plug for HYTRADBOI 2025, Jamie also worked on Eve, and if you're interested in the kinds of thing the author is, I'm sure you'll find interesting videos at HYTRADBOI '22 archives and similarly interested people at HYTRADBOI '25), but this video is out of date because the language has changed a lot since then.
Mech is really more of a hobby than anything since I'm the only one working on it aside from my students, who I conscript, but if anyone wants to tackle some of these issues with me I'm always looking for collaborators. If you're generally interested in this kind of stuff drop by HYTRADBOI, and there's also the Future Of Coding slack, where likeminded individuals dwell: https://futureofcoding.org. You can also find this community at the LIVE programming workshop which often coincides with SPLASH: https://liveprog.org
For "Semi-Dynamic Language" it might be worth looking into RPython: interpreters written in RPython have two phases. In the first phase one has full Python semantics, but in the second phase everything is assumed to be less dynamic, more restricted (the R of RPython?), so the residual interpreter is then transpiled to C sources which, although compiled, can also make use of the built-in GC and JIT.
Thanks - this was one of the more interesting things I've read here in a while.
I wonder if "Programming languages seem to have somewhat stagnated to me.", a sentiment I share, is just me paying less attention to them or a real thing.
In which Jerf longs for PHP. Every single point has been in it, and actively used, for a long while. The __call() & friends is particularly nifty: a simple mental model, broad applicability, and in practice used sparingly to great effect.
All in all a very enjoyable post.
As far as "semi-dynamic" goes, C# has an interesting take coming from the other direction - i.e. a fully statically typed language originally bolting dynamic duck typing later on.
It's done in a way that allows for a lot of subtlety, too. Basically you can use "dynamic" in lieu of most type annotations, and what this does is make any dispatch (in a broad sense - this includes stuff like e.g. overload resolution, not just member dispatch) on that particular value dynamic, but without affecting other values involved in the expression.
a) Capabilities:
Perhaps a programming language is not the right abstraction at which to implement capabilities; they need strong support from hardware and the OS. An OS with CPU-based memory segmentation is an old idea probably worth re-exploring.
Implementing capabilities in programming language constructs will only increase cognitive load, and it will not help programmer productivity [1].
b) Semi-Dynamic Language:
Dynamic language is what we want but static language is what we need [TM]. Instead of making dynamic language more static why not make static language more dynamic?
I think the D language is moving in the right direction with the default GC, RDMD-based scripting, CTFE, and Variant-based standard library features [2].
c) A Truly Relational Language:
Relational is only one of the techniques of data processing; other popular ones are spreadsheets, graphs, and matrices.
Rather than constraining programming languages with relational-native constructs, it would be better to provide generic mechanisms with fundamental constructs for data, such as associative array algebra [3].
d) Value Database:
This relates closely to (c) and could fall out as an excellent by-product of it.
e) A Language To Encourage Modular Monoliths:
I think this is the best point and idea in the entire article, but as it rightly points out, this is mainly an architecture problem, and the programming language plays a supporting role. It's the same as how the Internet is based on packet switching rather than circuit switching, regardless of which languages the RFCs and standards are implemented in.
However, for OS architecture, with regard to the Linus vs. Tanenbaum debate, the modular monolith is what we have now and is the most popular, in the form of Linux, Windows, and (some say) macOS; together they cover more than 99% of our OSes.
[1] Cognitive load is what matters:
https://news.ycombinator.com/item?id=42489645
[2] std.variant:
https://dlang.org/phobos/std_variant.html
[3] Mathematics of Big Data: Spreadsheets, Databases, Matrices, and Graphs:
https://mitpress.mit.edu/9780262038393/mathematics-of-big-da...
Totally agree that programming languages are a bit stagnant, with most new features being either trying to squeeze a bit more correctness out via type systems (we're well into diminishing returns here at the moment), or minor QoL improvements. Both are useful and welcome but they aren't revolutionary.
That said, here's some of the feedback of the type you said you didn't want >8)
(1) Function timeouts. I don't quite understand how what you want isn't just exceptions. Use a Java framework like Micronaut or Spring that can synthesize RPC proxies and you have things that look and work just like function calls, but which will throw exceptions if they time out. You can easily run them async by using something like "CompletableFuture.supplyAsync(() -> proxy.myCall(myArgs))" or in Kotlin/Groovy syntax with a static import "supplyAsync { proxy.myCall(myArgs) }". You can then easily wait for it by calling get() or skip past it. With virtual threads this approach scales very well.
The hard/awkward part of this is that APIs are usually defined these days in a way that doesn't actually map well to standard function calling conventions because they think in terms of POSTing JSON objects rather than being a function with arguments. But there are tools that will convert OpenAPI specs to these proxies for you as best they can. Stricter profiles that result in more idiomatic and machine-generatable proxies aren't that hard to do, it's just nobody pushed on it.
(2) Capabilities. A language like Java has everything needed to do capabilities (strong encapsulation, can restrict reflection). A java.io.File is a capability, for instance. It didn't work out because ambient authority is needed for good usability. For instance, it's not obvious how you write config files that contain file paths in systems without ambient authority. I've seen attempts to solve this and they were very ugly. You end up needing to pass a lot of capabilities down the stack, ideally in arguments but that breaks every API ever designed so in reality in thread locals or globals, and then it's not really much different to ambient authority in a system like the SecurityManager. At least, this isn't really a programming language problem but more like a standard library and runtime problem.
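The java.io.File-as-capability idea translates to any memory-safe language: hand code an object that closes over its authority, instead of letting it open arbitrary paths ambiently. A rough Python sketch (the DirCap class is my own invention):

```python
import os
import tempfile

# Object-capability-style file access: code receives a directory *handle*
# and can only reach files under it; there is no ambient open-any-path.
class DirCap:
    def __init__(self, root):
        self._root = os.path.realpath(root)

    def open(self, name, mode="r"):
        path = os.path.realpath(os.path.join(self._root, name))
        if not path.startswith(self._root + os.sep):
            raise PermissionError(f"{name!r} escapes the capability")
        return open(path, mode)  # the builtin open: the ambient authority
                                 # lives only inside this class

root = tempfile.mkdtemp()
cap = DirCap(root)
with cap.open("notes.txt", "w") as f:
    f.write("ok")
with cap.open("notes.txt") as f:
    content = f.read()
print(content)  # ok
# cap.open("../etc/passwd") raises PermissionError instead of escaping.
```

The sketch also shows the usability problem described above: every function that touches files now needs a DirCap threaded down to it, which is exactly the "pass a lot of capabilities down the stack" cost.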
(3) Production readiness. The support provided by app frameworks like Micronaut or Spring for things like logging is pretty good. I've often thought that a new language should really start by taking a production server app written in one of these frameworks and then examining all the rough edges where the language is mismatched with need. Dependency injection is an obvious one - modern web apps (in Java at least) don't really use the 'new' keyword much which is a pretty phenomenal change to the language. Needing to declare a logger is pure boilerplate. They also rely heavily on code generators in ways that would ideally be done by the language compiler itself. Arguably the core of Micronaut is a compiler and it is a different language, one that just happens to hijack Java infrastructure along the way!
What's interesting about this is that you could start by forking javac and go from there, because all the features already exist and the work needed is cleaning up the resulting syntax and semantics.
(4) Semi-dynamic. This sounds almost exactly like Java and its JIT. Java is a pretty dynamic language in a lot of ways. There's even "invokedynamic" and "constant dynamic" features in the bytecode that let function calls and constants be resolved in arbitrarily dynamic ways at first use, at which point they're JITd like regular calls. It sounds very similar to what you're after and performance is good despite the dynamism of features like lazy loading, bytecode generated on the fly, every method being virtual by default etc.
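The "resolve at first use, then behave like a regular call" idea can be sketched as a tiny call-site cache in Python. This is only loosely analogous to invokedynamic (no JIT involved); everything here is illustrative.

```python
# A toy call site that resolves its target by arbitrary logic on
# first use, then caches it -- loosely the shape of invokedynamic.
class CallSite:
    def __init__(self, resolver):
        self.resolver = resolver   # arbitrary resolution logic
        self.target = None         # filled in at first use

    def __call__(self, *args):
        if self.target is None:
            self.target = self.resolver()  # runs exactly once
        return self.target(*args)

resolutions = []

def resolve():
    resolutions.append(1)          # record that resolution happened
    return lambda a, b: a + b      # the "linked" target

site = CallSite(resolve)
first = site(1, 2)    # triggers resolution
second = site(3, 4)   # uses the cached target
```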
(5) There's a library called Permazen that I think gets really close to this (again for Java). It tries to match the feature set of an RDBMS but in a way that's far more language integrated, so no SQL, all the data types are native etc. But it's actually used in a mission critical production application and the feature set is really extensive, especially around smooth but rigorous schema evolution. I'd check it out, it certainly made me want to have that feature set built into the language.
(6) Sounds a bit like PL/SQL? I know you say you don't want SQL but PL/SQL and derivatives are basically regular programming languages that embed SQL as native parts of their syntax. So you can do things like define local variables where the type is "whatever the type of this table column is" and things like that. For your example of easily loading and debug dumping a join, it'd look like this:
DECLARE
  -- Define a custom record type for the selected columns
  TYPE EmpDept IS RECORD (
    name   employees.first_name%TYPE,
    salary employees.salary%TYPE,
    dept   departments.department_name%TYPE
  );
  empDept EmpDept;
BEGIN
  -- Select columns from the joined tables into the record
  SELECT e.first_name, e.salary, d.department_name INTO empDept
  FROM employees e JOIN departments d ON e.department_id = d.department_id
  WHERE e.employee_id = 100;
  -- Output the data
  DBMS_OUTPUT.PUT_LINE('Name: ' || empDept.name);
  DBMS_OUTPUT.PUT_LINE('Salary: ' || empDept.salary);
  DBMS_OUTPUT.PUT_LINE('Department: ' || empDept.dept);
END;
It's not a beautiful language by any means, but if you want a natively relational language I'm not sure how to make it more so.

(7) I think basically all server apps are written this way in Java, and a lot of client (mobile) apps too. It's why I think a language with integrated DI would be interesting. These frameworks provide all the features you're asking for already (overriding file systems, transactions, etc), but you don't need to declare interfaces to use them. Modern injectors like Avaje Inject, Micronaut etc let you directly inject classes. Then you can override that injection for your tests with a different class, like a subclass. If you don't want a subtyping relationship then yes you need an interface, but that seems OK if you have two implementations that are really so different they can't share any code at all. Otherwise you'd just override the methods you care about.
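The "inject a concrete class, override it with a subclass in tests" pattern can be sketched in Python without any framework (all names here are invented):

```python
# Injecting a concrete class and overriding it with a subclass for
# tests -- no interface needed unless implementations share nothing.
import time

class Clock:
    def now(self) -> int:
        return int(time.time())

class Greeter:
    def __init__(self, clock: Clock):      # the dependency is injected
        self.clock = clock

    def greet(self) -> str:
        return f"hello at {self.clock.now()}"

class FixedClock(Clock):                   # test override: a subclass
    def now(self) -> int:
        return 1234

greeting = Greeter(FixedClock()).greet()
```

A framework would wire the constructor argument for you; the test-time substitution is the part that matters here.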
Automatically working out the types of parameters sounds a bit like Hindley-Milner type inference, as seen in Haskell.
(8) The common way to do this in the Java world is have an annotation processor (compiler plugin) that does the lints when triggered by an annotation, or to create an IntelliJ plugin or pre-canned structural inspection that does the needed AST matching on the fly. IntelliJ's structural searches can be saved into XML files in project repositories and there's a pretty good matching DSL that lets you say things like "any call to this method with arguments like that and which is inside a loop should be flagged as a warning", so often you don't need to write a proper plugin to find bad code patterns.
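The same kind of structural check ("any call to this method inside a loop should be flagged") can be sketched with Python's `ast` module; the rule and the method name `fetch` are invented for illustration.

```python
# Minimal structural lint: flag calls to `fetch` that occur inside a
# for/while loop. The rule is made up for illustration.
import ast

def find_calls_in_loops(source: str, name: str):
    tree = ast.parse(source)
    hits = []
    for loop in ast.walk(tree):
        if isinstance(loop, (ast.For, ast.While)):
            for node in ast.walk(loop):
                if (isinstance(node, ast.Call)
                        and isinstance(node.func, ast.Name)
                        and node.func.id == name):
                    hits.append(node.lineno)
    return hits

code = """
fetch(1)
for i in range(3):
    fetch(i)
"""
warnings = find_calls_in_loops(code, "fetch")  # only the call on line 4
```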
I realize you didn't want feedback of the form "but X can do this already", still, a lot of these concepts have been explored elsewhere and could be merged or refined into one super-language that includes many of them together.
> Some Lisps may be able to do all this, although I don’t know if they quite do what I’m talking about here; I’m talking about there being a very distinct point where the programmer says “OK, I’m done being dynamic” for any given piece of code.
In Common Lisp there are tricks you can pull like declaring functions in a lexical scope (using labels or flet) to remove their lookup overhead. But CL is generally fast enough that it doesn't really matter much.
For relational, look into term-rewriting systems which just keep transforming specified relationships into other things. Maude’s rewriting logic and engine could probably be used for relational programming. It’s fast, too.
As for "capabilities", I'm not sure I fully understand how that is advantageous to the convention of passing the helper function ("capability") as an argument to the "capable" function.
For instance, in Zig, you can see that a function allocates memory (capability) because it requires you to pass an allocator that it can call!
I'd like to see if others are more creative than me!
Regarding the 'Modular Monoliths' bit, I wholeheartedly agree. I always found it kind of disappointing that while we're told in our OOP classes that using interfaces increases modularity and cohesion and decreases coupling, in most programming languages you're still relying on the nominal type of said interface. For implementations to be interchangeable, all libraries have to agree on a common interface at the source code level, which is vanishingly rare. For interfaces to truly live up to what they're describing, they ought to be structural (or whatever the equivalent for functions is of what structural typing is for data).
Edit, since I remembered Go has this behaviour: I think Go's auto-interfaces are easily one of its biggest selling points.
Monads can abstract over most of these things.
roc-lang is pretty cool
My 4 cents:
- I like the idea of a multiparadigm programming language (many exist), but one where you can write parts of the code in different languages, not trying to embed everything in the same syntax. I think this way you can write code and express your ideas differently.
- A [social] programming language where some variables and workflows are shared between users [1][2].
- A super-reflective programming language, inspired by Python, Ruby, and others, where you can override practically everything to behave differently. For example, in Python you can override a function call on an object but not on the base system; the globals() dict cannot be overridden. See [3]. This way you save a lot of time writing a parser and the language's basic logic.
- A declarative language to stop reinventing the wheel: "I need a website with a secure login/logout/forgot_your_password_etc, choose a random() template". It doesn't need to be in natural language though.
[1] https://blog.databigbang.com/ideas-egont-a-web-orchestration...
[2] https://blog.databigbang.com/egont-part-ii/
[3] https://www.moserware.com/2008/06/ometa-who-what-when-where-...
> solve the problem by making all function calls async.
This is just blocking code and it’s beautiful.
Is it just me, or is this "Capabilities" thing, whatever it is, not explained at all?
"Semi-Dynamic Language" - Zig?
"Value Database" - Mumps? lol
“It feels like programming languages are stagnating.”
As they should be. Not every language needs to turn into C++, Rust, Java, C#, or Kotlin.
The only group I see lamenting about features these days are PL theorists, which is fine for research languages that HN loves but very few use outside the bubble.
Some of us like the constraints of C or Go.
Folks, this is not a process that converges. We've now had 60 years of language design, use and experience. We're not going to get to an ideal language because there are (often initially hidden) tradeoffs to be made. Everyone has a different idea of which side of each tradeoff should be taken. Perhaps in the future we can get AI to generate and subsequently re-generate code, thereby avoiding the need to worry too much about language design (AI doesn't care that it constantly copies/pastes or has to refactor all the time).
Marklar
https://www.youtube.com/watch?v=BSymxjrzdXc
I found it amusing most of the language is supposedly contextually polymorphic by definition. =3
For "value database", it seems to me that the trick is, you can't just ship the executable. You have to ship the executable plus the stored values, together, as your installation image. Then what you run in production is what you tested in staging, which is what you debugged on your development system.
I mean, there still may be other things that make this a Bad Idea(TM). But I think this could get around the specific issue mentioned in the article.
Several unrelated comments:
* In general, whenever I hear "compiler will optimize this", I die a little on the inside. Not even because it's delegating solution of the newly created problem to someone else, but because it creates a disconnect between what the language tells you is possible and what actually is possible. It encourages this kind of multi-layer lie that, in anger, you will have to untangle later, and will be cursing a lot, and will not like the language one bit.
* Capabilities. Back in the days when ActionScript 3 was relevant, there was a big problem of dynamic code sharing. Many services tried to implement module systems in AS3, but the security was not done well. To give you an example: a gaming portal written in AS3 wants to load games written by programmers who aren't the portal programmers (and could be malicious, i.e. trying to steal data from other programs, or cause them to malfunction, etc.) ActionScript (and by extension ECMAScript 4) had a concept of namespaces borrowed from XML (so not like in C++), where the availability of a particular function was, among other things, governed by whether the caller is allowed to access the namespace. There were some built-in namespaces, like "public", "private", "protected" and "internal", that functioned similarly to Java's namesakes. But users were allowed to add any number of custom namespaces. These namespaces could then be shared through a function call in a public namespace. I.e. the caller would have to call the function and supply some kind of password, and if the password matched, the function would return the namespace object, and then the caller could use that namespace object to call the functions in that namespace. I tried to promote this concept in Flex Framework for dealing with module loading, but it was never seriously considered... Also, people universally hated XML namespaces (similar to how people seem to universally hate regular expressions). But I still think it could've worked...
* All this talk about "dynamic languages"... I really don't like it when someone creates a bogus category and then says something very general about it. That whole section has no real value.
* A Truly Relation Language -- You mean, like Prolog? I wish more relational databases exposed their content via Prolog(like) language in addition to SQL. I believe it's doable, but very few people seem to want it, and so it's not done.
- Looser functions (badly chosen name)
Timeouts on calls are, as the OP mentions, a thing in Erlang. Inter-process and inter-computer calls in QNX can optionally time out, and this includes all system calls that can block. Real-time programs use such features. Probably don't want it on more than that. It's like having exceptions raised in things you thought worked.
- Capabilities
They've been tried at the hardware level, and IBM used them in the System/38, but they never caught on. They're not really compatible with C's flat memory model, which is partly why they fell out of fashion. Capabilities mean having multiple types of memory. They might come back if partially-shared multiprocessors make a comeback.
- Production-Level Releases
That's kind of vague. Semantic versioning is a related concept. It's more of a tooling thing than a language thing.
- Semi-Dynamic Language
I once proposed this for Python. The idea was that, at some point, the program made a call that told the system "Done initializing". After that point, you couldn't load more code, and some other things that inhibit optimization would be prohibited. At that point, the JIT compiler runs, once. No need for the horrors inside PyPy which deal with cleanup when someone patches one module from another.
Guido didn't like it.
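A toy version of that "done initializing" barrier, in Python (the registry and the freeze call are invented for illustration):

```python
# Toy sketch of a "done initializing" point: after freeze(), no new
# code can be registered -- the property that would let a JIT run
# once over a now-stable program.
class Runtime:
    def __init__(self):
        self.functions = {}
        self.frozen = False

    def register(self, name, fn):
        if self.frozen:
            raise RuntimeError("program already finished initializing")
        self.functions[name] = fn

    def freeze(self):
        self.frozen = True  # in the real proposal: the JIT runs here, once

rt = Runtime()
rt.register("double", lambda x: x * 2)
rt.freeze()
try:
    rt.register("late", lambda x: x)
    rejected = False
except RuntimeError:
    rejected = True  # late loading is refused after the barrier
```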
- Value Database
The OP has a good criticism of why this is a bad idea. It's an old idea, mostly from LISP land, where early systems saved the whole LISP environment state. Source control? What's that?
- A Truly Relational Language
Well, in Python, almost everything is a key/value store. The NoSQL people were going in that direction. Then people remembered that you want atomic transactions to keep the database from turning to junk, and mostly backed off from NoSQL where the data matters long-term.
- A Language To Encourage Modular Monoliths
Hm. Needs further development. Yes, we still have trouble putting parts together. There's been real progress. Nobody has to keep rewriting Vol. I of Knuth algorithms in each new project any more. But what's being proposed here?
- Modular Linting
That's mostly a hack for when the original language design was botched. View this from the point of the maintenance programmer - what guarantees apply to this code? What's been prevented from happening? Rust has one linter, and you can add directives in the code which allow exceptions. This allows future maintenance programmers to see what is being allowed.