I find it amusing, yet sad, that some here expect a podcast to be exclusively a source of information where every second delivers bite-sized facts. What about entertainment? What about engaging with a topic for hours and eventually learning something that's not a fact, but a new perspective?
Link to the paper: https://arxiv.org/pdf/2505.15327v1
Forgive my ignorance about AI, but has anyone tried a "nondeterministic" language that somehow uses learning to approximate the answer? I'm not talking about the current cycle where you train your model on a zillion inputs, tune it, and release it. I mean a language where you tell it what a valid output looks like, deploy it, and let it learn as it runs.
Ex: my car's heater doesn't work the moment you turn it on, so one of my first tasks when I enter the car is to turn the blower down to 0 until the motor warms up. A learning language could be used here: given free rein over all the (non-safety-critical) controls, and told that its job is to minimize the number of "corrections" made by the user. Eventually it would earn its reward by initializing the fan blower to 0, but it might take 100 cycles to learn this. Rather than train it on a GPU, the language could express the reward and let the program learn over time, even though its output would be "wrong" quite often.
That's an esoteric language I'd like to see.
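What you're describing is essentially online reinforcement learning, and the simplest form of it is a multi-armed bandit. Here's a minimal sketch of the blower example as an epsilon-greedy bandit; everything in it (the action set, the reward shape, the simulated user) is a hypothetical stand-in I made up, not anyone's real car API:

```python
import random

ACTIONS = [0, 1, 2, 3]   # candidate initial blower speeds
EPSILON = 0.1            # fraction of cold starts spent exploring

def run(episodes=500, seed=0):
    rng = random.Random(seed)
    counts = {a: 0 for a in ACTIONS}
    values = {a: 0.0 for a in ACTIONS}   # running mean reward per action
    for _ in range(episodes):
        # explore occasionally, otherwise pick the best-known action
        if rng.random() < EPSILON:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: values[x])
        # simulated user: corrects the blower once whenever it isn't 0
        corrections = 0 if a == 0 else 1
        reward = -corrections            # reward = minimizing corrections
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]
    return max(ACTIONS, key=lambda x: values[x])

print(run())   # prints 0: the learner settles on starting the blower at 0
```

The "language" part would be making the reward declaration (`minimize corrections`) a first-class construct, with the exploration/update loop handled by the runtime instead of hand-written as above.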
Fractran is great for emulating quantum computers on classical hardware.
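For anyone who hasn't met Fractran: a program is just an ordered list of fractions, the state is a single integer, and each step multiplies the state by the first fraction that keeps it an integer. A minimal interpreter sketch (the interpreter is my own illustration; the program is Conway's well-known PRIMEGAME, whose pure powers of 2 in the output have exactly the primes as exponents):

```python
from fractions import Fraction

# Conway's PRIMEGAME: run from n = 2 and it emits 2^2, 2^3, 2^5, 2^7, ...
PRIMEGAME = [Fraction(*p) for p in [
    (17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29), (95, 23),
    (77, 19), (1, 17), (11, 13), (13, 11), (15, 14), (15, 2), (55, 1)]]

def fractran(program, n, max_steps):
    """Run a Fractran program from state n, returning all intermediate states."""
    out = []
    for _ in range(max_steps):
        for f in program:
            m = n * f
            if m.denominator == 1:       # first fraction giving an integer wins
                n = m.numerator
                out.append(n)
                break
        else:
            break                        # no fraction applies: halt
    return out

# Keep only the exponents of the pure powers of 2 in the state stream.
powers = [v.bit_length() - 1
          for v in fractran(PRIMEGAME, 2, 20000)
          if v & (v - 1) == 0]
print(powers[:4])   # the primes, in order: [2, 3, 5, 7]
```

Whether it's "great for emulating quantum computers" I'll leave to the commenter, but it is Turing-complete, which is remarkable for fourteen fractions.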
Yes. This is a very good podcast. Give it a chance.
OT but I couldn't stop laughing at the very first sentence of the transcript:
> One of the biggest goals of this show — our raisin detour, if you will...