The real risk isn't that AI will be "wrong" too often; it's that it will be right often enough that humans stop practising the skill. Pilots lose manual flying proficiency with autopilot, drivers lose wayfinding sense with GPS, and radiologists already double-check less when the AI agrees with them.
What makes medicine different is that the tail risks matter: you only need to miss one subtle but lethal case because you've dulled your instincts. And unlike navigation or driving, you don't get daily "reps" to stay sharp. Deskilling here isn't hypothetical; it compounds silently until a crisis forces a clinician to act without the crutch.
Of course deskilling will happen. But marketing says the machine is right more often than the operator is, and people want it too (we can replace docs with AI today, yadda yadda). Soooo... to be expected? It's just that the machine then has to work correctly, and that part isn't on the endoscopist, right?
I’ve been reading about deskilling these days. I’ll admit that in narrow specialties, with really clean training data and expert results-checking, AI can lighten the load on professionals. But here we have professionals losing their edge. How and why? Well, that’s another study, I suppose.
My main concern is for young people. They are given problem assignments of increasing difficulty in order to learn by thinking things through, yet they often rely on pushbutton answers. I recall one tough physics course where I read through worked solutions rather than working “from scratch”. Long story short, I still learned the methods and steps along the way, instead of copying and pasting a result.
Will young people not even see the approach and steps?
Perhaps courses should emphasize problem-solving over answers, or, if AI is to be everyone’s “wingman”, teach how to use it reliably and responsibly (if that is possible).
DHH [0] pointed out the futility of CVs: they conceal the important bits whether a human or an AI reads them. I don’t know what to make of this, being one of those people who took things apart to learn how they worked, back in the days when you could take things apart, when things weren’t composed of black boxes or weren’t entirely one black box.
“Look at real work,” he says. How?
Here is the pre-print: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5070304
This part is interesting to me:
"We believe that continuous exposure to decision support systems like AI may lead to the natural human tendency to over-rely on their recommendations, leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance."