The skill of the future is not 'AI', but 'Focus'

weird_trousers | 184 points

Losing the skill of focus is something I see with every new batch of students. It's not just LLMs; almost every app and startup is competing for the same limited attention from every user.

What LLMs have done for most of my students is remove all the barriers to an answer they once had to work for. It’s easy to get hooked on fast answers and forget to ask why something works. That said, I think LLMs can support exploration—often beyond what Googling ever did—if we approach them the right way.

I’ve seen moments where students pushed back on a first answer and uncovered deeper insights, but only because they chose to dig. The real danger isn’t the tool, it’s forgetting how to use it thoughtfully.

arkj | 19 days ago

Using aimbot in Gunbound didn't make players better. Yes, it changed everything: it destroyed the game ecosystem.

Can humanity use "literacy aimbot" responsibly? I don't know.

It's just a cautionary tale. I'm not expecting to win an argument. I could come up with counter-anecdotes myself:

ABS made braking in slippery conditions easier and safer. People didn't learn to brake better; they still pushed the pedal harder, thinking it would make the car stop faster, not realizing the complex dynamics of "making a car stop". That changed everything. It made cars safer.

Also, just an anecdote.

Sure, a lot of people need focus. Some people don't, they need to branch out. Some systems need aimbot (like ABS), some don't (like Gunbound).

The future should be home to all kinds of skills.

alganet | 19 days ago

I'm definitely gonna get hate for saying this, but: the rise of coding with LLM assistants is going to worsen an issue our industry is already struggling with. We have tons of developers out there who do not know their programming fundamentals, who are utterly rudderless without heaps upon heaps of framework code doing the work for them, and who are now being further enabled by machines that write even that code for them, with some tweaking afterwards.

I have interacted with software developers at conferences who cannot do basic things with computers: navigate file systems, make changes to the Windows registry, find and use environment variables, diagnose and fix PC issues... In a perfect world your IT department sorts this stuff out for you, but I struggle to take seriously someone who claims to create software while seemingly lacking basic computer literacy in a number of areas.

And I'm sorry, "it compiles and runs" is the bare fucking minimum for software quality. We have machines these days that would run circles around my first PC from the late '90s, but despite that, everything is slower and runs worse. My desktop messaging apps are each currently sucking up over 600 MB of RAM apiece, nearly 3 times what my original PC had in total. Everything is some bloated shite that requires internet access at all times or it utterly crashes and dies, and I'm sorry, but I cannot separate in my mind today's state of software from the fact that we seemingly have a large contingent of software developers out there who can't bloody use computers. Cheap-ass management shares the blame too, to be clear, but I think these are nested problems.

ToucanLoucan | 19 days ago

> Search engines offer a good choice between Exploration (crawl through the list and pages of results) and Exploitation (click on the top result). LLMs, however, do not give this choice.

I've actually found that LLMs are great at exploration for me. I'd argue, even better than exploitation. I've solved many a complex problem by using an LLM as a thought partner. I've refined many ideas by getting the LLM to brainstorm with me. There's this awesome feedback loop you can create with the LLM when you're in exploration mode that is impossible to replicate on your own, and still somewhat difficult even with a human thought partner.

Ozzie_osman | 19 days ago

Being allowed to focus seems to be a privilege these days.

When I started in the 90s I could work on something for weeks without much interruption. These days there is almost always some scrum master, project manager or random other manager who wants to get an update or do some planning. Doing actual work seems to have taken a backseat to talking about work.

vjvjvjvjghv | 19 days ago

The flip side of focus (to me) is responsiveness. A post to SO might deliver the exact answer I need, but it takes focus to write the right question, patience to wait for a response, and then time spent iterating in the comments. In contrast, an LLM will happily tell me the wrong thing, instantaneously. It's responsive.

Good engineers must also be responsive to their teammates, managers, customers, and the business. Great engineers also find a way to weave in periods of focus.

I’m curious how others navigate these?

It seems there was a large culture shift when Covid hit: non-async, non-remote people all moved online and expected online to work like in person. I feel pushed to be more responsive at the cost of focus.

On the flip side, I've given time and space to engineers so they could focus, only to come back and find they had abused that time and trust. Or some well-meaning engineers got lost in the weeds and lost the narrative of *why* they were focusing.

It is super easy to measure responsiveness: how long did it take to respond. It's much harder to measure quality and growth, especially when being vulnerable about what you don't know, or about the failure to make progress, is a truly senior-level skill.

How do we find balance?

schneems | 19 days ago

When I use LLMs, I quickly lose focus.

Copy-paste, copy-paste. No real understanding of the solutions, even in areas of my expertise. I just don't feel like understanding the flood of information when there's no real purpose behind the understanding. While I probably (?) get more done, I also just don't enjoy it. But I also can't go back to googling for hours now that this ready-made solution exists.

I wish it had never been invented.

(Obviously scoped to my enjoyment of hobbyist projects, let's keep AI cancer research out of the picture..)

knallfrosch | 19 days ago

"In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it."

Herbert Simon (1971)

namaria | 19 days ago

I'm old enough to remember the myriad of experts 10+ years ago who were actively selling the view that smartphones with constantly connected social media would change everything; we just had to learn to use them wisely.

obscurette | 19 days ago

> This idea summarizes why I disagree with those who equate the LLM revolution to the rise of search engines, like Google in the 90s. Search engines offer a good choice between Exploration (crawl through the list and pages of results) and Exploitation (click on the top result). LLMs, however, do not give this choice, and tend to encourage immediate exploitation instead. Users may explore if the first solution does not work, but the first choice is always to exploit.

Well said, and an interesting idea, but most of my LLM usage (besides copilot autocomplete) is actually very search-engine-esque. I ask it to explain existing design decisions, or to search for a library that fits my needs, or come up with related queries so I can learn more.

Once I've chosen a library or an approach for the task, I'll have the LLM write out some code. For anything significantly more substantive than copilot completions, I almost always do some exploring before I exploit.

djsavvy | 19 days ago

It's going to be a different kind of focus.

Technologies are regularly predicted to diminish a capability that was previously considered important.

Babbage came up with the ideas for his engines after getting frustrated with log tables - how many people reading this have used a log table or calculated one recently?

Calculators meant kids wouldn't need to do arithmetic by hand any more and so would not be able to do maths. In truth they just didn't have to do it by hand any more - they still needed the skills to interpret the results, they just didn't have to do the hard work of creating the outputs by pen and paper.

They also lost the skill of using slide rules, which gave us approximations; calculators allowed us to be precise, so slide rules were no longer needed.

Computers, similar story.

Then the same came with search engines in our pockets. "Oh no, people can find an answer to anything in seconds, they won't remember things." This is borne out: there have been studies showing that recall diminishes if your phone is even in the same room. But you still need to know what to look for, and know what to do with what you find.

I think this'll still be true in the future, and I think TFA kind of agrees, but it seems to be doing the "all may be lost" vibe by insisting that you still need foundational skills. You don't need the foundational skills to find out what 24923 * 923 is; you can quickly get the answer and use it however you need.

I just think the work shifts - you'll still need to know how to craft your inputs carefully (vibe coding works better if you develop a more detailed specification), and you'll still need to process the output, but you'll become less connected to the foundation and for 99% of the time, that's absolutely fine in the same way it has been with calculators, and so on.
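To make the calculator point concrete, the arithmetic above is a one-liner; the machine produces the result instantly, and the only remaining skill is deciding what to do with it:

```python
# You don't need long-multiplication skills to answer 24923 * 923;
# the foundational hand-calculation has been offloaded to the machine.
product = 24923 * 923
print(product)  # 23003929
```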

PaulRobinson | 19 days ago

Framing LLMs as encouraging exploitation is important, because they can still be powerful tools for exploration. The difference lies in the interface: LLM interfaces are heavily focused on exploitation, whereas search engine interfaces encourage exploration.

Newer models often end responses with questions and thoughts that encourage exploration, as do features like ChatGPT's follow-up suggestions. However, a lot of work remains to be done on LLM interfaces to balance exploitation and exploration without limiting the AI's capabilities.

bikedspiritlake | 19 days ago

> LLMs, however, do not give this choice, and tend to encourage immediate exploitation instead. Users may explore if the first solution does not work, but the first choice is always to exploit.

You can ask the LLM to generate a number of solutions, though; exploration becomes possible and relatively easy then.

And I say that as someone who dislikes LLMs with a passion.
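The generate-several-then-choose idea can be sketched generically. Note that `ask_llm` and `score` below are hypothetical stand-ins (not a real API): sample several candidates first (exploration), then commit to the best one (exploitation):

```python
def ask_llm(prompt: str, n: int) -> list[str]:
    # Hypothetical stand-in for n independent samples from a model.
    return [f"candidate {i} for: {prompt}" for i in range(n)]

def score(candidate: str) -> float:
    # Stand-in for any evaluation: tests passing, human judgment, etc.
    return float(len(candidate))

def explore_then_exploit(prompt: str, n: int = 5) -> str:
    candidates = ask_llm(prompt, n)    # exploration: sample n solutions
    return max(candidates, key=score)  # exploitation: keep the best

print(explore_then_exploit("sort a list stably"))
```

The point is only that the choice between one answer and many is a prompt-level decision, not something the interface has to make for you.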

thih9 | 19 days ago

This article makes no sense. It criticizes current LLMs and then, without stopping for a second, assumes future LLMs will have the same problems, even though hallucination levels have gone down with every generation, and even though LLMs do better on every test and benchmark we can come up with, generation after generation.

bufferoverflow | 19 days ago

I frequent Hacker News and have not noticed that much buzz around AI and LLMs. The vast bulk of that buzz is insubstantial and therefore off topic here. Sites like LinkedIn, on the other hand, are overrun with the swill.

kazinator | 19 days ago

On the other hand: with thinking models, agents, and future models to come we are offloading the exploration phase to the models themselves. It really depends on constraints and pressures.

blotfaba | 19 days ago

For new developers wanting to learn to code, AI is a great help today. For experienced developers, AI is also great, because it can write tons of code that we already know how to evaluate and test, thanks to years of experience doing it in the "pre-AI" world.

However, the future is uncertain: we will reach a point where most developers have used generated code most of their lives and never developed the coding skills required to fully understand that code.

I guess we'll adapt to it. We always do. For example, I can no longer do long division on paper like I did in elementary school, so I rely totally on computers for all calculation.

quantadev | 19 days ago

The number one skill in the future is the ability to predict the future

Always has been

friendlyprezz | 19 days ago

Good read!

gitroom | 19 days ago

The exploitation and exploration framing got me thinking: what if the LLM generated, say, 5 results at a time and let the user choose the best?

HiPHInch | 19 days ago

People will need AI just to communicate in a polite manner! What is today's politically correct language? Who is the currently approved celebrity? What if you quoted something that is somehow offensive today?!

No, the skill for the future is using AI to carve out a safe space for yourself, so you can focus without distractions!

throeijfjfj | 19 days ago