The last fairly technical career to get surprisingly and fully automated in the way this post worries about: trading.
I spent a lot of time with traders in the early '00s and then the '10s, when the automation was going full tilt.
Common feedback I heard from these highly paid, highly technical, highly professional traders in a niche industry running the world in its way was:
- How complex the job was
- How high a quality bar there was to do it
- How current algos never could do it, and neither could future ones
- How there'd always be edge for humans
Today, the exchange floors are closed, SWEs run trading firms, and the traders who remain steer algos or work in specific markets such as bonds, and now bonds are getting automated. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed, and it happened quickly.
And lastly - LLMs don't have to be "great," they just have to be "good enough."
See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.
Edit - Advice: the job will change, and it may change into steering LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early and at various points in the settlement chain.
I don't worry about it, because:
1) I believe we need true AGI to replace developers.
2) I don't believe LLMs are currently AGI or that if we just feed them more compute during training that they'll magically become AGI.
3) Even if we did invent AGI soon and it replaced developers, I wouldn't really care, because the invention of AGI would be such an insanely impactful, world-changing event that who knows what the world would even look like afterwards. It would be massively changed. Having a development job is the absolute least of my worries in that scenario; it pales in comparison to the transformation the entire world would go through.
Back in the late 80s and early 90s there was a craze called CASE - Computer-Aided Software Engineering. The idea was humans really suck at writing code, but we're really good at modeling and creating specifications. Tools like Rational Rose arose during this era, as did Booch notation which eventually became part of UML.
The problem was that it never worked. When generating the code, the best the tools could do was create all the classes for you and maybe define the methods on each class. They could not provide an implementation unless they also managed the implementation inside the tool itself, which was awful.
Why have you likely not heard of any of this? Because the fad died out in the early 2000s. The juice simply wasn't worth the squeeze.
Fast-forward 20 years and I'm working in a new organization where we're using ArchiMate extensively and are starting to use more and more UML. Just this past weekend I started wondering given the state of business modeling, system architecture modeling, and software modeling, could an LLM (or some other AI tool) take those models and produce code like we could never dream of back in the 80s, 90s, and early 00s? Could we use AI to help create the models from which we'd generate the code?
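Just as a thought experiment, here is roughly what the 90s-era CASE tools could do, sketched as a toy Python generator (the model format is made up, not real ArchiMate or UML): class and method skeletons, with the implementations left as an exercise. The open question above is whether an LLM could now fill in those bodies from the same models.

    # Toy sketch of CASE-style model-to-code generation (illustrative only).
    # It emits class and method skeletons -- the ceiling of the old tools.
    MODEL = {
        "Order": {
            "attributes": ["id", "customer_id", "total"],
            "operations": ["add_line_item", "compute_total"],
        },
    }

    def generate_skeleton(model: dict) -> str:
        lines = []
        for cls, spec in model.items():
            lines.append(f"class {cls}:")
            lines.append(f"    def __init__(self, {', '.join(spec['attributes'])}):")
            for attr in spec["attributes"]:
                lines.append(f"        self.{attr} = {attr}")
            for op in spec["operations"]:
                lines.append(f"    def {op}(self):")
                lines.append("        raise NotImplementedError  # body was never generated")
        return "\n".join(lines)

    print(generate_skeleton(MODEL))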
At the end of the day, I see software architects and software engineers still being engaged, but in a different way than they are today. I suppose to answer your question, if I wanted to future-proof my career I'd learn modeling languages and start "moving to the left" as they say. I see being a code slinger as being less and less valuable over the coming years.
Bottom line, you don't see too many assembly language developers anymore. We largely abandoned that back in the 80s and let the computer produce the actual code that runs. I see us doing the same thing again but at a higher and more abstract level.
I am 61, and I have been a full-time developer since I was about 19. I have lost count of the 'next things to replace developers' I have seen over the years. Many of them showed promise. Many of them continue to be developed. Frameworks with higher and higher levels of abstraction.
I see LLMs as the next higher level of abstraction.
Does this mean it will replace me? At the moment the output is so flawed for anything but the most trivial professional tasks that I simply see, as before, it has a long, long way to go.
Will it put me out of a job? I highly doubt it in my career. I still love it and write stuff for home and work every day of the week. I'm planning on working until I drop dead, as it seems I have never lost interest so far.
Will it replace developers as we know it? Maybe in the far future. But we'll be the ones using it anyway.
I've been thinking about this a bunch and here's what I think will happen as the cost of writing software approaches 0:
1. There will be way more software
2. Most people / companies will be able to opt out of predatory VC-funded software and just spin up their own custom versions that do exactly what they want, without having to worry about being spied on or rug-pulled. I already do this with Chrome extensions; with the help of Claude I've been able to throw together things like a time-based website blocker in a few minutes.
3. The best software will be open source, since it's easier for LLMs to edit and is way more trustworthy than a random SaaS tool. It will also be way easier to customize to your liking
4. Companies will hire far fewer people, probably mostly engineers, to automate routine tasks that would previously have been done by humans (e.g. bookkeeping, recruiting, sales outreach, HR, copywriting / design). I've heard this is already happening with a lot of new startups.
EDIT: for people who are not convinced that these models will be better than them soon, look over these sets of slides from NeurIPS:
- https://michal.io/notes/ml/conferences/2024-NeurIPS#neurips-...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#fine-tun...
- https://michal.io/notes/ml/conferences/2024-NeurIPS#math-ai-...
As a junior dev, I do two conscious things to make sure I'll still be relevant for the workforce in the future.
1. I try to stay somewhat up to date with ML and how the latest things work. I can throw together some Python, let it rip through a dataset from Kaggle, let models run locally, etc. I have my linalg and stats down and practiced. Basically, if I had to make the switch to being an ML/AI engineer, it would be easier than starting from zero (a minimal sketch of the kind of throwaway analysis I mean follows after this list).
2. I otherwise am trying to pivot more to cyber security. I believe current LLMs produce what I would call "untrusted and unverified input" which is massively exploitable. I personally believe that if AI gets exponentially better and is integrated everywhere, we will also have exponentially more security vulnerabilities (that's just an assumption/opinion). I also feel we are close to cyber security being taken more seriously or even regulated e.g. in the EU.
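To make point 1 concrete, here is a minimal sketch of the kind of throwaway Kaggle-dataset analysis I mean, assuming pandas and scikit-learn are installed; the file name and the "label" column are placeholders, not a real dataset.

    # Minimal sketch: rip through a Kaggle-style CSV with scikit-learn.
    # "train.csv" and the "label" column are placeholders.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("train.csv")
    X, y = df.drop(columns=["label"]), df["label"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))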
At the end of the day I think you don't have to worry if you have the "curiosity" that it takes to be a good software engineer. That is because, in a world where knowledge, experience, and willingness to probe out of curiosity are even more scarce than they are now, you'll stand out. You may leverage AI to assist you, but if you don't fully and blindly rely on it, you'll always be the more qualified worker compared to someone who does.
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.
I don't see this trend. It just sounds like a weird thing to say; it fundamentally misunderstands what the job is.
From my experience, software engineering is a lot more human than how it gets portrayed in the media. You learn the business you're working with, who the stakeholders are, who needs what, how to communicate your changes and to whom. You're solving problems for other people. In order to do that, you have to understand what their needs are
Maybe this reflects my own experience at a big company where there's more back and forth to deal with. It's not glamorous or technically impressive, but no company is perfect
If what companies really want is just some cheap way to shovel code, LLMs are more expensive and less effective than the other well known way of cheaping out
Firstly, as many commenters have mentioned, I don't see AI taking jobs en masse. They simply aren't accurate enough, and they tend to generate more code faster, which ends up needing more maintenance.
Advice #1: do work on your own mind. Try to improve your personal organization. Look into methodologies like GTD. Get into habits of building discipline. Get into the habit of storing information and documentation. From my observations many developers simply can't process many threads at once, making their bottleneck their own minds.
Advice #2: lean into "metis"-heavy tasks. There are many programming tasks which can be easily automated: making an app scaffold, translating a simple algorithm, writing tests, etc. This is the tip of the iceberg when it comes to real SWE work though. The intricate connections between databases and services, the steps you have to go through to debug that one feature, the hack you have to make so the code behaves differently in the testing environment, and so on. LLMs require legibility to function: a clean slate, no tech debt, low entropy, order, etc. Metis is a term discussed in the book "Seeing Like a State", and it encompasses knowledge and skills gained through experience which are hard to transfer. Master these dark corners, hack your way around the code, create personal scripts for random one-off tasks (a sketch of what I mean follows below). Learn how to poke and pry the systems you work on to get out the information you want.
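For the "personal scripts for random one-off tasks" part, I mean something throwaway like this (a sketch only; the "src" directory and the markers are whatever your own dark corners look like):

    # Throwaway personal script: surface the hacks and environment-specific
    # tweaks scattered through a codebase so you know where the dark corners are.
    import os
    import re

    MARKERS = re.compile(r"TODO|FIXME|HACK|XXX")

    for root, _dirs, files in os.walk("src"):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, start=1):
                    if MARKERS.search(line):
                        print(f"{path}:{lineno}: {line.strip()}")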
I use Copilot a bit, and it can be really, really good.
It helps me out, but in terms of increasing productivity, it pales in comparison to simple auto-complete. In fact it pales in comparison to just having a good, big screen vs. battling away on a 13" laptop.
LLMs are useful and provide not insignificant assistance, but probably less assistance than the tools we've had for a long time. LLMs are not a game changer like some other things have been since I've been programming (since the late 1980s). Just moving to operating systems with protected memory was a game changer: I could make mistakes and the whole computer didn't crash!
I don't see LLMs as something we have to protect our careers from, I see LLMs as an increasingly useful tool that will become a normal part of programming same as auto-complete, or protected memory, or syntax-highlighting. Useful stuff we'll make use of, but it's to help us, not replace us.
My anecdata shows people who have no/limited experience in software engineering are suddenly able to produce “software”. That is, code of limited engineering value. It technically works, but it is ultimately an unmaintainable, intractable Heath Robinson monstrosity.
Coding LLMs will likely improve, but what will happen first: a good-at-engineering LLM; or a negative feedback cycle of training data being polluted with a deluge of crap?
I’m not too worried at the moment.
I remember John Carmack talking about this last year. Seems like it's still pretty good advice more than a year later:
"From a DM, just in case anyone else needs to hear this."
1) Have books like 'The Art of Computer Programming' on my shelf, as AI seems to propagate solutions that are closer to code golf than robustness, due to coverage in the corpus.
2) Force myself to look at existing code as abstract data types, etc., to help reduce the cost of LLMs' failure mode (confident, often competent, and inevitably wrong).
3) Curry whenever possible to support the use of coding assistants and to limit their blast radius (see the sketch after this list).
4) Dig deep into complexity theory to understand what LLMs can't do, either for defensive or offensive reasons.
5) Realize that SWE is more about correctness and context than code.
6) Realize what many people are already discovering, that LLM output is more like clip art than creation.
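On point 3, a minimal sketch of what I mean by currying to limit blast radius, with functools.partial standing in for proper currying (the database bits are placeholders, not a real system):

    # Bind the sensitive parameters once, by hand; the assistant-facing surface
    # is a one-argument function, which limits the blast radius of bad completions.
    from functools import partial

    def run_query(connection_string: str, timeout: int, sql: str) -> str:
        # Placeholder: imagine this actually talks to a database.
        return f"[{connection_string}, timeout={timeout}s] {sql}"

    run_report_query = partial(run_query, "postgres://reporting-replica", 5)

    # Generated code only ever sees run_report_query, never the connection string.
    print(run_report_query("SELECT count(*) FROM orders"))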
I think in some sense the opposite could occur, where it democratizes access to becoming a sort of pseudo-junior-software engineer. In the sense that a lot more people are going to be generating code and bespoke little software systems for their own ends and purposes. I could imagine this resulting in a Cambrian Explosion of small software systems. Like @m_ke says, there will be way more software.
Who maintains these systems? Who brings them to the last mile and deploys them? Who gets paid to troubleshoot and debug them when they reach a threshold of complexity that the script-kiddie LLM programmer cannot manage any longer? I think this type of person will definitely have a place in the new LLM-enabled economy. Perhaps this is a niche role, but figuring out how one can take experience as a software engineer and deploy it to help people getting started with LLM code (for pay, ofc) might be an interesting avenue to explore.
There's a great Joel Spolsky post about developers starting businesses and realising that there's a bunch of "business stuff" that was abstracted away at big companies. [1]
One way to future proof is to look at the larger picture, the same way that coding can't be reduced to algorithm puzzles:
"Software is a conversation, between the software developer and the user. But for that conversation to happen requires a lot of work beyond the software development."
[1] The Development Abstraction Layer https://www.joelonsoftware.com/2006/04/11/the-development-ab...
I was advising this MBA student's nascent startup (with the idea that I might become a technical cofounder once they graduated), and they asked whether LLMs would help.
So I listed some ways that LLMs practically would and wouldn't fit into the workflow of the service they were building. And I related it to a bunch of other stuff, including how to make the most of the precious real-world customer access they'd have, generating a success in the narrow time window they had, and the special obligations of that application domain niche.
Later, I mentally replayed the conversation in my head (as I do), and realized they were actually probably asking about using an LLM to generate the startup's prototype/MVP for the software they imagined.
And also, "generating the prototype" is maybe the only value that an MBA student had been told a "technical" person could provide at this point. :)
That interpretation of the LLM question didn't even occur to me when I was responding. I could've easily whipped up the generic Web CRUD any developer could do and the bespoke scrape-y/protocol-y integrations that fewer developers could do, both to a correctness level necessarily higher than the norm (which was required by this particular application domain). In the moment, it didn't occur to me that anyone would think an LLM would help at all, rather than just be an unnecessary big pile of risk for the startup, and potential disaster in the application domain.
This blew up way more than I expected. Thanks everyone for the comments, I read almost all of them.
For the sake of not repeating myself, I would like to clarify/state some things.
1. I did not intend to signal that SWE will disappear as a profession, but rather that it will undergo transformation, as well as shrink in terms of the needed workforce.
2. Some people seem to be hanging on to the idea that they are doing unimaginably complicated things. And sure, some people are, but I doubt they are the majority of the SWE workforce. Can an LLM replace a COBOL developer in the financial industry? No, I don't think so. Can it replace the absurd number of people whose job description can be distilled to "reading/writing data to a database"? Absolutely.
3. There seems to be a conflicting opinion: some people say that code quality matters a lot and LLMs are not there yet, while others focus more on "SWE is more than writing code".
Personally, based on some thinking and reading the comments, I think the best way to future-proof a SWE career is to move to a position that requires more people skills. In my opinion, good product managers who are eager to learn coding and combine it with LLMs for code writing will be the biggest beneficiaries of the upcoming trend. As for SWEs, it's best to start acquiring people skills.
I no longer have skin in the game since I retired a few years back.
But I have had over 30 years in a career that has been nothing if not dynamic the whole time. And so I no doubt would keep on keepin' on (as the saying goes).
Future-proof a SWE career though? I think you're just going to have to sit tight and enjoy (or not) the ride. Honestly, I enjoyed the first half of my career much more than where SWE ended up in the latter half. To that end, I have declined to encourage anyone from going into SWE. I know a daughter of a friend that is going into it — but she's going into it because she has a passion for it. (So, 1) no one needed to convince her but 2) passion for coding may be the only valid reason to go into it anyway.)
Imagine the buggy-whip makers gathered around the pub, grousing about how they are going to future-proof their trade as the new-fangled automobiles begin rolling down the street. (They're not.)
LLMs will just write code without you having to go copy-pasta from SO.
The real secret is talent stacks: have a combination of talents and knowledge that is desirable and unique. Be multi-faceted. And don't be afraid to learn things that are way outside of your domain. And no, you wouldn't be pigeon-holing yourself either.
For example there aren't many SWEs that have good SRE knowledge in the vehicle retail domain. You don't have to be an expert SRE, just be good enough, and understand the business in which you're operating and how those practices can be applied to auto sales (knowing the laws and best practices of the industry).
I have been unemployed for almost a year now (it started with a full division layoff and then no willingness or motivation to look for work at the time). Seeing the way AI can write most of the native app development code I used to write (which is what I did), I am losing almost any motivation to even try now. But I have been sleeping the best I have since college (where I slept awesome), and I have been working out, watching lots of theatre and cinema, playing lots of sports (two of them almost daily), reading a lot of literature, and listening to lots of podcasts. I guess I will just wait for my savings to run dry and then see what options I'd have then, and which I would not, if any at all. I know the standard thing to do and say is "up-skill", "change with the times", etc., and I am sure those have merit, but I just feel I am done with the constant catch-up, kind of checked out. Maybe I don't give a fuck anymore, or I do and I am too demoralised to confront it.
As others have stated, I don't think we have anything to worry about.
As a SWE you are expected to neatly balance code, its architecture and how it addresses the customers' problems. At best, what I've seen LLMs produce is code monkey level programming (like copy pasting from StackOverflow), but then a human is still needed to tweak it properly.
What would be needed is general AI, and that's still some 50 years away (and has been for the past 70 years). The LLMs are a nice sleight of hand and are useful, but they're more often wrong than right as soon as you delve into details.
Beware of the myopia and gatekeeping displayed in this thread.
There will be fewer SWE, DevOps, and related jobs available in the next 24 months. Period.
Become hyper-aware of how a business measures your value as a SWE. How? Ask pointed, uncomfortable questions that force the people paying you to think and be transparent.
Stay on the cutting edge of how to increase your output and quality using AI.
Ie: how long does it take for a new joiner to produce code? How do you cut that time down by 10x using “AI”?
Most organizations don't move that fast. Certainly not fast enough to need this kind of velocity.
As it is I spend 95% of my time working out what needs to be done with all of the stakeholders and 5% of my time writing code. So the impact of AI on that is negligible.
When doing X becomes cheaper with the invention of new tools, X is now more profitable and humanity tends to do much more of it.
Nearly all code was machine-generated after the invention of compilers. Did the compiler destroy programming? Absolutely not. Compilers and other tools like higher-level programming languages really kickstarted the software industry. IMO the potential transition from writing programming languages to writing natural language and having an LLM generate the program is still a smaller change than machine code/assembly -> modern programming languages.
If the invention of programming languages expanded the population of programmers from thousands to tens of millions, I think LLMs could expand this number again to a billion.
Build something with an LLM outside of your comfort zone.
I was a pretty early adopter to an LLM based workflow. The more I used it, the worse my project became, and the more I learned myself. It didn’t take long for my abilities to surpass the LLM, and for the past year my usage of LLMs has been dropping dramatically. These days I spend more time in docs than in a chat conversation.
When ChatGPT was announced, many people thought programming was over. As in <12 months. Here we are several years later, and my job looks remarkably the same.
I would absolutely love to not have to program anymore. For me, programming is a means to an end. However, after having used LLMs pretty much everyday for 2.5 years, it’s very clear to me that software engineering won’t be changing anytime soon. Some things will get easier and workflows may change, but if you want to build and maintain a moderately difficult production grade application with decent performance, you will still be programming in 10 years
My solution has been to work with LLMs and be on the one side of the industry trying to replace the other. I switched focus fairly early on in the "AI hype" era, mainly because I thought it looked like a lot of fun to play with LLMs. After a few years I realized I'm quite a bit ahead of my former coworkers who stayed still. I've worked on both the product end and closer to the hardware, and as more and more friends ask for help on problems, I've realized I do in fact have a lot of understanding of this space.
A lot of people in this discussion seem to be misunderstanding the way the industry will change with LLMs. It's not as simple as "engineers will be automated away," in the same sense that we're a long way from Uber drivers disappearing because of self-driving cars.
But the impact of LLMs on software is going to be much closer to the impact of the web and web development on native application development. People used to scoff at the idea that any serious company would be run from a web app. Today I would say the majority of software engineers are, directly or indirectly, building web-based products.
LLMs will make coding easier, but they also enable a wide range of novel solutions within software engineering itself. Today any engineer can launch a zero-shot classifier that performs better than what would have taken a team of data scientists just a few years ago.
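As a rough sketch of that last point, assuming the Hugging Face transformers library is installed (it downloads a default model on first run; the labels and text here are made up):

    # Zero-shot classification with an off-the-shelf pipeline: no labeled
    # training data and no data science team required.
    from transformers import pipeline

    classifier = pipeline("zero-shot-classification")
    result = classifier(
        "My package arrived crushed and two items were missing.",
        candidate_labels=["shipping damage", "billing issue", "feature request"],
    )
    print(result["labels"][0], round(result["scores"][0], 3))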
Not a clue.
I'm a decent engineer working as a DS in a consulting firm. In my last two projects, I checked in (or corrected) so much more code than the other two junior DSs on my team that, in the end, some 80%-90% of the ML-related stuff had been directly built, corrected, or optimized by me. And most of the rest that wasn't was mostly boilerplate. LLMs were pivotal in this.
And I am only a moderately skilled engineer. I can easily see somebody with more experience and skills doing this to me, and making me nearly redundant.
"The use of FORTRAN, like the earlier symbolic programming, was very slow to be taken up by the professionals. And this is typical of almost all professional groups. Doctors clearly do not follow the advice they give to others, and they also have a high proportion of drug addicts. Lawyers often do not leave decent wills when they die. Almost all professionals are slow to use their own expertise for their own work. The situation is nicely summarized by the old saying, “The shoe maker’s children go without shoes”. Consider how in the future, when you are a great expert, you will avoid this typical error!"
Richard W. Hamming, “The Art of Doing Science and Engineering”
Today, lawyers delegate many paralegal tasks like document discovery to computers and doctors routinely use machine learning models to help diagnose patients.
So why aren’t we — ostensibly the people writing software — doing more with LLM in our day-to-day?
If you take seriously the idea that LLM will fundamentally change the nature of many occupations in the coming decade, what reason do you have to believe that you’ll be immune from that because you work in software? Looking at the code you’ve been paid to write over the past few years, how much of that can you honestly say is truly novel?
We’re really not as clever as we think we are.
I will not believe the AI takeover until there's evidence. I haven't seen any examples, apart from maybe TODO list apps. Needless to say, that's nowhere near the complexity that is required at most jobs. Even if my career were endangered, I would continue the path I've taken so far: have a basic understanding of as much as possible (push out the edges of the knowledge circle, or whatever it's called), and strive to have expert knowledge about maybe 1, 2, or 3 subjects which pay for your daily bread. Basically, just be good at what you do, and that should be fine. As for beginners, I advise diving deep into a subject, starting with a solid foundation and a hands-on approach, while maintaining a consistent effort.
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output, until eventually LLMs will become so good, that senior people won't be needed any more.
A steeper learning curve in a professional field generally translates into higher earnings. The longer you have to be trained to be helpful, the more a job generally earns.
I am already trained.
I don't think the prediction game is worthwhile.
The Cloud was meant to decimate engineering AND development. But what it did was create enough chaos that there's a higher demand for both than ever, just maybe not in your region and for your skill set.
LLMs are guaranteed to cause chaos, but the outcome of that chaos is not predictable. Will every coder now output the same as a team of 30, but there are 60 times as many screwed-up projects made by wannabe founders that you have to come in and clean up? Will businesses find ways to automate code development and then turn around and have to bring the old guys back in constantly to fix up the pipeline? Will we all be coding in black boxes that the AI fills in?
I would make sure you just increase your skills and increase your familiarity with LLMs in case they become mandatory.
It depends on whether you think they are a paradigm change (at the very least) or not. If you don't then either you will be right or you will be toast.
For those of us who do think this is a revolution, you have two options:
1. Embrace it.
2. Find another career, presumably in the trades or other hands-on vocations where AI ingress will lag behind for a while.
To embrace it you need to research the LLM landscape as it pertains to our craft and work out what interests you and where you might best be able to surf the new wave, it is rapidly moving and growing.
The key thing (as it ever was) is to build real world projects mastering LLM tools as you would an IDE or language; keep on top of the key players, concepts and changes; and use your soft skills to help open-eyed others follow the same path.
> The more I speak with fellow engineers, the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code
LLMs do help, but to a limited extent. I've never heard of anyone in the second category.
> how do you future-proof your career in light of, the inevitable, LLM take over?
Generally speaking, coding has never been a future-proof career. Ageism, changes in technology, economic cycles, offshoring... When I went into the field in the early 2000s, it was kind of expected that most people, if they wanted to be somewhat successful, eventually had to move into a leadership/management position.
Things changed a bit with successful tech companies competing for talent and offering great salaries and career paths for engineers, especially in the US, but it could very well be temporary and shouldn't be taken for granted.
LLMs are one factor among many that can impact our careers, and probably not the most important. I think there's a lot of hype, and we're not being replaced by machines anytime soon. I don't see a world where an entrepreneur is going to command an LLM to write a service or a novel app for them, or simply maintain an existing complex piece of software.
I try to go to the lowest level I can. During my recent research into PowerPC 32-bit assembly language I found 1) not much material online, and what is available is usually PDFs with pictures, which could be difficult for LLMs to pick up, and 2) indeed, ChatGPT didn't give a good answer even for a Hello, World example.
I think hardware manufacturers, including ones that produce chips, are far less inclined to put things online and thus have a wide moat. "Classic" parts such as the 6502 or 8086 definitely have way more material. "Modern" popular ones such as x86/64 also have a lot of material online. But "obscure" ones don't.
On the software side, I believe LLMs or other AI can easily replace juniors who only know how to "fill in" the code designed by someone else, in a popular language (Python, Java, JavaScript, etc.), within 10 years. In fact it has greatly supported my data engineering work in Python and Scala. Does it always produce the most efficient solution? No. Does it greatly reduce the time I need to get to a solution? Yes, definitely!
Who knows what the future holds? As a SWE you are expected to adapt and use modern technology. Always learning is a part of the job. Look at all the new things to build with, frameworks being updated/changes, etc. Making things easier.
LLMs will make things easier, but it's easy to disagree that they will threaten a developer's future with these reasons in mind:
* Developers should not be reinventing the wheel constantly. LLMs can't work very well on subjects they have no info on (proprietary work).
* The quality is going to get worse over time, with the internet being slopped up with mass disregard for quality content. We are at a peak right now. Adding more parameters isn't going to make the models better; it's just going to make them better at plagiarism.
* Consistency - a good codebase has a lot of consistency to avoid errors. LLMs can produce good coding examples, but they will not have much regard for how *your* project is currently written. Introducing inconsistency makes maintenance more difficult, let alone the bugs that might slip in and wreak havoc later.
In a market, scarce services will always be more valuable than abundant services. Assuming that AI will at some point be capable of replacing an SWE, to future-proof your career, you will need to learn how to provide services that AI cannot provide. Those might not be what SWEs currently usually offer.
I believe it's actually not that hard to predict what this might be:
1. Real human interaction, guidance and understanding: This, by definition, is impossible to replace with a system, unless the "system" itself is a human.
2. Programming languages will be required in the future as long as humans are expected to interface with machines and work in collaboration with other humans to produce products. In order not to lose control, people will need to understand the full chain of experience required to go from junior SWE to senior SWE - and beyond. Maybe fewer people will be required to produce more products, but still, they will be required as long as humanity doesn't decide to give up control over basically any product that involves software (which will very likely be almost all products).
3. The market will get bigger and bigger to the point where nothing really works without software anymore. Software will most likely be even more important to have a unique selling point than it is now.
4. Moving to a higher level of understanding of how to adapt and learn is beneficial for any individual and actually might be one of the biggest jumps in personal development. This is worth a lot for your career.
5. The current state of software development in most companies that I know has reached a point where I find it actually desirable for change to occur. SWE should improve as a whole. It can do better than Agile for sure. Maybe it's time to "grow up" as a profession.
Disclaimer: I wholeheartedly hate all the systems they call AI these days and I hate the culture around it for technological, ecological, political, and philosophical reasons.
I won't future-proof my career against LLMs at all. If I ever see myself in the position that I must use them to produce or adjust code, or that I mostly read and fix LLM-generated code, then I'll leave the industry and do something else.
I see potential in them to simplify code search/navigation or to even replace stackoverflow, but I refuse to use them to build entire apps. If management in turn believes that I'm not productive enough anymore then so be it.
I expect that lots of product owners and business people will be using them in order to quickly cobble something together and then hand it over to a dev for "polishing". And this sounds like a total nightmare to me. The way I see it, devs make this dystopian nightmare a little bit more true every time they use an LLM to generate code.
>junior to mid level software engineering will disappear mostly, while senior engineers will transition
It's more likely the number of jobs at all levels of seniority will decrease, but none will disappear.
What I'm interested to see is how the general availability of LLM will impact the "willingness" of people to learn coding. Will people still "value" coding as an activity worth their time?
For me as an already "senior" engineer, using LLMs feels like a superpower: when I think of a solution to a problem, I can test and explore some of my ideas faster by interacting with it.
For a beginner, I feel that having all of this available can be super powerful too, but also truly demotivating. Why bother to learn coding when the LLM can already do better than you? It takes years to become "good" at coding, and motivation is key.
As a low-dan Go player, I remember feeling a bit that way when AlphaGo was released. I'm still playing Go but I've lost the willingness to play competitively, now it's just for fun.
I think the real-world improvements will plateau, and it'll take a while for current enterprises just to adopt what is possible today, but that is still going to cause quite a bit of change. You can imagine us going from AI chat bots with RAG on traditional datastores, to AI-enhanced but still human-engineered SaaS products, to bespoke AI-generated and maintained products, to fully E2E AI agentic products.
An example: do you tell the app to generate a Python application to manage customer records, or do you tell it "remember this customer record so other humans or agents can ask for it" and it knows how to do that efficiently and securely?
We'll probably see more 'AI Reliability Engineer' type roles, which will likely be about building and maintaining evaluation datasets, tracking and stomping out edge cases, figuring out human intervention/escalation, model routing, model distillation, context-window vs. fine-tuning trade-offs, and overall intelligence-cost management.
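A hedged sketch of the "evaluation datasets" part of that role (the cases and the answer() stub are placeholders for whatever the real product does):

    # Minimal eval harness: a fixed set of cases, a scoring loop, and a report.
    EVAL_CASES = [
        {"prompt": "Reset my password", "expected_intent": "account_recovery"},
        {"prompt": "Why was I charged twice?", "expected_intent": "billing"},
    ]

    def answer(prompt: str) -> str:
        # Placeholder: call the real model/agent here.
        return "billing" if "charged" in prompt else "account_recovery"

    def run_evals(cases) -> float:
        hits = sum(answer(c["prompt"]) == c["expected_intent"] for c in cases)
        return hits / len(cases)

    print(f"intent accuracy: {run_evals(EVAL_CASES):.0%}")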
I just copied the html from this thread into Claude to get a summary. I think being very realistic, a lot of SWE job requirements will be replaced by LLMs.
The expertise to pick the right tool for the right job based on previous experience, which senior engineers possess, is something that can probably be taught to an LLM.
Having the ability to provide a business case for the technology to stakeholders that aren't technologically savvy is going to be a people job for a while still.
I think positioning yourself as an expert / bridge between technology and business is what will future-proof a lot of SWE, but in reality, especially at larger organizations, there will be a trimming process where the workload of what was thought to need 10 engineers can be done with 2 engineers + LLMs.
I'm excited about the future where we're able to create software quicker and more contextual to each specific business need. Knowing how to do that can be an advantage for software engineers of different skill levels.
Learn the tools; use them where they shine and avoid them where they do not. Your best bet is to just start using LLMs in your day-to-day coding and find out what works and what doesn't.
I work on a pretty straightforward CRUD app in a niche domain and so far they haven’t talked about replacing me with some LLM solution. But LLMs have certainly made it a lot faster to add new features. I’d say working in a niche domain is my job security. Not many scientists want to spend their time trying to figure out how to get an LLM to make a tool that makes their life easier - external competitors exist but can’t give the same intense dedication to the details required for smaller startups and their specific requirements.
A side note - maybe my project is just really trivial, maybe I’m dumber or worse at coding than I thought, or maybe a combination of the above, but LLMs have seemed to produce code that is fine for what we’re doing especially after a few iteration loops. I’m really curious what exactly all these SWEs are working on that is complex enough that LLMs produce unusable code
I’ve carved out a niche of very low-level systems programming and optimization. I think it’ll be a while before LLMs can do what I do. I also moved to staff, so I think a lot of what I do now will still exist, with junior/mid-level devs being reduced by AI.
But I am focusing on maximizing my total comp so I can retire in 10-15 years if I need to. I think most devs are underestimating where this is eventually going to go.
I'm working as if, in 2-3 years, the max comp I will be able to get as a senior engineer will be 150k. And it will be hard to get that. It's not that the job will disappear, it's that the bar to produce working software will go way down. Most knowledge and skill sets will be somewhat commoditized.
Also, I'm pretty sure this will make outsourcing easier, since foreign engineers will be able to pick up technical skills more easily.
Short term defense is learning about, and becoming an expert in, using LLMs in products.
Longer term defense doesn't exist. If Software Engineering is otherwise completely automated by LLMs, we're in AGI territory, and likely recursive self-improvement plays out (perhaps not AI-foom, but a huge uptick in capability / intelligence per month / quarter).
In AGI territory, the economy, resource allocation, labor vs. capital all transition into a new regime. If problems that previously took hundreds of engineers working over multiple years can now be built autonomously within minutes, then there's no real way to predict the economic and social dynamics that result from that.
Another thing I want to note: even if I get replaced by AI, I think I'd only be sad for a bit. I think it'd be a fun period to try to find a "hand-focused" job, something like a bakery or chocolatier. I honestly wouldn't mind if I could do the same satisfying work but more hands-on, rather than behind a desk all day.
I've been quite worried about it at this point. However, I see that "this is not going to happen" is likely not going to help me. So I'd rather go with the flow and use it where reasonable, even if it's not clear to me whether AI will ever truly leave the hype stage.
FWIW I have been allowed to use AI at work since ChatGPT appeared, and usually it wasn't a big help for coding. However, for education and trying to "debug" funny team interactions, I've surely seen some value.
My guess is though that some sort of T-shaped skillset is going to be more important while maintaining a generalist perspective.
I might be too optimistic but I think LLMs will basically replace the worst and most junior devs, while the job of anyone with 5 or 10+ years of experience will be babysitting AI codevelopers, instead of junior developers.
I find a lot of good use for LLMs but it's only as a multiplier with my own effort. It doesn't replace much anything of what I do that actually requires thought. Only the mechanical bits. So that's the first thing I ensure: I'm not involved in "plumbing software development". I don't plug together CRUD apps with databases, backend apis and some frontend muck. I try to ensure that at least 90% of the actual code work is about hard business logic and domain specific problems, and never "stuff that would be the same regardless of whether this thing is about underwear or banking".
If I can delegate something to it, it's every bit as difficult as delegating to another developer. Something we all know is normally harder than doing the job yourself. The difference between AI Alice and junior dev Bob, is that Alice doesn't need sleep. Writing specifications, reviewing changes and ensuring Alice doesn't screw up is every bit as hard as doing the same with Bob.
And here is the kicker: whenever this equation changes, that we have some kind of self-going AI Alice, then we're already at the singularity. Then I'm not worried about my job, I'll be in the forest gathering sticks for my fire.
To me it seems possible that seniors will become in even more demand, because learning to become a decent developer is actually harder if you're distracted by leaning on LLMs. Thus, the supply of up-and-coming good new seniors may be throttled. This is because LLMs don't abstract code well.

Once upon a time electronics engineers had to know a lot about components. Then along came integrated circuits, and they had to know about those and less about components. Once upon a time programmers had to know machine code or assembler. I've never had to know those for my job. I programmed in C++ for years and had to know plenty about memory. These days I rarely need to as much, but some basic knowledge is needed. It's fine if a student learns to code mostly in Python, but a short course in C is probably a good idea even today.

But with LLMs, you can't say "now I need to know less about this specific thing that's been black-boxed for me", because it isn't wrapped conveniently like that. I'm extremely glad that when I was a junior, LLMs weren't around. They really seem like a barrier to learning. It's impossible to understand all the generated code, but it's also difficult, without significant career experience, to judge what you need to know about and what you don't. I feel sorry for juniors today, to be honest!
I feel the influencer crowd is massively overblowing the actual utility of LLMs. It feels akin to the "cryptocurrency will take over the world" trope 10 years ago, and yet... I don't see any crypto in my day-to-day life to this day. Will it improve general productivity and boring tasks nobody wants to do? Sure, but to claim any more than that, frankly, I'd like some hard evidence of it actually being able to "reason". And reason better than most devs I've ever worked with, because quite honestly humans are also pretty bad at writing software, and LLMs learn from humans, so...
I see this sort of take from a lot of people and I always tell them to do the same exercise. A cure for baseless fears.
Pick an LLM. Any LLM.
Ask it what the goat river crossing puzzle is. With luck, it will tell you about the puzzle involving a boatman, a goat, some vegetable, and some predator. If it doesn’t, it’s disqualified.
Now ask it to do the same puzzle but with two goats and a cabbage (or whatever vegetable it has chosen).
It will start with the goat. Whereupon the other goat eats the cabbage left with it on the shore.
Hopefully this exercise teaches you something important about LLMs.
The simple answer is to use LLMs so you can put it on your resume. Another simple answer is to transition to a job where it's mostly about people.
The complex answer is we don't really know how good things will get and we could be at the peak for the next 10-20 years, or there could be some serious advancements that make the current generation look like finger-painting toddlers by comparison.
I would say the fear of no junior/mid positions is unfounded though since in a generation or two, you'd have no senior engineers.
Here’s a question for you: have they automated trains yet? They’re literally on tracks. Until trains are fully automated, then after that cars, then later airplanes, then maybe, just maybe, “AI” will come for thought work. Meanwhile, Tesla’s “AI” still can’t stop running into stopped firetrucks [1]…
[1] https://www.wired.com/story/tesla-autopilot-why-crash-radar/
LLMs are most capable where they have a lot of good data in their training corpus and not much reasoning is required. Migrate to a part of the software industry where that isn't true, e.g. systems programming.
The day LLMs get smart enough to read a chip datasheet and then realize the hardware doesn't behave the way the datasheet claims it does is the day they're smart enough to send a Terminator to remonstrate with whoever is selling the chip anyway so it's a win-win either way, dohohoho.
I recall code generation from class diagrams, and then low-code, each being declared the death of all devs.
The current generation of LLMs is immensely expensive, and will become even more so if the VC money disappears.
A FT dev is happy to sit there and deal with all the whining, meetings, alignment, 20 iterations of refactoring, architectural changes, and late Friday evenings putting out fires. To make an LLM work 40h/week with that much context would cost an insane amount and require several people steering it. Also, the level of ambiguous garbage spewed by management and requirements engineers, which I turn into value, is… difficult for LLMs.
Let's put it this way: before LLMs, we had wonderful outsourcing firms that cost slightly less than maintaining an in-house team; if devs were going to disappear, that would have been the nail in the coffin. LLMs need steering and don't deal well with ambiguity, so I don't see a threat.
Also, for all the people singing the LLM holy song: try asking Windsurf or Cursor to generate something niche that doesn't exist publicly and see how well it does. As an aside, I closed several PRs last week because people started submitting generated code of 100+ LOC that could have been one or two lines if the authors had taken some time to review the latest release of the library.
Make lots of incompatible changes to libraries. No way LLMs keep up with that since their grasp on time is weak at best.
LLMs for now only have 2-3 senses. The real shift will come when they can collect data using robotics. Right now a human programmer is needed to explain the domain to the AI and review the code based on the domain.
On the bright side, every programmer can start a business without needing to hire an army of programmers. I think we are getting back to an artisan-based economy where everyone can be a producer without a corporate job.
I am not afraid of companies replacing Software Engineers with LLMs while being able to maintain the same level of quality. The real thing to worry about is that companies do what they did to QA Engineers, technical writers, infrastructure engineers, and every other specialized role in the software development process. In other words, they will try to cut costs, and the result will be worse software that breaks more often and doesn't scale very well.
Luckily, the bar has been repeatedly lowered so that customers will accept worse software. The only way for these companies to keep growing at the rate their investors expect them to is to try and cut corners until there's nothing left to cut. Software engineers should just be grateful that the market briefly overvalued them to the degree that did and prepare for a regression to the mean.
For thousands of years, the existence of low cost or even free apprentices for skilled trades meant there was no work left for experts with mastery of the trade.
Except, of course, that isn't true.
The biggest "fault" of LLMs (which continues) is their compliance. Being a good software dev often means pushing back and talking through tradeoffs, and finding out what the actual business rules are. I.e. interrogating the work.
Even if these LLM tools do see massive improvements, it seems to me that they are still going to be very happy to take the set of business rules that a non-developer gives them, and spit out a program that runs but does not do what the user ACTUALLY NEEDS them to do. And the worst thing is that the business user may not find out about the problems initially, will proceed to build on the system, and these problems become deeper and less obvious.
If you agree with me on that, then perhaps what you should focus out is building out your consulting skills and presence, so that you can service the mountains of incoming consulting work.
It's lowering the bar for developers to enter the marketplace, in a space that is wildly under saturated. We'll all be fine. There's tons of software to be built.
More small businesses will be able to punch-up with LLMs tearing down walled gardens that were reserved for those with capital to spend on lawyers, consultants and software engineering excellence.
It's doing the same thing as StackOverflow -- hard problems aren't going away, they're becoming more esoteric.
If you're at the edge, you're not going anywhere.
If you're in the middle, you're going to have a lot more opportunities because your throughput should jump significantly so your ROI for mom and pop shops finally pencils.
Just be sure you actually ship and you'll be fine.
> the more I hear that some of them are either using AI to help them code, or feed entire projects to AI and let the AI code, while they do code review and adjustments.
It's not enough to make generalizations yet. What kind of projects ? What tuning does it need ? What kind of end users ? What kind of engineers ?
In the field I work in, I can't see how LLMs can help with a clear path to convergence on a reliable product. If anything, I suspect we will need more manual analysis to fix the insanity we receive from our providers if they start working with LLMs.
Some jobs will disappear, but I've yet to see signs of anything serious emerging. You're right about juniors, though; I suspect those who stop training will lose their insurance and will starve under LLMs, either through competition or through the amount of operational instability they will bring.
"until eventually LLMs will become so good, that senior people won't be needed any more"
You are assuming AGI will come eventually.
I assume eventually the earth will be consumed by the sun, but I am about equally worried about that, as I don't see either as the near future.
I am still regularly disappointed when I try out the newest hyped model. They usually fail my tasks and require lots of manual labour.
So if that gets significantly better, I can see them replacing junior devs. But without understanding, they cannot replace a programmer for any serious task. They may, however, enable more people to become good-enough programmers for their simple tasks. So less demand for less-skilled devs, indeed.
My solution - the same as before - improve my skills and understanding.
I have as much interest in the art of programming as in building products, and becoming some sort of AI whisperer sounds tremendously tedious to me. I opted out of the managerial track for the same reason. Fortunately, I have enough money saved that I can probably just work on independent projects for the rest of my career, and I’m sure they’ll attract customers whether or not they were built using AI.
With that said, looking back on my FAANG career in OS framework development, I’m not sure how much of my work could have actually been augmented by AI. For the most part, I was designing and building brand new systems, not gluing existing parts together. There would not be a lot of precedent in the training data.
So far I haven't found much use for LLM code generation. I'm using Copilot as a glorified autocomplete and that's about it. I tried to use LLM to generate more code, but it takes more time to yield what I want than to write it myself, so it's just not useful.
Now, ChatGPT really has become an indispensable tool for me, up there with Google and StackOverflow.
So I don't feel threatened so far. I can see the potential, and I think it's very possible for LLM-based agents to replace me eventually, probably not this generation, but a few years later, who knows. But that's just hand-waving, so getting worried about a possible future is not useful for mental well-being.
LLMs can help us engineers gain context quickly on how to write solutions. But, I don't see it replacing us anytime soon.
I'm currently working on a small team with a senior engineer. He's the type of guy who preaches letting Cursor or whatever new AI IDE is relevant nowadays do most of the work. Most of his PRs are utter trash. Time to ship is slow and code quality is trash. It's so obvious that the code is AI-generated. Bro doesn't even know how to rebase properly, resulting in overwriting (important) changes instead of fixing conflicts. And guess who has to fix their mistakes (me, and I'm not even a senior yet).
I think what we're seeing echoes a pattern we have lived through many times before, just with new tooling. Every major leap in developer productivity - from assembly to higher-level languages, from hand-rolled infrastructure to cloud platforms, from code libraries to massive open-source ecosystems - has sparked fears that fewer developers would be needed. In practice, these advancements have not reduced the total number of developers; they have just raised the bar on what we can accomplish.
LLMs and code generation tools are no exception. They will handle some boilerplate and trivial tasks, just like autocompletion, frameworks, and package managers already do. This will make junior-level coding skills less of a differentiator over time. But it is also going to free experienced engineers to spend more time on the complex, high-level challenges that no model can solve right now - negotiating unclear requirements, architecting systems under conflicting constraints, reasoning about trade-offs, ensuring reliability and security, and mentoring teams.
It is less about "Will these tools replace me?" and more about "How do I incorporate these tools into my workflow to build better software faster?" That is the question worth focusing on. History suggests that the demand for making complex software is bottomless, and the limiting factor is almost never just "typing code." LLMs are another abstraction layer. The people who figure out how to use these abstractions effectively, augmenting their human judgment and creativity rather than fighting it, will end up leading the pack.
A couple reasons why I am not scared of AI taking my job:
1. They are trained to be average coders.
The way LLMs are trained is by giving them lots of examples of previous coding tasks. By definition, roughly half of those examples are below the median. Unless there is a breakthrough in how they are trained, any above-average coder won't have anything to worry about.
2. They are a tool that can (and should) be used by humans.
Computers are much better at chess than any human, but a human with a computer is better than any computer. The same is true with a coding LLM. Any SWE who can work with an LLM will be much better than any LLM.
3. There is enough work for both.
I have never worked for a company where I have had less work when I left than when I started. I worked for one company where it was estimated that I had about 2 years worth of work to do and 7 years later, when I left, I had about 5 years of work left. Hopefully LLMs will be able to take some of the tedious work so we can focus on harder tasks, but most likely the more we are able to accomplish the more there will be to accomplish.
Better tools that accelerate how fast engineers can produce software? That's not a threat, just a boon. I suspect the actual transition will just be people learning/focusing on somewhat different higher level skills rather than lower level coding. Like going from assembly to c, we're hoping we can transition more towards natural language.
> junior to mid level software engineering will disappear mostly

People don't magically go to senior. You can't get seniors without juniors and mids leveling up. We'll always need to take in and train new blood.
Learning woodworking in order to make fine furniture. This is mostly a joke, but the kind that I nervously laugh at.
My advice? Focus on the business value, not the next ticket. Understand what the actual added value of your work is to your employer. It won’t help in the day-to-day tasks but it will help you navigate your career with confidence.
Personally - and I realize this is not generalizable advice - I don’t consider myself a SWE but a domain expert who happens to apply code to all of his tasks.
I’ve been intentionally focusing on a specific niche - computer graphics, CAD and computational geometry. For me writing software is part of the necessary investment to render something, model something or convert something from domain to domain.
The fun parts are really fun, but the boring parts are mega-boring. I'm actually eagerly awaiting LLMs reaching some level of human parity, because there simply isn't enough talent in my domains to do all the things that would be worthwhile to do (cost and return on investment, right).
The reason is that my domain is so niche you can't webscrape-and-label your way to the intuition and experience of two decades of working in various industries, from graphics benchmarking and automotive HUDs to industrial mission-critical AEC workflows and realtime maps.
There is enough knowledge to train LLMs to get a hint as soon as I tie a few concepts together, and then they fly. Apart from simple subroutines, the code they write at the moment is not good enough for them to act as an unsupervised assistant … most of the code is useless, honestly. But I'm optimistic and hope they will improve.
AI's are going to put SWE's out of a job at roughly the same time as bitcoin makes visa go bankrupt.
Aka never, or at least far enough in the future that you can't really predict or plan for it.
Right now LLMs have a slight advantage over Stack Overflow etc. in that they'll react to your specific question/circumstances, but they also require you to double-check everything they spit out. I don't think that will ever change, and I think most of the hype comes from people whose salaries depend on it being right around the corner, or from people who are playing a speculation game (if I learn this tool I'll never have to work again / if I avoid this tool I'll be doomed to poverty forever).
Sysadmin here. That's what we used to be called. Then some fads came, some went. DevOps, SRE. Etc.
I have notes on particular areas I am focusing on, but I have a small set of general notes on this, and they seem to apply to you SWEs also.
Headline: Remember, data is the new oil.
Qualifier: It's really all about IP portfolios these days.
1) Business Acumen: How does the tech serve the business/client needs, from a holistic perspective of the business. (eg: sysadmins have long had to have big-picture finance, ops, strategy, industry, etc. knowledge) - aka - turn tech knowledge into business knowledge
2) Leadership Presence: Ability to meet w/c-suite, engineers, clients, etc, and speak their languages, understand their issues, and solve their issues. (ex: explain ROI impacts for proposals to c-suite)
3) Emotional Intelligence: Relationship building in particular. (note: this is the thing I neglected the most in my career and regretted it!)
4) Don't be afraid to use force multiplier tools. In this discussion, that means LLMs, but it can mean other things too. Adopt early, keep up with tooling, but focus on the fundamental tech and don't get bogged down into proprietary stockholm syndrome. Augment yourself to be better, don't try to replace yourself.
----
Now, I know that's a simplistic list, but you asked so I gave you what I had. What I am doing (besides trying to get my mega-uber-huge-sideproject off the ground), is recentering on certain areas I don't think are going anywhere: on-prem, datacenter buildouts, high-compute, ultra-low-latency, scalable systems, energy, construction of all the previous things, and the banking knowledge to round it all out.
If my side-project launch fails, I'm even considering data-center sales instead of the tech side. Why? I'm tired of rescuing the entire business to no fanfare while sales people get half my salary in a bonus. Money aside, I can still learn and participate in the builds as sales (see it happen all the time).
In other words, I took my old-school niche set of knowledge and adapted it over the years as the industry changed, focusing on what I do best (in this case, operations - aka - the ones who actually get shit into prod, and fix it when it's broke, regardless of the title associated).
Two thoughts:
1. Similar to autonomous driving: going from 90% to 99% reliability can take longer than going from 0% to 90%.
2. You can now use LLMs and public clouds to abstract away a lot of skills that you don't have (managing compute clusters, building iOS and Android apps, etc.). So you can start your 3-person company and do things that previously required hundreds of people.
IMHO LLMs and cloud computing are very similar in that you need a lot of money to build an offering, so perhaps only a few big players are going to survive.
The future at this point is ... unpredictable. It is an unsatisfactory answer, but it is true.
So what does it take for an LLM to replace a SWE?
1. It needs to get better, much better.
2. It needs to get cheaper still.
Those two things are at odds with each other. If scaling laws are the god we are praying to, then they have apparently already hit diminishing returns; maybe if we scale up 1000x we can get AGI, but that won't be economically reasonable for a long time.
Back to reality: what does it mean to survive in a market assuming coding assistants are going to get marginally better over, say, the next 5 years? Just use them; they are genuinely useful tools for accomplishing really boring and mundane stuff. Things like writing Dockerfiles will go to the LLM, and humans won't be able to, and won't have to, compete there. They are also great givers of second opinions; it is fun to hear what an LLM thinks of your design proposal and to build on its advice.
Overall, I don't think much will change overnight. The industry might contract in terms of how many developers it hires, and I think the demand won't be there for a long time. For people already in the industry, as long as you keep learning, it is probably going to be fine. Well, for now.
I worked at a big tech co years ago. Strolling up to my bus stop in my casual attire while others around me wore uniforms, rushing to get to work. A nice private shuttle would pick me up. It would deliver me pretty much at the doors of the office. If it were raining, somebody would be standing outside to hand me an umbrella even though the door was a short distance away. Other times there would be someone there waiting on the shuttle to hand me a smoothie. When I got to the door, there would be someone dedicated to opening it. When I got inside, a breakfast buffet fit for a king would be served. Any type of cuisine I wanted was served around campus for lunch and dinner, and it was high quality. If I wanted dessert, there were entire shops (not one but many) serving free handcrafted desserts. If I wanted my laundry done, someone would handle that. If I wanted snacks, each floor of my office had its own little 7/11. If I didn't feel like having all this luxury treatment, I'd just work from home and nobody cared.
All of that, and I was being paid a very handsome amount compared to others outside of tech? Several times over the national average? For gluing some APIs together?
What other professions are like this where there's a good chunk of people who can have such a leisurely life, without taking much risk, and get so highly compensated compared to the rest? I doubt there's many. At some point, the constrained supply must answer to the high demand and reality shows up at the door.
I quit a year into the gig to build my own company. Reality is much different now. But I feel like I've gained many more skills outside of just tech that make me more equipped for whatever the future brings.
Long horizon problems are a completely unsolved problem in AI.
See the GAIA benchmark. While this surely will be beaten soon enough, the point is that we do exponentially longer-horizon tasks than that benchmark every single day.
It's very possible we will move away from raw code implementation, but the core concepts of solving long horizon problems via multiple interconnected steps are exponentially far away. If AI can achieve that, then we are all out of a job, not just some of us.
Take 2 competing companies that have a duopoly on a market.
Company 1 uses AI and fires 80% their workforce.
Company 2 uses AI and keeps their workforce.
AI in its current form is a multiplier; we will see Company 2 massively outcompete the first as each employee now performs 3-10 people's tasks. Therefore, Company 2's output is exponentially increased per person. As a result, it significantly weakens the first company. Standard market forces haven't changed.
The reality, as I see it, is that interns will now perform at the level of senior SWEs, senior SWEs will now perform at the level of VPs of engineering, and VPs of engineering will now perform at nation-state levels of output.
We will enter an age where goliath companies are commonplace. Hundreds or even thousands of mega-trillion-dollar companies. Billion-dollar startups will be expected almost at launch.
Again, unless we magically find a solution to long horizon problems (which we haven't even slightly found). That technology could be 1 year or 100 years away. We're waiting on our generation's Einstein to discover it.
I spent approximately a decade trying to get the experience and schooling necessary to move out of being a "linux monkey" (read: responding to shared webhosting tickets, mostly opened by people who had broken their Wordpress sites) to being an SRE.
Along the way I was an "incident manager" at a couple different places, meaning I was basically a full-time Incident Commander under the Google SRE model. This work was always fun, but the hours weren't great (these kind of jobs are always "coverage" jobs where you need to line up a replacement when you want to take leave, somebody has to work holidays, etc.). Essentially I'd show up at work and paint the factory by making sure our documentation was up to date, work on some light automation to help us in the heat of the moment, and wait for other teams to break something. Then I'd fire up a bridge and start troubleshooting, bringing in other people as necessary.
This didn't seem like something to retire from, but I can imagine it being something that comes back, and I may have to return to it to keep food on the table. It is exactly the kind of thing that needs a "human touch".
What does it mean to be a software engineer? You know the syntax, standard library, and common third party libraries of your domain. You know the runtime, and the orchestration around multiple instances of your runtime. You know how to communicate between these instances and with third-party runtimes like databases and REST APIs.
A large model knows all of this as well. We already rely on generative language model conversations to fill in the knowledge gaps that Googling for documentation (or “how do I do X?” stackoverflow answers) filled.
What’s harder is debugging. A lot of debugging is guesswork and action taking, note-taking, and brain-storming for possible ideas as to why X crashes on Y input.
Bugs that boil down to isolating a component and narrowing down what’s not working are hard. Being able to debug them could be the moat that will protect us SWEs from redundancy. Alternatively, pioneering all the new ways of getting reproducible builds and reproducible environments will be the route to eliminating this class of bug entirely, or at least being able to confidently say that some bug was probably due to bad memory, bad power supplies, or bad luck.
I think it might be the opposite. It's not advisable to give advice to young SWEs when you might be one yourself out some.
Junior devs aren't going away. What might shrink is the gap between where a junior dev is hired and the real start line of adding value, along with the effort and investment it takes to get them there before they hop ship.
AI agents will become their coding partners that can teach and code with the junior dev, resulting in more reliable contributions to a code base, and sooner.
By teach and code with, I mean explaining so much of the basic stuff, step by step, tirelessly, in the exact way each junior dev needs, to help them grow and advance.
This will allow SWEs to move up the ladder and take on more valuable work (understanding problems and opportunities, for example) and solve higher-level problems, or the same problems from a higher perspective.
Specifically, the focus of junior devs on problems, or problem sets, could give way to placing them in front of opportunities to be figured out and solved.
LLMs can write code today; I'm not sure they can manage clean changes to an entire codebase on their own today, at scale or for many teams. Some folks probably have this figured out quietly for their particular use cases.
Real software engineering is as far from "only writing code", as construction workers are from civil engineering.
> So, fellow software engineers, how do you future-proof your career in light of, the inevitable, LLM take over?
I feel that software engineering being taken over by LLM is a pipe dream. Some other, higher, form of AI? Inevitably. LLMs, as current models exists and expand? They're facing a fair few hurdles that they cannot easily bypass.
To name a few: requirement gathering, scoping, distinguishing between different toolsets, comparing solutions objectively, keeping up with changes in software/libraries... etc. etc.
Personally? I see LLMs tapering off in new developments over the following few years, and I see salesmen trying to get a lot of early adopters to hold the bag. They're overpromising, and the eventual under-delivery will hurt. Much like the AI winter did.
But I also see a new paradigm coming down the road, once we've got a stateful "intelligent" model that can learn and adapt faster, and can perceive time more innately... but that might take decades (or a few years, you never know with these things). I genuinely don't think it'll be a direct evolution of LLMs we're working on now. It'll be a pivot.
So, I future-proof my career simply: I keep up with the tools and learn how to work around them. When planning my teams, I don't intend to hire 5 juniors to grind code, but 2 who'll utilize LLMs to teach them more.
More and more, I ask my junior peers for their LLM queries before I go and explain things directly. I also teach them to prompt better. A lot of stuff we've had to explain manually in the past can now be prompted well, and stuff that can't - I explain.
I also spend A LOT of time teaching people to take EVERYTHING AI-generated with generous skepticism. Unless you're writing toys and tiny scripts, hallucinations _will_ waste your time. Often the juice won't be worth the squeeze.
More than a few times I've spent a tedious hour navigating 4o's or Claude's hallucinated confident failures, instead of a pleasant and productive 45 minutes writing the code myself... and from peer discussions, I'm not alone.
Assuming LLMs grow past their current state where they are kind of hit-and-miss for programming, I feel like this question is very similar to e.g. asking an assembly programmer what they are doing to future-proof their career in light of high-level languages. I can't recall a situation in which the correct answer would not be one of
(1) "Finding a niche in which my skills are still relevant due to technical constraints",
(2) "Looking for a different job going forward", or
(3) "Learning to operate the new tool".
Personally I'm in category (3), but I'd be most interested in hearing from others who think they are on track to (1). What are the areas where LLMs will not penetrate due to technical reasons, and why? I'm sure these areas exist, I just have trouble imagining which they are!
----
One might argue that there's another alternative of finding a niche where LLMs won't penetrate due to regulatory constraints, but I'm not counting this because that tends to be a short-term optimisation.
My view is that the point at which LLMs could cause the death of SWE as a profession is also the point at which practically all knowledge professions could be killed. To fully replace SWEs still requires the LLM to have enough external context that it could replace a huge range of jobs, and it'd probably be best to deal with the consequences of that as and when they happen.
I think in principle LLMs are no different from other lowercase-a abstractions that have substantially boosted productivity while lowering cost, from compilers to languages to libraries to protocols to widespread capabilities like payments and cloud services and edge compute and more and more. There is so much more software that can be written, and may be rewritten, abstract machines that can be built and rebuilt, across domains of hardware and software, that become enabled by this new intelligence-as-a-service capability.
I think juniors are a significant audience for LLM code production because they provide tremendous leverage for making new things. For more experienced folk, there are lots of choices that resemble prior waves of adoption of new state of the art tools/techniques. And as it always goes, adoption of those in legacy environments is going to go more slowly, while disruption of legacy products and services that have a cost profile may occur more frequently as new economics for building and then operating something intelligent start to appear.
Not (currently) worried. Lots of comments here about IntelliSense being a similar step up, and I've had it disabled for years (I find it more of a distraction than a help). So far LLMs feel like a more capable - but more error-prone - IntelliSense. Unless it's stamping out boilerplate or very menial tasks, I have yet to find a use for LLMs that doesn't take up more time fixing the output than just writing it in the first place.
Who knows though in 10 years time I imagine things will be radically different and I intend to periodically use the latest AI assistance so I keep up, even if it’s a world I don’t necessarily want. Part of why I love to code is the craft and AI generated code loses that for me.
I do, however, feel really lucky to be at the senior end of things now. Because I think junior roles are seriously at risk. Lots of the corrections needed for LLMs seem to be the same kind of errors new grads make.
The problem is - what happens when all of us seniors are dead/retired and there are no juniors, because they got wiped out by AI?
A quote from SICP:
> First, we want to establish the idea that a computer language is not just a way of getting a computer to perform operations, but rather that it is a novel formal medium for expressing ideas about methodology. Thus, programs must be written for people to read, and only incidentally for machines to execute.
From this perspective, the code base isn’t just an artifact left over from the struggle of getting the computer to understand the business’s problems. Instead, it is an evolving methodological documentation (for humans) of how the business operates.
Thought experiment: suppose that you could endlessly iterate with an LLM using natural language to build a complex system to run your business. However, there is no source code emitted. You just get a black box executable. However, the LLM will endlessly iterate on this black box for you as you desire to improve the system.
Would you run a business with a system like this?
For me, it depends on the business. For example, I wouldn’t start Google this way.
I think it's about evaluating the practical strengths and weaknesses of genAI for coding tasks, and trying to pair your skillset (or areas of potentially quick skill learning) with the weaknesses. Try using the tools and see what you like and dislike. For example I use a code copilot for autocomplete and it's saving my carpals; I'm not a true SWE more a code-y DS, but autocomplete on repetitive SQL or plotting cells is a godsend. It's like when I first learned vi macros, except so much simpler. Not sure what your domain is, but I'd wager there are areas that are similar for you; short recipes or utils that get reapplied in slightly different ways across lots of different areas. I would try and visualize what your job could look like if you just didn't have to manually type them; what types of things do you like doing in your work and how can you expand them to fill the open cycles?
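To give a flavour of what I mean by those short, repetitive recipes (a made-up example; the column names and helper are hypothetical), this is the kind of plotting util an assistant will happily autocomplete once you've written it a couple of times:

    # Repetitive plotting helper: the structure repeats across metrics,
    # only the names change, which is why autocomplete handles it so well.
    import matplotlib.pyplot as plt
    import pandas as pd

    def plot_metric_by_day(df, metric, title):
        # Plot the daily mean of one metric column from a DataFrame with a "day" column.
        daily = df.groupby("day", as_index=False)[metric].mean()
        fig, ax = plt.subplots(figsize=(8, 4))
        ax.plot(daily["day"], daily[metric], marker="o")
        ax.set_xlabel("day")
        ax.set_ylabel(metric)
        ax.set_title(title)
        fig.tight_layout()
        plt.show()

Not hard, just tedious to retype with small variations, which is exactly where the autocomplete earns its keep.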
(10+ years of experience here) I will be starting training for a commercial pilot license next year. The pay is much less than that of a software engineer, but I think this job is already done for most of us; only the top 5% will survive. I don't think I'm part of that top and don't want to go into management or PO roles, so I am done with tech.
I'm hoping I can transition to some kind of product or management role since frankly I'm not that good at coding anyways (I don't feel like I can pass a technical interview anymore, tbh).
I think a lot of engineers are in for some level of rude awakening. I think a lot of engineers haven't applied some level of business/humanities thinking to this, and I think a lot of corporations care less about code quality than even our most pessimistic estimates. It wouldn't surprise me if cuts over the next few years get even deeper, and I think a lot of high performing (read: high paying) jobs are going to get cut. I've seen so many comments like "this will improve engineering overall, as bad engineers get laid off" and I don't think it's going to work like that.
Anecdotal, but no one from my network has actually recovered from the post-covid layoffs, and they haven't even stopped. I know loads of people who don't feel like they'll ever get a job as good as they had in 2021.
This question is so super weird, because:
Ask an LLM to generate you 100 more lines of code, no problem you will get something. Ask the same LLM to look at 10000 lines of code and intelligently remove 100... good luck with that!
Seriously, I tried uploading some (but not all) of the source code of my company to our private Azure OpenAI GPT-4o for analysis, as a 48 MB cora-generated context file, and really the usefulness is not that great. And don't get me started about Copilot's suggestions.
Someone really has to know their way around the beast, and LLMs cover a very, very small part of the story.
I fear that the main effect of LLMs will be that developers that have already for so long responded to their job-security fears with obfuscation and monstrosity... will be empowered to produce even more of that.
Are LLMs / AI attaining better results than the data they were trained on? For me, the answer is no: LLMs are always imperfectly modeling the underlying distribution of the training dataset.
Do we have sufficient data that spans the entire problem space that SWE deals with? Probably not, and even if we did it would still be imperfectly modeled.
Do we have sufficient data to span the space of many routine tasks in SWE? It seems so, and this is where the LLMs are really nice: e.g., scripting, regurgitating examples, etc.
So to me, much like previous innovation, it will just shift job focus away from the things the innovation can do well, rather than replacing the field as a whole.
One pet theory I have is that we currently suck at assessing model performance. Sure, vibes-based analysis of the model's outputs makes them look amazing - but is that not the literal point of RLHF? How good are these outputs really?
LLMs are models and therefore require data, including new data. When it comes to obscure tasks, niche systems, and peculiar integrations, LLMs seem to struggle with that nuance.
So should you be worried they will replace you? No. You should worry about not adopting the technology in some form, otherwise your peers will outpace you.
I've tended to think of the current stage of AI as a productivity boost along the lines of let's say, using an IDE vs coding with a text editor.
It's also good as a replacement for reading the docs/Google search, especially with search getting worse and full of SEO spam lately.
It's a big help, but it doesn't really replace the human. When the human can be replaced, any job done in front of a computer will be at risk, not just coders. I hope when that happens there will be full robot bodies with AI to do all of the other work.
Also, I know of several designers who can't code but are planning to use LLMs to make their startup ideas a reality by building an MVP. If the said startups take off, they will probably hire real human coders, thus creating more coding jobs. Jobs that would not exist without LLMs getting the original ideas off the ground.
So what I've seen so far is that LLMs are amazing for small self contained problems. Anything spanning a whole project they aren't quite up to the task yet. I think we're going to need a lot more processing power to get to that point. So our job will change, but I have a feeling it will be slow and steady.
I do use LLM for various support tasks. Is it super necessary? Probably not, but it really helps.
What they excel at, in my experience, is translating code to different languages, and they do find alternative dependencies in different runtime environments.
Code can be weird and prompting can take longer than writing yourself, but it still is nice support, even if you need to check the results. I only use local LLM where I do embed some of my code.
I am still not sure if LLMs are a boon for learning to code or if they are a hindrance. I tend to think they are a huge help.
As for future-proofing your career, I don't think developers need to be afraid that AI will write all code for us yet, simply because non-software-engineers suck at defining good requirements for software. I also believe that LLMs seem to hit walls on precision.
Some other industries might change significantly though.
Maybe the best thing to do is just continue practicing the art of writing code without LLMs. When you're the last person who can do it, it might be even more valuable than it is today.
(this is my naive tactic, I am sure sama and co will find a way to suck me down the drain with everyone else in the end)
For founders it's great. You don't need a ton of startup capital to actually start something; it's easier than ever. I think this means there will be less of a supply of good early-stage deals. And if VCs can't get in on the ground floor, it becomes harder to get the multiples they sell to their investors. It also means that the good companies will be able to finance their own growth, which means that whatever is left in the later stages just won't be as compelling. As an LP you now have to look at potentially other asset classes, or find investors who are going to adapt to still find success. Otherwise, I think AI is most disruptive here and now to the VC and, by implication, their capital markets. I think late-stage financing and the IPO market should not change substantially.
LLMs are not 100% correct 100% of the time. LLMs are subjective. Code should work 100% of the time, be readable, and objective.
We also already have "easier ways of writing software" - website builders, open source libraries, StackOverflow answers, etc.
Software builds on itself. It's already faster to copy and paste someone's GitHub repo of their snake game than to ask an LLM to build a snake game for you. Software problems will continue to get more challenging as we create more complex software with unsolved answers.
If anything, software engineers will be more and more valuable in the future (just as the past few decades have shown how software engineers have become increasingly more valuable). Those that code with LLMs won't be able to retain their jobs solving the harder problems of tomorrow.
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output
This. Programming will become easier for everyone. But the emergent effect will be that senior engineers become more valuable, juniors much less.
Why? It's an idea multiplier. 10x of near-zero is still almost zero. And 10x of someone who's a 10 already - they never need to interact with a junior engineer again.
> until eventually LLMs will become so good, that senior people won't be needed any more.
Who will write the prompts? How do you measure the success? Who will plan the market strategy? Humans are needed in the loop by definition as we build software to achieve human goals. We'll just need significantly fewer people to achieve them.
The best software is made by people who are using it. So I figure we should all go learn something that interests us which could use some more software expertise. Stop being SWE's and start being _____'s with a coding superpower.
AI aside, we probably should have done this a long time ago. Software for software's sake tends to build things that treat the users poorly. Focusing on sectors that could benefit from software, and not treating software itself like a sector, seems to me like a better way.
I know that sounds like giving up, but look around and ask how much of the software we work on is actually helping anybody. Let's all go get real jobs. And if you take an honest look at your job and think it's plenty real, well congrats, but I'd wager you're in the minority.
I think people are coping. Software engineering has only gotten easier over time. Fifteen years ago, knowing how to code made you seem like a wizard, and learning was tough - you had to read books, truly understand them, and apply what you learned. Then came the web and YouTube, which simplified things a lot. Now with LLMs, people with zero experience can build applications. Even I find myself mostly prompting when working on projects.
Carmack’s tweet feels out of touch. He says we should focus only on business value, not engineering. But what made software engineering great was that nerds could dive deep into technical challenges while delivering business value. That balance is eroding - now, only the business side seems to matter.
My job is not to translate requirements into code, or even particularly to create software, but to run a business process for which my code forms the primary rails. It is possible that advanced software-development and reasoning LLMs will erode some of the advantage that my technical and analytical skills give me for this role. On the other hand even basic unstructured-text-understanding LLMs will dramatically reduce the size of the workforce involved in this business process, so it's not clear that my role would logically revert to a "people manager" either. Maybe there is a new "LLM supervisor" type of role emerging in the future, but I suspect that's just what software engineer means in the future.
My plan is to become a people person / ideas guy.
I will do the same thing I did on all of the previous occasions when new automation ate my job: move up the ladder of abstraction and keep on working.
I haven't written any actual code for a couple of decades, after all: I just waffle around stitching together vague, high-level descriptions of what I want the machine to do, and robots write the code for me! I don't even have to manage memory anymore; the robots do it all. The robots are even getting pretty good at finding certain kinds of bugs now. A wondrous world of miracles, it is... but somehow there are still plenty of jobs for us computer-manipulating humans.
For now, taste and debugging still rule the day.
o1 designed some code for me a few hours ago where the method it named "increment" also did the "limit-check" and "disable" functionality as side effects.
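The shape of the problem, roughly (an invented sketch, not o1's actual output): a method whose name promises one thing while it quietly mutates unrelated state.

    # Hypothetical sketch of the anti-pattern described above: "increment"
    # also performs a limit check and disables the counter as side effects.
    class Counter:
        def __init__(self, limit):
            self.value = 0
            self.limit = limit
            self.enabled = True

        def increment(self):
            # Callers expecting a plain increment also get a hidden limit check
            # and a state change.
            if not self.enabled:
                return
            self.value += 1
            if self.value >= self.limit:
                self.enabled = False

Taste is exactly the ability to notice that and insist on pulling those responsibilities apart.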
In the longer run, SWE's evolve to become these other roles, but on-steroids:
- Entrepreneur
- Product Manager
- Architect
- QA
- DevOps
- Inventor
Someone still has to make sure the solution is needed, the right fit for the problem given the existing ecosystems, check the code, deploy the code and debug problems. And even if those tasks take fewer people, how many more entrepreneurs become enabled by fast code generation?
This has never, ever been an industry where you will retire doing the same kind of work as you did when you started. There is nothing you can do to future proof your career - all you can do is be adaptable, and learn new things as you go.
I fear that in the push from "manual coding" to "fully automated coding", we might end up in the middle, where we do "semi-manual coding" assisted by AI, which would require a different set of software engineering skills.
I don't know about all the other commenters on this thread, but my personal experience with LLMs is that it really is just a glorified stack overflow. That's how I use it anyway. It saves me a couple clicks and keystrokes on Google.
It's also incredibly useful for prototyping and doing grunt work (i.e., if you work with Lambda functions on AWS, you can get it to spit out boilerplate for you to amend).
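For what it's worth, the kind of Lambda boilerplate I mean looks roughly like this (a minimal sketch assuming the Python runtime and an API Gateway proxy integration; the event fields shown are the standard ones for that setup):

    # Minimal AWS Lambda handler skeleton (Python runtime, API Gateway proxy event).
    import json

    def lambda_handler(event, context):
        # API Gateway passes the request body as a string; parse it if present.
        body = json.loads(event.get("body") or "{}")
        name = body.get("name", "world")

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }

An LLM will happily spit out this skeleton in seconds; the part worth my time is amending it with the actual business logic.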
I started diversifying this year for this very reason. Firstly I’ve been engineering for over a decade and purposefully avoiding leadership roles, so I’ve stepped into a head of engineering role. Secondly I’ve taken up a specific wood working niche, it will provide me with a side hustle to begin with and help diversify our income.
I don't think LLMs are taking jobs today, but I now clearly see a principle that has fully emerged: non-tech people now have a lust for achieving and deploying various technology solutions, and the tech sector will fulfill it in the next decade.
Get ahead early.
I truly do not believe that many software engineers are going to lose jobs to anything resembling the current crop of LLMs in the long run, and that’s because the output cannot be trusted. LLMs just make shit up. Constantly. See: Google search results, Apple Intelligence summaries, etc.
If the output cannot be trusted, humans will always be required to review and make corrections, period. CEOs that make the short-sighted mistake of attempting to replace engineers with LLMs will learn this the hard way.
I’d be far more worried about something resembling AGI that can actually learn and reason.
Be exceptionally skilled in areas that can’t yet be automated.
Focus on soft skills: communication, problem-solving, social intelligence, collaboration, connecting ideas, and other "less technical" abilities. Kind of management and interpersonal stuff that machines can’t replicate (yet).
It's real. Subscribing to an LLM provider for $20/month feels like a better deal than hiring an average-skilled software engineer.
It's even funnier when we realize that people we hire are just going to end up prompting the LLM anyway. So, why bother hiring?
We really need next-level skills. Beyond what LLM can handle.
> How can we future-proof our career?
Redefine what your career is.
2017-2018: LLMs could produce convincing garbage with heavy duty machines.
Now:
- You can have good-enough LLMs running on a laptop at reasonable speeds.
- LLMs can build code from just descriptions (PyCharm Pro just released this feature).
- You can take screenshots of issues and the LLM will walk through how to fix those issues.
- Live video analysis is possible, just not available to the public yet.
The technology is rapidly accelerating.
It is better to think of your career as part of the cottage industry at the start of the industrial revolution.
There will likely be such jobs existing, but not at the required volume of employees we have now.
I own all of my own code now, and so I benefit from creating it in the most efficient way possible. That is sometimes via AI and I am fine with it being more.
But, generally I don't see AI as currently such a boon to productivity that it would eliminate programming. Right now, it's nowhere near as disruptive as easy-to-install shared libraries (e.g. npm). Sure, I get lines of code here and there from AI, but in the 90's I was programming lots of stuff that I now just get instantaneously for free with constant updates.
You're watching our coworkers hit the gas pedal on turning the entire industry into a blue-collar job. Not just the junior positions will dry up; the senior positions will dry up too. The average engineer won't understand coding or even do the review. Salaries won't drop; they'll stagnate over the next 10-20 years. People don't actually care about coding; they may enjoy it, but they're only here for the money. In their short-sighted minds, using LLMs is the quickest way to demonstrate superiority by way of efficiency and argue for a promotion.
You still need a person to be responsible for the software being built - you can't fire ChatGPT if the code blows up.
The developer has in fact even more responsibility because expectations have gone up so even more code is being produced and someone has to maintain it.
So there will be a point when that developer will throw their hands up and ask for more people which will have to be hired.
Basically, the boiling point has gone up (or down, depending on how you interpret the analogy), but the water still has to be boiled in more or less the same way.
Until we have AGI - then it's all over.
Software quality, already bad, will drop even more as juniors outsource all their cognition to the SUV for the mind. Most "developers" will be completely unable to function without their LLMs.
As someone who went to a bootcamp a good while ago... I am now formally pursuing a technical master's program which has an angle on AI (just enough to understand where to apply it, not doing any research).
So far LLMs scale the opposite of silicon: instead of exponential gains in efficiency/density/cost, each generation of model takes exponentially more resources.
Also, being a software developer is not primarily writing code, these days it is much more about operating and maintaining production services and long running projects. An LLM can spit out a web app that you think works, but when it goes wrong or you need to do a database migration, you're going to want someone who can actually understand the code and debug the issue.
Just because you prompt an LLM doesn’t mean it ain’t programming. The job will just change from using programming languages to using natural languages. There is no silver bullet.
Careers are future-proofed through relationships and opportunities, not skills. Skills are just the buy-in.
Getting and nailing a good opportunity opens doors for a fair bit, but that's rare.
So the vast, vast majority of future-proofing is relationship-building - in tech, with people who don't build relationships per se.
And realize that decision-makers are business people (yes, even the product leads and tech leads are making business decisions). Deciders can control their destiny, but they're often more exposed to business vicissitudes. There be dragons - also gold.
To me, the main risk of LLM's is not that they'll take over my coding, but that they'll take over the socialization/community of developers - sharing neat tricks and solutions - that builds tech relationship. People will stop writing hard OS software blogs and giving presentations after LLM's inter-mediate to copy code and offer solutions. Teams will become hub-and-spoke, with leads enforcing architecture and devs talking mostly to their copilots. That means we won't be able to find like-minded people, no one has any incentive or even occasion to share knowledge. My guess is that relationship skills will be even more valued, but perhaps also a bit fruitless.
Doctors and lawyers and even business people have a professional identity from their schooling, certification, and shared history. Developers and hackers don't; you're only as relevant as your skills on point. So there's a bit of complaining but no structured resistance on the part of developers to their work being factored, outsourced, and now mediated by LLM's (that they're busily building).
Developers have always lived in the shadow of the systems they're building. You used to have to pay good money for compilers and tools, licenses to APIs, etc. All the big guys realized that by giving away that knowledge and those tools, they make the overall cost cheaper, and they can capture more. We've been free-riding for 30 years, and it's led us to believe that skills matter most. LLM's are a promising way to sell cloud services and expensive hardware, so there will be even more willingness than crypto or car-sharing or real estate or whatever to invest in anything disruptive. We rode the tide in, and it will take us out again.
Nothing. A tool that is only right ~60% of the time is still useless.
I've yet to have an LLM ever produce correct code the first time, or understand a problem at the level of anything above a developer that went through a "crash course".
If we get LLMs trained on senior-level codebases and senior-level communications, they may stand a chance someday. Given that they are trained using massively available "knowledge" sites like Reddit, it's going to be a while.
Learning how to use LLMs and seeing what works and what doesn't. When I've used them to code, after a while I can start to figure out where they hallucinate. I have made an LLM system that performs natural language network scanning called http://www.securday.com which I presented at DEF CON (hacker conference). Even if it has no effect on your employment, it is fun to experiment with things regardless.
I recently used an LLM to translate an algorithm from Go to Python at my work. The translation was quite accurate, and it made me think tasks involving obvious one-to-one correspondence like code translation might be easier for LLMs compared to other tasks. I can see the potential for offloading such tasks to LLMs. But the main challenge I faced was trusting the output. I ended up writing my own version and compared them to verify the correctness of the translation.
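To make that concrete, the check I mean is basically a differential test (a rough sketch; the function bodies here are placeholders, not my actual algorithm): run the hand-written reference and the LLM-translated version over a pile of random inputs and compare.

    # Differential test: compare a hand-written reference against the LLM-translated
    # version on random inputs. Both functions below are placeholder examples.
    import random

    def reference_impl(xs):
        # Hand-written version, used as ground truth.
        return sum(x * x for x in xs)

    def translated_impl(xs):
        # Stand-in for the code the LLM produced from the Go original.
        total = 0
        for x in xs:
            total += x * x
        return total

    for _ in range(1000):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        assert reference_impl(xs) == translated_impl(xs), xs
    print("translation matches the reference on 1000 random inputs")

It is extra work, but it turned "I hope the translation is right" into something I could actually sign off on.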
If people enjoy SWE they will never be replaced, nobody will forcefully enter my house to tear the keyboard out of my hands.
Even if AI coding becomes the norm, people who actually understand software will write better prompts.
The current quality of generated code is awful anyways. We are not there yet.
But there is a very simple solution for people who really dislike AI. Licensing. Disallow AI to contribute to and use your FOSS code and if they do, sue the entity that trained the model. Easy money.
I future proofed myself (I believe, and it's worked well so far despite many technology transitions) by always adapting to newer tools and technologies, not involving myself in operating system/text editor/toolchain wars but instead become at least proficient in the ones that matter and by ensuring my non technical skills are as strong or stronger than my technical skills.
This is a path I would recommend with or without LLM's in the picture.
I’m fairly OK financially at this point so my strategy is to make the money I can, while I can, and then when I become unemployable or replaced by LLMs, just retire.
Any post like this here will inevitably be met with some variation of "I am better than LLMs, that won't change in any near future."
There are 4.5 million SWEs in the USA alone. Of those, how many are great at what they do? 0.5% at best. How many are good? 5% tops. Average/mediocre? 50%. Below average to downright awful - the rest.
While LLMs won't be touching the great/good group in any near future, they 100% will touch the awful group, as well as the average/mediocre one.
Personally I hope it will take a lot of eng work, especially the menial work.
If it could also help management understand their own issues and how to integrate solutions into their software, then it would be great!
The core here is that if engineering is going, then law is going, marketing is going, and a lot of other professions are also going.
This means that we have structural issues we need to solve then.
And in that case it is about something else than my own employability.
Commenting on the useful-not-useful split here.
Rather than discuss the current "quality of code produced", it seems more useful to talk about what level of abstraction it's competent at (line, function, feature), and whether there are any ceilings there. Of course the effective level of abstraction depends on the domain, but you get the idea: it can generate good code at some level if prompted sufficiently well.
LLMs are not really there except for juniors though
the quality of the code is as bad as it was two years ago, the mistakes are always there somewhere and take a long time to spot, to the point where it's somewhat of a useless party trick to actually use an LLM for software development
and for more senior stuff the code is not what matters anyway, it's reassuring other stakeholders, budgeting, estimation, documentation, evangelization, etc.
Grappling with this hard right now. Anyone who is still of the "these things are stupid and will never replace me" mindset needs to sober up real quick. AGI level agentic systems are coming, and fast. A solid 90% of what we thought of as software engineering for the last 30 years will be completely automated by them in the next couple years. The only solution I see so far is to be the one building them.
LLM is to software engineering as a tractor was to farming.
They're tools that can make you more efficient, but they still need a human to function and guide them.
Be the referee in times of uncertainty. Be the reverse engineer of black boxes. Be the guy who knows what kind of wrong on Stack Overflow grows. Be witty. Be an outlier but not an outliar. Be someone who explains the words of prophets from sand mountains to the people. Also, as AI depopulates everything it touches, carry a can of oil with you to oil the door hinges of ghost towns.
How many of these posts do we have to suffer through?
It's just like the self-driving car—great for very simple applications but when will human drivers become a thing of the past? Not any time soon. Not to mention the hype curve has already crested: https://news.ycombinator.com/item?id=42381637
Simply, as LLM capabilities increase, writing code might disappear, but building products will remain for much longer.
If you enjoy writing code, you might have to make it a hobby like gardening, instead of earning money from it.
But the breed of startup founder (junior and senior) that’s hustling for the love of building a product that adds value to users, will be fine.
Programming is about coding an idea into a set of instructions. LLMs are the same, they just require using a higher level language.
LLMs are tools, learn how to use them. You'll be more productive once you learn how to properly incorporate LLMs into your workflow, and maybe come up with some ideas to solve software problems with them along the way.
I wouldn't worry about being replaced by an LLM, I'd worry about falling behind and being replaced by a human augmented with an LLM.
Start selling the shovels.
I.e., get into the LLM/AI business
Even if LLM can do everything any software developer is capable of doing, it doesn’t mean it’ll solve interesting and profitable problems that humans or systems need. Just because most or some Hacker News readers can code, and some solve extremely difficult problems doesn’t mean they’re going to be successful and make profit.
Mass software-making has been commoditizing for a decade-plus (LEGO-like sticking together of CRUD APIs), and now ML/LLMs jump-accelerate this process. Any-API-as-a-service? Combinations thereof? In the terms of Wardley maps, it's entering a state of war, and while there will be new things on top of it, the gray mass will disappear / be unneeded / be replaced.
IMO the number and the quality / devotion of programmers will go back to levels of pre-web/JS, or even pre-Visual Basic. They would be programming somewhat differently than today. But that's a (rosy) prediction, and it probably is wrong. The not-rosy version is that all common software (and that's your toaster too) will become a shitmare, with the consequence that everyone will live in order to fix/work around/"serve" it, instead of using it to live.
Or maybe something in the middle?
As a systems administrator now SRE, it's never really been about my code... if code at all.
Where I used to be able to get by with babysitting shell scripts that only lived on the server, we're now in a world with endless abstraction. I don't hazard to guess; just learn what I can to remain adaptable.
The fundamentals tend to generally apply
I think there are easy answers to this question that will most likely be effective: adapt as you have with any emerging software technology that impacts your work, embrace LLMs in your workflow where feasible and efficient, learn about the emerging modelops technologies that are employed to utilize LLMs.
There's no reality in the next twenty years where a non-technical individual is going to converse with a persistent agentic AI to produce a typical SaaS product and maintain it over a period of many years. I think we'll see stabs at this space, and I think we'll see some companies try to replace engineering teams wholesale, and these attempts will almost universally feel kinda sorta ok for the first N weeks, then nosedive and inevitably result in the re-hiring of humans (and probably the failure of many projects and businesses as well).
Klarna said they stopped hiring a year ago because AI solved all their problems [1]. That's why they have 55 job openings right now, obviously [2] (including quite a few listed as "Contractor"; the utterly classic "we fucked up our staffing"). This kind of disconnect isn't even surprising; it's exactly what I'd predict. Business leadership nowadays is so far disconnected from the reality of what's happening day-to-day in their businesses that they say things like this with total authenticity, they get a bunch of nods, and things just keep going the way they've been going. Benioff at Salesforce said basically the same thing. These are, put simply, people who have such low legibility on the mechanism of how their business makes money that they believe they understand how it can be replaced; and they're surrounded by men who nod and say "oh yes, Marc, yes of course we'll stop hiring engineers," and then somehow that message conveniently never makes it to HR, because those yes-men who surround him are the real people who run the business.
AI cannot replace people; AI augments people. If you say you've stopped hiring thanks to AI, what you're really saying is that your growth has stalled. The AI might grant your existing workforce an N% boon to productivity, but that's a one-time boon barring any major model breakthroughs (don't count on it). If you want to unlock more growth, you'll need to hire more people, but what you're stating is that you don't think more growth is in the cards for your business.
That's what these leaders are saying, at the end of the day; and it's a reflection of the macroeconomic climate, not of the impacts of AI. These are dead businesses. They'll lumber along for decades, but their growth is gone.
[1] https://finance.yahoo.com/news/klarna-stopped-hiring-ago-rep...
Maybe creating your own AI agents with your own "touch". Devin, for example, is very dogmatic regarding pull requests and some process bureaucracy. Different tasks and companies might benefit from different agent styles and workflows.
However, true AGI would change everything, since the AGI could create specialized agents by itself :)
I would love to swim in code all day. My problems are always people and process. Down in trenches it is really messy and we often need five or ten people working in concert on the same problem that already has implicit context established for everyone, each contributing their unique viewpoint (hopefully).
You can retire from tech and select another profession that’s more proof against bubbles than tech eng. In five years I’ll be able to seek work as a forensic auditor with decades of tech experience to help me uncover the truth, which is worth my weight in gold to the compliance and enforcement professions.
My advice: keep honing your problem-solving skills by doing math challenges and chess puzzles, learning new languages (not programming ones, though those might help too), and reading books; anything that helps you get newer perspectives and challenges your mind is good enough to withstand the race against AI.
I am so not worried about it. This is like, ten years ago: "how do you protect your career as a vim-user when obviously Visual Studio will take over the world." It is a helpful tool for some people but You Don't Really Need It and it isn't why some engineers are super impactful.
> My prediction is that junior to mid level software engineering will disappear mostly,
That statement makes no sense. It's a skill progression. There are no senior levels of anything if there isn't the junior level as a staging ground for learning the trade and then feeding the senior level.
My job is determining what needs to be done, proving it should be done, getting people to approve it and getting it done.
LLMs help more with the last part which is often considered the lowest level. So if you're someone who just wants to code and not have to deal with people or business, you're more at risk.
I future proof my career by making sure I deeply understand what my users need, how the tech landscape available to satisfy those needs change over time and which solutions work within the constraints of my organization and users.
Some part of that involves knowing how AI would help, most doesn't.
LLM context sizes will have to increase a lot to digest a large system of code.
Then there are the companies that own LLMs large enough to do excellent coding. Consider the "haves" and "have-nots": those that have the capital to incorporate these amazing LLMs and those that do not.
A bit off topic but… from my point of understanding,
Unless there's a huge beneficial shift in cheap and clean energy production and distribution, and fast, climate change and its consequences for society and industries (already started) outweigh and even threaten LLMs (a worry on a 2-5 year horizon).
I learnt how to write them. Modern equivalent of re-skilling. When my role, as it is, is replaced, I'll already be on the ground floor for all things AI. If you're in software dev and can't already reskill rapidly, then you're probably in the wrong job.
I think you should always be searching for your next role. This will keep you informed about the market and know if the SWE skills are indeed shifting towards AI (this would become part of their job interviews, which I have not seen yet)
We may be entering a future where there's simply less demand for SE talent. I mean, there's already a vast oversupply of SE talent. What about looking into other job functions such as Product/project management or other career options? teaching?
When "LLMs will become so good, that senior people won't be needed anymore" happens, it would mean such level of efficiency that I don't need to work at all (or can live pretty comfortably off some part-time manual labor)
So I don't care at all
The level of dismissal of LLMs, especially from senior developers, is alarming. It seems like they don't use them much or have just tried some hobby stuff on the side. What I think will happen is that the engineers who learn to use them best will have a huge advantage.
> seem to have missed the party where they told us that "coding is not a means to an end"
What is it for you then? My role isn't software engineer, but with a background in computer engineering, I see programming as a tool to solve problems.
My plan is to retire in 1-2 years, take a break and then, if I feel like it, go all in on AI. Right now it's at that awkward spot where AI clearly shows potential but from my experience it's not really improving my productivity on complex tasks.
I bailed out of the industry entirely. Having fallen back on basic tool use skills I picked up during my 20s spent bouncing around in the trades I'm feeling pretty comfortable that AI doesn't pose a meaningful threat to my livelihood.
If, as a developer, you are already reliant upon LLMs to write code, or you require abstractions that LLMs can write, you are already replaceable by LLMs. As a former JavaScript developer this makes me think of all the JavaScript developers who cannot do their jobs without large JavaScript frameworks and never really learned to write original JavaScript logic.
That being said, you can future-proof your career by learning actual concepts of engineering: process improvement, performance, accessibility, security analysis, and so on. LLMs, as many other comments have said, remain extremely unreliable, but they can already do highly repeatable, low-cost tasks like framework stuff really well.
In addition to actually learning to program here are other things that will also future proof your career:
* Diversify. Also learn and certify in project management, cyber security, API management, cloud operations, networking, and more. Nobody is remotely close to trusting AI to perform any kind of management or analysis.
* Operations. Get good at doing human operations things like DevOps, task delegation, release management, and 24-hour uptime.
* Security. Get a national level security clearance and security certifications. Cleared jobs remain in demand and tend to pay more. If you can get somebody to sponsor you for a TS you are extremely in demand. I work from home and don't use my TS at all, but it still got me my current job and greatly increases my value relative to my peers.
* Writing. Get better at writing. If you are better at written communications than your peers you are less replaceable. I am not talking about writing books or even just writing documentation. I am talking about day-to-day writing emails. In large organizations so many problems are solved by just writing clarifying requirements and eliminating confusion that requires a supreme technical understanding of the problems without writing any code. This one thing is responsible for my last promotion.
Become a plumber or electrician...
Nobody has a crystal ball, but I do find it alarming how confident roughly half the devs here are that it'll never happen short of near-AGI.
Alarming in the same way as a company announcing that their tech is unhackable. We all know what happens next in the plot
The best engineers focus on outcomes. They use the best tools available to achieve the best outcome as fast as possible.
AI coding tools are increasingly proving to be some of the highest-leverage tools we've seen in decades. They still require some skill to use effectively and have unique trade-offs, though. Mastering them is the same as anything else in engineering; things are just moving faster than we're used to.
The next generation of successful engineers will be able to do more with less, producing the output of an entire team by themselves.
Be one of those 100x engineers, focused on outcomes, and you'll always be valuable.
Currently starting my first project integrating with Azure OpenAI using the new MS C# AI framework. I'm guessing that having experience actually building systems that integrate with LLMs could be a good career move over the next decade.
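For what it's worth, the first call in that kind of integration is small whatever the framework. A rough sketch with the Python openai client against Azure OpenAI (the endpoint, deployment name, and API version here are placeholders):

    import os
    from openai import AzureOpenAI  # pip install openai>=1.0

    client = AzureOpenAI(
        azure_endpoint="https://my-resource.openai.azure.com",  # placeholder resource
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",  # check your resource for the right version string
    )

    response = client.chat.completions.create(
        model="my-gpt-4o-deployment",  # Azure uses the deployment name here
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize this incident report: ..."},
        ],
    )
    print(response.choices[0].message.content)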
Find an area to specialize in that has more depth to it than just gluing APIs together.
Here’s a YouTube video about this topic: “AI will NOT replace Software Engineers (for now)” https://youtu.be/kw7fvHf4rDw
My extended family has money and I expect they’ll have to pay for my family for a few years until I can reskill as a carpenter. I don’t expect to be doing software again in the future, I’m fairly mediocre at coding.
Continual learning & keeping up with current trends as part of professional growth.
Working in ML my primary role is using new advances like LLMs to solve business problems.
It is incredible though, how quickly new tools and approaches turn over.
Realise that code is a commodity. Don't build your entire professional experience around a programming language or specific tooling. Be a problem solver instead - and you will always have a job.
Imo LLMs are dumb, and our field is far away from having LLMs smart enough to automate it, even at a junior level. I feel like the gap is so big that personally I'm not worried at all for the next 10 years.
> ...or feed entire projects to AI and let the AI code, while they do code review and adjustments.
Is there some secret AI available that isn't by OpenAI or Microsoft? Because this sounds like complete hogwash.
I can chip in from my tech consulting job where we ship a few GenAI projects to several AWS clients via Amazon Bedrock. I'm senior level but most people here are pretty much insulated.
I think whoever commented here once about more complex problems being tackled (and the nature of these problems becoming broader) is right on the money. Newer patterns around LLM-based applications are emerging, and having seen them first hand, they feel like a slight paradigm shift in programming. But they are still, at heart, programming questions.
A practical example: company sees GenAI chatbot, wants one of their own, based on their in-house knowledge base.
Right then and there, a whole slew of new business needs ensues, each requiring human input to make it work.
- Is training your own LLM needed? See a Data Engineer/Data engineering team.
- If going with a ready-made solution, which LLM to use instead? Engineer. Any level.
- Infrastructure around the LLM of choice. Get DevOps folk in here. Cost assessment is real and LLMs are pricey. You have to be on top of your game to estimate stuff here.
- Guard rails, output validation. Engineers.
- Hooking up to whatever app front-end the company has. Engineers come to the rescue again.
All these have valid needs for engineers, architects/staff/senior what have you — programmers. At the end of the day, these problems devolve into the same ol' https://programming-motherfucker.com
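To make that division of labour concrete, here's a minimal sketch (stubbed retrieval, model call, and guard rail; all names are hypothetical) of the pipeline such a chatbot tends to become; every stub maps to one of the roles above:

    from dataclasses import dataclass

    @dataclass
    class Doc:
        title: str
        text: str

    def retrieve(query: str, kb: list[Doc], k: int = 3) -> list[Doc]:
        # Stand-in for a real vector search over the in-house knowledge base:
        # rank documents by naive keyword overlap with the query.
        terms = set(query.lower().split())
        return sorted(kb, key=lambda d: -len(terms & set(d.text.lower().split())))[:k]

    def call_llm(prompt: str) -> str:
        # Whichever model was chosen above (Bedrock, Azure OpenAI, a local model, ...).
        raise NotImplementedError

    def validate(answer: str) -> bool:
        # Guard-rail example: reject empty or runaway answers; real systems add
        # PII filters, citation checks, prompt-injection screening, and so on.
        return 0 < len(answer.strip()) < 4000

    def answer_question(query: str, kb: list[Doc]) -> str:
        context = retrieve(query, kb)
        prompt = (
            "Answer using only the context below.\n\n"
            + "\n\n".join(f"# {d.title}\n{d.text}" for d in context)
            + f"\n\nQuestion: {query}"
        )
        answer = call_llm(prompt)
        return answer if validate(answer) else "Sorry, I can't answer that reliably."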
And I'm OK with that so far.
No idea.
Most of my code is written by AI, but it seems most of my job is arranging that code.
It saves me 50-80% of my keystrokes, but it sprinkles subtle errors here and there and doesn't seem to understand the whole architecture.
All I’m hearing is that in the future engineers will become so dependent on LLMs they will be personally shelling out $20k or more a year on subscriptions just to be able to do their jobs.
I invest in my soft skills. I’ve become pretty good at handling my business stakeholders now and while I do still code, I’m also keeping business in the loop and helping them to be involved.
I write code for things for which there is no code to learn from.
Create a SaaS and charge people $20/month, time-consuming but more possible with LLMs. Subscriptions are such a good business model for the reasons people hate subscriptions.
I'm not too worried.
As long as someone needs to be held accountable, you will need humans.
As long as you're doing novel work not in the training set, you will probably need humans.
With every new technology comes new challenges. The role will evolve to tackle those new challenges as long as they are software/programming/engineering specific
It's not going to be about careers anymore. It's going to be about leveraging AI and robotics as very cheap labor to provide goods and services.
LLMs have not affected my day-to-day at all. I'm a senior eng getting paid top percentile using a few niche technologies at a high profile company.
We had this same question when IDEs and autocomplete became a thing. We're still around today, just doing work that's a level harder :)
If things go as you predict, then the models are going to start to eat their own tail in terms of training data. Because of the nature of LLM training, they can't come up with anything truly original; if you have tried to do something even slightly novel then you'll know what I mean. Web development might be the one to get taken out, if only front-end devs didn't perpetually reinvent the FE :P
Writing code and making commits is only a part of my work. I also have to know ODEs/DAEs, numerical solvers, symbolic transformations, thermodynamics, fluid dynamics, dynamic systems, controls theory etc. So basically math and physics.
LLMs are rather bad at those right now if you go further than trivialities, and I believe they are not particularly good at code either, so I am not concerned. But overall I think this is somewhat good advice, regardless of the current hype train: do not be just a "programmer", and know something else besides main Docker CLI commands and APIs of your favorite framework. They come and go, but knowledge and understanding stays for much longer.
LLMs are overrated trash feeding themselves garbage and producing garbage in return. AI is in a bubble, when reality comes back the scales will of course rebalance and LLMs will be a tool to improve human productivity but not replace them as some people might think. Then again I could be wrong, most people don't actually know how to create products for other humans and that's the real goal... not simply coding to code. Let me know when LLMs can produce products.
It will probably end up like self-driving cars: it can do a lot of the problem, but it's predicted to never quite get there...
Autocomplete and snippets have been a thing for a long time and it hasn’t come for my job yet, and I suspect, will never.
Let's see if the no code solutions will live up to the hype this business cycle. MIT already released Scratch.
>feed entire projects to AI and let the AI code, while they do code review and adjustments.
Any help? What AI can do this?
I switched to electronics engineering
Thinking about a military career. Pretty sure soldier will be the last job to disappear. Mostly not joking.
I'm not worried, because as a solid senior engineer my "training data" largely is not digitized or consumable by a model yet. I don't think we will have enough data in the near future to threaten my entire job, only to support me in the easier parts of it.
I’ll think about it, but right now, me having a career as a SWE is proof the future is here.
My fear is not LLMs taking over jobs but filling up codebases with machine generated code...
It’s simple: as soon as it can write code faster than you can read it, you can’t trust it anymore, because you can’t verify it. Trusting trust.
It keeps you busy reading code; it keeps all of us busy consuming its output. That is how it will conquer us: drowning us in personal, highly engaging content.
Stop using LLMs. If they can solve your problem, then your problem is not worth solving.
I am working on a side gig, to sell granola bars. Not to be a major player, just a niche one.
Learn how to use them to write software, and learn how to write software that uses them.
I don't think you need to stop on a dime. But keep an eye out. I am very optimistic in two ways, under the assumption that this stuff continues to improve.
Firstly, with computers now able to speak and write natural language, see and hear, and do some amount of reasoning, I think the demand for software is only going up. We are in an awkward phase of figuring out how to make software that leverages this, and a lot of shit software is being built, but these capabilities are transformative and only mean more software needs to be written. I suppose I don't share the fear that only one piece of software needs to be written (AGI) and instead see it as a great opening up, as well as a competitive advantage for new software against old software, meaning roughly everything is a candidate for being rewritten.
And then secondly, with computers able to write code, I think this mostly gives superpowers to people who know how to make software. This isn't a snap your fingers no more coding situation, it's programmers getting better and better at making software. Maybe someday that doesn't mean writing code anymore, but I think at each step the people poised to get the most value out of this are the people who know how to make software (or 2nd most value, behind the people who employ them.)
At their current scores, LLMs will not replace software developers. They might automate away some of the mundane tasks, but that's about it. LLMs also make good developers more productive and bad developers less likely to ship. They generate lots of code; some of it is good, some of it is bad, and a lot of it is really bad.
Now this assumes that LLMs plateau around their current scores. While open models are catching up to closed ones (like OpenAI), we have yet to see a real jump in consciousness compared to GPT-4. That, and operating LLMs is too damn expensive. If you have explored bolt.new for a little while, you'll find out quickly enough that a developer becomes cheaper as your code base gets larger.
The way I see it
1. LLMs do not plateau and are fully capable of replacing software developers: there is nothing I, or most of us, can do about this. Most people hate software developers and the process of software development itself; they'd be very happy to trade us in an instant. Pretty much all software developers are screwed in the next 3-4 years, but it's only a matter of time before it hits every other desk field (management, design, finance, marketing, etc...). According to history, we get a world war (especially if these LLMs are open in the wild), and one can only hope to be safe.
2. LLMs plateau around current levels. They are very useful as a power booster but they can also produce lots of garbage (both in text and in code). There will be an adjustment time but software developers will still be needed. Probably in the next 2-3 years when everyone realizes the dead end, they'll stop pouring money into compute and business will be back as usual.
tl;dr: current tech is not enough to replace us. If tech becomes good enough to replace us, there is nothing that can be done about it.
Have you seen The Expanse?
There's a scene where Gunnery Sergeant Bobbie Draper from Mars visits Earth. Mars is being terraformed; it has no real liquid water.
She wants to see the ocean.
She makes it eventually, after traveling through the squats of Earth.
In that world, much of Earth's population lives on "Basic Support". This is seen as a bad thing by Mars "Dusters".
The Ocean isn't just a dream of a better Mars. It's an awesome, globally shared still life, something Earthers can use their idle free time for: to use a Civilization term, a Cultural Victory.
So yeah, I suppose that's the plan for some who can't get a job coding. Universal Basic Income and being told that we're all in this together as we paint still lifes.
I have a feeling there are others who also were happy playing the Economic Victory game. Maybe more so.
I wonder where the other options are. It's going to be difficult enough for the next generation dealing with one group hating "Earther Squats" and another group hating "Dusters / regimented (CEO / military)".
That is work itself.
But I'll keep coding and hope those who become leaders actually want to.
If you fear => you're still in the beginning of your career or your work has very little to do with software engineering. (the engineering part in particular)
The only way to future-proof any creative and complex work - get awesome at it.
It worked before LLMs and it will work after LLMs, or any new shiny three-letter gimmick.
An LLM is only a threat if writing code is the hardest part of your job.
TL;DR: The biggest threat to your career is not LLMs but it's younger engineers that will adapt to the new tools.
---
My personal take is that LLMs and near future evolutions thereof won't quite replace the need for a qualified human engineer understanding the problem and overseeing the design and implementation of software.
However it may dramatically change the way we produce code.
Tools always beget more tools. We could not build most of the stuff that's around us if we hadn't first built other stuff, itself built with yet other stuff. Consider the simple example of a screw or a bolt or gears.
Tools for software development are quite refined and advanced. IDEs, code analysis and refactoring tools etc.
Even the simple text editor has been refined through generations of developers fine-tuning their fingers together with the editor technology itself.
Beyond the mechanics of code input we also have tons of little things we collectively refined and learned to use effectively: how to document and consult documentation for libraries, how to properly craft and organize complex code repositories so that code can be evolved and worked on by many people over time.
"AI" tools offer an opportunity to change the way we do those things.
On one hand there is a practical pressure to teach AI tools to just keep using our own existing UX. They will write code in existing programming languages and will be integrated in IDEs that are still designed for humans. The same for other parts of the workflow.
It's possible that over time these systems will evolve other UXs and that the new generation of developers will be more productive using that new UX and greybeards will still cling to their keyboards (I myself probably will).
The biggest threat to your career is not LLMs but it's younger engineers that will adapt to the new tools.
Spend time grokking new technologies. Draw your own conclusions.
Who's going to build, maintain, and admin the LLM software?
This is an interesting challenge, and I think it speaks to a broader trend we’re seeing in tech: the tension between innovation and operational practicality. While there’s a lot of enthusiasm for AI-driven solutions or blockchain-enabled platforms in this space, the real bottleneck often comes down to legacy infrastructure and scalability constraints.
Take, for example, the point about integrating AI models with legacy data systems. It’s one thing to build an LLM or a recommendation engine, but when you try to deploy that in an environment where the primary data source is a 20-year-old relational database with inconsistent schema updates, things get messy quickly. Teams end up spending a disproportionate amount of time wrangling data rather than delivering value.
Another issue that’s not often discussed is user onboarding and adoption friction. Developers can get carried away by the technical possibilities but fail to consider how the end-users will interact with the product. For instance, in highly regulated industries like healthcare or finance, even small changes in UI/UX or workflow can lead to significant pushback because users are trained in very specific processes that aren’t easy to change overnight.
One potential solution that I’ve seen work well is adopting iterative deployment strategies—not just for the software itself but for user workflows. Instead of deploying a revolutionary product all at once, start with micro-improvements in areas where pain points are clear and measurable. Over time, these improvements accumulate into significant value while minimizing disruption.
Finally, I think there’s a cultural aspect that shouldn’t be overlooked. Many organizations claim to value innovation, but the reality is that risk aversion dominates decision-making. This disconnect often leads to great ideas being sidelined. A possible approach to mitigate this is establishing “innovation sandboxes” within companies—essentially isolated environments where new ideas can be tested without impacting core operations.
Ultimately, you probably wasted your time by reading all of this nonsense.
My job is not to write monospace, 80 column-wide text but to find solutions to problems.
The solution often involves software but what that software does and how it does it can vary wildly and it is my job to know how to prioritize the right things over the wrong things and get to decent solution as quickly as possible.
Should we implement this using a dependency? It seems it is too big / too slow, is there an alternative or do we do it ourselves? If we do it ourselves how do we tackle this 1000 page PDF full of diagrams?
LLMs cannot do what I do and I assume it will take a very long time before they can. Even with top of the line ones I'm routinely disappointed in their output on more niche subjects where they just hallucinate whatever crap to fill in the gaps.
I feel bad for junior devs who just grab tickets on a treadmill, however. They will likely be replaced by senior people just throwing those tickets at LLMs. The issue is that seniors age, and without juniors you cannot have new seniors.
Let's hope this nonsense doesn't lead to our field falling apart.
I fundamentally disagree. Or at least, LLMs are a development tool that will fit right in with LSPs, databases, and any other piece of infrastructure. A tool that you learn to apply where possible, but those with wisdom will use better tools to solve more precise problems.
LLMs can't reason, and they never will be able to. Don't buy into the AI hype wagon it's just a bunch of grifters selling a future that will never happen.
What LLMs do is accelerate the application of your wisdom. If you already know how to make a full-stack application, it can turn a 4-hour job into a 1-hour job, yes. But if you didn't know how to make one in the first place, the LLM will get you 80% of the way there, and that last 20% will be impossible for you because you lack the skill to implement the actual work.
That's not going away, and anybody who thinks it is, is living in a fantasy land that stops us from focusing on real problems that could actually put LLMs to use in their proper context and setting.
> My prediction is that junior to mid level software engineering will disappear mostly, while senior engineers will transition to be more of a guiding hand to LLMs output, until eventually LLMs will become so good, that senior people won't be needed any more.
It is more like across the board, beyond engineers, including both junior and senior roles. We have heard firsthand from Sam Altman that in the future, agents will be more advanced and will work like a "senior colleague" (for cheap).
Devin is already going after everyone. Juniors were already replaced with GPT-4o, and mid-seniors are already worried that they are next. Executives and management see you as a "cost".
So frankly, I'm afraid that the belief that software engineers of any level are safe in the intelligence age is 100% cope. In 2025, I predict that there will be more layoffs because of this.
Then (mid-senior or higher) engineers here will go back to these comments a year later and ask themselves:
"How did we not see this coming?"
I have huge balls and I am not threatened by RNG.
You are asking the wrong people. Of course people are going to say it is not even close, and probably they are right given the current chaos of LLMs. It's like asking a mailman delivering mail whether he would be replaced by email. The answer was not 100%, but volume went down by 95%.
Make no mistake. All globalists — Musks, Altmans, Grahams, A16Zs, Trump supporting CEOs, Democrats — have one goal. MAKE MORE PROFIT.
The real question is — can you make them more money than an LLM can?
Therefore, the question is not whether there will be impact. There absolutely will be impact. Will it be a Doomsday scenario? No, unless you are completely out of touch — which can happen to a large population.
My strategy is to be the guy who wrote the "bible" of integrating LLM code with your normal day-to-day software engineering: Patterns of Application Development Using AI
Amazon: https://www.amazon.com/Patterns-Application-Development-Usin...
Leanpub (ebook only): https://leanpub.com/patterns-of-application-development-usin...
This is actual advice that can be generalized to become an authority in technology related to the phenomenon described by the OP.
LLM is just a hypervised search engine. You still need to know what to ask, what you can get away with and what you can't.
Brushing up my debugging skills
I have been a software engineer for 25 years and it's my life's dream job. I have been coding with LLMs extensively since Copilot's beta three years ago. There is no going back.
It's (still) easy to dismiss them as "they're great for one offs", but I have been building "professional" code leveraging LLMs as more than magic autocomplete for 2 years now and it absolutely works for large professional codebases. We are in the infancy of discovering patterns on how to apply statistical models to not just larger and more complex pieces of code, but to the entire process of engineering software. Have you ever taken a software architecture brainstorming session's transcript and asked Claude to convert it into github tickets? Then to output those github tickets into YAML? Then write a script to submit those tickets to the github API? Then to make it a webserver? Then to add a google drive integration? Then to add a slack notification system? And within an hour or two (once you are experienced), you can have a fully automated transcript to github system. Will there be AI slop here and there? Sure, I don't mind spending 10 minutes cleaning up a few tickets here and there, and spend the rest of the time talking some more about architecture.
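That ticket-submission step is a good example of how small each link in the chain is. A rough sketch (tickets.yaml, the repo name, and the token are placeholders) of pushing LLM-generated tickets to the GitHub REST API:

    import os
    import requests  # pip install requests pyyaml
    import yaml

    REPO = "my-org/my-repo"             # placeholder repository
    TOKEN = os.environ["GITHUB_TOKEN"]  # token with permission to create issues

    # tickets.yaml is assumed to be a list of {title, body} items produced by the LLM.
    with open("tickets.yaml") as f:
        tickets = yaml.safe_load(f)

    for ticket in tickets:
        resp = requests.post(
            f"https://api.github.com/repos/{REPO}/issues",
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Accept": "application/vnd.github+json",
            },
            json={"title": ticket["title"], "body": ticket.get("body", "")},
            timeout=30,
        )
        resp.raise_for_status()
        print("created", resp.json()["html_url"])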
The thing that I think many people are not seeing is that coding with LLMs at scale turns well-established software engineering techniques up to 11. To have an LLM do things at scale, you need to have concise and clear documentation / reference / tutorials, that need to be up to date always, so that you can fit the entire knowledge needed to execute a task into an LLM's context window. You need to have consistent APIs that make it hard to do the wrong thing, in fact they need to be so self-evident that the code almost writes itself, because... that's what you want. You want linting with clear error messages, because feeding those back into the LLM often helps it fix small mistakes. You want unit tests and tooling to the wazoo, structured logging, all with the goal of feeding it back into the LLM. That these practices are exactly what is needed for humans too is because... LLMs are trained on the human language we use to communicate with machines.
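That lint feedback loop is a few lines of glue. A minimal sketch (the linter choice and the call_llm stub are placeholders):

    import subprocess
    from pathlib import Path

    def call_llm(prompt: str) -> str:
        # Stand-in for whatever model/client is actually in use.
        raise NotImplementedError

    def fix_until_clean(path: Path, max_rounds: int = 3) -> bool:
        """Run a linter, feed its messages back to the LLM, and rewrite the file."""
        for _ in range(max_rounds):
            result = subprocess.run(
                ["ruff", "check", str(path)],  # ruff as an example linter
                capture_output=True,
                text=True,
            )
            if result.returncode == 0:
                return True  # lint-clean, stop iterating
            prompt = (
                "Fix only the lint errors below and return the full corrected file.\n\n"
                f"--- lint output ---\n{result.stdout}\n\n"
                f"--- {path.name} ---\n{path.read_text()}"
            )
            path.write_text(call_llm(prompt))
        return False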
When I approach coding with LLMs, I always think of the humans involved first, and targeting documents that would be useful for them is most certainly going to be the best way to create the documents relevant to the AI. And if a certain task is indeed too much for the LLM, then I still have great documents for humans.
Let's assume we have some dirty legacy microservice with some nasty JS, some half baked cloudformation, a dirty database schema and haphazard logging. I would:
- claude, please make an architecture overview + mermaid diagram out of this cloudformation, the output of these aws CLI commands and a screenshot of my AWS console
- spend some time cleaning up the slop
- claude, please take ARCHITECTURE.md and make a terraform module, refactor as needed (claude slaps at these kind of ~1kLOC tasks)
- claude, please make a tutorial on how to maintain and deploy terraform/
- claude, please create a CLI tool to display the current status of the app described in ARCHITECTURE.md
- claude, please create a sentry/datadog/honeycomb/whatever monitoring setup for the app in ARCHITECTURE.md
Or:
- claude, please make an architecture overview + mermaid diagram for this "DESCRIBE TABLES" dump out of my DB
- edit the slop
- claude, please suggest views that would make this cleaner. Regenerate + mild edits until I like what I see
- claude, please make a DBT project to maintain these views
- claude, please make a tutorial on how to install dbt and query those views
- claude, please make a dashboard to show interesting metrics for DATABASE.md
- etc...
You get the idea. This is not rocket science, this literally works today, and it's what I do for both my opensource and professional work. I wouldn't hesitate to put my output at 20-30x, whatever that means. I am able to bang out in a day software that probably would have taken me a couple of weeks before, at a level of quality (docs, tooling, etc...) I never was able to reach due to time pressure or just wanting to have a weekend, WHILE maintaining a great work life balance.
This is not an easy skill and requires practice, but it's not rocket science either. There is no question in my mind that software engineering as a profession is about to fundamentally change (I don't think the need for software will diminish, in fact this just enables so many more people who deserve great software to be able to get it). But my labor? The thing I used to do, write code? I only do it when I want to relax, generate a tutorial about something that interests me, turn off copilot, and do it for "as a hobby".
Here's the writeup of a workshop I gave earlier this year: https://github.com/go-go-golems/go-go-workshop
And a recent masto-rant turned blogpost: https://llms.scapegoat.dev/start-embracing-the-effectiveness...
AI can't debug yet.
I haven't seen anything other than total crap come out of LLMs.
For high-paid senior software engineers I believe it is delusional to think that the wolves are not coming for your job.
Maybe not today, and depending on your retirement date maybe you won’t be affected. But if your answer is “nothing” it is delusional. At a minimum you need to understand the failure modes of statistical models well enough to explain them to short-sighted upper management that sees you as a line in a spreadsheet. (And if your contention is you are seen as more than that, congrats on working for a unicorn.)
And if you’re making $250k today, don’t think they won’t jump at the chance to pay you half that and turn your role into a glorified (or not) prompt engineer. Your job is to find the failure modes and either mitigate them or flag them so the project doesn’t make insurmountable assumptions about “AI”.
And for the AI boosters:
I see the idea that AI will change nothing as just as delusional as the idea that “AI” will solve all of our problems. No it won’t. Many of our problems are people problems that even a perfect oracle couldn’t fix. If in 2015 you bought that self-driving cars would be here in 3 years, please see the above.
Want to know how I know you haven’t used an LLM?
Without doxxing myself or getting too deep into what I actually do there are some key points to think about.
For very simple to mid complex tasks, I do think LLMs will be very useful to programmers to build more efficiently. Maybe even the average joe is building scheduling apps and games that are ok to play.
For business apps, I just do not see how an LLM could do the job. In theory you could have it build out all the little, tiny parts and put them together into a Rube Goldberg machine (yes, that is what I do now, lol), but when it breaks I'm not sure the LLM would have a context window big enough to feed the entire system into so it can fix itself.
Think of this theoretical app.
This app takes data from some upstream processes. This data is not just from one process but several that are all very similar but never the same. Even within the same process, new data can be added to the input, sometimes without even consulting the app. Now this app needs to take this data and turn it into something useful, and when it doesn't, it needs to somehow log this info and get it back to the upstream app to fix (or maybe it's a feature request for this app).
The users want to be able to see the data in many ways, and just when you get it right, they want to see it another way. They need to see that new data that no one even knew was coming. To fill in the data that comes in, this app needs to talk to 20 different APIs. Those APIs are constantly changing. The data this app takes in needs to be sent to 20 different APIs through a shared model, but that model also takes into account unknown data. The app also sends the data from the upstream process to native apps running on the users' local machines. When any of this fails, the logs are spread out over on-prem and hosted locations, sometimes in a DB and sometimes in log files.
Now this app needs to run on-prem but also on Azure and also on GCP. It also uses a combination of Azure AD and some other auth mechanisms. During the build, the deploy process needs to get the DB, client secrets, and roles out of a vault somewhere. Someone needs to set up all these roles, secrets, and IDs. Someone needs to set up all the deploy scripts. Now with every deploy/build there are automated tools to scan for code quality and security vulnerabilities; if issues are found, someone needs to stop what they are doing and fix them. Someone needs to test all this every time a change is made and then go to some meetings to get approval. Someone needs to send out any downtime notices and make sure the times all work. Also, the 20 apps this one talks to are just like this app.
I could keep going but I think you get the point. Coding is only ¼ of everything involved in software. Sure, coding will get faster, but the jobs associated with it are not going away. Requirements are crap if you even get requirements.
The way to future proof your job is get in somewhere where the process is f-ed up but in a job security type f-ed up. If your job is writing HTML and CSS by hand and that is all you do, then you may need to start looking. If your job has any kind of process around it, I would not worry for another 10 – 20 years, then we will need to reassess, but again it is just not something to worry about in my opinion.
I also know some people are building simple apps and making a living, and the best thing to do is embrace the suck and milk it while you can. The good thing is, if LLMs really get good (like, actually good), you will be building super complex apps in no time. The world is going to have a lot more apps, that is for sure; whether anyone will use them is the question.
When it comes to my outlook on the job market, I don't concern myself with change, but time-tested demand.
For example: Have debit cards fundamentally changed the way that buying an apple works? No. There are people who want to eat an apple, and there are people who want to sell apples. The means to purchase that may be slightly more convoluted, or standardized, or whatever you might call it, but the core aspects remain exactly as they have for as long as people have been selling food.
So then, what demand changes with LLMs writing code? If we assume they can begin to write even quarter-way-decent code for complex, boutique issues, then the central problems will still remain: new products need to be implemented in ways that clients cannot implement themselves. Those products will still have various novel features that need to be built. They will still also have serious technical issues that need to be debugged and reworked to the client's specifications. I don't see LLMs being able to do that for most scenarios, and doubly so for niche ones. Virtually anyone who builds software for clients will at some point or another end up creating a product that falls into a very specific use case, for one specific client, because of budget concerns, restraints, bizarre demands, specific concerns, internal policy changes, or any of a plethora of other things.
Imagine for example that there is a client that works in financing and taxes, but knows virtually nothing about how to describe what they need some custom piece of software to do. Them attempting to write a few sentences into a tarted up search engine isn't going to help if they don't have the vocabulary and background knowledge to specify the circumstance and design objectives. This is literally what SWE's are. SWE's are translators! They implement general ideals described by clients into firm implementations using technical knowhow. If you cannot describe what you need to an LLM, you have the same problem as if there were no LLM to begin with. "I need tax software that loads really fast, and is easy to use." isn't going to help, no matter how good the LLM is.
Granted, perhaps those companies can get one employee to dual-hat and implement some sloppy, half-baked solution, but... that's also a thing you will run into right now. There are plenty of clients who know something about shell scripts, or took a coding class a few years back and want to move into SWE but are stuck in a different industry for the time being. They aren't eating my lunch now, so what would lead us to believe that this would change just because the method by which a computer program gets made becomes slightly more "touchy feely"? Some people are invested in software design. Some are not. The ones who are not just want the thing to do what it's supposed to do without a lot of time investment or effort. The last thing they want to do is try to work out which aspect of US tax law it's hallucinating.
As for the companies making the LLM's, I don't see them having the slightest interest in offering support for some niche piece of software the company itself didn't make - they don't have a contract, and they don't want to deal with the fallout. I see the LLM company wanting to focus on making their LLM better for broader topics, and figuring out how to maximize profit, while minimizing costs. Hiring a bunch of staff to support unknown, strange and niche programs made by their customers is the exact opposite of that strategy.
Honestly, if anything, I see more people being needed in the SWE industry, simply because there is going to be a glut of wonky, LLM-generated software out there. I imagine web developers are pretty accustomed to dealing with this type of thing when working with companies trying to transition out of WYSIWYG website implementations. I haven't had to deal with it too much myself, but my guess is that the standard advice is that it's easier and quicker to burn it to the ground and build anew. Assuming that is the case, LLM-generated software is basically... what? Geocities? Angelfire? Sucks for the client, but is great for SWEs as far as job security is concerned.
I think fully automated LLM code generation is an inherently flawed concept, unless the entire software ecosystem is automated and self-generating. I think if you carry out that line of thought to its extreme, you'd essentially need a single Skynet like AI that controls and manages all programming languages, packages, computer networks internally. And that's probably going to remain a sci-fi scenario.
Due to a training-lag, LLMs usually don't get the memo when a package gets updated. When these updates happen to patch security flaws and the like, people who uncritically push LLM-generated code are going to get burned. Software moves too fast for history-dependent AI.
The conceit of fully integrating all needed information in a single AI system is unrealistic. Serious SWE projects, that attempt to solve a novel problem or outperform existing solutions, require a sort of conjectural, visionary and experimental mindset that won't find existing answers in training data. So LLMs will get good at generating the billionth to-do app but nothing boundary pushing. We're going to need skilled people on the bleeding edge. Small comfort, because most people working in the industry are not geniuses, but there is also a reflexive property to the whole dynamic. LLMs open up a new space of application possibilities which are not represented in existing training data so I feel like you could position yourself comfortably by getting on board with startups that are actually applying these new technologies creatively. Ironically, LLMs are trained on last-gen code, so they obsolete yesterday's jobs. But you won't find any training data for solutions which have not been invented yet. So ironically AI will create a niche for new application development which is not served by AI.
Already if you try to use LLMs for help on some of the new LLM frameworks that came out recently like LangChain or Autogen etc, it is far less helpful than on something that has a long tailed distribution in the training data. (And these frameworks get updated constantly, which feeds into my last point about training-lag).
This entire deep learning paradigm of AI is not able to solve problems creatively. When it tries to, it "hallucinates".
Finally, I still think a knowledgable, articulate developer PLUS AI will consistently outperform an AI MINUS a knowledgable, articulate developer. More emphasis may shift onto "problem formulation", getting good at writing half natural language, half code pseudo-code prompts and working with the models conversationally.
There's a real problem too with model collapse: as AI-generated code becomes more common, you remove the tails of the distribution, resulting in more generic code without a human touch. There are only so many cycles of retraining on this regurgitated data before you encounter not just diminishing returns but damage to the model. So I think LLMs will be self-limiting.
So all in all I think LLMs will make it harder to be a mediocre programmer who can just coast by doing highly standardized janitorial work, but it will create more value if you are trying to do something interesting. What that means for jobs is a mixed picture. Fewer boring, but still paying jobs, but maybe more work to tackle new problems.
I think only programmers understand the nuances of their field however and people on the business side are going to just look at their expense spreadsheets and charts, and will probably oversimplify and overestimate. But that could self-correct and they might eventually concede they're going to have to hire developers.
In summary, the idea that LLMs will completely take over coding logically entails an AI system that completely contains the entire software ecosystem within itself, and writes and maintains every endpoint. This is science fiction. Training lag is a real limitation since software moves too fast to constantly retrain on the latest updates. AI itself creates a new class of interesting applications that are not represented in the training data, which means there's room for human devs at the bleeding edge.
If you got into programming just because it promised to be a steady, well-paying job, but have no real interest in it, AI might come for you. But if you are actually interested in the subject and understand that not all problems have been solved, there's still work to be done. And unless we get a whole new paradigm of AI that is not data-dependent, and can generate new knowledge whole cloth, I wouldn't be too worried. And if that does happen, too, the whole economy might change and we won't care about dinky little jobs.
use them
you don't become an SWE
I'm 15 years in, so a little behind you, but this is also some observations from the perspective of a student during the Post-Dot-Com bust.
A great parallel to today's LLMs was the outsourcing mania from 20 years ago. It was worse than AGI because actual living, breathing, thinking people would write your code. After the Dot-Bomb implosion, a bunch of companies turned to outsourcing as a way to skirt the costs of expensive US programmers. In their minds, a manager could produce a spec to be sent to an overseas team to implement. A "prompt", if you will. But as time wore on, the hype wore off with every broken and spaghettified app. Businesses were stung back into hiring programmers again, but not before destroying a whole pipeline of CS graduates for many years. It fueled a surge in demand for programmers against a small supply that didn't abate until the latter half of the 2010s.
Like most things in life, a little outsourcing never hurt anybody but a lot can kill your company.
> My prediction is that junior to mid level software engineering will disappear
Agree with some qualifications. I think LLMs will follow a similar disillusionment as outsourcing, but not before decimating the profession in both headcount and senior experience. The pipeline of Undergrad->Intern/Jr->Mid->Sr development experience will stop, creating even more demand for the existing (and now dwindling) senior talent. If you can rough it for the next few years the employee pool will be smaller and businesses will ask wHeRe dId aLl tHe pRoGrAmMeRs gO?! just like last time. We're going to lose entire classes of CS graduates for years before companies reverse course, and then it will take several more years to steward another generation of CS grads through the curriculum.
AI companies sucking up all the funding out of the room isn't helping with the pipeline either.
In the end it'll be nearly a decade before the industry recovers its ability to create new programmers.
> So, fellow software engineers, how do you future-proof your career in light of, the inevitable, LLM take over?
Funnily enough, probably start a business or that cool project you've had in the back of your mind. Now is the time to keep your skills sharp. LLMs are good enough to help with some of those rote tasks as long as you are diligent.
I think LLMs will fit into future tooling as souped-up Language Servers and be another tool in our belt. I also foresee a whole field of predictive BI tools that lean on LLMs hallucinating plausible futures that can be prompted with (for example) future newspaper headlines. There's also tons of technical/algorithmic domains ruled by Heuristics that could possibly be improved by the tech behind LLMs. Imagine a compiler that understands your code and applies more weight on some heuristics and/or optimizations. In short, keeping up with the tools will be useful long after the hype train derails.
People skills are perennially useful. It's often forgotten that programming spans two domains: the problem domain and the computation domain. Two people, one in each domain, can build mechanical sympathy that blurs the boundaries between the two. However, the current state of LLMs does not have this expertise, so the LLM user must grasp both the technical and problem domains to properly vet what the LLMs return from a prompt.
Also, keep yourself alive, even if that means leaving the profession for something else for the time being. The "software crisis" is over 50 years old at this point, and LLMs don't appear to be the Silver Bullet.
tl;dr: Businesses saw the early 2000s and said "More please, but with AI!" Stick it out in "The Suck" for the next couple of years until businesses start demanding people again. AI can be cool and useful if you keep your head firmly on your shoulders.
by using them
Learn to think above the code: learn how to model problems and reason about them using maths. There are plenty of tools in this space to help out: model checkers like TLA+ or Alloy, automated theorem provers such as Lean or Agda, and plain old notebooks and pencils.
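As a tiny illustration of what thinking above the code can look like in one of those tools, here is a Lean 4 fragment (a toy example, not a real system model) that states a property and lets the proof assistant check it:

    -- A toy model (hypothetical example): a counter that only ever steps forward.
    def step (n : Nat) : Nat := n + 1

    -- The property we care about, checked by the proof assistant rather than by tests.
    theorem step_increases (n : Nat) : n < step n := by
      unfold step
      exact Nat.lt_succ_self n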
Our jobs are not, and have never been, to be code generators.
Take a read of Naur's essay, Programming as Theory Building [0]. The gist is that it's the theory you build in your head about the problem, the potential solution, and what you know about the real world that is valuable. Source code depreciates over time when left to its own devices. It loses value when the system it was written for changes, dependencies get updated, and it bit-rots. It loses value as the people who wrote the original program, or worked with those who did, leave and the organization starts to forget what it was for, how it works, and what it's supposed to do.
You still have to figure out what to build, how to build it, how it serves your users and use cases, etc.
LLMs, at best, generate some code. Plain language is not specific enough to produce reliable, accurate results, so you'll forever be hunting for increasingly subtle errors. The training data will run out and models degrade on synthetic inputs. So... they're only going to get "so good", no matter how much context they can maintain.
And your ability, as a human, to find those errors will be quickly exhausted. There are way too few studies on the effects of informal code review on error rates in production software. Of those that have been conducted, any statistically significant effect on error rates seems to disappear once humans have read ~200 SLOC in an hour.
I suspect a good source of income will come from having to untangle the mess of code generated by teams that rely too much on these tools that introduce errors that only appear at scale or introduce subtle security flaws.
Finally, it's not "AI," that's replacing jobs. It's humans who belong to the owning class. They profit from the labour of the working class. They make more profit when they can get the same, or greater, amount of value while paying less for it. I think these tools, "inevitably," taking over and becoming a part of our jobs is a loaded argument with vested interests in that becoming true so that people who own and deploy these tools can profit from it.
As a senior developer I find that these tools are not as useful as people claim they are. They're capable of fabricating test data... usually of quality that requires inspection... and really, who has time for that? And they can generate boilerplate code for common tasks... but how often do I need boilerplate code? Rarely. I find the answers it gives in summaries to contain completely made-up BS. I'd rather just find out the answer myself.
I fear for junior developers who are looking to find a footing. There's no royal road. Getting your answers from an LLM for everything deprives you of the experience needed to form your own theories and ideas...
so focus on that, I'd say. Think above the code. Understand the human factors, the organizational and economic factors, and the technical ones. You fit in the middle of all of these moving parts.
[0] https://pages.cs.wisc.edu/~remzi/Naur.pdf
Update: forgot to add the link to the Naur essay
this question only comes from pundits, smh.
nah it's nonsense
As with every job done well, the most important thing is to truly understand the essence of your job: why it exists in the first place and which problem it truly solves when done well.
A good designer is not going to be replaced by Dall-e/Midjourney, because the essence of design is to understand the true meaning/purpose of something and be able to express it graphically, not to align pixels with the correct HEX colour combination one next to the other.
A good software engineer is not going to be replaced by Cursor/Co-pilot, because the essence of programming is to translate the business requirements of a real world problem that other humans are facing into an ergonomic tool that can be used to solve such problem at scale, not writing characters on an IDE.
Neither junior nor senior devs will go anywhere. What will for sure go away is all the "code-producing" human machines, such as Fiverr freelancers/consultants who completely misunderstand/neglect the true essence of their work. Because code (as in a set of meaningful 8-bit symbols) was never the goal, but always the means to an end.
Code is an abstraction, allegedly our best abstraction to date, but it's hard to believe that is the last iteration of it.
I'll argue that software itself will be a completely different concept in 100 years from now, so it's obvious that the way of producing it will change too.
There is a famous quote attributed to Hemingway that goes like this:
"Slowly at first, then all at once"
This is exactly what is happening, and what always happens.
After reading the comments, the themes I'm seeing are:
- AI will provide a big mess for wizards to clean up
- AI will replace juniors and then seniors within a short timeframe
- AI will soon plateau and the bubble will burst
- "Pshaw I'm not paid to code; I'm a problem solver"
- AI is useless in the face of true coding mastery
It is interesting to me that this forum of expert technical people are so divided on this (broad) subject.
I think there's been a lot of fear-mongering on this topic and "the inevitable LLM take over" is not as inevitable as it might seem, perhaps depending on your definition of "take over."
I have personally used LLMs in my job to write boilerplate code, write tests, make mass renaming changes that were previously tedious to do without a lot of grep/sed-fu, etc. For these types of tasks, LLMs are already miles ahead of what I was doing before (do it myself by hand, or have a junior engineer do it and get annoyed/burnt out).
However, I have yet to see an LLM that can understand an already established large codebase and reliably make well-designed additions to it, in the way that an experienced team of engineers would. I suppose this ability could develop over time with large increases in memory/compute, but even state-of-the-art models today are so far away from being able to act like an actual senior engineer that I'm not worried.
Don't get me wrong, LLMs are incredibly useful in my day-to-day work, but I think of them more as a leap forward in developer tooling, not as an eventual replacement for me.
Nothing, because I’m a senior and LLMs never provide code that passes my sniff test; they remain a waste of time.
I have a job at a place I love, and I get more people in my direct and extended network contacting me about work than ever before in my 20-year career.
And finally, I keep myself sharp by always making sure I challenge myself creatively. I’m not afraid to delve into areas that might look “solved” to others in order to understand them. For example, I have a CPU-only custom 2D pixel blitter engine I wrote to make 2D games in styles practically impossible with modern GPU-based texture rendering engines, and I recently did 3D in it from scratch as well.
All the while re-evaluating all my assumptions and that of others.
If there’s ever a day where there’s an AI that can do these things, then I’ll gladly retire. But I think that’s generations away at best.
Honestly, this fear that there will soon be no need for human programmers stems either from people who don’t themselves understand how LLMs work, or from people who do but have a business interest in convincing others that the technology is more than it is. I say that with confidence.