The world could run on older hardware if software optimization was a priority

turrini | 739 points

There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.

It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.

This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].

It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.

[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...

caseyy | a day ago

I like to point out that since ~1980, computing power has increased about 1000X.

If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.

If you went back in time to 1980 and offered the following choice:

I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.

People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...

Personally I think the 1000Xers kinda ruined things for the rest of us.

titzer | a day ago

So I've worked for Google (and Facebook), and it really drives home just how cheap hardware is and how rarely optimizing code is worth it.

More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.

Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper on net because of the spindle bottleneck.

Anyway, all of these things can be turned into software engineer hours, often called "milli-SWEs", meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people, or hire fewer people but get more hardware, within their current budgets.

I don't remember the exact number of CPU cores that amounted to a single SWE, but IIRC it was in the thousands. So if you spend 1 SWE-year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss.
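To make that break-even arithmetic concrete, here's a minimal sketch; the conversion rate is a made-up stand-in for the real (non-public) internal numbers, and 5000 cores per SWE-year is just the figure quoted above:

  # Hypothetical break-even check for spending engineering time on optimization.
  # CORES_PER_SWE_YEAR is an assumed conversion rate, not Google's real one.
  CORES_PER_SWE_YEAR = 5000

  def optimization_pays_off(swe_years_spent: float, cores_saved: float) -> bool:
      # True if the sustained CPU cores saved outweigh the engineering time spent.
      return cores_saved >= swe_years_spent * CORES_PER_SWE_YEAR

  # 1 SWE-year that shaves 10% off a 100k-core service saves 10k cores: worth it.
  print(optimization_pays_off(1.0, 0.10 * 100_000))  # True
  # The same effort on a 10k-core service saves only 1k cores: a net loss.
  print(optimization_pays_off(1.0, 0.10 * 10_000))   # False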

Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.

The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to or still do data entry for their jobs, you'll know that the mouse is pretty inefficient. The old text-based terminals from 30-40+ years ago had some incredibly efficient interfaces at a tiny fraction of the resource usage.

I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.

I don't know how to solve that problem or even if it will ever be "solved".

cletus | 20 hours ago

The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.

When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".

SilverSlash | a day ago

Often, this is presented as a tradeoff between the cost of development and the cost of hardware. However, there is a third leg of that stool: the cost of end-user experience.

When you have a system which is sluggish to use because you skimped on development, it is often the case that you cannot make it much faster no matter how much expensive hardware you throw at it. Either there is a single-threaded critical path, so you hit the limit of what one CPU can do (and adding more doesn't help), or you hit the laws of physics, such as network latency, which is ultimately bound by the speed of light.
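The single-threaded critical path point is basically Amdahl's law. A quick sketch with invented numbers shows how little extra hardware helps once the serial fraction dominates:

  # Amdahl's law: overall speedup from N cores when a fraction of the work is serial.
  def speedup(serial_fraction: float, cores: int) -> float:
      return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

  # If 25% of a request's latency sits on a single-threaded critical path,
  # no amount of extra cores gets you past a 4x improvement.
  for n in (2, 8, 64, 1024):
      print(n, round(speedup(0.25, n), 2))  # 1.6, 2.91, 3.82, 3.99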

And even when the situation could be improved by throwing more hardware at it, this is often done only to the extent to make the user experience "acceptable", but not "great".

In either case, the user experience suffers and each individual user is less productive. And since there are (usually) orders of magnitude more users than developers, the total damage done can be much greater than the increased cost of performance-focused development. But the cost of development is "concentrated" while the cost of user experience is "distributed", so it's more difficult to measure or incentivize for.

The cost of poor user experience is a real cost, is larger than most people seem to think and is non-linear. This was observed in the experiments done by IBM, Google, Amazon and others decades ago. For example, take a look at:

The Economic Value of Rapid Response Time https://jlelliotton.blogspot.com/p/the-economic-value-of-rap...

He and Richard P. Kelisky, Director of Computing Systems for IBM's Research Division, wrote about their observations in 1979, "...each second of system response degradation leads to a similar degradation added to the user's time for the following [command]. This phenomenon seems to be related to an individual's attention span. The traditional model of a person thinking after each system response appears to be inaccurate. Instead, people seem to have a sequence of actions in mind, contained in a short-term mental memory buffer. Increases in SRT [system response time] seem to disrupt the thought processes, and this may result in having to rethink the sequence of actions to be continued."

branko_d | 31 minutes ago

.NET has made great strides on this front in recent years. Newer versions optimize the CPU and RAM usage of lots of fundamentals, and introduced new constructs that reduce allocations and CPU for new code. One might argue they were only able to because things were so bad before, but it's worth looking into if you haven't in a while.

shireboy | an hour ago

I heartily agree. It would be nice if we could extend the lifetime of hardware 5, 10 years past its "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.

The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.

In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.

Much different compared to designing and developing software behind closed doors until it's "ready."

agentultra | a day ago

We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.

I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).

If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
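For readers who haven't seen one, here's a stripped-down sketch of price-time matching (single thread, in-memory, illustrative only; no real exchange works exactly like this):

  import heapq

  # Toy limit-order matching: one thread, one instrument, price-time priority.
  bids, asks = [], []  # heaps of (sort_key, arrival_seq, price, qty)
  seq = 0

  def submit(side, price, qty):
      global seq
      seq += 1
      if side == "buy":
          # Take the cheapest resting asks while they cross our limit price.
          while qty and asks and asks[0][2] <= price:
              _, s, ask_price, ask_qty = heapq.heappop(asks)
              traded = min(qty, ask_qty)
              print(f"trade {traded} @ {ask_price}")
              qty -= traded
              if ask_qty > traded:  # partial fill keeps its original time priority
                  heapq.heappush(asks, (ask_price, s, ask_price, ask_qty - traded))
          if qty:
              heapq.heappush(bids, (-price, seq, price, qty))
      else:
          # Mirror image: take the highest resting bids while they cross.
          while qty and bids and bids[0][2] >= price:
              _, s, bid_price, bid_qty = heapq.heappop(bids)
              traded = min(qty, bid_qty)
              print(f"trade {traded} @ {bid_price}")
              qty -= traded
              if bid_qty > traded:
                  heapq.heappush(bids, (-bid_price, s, bid_price, bid_qty - traded))
          if qty:
              heapq.heappush(asks, (price, seq, price, qty))

  submit("sell", 101.0, 5)
  submit("sell", 100.0, 5)
  submit("buy", 100.5, 8)  # prints "trade 5 @ 100.0", rests 3 on the bid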

bob1029 | a day ago

Here's one of the things I think about sometimes, as a specific example rather than a rebuttal to Carmack.

The Electron Application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into an MS Teams meeting without installing anything.

So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.

fdr | 20 hours ago

Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.

AndrewDucker | a day ago

"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".

If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.

Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is that it's the most likely path to being able to _keep_ adding features, so as not to become the less feature-rich offering.

People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.

alkonaut | a day ago

I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a machine with a Pentium II and a 3dfx graphics card. And yet it demands so much more - not a lot by today's standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.

ManlyBread | a day ago

Unfortunately, bloated software passes the costs to the customer and it's hard to evaluate the loss.

Except your browser taking 180% of available ram maybe.

By the way, the world could also have some bug free software, if anyone could afford to pay for it.

nottorp | a day ago

Sorry, don't want to go back to a time where I could only edit ASCII in a single font.

Do I like bloat? No. Do I like more software rather than less? Yes! Unity and Unreal are less efficient than custom engines, but there are 100x more titles because of that tradeoff between efficiency of the CPU and efficiency of creation.

The same is true for web-based apps (both online and off). Software ships 10x faster as a web page than as a native app for windows/mac/linux/android/ios. For most, that's all I need. Even for native-like apps, I use photopea.com over photoshop/gimp/krita/affinity etc because it's available everywhere no matter which machine I use or whose machine it is. Is it less efficient running in JS in the browser? Probably. Do I care? No.

VSCode, now the most popular editor in the world (IIRC), is web-tech. This has so many benefits. For one, it's been integrated into 100s of websites, so this editor I use is available in more places. It uses tech more people know, so there are more extensions that do more things. Also, arguably because of JS's speed issues, it encouraged the creation of the Language Server Protocol. Before this, every editor rolled their own language support. The LSP is arguably way more bloat than doing it directly in the editor. I don't care. It's a great idea, way more flexible. Any language can implement one LSP server and then all editors get support for that language.

socalgal2 | 12 hours ago

I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops from the previous five years (roughly since Nehalem) had plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.

I don't work there any more, but I'm convinced that's still true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can it even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux), then there's a solid chance that the bloat, and especially the move to online-only spreadsheets, would tank their productivity.

Further, the internet at that place was terrible. The only offerings were ~16Mbit asymmetric DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable at 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely isn't. But worse: if the internet goes down, the business ceases to function.

This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.

Computers today have zero excuse for terrible performance, except to offload costs onto customers - private persons and businesses alike.

[0]: https://news.ycombinator.com/item?id=43971960

inetknght | a day ago

The world DOES run on older hardware.

How new do you think the CPU in your bank ATM or car's ECU is?

freddie_mercury | a day ago

HN: Yeah! We should be go back to writing optimized code that fully uses the hardware capabilities!

Also HN: Check this new AI tool that consumes 1000x more energy to do the exact same thing we could already do, but worse and with no reproducibility

joaohaas | 21 hours ago

Wirth's Law:

>software is getting slower more rapidly than hardware is becoming faster.

https://en.wikipedia.org/wiki/Wirth%27s_law

WillAdams | 17 hours ago

Is there or could we make an iPhone-like that runs 100x slower than conventional phones but uses much less energy, so it powers itself on solar? It would be good for the environment and useful in survival situations.

Or could we make a phone that runs 100x slower but is much cheaper? If it also runs on solar it would be useful in third-world countries.

Processors are more than fast enough for most tasks nowadays; more speed is still useful, but I think improving price and power consumption is more important. Also cheaper E-ink displays, which are much better for your eyes, more visible outside, and use less power than LEDs.

armchairhacker | a day ago

Meanwhile on every programmer's 101 forum: "Space is cheap! Premature optimization is the root of all evil! Dev time > runtime!"

xg15 | a day ago

I generally believe that markets are somewhat efficient.

But somehow, we've ended up with the current state of Windows as the OS that most people use to do their job.

Something went terribly wrong. Maybe the market is just too dumb, maybe it's all the market distortions that have to do with IP, maybe it's the monopolistic practices of Microsoft. I don't know, but in my head, no sane civilization would think that Windows 10/11 is a good OS that everyone should use to optimize our economy.

I'm not talking only about performance, but about the general crappiness of the experience of using it.

diego_sandoval | 4 hours ago

Well, yes, I mean, the world could run on less of all sorts of things, if efficient use of those things were a priority. It's not, though.

rsynnott | 3 hours ago

The 8-bit/16-bit demo scene can do it, but that's true dedication.

emsign | 2 hours ago

This always saddens me. We could have things instant, simple, and compute & storage would be 100x more abundant in practical terms than it is today.

It's not even a trade off a lot of the time, simpler architectures perform better but are also vastly easier and cheaper to maintain.

We just lack expertise I think, and pass on cargo cult "best practices" much of the time.

QuadrupleA | a day ago

I'm not much into retro computing. But it amazes me what people are pulling out of dated hardware.

Doom on the Amiga, for example (many consider it a main factor in the Amiga's demise). Thirty years and a lot of optimization later, it finally arrived.

yonisto | a day ago

My phone isn't getting slower, but rather the OS running on it becomes less efficient with every update. Shameful.

protoster | a day ago

I wonder if anyone has calculated the additional planet heating generated by crappy software, e.g. JS apps or useless animations.

seydor | a day ago

Consider UX:

Click the link and contemplate while X loads. First, the black background. Next it spends a while and you're ready to celebrate! Nope, it was loading the loading spinner. Then the pieces of the page start to appear. A few more seconds pass while the page is redrawn with the right fonts; only then can you actually scroll the page.

Having had some time to question your sanity for clicking, you're grateful to finally see what you came to see. So you dwell 10x as long, staring at a loaded page and contemplating the tweet. You dwell longer to scroll and look at the replies.

How long were you willing to wait for data you REALLY care about? 10-30 seconds; if it's important enough you'll wait even longer.

Software is as fast as it needs to be to be useful to humans. Computer speed doesn't matter.

If the computer goes too fast it may even be suspected of trickery.

cadamsdotcom | 12 hours ago

Z+6 months: Start porting everything to Collapse OS

https://collapseos.org/

yencabulator | 21 hours ago

The idea of a hand-me-down computer made of brass and mahogany still sounds ridiculous because it is, but we're nearly there in terms of Moore's law. We have true 2nm within reach, and then the 1nm process is basically the end of the journey. I expect 'audiophile grade' PCs in the 2030s, and then PCs become works of art, furniture, investments, etc. because they have nowhere to go.

https://en.wikipedia.org/wiki/2_nm_process

https://en.wikipedia.org/wiki/International_Roadmap_for_Devi...

1970-01-01 | a day ago

If only the tooling had kept up. We went from RAD tools that built you fully native GUIs to abandoning ship and letting Electron take over. Anyone else have 40 web browsers installed, each of them some Chromium hack?

giancarlostoro | 12 hours ago

Obviously, the world ran before computers. The more interesting part of this is what would we lose if we knew there were no new computers, and while I'd like to believe the world would put its resources towards critical infrastructure and global logistics, we'd probably see the financial sector trying to buy out whatever they could, followed by any data center / cloud computing company trying to lock all of the best compute power in their own buildings.

threetonesun | a day ago

  > npx create-expo-app@latest --template blank HelloWorldExpoReact

  > du -h HelloWorldExpoReact/
258M! A quarter of a gigabyte for a HelloWorld example. Sheesh.

floathub | 14 hours ago

Carmack is right to some extent, although I think it’s also worth mentioning that people replace their computers for reasons other than performance, especially smartphones. Improvements in other components, damage, marketing, and status are other reasons.

It’s not that uncommon for people to replace their phone after two years, and as someone who’s typically bought phones that are good but not top-of-the-line, I’m skeptical all of those people’s phones are getting bogged down by slow software.

actuallyalys | 13 hours ago

This is Carmack's favorite observation over the last decade+. It stems from what made him successful at id. The world's changed since then. Home computers are rarely compute-bound, the code we write is orders of magnitude more complex, and compilers have gotten better. Any wins would come at the cost of a massive investment in engineering time or degraded user experience.

dehrmann | 12 hours ago

The priority should be safety, not speed. I prefer e.g. a slower browser or OS that isn't riddled with exploits and attack vectors.

Of course that doesn't mean everything should be done in JS and Electron as there's a lot of drawbacks to that. There exists a reasonable middle ground where you get e.g. memory safety but don't operate on layers upon layers of heavy abstraction and overhead.

margorczynski | a day ago

Optimise is never a neutral word.

You always optimise FOR something at the expense of something else.

And that can, and frequently should, be lean resource consumption, but it can come at a price.

Which might be one or more of: Accessibility. Full internationalisation. Integration paradigms (thinking about how modern web apps bring UI and data elements in from third parties). Readability/maintainability. Displays that can actually represent text correctly at any size without relying on font hinting hacks. All sorts of subtle points around UX. Economic/business model stuff (megabytes of cookie BS on every web site, looking at you right now.) Etc.

Earw0rm | 17 hours ago

At the last Def Con (32), the badge could run full Doom on a puny Pico 2 microcontroller [1].

[1] Running Doom on the Raspberry Pi Pico 2: A Def Con 32 Badge Hack:

https://shop.sb-components.co.uk/blogs/posts/running-doom-

teleforce | 11 hours ago

The goal isn't optimized code, it is utility/value prop. The question then is how do we get the best utility/value given the resources we have. This question often leads to people believing optimization is the right path since it would use fewer resources and therefore the value prop would be higher. I believe they are both right and wrong. For me, almost universally, good optimization ends up simplifying things as it speeds things up. This 'secondary' benefit, to me, is actually the primary benefit. So when considering optimizations I'd argue that performance gains are a potential proxy for simplicity gains in many cases so putting a little more effort into that is almost always worth it. Just make sure you actually are simplifying though.

jmward01 | 21 hours ago

I work on a laptop from 2014. An i7 4xxx with 32 GB RAM and 3 TB SSD. It's OK for Rails and for Django, Vue, Slack, Firefox and Chrome. Browsers and interpreters got faster. Luckily there was pressure to optimize especially in browsers.

pmontra | 20 hours ago

Perfect parallel to the madness that is AI. With even modest sustainability incentives, the industry wouldn't have pulverized a trillion dollars training models nobody uses, just to dominate the weekly attention fight and fundraising game.

Evidence: DeepSeek

gmerc | 20 hours ago

He mentions the rate of innovation would slow down, which I agree with. But I think that even a 5% slower innovation rate would delay the optimizations we could make, or even our ability to figure out what needs optimizing, across centuries of computer usage; in the end we'd be less efficient because we'd be slower at finding efficiencies. Low adoption of new efficiencies is worse than high adoption of old efficiencies, is I guess how to phrase it.

If Cadence, for example, releases every feature 5 years later because they spend more time optimizing it (it's software, after all), how much will that delay semiconductor innovation?

vasco | a day ago

Feels like half of this thread didn't read or ignored his last line: "Innovative new products would get much rarer without super cheap and scalable compute, of course."

benced | 11 hours ago

Reminded me of this interesting thought experiment

https://x.com/lauriewired/status/1922015999118680495

southernplaces7 | 13 hours ago

Minimalism is excellent. As others have mentioned, using languages that are more memory-safe (assuming the language is written in such a way) may be worth the additional complexity cost.

But surely, with burgeoning AI use, efficiency savings are being gobbled up by its brute-force nature.

Maybe model training and the likes of Hugging Face can help different groups avoid reinventing the same AI wheel, burning more resources than a cursory search would have.

ricardo81 | a day ago

Yes, but it is not a priority. GTM is the priority. Make money machine go brrr.

oatmeal_croc | 8 hours ago

Tell me about it. Web development has only become fun again at my place since upgrading from an Intel Mac to an M4 Mac.

Just throw in Slack, a VSCode editor in Electron, a Next.js stack, 1-2 Docker containers, and one browser, and you need top-notch hardware to run it fluidly (Apple Silicon is amazing though). I'm doing no fancy stuff.

Chat, an editor in a browser, and Docker don't seem like the most efficient things when put all together.

therealmarv | a day ago

When it's free, it doesn't need to be performant unless the free competition is performant.

vishalontheline | 18 hours ago

Well obviously. And there would be no wars if everybody made peace a priority.

It's obvious for both cases where the real priorities of humanity lie.

WJW | a day ago

I think optimizations only occur when the users need them. That is why there are so many tricks for game engine optimization and compiling speed optimization. And that is why MSFT could optimize the hell out of VSCode.

People simply do not care about the rest. So there will be as little money spent on optimization as possible.

markus_zhang | 21 hours ago

Sadly software optimization doesn't offer enough cost savings for most companies to address consumer frustration. However, for large AI workloads, even small CPU improvements yield significant financial benefits, making optimization highly worthwhile.

abetaha | 20 hours ago

I already run on older hardware, and most people could if they chose to - I haven't bought a new computer since 2005. Perhaps the OS could adopt a "serverless" model where computationally heavy tasks are offloaded as long as there is sufficient bandwidth.

knowitnone | 21 hours ago

This is the story of life in a nutshell. It's extremely far from optimized, and that is the natural way of all that it spawns. It almost seems inelegant to attempt to "correct" it.

jrowen | 17 hours ago

100% agree with Carmack. There was a craft in writing software that I feel has been lost with access to inexpensive memory and compute. Programmers can be inefficient because they have all that extra headroom to do so which just contributes to the cycle of needing better hardware.

don_searchcraft | a day ago

I'm already moving in this direction in my personal life. It's partly nostalgia but it's partly practical. It's just that work requires working with people who only use what HR and IT foist on them, so I need a separate machine for that.

noobermin | a day ago

We are squandering bandwidth similarly and that hasn’t increased as much as processing power.

jasonthorsness | a day ago

1. Consumers are attracted to pretty UIs and lots of features, which pretty much drives inefficiency.

2. The consumers that have the money to buy software/pay for subscriptions have the newer hardware.

turtlebits | 20 hours ago

I've installed OSX Sequoia on 2015 iMacs with 8 gigs of ram and it runs great. More than great actually.

Linux runs well on 10-15 year old laptops, and if you beef up the RAM and SSD, actually really well.

So for everyday stuff we can and do run on older hardware.

VagabundoP | a day ago

The world runs on the maximization of ( - entropy / $) and that's definitely not the same thing as minimizing compute power or bug count.

generalizations | 19 hours ago

It could also run on much less current hardware if efficiency was a priority. Then comes the AI bandwagon and everyone is buying loads of new equipment to keep up with the Joneses.

dardeaup | a day ago

Where lack of performance costs money, optimization is quite invested in. See PyTorch (Inductor CUDA graphs), Triton, FlashAttention, Jax, etc.

Scene_Cast2 | a day ago

How much of the extra power has gone to graphics?

Most of it?

narag | 19 hours ago

The world could run on older hardware if rapid development did not also make money.

Rapid development is creating a race towards faster hardware.

nashashmi | a day ago

True for large corporations. But for individuals, the ability to put what was previously an entire stack into a script that doesn't call out to the internet will be a big win.

How many people are going to write and maintain shell scripts with 10+ curls? If we're being honest, this is the main reason people use Python.
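For what it's worth, the kind of script being described might look something like the following; the endpoints are hypothetical, and the point is just that once you're past a handful of curls, parsing and error handling push people toward Python:

  import json, urllib.request

  # Hypothetical endpoints standing in for the "10+ curls" a shell script would do.
  ENDPOINTS = {
      "users":  "https://api.example.com/users",
      "orders": "https://api.example.com/orders",
      "stats":  "https://api.example.com/stats",
  }

  def fetch_all():
      results = {}
      for name, url in ENDPOINTS.items():
          try:
              with urllib.request.urlopen(url, timeout=10) as resp:
                  results[name] = json.load(resp)
          except OSError as err:  # covers timeouts, DNS failures, HTTP errors
              results[name] = {"error": str(err)}
      return results

  if __name__ == "__main__":
      print(json.dumps(fetch_all(), indent=2))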

casey2 | 4 hours ago

Really no notes on this. Carmack hit both sides of the coin:

- the way we do industry-scale computing right now tends to leave a lot of opportunity on the table because we decouple, interpret, and de-integrate where things would be faster and take less space if we coupled, compiled, and made monoliths

- we do things that way because it's easier to innovate, tweak, test, and pivot on decoupled systems that isolate the impact of change and give us ample signal about their internal state to debug and understand them

shadowgovt | a day ago

don't major cloud companies do this and then sell the gains as a commodity?

JohnMakin | 14 hours ago

As long as sufficient numbers of wealthy people are able to wield their money as a force to shape society, this will always be the outcome.

Unfortunately, in our current society, a rich group of people with a very restricted intellect, abnormal psychology, perverse views on human interaction, and a paranoid delusion that kept normal human love and compassion beyond their grasp, were able to shape society to their dreadful imagination.

Hopefully humanity can make it through these times, despite these hateful aberrations doing their best to wield their economic power to destroy humans as a concept.

cannabis_sam | 16 hours ago

Carmack is a very smart guy and I agree with the sentiment behind his post, but he's a software guy. Unfortunately for all of us hardware has bugs, sometimes bugs so bad that you need to drop 30-40% of your performance to mitigate them - see Spectre, Meltdown and friends.

I don't want the crap Intel has been producing for the last 20 years; I want the ARM, RISC-V and AMD CPUs from 5 years in the future. I don't want a GPU by Nvidia that comes with buggy drivers and opaque firmware updates; I want the open source GPU that someone is bound to make in the next decade. I'm happy 10Gb switches are becoming a thing in the home; I don't want the 100Mb hubs from the early 2000s.

redleader55 | 20 hours ago

This is a double-edged sword problem, but I think what people are glossing over in the compute power topic is power efficiency. One thing I struggle with when homelabbing old gaming equipment is weighing it against the power efficiency of new hardware. Hardly a valid comparison, but I can choose to recycle my Ryzen 1700x with a 2080ti as a media server that will probably consume a few hundred watts, or I can get an M1 that sips power. The double-edged sword part is that the Ryzen system becomes considerably more power efficient running Proxmox or Ubuntu Server vs a Windows client. We as a society choose the niche we want to leverage, and it swings like economics: strapped for cash, build more efficient code; no limits, buy the horsepower to meet the need.

datax2 | a day ago

I'm going to be pretty blunt. Carmack gets worshiped when he shouldn't be. He has several bad takes in terms of software. Further, he's frankly behind the times when it comes to the current state of the software ecosystem.

I get it, he's legendary for the work he did at id software. But this is the guy who only like 5 years ago was convinced that static analysis was actually a good thing for code.

He seems to have a view of the state of software that's frozen in time. Interpreted stuff is slow, networks are slow, databases are slow. Everyone is working with Pentium 1s and 2MB of RAM.

None of these are what he thinks they are. CPUs are wicked fast. Interpreted languages are now within a single digit multiple of natively compiled languages. Ram is cheap and plentiful. Databases and networks are insanely fast.

Good on him for sharing his takes, but really, he shouldn't be considered a "thought leader". I've noticed his takes have been outdated for over a decade.

I'm sure he's a nice guy, but I believe he's fallen into a trap that many older devs do. He's overestimating what the costs of things are because his mental model of computing is dated.

cogman10 | 21 hours ago

My professor back in the day told me that "software is eating hardware". No matter how hardware gets advanced, software will utilize that advancement.

hnlurker22 | a day ago

Developers over 50ish (like me) grew up at a time when CPU performance and memory constraints affected every application. So you had to always be smart about doing things efficiently with both CPU and memory.

Younger developers have machines that are so fast they can be lazy with all algorithms and do everything 'brute force'. Like searching thru an array every time when a hashmap would've been 10x faster. Or using all kinds of "list.find().filter().any().every()" chaining nonsense, when it's often smarter to do ONE loop, and inside that loop do a bunch of different things.
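A tiny illustration of both habits (membership tests against a list instead of a hashmap, and chained passes instead of one loop), in Python for brevity:

  # Membership test: scanning a list is O(n) per lookup; a set/dict is O(1) on average.
  user_ids = list(range(1_000_000))
  user_id_set = set(user_ids)

  def slow_has(uid): return uid in user_ids      # walks the list on every call
  def fast_has(uid): return uid in user_id_set   # hash lookup

  # Chained passes vs. one loop: the chain walks the data several times and
  # builds intermediate lists; the single loop does everything in one pass.
  orders = [{"total": t, "paid": t % 3 == 0} for t in range(100_000)]

  def chained():
      paid = [o for o in orders if o["paid"]]
      totals = [o["total"] for o in paid]
      return sum(totals), len(totals)

  def one_pass():
      total = count = 0
      for o in orders:
          if o["paid"]:
              total += o["total"]
              count += 1
      return total, count  # same result, one traversal, no temporaries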

So younger devs only optimize once they NOTICE the code running slow. That means they're ALWAYS right on the edge of overloading the CPU, just thru bad coding. In other words, their inefficiencies will always expand to fit available memory, and available clock cycles.

quantadev | 9 hours ago

Imagine software engineering was like real engineering, where the engineers had licensing and faced fines or even prison for negligence. How much of the modern world's software would be tolerated?

Very, very little.

If engineers handled the Citicorp center the same way software engineers did, the fix would have been to update the documentation in Confluence to not expose the building to winds and then later on shrug when it collapsed.

Devasta | 21 hours ago

Yeah, having browsers the size and complexities of OSs is just one of many symptoms. I intimate at this concept in a grumbling, helpless manner somewhat chronically.

There's a lot today that wasn't possible yesterday, but it also sucks in ways that weren't possible then.

I foresee hostility for saying the following, but it really seems most people are unwilling to admit that most software (and even hardware) isn't necessarily made for the user or its express purpose anymore. To be perhaps a bit silly, I get the impression of many services as bait for telemetry and background fun.

While not an overly earnest example, looking at Android's Settings/System/Developer Options is pretty quick evidence that the user is involved but clearly not the main component in any respect. Even an objective look at Linux finds manifold layers of hacks and compensation for a world of hostile hardware and soft conflict. It often works exceedingly well, though as impractical as it may be to fantasize, imagine how badass it would be if everything was clean, open and honest. There's immense power, with lots of infirmities.

I've said that today is the golden age of the LLM in all its puerility. It'll get way better, yeah, but it'll get way worse too, in the ways that matter.[1]

Edit: 1. Assuming open source doesn't persevere

eth0up | a day ago

The world will seek out software optimization only after hardware reaches its physical limits.

We're still in Startup Land, where it's more important to be first than it is to be good. From that point onward, you have to make a HUGE leap and your first-to-market competitor needs to make some horrendous screwups in order to overtake them.

The other problem is that some people still believe that the masses will pay more for quality. Sometimes, good enough is good enough. Tidal didn't replace iTunes or Spotify, and Pono didn't exactly crack the market for iPods.

MisterBastahrd | 15 hours ago

Well, it is a point. But also remember the horrors of the monoliths he made. Like in Quake (1? 2? 3?) where you have hacks like "if the level name contains XYZ then do this magic". I think the conclusion might be wrong.

AtNightWeCode | 16 hours ago

I mean, if you put Win 95 on a period-appropriate machine, you can do office work easily. All that is really driving computing power is the web and gaming. If we weren't doing either of those things as much, I bet we could all quite happily use machines from the 2000s era.

voidUpdate | a day ago

Probably, but we'd be in a pretty terrible security place without modern hardware based cryptographic operations.

Mindwipe | a day ago

Let's keep the CPU efficiency golf to Zachtronics games, please.

I/O is almost always the main bottleneck. I swear to god 99% of developers out there only know how to measure the CPU cycles of their code, so that's the only thing they optimize for. Call me after you've seen your jobs on your k8s clusters get slow because they're all using local disk inefficiently and wasting cycles waiting in queue for reads/writes. Or your DB replication slows down to the point that you have to choose between breaking the mirror and not making money.
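One cheap way to see the difference being described: compare wall-clock time to CPU time for the same chunk of work. A minimal sketch (the sleep is just a stand-in for a blocking read):

  import time

  def profile(label, fn):
      # A big gap between wall and cpu means the code is mostly waiting
      # (disk, network, locks), not computing, so shaving cycles won't help.
      w0, c0 = time.perf_counter(), time.process_time()
      fn()
      wall, cpu = time.perf_counter() - w0, time.process_time() - c0
      print(f"{label}: wall={wall:.2f}s cpu={cpu:.2f}s")

  profile("compute", lambda: sum(i * i for i in range(10_000_000)))
  profile("waiting", lambda: time.sleep(1.0))  # stand-in for a blocking read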

And older hardware consumes more power. That's the main driving factor behind server hardware upgrades, because you can fit more compute into your datacenter.

I agree with Carmack's assessment here, but most people reading are taking the wrong message away with them.

busterarm | a day ago

based

x1unix | a day ago

I'd much prefer Carmack to think about optimizing for energy consumption.

wiz21c | a day ago