I have the same experience here with my MacBook Air M1 from 2020 with 16GB RAM and 512GB SSD. After three years, I upgraded to a MacBook Pro with M3 Pro, 36GB of RAM, and 2TB of storage. I use this as my main machine with 2 displays attached via a TB4 dock.
I work in IT, and every new machine for our company comes across my desk so I can check it, and I've observed the exact same points as the OP.
The new machines are either fast and loud and hot and with poor battery life, or they are slow and "warm" and have moderate battery life.
But I have yet to see a business laptop, whether ARM, AMD, or Intel, that can even compete with the M1 Air, let alone the M3 Pro. And that's before getting into all the issues with crappy Lenovo docks, etc.
It doesn't matter if I install Linux or Windows. The funny part is that some of my colleagues have ordered a MacBook Air or Pro and run their Windows or Linux in a virtual machine via Parallels.
Think about it: Windows 11 or Linux in a VM is faster, snappier, quieter, and has even longer battery life than the same systems running natively on a business machine from Lenovo, HP, or Dell.
Well, your mileage may vary, but IMHO there is no alternative to a Mac nowadays, even if you want to use Linux or Windows.
> Why haven’t AMD/Intel been able to catch up? Is x86 just not able to keep up with the ARM architecture? When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!
AMD kind of has: the "AI Max+ 395" is (within a 5% margin or so) pretty close to the M4 Pro on both performance and energy use. (It's in the Framework Desktop, for example, but not in their laptop lineup yet.)
AMD/Intel haven't surpassed Apple yet (there's no answer for the M4 Max / M3 Ultra without exploding energy use on the AMD/Intel side), but AMD does at least have a comparable and competitive offering.
First, Apple did an excellent job optimizing their software stack for their hardware. This is something that few companies have the ability to do as they target a wide array of hardware. This is even more impressive given the scale of Apple's hardware. The same kernel runs on a Watch and a Mac Studio.
Second, the x86 platform has a lot of legacy, and each operation on x86 is translated from an x86 instruction into RISC-like micro-ops. This is an inherent penalty that Apple doesn't have to pay, and it is also why Rosetta 2 can achieve "near native" x86 performance; both platforms translate the x86 instructions.
Third, there are some architectural differences even if the instruction decoding steps are removed from the discussion. Apple Silicon has a huge out-of-order buffer, and its decode is wider (8-wide vs. the 4-wide decode typical of x86 cores in the M1's era). From there, the actual logic is different, the design is different, and the packaging is different. AMD's Ryzen AI Max 300 series does get close to Apple by using many of the same techniques, like unified memory and tossing everything onto the package; where it does lose, it loses due to all of the other differences.
In the end, if people want crazy efficiency Apple is a great answer and delivers solid performance. If people want the absolute highest performance, then something like Ryzen Threadripper, EPYC, or even the higher-end consumer AMD chips are great choices.
There are a number of reasons, all of which in concert create the appearance of a performance gap between the two:
* Apple has had decades to optimize its software and hardware stacks for the demands of the majority of its users, whereas Intel and AMD have to optimize for a much broader scope of use cases.
* Apple was willing to throw out legacy support on a regular basis. Intel and AMD, by comparison, are still expected to run code written for DOS or specific extensions in major Enterprises, which adds to complexity and cost
* The “standard” of x86 (and demand for newly-bolted-on extensions) means effort into optimizations for efficiency or performance meet diminishing returns fairly quickly. The maturity of the platform also means the “easy” gains are long gone/already done, and so it’s a matter of edge cases and smaller tweaks rather than comprehensive redesigns.
* Software in x86 world is not optimized, broadly, because it doesn’t have to be. The demoscene shows what can be achieved in tight performance envelopes, but software companies have never had reason to optimize code or performance when next year has always promised more cores or more GHz.
It boils down to comparing two different products and asking why they can’t be the same. Apple’s hardware is purpose-built for its userbase, operating systems, and software; x86 is not, and never has been. Those of us who remember the 80s and 90s of SPARC/POWER/Itanium/etc recall that specialty designs often performed better than generalist ones in their specialties, but lacked compatibility as a result.
The Apple ARM vs Intel/AMD x86 is the same thing.
> might be my Linux setup being inefficient
Given that videos spin up those coolers, there is actually a problem with your GPU setup on Linux, and I expect there'd be an improvement if you managed to fix it.
Another thing is that Chrome on Linux tends to consume an exorbitant amount of power with all its background processes, inefficient rendering, and disk IO, so updating it to one of the latest versions and enabling "memory saving" might help a lot.
Switching to another scheduler, reducing the interrupt rate, etc. would probably help too.
Linux on my current laptop cut battery life to about 1/12 of what I got on Windows, and a bunch of optimizations like these improved things to roughly 1/6, i.e. it's still very bad.
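For anyone wanting to try the same kind of tuning, a rough sketch of the usual starting points (powertop and TLP are my suggestions here, not something specific to any one laptop, and the gains vary a lot per machine):

    # see what is actually waking the CPU and what powertop thinks is misconfigured
    sudo powertop
    sudo powertop --auto-tune     # apply its "good" suggestions in one shot (not persistent)

    # or let TLP manage the same knobs persistently
    sudo apt install tlp          # package name on Debian/Ubuntu; use your distro's equivalent
    sudo systemctl enable --now tlp
    sudo tlp-stat -b              # battery / power report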
> Is x86 just not able to keep up with the ARM architecture?
Yes and no. x86 is inherently inefficient, and much of the progress over the last two decades has been about offloading computation to more specialized and efficient coprocessors; that's how we got GPUs and DMA engines on M.2 and Ethernet controllers.
That said, it's unlikely that x86 specifically is what wastes your battery. I would rather blame Linux; I suspect its CPU frequency/power drivers misbehave on some CPUs, and unfortunately I have no idea how to fix that.
They're big, expensive chips with a focus on power efficiency. AMD and Intel's chips that are on the big and expensive side tend toward being optimized for higher power ranges, so they don't compete well on efficiency, while their more power efficient chips tend toward being optimized for size/cost.
If you're willing to spend a bunch of die area (which directly translates into cost) you can get good numbers on the other two legs of the Power-Performance-Area triangle. The issue is that the market position of Apple's competitors is such that it doesn't make as much sense for them to make such big and expensive chips (particularly CPU cores) in a mobile-friendly power envelope.
I've been thinking a lot about getting something from Framework, as I like their ethos around repairability. However, I currently have an M1 Pro which works just fine, so I've been kicking the can down the road while worrying that a Framework just won't be up to par with what I'm used to from Apple. Not just the processor, but everything. Even in the Intel Mac days, I ended up buying an Asus Zephyrus G14, which had nothing but glowing reviews from everyone. I hated it and sold it within 6 months. There is a level of polish that I haven't seen on any x86 laptop, which makes it really hard for me to venture outside of Apple's sandbox.
I think this is partially down to Framework being a very small and new company that doesn't have the resources to make the best use of every last coulomb, rather than an inherent deficiency of x86. The larger companies like Asus and Lenovo are able to build more efficient laptops (at least under Windows), while Apple (having very few product SKUs and full vertical integration) can push things even further.
notebookcheck.com does pretty comprehensive battery and power efficiency testing - not of every single device, but they usually include a pretty good sample of the popular options.
I tend to think it's putting the memory on the package. Doing that has given the M1 Max over 400GB/s, which is a good 4x what a usual dual-channel x64 CPU gets, and the latency is half that of going out to a DRAM slot. That is drastic, and I remember when the northbridge was first folded into the CPU by AMD with the Athlon 64; it brought a similarly big improvement in performance. It also reduces power consumption a lot.
The cost is flexibility and I think for now they don't want to move to fixed RAM configurations. The X3D approach from AMD gets a good bunch of the benefits by just putting lots of cache on board.
Apple got a lot of performance out of not a lot of watts.
One other possibility on power saving is the way Apple ramps the clock speed. It's quite slow to ramp from its 1GHz idle to 3.2GHz, taking about 100ms, and it doesn't even start ramping for 40ms. With tiny little bursts of activity like web browsing, this slow transition likely saves a lot of power at the cost of absolute responsiveness.
I don't think there is a single thing you can point to. But overall Apple's hardware/software is highly optimized, closely knit, and each component is in general the best the industry has to offer. It is sold cheap as they make money on volume and an optimized supply chain.
Framework does not have the volume, it is optimized for modularity, and the software is not as optimized for the hardware.
As a general-purpose computer, Apple is impossible to beat, and it will take a paradigm shift for that to change (a completely new platform, similar to the introduction of the smartphone). Framework has its place as a specialized device for people who enjoy flexible hardware and custom operating systems.
That's a Chrome problem, especially on extra powerful processors like Strix Halo. Apple is very strict about power consumption in the development of Safari, but Chrome is designed to make use of all unallocated resources. This works great on a desktop computer, making it faster than Safari, but the difference isn't that significant and it results in a lot of power draw on mobile platforms. Many simple web sites will peg a CPU core even when not in focus, and it really adds up with multiple tabs open.
It's made worse on the Strix Halo platform, because it's a performance-first design, so there are more resources for Chrome to take advantage of.
The closest browser to Safari that works on Linux is Falkon. Its compatibility is even lower than Safari's, so there are a lot of sites where you can't use it, but on the ones where you can, your battery usage can be an order of magnitude less.
I recommend using Thorium instead of Chrome; it's better but it's still Chromium under the hood, so it doesn't save much power. I use it on pages that refuse to work on anything other than Chromium.
Chrome doesn't let you suspend tabs, and as far as I could find there aren't any plugins to do so; it just kills the process when there aren't enough resources and reloads the page when you return to it. Linux does have the ability to suspend processes, though, and you can save a lot of battery life if you suspend Chrome when you aren't using it.
I don't know of any GUI for it, although most window managers make it easy to assign a keyboard shortcut to a command. Whenever you aren't using Chrome but don't want to deal with closing it and re-opening it, run the following command (and ignore the name, it doesn't kill the process):
killall -STOP google-chrome
When you want to go back to using it, run:
killall -CONT google-chrome
This works for any application. The RAM usage will remain the same while suspended, but it won't draw power reading from or writing to RAM, and its CPU usage will drop to zero. The windows will remain open and the window manager will handle them normally, but what's inside won't update, and clicks won't do anything until resumed.
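If you do this a lot, a tiny wrapper script saves retyping the two commands and can be bound to a keyboard shortcut. This is just a sketch; it assumes the process is named google-chrome like in the commands above, so adjust for chromium or a Flatpak install:

    #!/bin/sh
    # Toggle an application between suspended and running by process name.
    # Usage: ./pause-toggle.sh google-chrome
    NAME="${1:-google-chrome}"
    if ps -o stat= -C "$NAME" | grep -q T; then
        killall -CONT "$NAME"   # currently stopped -> resume it
    else
        killall -STOP "$NAME"   # currently running -> suspend it
    fi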
One downside of Framework is that they use DDR instead of LPDDR. This means you can upgrade or replace the RAM, but it also means the memory is slower and more power hungry.
It's also probably worth putting the laptop in "efficiency" mode (15W sustained, 25W boost per Framework). The difference in performance should be fairly negligible compared to balanced mode for most tasks, and it will use less energy.
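On most current distros that efficiency/power-saver profile can also be toggled from the command line through power-profiles-daemon; a quick sketch (assuming the daemon is installed, which GNOME and KDE ship by default these days; how exactly "power-saver" maps to Framework's 15W/25W limits depends on the firmware):

    powerprofilesctl list              # shows performance / balanced / power-saver
    powerprofilesctl set power-saver   # roughly the "efficiency" mode described above
    powerprofilesctl get               # confirm the active profile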
Well, there is a major architectural reason why the entire M series appears to be "so fast," and that is unified memory, which eliminates the buffer-to-buffer data copying that is probably over half of what a chip with a non-unified memory architecture is doing at any given time. On M-series chips you just reference the data where it is, and you're done.
I considered getting a personal MBP (I have an M3 from work), but picked up a Framework 13 with the AMD Ryzen 7 7840U. I have Pop!_OS on it, and while it isn't quite as impressive as the MBP, it is radically better than other Windows / Linux laptops I have used lately. Battery life is quite good, ~5hr or so; not quite on par with the MBP, but still good enough that I don't really have any complaints (and being able to upgrade RAM / SSD / even the mobo is worth some tradeoff to me, whereas my employer will just throw my MBP away in a few years).
A lot of insightful comments already, but there are two other tricks I think Apple is using: (1) the laptops can get really hot before the fans turn on audibly and (2) the fans are engineered to be super quiet. So even if they run on low RPM, you won't hear them. This makes the M-series seem even more efficient than they are.
Also, especially the MacBook Pros have really large batteries, on average larger than the competition. This increases the battery runtime.
1. Memory soldered to the CPU
2. Much more cache
3. No legacy code
4. High frequencies (to be first in gaming benchmarks; see what happens when you're a little behind, like the last Intel launch: the perception is that Intel has bad CPUs because they are a few percentage points behind AMD in games. Apple doesn't have that pressure, since comparisons are mostly Apple vs. Apple and Intel vs. AMD)
The engineers at AMD are the same as at Apple, but both markets demand different chips and they get different chips.
For some time now the market has been talking about energy efficiency, and we see:
1. AMD soldering memory close to the CPU
2. Intel and AMD adding more cache
3. Talks about removing legacy instructions and bit widths
4. Lower out of the box frequencies
It will take more market pressure and more time, though.
On the efficiency side, there's a big difference at the OS level. The recently released Lenovo Legion Go S handheld has both SteamOS (which is Arch, btw) and Windows 11 versions, allowing a direct comparison of the efficiency of AMD's Z1E chip under load at a limited TDP. The difference is huge: with SteamOS, fps is significantly higher and at the same time the battery lasts a lot longer.
Windows does a lot of useless crap in the background that kills battery and slows down user-launched software
There's a dimension to this people wilfully ignore: the AArch64 design is inspired, especially if you have a team as good as Apple's to execute an implementation of it. And the causality isn't one-way, because AArch64 is what it is partly because of things the Apple team wanted to do, which has led to their performance advantages today.
I don't think many people have appreciated just how big a change the 64 bit Arm was, to the point it's basically a completely different beast than what came before.
From the moment the iPhone went 64 bit it was clear this was the plan the whole time.
Like a few other comments have mentioned, AMD's Strix Halo / AI Max 380 and above is the chip family that is closest to what Apple has done with the M series. It has integrated memory and decent GPU. A few iterations of this should be comparable to the M series (and should make local LLMs very feasible, if that is your jam.)
On my Framework (16), I've found that switching to GNOME's "Power Saver" mode strikes the right balance between thermals, battery usage and performance. I would recommend trying it. If you're not using GNOME, manually modifying `amd_pstate` and `amd_pstate_epp` (either via kernel boot parameters or runtime sysfs parameters) might help out.
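For reference, a rough sketch of the runtime knobs I mean (these are the standard amd_pstate sysfs paths on recent kernels; whether the "power" hint actually helps depends on kernel version and firmware):

    # which cpufreq driver/mode is active ("amd-pstate-epp" means active mode)
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
    cat /sys/devices/system/cpu/amd_pstate/status   # active / passive / guided

    # list the allowed energy/performance hints, then pick a more efficient one
    cat /sys/devices/system/cpu/cpu0/cpufreq/energy_performance_available_preferences
    echo power | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference

    # if status says passive/guided, booting with amd_pstate=active enables the EPP interface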
I agree that it's unfortunate that the power usage isn't better tuned out of the box. An especially annoying aspect of GNOME's "Power Saver" mode is that it disables automatic software updates, so you can't have both automatic updates and efficient power usage at the same time (AFAIK)
There are really a lot of responses to this which explain it well. The summary, though, might be phrased as 'alignment'. Specifically, when everyone from the mainboard engineer to the product marketer has the same goals and priorities (is aligned), the overall system reflects that.

In x86 land the processor guys are always trying to 'capture more addressable market', which means features for specific things that perhaps have no value to your laptop but are great for cars embedding the chip. Similarly for display manufacturers, who want standards that work for everyone even if they aren't precisely what anyone wants. Need a special 'sleep the pixels that are turned off' mode for your screen ASIC which isn't part of the HDMI spec? Nah, we're not gonna do that, because who would use it? But Apple can. Specific things in the screen that minimize power, which the OS can talk to through 'side channels' that aren't part of any standard? Sure, they can do that too. And if everyone is aligned on long battery life (for example), that happens.

I worked at both Google and NetApp, and both of them bought enough hard drives that they could demand and get specific drive firmware that did things to make their systems run better. Their software knew about the specific firmware and exploited it. They 'aligned' their vendors with their system objectives, which they could do because of their volume purchases.

In the x86 laptop space the 'big' vendors like Dell, HP, Asus, Lenovo, etc. can do that sort of thing. Framework doesn't have the leverage yet. Linux is an issue too, because that community isn't aligned either.
Alignment is facilitated by mutual self interest, vendors align because they want your business, etc. The x86 laptop industry has a very wide set of customer requirements, which is also challenging (need lots of different kinds of laptops for different needs).
The experience is especially acute when one's requirements for a piece of equipment have strayed from the 'mass market' needs so the products offered are less and less aligned with your needs. I feel this acutely as laptops move from being a programming tool to being an applications product delivery tool.
I may be out of date or wrong, but I recall that when the M1 came out there were claims that x86 could never catch up, because there is an instruction decoding bottleneck (x86 instructions are all variable length) which the M1 does not have, or can handle in parallel. Because of that bottleneck, x86 needs to use other tricks to get speed, and those run hot.
It sounds like something is horribly misconfigured.
- Try running powertop to see if it says what the issue is.
- Switch to firefox to rule out chrome misconfigurations.
- If this is Wayland, try X11
I have an AMD SoC desktop and it doesn't spin up the fans or get warm unless it's running a recent AAA title or an LLM. (I'm running Devuan because most other distros I've tried aren't stable enough these days.)
In scatterplots of performance vs wattage, AMD and Apple silicon are on the same curve. Apple owns the low end and AMD owns the high end. There’s plenty of overlap in the middle.
Apple tailors their software to run optimally on their hardware. Other OSes have to work on a variety of platforms, which limits the amount of hardware-specific optimization.
I think that you are wrong to extrapolate anything about the hardware capabilities based on your experience on a few tasks.
Differences are more software-related in my opinion. And some of it might be just appearance, as Apple is used to doing tricks. For example, it was shown back in the day that people thought the iPhone loaded things faster because it plays animations at load time, despite taking the same time as other phones.
As for the many tabs in Chrome, the difference might be that macOS is aggressively throttling things, while your Linux laptop will give you the maximum performance possible and so produce more heat. I often noticed that with macOS, especially when you don't have a lot of RAM: the OS will readily put to sleep and evict other programs, but also other windows and other tabs I guess, since some of them are separate processes. Then, when you need them, it reloads the memory. Good in terms of power efficiency, but in my experience it caused terrible latencies, like taking a second just to go from one window to another. Not obvious if you aren't used to better.
In the same way, a lot of people are used to Electron-based IDEs like VS Code and feel perfectly OK, but for me the latency between typing code and it showing on the screen is awful compared to my native IDE.
Likewise on macOS, you can see how often the laptop goes to sleep or dims the display unexpectedly with the default settings, like those people who suddenly drop out of Google Meet calls because the Mac went to sleep despite the active call.
No incentive. x86 users come to the table with a heatsink in one hand and a fan in the other, ready to consume some watts.
What is the power profile setting? Is it on balanced or performance? Install powertop and see what is up. What distro are you using? The Linux drivers for the new AMD chips might stink because the chips are so new. Linux drivers for laptops stink in general compared to Windows. I know my 11th-gen WiFi still doesn't work right, even with the latest kernel and power saving disabled on the WiFi.
AMD needs to put out a reference motherboard to pair with their chips. They're basically relying on third-party "manufacturers" to put up the R&D. We have decades of these mobo manufacturers doing the bare minimum, churning out crappy-quality mobos. No one's interested in overclocking in 2025. Why am I paying a $300 premium for a feature I don't care about?
The M1 MacBook Pro I used at work for several months, until the Ubuntu Ryzen 7 7840U P14s w/ 32GB RAM arrived, didn't seem particularly amazing.
The only real annoying thing I've found with the P14s is the Crowdstrike junk killing battery life when it pins several cores at 100% for an hour. That never happened in MacOS. These are corporate managed devices I have no say in, and the Ubuntu flavor of the corporate malware is obviously far worse implemented in terms of efficiency and impact on battery life.
I recently built myself a 7970X Threadripper and it's quite good perf/$ even for a Threadripper. If you build a gaming-oriented 16c ryzen the perf/$ is ridiculously good.
No personal experience here with Frameworks, but I'm pretty sure Jon Blow had a modern Framework laptop he was ranting a bunch about on his coding live streams. I don't have the impression that Framework should be held as the optimal performing x86 laptop vendor.
For what it's worth -- and I'm not familiar with the Framework 13 -- but I did recently review a marketed-for-AI-workloads laptop with Ryzen 260 CPU and Nvidia 5060 laptop GPU, which shipped with Windows, and was curious how graphical Ubuntu with GNOME would run from a fresh install on it. It ran hot on simple tasks, with severely worse battery performance (from 11h runtime playing a local video stream via Firefox to 3.5h) and moderately worse total work output relative to Windows.
It runs Debian headless now (I didn't have particular use for a laptop in the first place). Not sure just how unpopular this suggestion'd be, but I'd try booting Windows on the laptop to get an idea of how it's supposed to run.
> When can we expect a x86 laptop chip to match the M1 in efficiency/thermals?!
One of the things Apple has done is to create a wider core that completes more instructions per clock cycle for performance while running those cores at conservative clock speeds for power efficiency.
Intel and AMD have been getting more performance by jacking up the clock speeds as high as possible. Doing so always comes at the cost of power draw and heat.
Intel's Lunar Lake has a reputation for much improved battery life, but also reduces the base clock speed to around 2 gigahertz.
The performance isn't great vs the massively overclocked versions, but at least you get decent battery life.
A friend at grad school was asking me for advice -- he had an "in" at Intel: make $250k and do nothing. A friend had promised him a basically no-show position. He was debating between this $250k/yr no-show position at Intel (no growth) and something elsewhere that was more demanding but would provide more growth.
This isn't the only no-show position I've heard about at Intel. That is why Intel cannot catch up. You probably cannot get away with that at Apple.
I think you just didn't pick the right chip. Intel's new Core Ultra (series 2) redid everything from scratch to focus on efficiency, and there are also the X Elite laptops and tablets; you could choose something like a Surface Pro with an ARM chip and see how well it does, combining what a Mac and an iPad can't do separately. You could get a similar battery/performance experience. One question, though: did you consider remote development from a tablet, with a dedicated server at home built to whatever specs you want? I'd like to know how relevant you think that would be, because it makes sense if the laptop is heavy and battery-hungry, but what about efficient laptops? Which do you find more pleasing: remote SSH from a very efficient tablet with a good screen, or an efficient M1 Pro laptop?
This is a modularity vs integration tradeoff.
- Linux is a patchwork of software, written in a variety of programming languages, using a variety of libraries, some of which have the same functionality. There is duplication, misalignment, legacy.
- MacOS is developed by a single company. It is much more integrated and coherent.
Same for the CPU:
- x86 accesses memory through an external bus. The ability to install a third party GPU requires an external bus, with a standardized protocol, bus width, etc. This is bound to lag behind state of the art
- Apple chips have on-package memory and an integrated GPU (the memory is on the same package as the CPU, though not the same die). Higher speeds, optimization, escaping standardized protocols: all of this is possible.
This has an impact on kernel/drivers/compilers:
- x86: so many platforms, CPU versions, and protocol revisions to support, often with limited documentation. This wastes a hell of a lot of engineering time!
- Apple: limited number of HW platforms to support, full access to internals.
There's a lot of trash talking of x86 here but I feel like it's not x86 or Intel/AMD that are the problem for the simple reason that Chromebooks exist. If you've ever used a Chromebook with the Linux VM turned on, they can basically run everything you can run in Linux, don't get hot unless you actually run something demanding, have very good idle power usage, and actually sleep properly. All this while running on the same i5 that would overheat and fail to sleep in Windows / default Linux distros. This means that it is very much possible to have an x86 get similar runtimes and heat output as an M Series Mac, you just need two things:
- A properly written firmware. All Chromebooks are required to use Coreboot and have very strict requirements on the quality of the implementation set by Google. Windows laptops don't have that and very often have very annoying firmware problems, even in the best cases like Thinkpads and Frameworks. Even on samples from those good brands, just the s0ix self-tester has personally given me glaring failures in basic firmware capabilities.
- A properly tuned kernel and OS. ChromeOS is Gentoo under the hood, and every core service is, AFAIK, recompiled for the CPU architecture with as many optimisations as possible enabled. I'm pretty sure the kernel is also tweaked for battery life and desktop usage. Default installations of popular distros will struggle to match this, because they come pre-compiled and need to support devices other than ultrabooks.
Unfortunately, it seems like Google is abandoning the project altogether, seeing as they're dropping Steam support and merging ChromeOS into Android. I wish they'd instead make another Pixelbook, work with Adobe and other professional software companies to make their software compatible with Proton + Wine, and we'd have a real competitor to the M1 Macbook Air, which nothing outside of Apple can match still.
You can probably install Asahi Linux on that M1 pro and do comparative benchmarks. Does it still feel different? (serious question)
Plenty of excellent comments about the companies - e.g. Apple is vertical, closed, mobile-first, while Microsoft is horizontal, open, desktop-first; decades of work by many thousands of people went into optimising many tiny advantages, aka tricks - but I can't help but think back to pre-history, where Intel was always more-is-more, while ARM was always less-is-more. Intel was winning for the longest time. I never expected to see competitive non-x86 single-core integer performance, tbh. And in the pre-pre-history, one generation further back, the tiny 6502 at 1MHz, mostly 8-bit only, could about keep up with the Z80 at 4MHz and its almost-aspiring-to-16-bit registers. Always made me wonder somewhat: "whut, how come??"
It's not just the architecture - it's the memory subsystem. Unified memory gives Apple huge advantages for workloads that need to move data between CPU and GPU frequently. x86 is stuck with discrete graphics memory in most configurations. Even if Intel/AMD match the compute efficiency, the memory bandwidth story is harder to solve.
Plenty of reasons, but the big one would be integration, especially RAM. Apple M series processors are exclusively designed for Apple products running the Apple OS, none of them extensible. It means it can be optimized for that use case.
RAM in particular can be a big performance bottleneck. Apple M has way better bandwidth than most x86 CPUs, and having well-specified RAM chips soldered right next to the CPU instead of having to support DIMM modules certainly helps. AMD's AI Max chips, which also have great memory bandwidth and are the most comparable to Apple M, likewise use soldered RAM.
Maybe some details like ARM having a more efficient instruction decoder plays a part, but I don't believe it is that significant.
Honestly, I have serious FOMO about this. I am never going to run a Mac (or worse: Windows); I'm 100% on Linux, but I seriously hate that I can't reliably work at a coffee shop for five hours. Not even doing that much other than some music, coding, and a few compiles of golang code.
My Apple friends get 12+ hrs of battery life. I really wish Lenovo+Fedora or whoever would get together and make that possible.
Backward compatibility.
Intel provides processors for many vendors and many OS. Changing to a new architecture is almost impossible to coordinate. Apple doesn't have this problem.
Actually, in the 90s Intel and Microsoft wanted to move to a RISC architecture, but Compaq forced them to stay on x86.
A better question is which (if any) ARM competitors can achieve comparable performance to M-series? I do understand Apple has tuned the entire platform from cpu/gpu, cache, unified memory, and software to achieve what they offer.
When the device is doing nothing it should use no power. The goal is to get to 'doing nothing' as fast as possible.
That's a legacy of iPhone. And that's a fundamental philosophical difference between Apple and everyone else.
I suspect that's why Apple's caches are structured the way they are: the goal is to stop working as soon as possible. More instruction cache costs more up front, but it leads to less work overall.
On my Ryzen laptop, I have to manually ensure that Linux is setting the right power settings. Once I do that, my 5950HS laptop from 2022 is completely competitive with my work MacBook M2. Louder and hotter at full tilt, but it also has a better GPU (even with the onboard Nvidia turned off), and I can get ~6 hours of web dev out of it if I'm not constantly churning through tons of files.
I would try it with Windows for a better comparison, or get into the weeds of getting Linux to handle the ryzen platform power settings better.
With Ubuntu properly managing fans and temps and clocks, I'll take it over the Mac 10/10 times.
They are pretty similar when comparing the latest AMD and Apple chips on the same node. Apple's buying power means they get new nodes earlier than AMD, usually by 6-9 months.
Windows on the other hand is horribly optimized, not only for performance, but also for battery life. You see some better results from Linux, but again it takes a while for all of the optimizations to trickle down.
The tight optimization between the chip, operating system, and targeted compilation all come together to make a tightly integrated product. However comparing raw compute, and efficiency, the AMD products tend to match the capacity of any given node.
Hi Stephen,
On the Mac, you can fix neither the hardware nor the software; it's like a car with the hood welded shut.
On the Framework, you can fix (or change) both, and there is no built-in expiry date beyond which you cannot update the software.
Is performance really the only thing that matters?
I have an ASUS Zenbook 14 OLED UX3405 and it's the same here - the thermals are shit. I can't watch a YouTube video without the fan spinning up. I can in mplayer though, so there must be something in the desktop tech stack that prevents the cores from sleeping. Maybe it's Wayland, which I've noticed is sluggish sometimes, or maybe Linux's TCP/IP handling of video streams isn't optimized for energy efficiency. The stack is so deep that finding the culprit is probably impossible.
Hardware performance literally doesn't matter if your software doesn't use it. The more SoC-like design of the M series essentially gives performance engineers an easier time. x86 vendors are fighting a losing battle until they change their image of what an x86-based computer should look like. You aren't going to beat Apple's insiders; x86 vendors have had a market opportunity here for two decades at this point and have refused to switch, so they are likely incapable and will die. Sad.
Apple designs their laptops to throttle power when they warm up too much. Framework gives theirs a fan.
It's a design choice.
Also, different Linux distros/DEs prioritize different things. Generally they prioritize performance over battery life.
That being said, I find Debian GNOME to be the best on battery life. I get 6 hours on an MSI laptop that has an 11th gen Intel processor and a battery with only 70% capacity left. It also stays cool most of the time (except gaming while being plugged in) but it does have a fan...
Part of it is software. macOS was redesigned for the M1 chips. They redid a huge amount of the OS, down to things like memory allocators and other low-level stuff.
In the general case, it appears to be impossible to beat a hardware vendor that is also entirely in charge of the operating system and much of the software on top of that (e.g. safari).
In special cases, such as not caring about battery life, x86 can run circles around M1. If you allow the CPU rated for 400W to actually consume that amount of power, it's going to annihilate the one that sips down 35W. For many workloads it is absolutely worth it to pay for these diminishing returns.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
I've got the Framework 13 with the Ryzen 5 7640U and I routinely have dozens of tabs open, including YouTube videos, docker containers, handful of Neovim instances with LSPs and fans or it getting hot have never been a problem (except when I max out the CPU with heavy compilation).
The issue you're seeing isn't because x86 is lacking but something else in your setup.
Cinebench points per Watt according to a recent c't CPU comparison [1]:
Apple M1: 23.3
Apple M4: 28.8
Ryzen 9 7950X3D (from 2023, best x86): 10.6
All other x86 were less efficient.
The Apple CPUs also beat most of the respective same-year x86 CPUs in Cinebench single-thread performance.
[1] https://www.heise.de/tests/Ueber-50-Desktop-CPUs-im-Performa... (paywalled, an older version is at https://www.heise.de/select/ct/2023/14/2307513222218136903#&...)
How much do you like the rest of the hardware? What price would seem OK for decent GUI software that runs for a long time on battery?
I'm learning x86 in order to build nice software for the Framework 12 with the i3-1315U (Raptor Lake), going into the optimization manuals for Intel's E-cores (apparently Atom) and AMD's 5c cores. The efficiency cores on the M1 MacBook Pro are awesome. Getting Debian or Ubuntu with KDE to run this on a FW12 will be mind-boggling.
> using the Framework feels like using an older Intel based Mac
Your memory served you wrong. The experience with Intel-based Macs was much worse than with recent AMD chips.
I think it is getting close: [0]
(Edit, I read lower in the thread that the software platform also needs to know how to make efficient use of this performance per watt, ie, by not taking all the watts you can get.)
[0] https://www.phoronix.com/review/ryzen-ai-max-395-9950x-9950x...
x86 has long been the industry standard and can't be removed, but Apple could move away from it because they control both hardware and software.
>My daily workhorse is a M1 Pro that I purchased on release date, It has been one of the best tech purchases I have made
Same. I just realized it's three years old; I've used it every day for hours and it still feels like the first day I got it.
They truly redeemed themselves with this, as their laptops had been getting worse and worse and worse (keyboard fiasco, Touch Bar, ...).
Thanks for the honest review! I have two Intel ThinkPads (2018 and 2020) and I've been eying the Framework laptops for a few years as a potential replacement. It seems they do keep getting better, but I might just wait another year. When will x86 have the "alien technology from the future" moment that M1 users had years ago already?
Macbooks are more like "phone/tablet hardware evolved into desktop" mindset (low power, high performance). x86 hardware is the other way around (high power, we'll see about performance).
That being said, my M2 beats the ... out of my twice-as-expensive work laptop when compiling an Arduino project. Literal jaw drop the first time I compiled on the M2.
I love all the benchmarks and numbers and what not but for us who switched to M* laptops it is all very obvious without any of that: longer battery life, more performance for disk/cpu/gpu, no fans spinning.
My M1 Air still beats top of the line i7 MacBook Pros.
To those who are using the newer MacBook Pros: how easy and seamless is it to run Linux via Parallels etc., without going all the way to Asahi? If I'm super comfortable with Linux, can I just get a near-native Linux desktop experience and forget that all of it is running on top of macOS?
All Ryzen mobile chips (so far) use a homogeneous core layout. If heat/power consumption is your concern, AMD simply hasn't caught up to the big.LITTLE approach Intel and Apple use.
In terms of performance though, those N4P Ryzen chips have knocked it out of the park for my use-cases. It's a great architecture for desktop/datacenter applications, still.
Apple has vertical integration between their hardware and operating system meaning they have way more control. They can adapt their software to enable them to optimize their hardware in ways competitors can't.
Read the history of the Bugatti Veyron for the answer. In short, VW made an extraordinary machine, but one so expensive that they were afraid to sell it at its real cost.
So VW literally subsidized part of each Veyron for their customers, selling it under-priced.
I think the same is happening with the Apple M architecture: it is extraordinary and different from anything else on the market, but Apple sells it under-priced, so to limit losses they decided to restrict it to very few models.
How do such things happen? Well, hardware is hard. A SoC this sophisticated usually needs 7-8 iterations to reach production, and that can cost a million or even more. And the most common problem is simply low yield: for example, you make 100 cores on one die, but only 5-6 work.
How do AMD/Intel deal with such things? It's hard, meaning complex.
First, they have huge experience and a very wide portfolio of different SoCs, plus some tricks, so they can, for example, downgrade a Xeon to a Core i7 with jumpers.
Second, for large regular structures like RAM/cache, they can disable broken parts of the die with jumpers, or even disable cores. That's why there are so many DRAM PCB designs: they are usually made as 6 RAM fields with one controller, and with jumpers chips can be sold with literally 1, 2, 3, 4, 5, or 6 fields enabled. Some AMD SoCs exist with an odd number of cores because of this (for example, 3 cores), plus other tricks, which lets them average out profits across a wide line of SoCs.
Third, for some designs Intel/AMD reuse already proven technology: the Atom was basically the first Pentium on a new semiconductor process, and for a long time the i7 series was basically the previous generation of Xeons.
Unfortunately for Apple, they don't have the luxury of such a wide product line, and they don't have a significant place to dump low-grade chips, so they limited the M line to the configuration that, I think, just happens to have the largest yield.
From my experience, I would speculate that Apple's top people are considering a wider product line once they achieve better yields, but so far without much success.
The M4 I have lets me run GPT-OSS-20B on my Mac, and it's surprisingly responsive. I was able to get LM Studio to even run a web API for it, which Zed detected. I'm pleasantly surprised by how powerful it is. My gaming PC with a 3080 cannot even run the same LLM model (not enough VRAM).
Software.
If you actually benchmark said chips in a computational workload I'd imagine the newer chip should handily beat the old M1.
I find both windows and Linux have questionable power management by default.
On top of that, snappiness/responsiveness has very little to do with the processor and everything to do with the software sitting on top of it.
I can build myself a new amd64 box for just under €200. Under €100 with used parts. Some older Dell and Lenovo laptops even work with coreboot.
An Airbook sets me back €1000, enough to buy a used car, and AFAICT it is much more difficult to get fully working Linux on than my €200 amd64 build.
Why hasn't Apple caught up?
There is an M-series competitor from Intel that was released last year, codename Lunar Lake.
Here's a video about it. Skip to 4:55 for battery life benchmarks. https://www.youtube.com/watch?v=ymoiWv9BF7Q
I was in nearly the same situation as you and went with the Framework 13 as well (albeit with the AMD Ryzen 5 7640U which is an older chip). Not really regretting it though despite some quirks. Out of curiosity, how much RAM do you have in your Framework 13?
I don't know, but I suspect the builds of the programs you're using play a huge factor in this. Depending on the Linux distro and package management you're using, you just might not be getting programs that are compiled with the latest x86_64 optimizations.
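If you're curious whether that applies to your setup, a rough sketch of how to check (the ld.so output assumes glibc 2.33 or newer; older distros won't print the hwcaps section):

    # what instruction sets the CPU itself offers
    grep -o 'avx2\|avx512[a-z]*' /proc/cpuinfo | sort -u

    # which x86-64 microarchitecture levels your glibc will actually search for
    /lib64/ld-linux-x86-64.so.2 --help | grep -A4 'glibc-hwcaps'

Most mainstream distro packages are still built for a conservative x86-64 baseline, so even a CPU that reports avx2/avx512 may spend its time running code that never uses them.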
Does the M series have a flat memory model? If so, I believe that may be the difference. I'm pretty sure the entire x86 family still pages RAM access which (at least) quadruples activity on the various busses and thus generates far more heat and uses more energy.
One is more built from the ground up more recently than the other.
Looking beyond Apple/Intel, AMD recently came out with a cpu that shares memory between the GPU and CPU like the M processors.
The Framework is a great laptop - I'd love to drop a mac motherboard into something like that.
From a pure CPU and battery life perspective, the Snapdragon X Elite-based Surface Laptop 7 is really quite good -- comparable to the M2 Pro and M3 Pro in performance and performance per watt. The GPU is a bit weak.
The build quality of the Surface Laptop is superb also.
> I haven’t tried Windows on the Framework yet it might be my Linux setup being inefficient.
My experience has been to the contrary. Moving to Linux a couple months ago from Windows doubled my battery life and killed almost all the fan noise.
I always thought it was Apple's on-package DRAM latency that contributes to its speed relative to x86, especially for local LLM usage (generation, not necessarily training), but with the answers here I'm not so sure.
If only the Framework 12 could take the 395+, but I think that can't work out vs. ARM? And then my M4 Air is just better and cheaper. The cheaper part I don't care about much, but the battery vs. performance is quite mental.
> a number of Dockers containers running simultaneously and I never hear the fans, battery life has taken a bit of a hit but it is still very respectable.
Note those Docker containers are running in a Linux VM!
Of course they are on Windows (WSL2) as well.
I have an iPad pro (m1) and don't feel like upgrading at all. Of course, it's an overpowered chip for a tablet - but I'm still impressed by what I can run on it (like DrawThings).
I don't give any fucks about battery life or even total power consumption cost; I just hate that I have some crap-ass Apple mid-range (for them) laptop with only 36GB RAM and an "M4 Max" CPU, and it runs rings around my 350W Core i9-14900K desktop Linux workstation, and there is essentially no way I can develop software (Rust, web apps, multi-container Docker crap) on Linux with anything close to the performance of my shitty laptop computer, even if I spend $10,000.
That's actually wild. I think we're in a kind of unique moment, but one that is good for Apple mainly, because their OS is so developer-hostile that I pay back all the performance gains with interest. T_T
I don't think a fan spinning is negative. The cooling is functioning effectively.
Apple often lets the device throttle before it turns on the fans for "better UX"; Linux plays no such mind games.
It's not just the hardware efficiency, but it's also the software stack that's efficient. I'd be curious, macOS versus Linux for battery life testing.
I was about to say it might be Windows and to suggest using Linux, since perf benchmarks on Windows can be far worse than on Linux for the same chip, but you are using Linux already.
I love how few people mention ARM being used in the cloud, when it has literally saved folks so much money, not to mention the planet burns less quickly on ARM.
The M series chips are optimized at both the assembly language and silicon (hardware) levels for mobile use. X86 is much more generalized.
In general, probably co-design with software. Apple is in a position where they design microprocessors that are only going to be running MacOS/iOS.
Intel and AMD have to earn their investments back in one generation; Apple can earn their investments back over a customer's lifetime.
Chrome has been very conservative about enabling hardware acceleration features on Linux. Look under about://gpu to see a list. It is possible to force them via command line flags. That said, this is only part of the story.
There are different kinds of transistors that can be used when making chips. There are slow, but efficient transistors and fast, but leaky transistors. Getting an efficient design is a balancing act where you limit use of the fast transistors to only the most performance critical areas. AMD historically has more liberally used these high performance leaky transistors, which enabled it to reach some of the highest clock frequencies in the industry. Apple on the other hand designed for power efficiency first, so its use of such transistors was far more conservative. Rather than use faster transistors, Apple would restrict itself to the slower transistors, but use more of them, resulting in wider core designs that have higher IPC and matched the performance of some of the best AMD designs while using less power. AMD recently adopted some of Apple’s restraint when designing the Zen 5c variant of its architecture, but it is just a modification of a design that was designed for significant use of leaky transistors for high clock speeds:
https://www.tomshardware.com/pc-components/cpus/amd-dishes-m...
The resulting clock speeds of the M4 and the Ryzen AI 340 are surprisingly similar, with the M4 at 4.4GHz and the Ryzen AI 340 at 4.8GHz. That said, the same chip is used in the Ryzen AI 350 that reaches 5.0GHz.
There is also the memory used. Apple uses LPDDR5X on the M4, which runs at lower voltages and has tweaks that sacrifice latency to an extent for a big savings in power. It also is soldered on/close to the CPU/SoC, reducing the power needed to transmit data to/from the CPU. AMD uses either LPDDR5X or DDR5. I have not kept track of the exact difference in power usage between DDR versions and their LP variants, but I expect the LP variants to use half the power, if not less. Memory in many machines can use 5W or more just at idle, so cutting memory power usage can make a big impact.
Additionally, x86 has a decode penalty compared to other architectures. It is often stated that this is negligible, but those statements began during the P4 era when a single core used ~100W where a ~1W power draw for the decoder really was negligible. Fast forward to today where x86 is more complex than ever and people want cores to use 1W or less, the decode penalty is more relevant. ARM, using fixed length instructions and having a fraction of the instructions, uses less power to decode its instructions, since its decoder is simpler. To those who feel compelled to reply to repeat the mantra that this is negligible, please reread what I wrote about it being negligible when cores use 100W each and how the instruction set is more complex now. Let’s say that the instruction decoder uses 250mW for x86 and 50mW for ARM. That 200mW difference is not negligible when you want sub-1W core energy usage. It is at least 20% of the power available to the core. It does become negligible when your cores are each drawing 10W like in AMD’s desktops.
Apple also has taken the design choice of designing its own NAND flash controller and integrating it into its SoC, which provides further power savings by eliminating some of the power overhead associated with an external NAND flash controller. Being integrated into the SoC means that there is no need to waste power on enabling the signals to travel very far, which gives energy savings, versus more standard designs that assume a long distance over a PCB needs to be supported.
Finally, Apple implemented an innovation for timer coalescing in Mavericks that made a fairly big impact:
https://www.imore.com/mavericks-preview-timer-coalescing
On Linux, coalescing is achieved by adding a default 50 microsecond (50,000 ns) slack to traditional Unix timers. This can be changed, but I have never seen anyone actually do that:
https://man7.org/linux/man-pages/man2/pr_set_timerslack.2con...
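Since Linux 4.6 the same knob is also exposed per process in /proc, so you can poke it from a shell; a small sketch, with my-background-daemon standing in for whatever tolerant process you want to slow down:

    # default slack applied to this shell's timers, in nanoseconds
    cat /proc/self/timerslack_ns        # typically 50000, i.e. 50 microseconds

    # give a tolerant background process a much larger slack so its wakeups coalesce
    # (writing another process's value needs CAP_SYS_NICE, hence the sudo)
    echo 4000000 | sudo tee /proc/$(pgrep -o -x my-background-daemon)/timerslack_ns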
That was done to retroactively support coalescing in UNIX/Linux APIs that did not support it (which were all of them). However, Apple made its own new API for event handling called grand central dispatch that exposed coalescing in a very obvious way via the leeway parameter while leaving the UNIX/BSD APIs untouched, and this is now the preferred way of doing event handling on MacOS:
https://developer.apple.com/documentation/dispatch/1385606-d...
Thus, a developer of a background service on MacOS that can tolerate long delays could easily set the leeway to multiple seconds, which would essentially guarantee it would be coalesced with some other timer, while a developer of a similar service on Linux could, but probably will not, since the timer slack is something the developer would need to go out of his way to modify, rather than something in his face like the leeway parameter is with Apple's API. I did check how this works on Windows: Windows supports a similar per-timer delay via SetCoalescableTimer(), but the developer would need to opt into this by using it in place of SetTimer(), and it is not clear there is much incentive to use it. To circle back to Chrome: it uses libevent, which uses the BSD kqueue on MacOS. As far as I know, kqueue does not take advantage of timer coalescing on MacOS, so the Mavericks changes would not benefit Chrome very much, and the improvements that do benefit Chrome are elsewhere. However, I thought the timer coalescing stuff was worthwhile to mention given that it applies to many other things on MacOS.
In my opinion AMD is on a good path to having at least comparable performance to MacBooks by copying Apple's architectural decisions. Unfortunately their jump onto the latest AI hype train did not suit them well for efficiency: the Ryzen 7840U was significantly more efficient than the Ryzen AI 7 350 [1].
However, with AMD Strix Halo aka the AMD Ryzen AI Max+ 395 (PRO), there are notebooks like the ZBook Ultra G1a and tablets like the Asus ROG Flow Z13 that come close to the MacBook power/performance ratio[2], due to the fact that they use high-bandwidth soldered-on memory, which allows for GPUs with shared VRAM, similar to Apple's strategy.
Framework has not managed to put this thing in a notebook yet, but shipped a desktop variant. They also pointed out that there was no way to use LPCAMM2 or any other modular RAM tech with that machine, because it would have slowed it down / increased latencies to an unusable state.
So I'm pretty sure the main reason for Apple's success is the deeply integrated architecture, and I'm hopeful that AMD's next-generation Strix Halo APUs might provide this with higher efficiency, and that Framework adopts these chips in their notebooks. Maybe they just did in the 16?! Let's wait for this announcement: https://www.youtube.com/watch?v=OZRG7Og61mw
Regarding the deeply-thought-through integration, there is a story I often tell: Apple used to make iPods. These had support for audio playback control with their headphone remotes (e.g. EarPods), which are still available today. These used a proprietary ultrasonic chirp protocol[3] to identify Apple devices and supported volume control and complex playback-control actions. You could even navigate through menus via VoiceOver with a long press and then use the volume buttons to navigate. To this day, with their USB-C-to-audio-jack adapters, these still work on nearly every Apple device released after 2013, and the wireless earbuds also support parts of this. Android has tried to copy this tiny little engineering wonder, but to this day they have not managed to get it working[4]. They instead focus on their proprietary "long-press should work in our favour and start 'Hey Google'" thing, which is ridiculously hard to intercept/override in officially published Android apps... what a shame ;)
1: https://youtu.be/51W0eq7-xrY?t=773
2: https://youtu.be/oyrAur5yYrA
There is one positive to all of this. Finally, we can stop listening to people who keep saying that Apple Silicon is ahead of everyone else because they have access to better process. There are now chips on better processes than M1 that still deliver much worse performance per watt.
s/x84/x86/
I think the Ryzen AI Max+ 395 gets really close in terms of performance per watt.
> I am sorely disappointed, using the Framework feels like using an older Intel based Mac. If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
A big thing is storage. Apple uses extremely fast storage directly attached to the SoC and physically very very close. In contrast, most x86 systems use storage that's socketed (which adds physical signal runtime) and that goes via another chip (southbridge). That means, unlike Mac devices that can use storage as swap without much practical impact, x86 devices have a serious performance penalty.
Another part of the issue when it comes to cooling is that Apple is virtually the only laptop manufacturer that makes solid full aluminium frames, whereas most x86 laptops are made out of plastic and, for higher-end ones, magnesium alloy. That gives Apple the advantage of being able to use the entire frame to cool the laptop, allowing far more thermal input before saturation occurs and the fans have to activate.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
Is that your metric of performance? If so...
$ sudo cpufreq-set -u 50MHz
done!
M1's efficiency/thermals performance comes from having hardware-accelerated core system libraries.
Imagine that you made an FPGA do x86 work, and then you wanted to optimize libopenssl, or libgl, or libc. Would you restrict yourself to only modifying the source code of the libraries but not the FPGA, or would you modify the processor to take advantage of new capabilities?
For made-up example, when the iPhone 27 comes out, it won’t support booting on iOS 26 or earlier, because the drivers necessary to light it up aren’t yet published; and, similarly, it can have 3% less battery weight because they optimized the display controller to DMA more efficiently through changes to its M6 processor and the XNU/Darwin 26 DisplayController dylib.
Neither Linux, Windows, nor Intel have shown any capability to plan and execute such a strategy outside of video codecs and network I/O cards. GPU hardware acceleration is tightly controlled and defended by AMD and Nvidia who want nothing to do with any shared strategy, and neither Microsoft nor Linux generally have shown any interest whatsoever in hardware-accelerating the core system to date — though one could theorize that the Xbox is exempt from that, especially given the Proton chip.
I imagine Valve will eventually do this, most likely working with AMD to get custom silicon that implements custom hardware accelerations inside the Linux kernel that are both open source for anyone to use, and utterly useless since their correct operation hinges on custom silicon. I suspect Microsoft, Nintendo, and Sony already do this with their gaming consoles, but I can’t offer any certainty on this paragraph of speculation.
x86 isn't able to keep up because x86 isn't updated annually across software and hardware alike. M1 is what x86 could have been if it were versioned and updated without backwards compatibility as often as Arm was. It would be like saying "Intel's 2026 processors all ship with AVX-1024 and hardware-accelerated DMA, and the OS kernel (and apps that want the full performance gains) must be compiled for its new ABI to boot on it". The wreckage across the x86 ecosystem would be immense, and Microsoft would boycott them outright to try and protect itself from having to work harder to keep up — just like Adobe did with Apple M1, at least until their userbase started canceling subscriptions en masse.
That's why there are so many Arm Linux architectures: for Arm, this is just a fact of everyday life, and that's what gave the M1 such a leg up on x86: not having to support anything older than your release date means you can focus on the sort of boring incremental optimizations that wouldn't be permissible in the "must run assembly code written twenty years ago" environment assumed by Lin/Win today.
Most probably it is not impacting Microsoft's sales?
FW just probably has shitty thermals.
It's macOS. The M series isn't that much better anymore. Just look at Asahi Linux: you get just as rubbish battery life there as with any Windows laptop.
+ What everyone else has already said about node size leads, specific benchmarks, etc
poverty
They haven’t beat the low morale out of their workforce yet.
To me it simply looks like Apple buys out the first year of every new TSMC node and that is the main reason why the M series is more efficient. Strix Halo (N4P) has, according to Wikipedia, a transistor density about 140 MTr/mm2, while the M4 (N3E) has about 210 MTr/mm2. Isn't the process node alone enough to explain the difference? (+ software optimizations in MacOS of course)
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
Change TDP, TDC, etc. and fan curves if you don't like the thermal behavior. Your Ryzen has low enough power draw that you could even just cool it passively. It has a lower power draw ceiling than your M1 Pro while exceeding it in raw performance.
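On Linux the usual tool for that is ryzenadj; a rough sketch (the 15W/25W values just mirror Framework's efficiency preset mentioned elsewhere in the thread and aren't a recommendation, and newer APUs may need a recent ryzenadj build):

    sudo ryzenadj --info                 # dump the current power and thermal limits
    # cap sustained (STAPM/slow) and boost (fast) power; values are in milliwatts
    sudo ryzenadj --stapm-limit=15000 --slow-limit=15000 --fast-limit=25000
    sudo ryzenadj --tctl-temp=85         # throttle a bit earlier than the default

The settings reset on reboot/resume, so people typically rerun them from a systemd unit or udev hook.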
Also comparing chips based on transistor density is mostly pointless if you don't also mention die size (or cost).
RISC vs. CISC. Why do you think a mainframe is so fast?
ARM is great. Those M machines are the only thing I could buy used and put Linux on.
I wonder what the difference is between the efficiency of the MacBook display and the Framework laptop's. While the CPU and GPU draw considerable power, they aren't usually working at 100% utilization. The display, however, has to use power all the time, possibly at high brightness in daytime. MacBooks (all of them?) have high-resolution displays, which should be much more power hungry than the Framework 13's IPS panel. Pro models use mini-LED, which needs even more power.
I did ask an LLM for some stats about this. According to Claude Sonnet 4 through VS Code (for what that's worth), my MacBook's display can consume the same or even more power than the CPU does for "office work". Yet my M1 Max 16" seems to last a good while longer than whatever it was I got from work this year. I'd like to know how those stats are produced (or whether they are hallucinated...). There doesn't seem to be a way to get the display's power usage on M-series Macs, so you'd need to devise a testing regime with the display off and with it at 100% brightness to get some indication of its effect on power use.
Battery efficiency comes from a million little optimizations in the technology stack, most of which comes down to using the CPU as little as possible. As such the instruction set architecture and process node aren't usually that important when it comes to your battery life.
If you fully load the CPU and calculate how much energy an AI 340 needs to perform a fixed workload and compare that to an M1, you'll probably find similar results, but that only matters for your battery life if you're doing things like Blender renders, big compiles, or gaming.
Take for example this battery life gaming benchmark for an M1 Air: https://www.youtube.com/watch?v=jYSMfRKsmOU. 2.5 hours is about what you'd expect from an x86 laptop, possibly even worse than the fw13 you're comparing here. But turn down the settings so that the M1 CPU and GPU are mostly idle, and bam you get 10+ hours.
Another example would be a ~5-year-old mobile Qualcomm chip. It's on a worse process node than an AMD AI 340, much much slower, with significantly worse performance per watt, and yet it barely gets hot and sips power.
All that to say: M1 is pretty fast, but the reason the battery life is better has to do with everything other than the CPU cores. That's what AMD and Intel are missing.
> If I open too many tabs in Chrome I can feel the bottom of the laptop getting hot, open a YouTube video and the fans will often spin up.
It's a fairly common issue on Linux to be missing hardware acceleration, especially for video decoding. I've had to enable gpu video decoding on my fw16 and haven't noticed the fans on youtube.
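If anyone wants to check the same thing, a rough sketch (the Chromium feature flag below is an example; the exact name has changed across versions, so verify against your build):

    # does the driver expose hardware video decode at all?
    vainfo                # from libva-utils; should list H.264/VP9/AV1 decode profiles

    # then check chrome://gpu for "Video Decode: Hardware accelerated";
    # if it's software only, try forcing VA-API, e.g.:
    google-chrome --enable-features=VaapiVideoDecodeLinuxGL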