Do you think it will be possible to run GNU/Linux operating systems on Microsoft’s brand new “Copilot+ PCs”? They were unveiled just yesterday, and honestly, the sales pitch is quite impressive! A Verge article on them: Link
Wow. That settles the discussion pretty quickly…
I’m not sure about the translation layer… Aren’t there things like qemu and box64? And multiarch support is part of most Linux distributions these days; I always thought it’s just a few commands to make your system execute foreign binaries. I mean, I’ve only ever tried cross-compiling for ARM and running 32-bit games on amd64, so I don’t know that much. In the end I don’t use that much proprietary software, so it’s not really an issue for me. More than 99% of the Linux software I use is available for ARM. But I can see how that’d be an issue for a gamer, regardless of whether the operating system is Windows, Linux, or macOS.
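For reference, on a Debian-family system the “few commands” are roughly installing qemu-user-static (plus dpkg --add-architecture if you also want multiarch packages); the kernel’s binfmt_misc then hands foreign-architecture binaries to qemu automatically. Here’s a minimal Python sketch (just reading /proc, assuming only that binfmt_misc is mounted) to see which handlers are registered:

```python
# Minimal sketch: list binfmt_misc handlers, which is how qemu-user-static
# lets the kernel transparently run foreign-architecture binaries.
from pathlib import Path

BINFMT_DIR = Path("/proc/sys/fs/binfmt_misc")  # kernel's binfmt_misc interface

def list_handlers():
    """Print each registered handler and the interpreter it dispatches to."""
    if not BINFMT_DIR.is_dir():
        print("binfmt_misc not mounted; foreign binaries won't run transparently")
        return
    for entry in sorted(BINFMT_DIR.iterdir()):
        if entry.name in ("register", "status"):
            continue  # control files, not handlers
        text = entry.read_text()
        interp = next((line.split(maxsplit=1)[1]
                       for line in text.splitlines()
                       if line.startswith("interpreter")), "?")
        print(f"{entry.name}: {interp}")

if __name__ == "__main__":
    list_handlers()
```

On a box with qemu-user-static installed you’d expect entries like qemu-aarch64 or qemu-arm pointing at the qemu interpreter binary.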
And I’m not really interested in the AI coprocessor itself. The real question for me is: can it do LLM inference as fast as an M2/M3 MacBook? For that it’d need RAM connected via a wide bus. And then there’s the question of what a machine with 64 GB of RAM would cost. That’s the major drawback with a MacBook: they get super expensive if you want a decent amount of RAM.
Yeah, but they’re experimental and probably very buggy. I’ve used box64 on my phone; it doesn’t play well with everything.
It should be better at AI stuff than M-series laptops, allegedly. Many manufacturers have actually started listing prices for the new laptops; the Microsoft ones start at 16 GB of RAM for $1,000. I know the Lenovo one can reach 64 GB of RAM, but I’m not sure about the pricing.
By the time Snapdragon X Elite devices are broadly available, you’ll probably have to compare them against the M4. Apple specifies the M4’s NPU at 38 TOPS while Qualcomm specifies the Snapdragon X Elite at 45 TOPS, but I wouldn’t bet on these numbers being directly comparable (just like TFLOPS from different GPU manufacturers).
The M4 also made quite a big jump in single-core performance, and multi-core performance seems to be comparable to what the X Elite can achieve, unless we’re talking about its 80-watt mode, but then we’d have to take Apple’s “Pro” and “Max” chips into account. Keep in mind that the current M4 performance figures come from a 5 mm thick, passively cooled device. It will be interesting to see whether Qualcomm releases bigger chips on this architecture.
Price is obviously where the X Elite could shine, as there’ll be plenty of devices to choose from (once they’re actually broadly available). And if you need anything above the base models (which at Apple mostly start at 8 GB of RAM and a 256 GB SSD), you’ll likely pay a lot less for upgrades compared to Apple’s absolutely ridiculous upgrade pricing. Price to performance might be very good here.
If and when Linux distributions start seamlessly supporting x86 apps on ARM, I’ll be interested in a thin and light ARM device, if it really turns out to be that much more energy efficient than x86 chips. Most comparisons use Intel as the reference for x86 efficiency, but AMD has a decent lead here, and I feel like it’s not as far off from ARM chips as the marketing makes it seem. So for the time being, I think going with something like an AMD Ryzen 7840U/8840U is the way to go for the broadest Windows/Linux compatibility while still getting decent efficiency.
Hmm. I can’t really make an informed statement. I can’t fathom qemu being experimental; it’s like a 20-year-old project used by lots of people. I’m not sure. And I’ve yet to try Box64.
I looked it up. The Snapdragon X Elite “supports up to 64GB LPDDR5, with 136 GB/s memory bandwidth”, while the Apple M2/M3 have anywhere from 100 GB/s of memory bandwidth in the base models up to 150, 300, or 400 GB/s (800 GB/s in the Ultra). And a graphics card has something like ~300 to ~1000 GB/s.
(Of course that’s only relevant for running large language models.)
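To put rough numbers on it: token generation is mostly memory-bandwidth bound, so a ceiling estimate is tokens/s ≈ memory bandwidth / bytes of weights read per token. A quick back-of-the-envelope sketch using the bandwidth figures above (the model sizes assume ~4-bit quantization and are purely illustrative):

```python
# Rough ceiling estimate: tokens/s ≈ memory bandwidth / model weight size,
# since every generated token has to stream the weights through memory.
# Ignores compute, caching, and overhead, so real numbers are lower.

bandwidth_gb_s = {
    "Snapdragon X Elite": 136,
    "Apple M2/M3 (base)": 100,
    "Apple M3 Pro": 150,
    "Apple M3 Max": 300,   # 400 on the higher-binned variant
}

model_size_gb = {
    "8B @ 4-bit": 8 * 0.5,    # ~4 GB of weights
    "70B @ 4-bit": 70 * 0.5,  # ~35 GB of weights
}

for chip, bw in bandwidth_gb_s.items():
    for model, size in model_size_gb.items():
        est = bw / size  # theoretical upper bound
        print(f"{chip:>20} | {model:>11} | ~{est:5.1f} tokens/s (ceiling)")
```

By that rough math the X Elite lands between the base and Pro M-series chips, and well below the Max/Ultra, which is why the memory bus matters more than the NPU TOPS for this use case.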