Sounds like it’s time to switch out the 1080ti for a 9070xt. Been almost 10 years, probably due for an upgrade.
I will miss having that CUDA compatibility on hand for matlab tinkering. I wonder if any translation layers are working yet?
I’ve heard https://github.com/vosen/ZLUDA is doing pretty well.
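If the README is still accurate, trying it on Linux is mostly a library-path swap: ZLUDA ships a drop-in replacement for the CUDA runtime libraries, so you point the loader at its directory in front of your CUDA app. Something like this (path is illustrative, check the project’s README for the current instructions):

```
LD_LIBRARY_PATH="/opt/zluda:$LD_LIBRARY_PATH" ./your-cuda-app
```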
Looks cool, thanks for the link. I’ll give it a go.
Those are the GPUs they were selling — and a whole lot of people were buying — until about five years ago. Not something you’d expect to suddenly be unsupported. I guess Nvidia must be going broke or something, they can’t even afford to maintain their driver software any more.
I don’t get what needs support, exactly. Maybe I’m not yet fully awake, which tends to make me stupid. But the graphics card doesn’t change. The driver translates OS commands to GPU commands, so if the target is not moving, changes can only be forced by changes to the OS, which puts the responsibility on the Kernel devs. What am I missing?
The driver needs to interface with the OS kernel which does change, so the driver needs updates. The old Nvidia driver is not open source or free software, so nobody other than Nvidia themselves can practically or legally do it. Nvidia could of course change that if they don’t want to do even the bare minimum of maintenance.
The driver needs to interface with the OS kernel which does change, so the driver needs updates.
That’s a false implication. The OS just needs to keep the interface to the kernel stable, just like it has to with every other piece of hardware or software. You don’t just double the current you send over USB and expect cable manufacturers to adapt. As the consumer of the API (which the driver is from the kernel’s point of view) you deal with what you get and don’t make demands to the API provider.
Device drivers are not like other software in at least one important way: they have access to, and depend on, kernel internals which are not visible to applications, and they need to be rebuilt when those change. Something as huge and complicated as a GPU driver depends on quite a lot of them. The kernel does not provide a stable binary interface for drivers, so they frequently need to be recompiled to work with new versions of Linux, and then less frequently the source code also needs modification as things are changed, added to, and improved.
This is not unique to Linux, it’s pretty normal. But it is a deliberate choice that its developers made, and people generally seem to think it was a good one.
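To make that concrete, here’s roughly the smallest possible out-of-tree module, as a sketch of the general mechanism rather than anything Nvidia-specific. The build recipe points at the headers of the exact kernel you’re currently running; since there’s no stable module ABI, every kernel upgrade means a rebuild (which is what DKMS automates for the Nvidia driver):

```c
/* hello.c - minimal out-of-tree kernel module */
#include <linux/module.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    pr_info("hello: loaded\n");
    return 0;
}

static void __exit hello_exit(void)
{
    pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```

```make
# Makefile - kbuild builds against one specific kernel tree, here the
# headers of the currently running kernel (recipe line starts with a tab)
obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules
```

The resulting hello.ko gets stamped with the kernel’s version string (“vermagic”) and will refuse to load on any other kernel, which is why a closed-source driver needs ongoing vendor involvement.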
They have access to and depend on kernel internals
That sounds like a stupid idea to me. But what do I know? I live in the ivory tower of application development where APIs are well-defined and stable.
Thanks for explaining.
You’re re-opening the microkernel vs. monolithic kernel debate with that. For fun, you can read how Andrew S. Tanenbaum and Linus Torvalds debated the question in 1992 here: https://groups.google.com/g/comp.os.minix/c/wlhw16QWltI
I don’t generally disagree, but
You don’t just double the current you send over USB and expect cable manufacturers to adapt
That’s pretty much how we got to the point where USB is the universal charging standard: by progressively pushing the allowed current from the initially standardized 100 mA all the way to today’s 5 A. A few of those pushes were just manufacturers winging it, pushing or pulling significantly more current than the standard allowed and assuming the other side would adapt.
The default standard power limit is still the same as it ever was on each USB version. Negotiation needs to happen to tell the device how much power is allowed, and if you go over, I think over-current protection is part of the USB spec for safety reasons. There are a bunch of different protocols, but USB always starts at 5 V and 0.1 A for USB 2.0, and devices need to negotiate for more (0.15 A, I think, for USB 3.0, which has more conductors).
As an example, under USB 2.0 a charger can signal that it’s a charging port (5 V / 1.5 A max) by putting a 200 ohm resistor across the data pins.
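To illustrate the in-spec side of that negotiation: a USB 2.0 device declares how much current it wants in the bMaxPower field of its configuration descriptor, in 2 mA units, and the host either accepts the configuration or doesn’t. A minimal sketch (field names are from the USB 2.0 spec; the interface/endpoint descriptors that would normally follow are omitted):

```c
#include <stdint.h>

/* Simplified USB 2.0 configuration descriptor (USB 2.0 spec, section 9.6.3). */
struct usb_config_descriptor {
    uint8_t  bLength;             /* descriptor size: 9 bytes */
    uint8_t  bDescriptorType;     /* 2 = CONFIGURATION */
    uint16_t wTotalLength;        /* total length incl. interface/endpoint descriptors */
    uint8_t  bNumInterfaces;
    uint8_t  bConfigurationValue;
    uint8_t  iConfiguration;
    uint8_t  bmAttributes;        /* bit 7 must be set; bit 6 = self-powered */
    uint8_t  bMaxPower;           /* max current draw, in 2 mA units */
} __attribute__((packed));

static const struct usb_config_descriptor cfg = {
    .bLength             = 9,
    .bDescriptorType     = 2,
    .wTotalLength        = 9,     /* just this descriptor, for the sketch */
    .bNumInterfaces      = 1,
    .bConfigurationValue = 1,
    .iConfiguration      = 0,
    .bmAttributes        = 0x80,  /* bus-powered */
    .bMaxPower           = 250,   /* 250 * 2 mA = 500 mA, the USB 2.0 maximum */
};
```

Until the host accepts that configuration, the device is only entitled to the 100 mA default.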
The default standard power limit is still the same as it ever was on each USB version
Nah, the default power limit started at 100 mA, or 500 mA for “high-power” devices. Very few devices out there today limit their current draw to that amount.
It all began with non-spec host ports that just pushed however much current the circuitry could muster, rather than only the required 500 mA. Some had a proprietary way to signal how much they were willing to push (this is why iPhones used to be very fussy about the charger you plugged them into), but most cheap ones didn’t. Then all the device manufacturers started pulling as much current as the host would provide, rather than limiting themselves to 500 mA. USB-BC was mostly an attempt to standardize some of the existing usage, and USB-PD came much later.
I wasted days of my life getting nVidia to work on Linux. Too much stress. Screw that. Better ways to spend time. If I can’t game, that’s OK too.
I switched from a 3080 to a 7900 xt. It’s one of the better decisions I’ve made, even though on paper the performance isn’t far apart.
According to the Steam HW survey, around 6% of users are still using Pascal (10xx) GPUs. That’s about 8.4 million GPUs losing proprietary driver support. What a waste.
GPU: %
1060: 1.86
1050ti: 1.43
1070: 0.78
1050: 0.67
1080: 0.5
1080ti: 0.38
1070ti: 0.24

Fixed: the 1050 was previously noted as 1050ti.
8.4 million GPUs losing proprietary driver support.
Are they all on Linux though?
Are they supported longer by the Windows driver?
Apparently? The title only mentions dropping support on Linux. 🤷‍♂️
You don’t have to update your drivers though, isn’t this normal with older hardware?
You don’t have to update your drivers though.
Not sure if you’re on Windows or Linux, but on Linux we have to take explicit action not to upgrade something while upgrading the rest of the system. It takes real effort to hold back a specific package, especially when a change like this comes in a sneaky way that’s hard to judge from the version number alone.
On Windows you’d be in a situation like “oh, I forgot to update the drivers for three years, well that was lucky.”
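For example, on Arch (which comes up later in this thread) holding a package back means an IgnorePkg line in /etc/pacman.conf; Debian/Ubuntu have apt-mark hold. Package names below are illustrative:

```
# /etc/pacman.conf
IgnorePkg = nvidia nvidia-utils nvidia-settings

# Debian/Ubuntu equivalent (package name illustrative):
#   sudo apt-mark hold nvidia-driver-535
```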
It makes me wonder why the package still auto-updates if it can detect you’re using the driver that would be removed. Surely it could do some checks first?
Would be vastly preferable to it just breaking the system.
It would be a very out-of-scope feature for a Linux package manager to do a GPU hardware check and a kernel-module-in-use check to see whether you’re actually running the installed driver, and then somehow detect that the downloaded, about-to-be-installed binary removes support for your hardware.
It just seems very difficult to begin with, and it’s especially not the responsibility of a general-purpose package manager as found on Linux.
On Windows, surely the Nvidia software should perform this detection and prevent the upgrade. That would be its responsibility. But it’s just not how it is done on Linux.
It’s not the package itself that “auto updates”. The package manager just updates all the packages that have updates available, that’s it.
But still, the system doesn’t really “break”. All you have to do is downgrade the package, then add a rule preventing it from being updated, until Nvidia or the Arch package maintainers add a new package that carries only that legacy driver’s latest version and won’t be upgraded again.
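On Arch that recovery is a couple of commands, assuming the previous packages are still in /var/cache/pacman/pkg (filenames illustrative):

```
# put back the last version that still supported the card
sudo pacman -U /var/cache/pacman/pkg/nvidia-<old-version>-x86_64.pkg.tar.zst \
               /var/cache/pacman/pkg/nvidia-utils-<old-version>-x86_64.pkg.tar.zst

# then add the IgnorePkg line shown earlier so -Syu leaves it alone
```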