Note: The method described in this article should only be used if you want a CUDA version that is not yet available in the cuda-maintainers cache; otherwise, follow this.
TL;DR: Save this flake, run `nix develop`, and set up PyTorch as described below.
CUDA is proprietary vendor lock-in for machine learning folks. Even so, training ML models with CUDA is dramatically faster than on a CPU thanks to massively parallel processing, so if you're doing anything serious, you have little choice besides CUDA as of this writing. That said, OpenAI's Triton and ZLUDA are worth keeping an eye on.
I had been struggling for a while to get CUDA working on my main NixOS box for some ML programming. There didn't seem to be any clear solutions on the NixOS forums, which mostly suggested suffering through painful build times. Here's my hacky, less Nix-y approach that takes about five minutes.
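A minimal sketch of what such a flake might look like, built around `buildFHSEnv`. The package set and `profile` here are assumptions for illustration; the actual flake linked in the post may differ:

```nix
{
  description = "FHS environment with CUDA for PyTorch";

  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }:
    let
      system = "x86_64-linux";
      pkgs = import nixpkgs {
        inherit system;
        config.allowUnfree = true; # CUDA is unfree
      };
    in {
      devShells.${system}.default = (pkgs.buildFHSEnv {
        name = "cuda-env";
        # Libraries that prebuilt PyTorch wheels expect to find in FHS paths
        targetPkgs = pkgs: with pkgs; [
          gcc
          zlib
          cudatoolkit
          linuxPackages.nvidia_x11
        ];
        profile = ''
          export CUDA_PATH=${pkgs.cudatoolkit}
        '';
      }).env;
    };
}
```

The `.env` attribute of the `buildFHSEnv` derivation is what makes it usable as a `devShells` output for `nix develop`.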
Got a good chuckle out of me. As for `buildFHSEnv`, how does it work? Does it use chroot and mount parts of `/nix/store` into it?
Ha! Yes. The `buildFHSEnv` function works by creating a separate namespace, akin to Docker containers but without complete separation from the host system (I hint at this in my post). It then mounts the libraries from `/nix/store` into this namespace.
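You can see this for yourself from inside the environment: the usual FHS paths exist but resolve back into the Nix store. An illustrative session (exact paths depend on your `targetPkgs`):

```
$ nix develop              # enter the FHS environment from the flake
$ ls /usr/lib64            # standard FHS paths now exist...
$ readlink -f "$(which gcc)"   # ...and resolve into /nix/store
/nix/store/...-gcc-.../bin/gcc
```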