• exuA · 10 months ago

    You’ll want to use a quantised model on your GPU. You could also run on the CPU and offload some layers to the GPU with llama.cpp (an option in oobabooga). llama.cpp uses models in the GGUF format.
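As a rough sketch, partial GPU offload with llama.cpp looks like this (the model filename and layer count are just placeholders; pick a layer count that fits your VRAM):

```shell
# -ngl / --n-gpu-layers offloads that many transformer layers to the GPU;
# the remaining layers run on the CPU, so you can use models bigger than your VRAM.
./llama-cli \
  -m models/mistral-7b-instruct.Q4_K_M.gguf \
  -ngl 20 \
  -p "Hello"
```

In oobabooga the same thing is the n-gpu-layers slider on the llama.cpp loader.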