I know that people are using P40 and P100 GPUs. These are outdated but still work with some software stacks and applications. The P40, once very cheap for the amount of VRAM, is no longer as cheap as it was, probably because folks have been picking them up for inference.

I’m getting a lot done with an NVIDIA GTX 1080, which has only 8GB of VRAM. I can run a quant of Dolphin Mixtral 8x7B and it works well enough. It takes minutes to load, which is almost too long for me, but after that I get 3-5 tokens per second with an acceptable delay between questions.
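
If anyone wants to try a similar setup, here’s a minimal sketch using llama-cpp-python with partial GPU offload. The GGUF filename and the layer count are placeholders, not exact files or settings; tune `n_gpu_layers` to whatever fits in your VRAM.

```python
# Minimal sketch: loading a Mixtral 8x7B GGUF quant with llama-cpp-python
# and offloading part of the model to the GPU. The model path and the
# n_gpu_layers value below are placeholders, not measured settings.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-mixtral-8x7b.Q2_K.gguf",  # hypothetical filename
    n_gpu_layers=8,   # offload only as many layers as 8GB VRAM allows
    n_ctx=4096,       # context window size
)

# Run a single completion and print the generated text.
out = llm("Explain what a GGUF quant is in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```

The nice part of partial offload is that anything that doesn’t fit on the GPU just spills to system RAM, so a model bigger than your VRAM still runs, only slower.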

I can even run Miqu quants at 2 or 3 bits per weight. It’s super smart even at those low quant levels.

Llama 3.1 8B runs great on this 8GB 1080 at Q4_K_M, and also at Q5_K_M or Q6_K_M. I believe I can even run Gemma 9B at 8 bpw (a Q8 quant).
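
To see why those quants fit in 8GB, here’s a rough back-of-the-envelope sketch: weight memory is roughly parameter count times bits per weight divided by 8. The bpw figures and the overhead allowance below are my assumptions, not measured numbers.

```python
# Rough VRAM estimate for GGUF quants: params (billions) * bpw / 8
# gives weight memory in GB; add a guessed overhead for KV cache and
# buffers. All figures here are assumptions, not measurements.
def est_vram_gb(params_b: float, bpw: float, overhead_gb: float = 1.0) -> float:
    weights_gb = params_b * bpw / 8
    return weights_gb + overhead_gb

for name, params, bpw in [
    ("Llama 3.1 8B Q4_K_M", 8.0, 4.8),  # Q4_K_M averages ~4.8 bpw
    ("Llama 3.1 8B Q6_K",   8.0, 6.6),  # Q6_K averages ~6.6 bpw
    ("Gemma 9B Q8_0",       9.0, 8.5),  # Q8_0 averages ~8.5 bpw
]:
    print(f"{name}: ~{est_vram_gb(params, bpw):.1f} GB")
```

By that rough math the Q4_K_M and Q6_K quants of an 8B model fit comfortably in 8GB, while a Q8 quant of a 9B model lands over the limit, which is where partial CPU offload comes in.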