Hungary 🇭🇺🇪🇺

Developer behind the Eternity for Lemmy Android app.

@[email protected] is my old account; I migrated to my own instance in 2023.

  • 3 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • From what I’ve seen, it’s definitely worth quantizing. I’ve used Llama 3 8B (fp16) and Llama 3 70B (q2_XS). The 70B version was way better, even with this quantization, and it fits perfectly in 24 GB of VRAM. There’s also this comparison showing the quantization options and their benchmark scores:

    [Image: comparison of quantization options and their benchmark scores]

    Source

    To run this particular model, though, you would need about 45 GB of RAM just for the q2_K quant, according to Ollama. I think I could run this with my GPU and offload the rest of the layers to the CPU, but the performance wouldn’t be that great (e.g. less than 1 t/s). Two rough sketches of the memory arithmetic and the offload setup follow below.
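
    As a sanity check, here’s a minimal Python sketch of that memory arithmetic. The bits-per-weight figures are my own ballpark assumptions for common GGUF quants, not exact numbers from Ollama:

    ```python
    # Back-of-the-envelope estimate: the weights dominate, so
    # size ≈ parameters × bits-per-weight / 8, plus some headroom
    # for the KV cache and runtime buffers.

    BITS_PER_WEIGHT = {  # assumed effective bits per weight
        "fp16": 16.0,
        "q8_0": 8.5,
        "q4_K_M": 4.8,
        "q2_K": 2.6,
    }

    def estimate_gb(params_billion: float, quant: str, overhead_gb: float = 2.0) -> float:
        """Approximate gigabytes needed just to hold the model in memory."""
        weights_gb = params_billion * BITS_PER_WEIGHT[quant] / 8
        return weights_gb + overhead_gb

    print(f"Llama 3 8B fp16:  ~{estimate_gb(8, 'fp16'):.0f} GB")   # ~18 GB
    print(f"Llama 3 70B q2_K: ~{estimate_gb(70, 'q2_K'):.0f} GB")  # ~25 GB
    ```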
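
    And for the offload itself, a minimal sketch assuming a local Ollama server: the num_gpu option caps how many layers go to VRAM, and the rest run on the CPU. The model tag and layer count here are placeholders, not a tested config:

    ```python
    import requests

    # Ask a local Ollama server to generate with partial GPU offload.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3:70b",       # placeholder tag
            "prompt": "Say hello.",
            "stream": False,
            "options": {"num_gpu": 40},  # layers offloaded to the GPU
        },
    )
    print(resp.json()["response"])
    ```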

  • I discovered a channel called ‘Just Alex’ a while ago, and I’ve binge-watched most of his videos. He makes videos about a variety of activities he engages in. For example, two years ago he started beekeeping as a hobby. In these videos, he shows the progress he has made since then, taking care of his bees and harvesting their honey.

    He also has videos on collecting mushrooms, growing crops, and traveling around Europe.

    What I like about him is how calming his videos are and the amount of effort he puts into making them interesting, without resorting to clickbait content.