• 0 Posts
  • 42 Comments
Joined 1 year ago
Cake day: July 1st, 2023

  • Oobabooga is a pretty beginner-friendly solution for running LLMs locally. Models are freely available on Huggingface; look for GGUF quantizations, which are typically offered in a wide range of sizes so you can pick one that fits in your VRAM. If you fill your VRAM and start offloading to system memory, generation will be far slower (see the sketch at the end of this comment).

    I’ve had the best results with Noromaid20B and Rose20B quants running on a 16GB 4080. Don’t expect them to be as smart as GPT-4, but those models do a pretty good job of following instructions and writing decent prose.

    Once you mess around with Oobabooga a bit, I’d highly recommend picking up the SillyTavern front-end. Oobabooga runs the actual model, while SillyTavern manages characters and world lore and offers a wide range of other features, including a “visual novel” mode where you can set up character sprites that emote based on the content of the messages. It takes a while to get the hang of, but it’s pretty cool.
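    To make the GGUF/VRAM point above concrete, here’s a minimal sketch of the same idea using llama-cpp-python (assumed here purely for illustration; Oobabooga wires this up through its UI). The model filename is hypothetical, so substitute whatever quant you downloaded, and lower n_gpu_layers if it doesn’t all fit in VRAM.

    ```python
    # Minimal sketch: load a GGUF quant and offload layers to the GPU.
    # The filename below is hypothetical; use whatever quant fits your card.
    from llama_cpp import Llama

    llm = Llama(
        model_path="noromaid-20b.Q4_K_M.gguf",  # hypothetical file
        n_gpu_layers=-1,  # -1 = put every layer on the GPU; lower this if VRAM runs out
        n_ctx=4096,       # context window; bigger contexts also eat VRAM
    )

    out = llm("Write a short scene set in a rainy noir city.", max_tokens=200)
    print(out["choices"][0]["text"])
    ```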

  • I have no idea why you’re being downvoted since you’re 100% correct. I watch one video about gaming and YouTube’s recommendations are all alt-right anti-feminist stuff with Ben Shapiro and Jordan Peterson.

    Google surely knows enough about me to know I lean far-left, but the algorithm is determined to feed me that slop.

    I have no idea from a technical perspective if Odysee’s algorithm is independent from or worse than YouTube’s, but the criticism of YouTube is completely valid.


  • Neither Meta nor anyone else is hand-curating their dataset. The fact that Facebook is full of grandparents sharing disinformation doesn’t impact what’s in their model.

    But all LLMs are going to have accuracy issues because they’re 1) trained on text written by humans who are themselves often inaccurate and 2) designed to choose tokens based on probability rather than on any internal check of whether an answer is factual (the toy sketch below illustrates the second point).

    All LLMs are full of shit. That doesn’t mean they’re not fun or even useful in some applications, but you shouldn’t trust anything they write.
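    To make that second point concrete, here’s a toy sketch of probability-based token choice (not any real model’s code). The numbers are invented for illustration; a real LLM computes a distribution over its whole vocabulary from the prompt, but the selection step is still a weighted draw, not a fact check.

    ```python
    # Toy illustration only: the next token is drawn by weight,
    # with no notion of which continuation is actually true.
    import random

    next_token_probs = {
        "Paris": 0.62,     # plausible and correct
        "Lyon": 0.21,      # plausible but wrong
        "Atlantis": 0.17,  # fluent nonsense
    }

    tokens, weights = zip(*next_token_probs.items())
    print(random.choices(tokens, weights=weights, k=1)[0])
    ```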