• 30 Posts
  • 368 Comments
Joined 1 year ago
Cake day: July 2nd, 2023

  • With this GPU you can install a media server like Plex or Jellyfin and offload transcoding to the GPU, but mind you, you will still have high idle power consumption.

    Normally, for a headless home server, I would want virtualisation support and low idle power consumption, so this GPU and PSU are a bit of an overkill if you are not planning to fully utilise them. (A sketch of what the GPU hand-off looks like follows below.)
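
    As a minimal sketch of what "offloading transcoding to the GPU" means under the hood: media servers like Plex and Jellyfin end up invoking ffmpeg with hardware flags roughly like the ones below. This assumes an NVIDIA card, an ffmpeg build with NVENC support, and hypothetical file paths; it is an illustration, not the exact command either server issues.

    ```python
    # Sketch: hand a single transcode job to the GPU via ffmpeg's NVENC encoder.
    # Assumes ffmpeg was built with NVENC support and an NVIDIA driver is present;
    # the input/output paths are hypothetical placeholders.
    import subprocess

    def gpu_transcode(src: str, dst: str) -> None:
        """Decode on the GPU (CUDA) and encode with h264_nvenc, copying audio as-is."""
        cmd = [
            "ffmpeg",
            "-hwaccel", "cuda",    # GPU-accelerated decode
            "-i", src,
            "-c:v", "h264_nvenc",  # GPU-accelerated H.264 encode
            "-c:a", "copy",        # leave the audio stream untouched
            dst,
        ]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        gpu_transcode("movie.mkv", "movie_1080p.mp4")
    ```

    The CPU stays mostly idle during such a job, which is the whole point of the GPU here; the trade-off is the card's idle draw the rest of the time.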


  • And you, as an analytics engineer, should already know that? I use LLMs almost daily (Gemini, OpenAI, Mistral, etc.), and I know for sure that if you ask a question about a niche topic, the chances that the LLM hallucinates are much higher. To reduce hallucinations, you can also use different prompt-engineering techniques and ask a better question.

    Another very good question to ask an LLM is: what is heavier, one kilogram of iron or one kilogram of feathers? A lot of LLMs really struggle with this question, start hallucinating, and invent their own weird chain of reasoning, generating answers that sound completely credible but are factually wrong. (A small probe script is sketched at the end of this comment.)

    I still think that LLMs aren't a silver bullet for everything, but they really excel at certain tasks. We are also still in the honeymoon period of AI, much like with self-driving cars; I think at some point most people will realise that even this new technology has its limitations and will hopefully learn how to use it more responsibly.
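
    Here is a minimal sketch of the kind of probe described above, using the OpenAI Python SDK as one example; the model name and the wording of the system prompt are assumptions, and the same idea carries over to Gemini or Mistral. Both objects weigh exactly one kilogram, so any answer claiming one is heavier is a hallucination; the second call shows the simple prompt-engineering nudge mentioned earlier.

    ```python
    # Sketch: probe a model with the iron-vs-feathers question, with and without
    # a prompt-engineering nudge. Assumes OPENAI_API_KEY is set; the model name
    # is an assumption and can be swapped for any chat model.
    from openai import OpenAI

    client = OpenAI()

    QUESTION = "What is heavier, one kilogram of iron or one kilogram of feathers?"

    def ask(system_prompt: str | None = None) -> str:
        messages = []
        if system_prompt:
            messages.append({"role": "system", "content": system_prompt})
        messages.append({"role": "user", "content": QUESTION})
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name
            messages=messages,
            temperature=0,        # make the probe repeatable
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print("plain: ", ask())
        print("nudged:", ask("Reason step by step and double-check units before answering."))
    ```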