Keyoxide: aspe:keyoxide.org:MWU7IK7RMUTL3AP6U6UWCF4LHY

  • 8 Posts
  • 136 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • A lot of the answers here are short or quippy, so here’s a more detailed take.

    LLMs don’t “know” how good a source is. They are word association machines, and they are very good at that. When you use something like Perplexity, an external API feeds the results of a search query into the LLM, which then summarizes that text in (hopefully) a coherent way. There are ways to reduce the hallucination rate and check the factual accuracy of sources, e.g. by comparing the generated text against authoritative information, but how much of that Perplexity et al. actually employ, I have no idea.
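
    Roughly, that retrieve-then-summarize flow looks like the sketch below. Everything in it (the `search_api` and `llm` callables, the prompt wording) is a hypothetical placeholder meant to show the shape of the pipeline, not anything Perplexity actually runs.

    ```python
    # Minimal sketch of a retrieve-then-summarize ("RAG") pipeline.
    # `search_api` and `llm` are hypothetical callables supplied by the caller,
    # not a real library API.

    def answer_query(question: str, search_api, llm) -> str:
        # 1. An external search API fetches documents relevant to the query.
        results = search_api(question)  # e.g. a list of {"url": ..., "text": ...}

        # 2. The retrieved text is stuffed into the prompt as numbered sources.
        context = "\n\n".join(
            f"[{i + 1}] {doc['url']}\n{doc['text']}" for i, doc in enumerate(results)
        )
        prompt = (
            "Answer the question using ONLY the sources below, citing them by number.\n\n"
            f"Sources:\n{context}\n\nQuestion: {question}"
        )

        # 3. The LLM is still just doing word association, but grounding it in
        #    retrieved text (and checking its output against those sources)
        #    is what keeps the hallucination rate down.
        return llm(prompt)
    ```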

  • Lol, there are smaller versions of Deepseek-r1. These aren’t the “real” Deepseek model; they’re other foundation models (Qwen2.5 and Llama3 in this case) that were distilled from the full R1’s outputs.

    For the 671b parameter model, the medium-quality quantized file weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing, and ideally ALL of it in VRAM (i.e. GPU memory) if you want it to generate anything quickly.

    For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b parameter version of Llama3 at Q4 quant (medium quality-ish), it’s a 40 GB file. It’ll run, but mostly on the CPU, and it generates ~0.85 tokens per second, so a good response takes 10-30 minutes. That’s fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB of VRAM each, that’d be 48 GB total; I could run the whole model in VRAM and it’d be very fast. The rough math is below.
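
    If you want to sanity-check those numbers, here’s the back-of-the-envelope version. The bits-per-weight values are rough assumptions for Q4-ish quants, not exact figures for any particular file.

    ```python
    # Rough memory and response-time math for quantized local models.
    # Bits-per-weight values are approximate assumptions, not exact quant specs.

    def model_size_gb(params_billions: float, bits_per_weight: float) -> float:
        """Approximate size of a quantized model in GB."""
        return params_billions * 1e9 * bits_per_weight / 8 / 1e9

    def response_minutes(tokens: int, tokens_per_second: float) -> float:
        """How long generating `tokens` takes at a given speed."""
        return tokens / tokens_per_second / 60

    print(model_size_gb(70, 4.5))        # ~39 GB, close to the 40 GB Llama3 70b Q4 file
    print(model_size_gb(671, 4.8))       # ~403 GB, roughly the 404 GB Deepseek-r1 quant
    print(response_minutes(1000, 0.85))  # ~20 min for a ~1000-token answer at 0.85 tok/s
    ```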


  • They’re probably referring to the 671b parameter version of deepseek. You can indeed self-host it, but unless you’ve got a server rack full of data-center-class GPUs, you’ll probably set your house on fire before it generates a single token.

    If you want a fully open-source model, I recommend Qwen 2.5 or maybe Deepseek v2. There’s also OLMo2, but I haven’t really tested it.

    Mistral Small 24b also just came out and is Apache-licensed. That’s something I’m testing now.