Let's test QwQ, the new open-source alternative to o1

  • Published: 20 Dec 2024

Comments • 11

  • @UCs6ktlulE5BEeb3vBBOu6DQ 9 days ago +1

    btw QwQ can totally do multi-turn. Set it to 32k context and 16k output tokens so its thinking isn't cut off before it's done. llama.cpp has many more settings (see the sketch just after this thread).

    • @volkovolko 9 days ago

      Oh okay, I didn't know that.
      I thought it couldn't do multi-turn because it's single-turn only in the QwQ Space ^^
      Thanks a lot for the clarification!
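
For anyone who wants to reproduce that multi-turn setup locally, here is a minimal sketch using the llama-cpp-python bindings. The model filename is a hypothetical local path, and the 32k-context / 16k-output numbers simply mirror the advice in the comment above; treat it as a starting point, not an official configuration.

# Minimal sketch, assuming llama-cpp-python is installed and a QwQ GGUF file
# is available locally (the path below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="QwQ-32B-Preview-Q4_K_M.gguf",  # hypothetical local path
    n_ctx=32768,       # 32k context so the long reasoning trace fits
    n_gpu_layers=-1,   # offload every layer to the GPU if VRAM allows
)

# Multi-turn chat: keep appending to the same messages list.
messages = [{"role": "user", "content": "Write a Tetris game in Python."}]
reply = llm.create_chat_completion(messages=messages, max_tokens=16384)
messages.append(reply["choices"][0]["message"])
messages.append({"role": "user", "content": "Now add a scoring system."})
reply = llm.create_chat_completion(messages=messages, max_tokens=16384)
print(reply["choices"][0]["message"]["content"])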

  • @UCs6ktlulE5BEeb3vBBOu6DQ 9 days ago +1

    A Tetris game is often my coding test, and they all struggle with it.

    • @volkovolko 9 days ago

      Yes, Tetris is quite difficult for LLMs. Only Claude 3.5 Sonnet and Qwen2.5 Coder 32B got it right in my tests. Even GPT-4o didn't get it in my test (but I think that was more down to luck).

  • @SoM3KiK 12 days ago +1

    Hey! Would it work with a 3060 Ti and 32 GB of RAM?

    • @hatnis 11 days ago

      I mean, you can't fit the required 24 GB of VRAM on your graphics card, but hey, there's only one way to find out if it works, right?

    • @SoM3KiK 11 days ago +2

      @hatnis Well, it was free to ask 😅

    • @volkovolko 10 days ago

      Yes, but you will have to offload a lot to your CPU/RAM.
      It will run pretty slowly, but it will work 👍

    • @volkovolko 10 days ago

      In the video, I ran it in my 24 GB of VRAM. I think it is Q4_K_M (there's a rough size estimate sketched after this thread).

    • @Timely-ud4rm 10 days ago

      I was able to get it working on my new Mac mini with the base M4 Pro chip, using the QwQ-32B-Preview-GGUF from the bartowski repo, IQ3_XS quantization. That was the only one I could download, since it is 13.71 GB.
      Note that because I am using a Mac mini, Apple's RAM is unified, so my 24 GB of RAM is shared between the GPU and CPU.
      If I had spent an extra $300 on top of the $1.4k I paid for the M4 Pro model, I could have loaded the max-quantization model, but I don't really do AI locally as I mostly use online AI services. I hope this helps!
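
To put rough numbers on the VRAM questions in this thread, here is a back-of-the-envelope sketch. The parameter count, layer count, and bits-per-weight figures are approximations I am assuming, and the estimate ignores the KV cache and runtime overhead, so treat the output as a ballpark only.

# Rough sketch: estimate how large a quantized QwQ-32B GGUF is and how many
# layers might fit in a given amount of VRAM. All constants are approximate.
PARAMS = 32.8e9                                      # assumed parameter count of QwQ-32B
N_LAYERS = 64                                        # assumed transformer layer count
BPW = {"Q4_K_M": 4.85, "IQ3_XS": 3.3, "Q8_0": 8.5}   # approximate bits per weight

def estimate(quant: str, vram_gb: float) -> None:
    size_gb = PARAMS * BPW[quant] / 8 / 1e9          # weights only, no KV cache
    gpu_layers = min(N_LAYERS, int(N_LAYERS * vram_gb / size_gb))
    print(f"{quant}: ~{size_gb:.1f} GB of weights; with {vram_gb} GB of VRAM, "
          f"roughly {gpu_layers}/{N_LAYERS} layers fit on the GPU")

estimate("Q4_K_M", 24.0)   # the 24 GB card used in the video
estimate("Q4_K_M", 8.0)    # a 3060 Ti: most layers spill to CPU/RAM
estimate("IQ3_XS", 24.0)   # the ~13.7 GB quant mentioned for the Mac mini

Running this suggests the Q4_K_M weights come to roughly 20 GB, which lines up with the "needs ~24 GB of VRAM" point above, while the IQ3_XS file lands near the 13.71 GB figure from the Mac mini comment.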