btw QwQ can totally do multi-turn. Set it to 32k context and 16k output tokens so its thinking isn't cut off before it's done. llama.cpp has many more settings.
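For example, with llama-cli the launch would look roughly like this (just a sketch; the exact GGUF filename is an assumption, -c sets the context window, -n caps the output tokens, and -cnv enables multi-turn chat):
./llama-cli -m QwQ-32B-Preview-Q4_K_M.gguf -c 32768 -n 16384 -cnv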
Oh okay, I didn't know that.
I thought it couldn't do multi-turn because it's single-turn only in the QwQ Space ^^
Thanks a lot for the clarification!
Tetris is often my coding test, and they all struggle with it.
Yes, Tetris is quite difficult for LLMs. Only Claude 3.5 Sonnet and Qwen2.5 Coder 32B got it right in my tests. Even GPT-4o didn't get it in my test (but I think that's more down to luck).
Hey! Would it work with a 3060 Ti and 32 GB of RAM?
I mean, you can't fit the required 24 GB of VRAM on your graphics card, but hey, there's only one way to find out if it works, right?
@hatnis well, it was free to ask 😅
Yes, but you will have to offload a lot to your CPU/RAM.
It will run pretty slowly, but it will work 👍
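If you want to try it, the split is controlled in llama.cpp with something like this (a rough sketch; the filename and layer count are assumptions you'd tune to your 8 GB of VRAM):
./llama-cli -m QwQ-32B-Preview-Q4_K_M.gguf -c 32768 -ngl 20 -cnv
-ngl sets how many layers go to the GPU; the remaining layers stay in system RAM, which is what makes it slower.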
In the video, I ran it in my 24 GB of VRAM. I think it is Q4_K_M.
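Rough math, if it helps: Q4_K_M is about 4.8 bits (~0.6 bytes) per weight, so 32B parameters come to roughly 19-20 GB of weights, plus KV cache and overhead, which is why it just about fits in 24 GB.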
I was able to get it working on my new Mac mini, the base M4 Pro chip model, with the IQ3_XS quantization from bartowski's QwQ-32B-Preview-GGUF repo. It was the only one I could download, as it needs 13.71 GB of RAM. Note that because I am using a Mac mini, Apple's memory is unified, so my 24 GB of RAM is shared between the GPU and CPU. If I had spent an extra $300 on top of the $1.4k I paid for the M4 Pro model, I could have loaded the max quantization model, but I don't really do AI locally as I use online AI services more. I hope this helps!