Great coverage!
Can you do a ranking of the latest models for coding: Deepseek, Phi-4, Sky, Codestral?
I'll save you time: DeepSeek V2.5, which at Q8 quantization uses 270 GB of RAM, is the best; all the others are trash that can't repair a real code-logic problem. Benchmark leaderboards are nonsense.
The sign of a really bad model is that it writes out the full code, wasting your tokens (= money); a smart model writes only the portion to edit or change, saving time and tokens. Only DeepSeek did that.
@@fontenbleau hey, how can I use that specific model and size? Host it after downloading and use a hosting API key? How would Cline be set up to use it? Thanks
@@fontenbleau Codestral can do 30-40% of that shitcoding, I beg your pardon. 60% of that even in the diff format, and it's free.
@@techminimalist2k Gigabyte makes motherboards with 12 RAM slots for the enterprise market; they've been making them for years! And I use a 10-year-old model.
Thanks!
Thank you 💚, but please always put a link to the tool or webpage you're presenting in the description box for quick access
The guy is making content about the platform and refuses to put a link, probably because it's not a sponsor.
I checked just now; there is a link.
For Cline I'm getting an error like "422 status code (no body)"
Me too. I tried to replicate the endpoint he uses, and the two that are on the Codestral page. No dice. :(
Yes, me too, it doesn't work
Let me know if anyone figures it out
It's unreliable in Cline. You can use the LiteLLM endpoint by running a server as shown in the Bolt DIY segment.
Thank you, but I need the Cline or Roo Cline tool for my work
Excellent video!
This looks good, can't wait to try it :)
But I can't try it since the Mistral site seems to be down
DNS resolution issue
"It works very well" — and none of the front ends resemble a keyboard… you probably have a different concept of working well
You are absolutely amazing 🤩
Nice potential, thanks!
Good!
Singularity is near!
I'm unable to set it up with Cline; it says 422 (no body)
Cline is sometimes unreliable with the direct codestral endpoint. So, for more reliability in Cline, just start a LiteLLM server (as shown in the Bolt DIY segment) and then put the LiteLLM endpoint in Cline's OpenAI compatible option.
@@AICodeKing tysm
@@AICodeKing tysm
It makes sense that it would work with Cline. I tried with Continue and it couldn't produce any autocomplete.
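If autocomplete in Continue isn't triggering, note that Continue configures autocomplete through a separate `tabAutocompleteModel` entry, distinct from the chat model list. A minimal sketch for Continue's `config.json`, assuming the Mistral provider and field names from Continue's documentation (verify them against your Continue version, and note Codestral may need a dedicated key from the Codestral console rather than a regular Mistral API key):

```json
{
  "tabAutocompleteModel": {
    "title": "Codestral",
    "provider": "mistral",
    "model": "codestral-latest",
    "apiKey": "YOUR_CODESTRAL_API_KEY"
  }
}
```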
For Cline it shows "API request failed, 404 status code (no body)" and sometimes a 422 error code too, although it's working great with Continue and Aider — but please do something for Cline too
Cline is sometimes unreliable with the direct codestral endpoint. So, for more reliability in Cline, just start a LiteLLM server (as shown in the Bolt DIY segment) and then put the LiteLLM endpoint in Cline's OpenAI compatible option.
@@AICodeKing I did this and get a: INFO: 127.0.0.1:54695 - "POST /chat/completions HTTP/1.1" 401 Unauthorized
I exported the api key as described... ideas?
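For anyone hitting that 401: it usually means the client isn't sending the key the LiteLLM proxy expects. A minimal sketch of the setup, assuming LiteLLM's CLI and the `MISTRAL_API_KEY` variable name from its docs (verify both against your LiteLLM version; the Bearer token shown is a placeholder — if you started the proxy with a master key, the token must match it):

```shell
# Provider key that LiteLLM reads for mistral/* models (name assumed; check LiteLLM docs)
export MISTRAL_API_KEY="your-mistral-key"

# Start the LiteLLM proxy with an OpenAI-compatible endpoint
litellm --model mistral/codestral-latest --port 4000

# In another terminal: the client must send an Authorization header.
# If the proxy was started with a master key, use that exact key here.
curl http://0.0.0.0:4000/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-proxy-key" \
  -d '{"model": "mistral/codestral-latest", "messages": [{"role": "user", "content": "hi"}]}'
```

In Cline's OpenAI-compatible settings, point the base URL at `http://0.0.0.0:4000` and enter the same Bearer token as the API key.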
Very good
Does it work with Python?
Can it code in Next.js and Tailwind?
I'm asking whether you know it can code in those frameworks, or perhaps you can use Bolt? I haven't used Bolt yet. Thanks
Works perfectly, thank you!
Very good post once again!! Thank you for sharing. I'm trying to connect with Cline in VS Code and I'm getting the error "API request failed, 422 status code (no body)" — did anyone have that?
Bro, can you check MiniMax-01? Please
First 🥇
❤
Thanks!