I have been using DeepSeek releases for over 9 months. The results have been great the whole time and keep getting better and better. I run all the Qwen-based DeepSeek R1 models locally on my Linux PC and they are all great. The 1.5B model works fantastically when you use it in the q16 variant. It is really killer. Inference is not very fast since I am running all the models (from 1.5B up to 32B) on my Ryzen 5 8600G CPU WITHOUT a dedicated GPU. The CPU uses up to 40GB of my 64GB of RAM for the 32B model. With good prompting the results are fantastic and save me hours of work every day. The dynamic memory allocation of the 8600G is great and allows me to run powerful LLMs on a small budget. My PC cost me $900.
wait, you're able to run a 32B model on just your CPU? i have an RTX 4060 Ti with 16 GB of VRAM and I'm scared to download a 32B model 😅
@@Aurelnpounengong The Ryzen 5 8600G has a GPU on the processor and can use system memory as VRAM, just much more slowly (40GB out of the 64GB of system memory). He provided the details to research the parts you don't understand.
really? all this cost you $900? 64GB RAM?
@@rhadiem ahhh I see, I did not know it used system memory as VRAM. I also have 64GB of DDR4 memory; do you think I'll be able to run a 32B model with my graphics card and some of it offloaded to system memory?
@@Aurelnpounengong It will run, just slow. I can run a 32b on my 4090, but anything larger and it has to swap in and out of memory which is painful.
I am so glad I have encountered this series. This is real gold. Thank you so much for the effort. Looking forward to the next episode!
I've also had luck getting the model to reflect by: reversing the calculation (math), writing the documentation while it codes, and writing a tutorial while it codes.
this is one of the best videos I have seen in some time Chris!
Awesome, so glad you’ve seen similar results
Chris, this is great. The math training is cool. What we need is a set of small coding-trained models that are experts in the top programming languages. Start with Python and JavaScript, HTML and CSS. You get the idea. Then everyone can have a set of these models for the languages they use.
Thanks for answering all the basic questions I had. Great teaching style, even for the non-programmer.
Glad it was useful, I had a lot of fun making this video
R1 has five main advantages: *1) it gives you the reasoning behind its thoughts, so if you spot a mistake you can tell it to correct it; 2) it is much more DEPLOYABLE, nothing short of a "first personal computer (PC)" moment!! You don't have to have a huge data center or a large number of GPUs to run it; in fact, you can even run it on your phone without internet; 3) it is cheaper and faster than o1; 4) most of all, it is free; 5) it is open source, so you can edit and update it any way you like.*
Any one of the reasons above would be a game changer by itself, but combine all five and you get a stock crash like yesterday's.
this is my official go-to YouTube channel, thanks man for these videos
thank you, glad it's useful
It would be awesome if you did a tutorial on fine tuning a reasoning model with tool calling abilities
That is a really good shout, I will do that
Yes that would be awesome !
yes! i want to train a model to use Z3 when doing logical reasoning. very powerful solver
Hey Chris, Great video. Really enjoy the way you teach. Keep up the good work. Can't wait for your next video on RLHF.
Excellent! Bravo! I am spending hours analyzing how DeepSeek-R1 32B works with my 4090. I am getting amazing results every day...
Thanks for the reply Chris. I was able to run your code. I had to make slight adjustments as I am on Windows / RTX 4090 but finally I have my aha moment. I was able to train and infer from my first reasoning model. THANKS once again for the tutorial.
Awesomeeeeeee, it’s really a great feeling when you train the model and it’s reasoning and doing better than much bigger models, glad it helped. I’m hoping to have, for the next video, a dual training setup that works for both Mac and Windows
I may no longer be at IBM, but I was curious to hear your thoughts on DeepSeek. Very insightful video, thanks!
Why did you leave IBM?
@@hareram4233 Ha - that's for a chat over a pint...
Keep doing helpful videos Chris 😊
Always, glad it was useful, I was particularly happy with this one
This video has been amazing. I look forward to the RL video.
RL video is cool, i promise, i just can't record as i'm sick at the moment, frustrating
@@chrishayuk Get well soon buddy!
thank you, just a cold or a flu or something, but frustrating. appreciate the well wishes
Brilliant work! Yes, i do remember you mentioning that o1 was MCTS and R1 was not. I agreed with you that R1 was surely not; it will be exciting to see if o1 or o3 used similar techniques or used MCTS!
I’m 100 percent convinced that o1 is using search (specifically MCTS) at inference time, and I’m 100% convinced that R1 will do the same in a future release when they figure it out. But the results they’ve gotten without it are pretty incredible
@chrishayuk It just blows my mind every time I think about it still! That one can converge through search or learning at these endpoints so long as one is bootstrapped with some notion of correctness! Your demo was incredible work. thanks again.
thank you, yeah, i came up with the concept of getting the compiler to do the calc and the ai to do the explanation a while back, i think i did a video on this in june 2024. so it seemed a natural fit when i saw the long chain of thought coldstart piece from deepseek. felt like a good merge. i was also blown away by how good the results were
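For anyone who wants to try that idea themselves, here's a minimal sketch of the "compiler does the calc, model writes the explanation" approach; the field names, the <think> tags, and the toy multiplication task are my own placeholders, not necessarily what the video uses:
'''python
import json
import random

def make_example():
    # Python does the arithmetic (the "compiler"), so the target is always correct;
    # the template provides the long chain-of-thought explanation.
    a, b = random.randint(10, 99), random.randint(10, 99)
    tens, ones = (b // 10) * 10, b % 10
    thought = (
        f"I need to multiply {a} by {b}. "
        f"I'll split {b} into {tens} + {ones}. "
        f"{a} x {tens} = {a * tens}, and {a} x {ones} = {a * ones}. "
        f"Adding them: {a * tens} + {a * ones} = {a * b}."
    )
    return {
        "prompt": f"What is {a} * {b}?",
        "completion": f"<think>{thought}</think>\nThe answer is {a * b}.",
    }

# Write a small cold-start dataset as JSONL
with open("coldstart.jsonl", "w") as f:
    for _ in range(1000):
        f.write(json.dumps(make_example()) + "\n")
'''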
Great video. Looking forward to RL video.
Coming soon!
Amazing. Thank you so much for this.
awesome, glad it was useful
hello Chris Hay!
this is crazy, you made this amazing tutorial. that's mind blowing. while OpenAI is closed, the open source community is actually building it openly for the community. companies like DeepSeek are validation and inspiration, but the community is doing its own discovery. you are very inspiring as well.
thanks again for a wonderful video
Thank you, I appreciate it, I was pretty pleased with this one, glad it’s useful
@@chrishayuk we might not need MoE now, as we only need cold start data for different tasks:
1. function calling
2. coding
3. summarization
4. role play
5. NLQ and others
we can do this on Colab as it's 1.5B, it's going to be crazy
It’s cool right
Thank you Chris. I am hoping I will be able to replicate this on my old windows laptop. I want to be able to train a base model from scratch like you did here.
One cool addition: I use the TwinMind AI on-screen assistant to explain what you're doing exactly, as I watch the video. (It reads the transcript, I'm guessing.)
Anyway it makes understanding the topic far easier.
oooh, that sounds pretty sweet
Great content. Unsloth is an excellent framework for training. You can create the same or potentially better CoT reasoning using an advanced system prompt in an Ollama Modelfile and quickly turn most Ollama-supported models into reasoning models using the Ollama create command. I’ve been using the technique for about a month now and it works surprisingly well. No QLoRA training required. The outputs are very similar to DeepSeek R1. My most recent success was using this technique on the most recent Mistral-Small LLM. Wondering if anybody else has figured this out or achieved similar results with reasoning system prompts.
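For anyone wanting to try that route, here's a minimal sketch of such a Modelfile; the base model tag and the system prompt wording are my own placeholders, not the commenter's actual prompt:
'''
# Modelfile (sketch)
FROM mistral-small
SYSTEM """
You are a careful reasoning assistant. Before answering, think step by step
inside <think> ... </think> tags, then give a short final answer.
"""
'''
Build and run it with `ollama create my-reasoner -f Modelfile` followed by `ollama run my-reasoner`.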
Thank you for a great presentation, especially for your explanation and examples of the 'cold start' part. The 'Incentivizing' paper and the technical report are heavy going, especially the reinforcement learning algorithm. When will you have a video out explaining the RL algorithm?
thank you, yeah the RL video will be soon, sick at the moment, frustrating, but i'm pretty pleased with where the RL video will be
👍can't wait for the RL part, BTW, can you share the prompt as well?
the rl video is coming, just sick at the moment, so can't record, frustrating
@@chrishayukSorry to hear that. Hope you recover quickly! Rest up and take care.
@@waneyvin just a cold or a flu or something, but frustrating. appreciate the well wishes
Good work. One tiny suggestion: maybe try using word-wrap for long lines, for better readability when watching the video.
From a creator's point of view, I am interested in knowing how you manage to superimpose the screen recording over yourself speaking in the background! The video is quite informative, of course.
Fantastic, well done
Thank you! Cheers!
Very informative video, I look forward to the next one. I am currently running the 32B version of R1 and I asked it about persistence of what it learned during our session and it said that unless I saved the session and fed it back, it was lost. It suggested using:
'''bash
ollama generate --model your_model_name | tee chat_history.txt
'''
Is there any other way you know of for getting it to learn without re feeding everything back to it after restart? It also said it did not have access to any files on my computer and it would take modifications to get it to do this by itself.
Can a reasoning model figure out that it doesn't know something, and ask for inputs? Or could it be trained to ask?
That’s an awesome idea
I'm sure you're aware of the Qwen-Math models, but using these reasoning techniques it would be interesting to see if a small (Qwen2.5-1.5B) model could be trained to reason about geometry or integration the same way a mathematician would: simply apply the rules they know and see what fits.
I think the only limitation with this is the size of the context. I put DeepSeek-R1-7B (Q4) on my phone and it was good but limited. I increased the context to 8192 and wow, it solved things o1 struggled with or failed on.
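For anyone else running R1 under Ollama, that context bump uses the documented num_ctx parameter; the model tag below is just an example:
'''
# In a Modelfile:
FROM deepseek-r1:7b
PARAMETER num_ctx 8192

# Or interactively, inside an `ollama run` session:
/set parameter num_ctx 8192
'''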
excellent!
Thanks!
Hi Chris, do you have your training dataset available on github as well? I am not able to find it out. Putting it somewhere will be really helpful in following your instructions.
Yeah it’s in the verifiers repo
which DeepSeek model is better to download?
What HW specs do you use for training?
Thank you🙏
macbook pro m3 max
Given that the intention is not so much to train new knowledge but to synthesize chain-of-thought capabilities on existing models, how well would it work if we were to use R1 to generate a bunch of non-math question/thinking/answer input-output pairs as the cold start seed?
That’s pretty much what happens with the RL stage.. but I also think you can use verifiers to do this well also
@@chrishayuk Thanks! I was playing around with Granite 3.1 MoE 3B, found it to be insanely fast even on CPU only. I'd be really curious to see how much "intelligence" we can extract from smaller MoE models like that by synthesizing chain of thought. I'll have to find some time to play around and see what could be extracted. I'm thinking a semi-capable thinking model, with MCP (thanks to your MCP-CLI project), that requires no GPU will be a very powerful local assistant!
Can you actually fine tune DeepSeek R1? I see you used Qwen-2.5
Awesome video 👏🏼👏🏼👏🏼
Very much appreciate your videos. Thank you. I noticed your training data jsonl format is different than your validation and test jsonl format. Could you please explain?
Thanks for the info! I followed your instructions and it’s training the model, but it’s pretty slow on my M1 Mac. Is there similar software for Linux that I can use to coldstart-train the model on a VPS?
Are you saying there is a math compiler in DeepSeek R1? It's open source, so that can be checked
They said in the paper they use a math verifier
In your newly trained Qwen model, what is the verifier step doing, since there is no math compiler in Qwen?
I’m not verifying yet, I’ll do that in the RL stage in the next video. I’m just generating long and accurate chain of thoughts for coldstarting training
Hi Chris, it's pretty cool, thanks for sharing.
Can we try to generate the cold start data from DeepSeek-R1-Zero just like the paper and train a LoRA? What do you think of that?
Yes, I plan to do a pure version with RL, so will do that when I have that ready (which should be very soon)
@@chrishayuk that would be great! I would like to contribute to researching, writing scripts, or generating data if possible
i’ve found that the 1.5B model is usually terrible with math or calculation, but it has extensive capabilities in generating humanlike thoughts in an eerie way. don’t play mind games with it unless u wanna spook yourself
14:52 isn't the answer it gives here incorrect?
a video on a local agentic IDE, please.
WOW Superb
Is this RL though or just SFT?
RL is the next video, this is SFT with long chain of thoughts, i.e. the coldstart
@ awesome, can’t wait! Btw what hardware are you using?
i was hoping to record this weekend, but got a sore throat, so i'm a few days away from recording. i think the RL version is pretty cool, i think you will like it. hardware-wise: macbook pro m3 max
So what you are saying is that R1 will not perform well on non-logical and non-math-like queries, where they can't use a verifier? Like, what if I want to use R1 in a healthcare domain?
Nope, because verifiers work for that also, which I’m gonna show in an upcoming video
How about a video on creating a JSONL dataset to fine-tune a model to write computer code?
Yeah, I plan to do a new one on that using verifiers
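In the meantime, a minimal sketch of what one record could look like in the common chat-messages JSONL style; the exact fields depend on the trainer you use, so treat these names as assumptions rather than the format used in the video:
'''json
{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}, {"role": "assistant", "content": "def reverse_string(s):\n    return s[::-1]"}]}
'''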
Have a look at the Open R1 repo from Hugging Face, as they are working with the community to replicate the DeepSeek R1 datasets etc.
Chains-of-thought, surely, and not Chain-of-thoughts?
lol, no clue, tbh
The 7B model runs on my HP laptop: 16GB RAM, Intel i5, no graphics card
Is NVDA going to die?
I think a new grand theft auto game is coming out, they’ll be fine
real open ai
This resembles "first principles": don't teach me how to reason, I will find it myself!
Exactly
N. Ireland / N. American accents, it's wild.
Agreed, love those accents. Mine is Scottish though
@@chrishayukhaha :)
@@chrishayuk u look like a musician that got into AI 😂. Like, I can see you on a synthesizer in a music video.
hahaha, i'm terrible at music.. but i think there are a lot of synergies. i like using lots of tools and techniques and meshing them together
As a fellow Scot - many’s the time I’ve had people (usually American tourists) ask “which part of Ireland are you from?”
I’m always kind & say I’m Scots.
When they get embarrassed I explain the accents can be similar & at its closest point there’s only 12 miles between Ireland & Scotland.
If they comment that I’m quite understandable for a Scotsman - I’ll throw in a bit o’ auld Scots leid tae mak a muckle ow ther heids.😂
Wait a minute.. you used a how many billion parameter LLM to solve what a card-sized Casio calculator could solve in the 80s?
one is hardware
one is ML
ML can do things hardware can't: generalize.
Obviously this is a toy example. The purpose is to explain how to generate accurate synthetic Chain of Thought data to use during the training process, which is quite valuable. Even better, he walks through it end to end within the context of DeepSeek's COLDSTART methodology.
*_Who do you think will win the AI race: China or the US? Please reply._*
I don’t believe there will be a winner… I believe the game is an infinite game, and players will join and drop off. There are no winners….
@ Don't you think it will be like the space race?
@@HiteshKrishanKumar To what finish line? AI is already here and people use it every day.
unfortunately, I think it's a military race, and we'll never know for sure until it's too late.
For the general public, open-source models will win; this video pretty much shows that already
Unlike the space and nuclear arms races, where spies were the only way to get the latest technological advances, DS has OPEN SOURCED everything they did to produce this model. Imagine how much faster the space/nuclear arms race would have been in that case! Open source has been one of the biggest, if not the biggest, accelerators of AI advancement in my opinion, especially within the last ~2 years.
Nobel prize for the china man
how to do this on windows? i guess PEFT from Hugging Face. cool.
bitsandbytes (bnb) releases many small quantized models for ollama on Windows/Linux, and yeah, PEFT adapters.
i am pretty impressed with Mac ML, but I can't imagine not being on Linux with direct access to my 4090!
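A minimal sketch of that Linux/Windows route with the Hugging Face stack (bitsandbytes for 4-bit loading, peft for the LoRA adapter); the base model name and LoRA settings are just example values:
'''python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

model_id = "Qwen/Qwen2.5-1.5B-Instruct"  # example base model

# Load the base model in 4-bit via bitsandbytes so it fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach a small LoRA adapter; only these weights get trained (QLoRA-style)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
'''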
I’ll do a regular PyTorch video for the next one
@@chrishayuk nice
cool @@chrishayuk
I see there are now R1 reasoning datasets on Hugging Face e.g. ServiceNow-AI/R1-Distill-SFT