DeepSeek R1 Coldstart: How to TRAIN a 1.5B Model to REASON

  • Published: 7 Feb 2025

Comments • 133

  • @HaraldEngels
    @HaraldEngels 12 days ago +86

    I have been using DeepSeek releases for over 9 months. The results have been great all along and keep getting better. I am running all the Qwen-based DeepSeek R1 models locally on my Linux PC, and they are all great. The 1.5B model works fantastically when you use it in the q16 variant. It is a real killer. Inference is not very fast since I am running all models (from 1.5B up to 32B) on my CPU, a Ryzen 5 8600G, WITHOUT a dedicated GPU. The CPU uses up to 40 GB of my 64 GB of RAM for the 32B model. With good prompting the results are fantastic and save me hours of work every day. The dynamic memory allocation of the 8600G is great and lets me run powerful LLMs on a small budget. My PC cost me $900.

    • @Aurelnpounengong
      @Aurelnpounengong 12 days ago +9

      Wait, you're able to run a 32B model on just your CPU? I have an RTX 4060 Ti with 16 GB of VRAM and I'm scared to download a 32B model 😅

    • @rhadiem
      @rhadiem 12 days ago +3

      @@Aurelnpounengong The Ryzen 5 8600G has a GPU on the processor and can use system memory as VRAM, but much more slowly (40 GB out of the 64 GB of system memory). He provided the details; research the parts you don't understand.

    • @gracegoce5295
      @gracegoce5295 12 days ago +2

      Really? All this cost you $900? 64 GB of RAM?

    • @Aurelnpounengong
      @Aurelnpounengong 12 days ago

      @@rhadiem Ahhh, I see, I did not know it used system memory as VRAM. I also have 64 GB of DDR4 memory; do you think I'll be able to run a 32B model with my graphics card with some memory offloaded to system memory?

    • @trevoC132
      @trevoC132 11 days ago +1

      @@Aurelnpounengong It will run, just slowly. I can run a 32B on my 4090, but anything larger has to swap in and out of memory, which is painful.

  • @songlining
    @songlining 2 days ago

    I am so glad I encountered this series. This is real gold. Thank you so much for the effort. Looking forward to the next episode!

  • @agenticmark
    @agenticmark 12 days ago +16

    I've also had luck getting the model to reflect by: reversing the calculation (math), writing the documentation while it codes, and writing a tutorial while it codes.
    this is one of the best videos I have seen in some time Chris!
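The "reversing the calculation" reflection trick can also be applied mechanically when filtering synthetic chain-of-thought data; a minimal sketch (the helper name and sample data are illustrative, not from the video):

```python
# Sketch: check a claimed arithmetic result by applying the inverse operation,
# i.e. the "reverse the calculation" reflection trick described above.

def reverse_check(a: float, b: float, op: str, claimed: float) -> bool:
    """Return True if `claimed` survives the inverse-operation check."""
    if op == "+":
        return claimed - b == a              # inverse of addition
    if op == "-":
        return claimed + b == a              # inverse of subtraction
    if op == "*":
        return b != 0 and claimed / b == a   # inverse of multiplication
    if op == "/":
        return claimed * b == a              # inverse of division
    raise ValueError(f"unknown operator: {op}")

# Filter synthetic (a, b, op, claimed_answer) samples: keep only verified ones.
samples = [(12, 4, "*", 48), (12, 4, "*", 46), (9, 3, "/", 3)]
verified = [s for s in samples if reverse_check(*s)]
print(verified)  # the incorrect 12 * 4 = 46 sample is dropped
```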

    • @chrishayuk
      @chrishayuk 12 days ago +1

      Awesome, so glad you’ve seen similar results

  • @DriftlessCryptoToo
    @DriftlessCryptoToo 8 days ago +3

    Chris, this is great. The math training is cool. What we need is a set of coding-trained small models that are experts in the top programming languages. Start with Python and JavaScript, then HTML and CSS. You get the idea. Then everyone can have a set of these models for the languages they use.

  • @greghampikian9286
    @greghampikian9286 12 days ago +7

    Thanks for answering all the basic questions I had. Great teaching style, even for the non-programmer.

    • @chrishayuk
      @chrishayuk 12 days ago

      Glad it was useful, I had a lot of fun making this video

  • @mikeparker2486
    @mikeparker2486 8 days ago +4

    R1 has five main advantages: *1) it gives you the reasoning behind its thoughts, so you can spot a mistake and tell it to correct it 2) it is much more DEPLOYABLE; it's nothing short of "the first personal computer"!! You don't need a huge data center or a large number of GPUs to run it; in fact, you can even run it on your phone without internet 3) it is cheaper and faster than o1 4) most of all, it is free 5) it is open source, so you can edit and update it any way you like*
    Any one of these reasons would be a game changer by itself, but with the combination of all five you get a stock crash like yesterday's

  • @bytemoney5655
    @bytemoney5655 8 days ago +2

    this is my official go-to YouTube channel. thanks man for these videos

    • @chrishayuk
      @chrishayuk 4 days ago

      thank you, glad it's useful

  • @aaronabuusama
    @aaronabuusama 12 days ago +30

    It would be awesome if you did a tutorial on fine tuning a reasoning model with tool calling abilities

    • @chrishayuk
      @chrishayuk 12 days ago +27

      That is a really good shout, I will do that

    • @zacharielaik8652
      @zacharielaik8652 12 days ago +4

      Yes that would be awesome !

    • @punchster289
      @punchster289 12 days ago +1

      yes! i want to train a model for z3 use when doing logical reasoning. very powerful solver

  • @OpenAITutor
    @OpenAITutor 10 days ago +1

    Hey Chris, Great video. Really enjoy the way you teach. Keep up the good work. Can't wait for your next video on RLHF.

  • 11 days ago +1

    Excellent! Bravo! I am spending hours analyzing how DeepSeek R1 32B works with my 4090. I am getting amazing results every day...

  • @PunitPandey
    @PunitPandey 7 days ago +1

    Thanks for the reply Chris. I was able to run your code. I had to make slight adjustments as I am on Windows / RTX 4090 but finally I have my aha moment. I was able to train and infer from my first reasoning model. THANKS once again for the tutorial.

    • @chrishayuk
      @chrishayuk 7 days ago +1

      Awesomeeeeeee, it's really a great feeling when you train the model and it's reasoning and doing better than much bigger models, glad it helped. I'm hoping to have, for the next video, a dual training setup that works for Mac and Windows

  • @PhilWeinmeister
    @PhilWeinmeister 11 days ago +2

    I may no longer be at IBM, but I was curious to hear your thoughts on DeepSeek. Very insightful video, thanks!

  • @d.d.z.
    @d.d.z. 12 days ago +3

    Keep doing helpful videos Chris 😊

    • @chrishayuk
      @chrishayuk 12 days ago +1

      Always, glad it was useful, I was particularly happy with this one

  • @sakchais
    @sakchais 8 days ago

    This video has been amazing. I look forward to the RL video.

    • @chrishayuk
      @chrishayuk 4 days ago +1

      RL video is cool, i promise, i just can't record as i'm sick at the moment, frustrating

    • @sakchais
      @sakchais 4 days ago +1

      @@chrishayuk Get well soon buddy!

    • @chrishayuk
      @chrishayuk 4 days ago

      thank you, just a cold or a flu or something, but frustrating. appreciate the well wishes

  • @wwkk4964
    @wwkk4964 12 days ago +1

    Brilliant work! Yes, I do remember you mentioning that o1 was MCTS and R1 was not. I agreed with you that R1 was surely not; it will be exciting to see if o1 or o3 used similar techniques or used MCTS!

    • @chrishayuk
      @chrishayuk 12 days ago +2

      I’m 100 percent convinced that o1 is using search (specifically MCTS) at inference time, and I’m 100% convinced that R1 will do the same in a future release when they figure it out. But the results they’ve gotten without it are pretty incredible

    • @wwkk4964
      @wwkk4964 12 days ago +1

      @chrishayuk It still blows my mind every time I think about it! That one can converge through search or learning to these endpoints, so long as one is bootstrapped with some notion of correctness! Your demo was incredible work. Thanks again.

    • @chrishayuk
      @chrishayuk 12 days ago +1

      thank you, yeah, i came up with the concept of getting the compiler to do the calc, and the ai to do the explanation, a while back, i think i did a video on this in june 2024. so it seemed a natural fit when i saw the long chain of thought coldstart piece from deep seek. felt like a good merge. i was blown away also on how good the results were
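The "compiler does the calc, AI does the explanation" idea can be sketched in a few lines of Python, using a safe expression evaluator as the "compiler" (function and field names here are illustrative, not the video's code):

```python
import ast
import operator

# Safely evaluate a small arithmetic expression so the *computed* answer,
# not a model guess, goes into the chain-of-thought training sample.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def calc(expr: str) -> float:
    """Evaluate +,-,*,/ expressions via the AST (no eval of arbitrary code)."""
    def ev(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return ev(ast.parse(expr, mode="eval").body)

def coldstart_record(expr: str) -> dict:
    answer = calc(expr)  # the "compiler" computes; the model only explains
    return {
        "prompt": f"What is {expr}?",
        "thinking": f"<think>Compute {expr} step by step.</think>",
        "answer": str(answer),
    }

print(coldstart_record("3 * (4 + 5)"))
```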

  • @PunitPandey
    @PunitPandey 12 days ago +2

    Great video. Looking forward to RL video.

  • @cryptonianbond
    @cryptonianbond 8 days ago +1

    Amazing. Thank you so much for this.

    • @chrishayuk
      @chrishayuk 4 days ago

      awesome, glad it was useful

  • @kishoretvk
    @kishoretvk 12 days ago +1

    hello Chris Hay!
    this is crazy, you made this amazing tutorial. that's mind blowing. while openAI is closed, the open source community is actually building it openly for the community. although companies like deepseek are validating and inspiring, the community is doing its own discovery. you are very inspiring as well.
    thanks again for a wonderful video

    • @chrishayuk
      @chrishayuk 12 days ago +1

      Thank you, I appreciate it, I was pretty pleased with this one, glad it’s useful

    • @kishoretvk
      @kishoretvk 11 days ago +1

      @@chrishayuk we might not need MoE now, as we need only cold start data for different tasks:
      1. function calling
      2. coding
      3. summarization
      4. role play
      5. NLQ and others
      we can do this on Colab as it's 1.5B, it's going to be crazy

    • @chrishayuk
      @chrishayuk 11 days ago +1

      It’s cool right

  • @geocorpsys
    @geocorpsys 11 days ago

    Thank you Chris. I am hoping I will be able to replicate this on my old windows laptop. I want to be able to train a base model from scratch like you did here.

  • @mrd6869
    @mrd6869 12 days ago +1

    One cool addition: I use the TwinMind AI on-screen assistant to explain what you're doing exactly, as I watch the video (reads the transcript, I'm guessing).
    Anyway, it makes understanding the topic far easier.

    • @chrishayuk
      @chrishayuk 12 days ago

      oooh, that sounds pretty sweet

  • @midcore2071
    @midcore2071 1 day ago

    Great content. Unsloth is an excellent framework for training. You can create the same or potentially better CoT reasoning using an advanced system prompt in an Ollama Modelfile, and quickly turn most Ollama-supported models into reasoning models using the Ollama create command. I've been using the technique for about a month now and it works surprisingly well. No QLoRA training required. The outputs are very similar to DeepSeek R1. My most recent success was using this technique on the most recent Mistral-Small LLM. Wondering if anybody else has figured this out or achieved similar results with reasoning system prompts.
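For reference, the Modelfile technique described above might look something like this; the FROM target and the system-prompt wording are illustrative, not the commenter's exact setup:

```
# Modelfile: wrap an existing Ollama model with a reasoning-style system prompt
FROM mistral-small

SYSTEM """Before answering, reason through the problem step by step inside
<think>...</think> tags, check your steps, and only then give a final answer."""
```

Built with something like `ollama create my-reasoner -f Modelfile` and then run with `ollama run my-reasoner`.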

  • @phillipneal8194
    @phillipneal8194 10 days ago +1

    Thank you for a great presentation, especially for your explanation and examples of
    the "cold start" part. The "Incentivizing" paper and the technical report are heavy going, especially
    the reinforcement learning algorithm. When will you have a video out explaining the RL algorithm?

    • @chrishayuk
      @chrishayuk 4 days ago +1

      thank you, yeah the RL video will be soon. sick at the moment, frustrating, but i'm pretty pleased with where the RL video is heading

  • @waneyvin
    @waneyvin 7 days ago +2

    👍can't wait for the RL part, BTW, can you share the prompt as well?

    • @chrishayuk
      @chrishayuk 4 days ago

      the rl video is coming, just sick at the moment, so can't record, frustrating

    • @waneyvin
      @waneyvin 4 days ago +1

      @@chrishayuk Sorry to hear that. Hope you recover quickly! Rest up and take care.

    • @chrishayuk
      @chrishayuk 4 days ago +1

      @@waneyvin just a cold or a flu or something, but frustrating. appreciate the well wishes

  • @sumitmamoria
    @sumitmamoria 11 days ago

    Good work. One tiny suggestion: maybe try using word wrap for long lines, for better readability when watching the video.

  • @Kaushik-RoyChowdhury
    @Kaushik-RoyChowdhury 8 days ago +1

    From a creator's point of view, I am interested in knowing how you manage to superimpose the screen recording over yourself speaking in the background! The video is quite informative, of course.

  • @seanplynch
    @seanplynch 12 days ago +1

    Fantastic, well done

  • @lenreinhart2020
    @lenreinhart2020 3 days ago

    Very informative video, I look forward to the next one. I am currently running the 32B version of R1, and I asked it about persistence of what it learned during our session; it said that unless I saved the session and fed it back, it was lost. It suggested using:
    ```bash
    ollama generate --model your_model_name | tee chat_history.txt
    ```
    Is there any other way you know of for getting it to learn without re-feeding everything back to it after a restart? It also said it did not have access to any files on my computer and it would take modifications to get it to do this by itself.

  • @usget
    @usget 12 days ago +6

    Can a reasoning model figure out that it doesn't know something, and ask for inputs? Or could it be trained to ask?

    • @chrishayuk
      @chrishayuk 12 days ago

      That’s an awesome idea

  • @johntdavies
    @johntdavies 10 days ago

    I'm sure you're aware of the Qwen math models, but using these reasoning techniques it would be interesting to see if a small (Qwen2.5-1.5B) model could be trained to reason about geometry or integration the same way a mathematician would: simply apply the rules they know to see what fits.
    I think the only limitation with this is the size of the context. I put DeepSeek-R1-7B (Q4) on my phone and it was good but limited. I increased the context to 8192 and wow, it solved things o1 struggled with and failed on.

  • @danson3038
    @danson3038 10 days ago +1

    excellent!

  • @PunitPandey
    @PunitPandey 7 days ago +1

    Hi Chris, do you have your training dataset available on GitHub as well? I am not able to find it. Putting it somewhere would be really helpful for following your instructions.

    • @chrishayuk
      @chrishayuk 7 days ago

      Yeah it’s in the verifiers repo

  • @aperson1181
    @aperson1181 4 days ago

    which deep seek model is better to download?

  • @Memsido
    @Memsido 3 days ago

    What HW specs do you use for training?
    Thank you🙏

  • @AndyHuangCA
    @AndyHuangCA 12 days ago +1

    Given that the intention is not so much to train new knowledge, but to synthesize chain-of-thought capabilities on existing models, how well would it work if we were to use R1 to generate a bunch of non-math question/thinking/answer input-output pairs as the cold start seed?

    • @chrishayuk
      @chrishayuk 12 days ago +1

      That’s pretty much what happens in the RL stage.. but I also think you can use verifiers to do this well

    • @AndyHuangCA
      @AndyHuangCA 12 days ago

      @@chrishayuk Thanks! I was playing around with Granite 3.1 MoE 3B, found it to be insanely fast even on CPU only. I'd be really curious to see how much "intelligence" we can extract from smaller MoE models like that by synthesizing chain of thought. I'll have to find some time to play around and see what could be extracted. I'm thinking a semi-capable thinking model, with MCP (thanks to your MCP-CLI project), that requires no GPU will be a very powerful local assistant!
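Dumping such a non-math cold-start seed is just a JSONL write; a minimal sketch, with field names that are only a guess at the video's format:

```python
import json

# Hypothetical R1-generated seed records: question, thinking trace, answer.
seed = [
    {
        "question": "Why does ice float on water?",
        "thinking": "<think>Ice is less dense than liquid water, so it floats.</think>",
        "answer": "Because ice is less dense than liquid water.",
    },
]

# Write one JSON object per line -- the JSONL layout most SFT tooling expects.
with open("coldstart_seed.jsonl", "w", encoding="utf-8") as f:
    for record in seed:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```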

  • @EliSpizzichino
    @EliSpizzichino 11 days ago +2

    Can you actually fine tune DeepSeek R1? I see you used Qwen-2.5

  • @ApolloGemini11
    @ApolloGemini11 11 days ago

    Awesome video 👏🏼👏🏼👏🏼

  • @SDGwynn
    @SDGwynn 11 days ago

    Very much appreciate your videos. Thank you. I noticed your training data jsonl format is different than your validation and test jsonl format. Could you please explain?

  • @rodnet2703
    @rodnet2703 11 days ago

    Thanks for the info! I followed your instructions and it's training the model, but it's pretty slow on my M1 Mac. Is there similar software for Linux so I can coldstart-train the model on a VPS?

  • @blue-y3r
    @blue-y3r 12 days ago +2

    Are you saying there is a math compiler in DeepSeek R1? It's open source, so that can be checked

    • @chrishayuk
      @chrishayuk 12 days ago +1

      They said in the paper they use a math verifier

  • @blue-y3r
    @blue-y3r 12 days ago +1

    In your newly trained Qwen model, what is the verifier step doing? Since there is no math compiler in Qwen

    • @chrishayuk
      @chrishayuk 12 days ago +5

      I’m not verifying yet, I’ll do that in the RL stage in the next video. I’m just generating long and accurate chain of thoughts for coldstarting training

  • @snehmehta
    @snehmehta 12 days ago +1

    Hi Chris, it's pretty cool, thanks for sharing.
    Can we try to generate the cold start data from deepseek-r1-zero just like the paper and train a LoRA? What do you think of that?

    • @chrishayuk
      @chrishayuk 12 days ago +2

      Yes, I plan to do a pure version with RL, so will do that when I have that ready (which should be very soon)

    • @snehmehta
      @snehmehta 12 days ago +1

      @@chrishayuk that would be great! I would like to contribute in researching, writing script or generating data if possible

  • @cheesehead9980
    @cheesehead9980 8 days ago

    i’ve found that the 1.5B model is usually terrible with math or calculation, but it has extensive capabilities in generating humanlike thoughts in an eerie way. don’t play mind games with it unless u wanna spook yourself

  • @ianhaylock7409
    @ianhaylock7409 12 days ago +3

    14:52 isn't the answer it gives here incorrect?

  • @danson3038
    @danson3038 10 days ago

    a video on a local agentic IDE, please.

  • @santoshtelwane1776
    @santoshtelwane1776 11 days ago

    WOW Superb

  • @barefeg
    @barefeg 5 days ago +1

    Is this RL though or just SFT?

    • @chrishayuk
      @chrishayuk 5 days ago

      RL is the next video, this is SFT with long chain of thoughts, i.e. the coldstart

    • @barefeg
      @barefeg 5 days ago +1

      @ awesome can’t wait to! Btw what hardware are you using?

    • @chrishayuk
      @chrishayuk 5 days ago

      i was hoping to record this weekend, but got a sore throat, so i'm a few days away from recording. i think the RL version is pretty cool, i think you will like. macbook pro m3 max

  • @blue-y3r
    @blue-y3r 12 days ago +1

    So what you are saying is that R1 will not perform well on non-logical and non-math queries, where they can't use a verifier? Like what if I want to use R1 in the healthcare domain?

    • @chrishayuk
      @chrishayuk 12 days ago +2

      Nope, because verifiers work for that also, which I’m gonna show in an upcoming video

  • @andrewcameron4172
    @andrewcameron4172 12 days ago +1

    How about a video on creating a jsonl to finetune a model to write computer code

    • @chrishayuk
      @chrishayuk 12 days ago +1

      Yeah I plan do a new one on that using verifiers

  • @andrewcameron4172
    @andrewcameron4172 10 days ago

    Have a look at the Open R1 repo from huggingface as they work with the community to replicate deepseek r1 datasets etc

  • @wrusty3767
    @wrusty3767 6 days ago +1

    Chains-of-thought, surely, and not Chain-of-thoughts?

  • @leophysics
    @leophysics 5 days ago

    The 7B model runs on my HP laptop: 16 GB RAM, Intel i5, no graphics card

  • @user-qe2ps9vm9o
    @user-qe2ps9vm9o 12 days ago +2

    Is NVDA going to die?

    • @chrishayuk
      @chrishayuk 12 days ago +4

      I think a new grand theft auto game is coming out, they’ll be fine

  • @did28
    @did28 12 days ago +2

    real open ai

  • @dalsenov
    @dalsenov 12 days ago +2

    This resembles "first principles": don't teach me how to reason, I will find it myself!

  • @anubisai
    @anubisai 12 days ago +1

    The mix of N. Ireland / N. American accents is wild.

    • @chrishayuk
      @chrishayuk 12 days ago +3

      Agreed, love those accents. Mine is Scottish though

    • @wwkk4964
      @wwkk4964 12 days ago

      @@chrishayuk haha :)

    • @mrd6869
      @mrd6869 12 days ago +1

      @@chrishayuk You look like a musician that got into AI 😂. Like I can see you on a synthesizer in a music video.

    • @chrishayuk
      @chrishayuk 12 days ago +1

      hahaha, i'm terrible at music.. but i think there are a lot of synergies. i like using lots of tools and techniques and meshing them together

    • @clarkcampbell1110
      @clarkcampbell1110 8 days ago +1

      As a fellow Scot - many’s the time I’ve had (usually American tourists) ask “which part of Ireland are you from”.
      I’m always kind & say I’m Scots.
      When they get embarrassed I explain the accents can be similar & at its closest point there’s only 12 miles between Ireland & Scotland.
      If they comment that I’m quite understandable for a Scotsman - I’ll throw in a bit o’ auld Scots leid tae mak a muckle ow ther heids.😂

  • @dmalex321
    @dmalex321 12 days ago +3

    Wait a minute.. you used a how-many-billion-parameter LLM to solve what a card-sized Casio calculator could solve in the 80s?

    • @agenticmark
      @agenticmark 12 days ago +2

      one is hardware,
      one is ML.
      ML can do things hardware can't: generalize.

    • @aiknownc
      @aiknownc 11 days ago +2

      Obviously this is a toy example. The purpose is to explain how to generate accurate synthetic Chain of Thought data to use during the training process, which is quite valuable. Even better, he walks through it end to end within the context of DeepSeek's COLDSTART methodology.

  • @HiteshKrishanKumar
    @HiteshKrishanKumar 12 days ago +1

    *_Who do you think will win the AI race: China or the US? Please reply._*

    • @chrishayuk
      @chrishayuk 12 days ago +3

      I don’t believe there will be a winner… I believe the game is an infinite game, and players will join and drop off. There are no winners….

    • @HiteshKrishanKumar
      @HiteshKrishanKumar 12 days ago

      @ Don't you think it will be like the space race?

    • @llIllIlI
      @llIllIlI 12 days ago +2

      ​@@HiteshKrishanKumar To what finish line? AI is already here and people use it every day.

    • @EliSpizzichino
      @EliSpizzichino 11 days ago

      unfortunately, I think it's a military race, and we'll never know for sure until it's too late.
      For the general public, open-source models will win; this video pretty much shows it already

    • @aiknownc
      @aiknownc 11 days ago

      Unlike the space and nuclear arms race where spies were the only way to get the latest technological advances, DS has OPEN SOURCED everything they did to produce this model. Imagine how much faster the space/nuclear arms race would have been in that case! Open Source has been the biggest if not nearly the biggest accelerator for AI advancement in my opinion, especially within the last ~2 years.

  • @drudru3591
    @drudru3591 8 days ago +1

    Nobel prize for the china man

  • @LokeKS
    @LokeKS 12 days ago +1

    how to do this in windows? i guess peft from huggingface. cool.

    • @agenticmark
      @agenticmark 12 days ago +2

      bitsandbytes (bnb) releases many small models for ollama on windows/linux, and yeah, peft adapters.
      i am pretty impressed with mac ml, but i can't imagine not being on linux with direct access to my 4090!

    • @chrishayuk
      @chrishayuk 12 days ago +1

      I’ll do a regular PyTorch video for the next one

    • @LokeKS
      @LokeKS 11 days ago

      @@chrishayuk nice

    • @LokeKS
      @LokeKS 8 days ago

      cool ​@@chrishayuk

  • @BigAura
    @BigAura 9 days ago

    I see there are now R1 reasoning datasets on Hugging Face e.g. ServiceNow-AI/R1-Distill-SFT