Fine-tuning a Phi-3 LeetCode Expert? - Dataset Generation, Unsloth ++

  • Published: 17 Dec 2024

Comments • 17

  • @fezkhanna6900
    @fezkhanna6900 2 months ago +1

    This is a great example. It gives insight into how to train a model what to do and what not to do. I think it's interesting because it makes me wonder whether these models are actually capable of being fine-tuned to solve specific tasks, or whether the fine-tuning is more about nudging the weights a bit further toward some specific sentiment.

  • @LaughDead
    @LaughDead 7 months ago +1

    Bro, I really need your help.
    I tried creating a visual story using ChatGPT and Midjourney like you described in your video, but the problem I am facing is that each time I tell Midjourney to generate an image, it generates an image of a different character.
    It doesn't match the character or the background of the previous image.
    Pls help me🙏

  • @stanTrX
    @stanTrX 23 days ago

    12:42 what if my dataset is not public? Thanks
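
    A minimal sketch for this question, assuming the Hugging Face datasets library typically used alongside Unsloth; the file name and repo ID below are placeholders, not values from the video. You can point load_dataset at a local file, or at a private Hub repo with an access token, instead of a public dataset:

    ```python
    from datasets import load_dataset

    # Option 1: load a local JSONL file instead of a public Hub dataset.
    dataset = load_dataset(
        "json",
        data_files="my_leetcode_dataset.jsonl",  # placeholder local file
        split="train",
    )

    # Option 2: keep the dataset in a private Hub repo and pass an access token.
    # dataset = load_dataset("your-username/private-leetcode-ds", split="train", token="hf_...")
    ```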

  • @supercurioTube
    @supercurioTube 7 months ago +1

    That was a cool watch. Although the small model itself didn't end up with the desired problem-solving capability, I liked how you demonstrated this workflow.

  • @adriangpuiu
    @adriangpuiu 7 months ago

    time to finally get going on creating that SAIL dataset ... motivation squad @All About AI

  • @smnomad9276
    @smnomad9276 7 months ago +2

    You need to make a custom cap with three big letters, AAA, standing for All About AI, on the front so they're visible when you wear it.

    • @thunkin-ai
      @thunkin-ai 7 months ago

      4 letters... AAAI

    • @smnomad9276
      @smnomad9276 7 months ago +1

      @@thunkin-ai 3, AAA. All About AI

    • @thunkin-ai
      @thunkin-ai 7 months ago

      @@smnomad9276 OIC

  • @cloudsystem3740
    @cloudsystem3740 7 months ago

    I got stuck at the GGUF format, but thank you very much for the instructions.
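
    For the GGUF step, a minimal sketch assuming Unsloth's documented save_pretrained_gguf helper; the directory names and quantization method below are placeholders, not the exact values from the video:

    ```python
    from unsloth import FastLanguageModel

    # Reload the fine-tuned model ("lora_model" is a placeholder for the
    # directory the adapter was saved to after training).
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="lora_model",
        max_seq_length=2048,
        load_in_4bit=True,
    )

    # Export to GGUF with a 4-bit k-quant so the result runs in
    # llama.cpp-based tools such as Ollama.
    model.save_pretrained_gguf(
        "phi3-leetcode-gguf",
        tokenizer,
        quantization_method="q4_k_m",
    )
    ```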

  • @DefaultFlame
    @DefaultFlame 7 months ago +2

    Awwww . . . I was hoping the video was about finetuning an LLM into only speaking in 1337 5p34k.

  • @BrokenOpalVideos
    @BrokenOpalVideos 7 months ago +2

    Hey bro, love your work. I became a member but am still waiting for GitHub access; could you please send an invite?

    • @Noogybaga
      @Noogybaga 7 months ago

      Same here 😭

    • @azizahtas
      @azizahtas 7 months ago +1

      Same here, I don't understand what is wrong. It's been like a week now. Getting more disappointed every day.

    • @BrokenOpalVideos
      @BrokenOpalVideos 7 months ago +1

      @@azizahtas ikr, we paid to get access and instead we get ignored, disappointing.

  • @Etienne_O
    @Etienne_O 7 months ago +2

    Cool tool, but no MLX support :(
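
    As a possible workaround (not covered in the video): a fine-tuned model saved in Hugging Face format can usually be converted for Apple Silicon with the mlx-lm package. This sketch assumes mlx-lm's documented Python convert API; the paths are placeholders:

    ```python
    from mlx_lm import convert

    # Convert the fine-tuned HF checkpoint to MLX format with 4-bit quantization.
    # "phi3-leetcode-merged" is a placeholder for the merged fine-tuned model directory.
    convert(
        "phi3-leetcode-merged",
        mlx_path="phi3-leetcode-mlx",
        quantize=True,
    )
    ```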