Qwen-2.5 Coder 32B: BEST Opensource Coding LLM EVER! (Beats GPT-4o + On Par With Claude 3.5 Sonnet!)

  • Published: 30 Nov 2024

Comments • 64

  • @intheworldofai
    @intheworldofai  19 days ago +3

    Want to HIRE us to implement AI into your Business or Workflow? Fill out this work form: td730kenue7.typeform.com/to/WndMD5l7
    💗 Thank you so much for watching, guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment on what else you want to see!
    📆 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
    🔥 Become a Patron (Private Discord): patreon.com/WorldofAi
    🧠 Follow me on Twitter: twitter.com/intheworldofai
    Love y'all and have an amazing day, fellas. Thank you so much!

  • @intheworldofai
    @intheworldofai  18 days ago +24

    Microsoft's AI Toolkit - VS Code: FREE AI Extension BEATS Cursor! (GPT-4o + Sonnet 3.5 FREE!): ruclips.net/video/JYikeQ4ySes/видео.html

  • @thisisashan
    @thisisashan 19 days ago +20

    As a consistent viewer, I would really love to see a "best of" each category update instead of the constant litany of clickbait.
    It's getting hard for me to want to watch every "Best ever!" video, but I would really love to know what competes the best.
    Best LLM router.
    Best anime image gen.
    Best realistic image gen.
    Best logic LLM.
    Best programming LLM.
    etc.
    People do these, but I seldom get any information about the specific LLMs and fine-tunes that you have on this channel.
    Just saying, since your $$$ is based off how much of each video I watch.
    Not meant as judgment.
    Thanks for what you do.

    • @intheworldofai
      @intheworldofai  19 days ago +6

      Thanks for the feedback! I really appreciate your input. I’ll definitely consider providing more details on the specific LLMs and fine-tunes featured in the videos. Your support means a lot, and I’m always looking to improve the content for you all!

  • @intheworldofai
    @intheworldofai  10 days ago

    Deepseek-R1-Lite: BEST Opensource LLM EVER! Beats Claude 3.5 Sonnet + O1! - (Fully Tested): ruclips.net/video/a61R0HSSwUU/видео.html

  • @alals6794
    @alals6794 19 days ago +6

    Prior to this I was locally running qwen2.5 coder 7B bf16 and it was great, for its size. Can't wait to locally run qwen2.5 coder 32B!

    • @intheworldofai
      @intheworldofai  19 days ago +1

      The 7B model was quite impressive for its size. This 32B model will surely blow your mind!

    • @lckillah
      @lckillah 19 days ago +1

      What kind of workstation are you running to be able to run 32B? I'm new to ML and just now learning, so I'm wondering if I'd need an upgrade from an M3 18GB Mac Pro.

    • @Kaalkian
      @Kaalkian 17 days ago +1

      @@lckillah M4 Pro/Max, at least 48GB, preferably 64GB. 64GB would let you run Qwen2.5 32B with a good quant and long context, and still have spare RAM to use the computer lol.
      At least as of Nov week 2, things are going berserk day by day.

    • @DickerehikariDuck
      @DickerehikariDuck 9 days ago

      @@Kaalkian so basically, to run this model, the system needs to have at least 32GB of RAM?

  • @intheworldofai
    @intheworldofai  19 days ago +1

    [Must Watch]:
    Qwen-2.5: The BEST Opensource LLM EVER! (Beats Llama 3.1-405B + On Par With GPT-4o): ruclips.net/video/yd0kgDwkfz0/видео.htmlsi=Uh2eCpIWYpcY54Hq
    DeepSeek-v2.5: BEST Opensource LLM! (Beats Claude, GPT-4o, & Gemini) - Full Test: ruclips.net/video/mvpkZ1yFy7o/видео.htmlsi=NR9ChO50-HKJW9Cb
    Bolt.New + Ollama: AI Coding Agent BEATS v0, Cursor, Bolt.New, & Cline! - 100% Local + FREE!: ruclips.net/video/ZooojV4ZDMw/видео.html

  • @intheworldofai
    @intheworldofai  15 days ago

    Gemini Exp 1114: The BEST LLM Ever! Beats o1-Preview + Claude 3.5 Sonnet!: ruclips.net/video/HcxMwM0hwo0/видео.html

  • @intheworldofai
    @intheworldofai  17 days ago

    Aider UPDATE: The BEST AI Coding Agent BEATS v0, Cursor, Bolt.New, & Cline!: ruclips.net/video/ywKTxA9FU_4/видео.html

  • @mazinngostoso
    @mazinngostoso 19 days ago +13

    Make a video listing the 5 best AI APIs that are completely free 👍

  • @investfoxy
    @investfoxy 19 days ago

    What would you recommend between LM Studio and Pinokio for running LLMs?

  • @eado9440
    @eado9440 19 days ago +2

    Epic, on par with DeepSeek 2.5 (which is amazing for the price, just a little slow, and has caching). If it's faster, and hopefully just as cheap, I might just switch over.

    • @intheworldofai
      @intheworldofai  19 days ago +1

      Hopefully the Qwen 3 model series will improve on inference speed.

  • @Foxy_proxy
    @Foxy_proxy 19 days ago +3

    What kinda specs do you need to run this locally?

  • @Dom-zy1qy
    @Dom-zy1qy 18 days ago

    Might need to buy some Tesla M40s and put together a rig. Surprisingly, there seem to be reputable listings on eBay for cheap. 24GB VRAM for ~$100? I wonder how many tokens/sec a single card would get.

  • @delta-gg
    @delta-gg 19 days ago +3

    what hardware would you suggest for running the Qwen-2.5 Coder 32B? Like what graphics card minimum, and what system memory?

    • @paulyflynn
      @paulyflynn 19 days ago

      M4 Max 128GB

    • @johndaily9869
      @johndaily9869 19 days ago +1

      4090

    • @antonivanov5782
      @antonivanov5782 19 days ago +3

      RTX 3090 24GB, 32GB RAM

    • @alals6794
      @alals6794 19 days ago +1

      Actually, you can run it on an 8GB VRAM GPU, but you need a LOT of RAM: about 64GB of DDR4 for about $100 USD, or 64GB of DDR5 for less than $200. If you split the load between your low VRAM and massive RAM, you do need custom code to make it run. I might do a YT video on that and use it to launch my future AI channel.

    • @andrepaes3908
      @andrepaes3908 19 days ago +2

      If you run it at max specs (32B parameters at fp32 + 32k context size), I estimate you need 192GB of VRAM. That means an 8x RTX 3090 config, which is not feasible on any consumer-grade hardware. But you can run it at 8-bit integer quant with small quality loss using 48GB of VRAM; a 2x 3090 config would be enough, and I estimate speed at 15 tokens/sec. A recently launched Mac Mini M4 Pro with 64GB RAM would also do it, but at half the speed of the 2x 3090 config (7.5 tokens/sec).
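The back-of-the-envelope numbers above can be sanity-checked with a quick sketch. This is ballpark only: it counts weight memory at a given precision and ignores KV-cache and runtime overhead, which is why the fp32 + 32k-context estimate lands closer to 192GB than the bare 128GB of weights.

```python
# Rough VRAM estimate for a 32B-parameter model at various precisions.
# Ballpark only: counts weights alone, ignoring KV cache and runtime overhead.

def weight_gb(params_b: float, bytes_per_param: float) -> float:
    """Memory for the weights alone, in GB (params in billions)."""
    return params_b * bytes_per_param

for name, bytes_per_param in [("fp32", 4.0), ("fp16", 2.0), ("int8", 1.0), ("q4", 0.5)]:
    print(f"{name}: ~{weight_gb(32, bytes_per_param):.0f} GB for weights")

# fp32: ~128 GB for weights -> plus a 32k-token KV cache and overhead, toward ~192 GB
# int8: ~32 GB              -> fits across 2x 24GB cards with room for context
# q4:   ~16 GB              -> fits on a single 24GB card (e.g. RTX 3090)
```

The estimates match the figures in the comment: 8-bit quant needs roughly 32GB for weights, so 48GB across two 24GB cards leaves headroom for context.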

  • @WolfCat787
    @WolfCat787 18 days ago +1

    So, at 32B, how much memory is required to run this model? Can a single 4090 handle it?

  • @abc_cba
    @abc_cba 19 days ago

    Does anyone know how it compares to:
    1) Nvidia/llama-3.1-nemotron-70b-instruct
    2) Qwen-2.5-72B-Instruct

    • @djds4rce
      @djds4rce 19 days ago

      Better than Qwen 72B at coding.

  • @francoisjunior.
    @francoisjunior. 17 days ago

    Mine runs slowly after installing it. What does that mean?

  • @avalagum7957
    @avalagum7957 19 days ago

    Is it possible to use it with JetBrains IDEs? If yes, how?

  • @AK-ox3mv
    @AK-ox3mv 19 days ago

    The Qwen 2.5 Coder 32B Q4_K_M GGUF, which has 99% accuracy compared to the fp16 version, is just 20GB, and any graphics card with over 20GB of VRAM, like an NVIDIA 3090 from 2020, can run it at about 20-30 tokens per second, which is on par with online models.

  • @Matthew-s5x7d
    @Matthew-s5x7d 19 days ago

    When will a local LLM that's worth it work with Cline? That's all that matters!!

  • @mik3lang3lo
    @mik3lang3lo 19 days ago

    How can I increase the context size? Is there a limit?

    • @ShortCreepyTales
      @ShortCreepyTales 17 days ago

      If I'm not mistaken, it has 128k of context.

  • @chucky_genz
    @chucky_genz 16 days ago

    Ollama is still the king 😊

  • @null-db6or
    @null-db6or 19 days ago

    You're using such an old LM Studio version.

  • @Matelight_IT
    @Matelight_IT 19 days ago

    32B on 24GB VRAM works like magic, because the default quantization weighs only 20GB, so you can also fit something like ~30 minutes of video subtitles (8k? tokens) as context.
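Taking the comment's own figures (a ~20GB quant on a 24GB card), the leftover context budget can be estimated with a quick sketch. The per-token KV-cache size used here is an assumed illustrative value, not a measured number for Qwen2.5; the real figure depends on the model's layer count, KV heads, and cache precision.

```python
# Estimate how many context tokens fit in the VRAM left after loading the weights.
# KV_BYTES_PER_TOKEN is an assumption for illustration (fp16 KV cache with GQA).

VRAM_GB = 24                      # total VRAM on the card
WEIGHTS_GB = 20                   # quantized weight size from the comment above
KV_BYTES_PER_TOKEN = 256 * 1024   # assumed ~256 KiB of KV cache per token

leftover_bytes = (VRAM_GB - WEIGHTS_GB) * 1024**3
max_tokens = leftover_bytes // KV_BYTES_PER_TOKEN
print(max_tokens)  # 16384 under these assumptions
```

Under these assumed numbers roughly 16k tokens of context would fit, so the ~8k figure in the comment is plausible once a larger per-token cache or runtime overhead is accounted for.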

  • @BeastModeDR614
    @BeastModeDR614 19 days ago

    Does Ollama have it?

    • @johndaily9869
      @johndaily9869 19 days ago +1

      yes, 51 minutes ago

    • @alals6794
      @alals6794 19 days ago +3

      Don't get the quantized version. I think Ollama only offers quantized versions, but for coding you want max precision/accuracy, aka non-quantized. Hugging Face has them without quantization, I think, and might even offer it via a free API if you register on their site. OK, you do need basic Python to use it via the API, true.

    • @latlov
      @latlov 19 days ago

      2:15

  • @yzw8473
    @yzw8473 19 days ago

    Task #1 is way too easy. Even qwen2.5-coder-0.5B-instruct can handle it correctly.

  • @gog2462
    @gog2462 19 days ago +1

    It doesn't work with 99% of coding apps, because it has no tool-use instructions and no prompts for code checking, etc. Basically, it also doesn't know much about diffs and the normal stuff that's needed when you try to code with it.

    • @gog2462
      @gog2462 19 days ago

      Without retraining for use with coding apps like VS Code... it's useless :) Or maybe, if you have a "developer" who doesn't know anything about coding, then it can be used for a snake game and nothing more.

    • @latlov
      @latlov 19 days ago

      How about WordPress plugin development?

  • @cstephen4695
    @cstephen4695 19 days ago

    I was able to make it draw a butterfly SVG, but it still looks ugly. 🤣🤣

  • @Windswept7
    @Windswept7 19 days ago

    This is concerning; a warning should be given regarding the ties to the Chinese Communist Party.

  • @neoreign
    @neoreign 19 days ago

    Wow wow wow, dude, go slow man lol, not all of us are coders. You speak so fast!