NEW Qwen-2 LLM: Best Opensource LLM EVER? Impressive Coding Abilities!

  • Published: Jun 6, 2024
  • In this video, we're diving into the groundbreaking evolution from Qwen 1.5 to Qwen 2. Packed with pre-trained and instruction-tuned models in various sizes, Qwen 2 revolutionizes the landscape of language models. Stay tuned as we explore its impressive features and the impact it's making across industries.
    [🔗 My Links]:
    🔥 Become a Patron (Private Discord): / worldofai
    ☕ To help and Support me, Buy a Coffee or Donate to Support the Channel: ko-fi.com/worldofai - It would mean a lot if you did! Thank you so much, guys! Love yall
    🧠 Follow me on Twitter: / intheworldofai
    📅 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-co...
    📖 Want to Hire Me For AI Projects? Fill Out This Form: td730kenue7.typeform.com/to/W...
    🚨 Subscribe To My Second Channel: @WorldzofCrypto
    Sponsor a Video or Do a Demo of Your Product, Contact me: intheworldzofai@gmail.com
    [Must Watch]:
    DB-GPT: Multi-Agent Framework - All-In-One Opensource Model!: • DB-GPT: Multi-Agent Fr...
    GPT Computer Assistant: AI Controls Your Computer!: • GPT Computer Assistant...
    Mistral's NEW Codestral: The Ultimate Coding AI Model - Opensource: • Mistral's NEW Codestra...
    [Link's Used]:
    LM Studio Tutorial: • LM Studio: Easiest Way...
    Blog Post: qwenlm.github.io/blog/qwen2/
    Github Repo: github.com/QwenLM/Qwen2
    Hugging Face Model Card: huggingface.co/Qwen
    Demo: huggingface.co/spaces/Qwen/Qw...
    *Video Content:*
    Qwen 2 is a game-changer in the world of language models. With models available in five sizes, ranging from 0.5 billion to an astonishing 72 billion parameters, it sets new benchmarks for performance and efficiency. Trained on data in 27 languages and boasting a context length of 128k tokens, Qwen 2 delivers unparalleled capabilities in natural language processing. From enhanced coding and mathematical abilities to state-of-the-art performance in benchmark evaluations, Qwen 2 is redefining what's possible in AI.
    *Model Information:*
    The Qwen 2 series offers a comprehensive range of models, each designed to meet specific needs. From the compact Qwen2-0.5B to the powerhouse Qwen2-72B, there's a model for every application. With features like Group Query Attention (GQA) and extended context length support, Qwen 2 sets a new standard for versatility and performance.
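Group Query Attention, mentioned above, lets many query heads share a smaller set of key/value heads, which shrinks the KV cache at long context lengths. A minimal NumPy sketch of the idea (illustrative only; not Qwen 2's actual implementation, and the head counts here are made up):

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """Toy grouped-query attention: many query heads share fewer K/V heads."""
    n_q_heads, seq, d = q.shape
    group = n_q_heads // n_kv_heads              # query heads per shared K/V head
    out = np.empty_like(q)
    for h in range(n_q_heads):
        kv = h // group                          # map query head -> its K/V head
        scores = q[h] @ k[kv].T / np.sqrt(d)     # (seq, seq) attention logits
        w = np.exp(scores - scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)            # softmax over key positions
        out[h] = w @ v[kv]
    return out

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 4, 16))   # 8 query heads
k = rng.normal(size=(2, 4, 16))   # only 2 K/V heads -> much smaller KV cache
v = rng.normal(size=(2, 4, 16))
out = grouped_query_attention(q, k, v, n_kv_heads=2)
print(out.shape)
```

The memory win is that the KV cache stores only 2 heads' worth of keys and values instead of 8, while the model keeps 8 query heads' worth of attention patterns.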
    If you're ready to take your AI projects to the next level, don't miss out on Qwen 2. Like, subscribe, and share this video to spread the word about the future of language modeling. Join the conversation and explore the possibilities with Qwen 2 today!
    *Additional Tags and Keywords:*
    #Qwen2 #languagemodel #ai #artificialintelligence #nlp #machinelearning #deeplearning #QwenEvolution #technology
    *Hashtags:*
    #Qwen2 #LanguageModel #ai #NLP #MachineLearning
  • Science

Comments • 37

  • @intheworldofai
    @intheworldofai  A month ago +2

    💗 Thank you so much for watching, guys! I would highly appreciate it if you subscribe (turn on the notification bell), like, and comment what else you want to see!
    📆 Book a 1-On-1 Consulting Call With Me: calendly.com/worldzofai/ai-consulting-call-1
    🔥 Become a Patron (Private Discord): patreon.com/WorldofAi
    📖 Want to Hire Me For AI Projects? Fill Out This Form: td730kenue7.typeform.com/to/WndMD5l7
    🚨 Subscribe to my NEW Channel! www.youtube.com/@worldzofcrypto
    🧠 Follow me on Twitter: twitter.com/intheworldofai
    Love y'all and have an amazing day fellas. ☕To help and Support me, Buy a Coffee or Donate to Support the Channel: ko-fi.com/worldofai - Thank you so much guys! Love yall!

  • @DBonacich
    @DBonacich A month ago

    Idk if I'm just looking at the LLM leaderboard wrong or what. I see Rhea-72b-v0.5 as the top ranking LLM. Mind explaining this so I know what I'm looking at?

    • @aaron6235
      @aaron6235 A month ago

      If you have the hardware to support the 72B model, then go ahead. The bigger the model, the more VRAM you need. Even the RTX 4090 has only 24 GB of VRAM, which still isn't enough to run a 72B model. This CodeQwen is a 7B model but performs as well as higher-parameter models, which means it can run on lower-end hardware and still perform well.
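The VRAM point above can be checked with back-of-envelope arithmetic: weight memory is roughly parameter count times bytes per parameter, plus some headroom for the KV cache and activations (the 20% overhead factor below is an assumption, and real usage varies with context length):

```python
def vram_gb(params_billion, bytes_per_param, overhead=1.2):
    """Rough VRAM estimate in GB: weights at the given precision, plus ~20% headroom."""
    return params_billion * bytes_per_param * overhead

for label, bpp in [("fp16", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    print(f"72B @ {label}: ~{vram_gb(72, bpp):.0f} GB")
    print(f" 7B @ {label}: ~{vram_gb(7, bpp):.0f} GB")
```

By this estimate, even a 4-bit 72B model (~43 GB) exceeds a single 24 GB RTX 4090, while a 7B model fits comfortably even at fp16.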

  • @intheworldofai
    @intheworldofai  A month ago

    [Must Watch]:
    DB-GPT: Multi-Agent Framework - All-In-One Opensource Model!: ruclips.net/video/qOCN8NXEUX4/видео.htmlsi=sIFB5EyT5mPOA9Ld
    GPT Computer Assistant: AI Controls Your Computer!: ruclips.net/video/Bos9EYrNh6g/видео.htmlsi=6hVhtavRsq6Fz8nO
    Mistral's NEW Codestral: The Ultimate Coding AI Model - Opensource: ruclips.net/video/CJ4EcwO88UY/видео.htmlsi=tXFBHuICLW76CKY9

  • @kitastro
    @kitastro 14 days ago

    After LLMs I don't want to see bad English anymore; written translation ought to have been solved by now.

  • @intheworldofai
    @intheworldofai  A month ago

    How To Build $5000+ AI Solutions For Your AI Automation Agency!: ruclips.net/video/cLntSggdSt8/видео.html

  • @not_a_human_being
    @not_a_human_being 24 days ago

    interesting that it wrote exactly the same snake game for me

  • @intheworldofai
    @intheworldofai  A month ago

    Qwen-Agent: Powerful Multi-Agent Framework - Function Calling, Code Interpreter, and, RAG! ruclips.net/video/Blpyurdi4dA/видео.html

  • @hand-eye4517
    @hand-eye4517 A month ago

    Why would you promote closed-source software to use an open-source LLM? Especially by shoehorning in your opinion that LM Studio is the best way?

  • @YoungMoneyFuture
    @YoungMoneyFuture A month ago +4

    I'm shocked that LLMs are terrible at math when calculators were invented decades ago🤔

    • @JohnSmith762A11B
      @JohnSmith762A11B A month ago +1

      Calculators!? OpenAI GPTs can access Mathematica, which is orders of magnitude more capable at advanced math than some calculator.

    • @YoungMoneyFuture
      @YoungMoneyFuture A month ago +1

      @JohnSmith762A11B 🤣 Trust me bro, that's what I mean! How is it possible that it has access to all this highly advanced mathematical material, yet it struggles at math! You can see in this vid they all have low math scores, and other vids testing them in math show poor results😭

    • @h4ckh3lp
      @h4ckh3lp A month ago +5

      @@YoungMoneyFuture Because it isn't thinking about the fact that it's a math problem at all. It's generating text, choosing each next word as the most likely option given the words that came before, judged by how often and how close to the current context a candidate word appeared in the massive amount of text it was trained on (which is a rather ham-fisted approach, imo).
      All they do is try their best to continue generating the text that began with your prompt. Any perception of an interactive volley of "conversation" back and forth between you and it is a product of clever handling of your inputs and presentation of subsequent generations based on those inputs.
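The next-word mechanism described in the comment above can be made concrete with a toy bigram model, which picks continuations purely by counted frequency (a drastic simplification of a real LLM, which uses learned representations over long contexts rather than raw word counts):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

# Count which word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick the continuation seen most often after `word` in training."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "cat" follows "the" most often in this tiny corpus
```

Scale the corpus up by trillions of tokens and replace the counts with a neural network, and this is the same basic "continue the text" objective; no step in it ever does arithmetic.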

    • @rumble1925
      @rumble1925 A month ago +2

      @@YoungMoneyFuture Because it's not trying to be a calculator, it's a statistical model of text

    • @rafizhae
      @rafizhae 2 hours ago

      LLMs can make a calculator themselves

  • @ppamment
    @ppamment A month ago +1

    I don't get it. The last answer doesn't make any sense. Does it think some of the cows transformed into chickens somehow? 12 legs are missing... Either 3 cows or 6 chickens (or some combination of the two) must have escaped. How do you figure Qwen's answer is correct?

    • @john_blues
      @john_blues A month ago

      You're exactly right.

    • @elchippe
      @elchippe A month ago

      It responded that 4 cows and 26 chickens are in the barn.

    • @elchippe
      @elchippe A month ago

      assume that no animal can enter the barn, only exit
      -------------------------
      If we assume that no new animals can enter the barn, only existing animals can leave, and we start from the original counts of 10 cows and 20 chickens, we can adjust the numbers to fit the observation of 68 legs while adhering to the constraint that animals can only leave the barn.
      Initially, we have:
      10 cows: 10 × 4 = 40 legs
      20 chickens: 20 × 2 = 40 legs
      Total legs initially: 40 + 40 = 80 legs.
      Given that the farmer observes only 68 legs, some animals must have left the barn, reducing the total number of legs.
      Let C_l be the number of cows that left and H_l the number of chickens that left.
      Since cows have 4 legs, the reduction in legs due to cows leaving is 4·C_l. Similarly, since chickens have 2 legs, the reduction due to chickens leaving is 2·H_l.
      The equation representing the observed number of legs after some animals left is:
      80 − 4C_l − 2H_l = 68
      Simplifying:
      4C_l + 2H_l = 12
      Dividing the entire equation by 2:
      2C_l + H_l = 6
      We're looking for non-negative integer solutions for C_l and H_l that satisfy this equation, where C_l and H_l cannot exceed the initial counts of cows and chickens (i.e., C_l ≤ 10 and H_l ≤ 20).
      Possible solutions include:
      If C_l = 0, then H_l = 6: 6 chickens left the barn.
      If C_l = 1, then H_l = 4: 1 cow and 4 chickens left the barn.
      If C_l = 2, then H_l = 2: 2 cows and 2 chickens left the barn.
      If C_l = 3, then H_l = 0: 3 cows left the barn.
      Considering the initial counts and the constraint that animals can only leave, the scenario where 2 cows and 2 chickens left the barn makes sense because it keeps the numbers of both types of animals in the barn positive:
      After 2 cows leave: 10 − 2 = 8 cows remain.
      After 2 chickens leave: 20 − 2 = 18 chickens remain.
      Checking the total legs remaining:
      Legs from 8 cows: 8 × 4 = 32
      Legs from 18 chickens: 18 × 2 = 36
      Total legs remaining: 32 + 36 = 68, which matches the farmer's observation.
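The enumeration above is easy to verify by brute force; this short script lists every (cows left, chickens left) pair that removes exactly 12 of the original 80 legs:

```python
# 4*cows_left + 2*chickens_left = 12, with 0 <= cows_left <= 10 and 0 <= chickens_left <= 20
solutions = [(c, h) for c in range(11) for h in range(21) if 4 * c + 2 * h == 12]
print(solutions)  # [(0, 6), (1, 4), (2, 2), (3, 0)]
```

All four pairs match the farmer's observed 68 legs, which is why picking just one of them is an incomplete answer.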

    • @john_blues
      @john_blues Месяц назад

      @@elchippe What LLM was this? I still don't like this from an evaluation standpoint, because although it provides one correct answer, it doesn't acknowledge that there are several.

    • @elchippe
      @elchippe A month ago

      @@john_blues Qwen2 72B, the same one the guy used.

  • @ericshum8796
    @ericshum8796 A month ago

    I guess many people still think that Alibaba just does e-commerce😂😂

    • @gustavheinrich5565
      @gustavheinrich5565 26 days ago

      Probably. They should know it's Chinese state propaganda at this point.

  • @JaguarPanda
    @JaguarPanda A month ago

    Its answers are completely unreadable in LM Studio; it's a character salad. What am I doing wrong?

    • @ab031ns
      @ab031ns 6 days ago

      When you use this model in LM Studio, you need to use the included ChatML preset.
      Then, in Settings (right-hand side of the chat screen), go to Model Initialization -> Flash Attention and turn it on.

  • @SiliconSouthShow
    @SiliconSouthShow A month ago +1

    OLLAMA < LMS

  • @john_blues
    @john_blues A month ago

    It got the Math problem completely wrong. Definitely not "Totally amazing" results.

  • @bobtarmac1828
    @bobtarmac1828 A month ago

    A better LLM? Maybe. But with robotics swelling everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same? Should we cease AI?

    • @dylantoymaker759
      @dylantoymaker759 11 days ago +1

      We should cease capitalism. AI won't take away your job. Your boss will.

  • @MindSetShortsOficial
    @MindSetShortsOficial A month ago

    First!

  • @irabucc469
    @irabucc469 A month ago

    And it can't tell you how many people were killed during the Chinese Cultural Revolution 😂😂😂

  • @chapicer
    @chapicer A month ago

    my download keeps doing GG,GladGGtoGGGseeGGyou!GGHowGGGcanGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG

    • @Dats4794
      @Dats4794 A month ago +1

      It's already fixed; update your Ollama and redownload the model

    • @chapicer
      @chapicer A month ago

      @@Dats4794 thx bro, you helped me a lot