Introducing Gemma - 2B 7B 6 Trillion Tokens

  • Published: 7 Oct 2024
  • Science

Comments • 85

  • @shApYT
    @shApYT 7 months ago +118

    Prepare for Gemma-Orca-Wizard-Falcon-Hermes-7B-Uncensored

  • @amandamate9117
    @amandamate9117 7 months ago +27

    If you want to test reasoning, try this slightly changed riddle: "I hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. The next day I hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? Take a deep breath and proceed step by step." 99% of LLMs will say it needs 10 hours, including gemma-7B.
    If you change the prompt by adding an example riddle (a one-shot prompt) with a similar structure, the AI can learn the pattern. For example, a riddle about 3 t-shirts drying in 3 hours, then 6 t-shirts also drying in 3 hours, will help the AI understand that 14 t-shirts would only need 5 hours to dry. (See the sketch after this thread.)

    • @David_Box
      @David_Box 7 months ago +10

      According to ChatGPT, it takes "≈21.43 minutes", so obviously it knows something we don't

    • @savvyvideos6454
      @savvyvideos6454 7 months ago +4

      also, just try removing the "take a deep breath and proceed step by step" from your original prompt...

    • @amandamate9117
      @amandamate9117 7 months ago +2

      @@savvyvideos6454 Removing "take a deep breath and proceed step by step" won't change the output. I tried it on several models.

    • @akhileshchander5307
      @akhileshchander5307 7 months ago

      >>> i hang 7 shirts out to dry in the Sun. After 5 hours all shirts are dry. The next day i hang 14 shirts out to dry. The conditions are the same. How long will it take to dry 14 shirts? take a deep breath and proceed step by step
      gemma2b:
      The total time taken to dry 7 shirts is 5 hours.
      Since the shirts are hanging in the same conditions, we can assume that the drying process follows the same rate.
      Therefore, to dry 14 shirts, it will also take 5 hours.

    • @WeebLabs
      @WeebLabs 7 months ago +5

      GPT-4 responds correctly to this riddle.
      "If 7 shirts dry in 5 hours under certain conditions, and the next day the conditions are exactly the same, 14 shirts will also dry in 5 hours, provided they all receive the same exposure to the drying conditions."

  • @MikewasG
    @MikewasG 7 months ago +6

    🎉🎉🎉 Can’t wait for the fine-tune video! Thanks for sharing!

  • @JosephLiaw
    @JosephLiaw 7 months ago +8

    It would be exciting to see if Gemma can become as popular as LLaMA

  • @amandamate9117
    @amandamate9117 7 months ago +3

    Top video again. I hope we get some monster fine-tuned version by the end of the week

    • @samwitteveenai
      @samwitteveenai  7 months ago

      Give it a few days, but yes, I think a lot of cool models are coming

  • @EstebanAstudillo
    @EstebanAstudillo 7 months ago +12

    Gemma is available with Ollama, FYI

  • @fonylew
    @fonylew 7 months ago

    So fast! Very informative, many thanks!

  • @dataprospect
    @dataprospect 7 months ago

    Don't forget the StarCoder and SantaCoder models. They are among the earliest open-source models that standardized data quality checks and pipelines, and they inspired so many new models.

  • @ThoughtLineQuotes
    @ThoughtLineQuotes 7 months ago

    Really cool, I thought there were 1 million tokens. Thanks for the video.

  • @2beJT
    @2beJT 7 months ago +5

    Google: "Gemma"
    Me: Gimmie
    Google: NO, GEM-MA.. GEMMA!
    Me: Gimmie Gimmie

  • @proterotype
    @proterotype 7 months ago +1

    Looking forward to the Hugging Face video and what the community is gonna do with this

  • @micbab-vg2mu
    @micbab-vg2mu 7 months ago

    Thank you for the great video:)

  • @石万里-g4y
    @石万里-g4y 7 months ago

    thanks!

  • @igor1591
    @igor1591 7 months ago

    nice timing!

  • @picklenickil
    @picklenickil 7 months ago

    My guy's going total Pokémon on this.
    Evolution after evolution

  • @maharishicoding440
    @maharishicoding440 6 months ago

    00:00 Introduction of various open source language models
    01:19 Google has open-sourced Gemma, a suite of models
    02:34 Introducing Gemma - 2B 7B 6 Trillion Tokens
    03:46 Models trained on TPU v5e with impressive benchmarks.
    04:57 Gemma's terms of use and access request process
    06:02 Using Keras 3.0 and Keras NLP for NLP models
    07:11 Gemma 2B 7B 6 trillion tokens model's potential for multilingual fine-tuning.
    08:18 Gemma 2B 7B 6 Trillion Tokens for NLP
    Crafted by Merlin AI.

  • @chiaracoetzee
    @chiaracoetzee 7 months ago

    FYI, you say the weights are English-only, but in my tests it was able to respond to queries in French. It's possible they were going for an English-only dataset but accidentally brought in some data in other languages.

    • @samwitteveenai
      @samwitteveenai  7 months ago

      Yeah, this is quite common, especially with languages like French, Spanish, etc. A lot of other languages appear even in English text, and when you have 6 trillion tokens that can add up to a lot. Also, the tokenizer is a multilingual tokenizer (like the full-size Gemini models), so this can help as well.

  • @avi7278
    @avi7278 7 months ago +3

    Gemmani?

  • @GrygD-d6w
    @GrygD-d6w 7 months ago

    nice

  • @mshonle
    @mshonle 7 months ago +1

    I wonder if Gemma is quantized?

    • @samwitteveenai
      @samwitteveenai  7 months ago +6

      There are quantized versions of it, but what they have released is the full-resolution model

  • @yusufnzm
    @yusufnzm 7 months ago +2

    Can you provide the link for the KerasNLP thing?

    • @samwitteveenai
      @samwitteveenai  7 months ago +1

      Sure, here: ai.google.dev/gemma/docs/get_started
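
    For reference, a minimal sketch based on the Keras NLP quickstart at that link (the backend choice and preset name follow those docs but should be treated as assumptions; Kaggle credentials and acceptance of the Gemma license are also required):

        import os
        os.environ["KERAS_BACKEND"] = "jax"  # Keras 3 also supports "tensorflow" or "torch"

        import keras_nlp

        # Load the 2B base Gemma model from its KerasNLP preset.
        gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("gemma_2b_en")

        # Generate a short completion.
        print(gemma_lm.generate("What is Keras?", max_length=64))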

  • @Eboher
    @Eboher 7 months ago

    like your video😃

  • @user-qr4jf4tv2x
    @user-qr4jf4tv2x 7 months ago +2

    6T? You mean I can just plug an entire book into a single prompt?

    • @user-qr4jf4tv2x
      @user-qr4jf4tv2x 7 months ago

      Oh, never mind

    • @samwitteveenai
      @samwitteveenai  7 months ago +1

      No, it is trained on 6T tokens, as compared to LLaMA 2 being trained on 2T tokens

  • @hidroman1993
    @hidroman1993 7 months ago

    "It's hard to pronounce Gemma instead of Gemini" is a feature, not a bug

  • @hqcart1
    @hqcart1 7 months ago +2

    It's a simple fact: when your model is NOT cutting edge, they never open-source it.
    Seems Gemma is going to be used on Android, and that's that.

  • @MrErick1160
    @MrErick1160 7 months ago +1

    Can you give some practical applications of such a model? I'm a data science student and looking at how to use these models for meaningful purposes

    • @jmann277
      @jmann277 7 months ago +1

      Smaller models can fit on smaller devices. They're also cheaper. Out of the box they might not work great, but maybe you can fine-tune them for your task.

  • @ShanyGolan
    @ShanyGolan 7 months ago +1

    Tried the 2B. Wow, it sucks. 😅😅
    I asked it for the derivative of x^3; it couldn't do it. Lol. What??

  • @pigeon_official
    @pigeon_official 7 months ago +1

    I'm just waiting for LLaMA 3 :(

    • @samwitteveenai
      @samwitteveenai  7 months ago +1

      I think it may keep getting delayed as the other open models getting released are raising the bar.

    • @pigeon_official
      @pigeon_official 7 months ago

      @@samwitteveenai Wasn't LLaMA 3 supposed to be really powerful and almost a really, really primitive "AGI"? That's what I got from that little Zuckerberg speech

    • @pylotlight
      @pylotlight 7 months ago

      @@samwitteveenai I don't quite understand LLaMA vs Gemma. Aren't they both models? Why does it sound like Gemma would run on top of LLaMA? And how does llama.cpp allow any model to be run on it? I don't understand the layers here.

    • @samwitteveenai
      @samwitteveenai  7 months ago +1

      @@pylotlight It is just a model (in 2 different sizes). There are versions for cpp and other frameworks so it can run in various setups, but at the end of the day both Gemma and LLaMA are models

  • @stickmanland
    @stickmanland 7 months ago +1

    Woah! Open source? Google?

    • @samwitteveenai
      @samwitteveenai  7 months ago +2

      Maybe not fully open source, but certainly a good step in the right direction

    • @clray123
      @clray123 7 months ago +1

      The answer is "no".

    • @NicolasEmbleton
      @NicolasEmbleton 7 months ago

      It's open weights. Not open source. Still nice but not all the way.

    • @clray123
      @clray123 7 months ago

      @@NicolasEmbleton Not even open weights; the proprietary license comes with strings attached, just as for LLaMA 2.

    • @blender_wiki
      @blender_wiki 7 months ago +1

      Maybe you don't know, but Google has open-sourced many, many projects over its history, and also ML models. 🤷🏿‍♀️🤷🏿‍♀️🤷🏿‍♀️

  • @Wanderer2035
    @Wanderer2035 7 months ago +1

    It’s censored so it’s not really that good

    • @samwitteveenai
      @samwitteveenai  7 months ago

      The instruct models are like that, but you can fine-tune the base model to be however you want.

  • @just.play1ng
    @just.play1ng 7 months ago +1

    Is this real 😂?

  • @123arskas
    @123arskas 7 months ago

    For a minute I thought the context window is 6 trillion tokens. Good content

    • @samwitteveenai
      @samwitteveenai  7 months ago +1

      now that would be nice lol

    • @123arskas
      @123arskas 7 months ago

      @@samwitteveenai The Hugging Face version works now

  • @russelllapua4904
    @russelllapua4904 7 months ago

    why tf did they name this Gemma?

  • @davk
    @davk 7 months ago +1

    Gemini is getting significantly worse now. The same happened with GPT-3, which despite upgrades lost a lot of quality.

    • @samwitteveenai
      @samwitteveenai  7 months ago

      Worse in what way, and which Gemini are you noticing it on?

    • @blender_wiki
      @blender_wiki 7 months ago

      What are you talking about? The public chat or Gemini 1.5 on Google AI Studio?

  • @IdPreferNot1
    @IdPreferNot1 7 months ago

    Ollama already has it on its model page; just pick the one you want and run it on Ollama with three words. (Rough sketch below.)
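
    A rough sketch of doing the same thing from Python with the ollama client package (the package, its generate call, and the gemma:2b tag are assumptions; it also assumes a local Ollama server is running and the model has already been pulled):

        import ollama

        # Assumes the Gemma model has been pulled, e.g. by running `ollama pull gemma:2b`
        # on the command line, and that the Ollama server is running locally.
        response = ollama.generate(
            model="gemma:2b",
            prompt="Explain in one sentence what Gemma is.",
        )
        print(response["response"])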