Natural Language to SQL with Google Gemma: A Comprehensive Guide

  • Published: 19 Oct 2024

Comments • 25

  • @kimnilsson992
    @kimnilsson992 8 months ago +3

    WOW, even I, who knows nothing about SQL or how to train an LLM, learned so much from your presentation. Thank you!

  • @jonzh4720
    @jonzh4720 6 months ago +1

    Thanks for the straightforward explanation. 👍🏻 BTW, how should I evaluate the fine-tuned model's results to tell whether it's good enough to use?
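
One simple starting point for this question (an editorial sketch, not from the video): score the fine-tuned model's generated SQL against gold queries with normalised exact-match accuracy. All names and data below are illustrative.

```python
# Hedged sketch: exact-match evaluation for NL-to-SQL output.
# Normalising whitespace, case, and trailing semicolons keeps cosmetic
# differences from counting as errors.

def normalize_sql(sql: str) -> str:
    # Collapse whitespace, drop a trailing semicolon, lowercase everything
    return " ".join(sql.strip().rstrip(";").split()).lower()

def exact_match_accuracy(predictions, references) -> float:
    hits = sum(
        normalize_sql(p) == normalize_sql(r)
        for p, r in zip(predictions, references)
    )
    return hits / len(references)

preds = ["SELECT name FROM users WHERE age > 30;", "SELECT * FROM orders"]
golds = ["select name from users where age > 30", "SELECT id FROM orders"]
print(exact_match_accuracy(preds, golds))  # 0.5 — first pair matches after normalisation
```

Exact match is strict (semantically equivalent queries can differ in text), so a stronger check is execution accuracy: run both the predicted and gold query against the same database and compare result sets.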

  • @jinfengma
    @jinfengma 7 months ago

    Amazing, great job! This clearly shows how we can fine-tune Gemma. Looking forward to the next video.

  • @pcprinceton
    @pcprinceton 8 months ago

    Wonderful, thanks for the video! After these changes it performed somewhat better:

    from peft import LoraConfig

    lora_config = LoraConfig(
        lora_alpha=128,
        lora_dropout=0.05,
        r=256,
        bias="none",
        target_modules="all-linear",
        task_type="CAUSAL_LM",
    )
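
For readers trying variants like the one above: a `LoraConfig` is typically applied with `peft.get_peft_model` before training. The sketch below is an editorial illustration (imports are deferred into the function so the snippet only needs `peft` installed when actually run); names are assumptions, not code from the video.

```python
# Hedged sketch: wrap a base model with a LoRA adapter before training.

def apply_lora(model, lora_config):
    from peft import get_peft_model  # deferred import; requires `pip install peft`
    peft_model = get_peft_model(model, lora_config)
    # Prints how small the trainable parameter fraction is with LoRA
    peft_model.print_trainable_parameters()
    return peft_model
```

Higher `r` and `lora_alpha` (as in the comment above) trade more trainable parameters and memory for potentially better quality.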

  • @allanng78
    @allanng78 3 months ago

    Hi,
    Nice video. Can you make a video on running Gemma on a local machine using PyTorch? Thank you!

  • @Rahul-ch9eh
    @Rahul-ch9eh 7 months ago +1

    Great content! I was wondering how I should save the model and push it to the Hugging Face Hub after fine-tuning?
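
A common pattern for this (an editorial sketch under assumptions: `trainer` is the fine-tuning trainer from the video, and the repo id is a placeholder you would replace):

```python
# Hedged sketch: save the fine-tuned adapter locally, then push it to the Hub.
# Requires a one-time `huggingface-cli login` (or HF_TOKEN) for the push step.

def save_and_push(trainer, tokenizer, repo_id="your-username/gemma-text-to-sql"):
    out_dir = "gemma-text-to-sql-adapter"
    # Save the (LoRA) weights and tokenizer to a local directory
    trainer.model.save_pretrained(out_dir)
    tokenizer.save_pretrained(out_dir)
    # Upload both to the Hugging Face Hub under repo_id
    trainer.model.push_to_hub(repo_id)
    tokenizer.push_to_hub(repo_id)
```

`save_pretrained` and `push_to_hub` are standard methods on transformers/peft models and tokenizers; with a LoRA setup this saves only the adapter, not the full base model.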

  • @Joy_jester
    @Joy_jester 8 months ago

    Hey, great video. I wanted to know good sources of information on generative AI and LLMs; I'm mostly interested in training efficient LLMs. Thanks.

    • @bhattbhavesh91
      @bhattbhavesh91  8 months ago +1

      Sure, I'll create a video on this soon!

  • @parthapratimdas4051
    @parthapratimdas4051 8 months ago

    Thanks for the video, bro! Do you have the code on GitHub?

    • @bhattbhavesh91
      @bhattbhavesh91  8 months ago

      Yes, I do! Check the description section. I'm glad you liked the video.

  • @pinkfloyd2642
    @pinkfloyd2642 8 months ago

    Thank you for the video. How can I read spreadsheet data with Gemma and perform mathematical operations on it?

    • @bhattbhavesh91
      @bhattbhavesh91  8 months ago

      I'm glad you liked the video, I'll create a video on this soon!

  • @__________________________6910
    @__________________________6910 8 months ago

    Great

  • @souvickdas5564
    @souvickdas5564 7 months ago

    How can I avoid repetitive responses from model.generate()?
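
The usual levers here (an editorial note, not from the video) are generation options such as `repetition_penalty` and `no_repeat_ngram_size` in `model.generate(...)`. The no-repeat-n-gram idea itself is easy to see in plain Python; the toy sketch below works over words instead of token ids:

```python
# Hedged sketch: transformers' generate() accepts e.g.
#   model.generate(**inputs, max_new_tokens=200, do_sample=True,
#                  temperature=0.7, repetition_penalty=1.2, no_repeat_ngram_size=3)
# no_repeat_ngram_size=n bans any token that would complete an n-gram that
# already occurred in the output. Toy word-level illustration:

def banned_next_words(generated, n=3):
    """Words that would repeat an n-gram, given the last n-1 generated words."""
    if len(generated) < n:
        return set()
    prefix = tuple(generated[-(n - 1):])
    banned = set()
    for i in range(len(generated) - n + 1):
        # Every earlier occurrence of the current prefix bans its continuation
        if tuple(generated[i:i + n - 1]) == prefix:
            banned.add(generated[i + n - 1])
    return banned

print(banned_next_words(["the", "cat", "sat", "on", "the", "cat"]))  # {'sat'}
```

For SQL output, n-gram blocking should be used gently (valid queries legitimately repeat keywords), so a mild `repetition_penalty` is often the safer first try.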

  • @deepjyotibaishya7576
    @deepjyotibaishya7576 4 months ago

    Now, how do I download this AI model and use it locally?

  • @souvickdas5564
    @souvickdas5564 7 months ago

    How can I save the fine-tuned model locally? Please provide the code for that.
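
A minimal sketch for saving and reloading locally (editorial, with illustrative paths; imports are deferred into the function so they are only needed when you actually load):

```python
# Hedged sketch: persist a LoRA-fine-tuned model to disk and load it back later.

def save_locally(model, tokenizer, out_dir="./gemma-sql-finetuned"):
    model.save_pretrained(out_dir)      # writes adapter (or full) weights + config
    tokenizer.save_pretrained(out_dir)  # writes tokenizer files alongside them

def load_locally(out_dir="./gemma-sql-finetuned"):
    from peft import AutoPeftModelForCausalLM   # loads base model + adapter together
    from transformers import AutoTokenizer
    model = AutoPeftModelForCausalLM.from_pretrained(out_dir, device_map="auto")
    tokenizer = AutoTokenizer.from_pretrained(out_dir)
    return model, tokenizer
```

`AutoPeftModelForCausalLM` reads the adapter config in `out_dir`, downloads/loads the referenced base model, and attaches the adapter, so one directory is enough to restore the fine-tuned setup.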

  • @ernestoherreralegorreta137
    @ernestoherreralegorreta137 7 months ago

    Wait until you find that it appends a secret "diversity constraint" to your original query every time you search for, say, "whıte paint" or the like.
    Thanks but no thanks.

  • @hemachandhers
    @hemachandhers 8 months ago

    After fine-tuning, if I ask a general question it still gives an SQL response:

    text = "Quote: Our doubts are traitors,"
    device = "cuda:0"
    inputs = tokenizer(text, return_tensors="pt").to(device)
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

    # Output:
    # Quote: Our doubts are traitors, and we must not doubt.
    # Context: CREATE TABLE table_name_7 (doubt VARCHAR,

    • @bhattbhavesh91
      @bhattbhavesh91  8 months ago

      Because you have fine-tuned the LLM for your use case!