Build With AI, Fine Tuning Gemini - Roya Kandalan

  • Published: Jan 18, 2025

Comments • 6

  • @EvgenSuit • 24 days ago

    As far as I know, it's not possible to use system instructions or structured output with fine-tuned Gemini models.

  • @juliobecerra-o4n • 3 months ago +1

    Is it possible to use a tuned model with function calling?

    • @reserseAI • 2 months ago

      Yes

    • @yashas9271 • 10 days ago

      Yeah, but it's time-consuming for a large dataset if you want responses based only on your dataset's info.
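For context on the exchange above: with the Gemini API, function calling works by passing the model a list of function declarations (JSON-schema-style descriptions of tools), and the model replies with the name and arguments of a function for your code to run. The sketch below shows only the declaration shape and the local dispatch step; the function name `get_weather` and its fields are hypothetical, no API call is made, and whether a given tuned model accepts tools depends on the API version you use.

```python
# Sketch of a Gemini-style function declaration plus local dispatch.
# `get_weather` and its schema are hypothetical examples, not real tools.

def get_weather(location: str) -> dict:
    """Hypothetical local tool the model could ask us to call."""
    return {"location": location, "forecast": "sunny"}  # stubbed result

# Declaration shaped like what the API expects in its tools parameter.
get_weather_declaration = {
    "name": "get_weather",
    "description": "Return the weather forecast for a location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"},
        },
        "required": ["location"],
    },
}

TOOLS = {"get_weather": get_weather}

def dispatch(function_call: dict) -> dict:
    """Run the function call the model returned, producing the result
    you would send back to the model in a follow-up request."""
    fn = TOOLS[function_call["name"]]
    return fn(**function_call["args"])

# Simulated model output asking us to run the tool:
result = dispatch({"name": "get_weather", "args": {"location": "Paris"}})
print(result)
```

The key point is that your code, not the model, executes the function; the model only chooses which declared tool to invoke and with what arguments.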

  • @101RealTalker • 3 months ago +2

    I just plain don't get it. Maybe I misunderstand what fine-tuning means; maybe I don't even need it for my use case... In the end I have one folder on my desktop with a measly 1.4 GB of markdown files, totaling over 3 million words of research, that I want Gemini 1.5 Pro to represent as a mouthpiece.
    The quickest way to explain it: master-level needle-in-a-haystack retrieval. I want it to take all the files into macro context for each question and give me a higher-order perspective across the files that only artificial intelligence could possibly keep a handle on, comprehend?
    How on earth can I achieve this, please!? Thank you.🙏

    • @yashas9271 • 10 days ago

      Use pre-trained models available in Hugging Face Transformers and implement RAG over your documents. Don't use Gemini, because I don't know whether you can implement RAG there, and Gemini isn't the best for training on your own data, so use other pre-trained NLP models instead.
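As a rough illustration of the RAG approach suggested above: the retrieval step scores each document chunk against the question and feeds only the top matches into the prompt, so the model answers from your files rather than its training data. This toy sketch uses plain word overlap instead of real embeddings (a real system would use an embedding model and a vector store), and the file names and contents are made up:

```python
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a text."""
    return Counter(text.lower().split())

def score(query: str, doc: str) -> int:
    """Toy relevance score: overlap of query words with the doc.
    A real RAG system would use embedding similarity instead."""
    q, d = tokenize(query), tokenize(doc)
    return sum(min(q[w], d[w]) for w in q)

def retrieve(query: str, docs: dict, k: int = 2) -> list:
    """Return the k chunk names most relevant to the query."""
    ranked = sorted(docs, key=lambda name: score(query, docs[name]), reverse=True)
    return ranked[:k]

# Made-up markdown chunks standing in for a large research corpus.
docs = {
    "notes_a.md": "fine tuning adapts model weights to a labeled dataset",
    "notes_b.md": "retrieval augmented generation injects documents into the prompt",
    "notes_c.md": "gardening tips for spring tomatoes",
}

top = retrieve("how does retrieval augmented generation work", docs)
print(top)  # the retrieval-augmented-generation chunk should rank first
```

With the top chunks retrieved, you would concatenate them into the prompt ahead of the question, which is how RAG sidesteps fine-tuning for "answer only from my files" use cases like the 3-million-word folder described above.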