Retrieval Augmented Generation (RAG) vs In-Context-Learning (ICL) vs Fine-Tuning LLMs

  • Published: 4 Jul 2024
  • #ai #rag #llm #prompt
    This video is a simplified, beginner-friendly explanation of Retrieval Augmented Generation (RAG) vs In-Context Learning (ICL) vs Fine-Tuning LLMs; three concepts related to ways of using Large Language Models and increasing the accuracy of their responses. I have already explained each of these separately in my previous videos, so I thought I would give you all of them in one place (a short code sketch contrasting the three approaches follows at the end of this description).
    Here are some relevant hands-on code videos:
    Mixture of Agents (MoA): • 🔴 Mixture of Agents (M...
    AI Agents With CrewAI And Ollama: • 💯 FREE Local LLM - AI ...
    Learn more about the main AI concepts here:
    github.com/Maryam-Nasseri/AI-...
    Key terms and concepts used in the video:
    AI agents, RAG, GPT-4o, Gemini 1.5 Pro, ICL, fine-tuning, large language models, zero-shot, few-shot learning, many-shot learning, context, context window, multimodal models, generative AI, LLM evaluation, image classification, natural language processing
    Don't forget to subscribe:
    / @analyticscamp
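    A minimal sketch of how the three approaches differ in practice. This is not code from the video; the call_llm helper, the toy keyword retriever, and all other names here are hypothetical placeholders for whatever model and retrieval stack you actually use.

```python
# Hypothetical helper; stands in for any hosted or local LLM API.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own model call here")

# In-Context Learning (ICL): put a few worked examples directly in the prompt.
def answer_with_icl(question: str, examples: list[tuple[str, str]]) -> str:
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)  # few-shot demonstrations
    return call_llm(f"{shots}\nQ: {question}\nA:")

# Retrieval Augmented Generation (RAG): fetch relevant text, then prompt with it.
def answer_with_rag(question: str, documents: list[str]) -> str:
    words = question.lower().split()
    # Toy keyword retriever; real systems use embeddings and a vector store.
    relevant = [d for d in documents if any(w in d.lower() for w in words)]
    context = "\n".join(relevant[:3])
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

# Fine-tuning: updates the model's weights on task data (a training run,
# not a prompt-time change), so there is no one-line equivalent here.
```

    In short, ICL and RAG change what the model sees at inference time, while fine-tuning changes the model itself.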
  • Science

Comments • 5

  • @analyticsCamp  1 month ago

    Hey everyone, I have already explained RAG, ICL, and fine-tuning separately in my previous videos, so I thought I would give you all of them in one place!

  • @RizwanYe  1 month ago  +1

    Good explanation 👏

  • @optiondrone5468  1 month ago  +1

    Medical images read better than by human operators! If we keep going at this rate, soon many general practitioners in the UK will have no jobs.

    • @analyticsCamp  1 month ago

      Now imagine if we combine this with agentic power! But I still think it's too early to make a definitive judgement, as many of these papers report only their best results/round! Thanks for watching though :)