Embeddings, Transformers, RLHF: Three key ideas to understand ChatGPT - Luca Baggi

  • Published: 13 Sep 2024
  • PyCon Italia 2024
    Elevator Pitch:
    Everyone is using ChatGPT, but few know how it works. This talk will walk the audience through the main ideas powering ChatGPT, pulling back the curtain of magic around the tool.
    Description:
    ChatGPT has become a groundbreaking tool, transforming how professionals in various industries work. However, while many articles focus on "the 30 prompts everybody needs to know", they often overlook the underlying technology of ChatGPT.
    To truly understand ChatGPT, it's important to comprehend three key concepts:
    1. **Embeddings**: how Large Language Models (LLMs) convert words and phrases into numerical values, allowing them to interpret natural language effectively.
    2. **Transformers**: advanced deep-learning modules that enable LLMs to capture semantic connections within text, even between words that are far apart.
    3. **RLHF (Reinforcement Learning from Human Feedback)**: a technique to align AI models with our intended purposes and ethical standards.
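    The first of these ideas can be illustrated with a toy sketch (not the talk's code): embeddings map words to vectors, and geometric closeness stands in for semantic similarity. The word vectors below are made up for illustration; real LLMs learn embeddings with hundreds or thousands of dimensions.

    ```python
    import math

    # Hypothetical 3-dimensional embeddings for a few words (illustrative values only).
    embeddings = {
        "king":  [0.8, 0.6, 0.1],
        "queen": [0.7, 0.7, 0.1],
        "apple": [0.1, 0.2, 0.9],
    }

    def cosine_similarity(a, b):
        """Cosine of the angle between two vectors: 1.0 means identical direction."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    # Semantically related words end up closer together than unrelated ones.
    print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (close to 1)
    print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
    ```

    In a real model the same geometry underlies everything downstream: transformers and RLHF both operate on these numerical representations rather than on raw text.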
    In our presentation, we will explore the four primary steps involved in building and training a GPT-like model. Along the way, we will discuss Embeddings, Transformers, and RLHF, but we will not stop at the technical details.
    Drawing on the latest academic studies, we will also discuss both the strengths and limitations of current generative AI models and provide actionable insights to foster their safe and effective adoption.
    Learn more: 2024.pycon.it/...
    #Education #BestPractice #DeepLearning