Multimodal AI: LLMs that can see (and hear)

  • Published: 16 Jan 2025

Comments • 13

  • @ShawhinTalebi  1 month ago +6

    I'm excited to kick off this new series! Check out more resources and references in the description :)

  • @mohsinshah9933  1 month ago +6

    Hi Shaw Talebi,
    Please make some videos on LangChain, LangGraph, and AI agents.
    Your teaching style is the best: simple and clear.

    • @ShawhinTalebi  1 month ago +2

      Thanks for the suggestion! I added that to my list :)

  • @sam-uw3gf  1 month ago +2

    Great video! Please do videos on LangChain and AI agents.

  • @ifycadeau  1 month ago

    WOOO 🎉 you’re back!!

  • @buanadaruokta8766  1 month ago

    great video!

  • @mysteryman9855  1 month ago

    I am trying to make an avatar that can control my computer with Open Interpreter and the HeyGen live-streaming API.

    • @ShawhinTalebi  1 month ago

      Sounds like an awesome project! Claude's computer use capability might be helpful too: docs.anthropic.com/en/docs/build-with-claude/computer-use

  • @Ilan-Aviv  1 month ago

    Use dark mode man!!!
    I'll skip this video

    • @ShawhinTalebi  1 month ago +1

      Thanks for the suggestion. I hadn't considered that before, but will experiment with it in future videos :)

    • @Ilan-Aviv  1 month ago

      @ShawhinTalebi Many, if not most, developers work in low-light spaces in dark mode. A white-blue splash of light like this kills the eyes, and blue light also damages the brain in the long run.
      Just telling you so you know.

  • @jonnylukejs  1 month ago

    I have versions of all of the above, both open-sourced and not.