Multimodal AI: LLMs that can see (and hear)

  • Published: Nov 25, 2024

Comments •

  • @ShawhinTalebi  5 days ago +3

    I'm excited to kick off this new series! Check out more resources and references in the description :)

  • @mohsinshah9933  5 days ago +2

    Hi Shaw Talebi,
    Please make some videos on LangChain, LangGraph, and AI agents.
    Your teaching style is the best and easy to follow.

    • @ShawhinTalebi  5 days ago

      Thanks for the suggestion! I added that to my list :)

  • @ifycadeau  5 days ago

    WOOO 🎉 you’re back!!

  • @buanadaruokta8766  2 days ago

    great video!

  • @sam-uw3gf  5 days ago

    great video, do videos on LangChain and AI agents

  • @jonnylukejs  3 days ago

    I have versions of all of the above, both open-sourced and not.

  • @mysteryman9855  4 days ago

    I am trying to make an avatar that can control my computer with Open Interpreter and the HeyGen live streaming API.

    • @ShawhinTalebi  1 day ago

      Sounds like an awesome project! Claude's computer use capability might be helpful too: docs.anthropic.com/en/docs/build-with-claude/computer-use
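
      For anyone curious, here is a minimal sketch of what a call to that computer-use beta looks like in Python. The model name, tool type, and beta flag below are copied from the linked docs as of late 2024 and may change; treat this as an illustration, not the exact setup for the project above.

        import anthropic

        # Reads ANTHROPIC_API_KEY from the environment.
        client = anthropic.Anthropic()

        response = client.beta.messages.create(
            model="claude-3-5-sonnet-20241022",
            max_tokens=1024,
            tools=[{
                "type": "computer_20241022",   # Anthropic's built-in computer-use tool
                "name": "computer",
                "display_width_px": 1024,
                "display_height_px": 768,
            }],
            messages=[{"role": "user",
                       "content": "Open a browser and go to example.com"}],
            betas=["computer-use-2024-10-22"],
        )

        # Claude replies with tool_use actions (screenshot, click, type, ...);
        # an agent loop on your side executes them and returns the results.
        print(response.content)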

  • @Ilan-Aviv  4 days ago

    Use dark mode, man!!!
    I'll skip this video.

    • @ShawhinTalebi  1 day ago +1

      Thanks for the suggestion. I hadn't considered that before, but will experiment with it in future videos :)

    • @Ilan-Aviv  1 day ago

      @ShawhinTalebi Many, if not most, developers work in low-light spaces in dark mode. When they get this white-blue splash of light, it kills the eyes. Also, blue light can be damaging over the long run.
      Just telling you so you know.