Testing Llama 3.1 multimodal capabilities using Meta.ai playground

  • Published: 21 Oct 2024
  • This video shows Llama 3.1 multimodal capabilities using the Meta.ai playground and how to get access to the gated repo
    #ai #llama3 #chatgpt #ml #datascience
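
For reference, a minimal sketch of loading the gated Llama 3.1 weights from Hugging Face once access has been approved on the model page. The repo id meta-llama/Llama-3.1-8B-Instruct and the placeholder token are assumptions; substitute whichever gated checkpoint and personal access token you actually have.

```python
# Sketch: loading a gated Llama 3.1 repo after access is granted on Hugging Face.
# Assumes repo id "meta-llama/Llama-3.1-8B-Instruct" and a valid HF access token.
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

login(token="hf_...")  # token from huggingface.co/settings/tokens

model_id = "meta-llama/Llama-3.1-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Simple text-generation check that the gated weights loaded correctly.
prompt = "Describe what a multimodal model can do."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```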

Comments • 2

  • @elmo_ki_dunia • 2 months ago +1

    But you talked about the Instruct model, which only generates text :( not images

    • @datascienceinyourpocket • 2 months ago

      A really nice observation. The definition I explained in that comment was from the perspective of text-based models. I'm not sure how the definition changes for multimodal LLMs, since text completion won't make sense for images. Let me check and get back to you. Thanks