Developing Local AI Copilots with LangChain, NVIDIA NIM, and FAISS | LLM App Development

  • Published: 18 Nov 2024

Comments • 7

  • @luisangelhernandezmiranda437 • 29 days ago

    I’m really interested in doing it. Could I run it on an RTX 4060 Dual or RTX 4070 Dual?

  • @sarpsomer • 2 months ago

    Nice outline. 👍

  • @gdc6244 • 2 months ago

    Wow amazing!

  • @timothydavid1069 • 2 months ago

    What are the hardware specifications of the PC/laptop this is being run on? It seems extremely fast for a local LLM.

    • @NVIDIADeveloper • 2 months ago

      It's running on a desktop machine with an RTX A6000 GPU.

    • @Ronaldograxa • 1 month ago

      Thank you for the video! I found it really interesting, but I’m struggling a bit to understand why the demonstration uses such an expensive GPU, especially since most of us don’t have access to that kind of hardware. It would be awesome to see how this can be achieved with more accessible resources.

    • @timothydavid1069 • 29 days ago

      @@Ronaldograxa At this stage it's still possible, it just takes a lot more time than shown in this video. In the meantime, the main workaround is using hosted APIs to offload the hardware demands needed to achieve that kind of speed.
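
The API route suggested in the last reply can be sketched as follows. This is a minimal sketch, not code from the video: it assumes NVIDIA's hosted NIM catalog endpoint (`integrate.api.nvidia.com`), which exposes an OpenAI-compatible chat API, and a catalog model name (`meta/llama3-8b-instruct`) that may change over time. An `NVIDIA_API_KEY` environment variable is required for the actual request; without it, the script only builds the payload.

```python
import json
import os
import urllib.request

# Assumption: NVIDIA's hosted NIM catalog base URL (OpenAI-compatible API).
NIM_URL = "https://integrate.api.nvidia.com/v1/chat/completions"


def build_payload(prompt, model="meta/llama3-8b-instruct", max_tokens=256):
    """Assemble the OpenAI-style chat request body a NIM endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# Only make the network call when an API key is actually configured.
if os.environ.get("NVIDIA_API_KEY"):
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(build_payload("What is FAISS?")).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Print the model's reply from the OpenAI-style response envelope.
    print(body["choices"][0]["message"]["content"])
```

Because the heavy lifting happens on NVIDIA's servers, response latency no longer depends on the local GPU, which is how the speed in the video can be approximated without an RTX A6000.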