Official Local AI Control in Home Assistant

  • Published: Sep 15, 2024

Comments • 25

  • @goodcitizen4587
    @goodcitizen4587 1 month ago

    Really cool! And your blog links are great.

  • @rafa.visions
    @rafa.visions 29 days ago +1

    I had to disable the llama integration and restart Home Assistant to get the conversation to work!

  • @mice3d
    @mice3d 1 month ago

    I just installed it. The official one actually dims the lights, but as a conversation assistant I think the unofficial one works nicer; maybe I need to work on the prompt.

  • @maglat
    @maglat 15 days ago

    What kind of GPU is recommended: an RTX 3060 8GB or an RTX 4060 16GB?

  • @justinbishop9584
    @justinbishop9584 1 month ago

    Noooo, I just set up the unofficial method 2 days ago from your tutorial XD

    • @fixtse.
      @fixtse. 1 month ago +1

      Jajaja, but that means you already have most of the setup work done. Don't erase the custom integration; test both approaches and see which one works better for you.

    • @justinbishop9584
      @justinbishop9584 1 month ago

      @@fixtse. Already up and running. The supported version is much smoother too.

  • @Airbag888
    @Airbag888 1 month ago

    Another great video... can a Google Coral be used instead of a GPU? I guess not... after waiting ages to be able to buy one, it seems we've moved past it, haha. Maybe only for image recognition of my camera feeds?

    • @fixtse.
      @fixtse. 1 month ago +1

      No, that's why I put so much emphasis on how much RAM it needs to run. Check how much RAM the Coral TPU has.
      But as you mention, it's super efficient at running image recognition models, so for Frigate it's the best.

    • @Airbag888
      @Airbag888 1 month ago

      @@fixtse. I think that would be the main purpose, yes, to use Frigate... or maybe Blue Iris. I also happen to have been gifted an old Nvidia Jetson TX2 and a Nano... any good projects around these for home automation / security? I'm focused on power draw and want to limit it as much as possible, so if these devices can replace bigger ones it's a win for me :)

  • @elisalant
    @elisalant 1 month ago

    Thanks for another great video.
    I am using a Mac, downloaded Ollama, and ran the install via terminal.
    But when I try to insert the URL ….. it does not accept the URL.
    Any ideas?

    • @MrThesoulripper13
      @MrThesoulripper13 1 month ago

      I'm having the same problem. Did you find a fix?

    • @elisalant
      @elisalant 1 month ago

      @@MrThesoulripper13 No, not yet

    • @fixtse.
      @fixtse. 1 month ago

      Hi, please follow the instructions for Ollama in this repository: github.com/valentinfrlch/ha-llmvision?tab=readme-ov-file#ollama
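
      A quick way to test that setup once you've followed those steps (a minimal sketch in Python; the LAN IP below is a hypothetical placeholder, and 11434 is Ollama's default port):

          # Check that an Ollama server is reachable from another machine.
          # If this fails, Ollama is likely listening only on localhost; per
          # the Ollama docs you can set OLLAMA_HOST=0.0.0.0 and restart it
          # (on macOS: launchctl setenv OLLAMA_HOST "0.0.0.0", then relaunch
          # the Ollama app).
          import urllib.request

          OLLAMA_URL = "http://192.168.1.50:11434"  # hypothetical LAN address

          try:
              with urllib.request.urlopen(OLLAMA_URL, timeout=5) as resp:
                  print(resp.read().decode())  # a healthy server replies "Ollama is running"
          except OSError as exc:
              print(f"Cannot reach Ollama at {OLLAMA_URL}: {exc}")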

  • @matroska80
    @matroska80 1 month ago

    What about using a Snapdragon Elite dev kit (desktop) with 32 GB of memory?

    • @fixtse.
      @fixtse. 1 month ago

      As far as I can tell, I don't think it's supported by Ollama yet, but the hardware architecture will allow it, so it probably will be at some point.

  • @neilos2085
    @neilos2085 1 month ago

    Any recommendations on hardware to use if you need to buy something? Thinking of the same model the HA team uses.

    • @neilos2085
      @neilos2085 1 month ago

      I have an i7 NUC, but I don't think it will be powerful enough and it would need some kind of eGPU. It's hard to know where to start.

    • @fixtse.
      @fixtse. 1 month ago

      I would go for an Nvidia 3060 with 12 GB of VRAM, just to have a little more wiggle room.

    • @neilos2085
      @neilos2085 1 month ago

      @@fixtse. As a whole PC, though, I mean. Where is it best to start?

    • @fixtse.
      @fixtse. 1 month ago +1

      Mmmm, it always depends on the budget, BUT to keep it comment-length: I would go for something like an i3 or i5 for the CPU (or its AMD equivalent), 8-16 GB of RAM, an RTX 3060 (12 GB VRAM), a 1TB NVMe/SSD or a 1TB HDD + 256GB NVMe/SSD, and probably a 600-650W (silver-rated) power supply. This will let you run not only the text AI tasks but also things like TTS, STT, and image generation, plus most of the services you would want to run locally, on one server.
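
      As a rough sanity check on why 12 GB of VRAM gives wiggle room, here is a back-of-the-envelope sketch (the model size and the 20% overhead factor are assumptions, not figures from the video):

          # Back-of-the-envelope VRAM estimate for a quantized local LLM.
          # Assumes the quantized weights dominate memory use, with rough
          # extra headroom for the KV cache and runtime buffers.
          params_billions = 8    # e.g. an 8B-parameter model (assumption)
          bits_per_weight = 4    # 4-bit (Q4) quantization
          overhead = 1.2         # ~20% extra; an assumption, not a measurement

          vram_gb = params_billions * bits_per_weight / 8 * overhead
          print(f"~{vram_gb:.1f} GB of VRAM")  # ~4.8 GB, comfortable on a 12 GB card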

    • @erans
      @erans 1 month ago

      @@fixtse. What about using a different computer on the local network that has a very strong GPU? Do you think that's possible?

  • @EvgenMo1111
    @EvgenMo1111 1 month ago

    For some reason it doesn't work. Access is open and Ollama has been reinstalled, but it still doesn't work.

    • @fixtse.
      @fixtse. 1 month ago

      Hi, it might be due to the host configuration; try following these instructions: github.com/valentinfrlch/ha-llmvision?tab=readme-ov-file#ollama