Adding ChatGPT to Home Assistant for Natural Language Control of a Smart Home

  • Published: 22 Nov 2024

Comments • 33

  • @AntonioBrandao · 17 days ago

    Thank you for your tutorial. I got this set up and it's working. The great thing is that it remembers previous conversations/requests. The only small problem is that it drops off periodically (the screen says "HA not found" and shows a QR code), then comes back again after a short while.

  • @galdakaMusic · 5 months ago +2

    It would be great to have it all local with two RPi 5s: one running HAOS from an SSD with a Hailo-8L for Frigate, and a second one running an Ollama server with the AI Kit. All in a special case with liquid cooling. Maybe that will be possible in the near future.

    • @KPeyanski · 5 months ago +1

      Sounds good indeed. And maybe that will even be possible on one device in the near future :)

    • @NicksStuff · 4 months ago

      Wouldn't *one* fanless Intel N100 do all that for less, while being more flexible, more powerful, and more reliable (no water cooling)?

  • @rogeriocamargo1984 · 5 months ago +3

    Hi Kiril, good video! Do you think it will be possible some day to run the AI locally with Piper and Whisper, without using the internet? I'm asking because I don't think it makes sense to use voice control over the internet, since the philosophy of HA is to run locally.

    • @JoshFisher567 · 5 months ago +2

      You can do that now. I use an integration I had to add through HACS named "Extended OpenAI Conversation", and the setup is the same: as long as you have a local LLM server that exposes an OpenAI-compatible API URL, which they all do, you can point the integration to it and just put in 1234 as the API key.
      Nabu Casa has also been working with Nvidia, using one of their Jetson computers for local AI, but they have to port stuff from x86/CPU-based to GPU-based processing. Some of it has been done, some hasn't.
      The problem with a local LLM right now is cost. Can you have HA work with an LLM and get fast response times? Yes, but you need an Nvidia GPU, or response times can take up to 30 seconds or more on anything CPU-based, depending on the question. Heck, you can even install an LLM on Windows using the Windows Subsystem for Linux and have it work. Probably not ideal, though.
      Honestly, I got my voice controls working: I can create timers and it can tell me the weather outside with no AI. That's honestly all I personally need from my voice assistant. I can open a web browser and type in a question if it's more complicated than that, but I can see the appeal of an all-in-one local solution. I'm using an Espressif Korvo-1 as my voice assistant with micro wake word.
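
      [Editor's note: a minimal sketch of the "point it at a local OpenAI-compatible URL" setup this comment describes, using the openai Python client. The base_url assumes a local Ollama server, which exposes an OpenAI-compatible API on port 11434; the model name is whichever model you have pulled locally, and "1234" is the dummy key the commenter mentions. URL and model name are assumptions, not the integration's actual code.]

          # Hypothetical sketch: talk to a local OpenAI-compatible LLM server
          # instead of api.openai.com. Assumes Ollama on this machine; any
          # OpenAI-compatible server works the same way, only base_url and
          # model change.
          from openai import OpenAI

          client = OpenAI(
              base_url="http://localhost:11434/v1",  # local server, not OpenAI's cloud
              api_key="1234",                        # dummy key; local servers ignore it
          )

          response = client.chat.completions.create(
              model="llama3",  # assumption: whatever model you pulled locally
              messages=[
                  {"role": "system", "content": "You are a smart home assistant."},
                  {"role": "user", "content": "Which lights are on?"},
              ],
          )
          print(response.choices[0].message.content)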

    • @KPeyanski · 5 months ago +2

      Yes, you can, by using Ollama, and I have a video tutorial on it. The only problem for now is that it is in "read only" mode and cannot turn things on/off yet, but I guess this will be fixed soon. Here is the link to my HA Ollama video - ruclips.net/video/yp1IkUavVvc/видео.html

    • @KPeyanski · 5 months ago +1

      @JoshFisher567 Thanks for this comment, it was informative!

    • @JoshFisher567 · 5 months ago

      @KPeyanski This can be done by manually adding the repository to HACS. That said, I have no idea what it takes to become an "official" HACS integration vs a native HA integration, but I imagine a lot of testing is involved. The Extended OpenAI Conversation integration makes you go into HACS and add it via the GitHub link. By default, it only exposes entities already exposed to voice assistants. It won't even answer general questions without some very minor changes. It can do some neat stuff: if you ask "how many lights are on", it will tell you. There is a query you can change that allows you to control what can be sent to OpenAI.
      The issue is that everything now goes to OpenAI, and paying for API calls to turn off lights and other minor stuff adds up. I set a 1 dollar limit, and you can burn through that quickly. It still works perfectly, although the repo doesn't appear to have been updated in a while. It also thought it was smarter than me: when I tried to play a media player, sometimes it would tell me it was already playing when it was paused. It did do some neat things, though, like letting me say "unpause media player" when I have no sentences or aliases using "unpause".
      I'm looking forward to local LLMs, but right now it's the cost issue plus the fact that it's in its early stages. The guy who created HA said they are working on an LLM specifically for HA; Nvidia reached out to them because a lot of people at Nvidia use HA. Time will tell how that works out. I just don't trust anyone that mass-collects data anymore, especially after all the stuff FB and Google, among others, have been caught doing, and how that data is used. At least outside Apple, but that's because of a different business model, as Apple's main revenue stream is hardware sales. In fact, Nvidia passed Apple and MS to become the world's largest company (3.3 trillion or somewhere around there); in January that was 2 trillion. Pretty insane that it's all from AI, and video cards seem to be taking a back seat for actual gamers. I'm just waiting to see how things pan out, as things can change quickly, especially with new technology. Below are some of the features of the integration I'm talking about:
      Ability to call Home Assistant services
      Ability to create automations
      Ability to get data from an external API or web page
      Ability to retrieve the state history of entities
      Option to pass the current user's name to OpenAI via the user message context
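
      [Editor's note: a rough illustration of the mechanism behind "ability to call services": integrations like this one offer the model a tool definition, and the model replies with a structured call instead of prose. The tool name and schema below are invented for this sketch, not the integration's actual code; the HA REST endpoint in the final comment is Home Assistant's standard service-call API.]

          # Hypothetical sketch of OpenAI function calling for smart home control.
          import json
          from openai import OpenAI

          client = OpenAI()  # or a local OpenAI-compatible server, as above

          tools = [{
              "type": "function",
              "function": {
                  "name": "call_ha_service",  # hypothetical name for this sketch
                  "description": "Call a Home Assistant service on an entity.",
                  "parameters": {
                      "type": "object",
                      "properties": {
                          "domain": {"type": "string"},     # e.g. "light"
                          "service": {"type": "string"},    # e.g. "turn_off"
                          "entity_id": {"type": "string"},  # e.g. "light.kitchen"
                      },
                      "required": ["domain", "service", "entity_id"],
                  },
              },
          }]

          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Turn off the kitchen light"}],
              tools=tools,
          )
          # Assumes the model chose the tool rather than answering in prose.
          call = resp.choices[0].message.tool_calls[0]
          print(call.function.name, json.loads(call.function.arguments))
          # A real integration would now POST the arguments to Home Assistant:
          #   /api/services/<domain>/<service> with {"entity_id": ...}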

  • @NBD739 · 3 months ago

    I love the setup. I've bought an Atom Echo and made a setup like this too, but my problem with it is the speech recognition.
    It's TERRIBLE at recognising what I'm saying: it doesn't hear half the words I say and gets the others wrong. It works perfectly fine on my phone, but not on the ESP32.
    I suspect it's the microphone and hardware that are bad, and that's why it sucks.
    I'm not sure what to do to remedy this; the ESP32-S3 doesn't seem much better, as it had the same issue in your demo.

  • @l0gic23 · 5 months ago

    Was hoping you would leverage that Fabric thing Network Chuck discussed...
    But yes, this was interesting and entertaining, thank you.

    • @KPeyanski · 5 months ago +1

      Thank you, I haven't watched that video by Network Chuck.

    • @l0gic23 · 5 months ago

      @KPeyanski Worth the time investment, I think.

  • @NightHawkATL · 5 months ago

    Use Ollama locally. Wait another month or so for them to allow local LLMs to control Home Assistant, and then you can control more with it.

    • @KPeyanski · 5 months ago

      Yes, I have a Home Assistant Ollama video as well - ruclips.net/video/yp1IkUavVvc/видео.html

  • @justlama0 · 4 months ago

    Is it possible to use HomePods just as the speakers and microphones to build this system?

  • @NicksStuff · 4 months ago

    Why show only examples where the condition is met? Does it work if you ask it to make you a coffee if it's before noon (when it's actually 1 PM)?

    • @KPeyanski · 4 months ago

      Yes, it does work when the condition is not met.

  • @oj488 · 5 months ago

    Hello sir, can you please show me how to integrate MIPC cameras into Home Assistant?

  • @deadadam666 · 5 months ago +7

    Good video, but I thought the point of using Home Assistant was to avoid relying on cloud products with horrifying security risks, like LLMs.

    • @KPeyanski · 5 months ago +6

      Home Assistant allows all options: Cloud, Local Only, and Hybrid, so you can do whatever you like. I will not use this Cloud AI on my main Home Assistant for now. This video is just for fun...

    • @arnoldbencz6886 · 5 months ago

      @KPeyanski That didn't amuse me at all, sorry.

    • @NBD739 · 3 months ago

      Home Assistant is whatever you want it to be; it doesn't have to be local. If you want to use the cloud and you trust certain cloud providers, you can.
      The great thing about it is its customisability: you can make it what you want.

    • @deadadam666 · 3 months ago

      @NBD739 No, it doesn't have to be local, but if you are not bothered about privacy, you would likely just use Google Home, wouldn't you?
      The major selling point is that it's local and not tracked by massive tech companies.

  • @MrDenisJoshua · 5 months ago

    I just want to add Piper, and it asks me for a server address and port...
    What must I put there, please?
    Thanks
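
    [Editor's note: for the Wyoming Piper add-on, the address is typically the host running Piper and the default port is 10200; both are assumptions worth verifying in the add-on's documentation. A quick way to sanity-check the address before entering it in Home Assistant:]

        # Reachability check for a Wyoming/Piper server before configuring HA.
        # HOST is hypothetical; adjust to wherever your Piper instance runs.
        import socket

        HOST = "192.168.1.10"  # hypothetical: IP of the machine running Piper
        PORT = 10200           # common default port for the Wyoming Piper add-on

        with socket.create_connection((HOST, PORT), timeout=3):
            print(f"Piper server reachable at {HOST}:{PORT}")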

  • @avanaraveloson5017 · 5 months ago

    Yes, I'd be glad to see the Google Generative AI integration video.

    • @KPeyanski · 5 months ago

      Noted, and thanks for your comment. Was the OpenAI GPT integration interesting?

    • @avanaraveloson5017 · 5 months ago

      @KPeyanski Yes, definitely interesting.

  • @arnoldbencz6886 · 5 months ago

    Hi. Nabu Casa should be a free application, not a paid one. It can do no more than an experienced Home Assistant user!
    Do we have to pay for the fact that it uses the free ChatGPT???

    • @KPeyanski · 5 months ago +2

      I'm not sure if I understand you correctly, but the Nabu Casa subscription is optional; you can subscribe or not, it is not a must.

    • @arnoldbencz6886 · 5 months ago

      @KPeyanski That's clear to me, my friend. I just don't like Nabu Casa's own approach to the Home Assistant app! Well, that's just my opinion, sorry...

    • @JoshDerGrueneFrosch · 3 months ago

      @arnoldbencz6886 The fact that it costs money isn't caused by Nabu Casa but by OpenAI, because you use their API. They decided not to make the API free, which isn't unusual at all, since APIs are often used by other apps etc. So that's their way of getting a piece of the cake.