Thank you for your tutorial. I got this set up, and it's working. The great thing is that it remembers the previous conversations / requests. The only small problem is that it drops off periodically (the screen says "HA not found" and shows a QR code), then comes back after a short while.
It would be great to have it all in local mode with two RPi 5s: one for HAOS with an SSD and a Hailo-8L running Frigate, and a second one with an Ollama server and the AI Kit. All in a special case with liquid cooling. Maybe it will be possible in the near future.
Sounds good indeed. And maybe that will even be possible on one device in the near future :)
Wouldn't *one* fanless Intel N100 do all that for cheaper, while being more flexible, more powerful and more reliable (no water cooling)?
Hi Kiril, good video! Do you think it will be possible to run the AI locally some day with Piper and Whisper, without using the internet? I'm asking because I think it doesn't make sense to use voice control over the internet, since the philosophy of HA is to run everything locally.
You can do that now. I use an integration I had to add to HACS named "Extended OpenAI Conversation", and the setup is the same; as long as your local LLM exposes an OpenAI-compatible API URL, which they all do, you can point the integration at it and just put in 1234 as the API key (see the sketch below).
Nabu Casa has also been working with Nvidia, using one of their Jetson computers for local AI, but they have to port stuff from x86/CPU-based to GPU-based processing. Some has been done, some hasn't.
The problem with a local LLM right now is cost. Can you have HA work with an LLM and get fast response times? Yes, but you need an Nvidia GPU, or response times can take up to 30 seconds or more on anything CPU-based, depending on the question. Heck, you can even install an LLM on Windows using the Windows Subsystem for Linux and have it work. Probably not ideal though.
Honestly, I got my voice controls working: I can create timers and it can tell me the weather outside, all with no AI. That's honestly all I personally need from a voice assistant. I can open a web browser and type in a question if it's more complicated than that, but I can see the appeal of an all-in-one local solution. I'm using an Espressif Korvo-1 as my voice assistant with micro wake word.
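To illustrate the trick mentioned above (pointing at a local OpenAI-compatible URL with a dummy API key), here is a minimal sketch. It assumes an Ollama server exposing its OpenAI-compatible endpoint on the default port 11434 and an already pulled llama3 model; the URL, port and model name are assumptions, so adjust them to your own setup:

```python
# Minimal sketch: talk to a local, OpenAI-compatible LLM endpoint.
# Assumes an Ollama server on localhost exposing /v1 and a pulled "llama3" model;
# the API key is a dummy value ("1234") because the local server ignores it.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local OpenAI-compatible endpoint (assumed)
    api_key="1234",                        # any non-empty string works for a local server
)

response = client.chat.completions.create(
    model="llama3",
    messages=[{"role": "user", "content": "How many lights are on in the living room?"}],
)
print(response.choices[0].message.content)
```

If that answers from your local model, the same base URL and dummy key should work when you configure the integration in Home Assistant.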
Yes, you can by using Ollama, and I have a video tutorial. The only problem for now is that it is in "read only" mode and cannot turn things on/off yet, but I guess this will be fixed soon. Here is the link to my HA Ollama video - ruclips.net/video/yp1IkUavVvc/видео.html
@JoshFisher567 thanks for this comment, it was informative!
@@KPeyanski This can be done by manually adding the repository to HACS. With that said, I have no idea what it takes to be an "official" HACS integration vs a native HA integration, but I imagine lots of testing is involved. The Extended OpenAI Conversation integration makes you go into HACS and add it via the GitHub link. By default, it only exposes entities already exposed to voice assistants, and it won't even answer general questions without some very minor changes. It can do some neat stuff: if you ask "how many lights are on", it will tell you. There is a query you can change that allows you to control what can be sent to OpenAI.
The issue is that everything now goes to OpenAI, and paying for API calls to turn off lights and other minor stuff adds up. I set a 1 dollar limit and you can burn through that quickly. It still works perfectly, although the repo doesn't appear to have been updated in a while. It also thought it was smarter than me: when I tried to play a media player, sometimes it would tell me it was already playing when it was paused. It did do some neat things though, like letting me say "unpause media player" when I have no sentences or aliases using "unpause".
I'm looking forward to local LLMs, but right now it's the cost issue plus the fact that it's in its early stages. The guy who created HA said they are working on an LLM specifically for HA; Nvidia reached out to them because a lot of people at Nvidia use HA. Time will tell how that works out. I just don't trust anyone that mass-collects data anymore, especially after all the stuff FB and Google, among others, have been caught doing, and how that data is used. Apple is somewhat of an exception, but that's because of a different business model, as Apple's main revenue stream is hardware sales. In fact, Nvidia passed Apple and MS to become the world's largest company (3.3 trillion or somewhere around there); in January it was 2 trillion. Pretty insane that it's all from AI, while video cards seem to be taking a backseat for actual gamers. I'm just waiting to see how things pan out, as things can change quickly, especially with new technology. Below are some of the features of the integration I'm talking about:
Ability to call service of Home Assistant
Ability to create automation
Ability to get data from external API or web page
Ability to retrieve state history of entities
Option to pass the current user's name to OpenAI via the user message context
I love the setup. I've bought an Atom Echo and also made a setup like this, but my problem with it is the speech recognition.
It's TERRIBLE at recognising what I'm saying; it doesn't hear half the words I say and gets the others wrong. It works perfectly fine on my phone but not on the ESP32.
I suspect it's the microphone & hardware that's bad, and that's why it sucks.
I'm not sure what to do to remedy this; the ESP32-S3 doesn't seem much better, as it had the same issue in your demo.
Was hoping you would leverage that Fabric thing Network Chuck discussed...
But yes, this was interesting and entertaining, thank you.
Thank you, I haven't watched that Network Chuck video
@@KPeyanski worth the time investment I think.
Use Ollama locally. Wait a month or so for them to allow local LLMs to control Home Assistant, and then you will be able to control more with it.
Yes, I have a Home Assistant Ollama video as well - ruclips.net/video/yp1IkUavVvc/видео.html
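For anyone trying the Ollama route, a quick way to confirm the local server is answering before wiring it into Home Assistant is to hit its REST API directly. A minimal sketch, assuming Ollama's default port 11434 and an already pulled llama3 model (both are assumptions about your install):

```python
# Sanity check that a local Ollama server responds.
# Assumes default host/port (localhost:11434) and a model pulled with `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Say hello in one short sentence.", "stream": False},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If this prints a reply, the Home Assistant Ollama integration should be able to reach the same host and port.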
Is it possible to use HomePods only, as speakers and microphones, to build this system?
Why show only examples where the condition is met? Does it work if you ask it to make you a coffee if it's before noon, when it's actually 1 PM?
Yes, it also works when the conditions are not met.
Hello sir, can you please show me how to integrate MIPC cameras into Home Assistant?
Good video, but I thought the point of using Home Assistant was to avoid relying on cloud products with horrifying security risks, like LLMs.
Home Assistant allows all options: Cloud, Local Only and Hybrid, so you can do whatever you like. I will not use this Cloud AI on my main Home Assistant for now. This video is just for fun...
@@KPeyanski It didn't amuse me at all, sorry
Home Assistant is whatever you want it to be; it doesn't have to be local. If you want to use the cloud and you trust certain cloud providers, you can.
The great thing about it is its customisability; you can make it what you want.
@@NBD739 No, it doesn't have to be local, but if you are not bothered about privacy you would likely just use Google Home, wouldn't you?
The major selling point is that it's local and not tracked by massive tech companies.
I just want to add Piper and it asks me for a server address and port ...
What must I put there, please?
Thanks
Server= core-piper
Port= 10200
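If Piper still refuses to connect with those values, it can help to check that the Wyoming service is actually reachable on that host and port. A minimal sketch, assuming the add-on hostname core-piper and default port 10200 as above; the hostname only resolves inside Home Assistant's own network, so run it from the HA host or swap in the host's IP:

```python
# Minimal reachability check for the Piper (Wyoming) service.
# Assumes host "core-piper" and port 10200 as mentioned above; replace the host
# with your Home Assistant machine's IP if you run this from another computer.
import socket

try:
    with socket.create_connection(("core-piper", 10200), timeout=5) as sock:
        print("Piper is reachable at", sock.getpeername())
except OSError as err:
    print("Could not reach Piper:", err)
```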
Yes, I'd be glad to see the Google Generative AI integration video
Noted & thanks for your comment. Was the OpenAI GPT integration interesting?
@@KPeyanski Yes definitely interesting
Hi. Nabu Casa should be a free application and not a paid one. It can do no more than an experienced Home Assistant user!
We have to pay for the fact that it uses the free ChatGPT???
I’m not sure if I understand you correctly, but the Nabu Casa subscription is optional; you can subscribe or not, it is not a must.
@@KPeyanski That's clear to me, my friend. I just don't like the approach of Nabu Casa itself to the Home Assistant app! Well, that's just my opinion, sorry...
@@arnoldbencz6886 The fact that it costs isn’t caused by Nabu Casa but by OpenAI, because you use their API. They decided not to make the API free, which isn’t unusual at all, because APIs are often used by other apps etc. So that’s their way of getting a piece of the cake.