If you want to take a look at the code I used in these examples, I've posted them here. Thanks for watching! open.substack.com/pub/futuresmarthome/p/ai-in-your-smart-home?r=3wof2h&showWelcomeOnShare=true
You should really look into Wyoming
On my list!
Super interesting and gives me some ideas. Two quick questions:
- What are you using to track your water consumption?
- Are you considering using a local LLM to do the same job without sharing data with ChatGPT?
Using Phyn for water consumption. And yes, the next video is on how to use local LLMs.
"Chat open the door"
"I'm sorry Dave, I'm afraid I can't do that"
😭😭
Perfect! Love that! Thank you for the video. Now we know how it works in your house, but for someone trying to repeat it, the hardware and the steps aren't entirely clear; it feels like it would be very hard to reproduce. I would love to have something like that at home, though. So if you could start from the very beginning and make a playlist of videos, that would be awesome (a tip for the content of your new channel).
P.S. I can see your API key. I won't use it, but others could. Make sure you regenerate a new one.
Thanks for the tips! Where did you see the API key? I’ll make sure to regenerate! Edit: I found the spot I missed! Thanks for the tip.
It's an awesome video, thank you for that.
It seems the cost depends heavily on the language. I called the API a few times in Hungarian. Although the requests were pretty simple, a huge number of context tokens was used.
Calling the API twice in Hungarian cost 0.01 USD.
I'll send OpenAI a few Hungarian books so it can learn my mother tongue a bit better. 😂
This is really interesting to hear, thanks for sharing.
This really gets me excited, but currently way over my head as I've only really been digging into HAOS for about a month now. Looking forward to more tutorials about OpenAI integrations though.
Glad you liked it! It can be a bit intimidating. But you’ll get the hang of it!
Great video which totally inspired me to build something like this myself. Thank you so much! I found that the templating part didn't work so well for me, though. For instance, it kept saying it couldn't retrieve weather forecasts from the exposed entities. But when I switched to gpt-4o, everything got ten times easier! The assistant didn't even need any templating, nor did I have to tell it where to look things up. It instantly found the weather entities in my HA setup and told me the forecast for the next couple of days.
Hey Bas, could you share how you got this working? I'm also very inspired by this video and curious how you pulled it off!
@@henkvanholten3305 Sorry, I'm only reading this now. Changing the model to gpt-4o is really the only thing I did; the rest are the default settings. The reason it isn't mentioned in this video is, I believe, that it only became possible very recently thanks to the HA update of May 2024.
It's funny that I just set up a similar notification thing in HA and ran into the same little annoying things, like the weird output variable name. Great video, man, I just subscribed. Keep it up!
Thanks so much!
Thanks for this! I just followed the guide but decided to make a tweak. ChatGPT seems pretty good at parsing Home Assistant data on its own, so to avoid complicated Jinja, I just drop the forecast service response blob directly into the OpenAI API without hand-feeding it "this is the temperature high", "this is the wind speed", etc. In the configuration for the service, I just told the agent that the weather data comes from Home Assistant. I think there's a little more optimization to be done, but not having to use a giant block of text to parse out all the weather details is just as much a selling point as the ability to have unique and amusing responses.
This is a great point that I never tried! Will be adjusting mine as well for the future!
Can you detail a little bit how this was done? I'm only seeing how you would pass text along. Thanks!
@@jpcdn80 That's the neat part: it's all just text. In the Conversation.Process service from the video at around 12:18, you can drop the whole "Current Conditions are below" block with all the brackets. Instead, have the text: part just be the response variable from the "get forecasts" service call. Also, in the integration settings for OpenAI, I have it configured just to deal with the weather. My system prompt is "Parse Home Assistant weather data and describe today's weather forecast using simple English. Don't describe tomorrow's weather or any other future date forecasts. Only use US measurement values. When describing temperatures, only use fahrenheit. Don't use celsius or kilometer measurements. When describing wind speed, round to the nearest whole mile per hour. Be fun, and use occasional jokes or sarcasm to make the response seem more genuine and unique."
There are a few more tweaks I did that are hard to type out in a comment here, but hopefully this gets the gist of it.
@@TheSlowestZombie would you mind copy/pasting what you have in the text block of the conversation.process?
@@welbeschikbaa Fingers crossed that the formatting works out. Here is my entire script:
alias: Weather Report
sequence:
  # Refresh which Echo device was spoken to last, so the notify call targets it
  - service: alexa_media.update_last_called
    data: {}
  # Fetch the daily forecast and store the full response
  - service: weather.get_forecasts
    metadata: {}
    data:
      type: daily
    target:
      entity_id: weather.forecast_neptune
    response_variable: dailyforecast
  # Keep only today's entry from the forecast response
  - variables:
      weatherparse: |
        {{ dailyforecast['weather.forecast_neptune'].forecast[0] }}
  # Hand the raw forecast data to the OpenAI conversation agent
  - service: conversation.process
    data:
      agent_id: conversation.openai_conversation
      text: "{{ weatherparse }}"
    response_variable: openairesponse
    enabled: true
  # Pull the plain-text speech out of the agent's response
  - variables:
      plainresponse: |
        {{ openairesponse['response'].speech.plain.speech }}
    enabled: true
  - delay:
      hours: 0
      minutes: 0
      seconds: 0
      milliseconds: 500
  # Speak the result on the last-used Echo device
  - service: notify.alexa_media_last_called
    data:
      message: "{{ plainresponse }}"
    enabled: true
mode: single
icon: mdi:weather-sunny-alert
Thanks, works great here. Now I can’t wait to hear the latest weather report!
Love it! Hope you enjoyed it!
Nice video. Certainly sets the mind going with possible ideas. Thanks for your work and good luck with your channel. Eddie.
Thank you for the kind words!! That’s exactly what I was going for.
Have you seen Extended OpenAI Conversation? It's got a lot of extra features, including the ability to control devices. Apart from having it control all my lights etc., I have added other things like adding to a shopping list. What's really great is that I just need to give it my intent and it figures out what I want to do.
Telling it I have run out of milk and it adds it, or saying that it's a bit dark downstairs and it turns the lights on, is very cool.
How do you configure it? Can you show or explain?
Awesome and really exciting opportunities for integrating into my home. Love it.
+1 for the Buckaroo Banzai musical theme! "When are we having AI in our Home?! -- Real Soon!"
No matter where you go, there you are
Very detailed walk through. Love it. Subscribed.
Thank you so much!
Thank you!!
3:00 "reduce your environmental footprint". Thanks for the video. May I add a quick thought here by saying that using a cloud-based AI isn't really going to reduce your environmental footprint. Your bill yes maybe, but the environmental footprint, I am not so sure.
Fair point.
Thanks for the video! There seems to be an updated way of integrating the API key. I was wondering where you are supposed to choose which GPT model to use? There is no choice for it when configuring the service in the OpenAI integration. Thanks in advance!
I’ll try to cover in an upcoming video
You sir have got yourself a new subscriber
Awesome, thank you!
Good examples well explained. I'm now subscribed. 😃
PS: As a Dane, I liked and agree with the en-GB British accent remark 😂
💯
You know, all these small expenses add up. That is $0.12 a year! You could buy a year's worth of API calls for that!
You can also just use the Ollama integration and not have to pay or use the cloud. For a lower-end computer, I use the stablelm-zephyr model. Works well.
Yes! I’m actually working on a video about this and some of the pros and cons.
This is pretty cool.
That said, since I learned about the amount of resources (like water) used for each ChatGPT request, I feel like one really, REALLY, has to think before including ChatGPT or any other cloud-based LLM in our automations.
For a use case like this I would probably try a local LLM, since rapid response is not an issue.
Totally valid thing to be concerned about. Send me some of those stats if you come across them. I’ve heard training one of the models is the equivalent of 6 cars worth of lifetime emissions.
@@FutureSmartHome I tried to link some stuff, but YouTube deleted my comments, probably because it thought it was spam.
You can just search for "ChatGPT water" and you will find several articles from reputable sources like Forbes on the topic.
You can also search for Google's plan to put a server farm in Chile, where access to water was already a problem.
The training cost you mention is what server farms can do; nothing even remotely like that is possible in a home environment, AFAIK.
I’m exploring ollama for local GPT. Will probably do a video on it.
This is so difficult for me to get working! I only have a Lumi sensor outside, but even after changing the sensors you use in the video, this is still not working. Is it possible to send a message with the ChatGPT output?
Excellent. Privacy concerns seem valid, but this is excellent.
Glad you enjoyed it!
Really great video. I've got many new ideas on what I can do next with my smart home.
Awesome to hear. So glad it prompted some ideas!
Great video, thanks. If you could share any code snippets, that would also be nice.
Thanks for watching! Check out the pinned comment on this video - I link to my Substack, which has some snippets.
Great video, you should change your key, you can see it at 5:42
Good catch, thank you! I even tried to blur it out but must’ve cut the blur one frame too soon 😂
And thanks for watching!
Thanks! Everything works fine, but the responses are cut off. Is there a way to remove the character limit per response?
Mobile notifications are limited in the amount of text you can send to the device via push. No way to change that.
Awesome work, keep it up!
Great work, I am very inspired by your video. Is it possible that after an update of HA the agent_id is no longer a number but plain text?
Thank you so much! Yes the latest update has agent_id in plain text.
When installing the API key I get the error: Unknown error occurred. How do I fix this?
I'm not totally sure. You'll have to check the Home Assistant logs to see more. They should be in Settings.
120 gallons a day? What do you do with all of that water?
Hello. Can you tell us how you interact with the Google Home device? Could I use it as a proxy for voice input/output to simply chat with ChatGPT, with HA as the backend server?
Google doesn’t make this very straightforward. The back and forth conversational aspect wouldn’t work.
Excellent video, can you show how to choose GPT-4 or GPT-4o? There's no dropdown menu, so I'm not sure how to choose those ones. Thanks again!
Thank you! Here is a list of models that are available and the corresponding value that you can type. platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4
@@FutureSmartHome Thank you so much!
Hi, how do you get the consumption for water and electricity? From provider or sensors?
Both from sensors: Phyn (water) and Sense (electricity). But check to see if your provider has an integration!
Is it the latest version of ChatGPT (4), and is it local? I guess it could also do a morning brief of my calendar? Or even any time I want?
It could totally do a morning briefing of your calendar. It’s not local. And this was before 4o
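As a rough sketch of the pattern (untested here, and the calendar entity is a placeholder for your own), it mirrors the weather script: fetch the events into a response variable, then hand the blob to the conversation agent:

- service: calendar.get_events
  target:
    entity_id: calendar.family   # placeholder - your calendar entity
  data:
    duration:
      hours: 24
  response_variable: agenda
- service: conversation.process
  data:
    agent_id: conversation.openai_conversation
    text: "Give me a friendly morning briefing of this calendar data: {{ agenda }}"
  response_variable: briefing

From there, {{ briefing.response.speech.plain.speech }} can go to a notification or TTS just like the weather report.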
It would be awesome to hijack Google Home/Alexa speakers to access our Home Assistant GPT.
There are actually some projects out there that completely swap the boards in Google home mini with a custom system that could indeed do this!
Well done! You got a new sub!
Thank you!!
Thanks for the video, it was very interesting! If I recall, you said that the OpenAI integration could report back on the status of all of your devices and/or entities in HA every morning (if you choose)? Is that right? Could it verbally give you a 'status report'? Would this be difficult to set up? Lastly, I think it would be great if you could put out more videos on this subject. Thanks again.
Thank you for watching! I’ll definitely do more videos like this in the future. And yes, you could create an automation like this. You can also have a conversation with it about your house and all of the sensors.
@@FutureSmartHome Thanks. That would be a great tool to have for HA. Not surprisingly I have entities that go offline from time to time and getting a notification right away would be extremely helpful.
Why not run Ollama locally on your hassio box? Everything runs locally without external dependencies.
That seems neat, but not nearly as approachable as using GPT. The minimum requirement of ~8 GB of RAM and multiple gigs of storage makes it a little steep for the average person. Once my GPT credit runs out, I'll look at switching to that myself.
I will be looking into this!
Also, most open-source models are not even close to as useful and "smart" as GPT-4.
@@gabynevada Llama 3 is on par with GPT-4. Haven't tested against GPT-4o yet.
@@Techguru604 I think I had only tested up to Llama 2, so I'll give Llama 3 a try then. Thanks!
Are you using a different API key for each service?
Yes. It’s nice to be able to track usage per type. And I can set them up independently with different criteria.
I was working and watching your video. This is so awesome that I'm replacing my voice notifications with this :D
Glad to hear it, Thanks for watching!
I will send my Wi-Fi scale data to ChatGPT so it will tell me to go to the gym more often. Awesome video!
I might do that too! Thank you!
@@FutureSmartHome Is it also possible to give the OpenAI integration control over several entities so it can control them?
Yes. In the examples I showed, I’m sending data from multiple different systems.
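For example, a minimal sketch of templating several systems into one prompt (the sensor entity ids here are made up, so swap in your own):

- service: conversation.process
  data:
    agent_id: conversation.openai_conversation
    text: >
      Water used today: {{ states('sensor.phyn_water_today') }} gallons.
      Energy used today: {{ states('sensor.sense_energy_today') }} kWh.
      Summarize this as a short daily household report.
  response_variable: report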
@@FutureSmartHome Ah sorry, I had only watched the kids' weather forecast part. Great, thanks!
Great video and idea, I will try it. Water sensor missing :-) I need it now... Immediately :-D
Greetings from Germany
Hello! So glad you liked the video. There are a few good water sensors out there!
Is it possible to use a display to get responses from the Assist chat in HA? I have set up a pipeline with OpenAI, but I want to get the responses as notifications. Simply put: I say something to a Wyoming satellite and I want the response as a notification. Is it possible?
Yes that's possible
It'd be nice if you added links.
How come no uploads in 5 months? 😢
Life got in the way! But, I’m planning to get back at it soon. Appreciate the sentiment!
@ totally understandable! Looking forward to the next video 😀
The OpenAI weather update automation should really be a script, since it has no triggers.
Fair point!
this is great!
Thank you! Thanks for watching!
How much water does AI require to process your prompts per month? It's estimated at 500 mL per 5-50 prompts. Data centers use a lot of water for cooling.
Great video. The title should say one cent a day, not a month.
Thank you and thanks for watching. The GPT 3.5 model, which is mostly what I was using, was one cent a month for all of my usage in March.
Skip to 4:46, the part where he says "let's jump into it".
I'd prefer Anthropic's Haiku model, as it's cheaper and better in many languages.
I’ll check it out!
You might want to edit this video. You didn't hide your openai API key very well.
Your location is also visible for the same reason. I mean, it doesn't really matter, and you can redo the API keys, but just FYI. :)
I did regenerate the API key but what do you mean you can see where I live?
Actually, don’t respond i found it thank you!
@@FutureSmartHome NP!
Nice one. But I am concerned about privacy. I'm not sure I'm ready to let OpenAI enter my life and home... Isn't Microsoft now involved? It seems you already have to hand over your credit card... so where is this heading? What's next? A whole bunch of messages saying you've been signed up to xyz streaming service/news/energy company and security monitoring, etc.?
Thanks for watching! A totally fair (& common) point of view. I feel ok with this because I am choosing to send the data when I want to and it’s not unfettered access. I wrote the code that sends the information so, that gives me some comfort.
@@FutureSmartHome Sure. It's good that you have the skills to write the code. That's awesome! I have subscribed and will follow the channel. I was just making the point, having had smart home tech for well over a decade now. I know where this can lead. I started with Control4 and hated their "dealer only" model. That's why I love HA and switched to it. While I understand everyone needs to make a buck out of us home users, we need to be very wary of what goes on behind the scenes and under the hood. HA is not like that (now) but could very well head that way (if we aren't careful). Just like FB: starting out all groovy and "free" with no subscriber model, then 10 years on they are selling our data. I understand Nabu Casa has a very small subscriber fee for remote access; that's fine, but will that be enough? Isn't "big data" where "the money is"? Probably out of the scope of this discussion, but doesn't Microsoft have their tentacles directly in the HA pie as well, through an "interest" in GitHub? This company (and their true motives) needs to be on everyone's home automation "watch". That's all I'm saying 🙂
Ollama is free!
That's a fair concern. Not to say that I 100% agree, but if that is bothering you, you can always self-host the AI. Know that it might cost you a bit to get a computer that can respond quickly. If it interests you, check out Ollama.
@@wapphigh5250 Nowadays you don't even need coding skills for this; just ask the AI to write it for you, and tell it to explain what the code does step by step. AI doesn't lie willingly.
Home Assistant is all about NOT letting big tech into your home. There are locally hosted AI alternatives to use.
Oh noooo 😢😢😢 OpenAI knows I used 50 gallons of water this week 😢😢😢😢
I agree that is the core ethos of HA and I try to follow it. But while we don’t have good, readily accessible, local LLMs, I think this is an avenue for exploration. This isn’t a HACS integration, it’s the official & sanctioned integration with OpenAI.
@@ChrizzeeB send all the internal data from all the sensors to OpenAI… really? Just to read the same notifications in a different way?
No, HA is setup for whatever you want it to do. If you choose to share it with OpenAI, that’s your choice. If you choose not to share that info, that’s also your choice.
@@thrawnis it is, yes, but "local control, local data" is the core value.
$0/month sounds better to me.
When I run the 'text to speech' action in the automation I get the error message 'Error rendering data template: UndefinedError: 'agent' is undefined'. Furthermore, I have found that I don't get an agent ID in a format like yours. I see the following: 'agent_id: conversation.openai_conversation'. Can you help?
A bit hard to troubleshoot via YouTube comments, BUT I'm guessing that at the point where you call the text to speech, you haven't set the response variable of the OpenAI conversation process to the value "agent". Also make sure you are indeed doing the conversation process action.
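For anyone else hitting this, here is roughly the shape those two steps should take (just a sketch: the prompt text, TTS entity, and media player are placeholders for whatever you use):

- service: conversation.process
  data:
    agent_id: conversation.openai_conversation
    text: "Summarize how my house is doing today"
  response_variable: agent   # this name is what the later template refers to
- service: tts.speak
  target:
    entity_id: tts.home_assistant_cloud   # placeholder TTS entity
  data:
    media_player_entity_id: media_player.kitchen_speaker   # placeholder speaker
    message: "{{ agent.response.speech.plain.speech }}"

If the 'agent' variable is undefined at the TTS step, the response_variable line above is usually what's missing.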
@@FutureSmartHome I have followed your video instructions, creating the temporary action in the automation to determine the agent ID. I have noticed that I get an additional tick box to set, labelled 'conversation ID', which you don't seem to get in your setup.
@@danthetube5707 I am a few Home Assistant versions behind, so I'll have to update and check it out.
@@danthetube5707 Me too, I have been trying this for a couple of days. I get the response if I use the dev tools, but I can't seem to find it past there. I don't receive a # agent_id.
@@FutureSmartHome Did you ever find the issue of there being no agent_id:#? When I run conversation.process to get the ID, it just returns conversation.openai.conversation.
Also, when I use the dev tools I can get the response, but I have no clue where to find it to send through. I get either the timestamp as a message, agent, or the "Simon says" reply. Any help is appreciated.