Free and Local AI in Home Assistant using Ollama

  • Published: 5 Oct 2024
  • ► MY HOME ASSISTANT INSTALLATION METHODS FREE WEBINAR - automatelike.p...
    ► DOWNLOAD MY FREE SMART HOME GLOSSARY - automatelike.p...
    ► MY RECORDING GEAR
    MAIN CAMERA: amzn.to/3Ln8qzb
    MAIN & 2ND ANGLE LENS: amzn.to/48bhxMZ
    2ND ANGLE CAMERA: amzn.to/44RjRWs
    SD CARDS: amzn.to/3sT7fRy & amzn.to/3sS0wHu
    MICROPHONE: amzn.to/466Kxne
    BACKUP MIC: amzn.to/468BSkb
    EDITING MACHINE: amzn.to/45LWdvS
    ► SUPPORT MY WORK
    Paypal - www.paypal.me/...
    Patreon - / kpeyanski
    Bitcoin - 1GnUtPEXaeCUVWdJxCfDaKkvcwf247akva
    Revolut - revolut.me/kir...
    Join this channel to get access to perks - / @kpeyanski
    ✅ Don't Forget to like 👍 comment ✍ and subscribe to my channel!
    ► MY ARTICLE ABOUT THAT TOPIC - peyanski.com/h...
    ► DISCLAIMER
    Some of the links above are affiliate links. If you click on these links and purchase an item, I will earn a small commission at no additional cost to you. Of course, you don't have to do so if you don't want to support my work!

Comments • 73

  • @KPeyanski
    @KPeyanski  5 months ago +1

    Are you going to try this Home Assistant Ollama Integration? And if so, on what kind of device are you going to install the Ollama software?

  • @RocketBoom1966
    @RocketBoom1966 5 months ago +4

    Thank you, excellent content as usual. I have set up Ollama running in a Docker container on my Unraid server. The server has a low-power Nvidia GPU which I make use of to speed up responses.
    Another fun thing to try is to modify the end of the prompt template with something like this:
    Answer the user's questions using the information about this smart home.
    Keep your answers brief and do not apologize. Speak in the style of Captain Picard from Star Trek.
    Yes, my assistant will respond with answers in the style of Captain Picard.
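
    For reference, a typical way to run Ollama in Docker with NVIDIA GPU support (a sketch based on Ollama's documented Docker instructions; the volume and container names are the upstream defaults, and an Unraid template would set the same options through its UI):

        docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama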

    • @KPeyanski
      @KPeyanski  5 months ago

      Oh that is very interesting, thanks for the info! But how do you make the HA Ollama integration answer with voice?

    • @RocketBoom1966
      @RocketBoom1966 5 months ago

      @@KPeyanski I have seen it done; however, I have struggled to make it work. My modified prompt template only responds in text form, as you explained in your video. Things are moving so fast with these AI integrations, I imagine it won't be long until Home Assistant includes powerful AI tools by default. Exciting times.

    • @KPeyanski
      @KPeyanski  5 months ago

      exciting times indeed :)

    • @EvgenMo1111
      @EvgenMo1111 3 months ago

      hi, what size is your LLM?

  • @FrankGraffagnino
    @FrankGraffagnino 5 months ago +1

    I _REALLY_ appreciate a tutorial that shows how to do this with a local LLM... very cool. Thanks!

    • @KPeyanski
      @KPeyanski  5 months ago

      You're very welcome! Are you going to try it and on what device?

    • @FrankGraffagnino
      @FrankGraffagnino 5 months ago +1

      @@KPeyanski probably not yet. But I just love when consumers can be better educated about local control. Thanks!

    • @KPeyanski
      @KPeyanski  5 months ago

      Yes, I also prefer local. Unfortunately it is not always an option.

  • @AlonsoVPR
    @AlonsoVPR 5 months ago +3

    I was waiting for someone to make a video about this! Thank you, sir!!

    • @KPeyanski
      @KPeyanski  5 months ago

      Glad it was helpful! On what kind of device are you going to install the Ollama software?

    • @AlonsoVPR
      @AlonsoVPR 5 months ago +1

      @@KPeyanski I don't have enough horsepower for this at the moment; I'm focused on low power consumption, but I'm thinking of getting a Proxmox server with a dedicated GPU. At the moment my whole house runs on a 2012 i5 Mac mini with 8 GB of RAM, also using Proxmox.

    • @KPeyanski
      @KPeyanski  5 months ago +1

      I understand, low power consumption is important, but an i5 is not that bad and you can try Ollama on it. If it is not OK, just delete/uninstall it!

    • @AlonsoVPR
      @AlonsoVPR 5 months ago

      @@KPeyanski Maybe when I get a better server with more RAM :P Sadly my old Mac mini has 8 GB of RAM soldered to the motherboard, and all my services are using about 72% of the RAM at the moment :P
      Now I'm struggling to find a good Zigbee mmWave sensor that doesn't spam the network :/ Any recommendations?
      I have tried the TUYA-M100 and the MTG275-ZB-RL. Although the MTG275-ZB-RL is way better than the TUYA, it's still spamming my Zigbee network several times per second.

    • @ecotts
      @ecotts 5 months ago

      I'm waiting for someone to make a video about all the data that META stole from your system as a result of the installation and then sold on to some random companies.

  • @Palleri
    @Palleri 5 months ago +5

    Could you share the prompt template you are using?

  • @bugsub
    @bugsub 5 months ago +1

    Wow! Fantastic tutorial! Really appreciate your channel!

    • @KPeyanski
      @KPeyanski  5 months ago

      Glad it was helpful and thanks for the kind words!

  • @joeking5211
    @joeking5211 4 months ago

    Looks like a fantastic vid. Will keep an eye open for the Windows tutorial and come back then.

    • @KPeyanski
      @KPeyanski  4 months ago

      It is almost the same for Windows. You just have to install the Ollama Windows version; everything else is the same.

  • @SmartTechArabic
    @SmartTechArabic 3 months ago

    Thanks for the informative tutorial. I have set up the Ollama server on a separate machine, and the local LLM is working well through the Open WebUI. I set up the Ollama integration in Home Assistant and configured a Home Assistant Assist pipeline to use Ollama. But unfortunately, whenever I ask a question, I am not getting any response. What am I missing?

    • @KPeyanski
      @KPeyanski  3 months ago

      try debugging your pipeline and check what is going on...
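
      One quick sanity check (a sketch, assuming Ollama's default port 11434; substitute your server's IP): query the Ollama API from the Home Assistant host. If this returns a JSON list of your models, the network path is fine and the problem is in the Assist pipeline itself:

          curl http://YOUR_OLLAMA_SERVER_IP:11434/api/tags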

  • @PauloAbreu
    @PauloAbreu 5 months ago

    Great tutorial! Thanks. Is English the only language available?

    • @KPeyanski
      @KPeyanski  5 months ago

      not sure about that, but I think so!

  • @danninoash
    @danninoash 5 months ago

    Hi, great video first of all, THANKS!!
    What I'm missing is the BT proxy... how do I configure it? Is it a must? Why isn't this part mentioned in the video? :(

    • @KPeyanski
      @KPeyanski  5 months ago +1

      A BT proxy is not needed at all here. The communication between Home Assistant and Ollama is over the IP network, so just follow the steps from the video and you will have it; nothing additional is needed.

    • @danninoash
      @danninoash 5 months ago

      @@KPeyanski SORRY!! I confused my question with another video of yours - the one about setting up the Apple Watch as a device in HA LOL :))

    • @danninoash
      @danninoash 5 months ago

      @@KPeyanski What I wanted to ask here actually is - will I have to keep a machine turned on 24/7 (whether it's Win/Linux/macOS)?
      I didn't fully understand what I should do with it after I connect my HA with the Ollama integration.
      Question #2 please - does it somehow interfere with my Alexa, or does it work alongside it?
      THANKS!!

    • @danninoash
      @danninoash 5 months ago

      ???

  • @jacquesdupontd
    @jacquesdupontd 4 months ago

    Thanks for the very good video. I know that you can now make a pretty good integration of GPT in HA and have trigger and speech exchanges. I imagine it's gonna be even easier and more polished (and creepier at the same time) with GPT-4o. I'm sure we'll be able to control devices and have speech and triggers soon with Ollama. I subscribed to your channel.

    • @KPeyanski
      @KPeyanski  4 months ago +1

      Thanks for subscribing! Yes, integrating GPT into Home Assistant is becoming increasingly seamless, and GPT-4 will likely make it even more intuitive and powerful. It's exciting (and a bit creepy) to think about how advanced and interactive our smart homes can become soon. Stay tuned for more updates!

    • @jacquesdupontd
      @jacquesdupontd 4 months ago

      @@KPeyanski I'm doing the research to build some kind of Amazon Echo with a local LLM and maybe a screen. A bit like the ESP32-S3-BOX but better. Not for commercialisation for now (I'm sure there are tons of projects like that being developed). I'm still not sure what device to use to handle the local LLM. A GPU is a huge plus but takes up too much space. The best would be a Mac Mini M1; Ollama LLMs work wonders on it. I have to check how well Asahi Linux works and whether I can pack everything into it (personal home server, Home Assistant, Ollama, voice assistant).

    • @jacquesdupontd
      @jacquesdupontd 3 months ago

      Little update. I now have a few ESP32s (KORVO, S3, Atom Echo) and I've been playing a bit (you can check my latest videos to see my little setup). For now I'm only using external AI, because Ollama is not able to control our devices yet and it is still quite slow compared to Google or GPT. It's working great. My next project is to take a Bluetooth speaker and hack it with an ESP32-S3 to turn it into a voice assistant device like a Google Nest or Amazon Echo Dot.

  • @guylast9516
    @guylast9516 2 months ago

    I am running Home Assistant on a Win 11 machine in VMware. Ollama has been installed on the same machine. What exactly would I need to do to make sure Home Assistant has access to Ollama? I think the Windows install documentation is lacking.
    Update: Right, so I now have it working. Adding the environment variables didn't really help. The issue, it seems, was the edits I had made to the Windows hosts file to stop Microsoft from spying on me. I have reversed them for the moment and will revisit once I finish playing with this.
    I have added Llama 3.1:8b with the WebUI in Docker and got it working on localhost:3000 and localhost:11434, the former for the WebUI and the latter for Home Assistant.
    Another issue I came across: I had to disable the Comodo firewall for the integration to work in HASS in VMware on Win 11. After banging my head for days, updating Comodo to the latest version somehow fixed the issue. No clue how.
    Also added the new Seeed ReSpeaker Lite kit to ESPHome, which allows me to say "Hey Jarvis, do something", but frankly I'm underwhelmed with the whole thing.
    Was going to ask about the config you pass to Ollama. Are you able to post it?
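
    For reference, the Open WebUI project documents running the UI in Docker alongside Ollama roughly like this (a sketch based on the Open WebUI README; host port 3000 maps to the UI, while Ollama itself keeps serving on 11434):

        docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main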

  • @MichaelDomer
    @MichaelDomer 5 months ago +1

    Get rid of that Llama 2; version 3, which was just released, completely destroys it.
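
    Switching models is a single pull (a sketch; llama3 is the model tag in the Ollama library, after which you select the new model in the Home Assistant Ollama integration options):

        ollama pull llama3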

    • @KPeyanski
      @KPeyanski  5 months ago

      sounds good, are you using it already? And for what exactly?

  • @fred7flinstone
    @fred7flinstone 4 months ago

    I am getting "Unexpected error during intent recognition".

  • @Bozz_AU
    @Bozz_AU 5 months ago

    Thanks, excellent video.

    • @KPeyanski
      @KPeyanski  5 months ago

      Glad you enjoyed it! Are you going to try it?

    • @Bozz_AU
      @Bozz_AU 5 months ago

      @@KPeyanski When voice is working

    • @KPeyanski
      @KPeyanski  5 months ago

      no idea, hopefully soon

  • @fdb-you
    @fdb-you 5 months ago

    So for the Llama I need a second device that is always on? Is it possible to install it directly on a Home Assistant server?

    • @KPeyanski
      @KPeyanski  5 months ago +1

      No, with this integration this is not possible. At least for now...

  • @miguelcid1965
    @miguelcid1965 5 months ago

    With Llama, is it able to turn on lights or control entities in general? I read on the Hass.io integration page that with the Llama integration it isn't possible, but maybe that was before? Thanks.

    • @marcomow
      @marcomow 4 months ago

      now it's possible; upgrade HA to 2024.6!

  • @markrgriffin
    @markrgriffin 5 months ago

    Probably a dumb question, but how do I expose Ollama on my network if I install it on Windows? The instructions are not very specific.

    • @KPeyanski
      @KPeyanski  5 months ago

      Follow the instructions from the Ollama documentation and set your OLLAMA_HOST variable (example values below). These are the steps:
      On Windows, Ollama inherits your user and system environment variables.
      First, quit Ollama by clicking on it in the task bar.
      Edit system environment variables from the Control Panel.
      Edit or create new variable(s) for your user account for OLLAMA_HOST, OLLAMA_MODELS, etc.
      Click OK/Apply to save.
      Run ollama from a new terminal window.
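
      For example (a sketch; per the Ollama FAQ, setting OLLAMA_HOST to 0.0.0.0 makes the server listen on all interfaces so other machines can reach it; the models path here is just a hypothetical example), from a Windows command prompt:

          setx OLLAMA_HOST "0.0.0.0"
          setx OLLAMA_MODELS "D:\ollama\models"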

    • @markrgriffin
      @markrgriffin 5 months ago +1

      @KPeyanski Thanks for the reply. So just add the two variable names, with no values? That's where I'm stuck, unfortunately. Do I not need to add a path for OLLAMA_MODELS and an IP for the host as values?

  • @sirmax91
    @sirmax91 5 months ago

    Can you make it run on a Raspberry Pi 5 and link it to Home Assistant?

    • @KPeyanski
      @KPeyanski  5 months ago

      I think so, but I guess you have to try it.

  • @michaelthompson657
    @michaelthompson657 5 months ago

    I'm assuming that since it can be installed on Linux, you could have this on a separate Pi running Raspberry Pi OS Lite and connect it to your other Pi running HA? I have HA on a Pi 4 and have a spare Pi 3; just wondering if the Pi 3 would be powerful enough to run Ollama?

    • @KPeyanski
      @KPeyanski  5 months ago

      This is interesting indeed, but I guess you have to try it out. It will be best if you share the result!

    • @michaelthompson657
      @michaelthompson657 5 months ago

      @@KPeyanski Do you think I could install it on Raspberry Pi OS Lite? I'm very inexperienced with Pi OS.

    • @KPeyanski
      @KPeyanski  5 months ago

      I don't know, you can try...
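
      Trying it is low-risk (a sketch using Ollama's documented Linux install script; note that a Pi 3 has only 1 GB of RAM, so at best only the very smallest models, such as tinyllama, have a realistic chance of running):

          curl -fsSL https://ollama.com/install.sh | sh
          ollama run tinyllama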

    • @michaelthompson657
      @michaelthompson657 5 months ago

      @@KPeyanski I’m not that good 🤣

  • @hpsfresh
    @hpsfresh 3 months ago

    This video needs chapter time codes.

    • @KPeyanski
      @KPeyanski  3 months ago

      sorry, I'm too lazy for that right now and there is no one willing to help either...

  • @OrlandoPaco
    @OrlandoPaco 5 months ago

    Add voice!

    • @KPeyanski
      @KPeyanski  5 months ago

      Yes, voice is needed here... Maybe in the next release!

  • @KubedPixel
    @KubedPixel 5 months ago +7

    Under NO CIRCUMSTANCES is anything facebook related going ANYWHERE near my network, offline/local or not.

    • @KPeyanski
      @KPeyanski  5 months ago +2

      no problem, you can select another model that has nothing in common with Meta & Facebook

    • @andrewtfluck
      @andrewtfluck 5 months ago +3

      Ollama, the tool, is separate from Facebook/Meta. You can run Llama on it, but you have a variety of other LLMs to choose from.
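
      For example (a sketch; mistral is one of several non-Meta models available in the Ollama library):

          ollama pull mistral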

    • @KubedPixel
      @KubedPixel 5 months ago

      @@andrewtfluck WhatsApp WAS a separate tool from Facebook... not any more.
      Ollama was developed by Meta (Facebook) and I'm 99% sure there are 'call home' beacons in the code somewhere. Also, just out of principle, I will not use anything Facebook related.

    • @Busy_Paws
      @Busy_Paws 4 months ago +1

      Paranoia

  • @ecotts
    @ecotts 5 months ago +3

    I will never in my life add anything META related intentionally on any of my systems. Hell No!! 😂

  • @rude_people_die_young
    @rude_people_die_young 5 months ago

    Shouldn’t be hard to do function calling hey

    • @KPeyanski
      @KPeyanski  5 months ago

      you mean voice function hey or something else?

    • @rude_people_die_young
      @rude_people_die_young 5 months ago

      @@KPeyanski I mean where the LLM emits valid JSON that can be used in commands or API calls. It’s a confusing AI term.
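
      Ollama can in fact constrain output to valid JSON via the format parameter of its HTTP API (a sketch, assuming a local instance on the default port and the llama3 model; the prompt should still describe the schema you expect, and the key names here are purely illustrative):

          curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Turn on the kitchen light. Respond only with JSON containing the keys \"service\" and \"entity_id\".", "format": "json", "stream": false}'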