Disclaimer: This video was made to showcase how to host Large Language Models (LLMs) locally in LM Studio to interact with your notes privately inside Obsidian using the Obsidian CoPilot Plugin.
Personally, I prefer to use the Custom Frames Plugin in the right pane inside Obsidian, pointed at a hosted Open WebUI instance with my external LLM APIs. In my opinion this provides a faster and better user experience, and it's less resource intensive. In hindsight, I should have showcased this at the end of the video.
I have enjoyed showcasing the CoPilot Plugin, but it's not my preferred method of interacting with my notes inside Obsidian. I do like the related notes feature in the CoPilot Plugin, and it can interact with my notes when I use my local LM Studio models, but it's easy to forget that Vault QA will send your data to external cloud companies if you're not careful. As this plugin evolves, it might be used more in my workflow.
In the future I might make a similar video showcasing how to set up a local instance of Open WebUI on a Proxmox LXC that makes external API requests, if you guys are interested.
😊
😊
How do all these LLM plugins compare in your opinion? I find it hard to navigate them all. Some index the entire vault; some only work on the current note. Some can use most providers, while others can only use external providers or even just OpenAI. Some use the Fabric set of system prompts. Some can work right inside your notes with writing assistance. Some can help with organising the notes and vault. And some require backends like Docker containers to remain fully local.
Hi Choyhsien, I had a lot of problems with Smart Connections in Obsidian. Sometimes the settings wouldn't load, or I would get constant notifications. Text Generator was a little overwhelming and not something I would use very often. I'm still experimenting with Vault QA in the CoPilot plugin. I still prefer running external APIs via Open WebUI and then importing the results back into Obsidian, rather than interacting with notes via CoPilot, as I still think there is a long way to go in this space. The Chat and Relevant feature is pretty cool, and my external API calls are fast. I prefer the styling in Open WebUI, but it's nice to have options.
@@PaulDickson7 Thanks for your comments. I agree that when I don't want to interact with the contents of my vault, the experience in Open WebUI is much better. But when I do, I prefer not to share the contents with OpenAI or Google. I agree that speed is often not an issue. I also find the Relevant feature in Smart Connections good, and I guess it would translate to CoPilot. The chat in Smart Connections I find very lacking: the answers miss a lot of content I would consider obvious and often bring in content from irrelevant notes, to the point where I find it easier to use traditional search.
@@choyhsien This has been my experience as well. I find the experience in Open WebUI to be much better. With Gemini releasing 2.0 Flash, Thinking Experimental and TE with apps, plus o3-mini and o3-mini-high, it's hard not to just keep the AI prompts separate from Obsidian, as the user experience is much better. I was going to give Smart Connections a special mention, but I just found the plugin to be way too intrusive and a bit of a resource hog; it made Obsidian crash a few times. Even CoPilot I hesitated over, as this hasn't been my primary method of using AI with Obsidian, and Vault QA still needs a lot of work. Maybe with CoPilot Plus being released the developer will focus on improvements, but I hope it's not just for CoPilot Plus members. Still, it's pretty cool that we have so many options now.
Can it also interact with PDF docs in Obsidian?
The CoPilot Plus RAG can, but it costs $10 per month. I haven't tested it with the local embed yet. You could do it directly inside LM Studio, but that's not inside Obsidian.
@@PaulDickson7 Wouldn't you be better off using Cursor if you want to interact with the markdown files and PDFs with AI?
@@PaulDickson7 Interesting, thank you. I am trying to figure out a way for local AI to communicate with my notes in Obsidian, as well as help me with my books in PDF format, since Obsidian is also what I use as my PDF reader, as it's easy to make notes on the books. I will continue to look. Thank you, sir.
@@orionmusicnetwork1677 Cursor could be a good solution. If you needed more than 2,000 completions and 50 slow premium requests, then $20 USD per month could be a bit steep. Personally I don't use AI much in Obsidian to interact with my markdown files or PDFs; it's only something I have been experimenting with since the release of DeepSeek R1. I still prefer to distil my notes and only use AI when I need to improve grammar or make something more concise. This is usually done via Open WebUI in a browser.
@@eightbitoni You're welcome, eightbit. There is still a lot of room for development in the space of RAG inside Obsidian. RAG (Retrieval-Augmented Generation) is a technique that enhances AI responses by first retrieving relevant information from a knowledge base or document collection, then using that context to generate more accurate and informed answers. I would love to hear your feedback after you have experimented with your own Obsidian vault. Hopefully the CoPilot developer doesn't limit PDF and image support to CoPilot Plus members only.
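As an aside, the retrieve-then-generate loop described above can be sketched in a few lines. This is a toy illustration, not how CoPilot or Smart Connections actually implement it: real plugins use embedding vectors and similarity search, whereas this sketch scores notes by simple word overlap, and the sample notes and helper names are made up for the example.

```python
import string

def _words(text):
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def score(query, note):
    """Crude relevance score: number of words shared with the query."""
    return len(_words(query) & _words(note))

def retrieve(query, notes, k=2):
    """Return the k notes most relevant to the query (the 'R' in RAG)."""
    return sorted(notes, key=lambda n: score(query, n), reverse=True)[:k]

def build_prompt(query, notes):
    """Augment the question with retrieved context before generation."""
    context = "\n".join(retrieve(query, notes))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# Hypothetical vault snippets standing in for indexed notes.
notes = [
    "Obsidian stores notes as plain Markdown files in a vault.",
    "LM Studio can serve local models over an OpenAI-compatible API.",
    "Proxmox LXC containers are lightweight Linux containers.",
]

# The resulting prompt would be sent to a local or remote LLM.
print(build_prompt("How does Obsidian store my notes?", notes))
```

In a real plugin the `retrieve` step runs against embeddings of every chunk in the vault, which is why indexing quality matters so much for the answers you get back.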
Sorry to ask, are you an AI voice or a real person?
At some points it actually sounds very artificial, so very valid question.
@@huhsaywhat I have been asked this before. I'm a human 😊 I often wonder this myself when I watch channels now; Fireship's Code Reports are an example.
@@N4rl0n I spend a lot of time editing out my human mistakes 😂