I kid you not, this youtube algorithm is absolutely crazy.
I made a couple of searches a few weeks back and you literally showed up out of nowhere in my feed today. Exactly what I was looking for.
First it showed me the chroma db admin video and now this. Thanks a lot!
Welcome aboard!
It really is such a good combination of tooling, exactly what I use myself. I have personally found that Qwen 2.5 14B tends to follow instructions better than Gemma and it seems more than twice as fast despite being a 14B.
My System prompt:
You are a tab completion style assistant in the Obsidian editor. After the user provided content you will continue writing the next few sentences of the text as if you were the original writer. Use British English Spelling. IMPORTANT: Write in the same style and tone of the user unless asked to do otherwise. Do not begin the text with any extra characters or '...' and don't summarise the text.
My User prompt:
{{#context}}Context:
{{context}}
=================================
{{/context}}
Continue the following paragraph:
{{last_line}}
temperature 0.2 - Note: I set this in the Modelfile, not the Companion extension, as I found the extension would incorrectly send the parameter to Ollama as "temp" rather than "temperature"
top_p 0.85 (or 1.0 with min_p set to 0.9)
num_ctx 16384
num_batch 1024
K/V cache quantisation is set to q8_0
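Collected into an Ollama Modelfile, the settings above would look roughly like this (a sketch: the base model tag `qwen2.5:14b` and the model name are assumptions, and the SYSTEM text is abbreviated):

```
# Modelfile (sketch) - adjust the FROM tag to the model you actually pulled
FROM qwen2.5:14b

PARAMETER temperature 0.2
PARAMETER top_p 0.85
PARAMETER num_ctx 16384
PARAMETER num_batch 1024

SYSTEM """You are a tab completion style assistant in the Obsidian editor. ..."""
```

You would then build it with something like `ollama create tab-complete -f Modelfile` and point the Companion extension at the `tab-complete` model (the name is illustrative).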
Wow, it works very well. Thank you so much!
Thanks for highlighting this plugin and how to use it. I’m using it to go over my old notes and at the end of each one I’m just typing ‘In summary:’ and it’s summarising the note content. Amazing!!!
You are probably the most wholesome and humble of all AI content creators! Thank you for Ollama and these awesome videos!
This is absolutely brilliant. I discovered you 2 weeks ago and what I learned since then it's ... incredible. Thank you.
Nice! I was already using the "Continue" extension in VS Code and thought it would be nice to have autocomplete in Obsidian too. Thanks for the heads-up, will be trying this out right away 🙂
Thanks, Matt! At first, I was very hesitant about everything to do with AI, and now, thanks to your great video content, I can't get away from it. Despite the free tools and LLMs, it is slowly becoming an expensive pleasure. Not because of the hardware: I have to invite my fiancée to dinner more often so that she doesn't feel resentful when I fall asleep with my head on the keyboard. 😅 All jokes aside, as a dev, I constantly have new projects in my mind. Keep it up, thanks for your work!
Copilot for Obsidian is also a great one. Combining these two with the "Run Code" plugin makes Obsidian like a Jupyter Notebook + AI.
Thanks, Matt. Not for me, but I passed your video on to a friend with a channel he is building. I was the guy who said you have a very smooth delivery. I figured if you're using this, it has to be decent and not just hype. Thanks again.
You rock, enjoyed it, learned a bit too, thank you, and keep it up!
I actually have a rock that says 'You Rock' on it. Former manager when I lived in Amsterdam got it for me.
Thanks, Matt! Great channel, great speaking speed, easy to understand, and finally a nice way of spending time on the internet.
Thank you!
Thanks for sharing!
I like your videos, and the way you are presenting complex topics in such a calm and easy to understand way 👌
Thank you so much, Matt! I've been looking for something like this for ages! +1 for you writing your own version of the plugin.
Man, I just became an Obsidian fan as well. I am using Copilot for Obsidian. Really good as well, working with Ollama. But Companion seems to be even better. Thanks, Matt.
Thanks for all your great videos... and super humor :-) Been looking for this plugin for Obsidian (without knowing it, before your video).
Great content! Real-world uses with a great UI.
As an obsidian user, I would love to see you make a plugin!
This is outstanding. Very helpful and easy to set up. I will give this a try for my next YouTube scripts. One remark though: it would be fantastic if you could include timestamps for relevant video parts (download, install, config, etc.). Other than that, just thank you.
This is amazing! I love Obsidian - I have it on all my machines! Question: if you run Obsidian on a computer separate from the Ollama server machine, can the plugin work on the remote machine?
Will you be doing a video on how the checkmark on the script in Obsidian triggers n8n?
If you have Ollama built with K/V cache quantisation, and set it to q8_0 - the context will use 50% less memory and the generations won't slow down nearly as much towards the end of a larger document.
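On recent stock Ollama builds, K/V cache quantisation can typically be enabled through environment variables rather than a custom build. The variable names below are my understanding of current releases; treat them as assumptions and check your version's docs:

```shell
# Flash attention is generally required for K/V cache quantisation
export OLLAMA_FLASH_ATTENTION=1
# Quantise the K/V cache to 8-bit, roughly halving its memory vs f16
export OLLAMA_KV_CACHE_TYPE=q8_0
ollama serve
```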
don't think I am seeing those improvements, but maybe I am using a bad model for it.
Sounds like a good use for the free tier of Gemini!
I heard about it, but until you nudged me with this video I didn't set it up. Like you, I prefer a local model. It is really nice; I can't even feel Ollama working in the background. You are right about Llama 3.2 not really cutting it. Off to try Gemma. Oh boy, this is fun.
Awesome video as always! I use it with Groq and it's super fast. But sometimes the line between "help me articulate" and "parrot the AI" gets blurry; we tend to choose the path of least friction and forget to think hard about what we write, it's just human nature. That's why I was hesitant to introduce inline functionality to my Copilot for Obsidian plugin. At the time I created it, I preferred interacting with AI on the side and not letting it directly modify what I write. But people have requested inline features a lot, so I'll probably introduce it anyway.
For something like this, would it be better to find a base text model instead of an instruct fine-tuned one?
Can you use Obsidian Companion offline? or do you need an internet connection?
The tool I showed is completely offline
I am running Ollama bound to 0.0.0.0 and reaching it via a Tailscale IP, but I can't seem to get the plugin to work on mobile. Have you had any luck with this or tried it?
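For reference, exposing Ollama on all interfaces is usually done with the `OLLAMA_HOST` environment variable; this is a sketch based on my understanding of current releases, and the Tailscale address is purely illustrative:

```shell
# Listen on all interfaces on the default port
export OLLAMA_HOST=0.0.0.0:11434
ollama serve
# In the plugin's settings, point the API URL at the machine's
# Tailscale IP, e.g. http://100.x.y.z:11434 (illustrative address)
```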
Matt, forgive my ignorance. I'm a bit of a noob in the AI space as well as Obsidian. I have been using Obsidian and AI tools for a few months... I can't find the Obsidian configuration page you're talking about. Can you or someone in the chat tell me what subcategory or whatever in the configuration this is under?
Never mind, what I was missing is that "Companion" is a plugin...
Great. Please do more obsidian content ❤
Is there a Notion equivalent? :-) Hoping to crowd-source the brain trust here.
Hey Matt, while this works well on desktop, on my M1 Air 8GB it really hogs it. Can you suggest a model I could use from Ollama? I can always use an external one, but there is beauty in using a local one. Maybe Phi3.5?
I mentioned llama3.2 3b, but 1b is also good. Not sure about others.
This idea is fucking awesome
Can you make an Ollama-based version of Claude with computer-use capabilities? If yes, show a how-to video for it :P We need a local, open-source, free version of it :D
That existed before Claude did it.
The idea is nice, but the issue is the system... my system of note-taking, like where can I implement this... I'm usually the guy that highlights stuff after I finish typing my notes and edits whatever I typed over and over, real fast, with AI... hmm, the autocomplete is nice... I still just don't know where it can fit into the whole picture...
You don’t say why that system doesn’t work for your flow. Lots do that.
@@technovangelist I changed my mind. I've watched your video and it led me to a Cursor AI implementation, and the assisted TAB writing is super useful when editing.
I wish for a better implementation; as you said it's a bit funky, and when the whole document is large it's problematic.
Isn't the user prompt the actual prompt the user needs to enter? It would be helpful to understand those keywords, {{#context}} vs {{context}} and {{/context}}, and why I need the ====== line and the new lines.
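The prompt fields look like Mustache-style templates: `{{context}}` substitutes a variable's value, while the `{{#context}}...{{/context}}` pair renders its body only when the variable is non-empty (so the "Context:" header and separator line disappear when there is no context). This small sketch mimics those semantics; it is a guess at the behaviour, not the plugin's actual implementation:

```python
import re

def render(template: str, variables: dict) -> str:
    """Minimal Mustache-like rendering: sections, then plain variables."""
    def section(match: re.Match) -> str:
        # Keep the section body only when the named variable is truthy.
        name, body = match.group(1), match.group(2)
        return body if variables.get(name) else ""
    out = re.sub(r"\{\{#(\w+)\}\}(.*?)\{\{/\1\}\}", section, template, flags=re.S)
    # Substitute plain {{variable}} tags; missing variables become "".
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(variables.get(m.group(1), "")), out)

template = "{{#context}}Context:\n{{context}}\n====\n{{/context}}Continue: {{last_line}}"
print(render(template, {"context": "my notes", "last_line": "Hello"}))
print(render(template, {"context": "", "last_line": "Hello"}))  # Context block skipped
```

The separator line simply marks where the context ends and the instruction begins, so the model does not run them together.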
Oh, missing water bottle!!!
Did you have to call me out with the FOSS Obsidian?
Come on...
I get it, yes Obsidian is obviously the better tool.
You have won :(