Thanks for watching. Here is an initial response from the author you might find helpful:
- branching off makes more sense when you have multiple AI messages and definitely not for the bottom one (we might as well hide it for the bottom message)
- we tried audio transcription again and it seems to be working just fine. We use OpenAI and are wondering if the key was correct
- RAG is something we are working on right now
I installed it immediately and really like it as well! Lots of good ideas and neat UI. Thanks for the demo and discovery.
Yes! The "split-chat' feature of MSTY is great for comparisons across models. Thank You Very Much!!!
Really appreciate the ollama content. Super helpful for catching up on the AI and LLM scene
MSTY looks like an exciting and useful tool. Eagerly awaiting its implementation of RAG!
The best front end would allow remote access, since most people only have GPUs in gaming rigs and you may want to host this elsewhere. Also, since it is offline, it should be able to reference content from local documents too. If you've got both features, you've got gold.
well the first is part of ollama itself. so any ui apart from this one, plus rag, and you'd be happy then?
I would appreciate a link in the description, but thank you for the video featuring this app, seems very nice!
It’s there
Looks like a great option. Thanks for looking into these tools and reviewing them in such detail.
I really like the ending water sipper pause and I don't quit the video just to see the end 😄
Who says it's water?
“Sliding Doors” ? LOVE that movie!
all my videos include at least some irrelevant and potentially useless knowledge bouncing around in my head.
This worked perfectly for my needs thank you for this!
great video. I currently use anythingLLM to interface with Ollama in my local network.
Do you find AnythingLLM is the best?
Thanks Matt for the video! great introduction to Msty
there are a lot of updates in the last few versions and I will be putting out another video when a few more things are added.
Thank you for covering this. Good UIs are going to be the thing that gathers pace and gets taken up.
Good UIs and easy installation and setup will help a lot with adoption.
Too many of the others are either too difficult to install or set up, don't work on some GPUs, and so on, which likely puts a lot of consumers off using them, and they end up using an online model because it just works.
So I'm happy to see that effort is being made to make locally run models more accessible, because a lot more people will likely use them.
unfortunately there are a lot of really bad UIs. There is one that keeps getting suggested that is hard to find positive things to say about.
Hm... I thought open webui was the best... because last time I checked (a few months ago) it was the best.
But now I think this one definitely deserves attention.
Thank you, Matt!
This Msty looks like a great alternative 🤗 - I will try it!
I really like your videos. Thank you for sharing your experience.
thanks for sharing!
Thank you, great video. I have installed it and it's working well for me. Would you do a video explaining how it can reference your own documents and the web, so a novice can follow? Many thanks, I enjoy your presentation, well done. David.
Nice! Didnt know about this one!
Thanks for the instructional video, but I miss a part where you discuss uploading documents. I really struggle with uploading PDFs and getting them analyzed with local LLMs. Is this possible or not? There is an upload button and I experimented with the size of the context window, but I always get error messages ("fetch failed"). Obviously, local models (7B or 8B) fall behind the cloud models (Claude 3.5 Sonnet, for example), which seem to be much more suitable for this kind of task...
Interesting. I've been using Open WebUI for prepping multi-shot prompts in main code flows, but I like the folders option here. Will give it a whirl.
very cool, it'd be nice to see more UIs like this implement tavern cards and multi-user chats like SillyTavern, and more stuff like CrewAI agent cards the way Flowise has GUI LangChain modules
I really, really love your presentation style. And I love Ollama. You were the person who taught me how to get started and I am forever grateful. By the way, I import GGUF via custom Modelfiles, and sometimes I have to tweak things like the parameters, and I haven't found a way to just update an existing imported model's parameters via the Modelfile. Do you know if it's possible? Currently I delete and re-create the whole import for every change.
No need to delete. Just run create again
@@technovangelist Thank you so much. It was really confusing that there wasn't an "update" command and I never thought to try "create" on something that was already created, hehe. Now I see that it reuses the old blobs on disk when I do that. :) Thanks!
@@technovangelist Thank you so much for clearing that up! :)
do you mean changing the modelfile for an existing ollama model like this?
ollama show codegeex4:latest --modelfile > codegeex4_modelfile
edit codegeex4_modelfile with your changes
ollama create my_new_model --file codegeex4_modelfile
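For the GGUF import case above, a minimal sketch along the same lines (the .gguf file name and parameter values are just placeholders):

# Modelfile
FROM ./my-model.Q4_K_M.gguf
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

ollama create my-model -f Modelfile

Re-running ollama create after editing the Modelfile updates the model in place and, as noted above, reuses the existing weight blobs on disk, so nothing is re-imported from scratch.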
I wonder if we can deploy this on some cloud (AWS, Azure, etc.), thanks for the video Matt Williams!
Make a video on a UI with RAG functionality as well.
I have
With Msty? @@technovangelist
msty doesn’t do rag yet. But I have done a video on a ui with rag
Use anything LLM
No rag :(
Great video!! Is there a web version of MSTY??
You should have put in some sort of link to download it. I can't find it on the web.
Thanks for the video! Really nice app
Msty seems to be using its own version of ollama under the hood. Is there any way to know what version of ollama it is using? Lack of URL and PPT file support in the RAG is the other deal breaker for me. Hope they will support them in upcoming versions.
Hi Matt, thanks so much for your videos, very informative. When I converse with ollama, I see that there are certain things I need to repeat, like asking it to answer with new lines after 2-3 lines or to space them out, or to not be too accommodating and give straight advice. Is there a way to save these configs somewhere?
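If the goal is to avoid repeating those instructions, one way (assuming you're okay creating a custom model) is to bake them into a SYSTEM prompt in a Modelfile; the model name and wording here are only an example:

FROM llama3
SYSTEM Answer in short paragraphs of 2-3 lines separated by blank lines, and give direct, straightforward advice.

ollama create my-assistant -f Modelfile
ollama run my-assistant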
Great video. Thanks
I miss the option to create multiple users that's available in open webui. I really appreciate that feature for making the models available to not-so-technical colleagues.
Thank you so much! I was just looking for something like this
Thanks for the intro to the Msty UI for Ollama. I'm using Open Web UI at work and it's great for handling multiple users. Does Msty offer the same kind of support for user sessions and data security? How does it manage each user's data?
Great. Thanks for the video!
Is there already a canned web interface for ollama that allows me to serve my model publicly over the internet, but without options on the front end for the user to select different models, upload documents, modify system prompts, etc.? I'm looking for the most basic chat functionality possible, with everything set up in the admin backend. Like ChatGPT in the early days.
not sure what you are asking. if you want the more complicated thing you asked for first, then open webui seems to be your best bet. for the simple choice there are a few options out there
@@technovangelist Thank you for your reply. I'm pretty new to the topic. I spent the last three weeks gaining some basic knowledge of LLMs and how to configure/use them. So please forgive me for asking my beginner questions. Currently, I'm testing Open Web UI. From what I learned, even the regular non-admin user can configure the system prompt, advanced parameters, and other stuff in his user settings. I'm looking for an option to provide the model I created and tested within the Ollama CLI over the web without any model-response-related configuration options for the user. It might be possible for someone with the necessary knowledge to modify Open Web UI accordingly, but I'm unfortunately not (yet) capable of doing so.
@@drp111 Is there any news on this? I also need the user chat interface, not the "near data science UI" for comparing LLM models. Streamlit is one option, but maybe there are some others as well by now?
You can close a dialog with the ESC key. I had the exact same response... especially because the window border is so faint. Took me a while to realize I was in a modal.
I'll have to review it again to remember what I did
Does MSTY have a base URL, like WebUI, that acts like an API for other applications? I searched but couldn't find one.
Hi! Nice tool. Have you tried danswer? I deployed it but I couldn't make it work with my local ollama, only with an OpenAI API key. As a web UI it has a clean interface and nice document organization for Q&A.
Thanks for the movie, I'll watch it. :)
Open WebUI: I'm stuck connecting the Docker container with ollama. The webui can't connect to ollama even though I can use ollama with my Obsidian Copilot and with the CLI (ollama run...).
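One common cause, assuming Ollama runs on the host and Open WebUI runs in a container: localhost inside the container is not the host, so the UI has to be pointed at the host's Ollama explicitly. A sketch along the lines of the Open WebUI docs (ports and names may differ in your setup):

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main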
Where are the content and chats saved on Mac?
Can you use this for R1?
Thanks again Matt. I wouldn't call it simple though, at least for average people hahaha. Congrats on how you do it.
msty sounds great, but I need a tool that can be installed on my Linux LLM server (like OpenUI) or on the client workstation in my home network - any suggestions?
Open webui is a great option for that
Hello Matt, you did not demonstrate document upload for analysis, is that capability available now?
Yup. It’s in there
Will it take an image or document as input?
what is the best open-source GUI I can use for my local RAG app?
there are a lot of options out there, but none of them are very good. at least not yet. this is still new stuff.
did you find that the response time was a good bit slower in AnythingLLM vs. the terminal? or is that normal for GUIs... still learning. I haven't tried LM Studio yet but might do that next, or prob just stick to the command prompt for now
You may want to look into Flowise, it took me some tinkering, but I was able to set up local RAG with it and ollama
AnythingLLM is another great option. That's the one I'm using right now. But, I'm always trying new things, so tomorrow, who knows?
Obsidian, coupled with the Copilot plugin, offers an easy setup and allows for swift interaction with documents.
I'd like to see the conversation branching and refinement concepts come to BoltAI which I think is by far the best GUI client.
Bolt only supports Ollama through the OpenAI-compatible API, which is always going to be a bit behind, so it's potentially limited in what it can do.
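For reference, that OpenAI-compatible API is Ollama's /v1 endpoint; a quick sketch (the model name is just an example):

curl http://localhost:11434/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'

Ollama's native API lives at /api/generate and /api/chat on the same port, and new features generally land there first, which is why clients that only speak the /v1 layer can lag behind.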
great video, can I have multiple API LLM providers set up at the SAME time? thanks
Yes you can!
@@technovangelist cool, many thanks for your hard work
I found another example of an "Ollama GUI" (not an application with a backend):
page-assist-a-web-ui-for-local-ai-models - a Chrome extension.
This is what I call a GUI ;)
Just recorded a video about it. It's not as powerful as the last GUI I covered, msty, and it attempts to do a bunch of things but not very well. But the things it gets right are great.
@@technovangelist Thank you for the honest opinion about Page Assist (I'm the creator of it) :)
why don't I see any image button (to upload an image and ask the LLM)? I installed the Llama-3-Instruct model and still no image button.
I don't know why you don't see the image button, but you wouldn't use it with llama3 anyway. You would need to use images with llava models.
@@technovangelist oh maybe that's why, I'll download llava right away, and finish setting up my GPT-4 API payment. I thought it could only be activated if I added some online service that can take an image as input. but if llava can do it, I won't continue with that GPT-4 API. thank you, I will report here later.
@@technovangelist I can confirm that downloading llava makes that button appear. Thank you.
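For comparison, the CLI side of the same thing, assuming a llava model is pulled and the image path exists locally:

ollama pull llava
ollama run llava "What is in this image? ./photo.png"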
@technovangelist what is the license of msty? Do you have a link for the license page?
No, I don't have a link
Nice, but it seems to me that it does not use the context of previous chats when making API requests to Anthropic.
Is it possible to show us how to use a web UI or Streamlit with Open Interpreter?
msty doesn't seem to use models already loaded with ollama. It's also closed source, so are you sure it's using ollama at all?
It does use the models from ollama. Maybe you skipped that option. There is a place to change the model path in the app
What I have noticed is that the models used by "ollama run" seem to be downloaded to one location and the models for "ollama serve" are downloaded to another location, and they don't seem to know about each other's models. I have not had the chance to dig into what is going on there.
I almost missed that this software is closed source, thanks for pointing it out. I don't usually like installing closed source (even if it's free), so I think I'll pass on this one. Anyway, it was a nice video.
there aren't two ways to run models like that. the ollama client uses the server. they are one and the same
Love it, the inference is fast. Might need text-to-speech.
Matt, sorry but I think msty is NOT an Ollama client. Don't get me wrong, I am a big fan of your videos.
The thing is that I am using Ollama through my WSL Ubuntu installation. The whole thing works great as you can still use Ollama at a local address. I need a good UI too, and msty is a great UI.
The problem is msty is not using Ollama service, or Ollama OpenAI compatible REST service, or even Ollama REST service. It just uses Ollama's models when Ollama is installed on the same machine.
It is not the same being a client to a service and using a program's data (models)...
George J.
Actually it is an ollama client. It’s just not using your instance of ollama. They have embedded ollama which is one of the ways ollama was originally intended to be used. If you use the ollama cli and point it at the msty service it will continue to work. It is 100% still ollama.
That said they are working on an update that will use your instance as well
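The "point the CLI at the msty service" part is just the standard OLLAMA_HOST override; the port below is a placeholder, since the actual port of msty's embedded service isn't given here:

OLLAMA_HOST=127.0.0.1:<msty-service-port> ollama list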
I can't find the source code or a git repo on the page. Is this a commercial app?
It doesn't seem to be commercial yet, but it's not open source I think.
Excellent video, as always. I'm following this series and I was curious: is there any ollama model that allows training with its own dataset, like Chat with RTX? Thanks, Matt!
Chat with RTX doesn't train the model, it just feeds it your data using RAG.
@stickmanland Thanks for answering. Is there any Ollama model that could do that?
Matt, thanks for the insight. Do you know if msty can be installed on WSL Ubuntu?
no idea, but both ollama and msty can be installed on windows without wsl
I don't like docker, it's very difficult to install, set up and manage
Docker desktop makes it easier
enter the Local Multiverse LLM style UI. somehow this might be really useful for me
I can't seem to find the source code. Is it closed-source? And if yes, why? Makes me kind of doubt it...
My review was about whether it’s a great ai tool. Open source or closed isn’t really relevant to the discussion. Looks to be closed source.
I am confused. Does it actually use Ollama? I thought it had its own text service?
Website says:
Do I need to have Ollama installed to use Msty?
No, you don't need to have Ollama installed to use Msty. Msty is a standalone app that works on its own. However, you can use Ollama models in Msty if you have Ollama installed. So you don't need to download the same model twice.
Yes it uses ollama.
Correct. You don’t need to install it because it embeds ollama.
Seems closed source, fat client is an interesting choice, but it is very polished. I like the web deployment of Open WebUI especially because it can do authentication from header so, for example, if you are using Tailscale mesh network it can authenticate you based on your TS identity automatically. Anyway these are clearly aimed at 2 different user groups
Actually most ollama users are GNU+Linux users. You omit how to install and run or associate the GUI with ollama packages on a GNU+Linux OS. And also, possibly, how to use the GUI as an Ollama web scraper to get the most up-to-date information...
Again, you omit describing the size of the MSTY package, its responsiveness, actual bugs to be aware of... and possible interactions with other programs.
Perhaps next time start with how to install: is it quick, quicker, slow... compared with other UIs? Does it offer functionality not available elsewhere? Is it implemented better?
No, most users are not Linux. Windows outnumbers Linux for ollama by about 3 to 1, then Mac, then Linux. Install is the easiest thing to do, not worth showing. And no, it doesn't offer new functionality. That was made very clear. It's simple.
Thx
Can we upload files and chat with it?
No. Just a simple chat client. Not rag.
So far it seems to assume that on Windows all local models are on C: instead of another volume. Researching a workaround.
Just specify the path in the app
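On the Ollama side, the equivalent knob is the OLLAMA_MODELS environment variable; a Windows sketch with a placeholder path:

setx OLLAMA_MODELS "D:\ollama\models"

Then restart Ollama so it picks up the new location.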
Are you still behind MSTY?
ChatGPT has probably also found critical reports about the tool.
I didn’t create it but I like it. ChatGPT is definitely not an authority to trust about critical reports.
I use LM Studio with AnythingLLM.
LM Studio is a great tool to start working with models. A lot of folks run into walls with that pretty soon and migrate over to using Ollama instead. LM Studio has been around a bit longer than Ollama has.
ok, I finally tried using it... it's dog slow for everything. Why do you like it?
Is it open-source with permissive licensing?
I don’t think it is open source, at least the code is not easily accessible. That said I have been involved with many open source projects that weren’t on GitHub. No idea what the license is.
I could not run or download llama models. Can anyone explain to me WHY???
Need more info. What did you try? What error do you get? Where are you doing it? Might be better to ask in the ollama discord
@@technovangelist Thanks for your reply:
I attempted to download "llama3.2-vision" using "msty," but encountered the following error message: "Could not add model llama3.2-vision to your library. Please try again." Subsequently, I used the command ollama run llama3.2-vision in the terminal to download the model. The download was successful, and I confirmed its presence by running the ollama list command. However, when I opened "msty," the model appeared in the list, but after selecting it and attempting to chat, I received the following message: "llama runner process has terminated: exit status 0xc0000409"
Not sure if the embedded ollama in msty has been updated to support that model. I think you can also point msty at your own install of ollama as well. Try that?
Does that model work with ollama on its own on your machine? Do other models have that issue?
@@technovangelist Yes, it does work normally in cmd.
JUST a correction - this is not a "simple Ollama GUI"!
This is a complete app that:
- installs libraries
- installs ollama
- downloads models separately
- runs on its own, regardless of whether you have one or multiple other servers running on your computer.
This is not an Ollama UI, but an application that uses Ollama, with a UI😅
This is an app that is very much in keeping with the goals of Ollama. It is a simple gui that uses Ollama. If you have downloaded models ahead of time, you can use those models in Msty, just like any other client ui that uses ollama. If you download the models from msty, you can use them in the cli, just like any other client ui that uses Ollama. As discussed in the video and in the comments, this will be updated soon to allow configuring to use your own ollama instance on your local machine or remotely. No corrections are needed.
@@technovangelist I agree with you.
But it just doesn't match my knowledge:
Front end = GUI or UI
Backend = engine, workflows
MYST = backend (ollama) + frontend (UI), not just a GUI.
That's why it doesn't make sense to me to call MYST a GUI.
Myst was a game in the 90s that was the first 'killer app' for the CD-ROM. msty is the gui for ollama we are talking about here. But you can call it whatever you like. It's a simple gui that helps folks that need a simple gui to use ollama.
@@technovangelist Alright
Looks nice, but many people run ollama on a headless server with a beefy graphics card and then access it from a laptop. So being able to enter an Ollama IP address is key, and surely a very easy thing to do.
Thanks time well saved.
That exact question is covered in the video.
@technovangelist this makes the title of the video misleading. For now it has no relation to ollama
What do you mean @TheAtassis? The title refers to this being a client for ollama. Because it’s a client for ollama. It couldn’t be a more accurate title.
Yup, 100% this is exactly what I'm trying to do lol. Don't want to work on my main comp and want my partner to be able to access it on her laptop.
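The usual pattern for that headless setup, with placeholder addresses: have Ollama listen on all interfaces on the server, then point the client (the CLI, or a UI's Ollama endpoint setting) at it:

# on the server
OLLAMA_HOST=0.0.0.0 ollama serve
# on the laptop
OLLAMA_HOST=http://server-ip:11434 ollama list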
having RAG will be cool
It's pretty nice now, but I am looking forward to some improvements in the next version or two.
We are up to Msty 0.9 and they have added some of your suggestions.
Pls do a Msty update video…
I think I'm going to wait a couple more versions until they clean up some of the things that they've added in the last few versions. Specifically, being able to automatically update the RAG database when there are changes to the Obsidian vault.
I can't find a single one for Windows, not a single one. It's stupid, why is that the case? I hate docker, I hate the stupidity of the lack of a native app.
Great. Make one.
Only downside is that it is not open source, is it?
I don't know about a downside, but it's just another choice the developer has made. Every dev makes a number of choices they feel are right about a product.
@@technovangelist I am not a big expert, but if we are not able to see the code, we also don't know if user requests might be sent across to the developer, which would then be a privacy issue and contradict open source LLMs in my opinion.
You can't see any of the code in most applications from Microsoft or Apple or plenty of other big companies. Doesn't make them less trustworthy.
@technovangelist yes, it does. In certain industries, it's a problem. In terms of AI, it creates a bigger problem since at the moment the biggest holdup for some industries is where the data resides. To promote more adoption today we need more open source solutions that work. The small legal and medical practices I service can't use Copilot, OpenAI is scary, and VMware is too expensive.
OSS is a shield some orgs like to hide behind, sure. But being OSS doesn't automatically make it safer. How many open source projects have had vulnerabilities that go unnoticed because most don't look at the code and just assume others do? And any security and compliance team can work with a team from a closed source project to understand the risks. Otherwise no closed source tools would be used, and that's just not the case.
Found Jan being mentioned on reddit as an open webui alternative - would love to hear your thoughts on it!
Why are all the UIs coming out browser based? Browser-based UIs store data which Microsoft, Mozilla, and Google can steal. :/ I personally decided to just make my own personal UI that fits my needs using Python.
wait, you made that comment on a ui that is not browser based
@@technovangelist 1:30am here, I guess I was too zoned-out tired at the start to catch that it was an app. And the fact it looks so close to openwebui had me think it must be web. My bad, my bad.
I find LM Studio way easier to install and use.
It’s pretty common to start there and then move to ollama when you hit the wall.
I would love to know more about this. Everything about lmstudio is slow and hard. Why do folks like it? I have a video about it but it's really hard to find anything positive to say.
@@technovangelist I don't think there's any deep reason. Just "it's 1 exe file for everything". Very common motivation among Windows users.
Have you looked at AnythingLLM?
Yes
But no video for it. I want to focus on videos I can say mostly good things about
Sounds like you should make one then. 😉 Love your videos, man. Keep'em coming.
Ugh. I can....but shouldn’t.
@@technovangelist no? Not a fan?
Hope it has DARK MODE!!!!
niyce
Nice, but I like Open WebUI more.
I wouldn't call this app "simple") imo, the simplest way to use ollama is an Alfred workflow or Raycast extension if one doesn't like the ollama cli)
ok
How much did they pay you? Clearly biased.
I don't think Msty is making any money. The amount of money I make from YouTube videos is less than what a high schooler makes at the local McDonald's in a couple of days. And I haven't taken any sponsorships for any video I have ever made. I made a review of the best tool available for ollama, and ollama is the best tool for running models. So you are saying anything positive online is paid for? Are you really that stupid or just trying to rile people up?
That said if someone wanted to pay me I am open to it. But I would have to disclose that relationship when posting the video. You can see when a video has taken a sponsorship very clearly in the way the platform presents the video to you.
@@technovangelist hey man, don't worry about it and continue doing your awesome work with ollama and youtube videos. I have some suggestions of tools which I have used with ollama: phidata, lobehub, AnythingLLM, LM Studio, and pinokio, but I have the same question for all those tools: are all of these 100% private and secure?
That’s not something I can guarantee as I don’t work for them.
I really need these UIs to allow us to adjust temp settings, max tokens, etc.
This one does that too
Do you know where I can get a list of all the 'parameters' '/set' will allow? I stumbled onto the numthread parameter and I like it a lot but... Where is a list of all of them? Yes, I know you don't work for ollama. Thanks :)
type /set parameter and press enter. that will get a lot of them. I haven't seen a full list anywhere
@@technovangelist ay right on. quality vids. good eve from Frostburg Maryland.
*The num_thread parameter. (ex: /set parameter num_thread=13)
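For reference, the form I've seen in the ollama interactive prompt is space-separated, and the names match the Modelfile PARAMETER list in the ollama docs; a few examples (values are arbitrary):

/set parameter num_thread 13
/set parameter num_ctx 8192
/set parameter temperature 0.8
/set parameter num_predict 256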