This may be my favorite simple Ollama GUI

  • Published: 28 Sep 2024

Comments • 189

  • @technovangelist
    @technovangelist  5 months ago +21

    Thanks for watching. Here is an initial response from the author you might find helpful:
    - branching off makes more sense when you have multiple AI messages, and definitely not for the bottom one (we might as well hide it for the bottom message)
    - we tried audio transcription again and it seems to be working just fine. We use OpenAI and are wondering whether the key was correct
    - RAG is something we are working on right now

  • @supercurioTube
    @supercurioTube 5 months ago +7

    I installed it immediately and really like it as well! Lots of good ideas and neat UI. Thanks for the demo and discovery.

  • @HyperUpscale
    @HyperUpscale 5 months ago +1

    Hm... I thought open webui was the best... because last time I checked (a few months ago) it was the best.
    But now I think this one definitely deserves attention.
    Thank you, Matt!
    This Msty looks like a great alternative 🤗 - I will try it!

  • @davidjameslees635
    @davidjameslees635 4 months ago +1

    Thank you, great video. I have installed it and it's working well for me. Would you do a video explaining how it can reference your own documents and the web, so a novice can follow? Many thanks, I enjoy your presentations. Well done. David.

  • @e-dev9401
    @e-dev9401 3 months ago

    I would appreciate a link in the description, but thank you for the video featuring this app; it seems very nice!

  • @madytyoo
    @madytyoo 4 months ago +1

    I really like your videos. Thank you for sharing your experience.

  • @DhruvJoshiDJ
    @DhruvJoshiDJ 5 months ago +9

    Make a video on a UI with RAG functionality as well.

    • @technovangelist
      @technovangelist  5 months ago +1

      I have

    • @stanTrX
      @stanTrX 5 months ago

      With msty? @technovangelist

    • @technovangelist
      @technovangelist  5 months ago +1

      msty doesn't do RAG yet. But I have done a video on a UI with RAG.

    • @drmetroyt
      @drmetroyt 5 months ago

      Use AnythingLLM

    • @YannochFPV
      @YannochFPV 4 months ago

      No RAG :(

  • @new_artiko
    @new_artiko 5 months ago +4

    thanks for sharing!

  • @Slimpickens45
    @Slimpickens45 5 months ago +2

    Nice! Didn't know about this one!

  • @LochanaMenikarachchi
    @LochanaMenikarachchi 3 months ago

    Msty seems to be using its own version of ollama under the hood. Is there any way to know what version of ollama it is using? Lack of URL and PPT file support in the RAG is the other deal breaker for me. Hope they will support them in upcoming versions.

  • @winkler1b
    @winkler1b 3 months ago

    You can close a dialog with the ESC key. I had the exact same response... especially because the window border is so faint, it took me a while to realize I was in a modal.

    • @technovangelist
      @technovangelist  3 months ago

      I'll have to review it again to remember what I did

  • @sskohli79
    @sskohli79 3 months ago

    Hi Matt, thanks so much for your videos, very informative. When I converse with ollama, I see that there are certain things I need to repeat, like "answer with new lines after 2-3 lines or space them out" or "don't be too accommodating, give straight advice". Is there a way to save these configs somewhere?
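
    One standard way to persist instructions like these with Ollama itself (a sketch, not something covered in the thread; model and file names are placeholders) is to bake them into a Modelfile's SYSTEM prompt and build a named model from it:

      # Modelfile
      FROM llama3
      SYSTEM """Answer with new lines after 2-3 lines, or space them out. Don't be too accommodating; give straight advice."""

      # build a model that always applies these instructions
      ollama create straight-llama -f Modelfile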

  • @Canna_Science_and_Technology
    @Canna_Science_and_Technology 5 months ago

    Thanks for the intro to the Msty UI for Ollama. I'm using Open Web UI at work and it's great for handling multiple users. Does Msty offer the same kind of support for user sessions and data security? How does it manage each user's data?

  • @siddharthkandwal6514
    @siddharthkandwal6514 2 months ago

    Where are the content and chats saved on Mac?

  • @robwin0072
    @robwin0072 1 month ago

    Hello Matt, you did not demonstrate document upload for analysis. Is that capability available now?

  • @NLPprompter
    @NLPprompter 5 months ago +1

    Enter the local multiverse LLM-style UI. Somehow this might be really useful for me.

  • @yuvrajdhepe2116
    @yuvrajdhepe2116 5 months ago +6

    I really like the ending water-sipping pause, and I don't quit the video just so I can see the end 😄

    • @tecnopadre
      @tecnopadre 5 months ago +1

      Who says it's water?

  • @pagutierrezn
    @pagutierrezn 5 months ago

    I miss the option of creating multiple users that's available in open webui. I really appreciate that feature for making the models available to less technical colleagues.

  • @RickySupriyadi
    @RickySupriyadi 5 months ago

    With open webui I'm stuck at connecting docker with ollama; the webui can't connect to ollama even though I can use ollama with my Obsidian Copilot and with CLI ollama run...
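
    A common cause of that symptom, for what it's worth (a sketch of the usual fix, assuming Ollama runs on the host while Open WebUI runs in Docker): the container can't reach a server that only listens on the host's localhost. The Open WebUI docs handle it by mapping the host into the container and pointing OLLAMA_BASE_URL at it:

      # run Open WebUI so the container can reach Ollama on the host
      docker run -d -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
        -v open-webui:/app/backend/data \
        --name open-webui ghcr.io/open-webui/open-webui:main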

  • @sammcj2000
    @sammcj2000 5 months ago

    I'd like to see the conversation branching and refinement concepts come to BoltAI which I think is by far the best GUI client.

    • @technovangelist
      @technovangelist  5 months ago

      Bolt only supports Ollama through the OpenAI-compatible API, which is always going to be a bit behind, so it's potentially limited in what it can do.
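
      For reference, the OpenAI-compatible endpoint Ollama exposes for clients like Bolt looks like this (a minimal sketch; the native API lives at /api/chat instead):

        curl http://localhost:11434/v1/chat/completions \
          -H "Content-Type: application/json" \
          -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello"}]}'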

  • @chrisBruner
    @chrisBruner 5 months ago +1

    msty doesn't seem to use models already loaded with ollama. It's also closed source, so are you sure it's using ollama at all?

    • @technovangelist
      @technovangelist  5 months ago

      It does use the models from ollama. Maybe you skipped that option. There is a place to change the model path in the app

    • @DrakeStardragon
      @DrakeStardragon 5 months ago

      What I have noticed is that the models used by "ollama run" seem to be downloaded to one location and the models for "ollama serve" are downloaded to another location, and they don't seem to know about each other's models. I have not had the chance to dig into what is going on there.

    • @juanjesusligero391
      @juanjesusligero391 5 months ago

      I almost missed that this software is closed source, thanks for pointing it out. I don't usually like installing closed source (even if it's free), so I think I'll pass on this one. Anyway, it was a nice video.

    • @technovangelist
      @technovangelist  5 months ago

      There aren't two ways to run models like that. The ollama client uses the server; they are one.

  • @kevinfox9535
    @kevinfox9535 5 months ago

    You should have put in some sort of link to download it. I can't find it on the web.

  • @Alex29196
    @Alex29196 4 months ago

    Love it, the inference is fast. Might need text-to-speech.

  • @THE-AI_INSIDER
    @THE-AI_INSIDER 5 months ago

    @technovangelist what is the license of msty? Do you have a link for the license page?

  • @isalama
    @isalama 4 months ago

    Can we upload files and chat with them?

  • @josuelapa2271
    @josuelapa2271 5 months ago

    I can't seem to find the source code. Is it closed-source? And if yes, why? Makes me kind of doubt it...

    • @technovangelist
      @technovangelist  5 months ago +1

      My review was about whether it's a great AI tool. Open source or closed isn't really relevant to the discussion. Looks to be closed source.

  • @Maisonier
    @Maisonier 5 months ago +1

    I use LM Studio with AnythingLLM.

    • @technovangelist
      @technovangelist  5 months ago

      LM Studio is a great tool to start working with models. A lot of folks run into walls with that pretty soon and migrate over to using Ollama instead. LM Studio has been around a bit longer than Ollama has.

    • @technovangelist
      @technovangelist  4 months ago

      OK, I finally tried using it... it's dog slow for everything. Why do you like it?

  • @autoboto
    @autoboto 5 months ago

    So far it seems to assume that on Windows all local models are on C: instead of another volume. Researching a workaround.
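
    One workaround to try (a sketch, assuming the stock Ollama server is involved; Msty also has its own model-path setting, as noted elsewhere in the thread): Ollama reads the OLLAMA_MODELS environment variable for its model directory, so on Windows it can be pointed at another volume:

      :: store Ollama models on D: instead of C:
      setx OLLAMA_MODELS "D:\ollama\models"
      :: then restart Ollama so it picks up the new path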

  • @Michael-London
    @Michael-London 4 months ago

    I am confused. Does it actually use Ollama? I thought it had its own text service.

    • @Michael-London
      @Michael-London 4 months ago

      Website says:
      Do I need to have Ollama installed to use Msty?
      No, you don't need to have Ollama installed to use Msty. Msty is a standalone app that works on its own. However, you can use Ollama models in Msty if you have Ollama installed. So you don't need to download the same model twice.

    • @technovangelist
      @technovangelist  4 months ago

      Yes it uses ollama.

    • @technovangelist
      @technovangelist  4 months ago

      Correct. You don’t need to install it because it embeds ollama.

  • @GeorgeJCapnias
    @GeorgeJCapnias 4 months ago

    Matt, sorry but I think msty is NOT an Ollama client. Don't get me wrong, I am a big fan of your videos.
    The thing is that I am using Ollama through my WSL Ubuntu installation. The whole thing works great, as you can still use Ollama at a local address. I need a good UI too, and msty is a great UI.
    The problem is msty is not using the Ollama service, or Ollama's OpenAI-compatible REST service, or even the Ollama REST service. It just uses Ollama's models when Ollama is installed on the same machine.
    Being a client to a service is not the same as using a program's data (models)...
    George J.

    • @technovangelist
      @technovangelist  4 months ago +1

      Actually it is an ollama client. It’s just not using your instance of ollama. They have embedded ollama which is one of the ways ollama was originally intended to be used. If you use the ollama cli and point it at the msty service it will continue to work. It is 100% still ollama.
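
      A quick way to check that yourself (a sketch; the port Msty's embedded service listens on is an assumption here, so confirm the actual value in the app's settings):

        # point the standalone ollama CLI at Msty's embedded Ollama service
        OLLAMA_HOST=http://localhost:10000 ollama list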

    • @technovangelist
      @technovangelist  4 months ago

      That said they are working on an update that will use your instance as well

  • @moneyfr
    @moneyfr 5 months ago

    Do I need to go CPU or GPU to have fast AI?

  • @inout3394
    @inout3394 5 months ago

    Thx

  • @IanScrivener
    @IanScrivener 3 months ago

    We are up to Msty 0.9 and they have added some of your suggestions.
    Please do a Msty update video…

    • @technovangelist
      @technovangelist  3 months ago

      I think I'm going to wait a couple more versions until they clean up some of the things that they've added in the last few versions. Specifically, being able to automatically update the RAG database when there are changes to the Obsidian Vault.

  • @Pregidth
    @Pregidth 4 months ago +1

    Only downside is that it is not open source, is it?

    • @technovangelist
      @technovangelist  4 months ago +1

      I don't know about a downside, but it's just another choice the developer has made. Every dev makes a number of choices they feel are right about a product.

    • @Pregidth
      @Pregidth 4 months ago +1

      @technovangelist I am not a big expert, but if we are not able to see the code, we also don't know if user requests might be sent across to the developer, which would then be a privacy issue and contradicts open source LLMs in my opinion.

    • @technovangelist
      @technovangelist  4 months ago

      You can't see any of the code in most applications from Microsoft or Apple or plenty of other big companies. Doesn't make them less trustworthy.

    • @sc0572
      @sc0572 4 months ago +1

      @technovangelist Yes, it does. In certain industries, it's a problem. In terms of AI, it creates a bigger problem, since at the moment the biggest holdup for some industries is where the data resides. To promote more adoption today we need more open source solutions that work. The small legal and medical practices I service can't use Copilot, OpenAI is scary, and VMware is too expensive.

    • @technovangelist
      @technovangelist  4 months ago

      OSS is a shield some orgs like to hide behind, sure. But being OSS doesn't automatically make something safer. How many open source projects have had vulnerabilities that went unnoticed because most people don't look at the code and just assume others do? And any security and compliance team can work with a team from a closed source project to understand the risks. Otherwise no closed source tools would be used, and that's just not the case.

  • @NathanChambers
    @NathanChambers 5 months ago

    Why are all the UIs coming out browser based? Browser-based UIs store data which Microsoft, Mozilla, and Google can steal. :/ I personally decided to just make my own personal UI that fits my needs using Python.

    • @technovangelist
      @technovangelist  5 months ago +1

      Wait, you made that comment on a UI that is not browser based.

    • @NathanChambers
      @NathanChambers 5 months ago +1

      @technovangelist It's 1:30am here, I guess I was too zoned-out tired at the start to register that it was an app. And the fact it looks so close to openwebui had me thinking it must be web. My bad, my bad.

  • @JJ.R-xs8rf
    @JJ.R-xs8rf 4 months ago

    I find LM Studio way easier to install and use.

    • @technovangelist
      @technovangelist  4 months ago

      It’s pretty common to start there and then move to ollama when you hit the wall.

    • @technovangelist
      @technovangelist  4 months ago +1

      I would love to know more about this. Everything about lmstudio is slow and hard. Why do folks like it? I have a video about it, but it's really hard to find anything positive to say.

    • @MyAmazingUsername
      @MyAmazingUsername 4 months ago

      @technovangelist I don't think there's any deep reason. Just "it's 1 exe file for everything". Very common motivation among Windows users.

  • @12wsaqw
    @12wsaqw 5 months ago +2

    Hope it has DARK MODE!!!!

  • @JustinJohnson13
    @JustinJohnson13 5 months ago

    Have you looked at AnythingLLM?

    • @technovangelist
      @technovangelist  5 months ago

      Yes

    • @technovangelist
      @technovangelist  5 months ago +2

      But no video for it. I want to focus on videos I can say mostly good things about

    • @JustinJohnson13
      @JustinJohnson13 5 months ago

      Sounds like you should make one then. 😉 Love your videos, man. Keep 'em coming.

    • @technovangelist
      @technovangelist  5 months ago +1

      Ugh. I can... but shouldn't.

    • @JustinJohnson13
      @JustinJohnson13 4 months ago

      @@technovangelist no? Not a fan?

  • @stanTrX
    @stanTrX 5 months ago +3

    I don't like docker; it's very difficult to install, set up, and manage.

    • @petebytes5010
      @petebytes5010 3 months ago

      Docker Desktop makes it easier.

  • @MyAmazingUsername
    @MyAmazingUsername 5 months ago +3

    I really, really love your presentation style. And I love Ollama. You were the person who taught me how to get started and I am forever grateful. By the way, I import GGUF via custom Modelfiles, and sometimes I have to tweak things like the parameters, and I haven't found a way to just update an existing import's parameters via the Modelfile. Do you know if it's possible? Currently I delete and re-create the whole import for every change.

    • @technovangelist
      @technovangelist  4 months ago +1

      No need to delete. Just run create again
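
      A minimal sketch of that flow (file and model names are placeholders):

        # Modelfile
        FROM ./my-model.gguf
        PARAMETER temperature 0.7

        # run this for the first import and again after every Modelfile tweak;
        # ollama reuses the existing weight blobs instead of re-importing them
        ollama create my-model -f Modelfile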

    • @MyAmazingUsername
      @MyAmazingUsername 4 months ago

      @@technovangelist Thank you so much. It was really confusing that there wasn't an "update" command and I never thought to try "create" on something that was already created, hehe. Now I see that it reuses the old blobs on disk when I do that. :) Thanks!

    • @MyAmazingUsername
      @MyAmazingUsername 4 months ago

      @@technovangelist Thank you so much for clearing that up! :)

  • @milque1854
    @milque1854 5 months ago +2

    Very cool. It'd be nice to see more UIs like this implement tavern cards and multi-user chats like SillyTavern, and more stuff like CrewAI agent cards the way Flowise has GUI LangChain modules.

  • @danialothman
    @danialothman 5 months ago +3

    Great video. I currently use AnythingLLM to interface with Ollama on my local network.

    • @Codegix
      @Codegix 3 months ago

      Do you find AnythingLLM is the best?

  • @GavinElie
    @GavinElie 4 months ago +2

    MSTY looks like an exciting and useful tool. Eagerly awaiting its implementation of RAG!

  • @vulcan4d
    @vulcan4d 4 months ago +2

    The best front end would allow remote access, since most people only have GPUs in gaming rigs and you may want to host this elsewhere. Also, since it is offline, it should be able to reference content from local documents. If you get both features, you've got gold.

    • @technovangelist
      @technovangelist  4 months ago

      Well, the first is part of ollama itself. So any UI apart from this one with RAG and you are happy then?

  • @davidtindell950
    @davidtindell950 2 months ago +1

    Yes! The "split-chat" feature of MSTY is great for comparisons across models. Thank You Very Much!!!

  • @bigpickles
    @bigpickles 5 months ago +1

    Interesting. I've been using openwebui for prepping multi-shot prompts in main code flows, but I like the folders option here. Will give it a whirl.

  • @kovukumel4917
    @kovukumel4917 3 months ago

    Seems closed source, and a fat client is an interesting choice, but it is very polished. I like the web deployment of Open WebUI especially because it can do authentication from a header, so, for example, if you are using a Tailscale mesh network it can authenticate you based on your TS identity automatically. Anyway, these are clearly aimed at two different user groups.

  • @cristianosorio32
    @cristianosorio32 3 months ago

    Hi! Nice tool. Have you tried Danswer? I deployed it but I couldn't make it work with my local ollama, only with an OpenAI API key. As a web UI it has a clean interface and nice document organization for Q&A.

  • @BillyBobDingledorf
    @BillyBobDingledorf 4 months ago

    Will it take an image or document as input?

  • @shuntera
    @shuntera 5 months ago +1

    “Sliding Doors”? LOVE that movie!

    • @technovangelist
      @technovangelist  5 months ago +1

      all my videos include at least some irrelevant and potentially useless knowledge bouncing around in my head.

  • @drp111
    @drp111 5 months ago +1

    Great. Thanks for the video!
    Is there already a canned web interface for ollama that allows me to serve my model publicly over the internet, but without options on the front end for the user to select different models, upload documents, modify system prompts, etc.? I'm looking for the most basic chat functionality possible, setting up everything in the admin backend. Like chatgpt in the early days.

    • @technovangelist
      @technovangelist  5 months ago +1

      Not sure what you are asking. If you want the more complicated thing you asked for first, then open webui seems to be your best bet. For the simple choice there are a few options out there.

    • @drp111
      @drp111 5 months ago +1

      @technovangelist Thank you for your reply. I'm pretty new to the topic. I spent the last three weeks gaining some basic knowledge of LLMs and how to configure/use them. So please forgive me for asking my beginner questions. Currently, I'm testing Open Web UI. From what I learned, even the regular non-admin user can configure the system prompt, advanced parameters, and other stuff in his user settings. I'm looking for an option to provide the model I created and tested within the Ollama CLI over the web, without any model-response-related configuration options for the user. It might be possible for someone with the necessary knowledge to modify Open Web UI accordingly, but I'm unfortunately not (yet) capable of doing so.

    • @-energy1433
      @-energy1433 2 months ago

      @drp111 Is there any news on this? I also need the user chat interface, not the "near data science UI", to compare LLM models. Streamlit is one option, but maybe there are others as well today?

  • @matthewcuddy129
    @matthewcuddy129 28 days ago

    msty sounds great, but I need a tool that can be installed on my Linux LLM server (like OpenUI) or on the client workstation in my home network - any suggestions?

  • @giuliogemino6407
    @giuliogemino6407 4 months ago

    Actually most ollama users are GNU+Linux users. You omit how to install and run, or associate the GUI with ollama packages, on a GNU+Linux OS. And also how to use the GUI as an Ollama web scraper to get the most updated information...
    Again, you omit describing the weight of the MSTY package, its responsiveness, actual bugs to be aware of... and eventual interaction with other programs.
    Perhaps next time start with how to install: is it quick, quicker, slow... compared with other UIs? Does it offer functionality not available elsewhere? Is it implemented better?

    • @technovangelist
      @technovangelist  4 months ago

      No, most users are not on Linux. Windows outnumbers Linux for ollama by about 3 to 1, then Mac, then Linux. Install is the easiest thing to do, not worth showing. And no, it doesn't offer new functionality. That was made very clear. It's simple.

  • @bdougie
    @bdougie 4 months ago

    Really appreciate the ollama content. Super helpful for catching up on the AI and LLM scene

  • @mohamedkeddache4202
    @mohamedkeddache4202 5 months ago +2

    What is the best open-source GUI I can use for my local RAG app?

    • @technovangelist
      @technovangelist  5 months ago +3

      There are a lot of options out there, but none of them are very good. At least not yet. This is still new stuff.

    • @utawmuddy5940
      @utawmuddy5940 5 months ago

      Did you find that the response time was a good bit slower in AnythingLLM vs. the terminal? Or is that normal for GUIs... still learning. I haven't tried LM Studio yet but might do that next, or probably just stick to the command prompt for now.

    • @psykedout
      @psykedout 5 months ago +2

      You may want to look into Flowise; it took me some tinkering, but I was able to set up local RAG with it and ollama.

    • @incrastic6437
      @incrastic6437 5 months ago

      AnythingLLM is another great option. That's the one I'm using right now. But, I'm always trying new things, so tomorrow, who knows?

    • @Alex29196
      @Alex29196 4 months ago

      Obsidian, coupled with the Copilot plugin, offers an easy setup and allows for swift interaction with documents.

  • @martin22336
    @martin22336 4 months ago

    I can't find a single one for Windows, not a single one. It's stupid. Why is that the case? I hate docker; I hate the stupidity of the lack of a native app.

  • @mountee
    @mountee 5 months ago +1

    Great video. Can I have multiple API LLM providers set up at the SAME time? Thanks.

    • @technovangelist
      @technovangelist  5 months ago +1

      Yes you can!

    • @mountee
      @mountee 5 months ago +1

      @technovangelist Cool, many thanks for your hard work.

  • @abiolasamuel8092
    @abiolasamuel8092 4 months ago

    Does MSTY have a base URL, like WebUI, that acts like an API for other applications? I searched but couldn't find one.

  • @tecnopadre
    @tecnopadre 5 months ago

    Thanks again Matt. I wouldn't call it simple though, at least for average people hahaha. Congrats on how you do it.

  • @mirkoturco
    @mirkoturco 5 months ago

    Nice, but it seems to me that it does not use the context of previous chats when making API requests to Anthropic.

  • @Naarii14
    @Naarii14 3 months ago

    This worked perfectly for my needs, thank you for this!

  • @attaboyabhi
    @attaboyabhi 3 months ago

    Having RAG would be cool.

    • @technovangelist
      @technovangelist  3 months ago +1

      It's pretty nice now, but I am looking forward to some improvements in the next version or two.

  • @ps3301
    @ps3301 4 months ago

    Possible to show us how to use a web UI or Streamlit with Open Interpreter?

  • @jaapjob
    @jaapjob 5 months ago

    Looks like a great option. Thanks for looking into these tools and reviewing them in such detail.

  • @gartopu
    @gartopu 5 months ago

    Thanks for the movie, I'll watch it. :)

  • @printingbooks-d4e
    @printingbooks-d4e 2 months ago

    Do you know where I can get a list of all the parameters '/set' will allow? I stumbled onto the num_thread parameter and I like it a lot but... where is a list of all of them? Yes, I know you don't work for ollama. Thanks :)

    • @technovangelist
      @technovangelist  2 months ago +1

      Type /set parameter and press enter. That will get a lot of them. I haven't seen a full list anywhere.
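
      A minimal sketch of that flow in an interactive session (the value is just an example; note the space-separated syntax):

        ollama run llama3
        >>> /set parameter                # prints the available parameters
        >>> /set parameter num_thread 13  # set one for this session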

    • @printingbooks-d4e
      @printingbooks-d4e 2 months ago

      @@technovangelist ay right on. quality vids. good eve from Frostburg Maryland.

    • @printingbooks-d4e
      @printingbooks-d4e 2 months ago

      *The num_thread parameter. (ex: /set parameter num_thread 13)

  • @cgmiguel
    @cgmiguel 5 months ago

    Thanks for the video! Really nice app

  • @antoniobruce4678
    @antoniobruce4678 4 months ago

    Is it open-source with permissive licensing?

    • @technovangelist
      @technovangelist  4 months ago

      I don’t think it is open source, at least the code is not easily accessible. That said I have been involved with many open source projects that weren’t on GitHub. No idea what the license is.

  • @marcc2689
    @marcc2689 5 months ago

    Great video. Thanks

  • @rodolfozacarias2900
    @rodolfozacarias2900 5 months ago

    Excellent video, as always. I'm following this series and I was curious: is there any ollama model that allows training on your own dataset, like Chat with RTX? Thanks, Matt!

    • @stickmanland
      @stickmanland 5 months ago +2

      Chat with RTX doesn't train the model, it just feeds it your data using RAG.

    • @rodolfozacarias2900
      @rodolfozacarias2900 4 months ago

      @stickmanland Thanks for answering. Is there any Ollama model that could do that?

  • @jlgabrielv
    @jlgabrielv 3 months ago

    Thanks Matt for the video! Great introduction to Msty.

    • @technovangelist
      @technovangelist  3 months ago +1

      there are a lot of updates in the last few versions and I will be putting out another video when a few more things are added.

  • @voiceoftreason1760
    @voiceoftreason1760 4 months ago

    I can't find the source code or a git repo on the page. Is this a commercial app?

    • @technovangelist
      @technovangelist  4 months ago

      It doesn’t seem to be commercial yet but not open source I think.

  • @doriboaz
    @doriboaz 5 months ago

    Matt, thanks for the insight. Do you know if msty can be installed on WSL Ubuntu?

    • @technovangelist
      @technovangelist  5 months ago +1

      No idea, but both ollama and msty can be installed on Windows without WSL.

  • @marhensa
    @marhensa 5 months ago

    Why don't I see any image button (to upload and ask the LLM)? I installed the Llama-3-Instruct model and still no image button.

    • @technovangelist
      @technovangelist  5 months ago

      I don't know why you don't see the image button, but you wouldn't use it with llama3 anyway. You would need to use images with llava models.
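
      For reference, image input with a llava model in the ollama CLI looks like this (a sketch; the image path is a placeholder):

        ollama pull llava
        ollama run llava "What is in this image? ./photo.png"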

    • @marhensa
      @marhensa 5 months ago

      @technovangelist Oh, maybe that's why. I'll download llava right away and finish setting up my payment for the GPT-4 API. I thought it could only be activated if I added some online service that can take an image as input, but if llava can do it, I will not continue with the GPT-4 API. Thank you, I will report back here later.

    • @marhensa
      @marhensa 5 months ago

      ​@@technovangelist I can confirm that downloading llava makes that button appear. Thank you.

  • @AdmV0rl0n
    @AdmV0rl0n 5 months ago

    Thank you for covering this. Good UIs are going to be the thing that gathers pace and gets taken up.

    • @paul1979uk2000
      @paul1979uk2000 4 months ago

      Good UIs that are easy to install and set up will help a lot with adoption.
      Too many of the others are either too difficult to install or set up, don't work on some GPUs, and so on, which likely puts a lot of consumers off using them, and they end up using an online model because it just works.
      So I'm happy to see that effort is being made to make locally run models more accessible, because a lot more people will likely use them.

    • @technovangelist
      @technovangelist  4 months ago

      Unfortunately there are a lot of really bad UIs. There is one that keeps getting suggested that is hard to find positive things to say about.

  • @HyperUpscale
    @HyperUpscale 5 months ago

    I found another example of an "Ollama GUI" (not an application with a backend):
    page-assist-a-web-ui-for-local-ai-models - a Chrome extension.
    This is what I call a GUI ;)

    • @technovangelist
      @technovangelist  4 months ago +2

      Just recorded the video about it. It's not as powerful as the last GUI I covered, msty, and it attempts to do a bunch of things but not very well. But the things it gets right are great.

    • @n4ze3m
      @n4ze3m 4 months ago +1

      @@technovangelist Thank you for the honest opinion about Page Assist (I'm the creator of it) :)

  • @3.cha9
    @3.cha9 5 months ago

    niyce

  • @HyperUpscale
    @HyperUpscale 5 months ago

    JUST a correction - this is not a "simple Ollama GUI"!
    This is a complete app that:
    - installs libraries
    - installs ollama
    - downloads models separately
    - runs on its own, regardless of whether you have one or multiple other servers running on your computer.
    This is not an Ollama UI, but an application that uses Ollama, with a UI 😅

    • @technovangelist
      @technovangelist  5 months ago +1

      This is an app that is very much in keeping with the goals of Ollama. It is a simple gui that uses Ollama. If you have downloaded models ahead of time, you can use those models in Msty, just like any other client ui that uses ollama. If you download the models from msty, you can use them in the cli, just like any other client ui that uses Ollama. As discussed in the video and in the comments, this will be updated soon to allow configuring to use your own ollama instance on your local machine or remotely. No corrections are needed.

    • @HyperUpscale
      @HyperUpscale 5 months ago

      @technovangelist I agree with you.
      But it just doesn't match my knowledge:
      Front end = GUI or UI
      Backend = engine, workflows
      MYST = backend (ollama) + front end (UI), not just a GUI.
      That's why it doesn't make sense to me to call MYST a GUI.

    • @technovangelist
      @technovangelist  5 months ago +2

      MYST was a game in the 90s that was the first "killer app" for the CD-ROM. msty is the GUI for ollama we are talking about here. But you can call it whatever you like. It's a simple GUI that helps folks that need a simple GUI to use ollama.

    • @HyperUpscale
      @HyperUpscale 5 months ago

      @@technovangelist Alright

  • @maxilp4952
    @maxilp4952 5 months ago +2

    Thank you so much! I was just looking for something like this

  • @OliNorwell
    @OliNorwell 5 months ago +8

    Looks nice but many people run ollama on a headless server with a beefy graphics card then access it from a laptop. So being able to enter an Ollama IP address is key and surely a very easy thing to do.

    • @joeburkeson8946
      @joeburkeson8946 5 months ago +1

      Thanks, time well saved.

    • @technovangelist
      @technovangelist  5 months ago +5

      That exact question is covered in the video.
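
      For anyone wiring that up, the usual headless arrangement with stock Ollama is (a sketch; the hostname is a placeholder):

        # on the GPU server: listen on all interfaces, not just localhost
        OLLAMA_HOST=0.0.0.0 ollama serve

        # on the laptop: point any Ollama client at that server
        OLLAMA_HOST=http://gpu-box:11434 ollama run llama3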

    • @TheAtassis
      @TheAtassis 5 months ago

      @technovangelist This makes the title of the video misleading. For now it has no relation to ollama.

    • @technovangelist
      @technovangelist  5 months ago +5

      What do you mean @TheAtassis? The title refers to this being a client for ollama. Because it’s a client for ollama. It couldn’t be a more accurate title.

    • @Larimuss
      @Larimuss 2 months ago

      Yup, 100% this is exactly what I'm trying to do lol. Don't want to work on my main comp and want my partner to be able to access it on her laptop.

  • @Techonsapevole
    @Techonsapevole 5 months ago

    Nice, but I like openweb ui more.

  • @Racife
    @Racife 4 months ago

    Found Jan being mentioned on reddit as an open webui alternative - would love to hear your thoughts on it!

  • @thebosha90
    @thebosha90 5 months ago

    I wouldn't call this app "simple" :) IMO, the simplest way to use ollama is an Alfred workflow or Raycast extension, if one doesn't like the ollama CLI :)

  • @moe3060
    @moe3060 5 months ago

    How much did they pay you? Clearly biased.

    • @technovangelist
      @technovangelist  5 months ago +4

      I don't think Msty is making any money. The amount of money I make from YouTube videos is less than what a high schooler makes at the local McDonald's in a couple of days. And I haven't taken any sponsorships for any video I have ever made. I made a review of the best tool available for ollama, and ollama is the best tool for running models. So you are saying anything positive online is paid for? Are you really that stupid or just trying to rile people up?

    • @technovangelist
      @technovangelist  5 months ago +1

      That said if someone wanted to pay me I am open to it. But I would have to disclose that relationship when posting the video. You can see when a video has taken a sponsorship very clearly in the way the platform presents the video to you.

    • @ayrtonmaradona
      @ayrtonmaradona 5 months ago

      @technovangelist Hey man, don't worry about it and continue doing your awesome work with ollama and youtube videos. I have some suggestions of tools which I have used with ollama: Phidata, LobeHub, AnythingLLM, LM Studio, and Pinokio. But I have the same question for all of those tools: are they 100% private and secure?

    • @technovangelist
      @technovangelist  5 months ago +1

      That’s not something I can guarantee as I don’t work for them.

  • @user-jk9zr3sc5h
    @user-jk9zr3sc5h 5 months ago

    I really need these UIs to allow us to adjust temp settings, max tokens, etc.