Love the pace you present at. No filler.
Man, you always go THE deepest with your AI insights and videos. Kudos, bro! Always #AIDOPE 🍪
Very good information, and well presented. Thank you!
Beautifully explained. Thanks a bunch
This was so incredibly helpful... thank you!
Love that...a mage from Warcraft teaching AI
Does this mean I can run this LLM on my local desktop computer at home and access it from my laptop online? Sorry if it's a dumb question, I'm new to this.
Hi, very interesting indeed. One question: what are the requirements to run the model? I'm asking because my goal is to use FlowiseAI hosted locally, then host a vector database like Supabase, ingest our technical documents, and create a chatbot for our needs, completely local. At the moment I'm using the OpenAI ChatGPT API, but I want to do it completely offline for privacy reasons.
Lol I'm doing the same
Always appreciate the overview, thank you.
Brilliant as always!
Thanks!👍👍👍 Useful information.
amazing content. greatly appreciated
Sam Altman: "How dare you!"
woooow
thanks for this
I'm free with ChatGPT
Well, thanks to you I got a new way to use a local AI model as an API. I never knew this was possible. Can you make more such videos?
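For anyone curious how that works: LM Studio's local server speaks an OpenAI-compatible HTTP API, so talking to a local model is just a JSON POST. This is a minimal sketch, not code from the video; the base URL below is the commonly used LM Studio default (`http://localhost:1234/v1`) and the model name is a hypothetical placeholder, so adjust both to whatever your local server shows.

```python
import json
import urllib.request

# Assumed LM Studio defaults; change these if your local server differs.
BASE_URL = "http://localhost:1234/v1"
MODEL = "local-model"  # placeholder; local servers often ignore this field

def build_chat_request(prompt, model=MODEL, temperature=0.7):
    """Build the JSON body for an OpenAI-style /chat/completions call."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

def ask_local_llm(prompt):
    """POST the request to the local server and return the reply text.
    This only works while the local inference server is actually running."""
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]

# Example call (requires the server to be up):
# print(ask_local_llm("Summarize this document in one sentence."))
```

Because the request shape matches OpenAI's API, existing tools that take a configurable base URL can usually be pointed at the local server with no other code changes.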
What are the minimum system requirements to run models locally?
I can't open the model's Python file. Notepad says the file is too large. What do I do??
I'd like to know why you chose the older version (v2.2.1) instead of the newest one, 2.6? The size of dolphin-2.6-mistral-7b.Q5_K_M.gguf is almost the same.
Legend, thanks for sharing this
Can we download the python code?
Thanks, very helpful! Are you ENTP? 🤔
Hey, is my Mac mini M2 strong enough to run open-source AI?
How should I choose among LLMs for download? What are the criteria?
Just a question. What is your hardware configuration for running the local models?
Thank you.
Excellent - what is your hardware setup?
I got a 1080 Ti and for some reason I can't get it to run on my GPU. Is my GPU too old??
Is there a link to the script?
Can you please provide the test3.py file?
Where can I get this code, and do I have to load it into Visual Studio?
I don't quite understand. Isn't it possible to just download models from HF and use them locally? Why is the LM Studio local server needed?
Yes, it's very much possible to do without LM Studio, but what LM Studio brings to the table is one UI for everything you'd otherwise do in different terminal windows. It gets very messy with all the terminals running, then you want to change some configuration, you want to jump from one model to another, etc.
awesome
How would you deploy this to use via api?
Do you do this so you can get the fully uncensored output? I tried asking it iffy stuff and it gave me a nope in LM Studio. Same model.
Did you try it on the uncensored model he describes at the end?
Are there models on LMStudio that can do image recognition like GPT4?
llava
You are right, there is a LLaVA model, but I don't see an LM Studio UI option to add a photo in chat mode. Maybe there is a Python option?
Cool and very useful. But I would appreciate videos on how to do this without being beholden to some company and its apparently closed-source product. Government crackdowns are coming, and I want as much of my toolchain on my own hardware and under my control as possible beforehand.
Can you explain a little more what you mean by government crackdowns? Are you talking about the copyright issues with OpenAI and big industries like Hollywood and newspapers suing them, and them potentially being raided by the FBI, CIA, or some other corrupt agency?
Yup, me too. I downloaded that shit as much as I could to my local drives. 500 GB+ so far... Gimme!!
I wouldn't be surprised if they removed uncensored models from the web.
Actually they will 100%, because it's a security threat to them.
Prepare for the end, folks, be smart lol
Are you saying lm studio is the issue? Just curious to understand what you mean.
@AberrantArt yes, it is not open source.
My understanding is that Jan AI is an open-source alternative to LM Studio, but there are not many tutorials on how to use the local inference server.
Do AMD GPUs work?
You are Late for the party 🥳
Ugh, still too involved for non-techies like me. Can't someone just package all this up in an executable? I mean, that's the whole f-ing point of AI, isn't it? So we don't have to bother with command-line BS. Pfft!
Very Very slow though
Step number one: download some proprietary shit... Not interested.