FREE Local LLMs on Apple Silicon | FAST!
- Published: May 9, 2024
- Step-by-step setup guide for a totally local LLM with a ChatGPT-like UI, backend and frontend, and a Docker option.
Temperature/fan on your Mac: www.tunabellysoftware.com/tgp... (affiliate link)
Run Windows on a Mac: prf.hn/click/camref:1100libNI (affiliate)
Use COUPON: ZISKIND10
🛒 Gear Links 🛒
* 🍏💥 New MacBook Air M1 Deal: amzn.to/3S59ID8
* 💻🔄 Renewed MacBook Air M1 Deal: amzn.to/45K1Gmk
* 🎧⚡ Great 40Gbps T4 enclosure: amzn.to/3JNwBGW
* 🛠️🚀 My nvme ssd: amzn.to/3YLEySo
* 📦🎮 My gear: www.amazon.com/shop/alexziskind
🎥 Related Videos 🎥
* 🌗 RAM torture test on Mac - • TRUTH about RAM vs SSD...
* 🛠️ Host the PERFECT Prompt - • Hosting the PERFECT Pr...
* 🛠️ Set up Conda on Mac - • python environment set...
* 🛠️ Set up Node on Mac - • Install Node and NVM o...
* 🤖 INSANE Machine Learning on Neural Engine - • INSANE Machine Learnin...
* 💰 This is what spending more on a MacBook Pro gets you - • Spend MORE on a MacBoo...
* 🛠️ Developer productivity Playlist - • Developer Productivity
🔗 AI for Coding Playlist: 📚 - • AI
Repo
github.com/open-webui/open-webui
Docs
docs.openwebui.com/
Docker Single Command
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
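Before the container can do anything useful, Ollama itself has to be running on the host. A minimal sketch of that first step, assuming a Homebrew install and using llama3 as an example model tag (any model from the Ollama library works):
brew install ollama          # or download the app from ollama.com
ollama serve &               # starts the local API server on port 11434
ollama pull llama3           # downloads the model weights
curl http://127.0.0.1:11434  # should reply "Ollama is running"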
- - - - - - - - -
❤️ SUBSCRIBE TO MY YOUTUBE CHANNEL 📺
Click here to subscribe: / @azisk
- - - - - - - - -
Join this channel to get access to perks:
/ @azisk
- - - - - - - - -
📱 ALEX ON X: / digitalix
#machinelearning #llm #softwaredevelopment
I really like that you showed the non-docker install first. I think too many rely on docker black-boxes. I prefer this. Thanks!
Docker isn't a black box. You can get into the containers and change stuff!!!
This channel is the gift that keeps on giving.
Great video Alex! yes please make videos on image generation!
Another great video Alex, I really enjoy your videos. And I really appreciate your perfect diction in English, which makes it easy to follow your explanations even for those who do not have English as their first language.
Very interesting, will definitely be trying this when I get a little downtime!
Thank you, got it to work without docker
Amazing tutorial. Great stuff!
Thank you! Cheers!
Alex, I love this video very much. Thank you!
Awesome stuff, waiting for more videos on the way
Thanks Alex for videos like this 👍
I would like to see an image generation follow-up video 😍
Great video!! And yes, please add a video explaining how to add the image generator.
Yes Alex, you would help us even more if we could learn with you how to add an image generator as well. We thank you for your time and collaboration. Your channel is a must-have subscription nowadays.
Woot woot! great stuff. Nice easy tutorial and I now have a 'smarter' Mac. Thanks :)
Great Video Alex. Thanks.
Glad you liked it!
A video on how you could incorporate these LLMs in your applications would be super interesting! Let's say that in your application you have a set of PDFs or HTML files that provide documentation on your product. If you let these LLMs analyse that documentation, then the user could get very useful information just by asking, instead of searching through all of the documentation files!
+1
Amazing stuff. Thank you
My M1 Mac 16GB be real frightened on the side rn.
I ran 7b variants no problem on my now sold m1 air 16g
got macbook with the same specs. tried to run 15b starcoder2 quantized k5m in LM studio on it, max GPU layers, getting me around 12-13 tokens per sec, not good but manageable
Don't be, unless you are using other things that are super heavy as well. Llama3 8B(?) takes up about 4.7GB of RAM, and with Apple Silicon's efficient use of the NVMe and swap you'll be fine. (I prefer using LM Studio over Ollama now as it has a CLI and web UI built in, no need for Docker/OrbStack, but Ollama on its own without a WebUI works too)
😂
Great video. What format are LLM models downloaded as? Looking into how I can use the ones downloaded with Ollama with other technologies like .NET
Nice. Image generation and integrating the new ChatGPT into this will be great.
One thing for sure... I'll be implementing this on my menu bar for easy access :D
if you're trying Docker, make sure it's version 4.29+, as the host network driver (for Mac) first arrived there as a beta feature
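If you're stuck on an older Docker Desktop without host networking, a common fallback (a port-mapped variant like this also appears in the Open WebUI docs) is to publish the port and point the container at host.docker.internal instead:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://host.docker.internal:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
The UI is then served at localhost:3000 instead of the host-network port.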
Amazing video omg, incredible tutorial man
Glad you liked it!
Thanks! Very nice video
Wow! Thank you!
Excellent video, giving it a try tonight on my M3 Max 14-inch model to see what the results are; will probably share...
Thx for sharing good stuff with us. Nice one
Great video. Awesome 👏
Thanks for the video.😊😊
by the way, I just joined your channel, I really enjoyed these videos, very helpful, thanks!
awesome. welcome!
These videos are so exciting for me; this channel is the number one on YouTube. That's why I subscribe and gladly pay for YouTube Premium. A hug, Alex!
thanks for saying! means a lot
Now we need 1TB MEMORY DRIVES (Like the Amiga used to have 'fast ram' )
@@AZisk Is there any chance you could include a PC GPU relative-performance equivalent for each new Apple Silicon chip that you review?
Alex, you are awesome!
This was interesting, thanks
Yes I’m interested in an image generation video. I’m running llama3 in Bash, haven’t had time to set up a front end yet. Cool video.
Amazing video! I'd just recommend Volta over nvm.
I would love to see the image generation tutorial 😁
I was gonna spring for a maxed M3 Max MBP, but saw rumors that the M4 Max will have more AI-related chops, so just picked up a maxed M1 Max to tide me over 😁
Really excited about setting all this up, finding this vid was very timely, thanks!
Just some food for thought for future vids: Anaconda's licensing terms changed to require any org > 200 employees to license it. For this reason, many Enterprises are steering their devs away from Anaconda. Would be helpful if the tutorials used "vanilla" Python (e.g.: venv) unless Conda were truly necessary. Thanks for the vids and keep up the great work!
good to know. thanks
I was able to, by tracking down your Conda video, get this running.
I have some web dev and Linux experience, so it wasn’t a huge chore but certainly not easy going in relatively blind.
Great tutorial though. Much thanks.
instant sub, great content thank you!
Welcome aboard!
i believe my laptop has 80 Tensor cores. for starters. This looks like a really good shift for a fri night! thanks.
I've just started my career as a Data Scientist, and I found this video to be awesome! 🤩🥳Could you please consider making a video on image generation (in Llama 3) in a private PC environment?🥺🥺
As a game dev, this is so good to have. Btw am gonna try this on parallels for my m1 pro
You mean in windows through parallels? why would it be useful?
Oh you got distracted! You're a true developer!
Yes! Image generation, please!
YO! Finally hearing of a big Svelte project!
Like really, it's so much quicker and easier to ship with Svelte than others, why am I only seeing this now?
Svelte for the win!
Well.. Apple, Brave, the New York Times, and IKEA, among other big names, all use Svelte
@@precisionchoker But they don't acknowledge it much..
i like this tutorial, it is computer dummy friendly~
Tried llama3 on 8GB ram M1 :D ... I guess I was too optimistic
Thanks!
Wow 🤩 thanks so much!
thanks alex
Yes, we do.
So cool, and it's free (if we don't count the 4 grand spent on the machine). I'd love to see the image generation
Here you have a super like - and a cup of coffee 🙂
Yay, thank you! I haven't been to Denmark in a while - beautiful country.
We'll be happy to see a tutorial for Automatic1111 ❤
BTW - One of the BEST programmer channels!
Hi Alex, I would like to see the image generation video
Amazing stuff as usual. Now make a tutorial on Automatic1111
Thanks 👍🏻
I am running Llama and CodeGemma on my laptop for local file intelligence. It's slow, but damn, it reads all my PDFs and gives a perfect overview
Do you do it through Ollama and Open WebUI? I'm curious how you can send files to be processed by LLMs
@@devinou-programmationtechn9979 GPT4All works fairly well with attachments. But I personally use Obsidian as a RAG to process markdown files and PDFs. There are tons of plugins like Text Generator and Smart Connections that can work with Ollama, LM Studio, etc.
Can you describe this “perfect overview”? Just curious what you mean by that.
Yes, running Open WebUI for the Llama and CodeGemma LLMs on a Windows machine. Running Open WebUI on localhost gives you a text area where you can upload a file. The upload takes time. Once it's done, you can ask questions like "give me an overview of this document" or "tell me all the important points of this document"
Gemma doesn’t seem to work well on Apple silicon
Great video Alex, is there any way to have an LLM execute local shell scripts to perform tasks?
thanks!
This is really cool, love the channel and the videos Alex! Just curious, how is this different to an app like LM Studio? Keep up the good work!
My guess is that this web UI has more capabilities such as image generation which LM Studio doesn’t have. If the goal is simply to have text interaction, then I agree that this may not be necessary
Is it fast on a Mac M1 Pro too?
How much storage is used for the whole installation, sir?
Your video is awesome!
Can't believe I found this video today: I just started searching for local LLMs yesterday, and today I found the complete guide. Great video Alex :)
You live in the Matrix. Wake up
Please show image generation
let's do some image generation please, it would be super helpful
Thanks @Alex. By the way, is there a reason it can only use the GPU? Any reason it's not taking advantage of the NPU?
Alex, excellent video!
Can my MacBook Air M2 with 16GB of RAM host these AI engines smoothly?
Great channel! I just built something similar with LM Studio and a Flask-based web UI. I'm going to try this method now. Btw, what was the 'code .' command you ran? Are you using Visual Studio Code? Thanks again!
Thanks! and thanks for joining. I did the flask thing a few videos ago, but it's just another thing to maintain. I find this webui a lot more feature rich and better looking. And yes, the 'code .' command just opens the current folder in VSCode
Mr. Alex Ziskind
Could you clarify whether training deep learning models on a GPU for the Apple Silicon M3 Pro might reduce its lifespan?
Thank you.
When will there be a video on running an LLM on an iPhone or iPad? Like using LLMFarm
Great video! So are you saying that we can get ChatGPT-like quality, just faster, more private, and for free by running local LLMs on our personal machines? Like, do you feel that this replaces ChatGPT?
Thanks! Is a MacBook Air enough for that?
very interesting
Thank you Alex, amazing video. I followed all the steps and enjoyed the process and the results with my M3 Max. I wonder if there is a GPT we can use from the laptop that can search online, since the knowledge cutoff date of these models seems to be a year ago or more. For example, when I ask what the Terraform provider version for AWS or another platform is, the answer is old and there's a potential for deprecated code in the responses. What do you recommend in this case? Not sure if you already have a video for that lol.
that’s a great question. you’ll need to use a framework like flowise or langchain to accomplish this I believe, but i don’t know much about them - it’s on my list of things to learn
@@AZisk Makes sense. I'll do some research and see what I can find to test, but I'll look forward to you sharing a video on this type of model orchestration; it will be fantastic.
Yes yes please make a video generation video!!!
Yes do images please 🙏🏻
now benchmark it vs the MacBook Air :) also wondering how much these are useful tools and not just toys
What about a new M4 iPad Pro video?
Great video. But I think Jan AI is a lot easier to configure and set up for Mac users
Why not do a deployment with Electron, so you have a desktop application? Btw I love this thing!!!
Hey Alex, would you say Apple is in a very good position when it comes to AI and the required hardware? So far Apple has been really quiet, and lots of people don't think Apple can have an edge here. What's your take in general?
What advantage does this have over using LM Studio that you can install directly as an app instead of using the Terminal? (Genuine question)
I use Ollama with the Continue plugin in VS Code, and the Chatbox GUI when it's not code related. Works well on both Mac and Linux with a Ryzen 7000 CPU. On Linux it runs in a Podman (Docker) container. But the best experience is with a MacBook Pro; Apple Silicon and unified memory make it speedy.
Can you train these local LLMs with your own code files? For example adding all files from a project as context so the AI suggests things based on your current code structure and classes.
Yep, you can then build a RAG setup with the LLMs you prefer. I'll be making my own RAG with llama3 this weekend.
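For anyone wondering what the building block for this looks like: Ollama exposes a plain REST API on port 11434, so a crude version that just stuffs file contents into the prompt (a real RAG setup adds embedding and retrieval on top) is a single curl call. llama3 is an example model tag here:
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Given this code as context: <paste file here> - what does the main class do?",
  "stream": false
}'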
Easy question: if I'm not a developer, what benefit do I get from installing an LLM on my Apple Silicon Mac? And what's the difference between the free and paid versions of AI models?
How do we get the models updated regularly?
Hi, please please, if possible, show how to generate images through the Ollama web UI
Would an MBP M1 Pro with 16GB of RAM be enough to run this?
Alex, why does the M1 Mac heat up after like 10 minutes of use?
Image generation video please
Is MPS available in Docker for Apple Silicon already?
Which model is good for programming in JavaScript on an Apple Silicon Mac with 16GB?
thanks for the video man
How do I find out about the hardware requirements like RAM, disk space, GPU?
should i upgrade from macbook pro 2020 (intel core i5 8th gen quad-core 1.4ghz) to macbook air m3 15 inch for coding?
Just install LM Studio
Can someone tell me how this is different from LM Studio, AnythingLLM, or using llamafile? I get a bit confused with all these. Also, can I make this run with RAG?
the question is: are open-source LLMs just as good as, say, ChatGPT or Gemini?
It would also be great to have a short description of each model and a rating of its popularity or specialization. Otherwise you end up installing who-knows-what models on your Mac :)
do you know if any llm would run on base model M1 MacBook Air (8GB memory)?
Will the m4 chips be many times faster still?
just set mine up last night