Dell Price: $35,594.36
Shipping: Free
Thank God the shipping is free.
😂
We will need to wait two more generations to get cheap and powerful hardware. As Ray Kurzweil predicted, we will have hardware with power equivalent to a human brain for under $1,000 in... 2029. That's two Nvidia generations after the Blackwell B200 of 2024-25.
I’ll deal with an idiot AI to start….
$35.5k and no SSD options? I'd ask for a refund.
So, for everyone who is struggling to get WSL 2 set as your default, you need the command "wsl --set-default-version 2" in your PowerShell. Spent an hour figuring it out and I know this will save a few headaches. Thanks for the video Dave. I hope to get this operational before bed.
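For reference (a quick sketch, not taken from the video): "wsl --set-default-version 2" makes WSL 2 the default for any new distros, and if you already installed a distro under WSL 1 you can convert it by including the distro name, e.g.
wsl --set-version Ubuntu 2
where "Ubuntu" is just an example; use whatever name "wsl --list" shows on your machine.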
New? It's been out for over 4 years now ;p
@@JustinEmlay Thanks for pointing out the spelling error, I typed that late into the night.
How do we link the Ollama AI to the OpenAI-style UI? He seemed to skip that part.
@@AIG-Development I think you missed the part where he explained that this management UI is similar to the OpenAI UI, but it's its own download that you run locally.
@@gelisob Where exactly? He skipped over the details of the installation.
I’m cool with living vicariously through Dave.
And just like that, 13 minutes led to an evening of successful tinkering. Thanks for the inspiration!
Glad to see some of us have 50K worth of hardware at hand 😅
@@heyheyhophop lol, just use your gaming machine, it still works quite fast. I certainly don't have the same beast of a machine.
@@eugene3d875 Right, I was kidding. I hope my 12GB 3060 and 48GB of plain RAM will let me go relatively far -- especially as the layers can now be partially offloaded to the CPU, as far as I understand.
@@heyheyhophop I found that 48 GB would be sufficient. I'm running inference on CPU only, due to an incompatible graphics card, and it still performs quite well while keeping the total RAM load under 32 GB. So I think your setup will do great.
@@EugeneShamshurin Many thanks for letting me know
Just casually throwing out a 13min video that can completely transform your life and business... that's so Dave.
Well, he is on the spectrum, so he does this all the time, no big deal, ha ha. My family has no idea why I get so happy using Ollama on my Pi 5.
@@FlintStone-c3s Ollama on a Pi5? You are either a very brave or a very patient person.
I need to do some reading, but what are people actually using it for? I can't think of what I might ask it to do.
@@BastetFurry Says nothing, it's all about the model
@@craigknights You can ask it what to ask
Masterpiece on how to keep the audience hooked with minimal visual and audio jargon.
Superb presentation.
Thank you.
Yes. So glad you're covering this topic. Being able to use the files on your own PC without having to upload them to other companies' servers is exactly what I've been wanting.
The PowerShell command is wsl --install
Thanks!
Thank you. I found this after it didn't work, and it still didn't, but then I tried "wsl --update", which then started the install.
... as I notice it says on Dave's very next slide!
comments coming to the rescue ;)
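Pulling the thread together, the sequence is roughly this (a sketch, run from an elevated PowerShell; the distro name is optional and "Ubuntu" is only an example):
wsl --update
wsl --install -d Ubuntu
The first refreshes the WSL components if they're stale; the second installs WSL plus a distro if you don't have one yet.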
...and if you're having issues running that large Docker command copied from the video description, it has "sudo" missing from the start of it... and make sure you run it from the initial wsl command line rather than any other you may have opened.
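For anyone who wants a sanity check on what that command roughly looks like (this is a sketch based on the Open WebUI README defaults, not a verbatim copy of the one in the description; ports, volume names and image tag may differ):
sudo docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
Note the leading sudo, per the comment above.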
Thank you for this post! I am a Mac user with basically no knowledge of computers (that's why I am a Mac user), but with your steps and a couple of google searches I was able to install ollama, docker and Web UI. My mac does not have an NVIDIA card of course, so it's a bit slower, but the privacy factor makes it totally worth it. Thanks again!
Dave, As a retired Deccie, I love your appreciation of the pdp11. Worked on many pdp 11/34's way back when. Oh! and love the shirt.
pdp/8 is really the only machine worth talking about.🙂
For those with less hardware, the Llama 3.2 3B Instruct model is good for chat and requires much lower specs to run. I am able to run it on GPU using an Nvidia GTX 1070 Ti with only 8GiB of VRAM. So far it has been on par with Llama 3.0 & 3.1 for my use while being a lot faster. To get it, run the following: ollama pull llama3.2:3b-instruct-q4_K_M
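Once the pull finishes you can chat with it straight from the terminal as a quick test:
ollama run llama3.2:3b-instruct-q4_K_M
Type /bye to leave the chat when you're done.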
Thanks a lot buddy
Hey Dave, as a former employee of Digital Equipment Corp for over ten years, I love the t-shirt.
Worked for them when Compaq gutted it then HP burned it to the ground. Was an awesome company to work for.
I've been souping up my old Atari ST; I'm using the Digital Research GEM desktop.
That was excellent and I was able to get it up and running just like your instructions concisely provided. After trying it for several hours, I can say that it isn't a bad language model at all.
Can you remove the adult content limitation?
@@markae0 You can download an uncensored or erotic roleplaying model for that
Dave has Jedi level IT/AI/Communications skills! He almost convinces me I could do this❗🤠
"open the garage doors, HAl."
"I'm afraid I can't do that, Dave."
Open source AI models are more like Jarvis than HAL.
@@lonewitness just watch out for The Riddler
Dave? Why are you doing this, Dave?
@@javabeanz8549 nope it is the candyman for special Art-E-Fish'ale Philantrophy's.
did anybody seen karl marx at the sicknuts from disney ?
Lots of cleverness in these 2 lines... Well played Mr. fake W
Thank you for putting this video together; it's very helpful! When I first saw the 13-minute length, I doubted you'd cover the entire process, especially since the first 5-7 minutes focused on the benefits of handling it in-house rather than using the cloud.
Thanks, Dave! I got Ollama and Open-WebUI installed on my media center docker rig. It works great!
Lots of tech channels out there, but there is only one Dave. Thorough explanation, and he even has the critical commands in the description. I'll be seeking your knowledge out more often. Thanks for everything you do. You're GOAT status.
I had a feeling this was the Ollama model - I can verify as a Linux user that the install for this is as simple as installing ollama from the package manager / Flathub, then running the two commands: ollama serve, then 'ollama run' - which automatically fetches the model if it is not already there...
Two *very useful* commands within the chat interface are /load and /save. You can keep your AI 'alive' and contextually relevant by saving it before exiting.
5 minutes is my average prompt time, if anyone asks...
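A minimal session looks roughly like this ("llama3.1" is just an example model name; swap in whatever you pulled):
ollama serve          (in one terminal, or as a background service)
ollama run llama3.1   (in another; downloads the model on first run)
/save mychat          (inside the chat: snapshot the current session as a named model)
/load mychat          (restore it later)
/bye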
I started using ollama (which supports many models) on macOS, and I never imagined it would be this easy. It also performs very well.
I use an M1 Max with 32GB (actually I expect to change to an M3 Max with 64GB soon :-) )
You are absolutely the best at explaining everything so simply and well! Thanks for another great clip!
There is nothing quite like watching automated processes fully utilize hardware; gaming, work, test benching or just messing around with some horrendously written operation. I love that feeling too!
This is brilliant Dave. But you are underestimating your own expertise and some of the configurations you already have in place on your machine - or simply do automatically. So lots of failures and error messages. But if you are an ageing weird computer nerd like me it's fun sorting it out :-) Thanks.
Wow, THANK YOU! Great video again! Would be interesting to see how much faster YOUR SETUP is compared to my 10-core laptop, 32GB, and a 4GB 3050. Right now it kinda crawls on most questions, but one can always come back 10 minutes later. The amazing thing is IT DOES give answers standalone.
Still pointless then... I'll keep using CoPilot for now..
I bet your issue is the 4GB of VRAM on the GPU. I ran it on just my CPU (5800x, 8 core Zen 3), and responses took less than a minute with no GPU acceleration. You might get better performance by cutting out the GPU entirely and letting it run just on the CPU so the model doesn’t have to load into VRAM piecemeal on every query.
What a cool rabbit hole... I installed it on my Unraid server, loaded 3.1, and I'm hooked. I have no idea how it works and am like a kid in a candy store. Surprisingly this was the best video to get me up and running. I'm already thinking about a heavy-lift system build, because if my P2000 does this well, I can't wait to see what it can do with some amped up hardware.
It will mostly just bring speed. A beefier system can bring higher IQ (bigger models), but probably not very noticeably unless you scale to cloud solutions.
This is pretty cool. I installed it on a Lenovo laptop: Windows 11 Home, 13th Gen Intel i7-1355U, 10 cores, 16GB RAM, SSD. Runs decently enough to experiment with. I am only running it from a command prompt.
Thank you!!!
Just got ollama going on my computer.
I had many bookmarks I wanted to try for local AI, but it is your autistic flow that spoke to me the best
Were you able to get the Docker container working? I'm getting an OCI runtime create failed error at that step.
Love DEC! Both my parents got jobs there in the 80s, which resulted in me moving out of Dorchester to Melrose :)
I spent 5 hours trying to Google this...to find no real answer. Then this. Thanks.
Please make more videos about this subject.
Already been running InvokeAI (Stable Diffusion) and text-generation-webui (for Llama) locally for months. This is the first year I specifically bought a GeForce graphics card (with 16GB RAM) not primarily for gaming but for generative AI. The times they are a-changing. ;)
Stable diffusion runs on Pi5 8GB, a bit slow, 3 minutes per image. Hoping the Hailo AI Hat can run it faster.
For those that don't know, (if you installed Debian), replace 'snap' command with the 'apt' command. BTW- Debian does all of this quite well also.
I believe you can use apt and snap both on Ubuntu, since it's Debian based, and snap is developed by Canonical.
@@JimmyS2 - use Mint, so snap isn't used.
I LOVE YOU! Stay awesome!
Snap didn't work but APT did! Now getting the following error after trying the web-ui command.
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Asked llama and chat-gpt for answers but they recommend systemctl commands that do not work on WSL 2 Ubuntu (I think). Is there any solution? (Really grateful for your time!)
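In case anyone else hits this on WSL 2: the daemon often simply isn't running because nothing starts it for you. Two things that commonly help (the systemd route is spelled out in more detail further down in these comments):
sudo service docker start
or enable systemd in /etc/wsl.conf ([boot] systemd=true), run wsl --shutdown from PowerShell, and reopen the distro.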
Dave, you're always producing high quality entertaining and educational content. Thank you for such dedication and pushing through to share such rich information with all of us.
I’ve tried several different ways to install both ollama and open webui . I ended up with docker for open webui, but the native windows install for ollama because it’s noticeably faster than a docker or WSL install. Great video.
Doing the same. Local Windows Ollama, Docker Open WebUI.
I gave all that a try. But somehow, this conversational "natural language" AI is not having the same effect on me. I have no interest in talking to these things. I don't understand why everyone is so excited. Maybe my brain just works differently. I am not so much of a 'talkey' social sort of person. I'm not excited by a pretend conversation with a piece of software. I don't care if it is "smart", it's NOT HUMAN.
This world has isolated and atomized us so much. I'm sure I'm not the only one who craves in-person human contact. But then stuff like this comes around to further isolate and atomize us socially.
The loneliness is torture, and this stuff only makes it worse, pulling everyone into the cyber-void, away from each other.
God, I hate the 21st century.
This new tech is ruining the social fabric.
I use a 10900K, 128GB, and an RTX 4070 Ti 16GB, and I found it can work better than the latest ChatGPT. In my case ChatGPT quite often destroys my scripts, deletes methods, and does a lot of nonsense. Llama 3.1 seems to do quite a decent job, for example analysing bugs.
Not sure just how much work it would have taken, but with everything running through Docker it is likely easy to test on a lower-end machine. While I think it's really cool to see performance on a machine I may have something comparable to in 15-20 years... it would also have been informative to see the performance on anything close to normal consumer levels of hardware as a comparison.
I'm running it on (don't laugh... it's paid for!) a Dell Opti790 with 32GB RAM (yes, you CAN install that much) and a poor ol' GTX 1070 Ti video card, and it works like gangbusters! The graphics card is modestly overclocked. I turned off the overclocking and I'm not sure if it ran with just a slight performance hit or maybe that was just my imagination. Point is, although there's no doubt that the Threadripper would leave my system PAINFULLY smoked in the dust on more intensive work, the basic chat works great as is. For something that's isolated from web learning, I was very surprised at its breadth of knowledge (it even knew what WSL is...). A roughly 8+ year-old machine will run this configuration just fine...!
Yep, can confirm that it runs really well on my older AMD Ryzen 3600 with 32GB RAM and a 2070 GPU.
just installed this on an intel i7-9750H 16gb laptop and it runs really well, very impressed🙂 - thanks for the vid
HI, curious as to how you solved the docker run issue? thanks
Thanks!
This is the tutorial I didn't know I needed, until now.
I've been waiting for a straightforward tutorial like this to share after setting one up myself! Mistral has an amazing 12B small model that most GPUs can run.
Even though I've been running local inference and RAG for about a year now I still stopped everything to listen to Dave's explanation.... Because Dave..🕺🤖
What I wanna know is if I can get an AI on my Steam Deck that I can use as a personal assistant for creating content, strictly based on all of the data and information I create. I'd put all my information inside a single file; of course it's not gonna literally be a single file, there are multiple different layers of stuff. I just wanna be able to give a custom GPT or Llama 3 all this information and have it act as a personal assistant for one topic alone, but an expert in that topic.
Very impressive, Dave. I appreciate the work you do. Thanks!
Love the shirt, was there slightly after PDP days but got to see the old Alpha starting with EV54. You made this too easy to install on a laptop :)
Hi Dave, I just want to thank you and appreciate that we have you. You read my mind, this setup is exactly what i have been looking for.
Love the DIGITAL t-shirt 🤓. I started working at DEC in 1980.
Finally! I've been looking for a way to learn to make a GPT AI to consume rulebooks and modules for Old School Essentials so I can ask it questions and generate random encounters.
In the early 80's I had a program called 'Whatsit?', maybe you know it. It was an early learning piece of software running on CP/M that did the same: it learned from the things you put into it. AI is built on efforts like these. I later tried to make a similar program in AmigaDOS with a friend and it was fun. Just text based.
There is no way you made anything like an LLM on CP/M. Imagine thinking you invented something, on a computer that could barely have enough RAM for a primitive OS, that has only been in research for the last 10 years.
That program's ability to learn was really a simplistic classifier, in that it had limited scope and could not really learn.
AI is based on a lot of things that have gone before us, Whatsit included, although it was more that Whatsit was based on other fundamentals of its time.
@@TheBodgybrothers
LLMs have been around as a concept well beyond 10 years!
There have been LLMs that ran in batch mode that were large, but because of the limits of systems and RAM they were so slow as to be unusable; still, they existed.
In regards to MrKiilerno1, of course he didn't run an LLM on a CP/M machine; I don't think that was his point. He was referring to a system that engages in a conversation and tracks that conversation across multiple interactions, and in that regard he is correct: role-play games and learning systems have been doing this for decades now.
In terms of RAM, there are older OSes that can easily run programs beyond the constraints of their system memory; OpenVMS, for example, has both swapping and paging mechanisms. OSes now focus on speed, so they demand more memory rather than use concepts like swapping to run extremely large programs, but older systems used to run large applications in very limited memory. I worked on one that had 16K of memory and ran the whole accounting ledger for a large municipality of over 1 million people.
@@stultuses And still I had a lot of fun with it. Most days when I talk to Alexa, Siri or Google, they sometimes tend to do what they want. I appreciate being able to communicate with them by voice; this in itself, I think, is a masterpiece of programming.
@@TheBodgybrothers At the time I was working for a company that made medical database software, to store all their data about medicine and patients in. It had to be big machines; they were very pricey at the time and had a large storage device on them: hard drives. It was also the time 16-bit computers were coming and Microsoft took hold of many branches. Luckily this software evolved into the database system it is these days. It all started with one man and his machine, selling his product and hardware to numerous institutions and health practitioners (doctors' offices). As long as you had the thousands of dollars, you could buy it.
Thanks for this breakdown! Many of my colleagues and I use AI for various work, but there's always that security concern, to say nothing of cost. I'm going to set one of these up on the extra hardware I have around the house. If it works out, we might be spinning up our own for the company to use. This video was very helpful!
I bet you're gonna hit 1M subs with this one! Deal for the next video on how to train your local AI? Also, how much power does that beast of a workstation pull? Can't wait to see how long it'll take to get a response on a human desktop...
Thanks Dave, your presentation helped me a lot to understand it.❤
I've been using Ollama on my PC for a while now (I opted for a Windows install with AnythingLLM as my front end...easy and no Linux needed). It's pretty good over all. Not bleeding edge. Maybe not even cutting edge. Regardless, it does fine if you pick the right model(s) for your needs. It definitely wants to stretch its legs, though. More disk space (for larger versions of models) and more CUDA cores (speed, baby) are definitely more better.
Care to explain your process?
@@chrisbegg290 Nothing complicated. Download and install Ollama and pull at least one of the models. Download AnythingLLM and install it in the usual way. When you start it, go to settings, then the LLM option, and select an installed model to use. Go back to the main screen and chat away. There are of course more options to fiddle with if you want, but that gets you going. There are also some vids here on YouTube if you want more depth.
Love the shirt Dave; many fond memories working on PDP-11s, then VAX 11/780 back in the day
Came for the tutorial, stayed for the DEC comments. Still one of my favourite work experiences in my career -- it was a special place.
So, we’re just casually summoning AIs at home now?
Been doing that for a while. Not too hard. The largest problem is having a good enough model to run that runs on what is a reasonably priced home computer.
lol I love this in this context
That's amusing: "casually summoning AI models", like requesting your own personal butler to remove the plates from the table once you are done eating. Or like using Uber: "Alexa, where is my ride? I called for it 30 minutes ago" 😊
Daemon, not a demon.
"It's the work of the devil" - Mama Boucher
No it's not. It's just zeros and ones. On's and off's.
Same voltages and vibrations, vibes and grooves as the rest of the universe. The oneness is us. Oh eye see.
Summon sounds too archaic. Wait, not archaic enough.
We invoke them.
Thank you for the straightforward, no nonsense, walkthrough.
Dave, getting close to 1mil subscribers!
Thank you. Decentralized, uncensored, 100% private, etc. AI/AGI really is important.
'The path to hell is paved with good intentions' is a quote that comes to mind when I hear governments, corporations, etc. trying to limit the freedoms of individuals. AI is too important not to let individuals have absolute freedom over their own AIs/AGIs.
Sweet PDP11 shirt.
Just thinking about the comparative size of a DIGITAL PDP-11 versus that Threadripper unit.
Oh this is interesting! Please more of this, Dave!
This is incredible, exactly what I have been looking for! ❤
Thanks for this. I've been running LM Studio on my windows box and experimenting with a few different models. Was looking for inspiration to build a docker based AI server, and this hit the spot.
Dave, would you be willing to do a follow up video outlining a few lower levels of hardware? You don't even have to run the model on them (although that would be awesome), but describe some machines say in the 1k, 5k, 10k, and 25k range?
I've been running this exact setup in Linux (Pop!_OS) for a couple weeks now, and it works fine on any modern (e.g. less than 6 years old) hardware. Initially I tried it with just my CPU (Ryzen 5800X), and it was fine, albeit a little slow. But definitely usable. After I enabled GPU acceleration on my Nvidia 2070 Super, the responses came back stupid fast. Like 10x faster than I could read them.
The only thing I would note is that either the CPU or GPU (whichever one is enabled) is going to be pinned at 100% utilization while responses are being generated. The practical effect is non-trivial power draw, and for laptops much shorter battery life unless the unit is plugged into the wall. But don't let that put you off. Even modest hardware (a $1,000 PC brand new 5 years ago) is more than sufficient. Just be aware of your battery level if you're going to do this on a laptop.
Ollama recommends a 10th gen i5 CPU or AMD equivalent and an Nvidia 20xx-series GPU with 8GB or more of VRAM.
Make sure your Windows drive or host drive has enough disk space, as the models can easily rack up 100-400 GB out of nowhere.
Nice. I've done something similar using LM Studio. While my system isn't even close to the power of what you're using, Dave, I can get quick responses from my uncensored model. Really enjoy your videos and look forward to seeing what you come up with next.
Sitting on an airplane with a laptop or in your shop next to a threadripper is exactly the same noise level.
This entire process needs to be automated with a one-click, no-typing install procedure. I just don't understand why the setup is so complicated, with multiple dependencies... Linux, the web UI, then Docker, and then finally the LLM.
There needs to be an automated script or batch file one can download to make this process as simple as a one- or two-click procedure. 99% of the public will never run a local LLM if installation is this cumbersome. Heck, 99% of the general computing public has never seen a DOS prompt, let alone used PowerShell.
Don't get me wrong. I appreciate the straightforward steps you provided (for some reason I installed Llama, but without Linux, a month or two ago, and it was too slow for my $500 4-year-old Ryzen laptop). But the general computing public will not jump through all these hoops.
I'd like to add that Ollama plays very nicely with the Continue VS Code extension, which means... a private, local GitHub Copilot too!
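For anyone curious, pointing Continue at a local Ollama is just a small config edit, roughly like the sketch below (the config file location and exact schema depend on your Continue version, and the model name is whatever you pulled, so check Continue's docs rather than copying this verbatim):
{
  "models": [
    { "title": "Local Llama 3.1", "provider": "ollama", "model": "llama3.1:8b" }
  ]
}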
We just happened to have a 12-GPU open-frame, bitcoin-mining-style computer purchased for a colleague who barely used it and then left our organization. OK, so it's not the latest and greatest, but Ollama is perfect for it. I'm experimenting now. Cheers Dave. (also on the Spectrum)
(also on the spectrum) that's our what's up my N-
And just like that I've got an AI running locally on my machine. Feels kind of weird tbh. Awesome video guide, thanks a lot!
If anyone is getting errors running sudo snap install docker, follow along:
Set the systemd flag in your WSL distro settings. You will need to edit the wsl.conf file to ensure systemd starts up on boot.
Add these lines to /etc/wsl.conf (note you will need to run your editor with sudo privileges, e.g.: sudo nano /etc/wsl.conf):
[boot]
systemd=true
Then close out of the nano editor using CTRL+O to save and CTRL+X to exit.
Final steps: with the above done, close your WSL distro windows and run wsl.exe --shutdown from PowerShell to restart your WSL instances. Upon launch you should have systemd running. You can check this with the command systemctl list-unit-files --type=service, which should show your services' status.
@pcase74 Great advice! Actually I have exactly this issue. Can you tell me a bit more, like: what does the entry look like?
Thanks Dave! I am 68 years old and have experienced everything from DOS to current OSes, and you always amaze me with your knowledge of operating systems.
Jesus so glad I found this channel. Keep up the good work!
Installing docker via snap caused me issues with not being able to access my Nvidia graphics card to run the AI.
I believe this is because I'm running an "unsupported" Linux version.
Installing docker via "apt" fixed this.
I'm using Ubuntu but still have the same issue
@@MythicAudioBooks Did you remove docker and reinstall with apt?
@@kenniejp23 No I haven't yet, but I was getting an error about the "Nvidia container toolkit" and "libnvidia-ml.so.1", so I installed the CUDA toolkit.
I looked into apt, but it seemed fairly complex to set up on WSL, especially to use localhost and so on.
Awesome video and tutorial, thank you.
Is there a way to run this local AI on a mobile phone? Even if it's remotely tapping into the PC from the phone?
I find the ChatGPT app very convenient on mobile, but it would be even more awesome to have an unlocked GPT on my phone instead!
£50,000 PC!??? This video deserves a million more views.
Dave is a multi millionaire. He can afford anything. He lives in a different universe.
For those that can only afford a Raspberry Pi5 8GB, Ollama runs on it. The big models are a bit slow, the smaller ones are usable.
@@roncaruso931 Even better. Dell noticed he does videos about computing and has a good-sized following; so they sent him a $50,000 computer (on loan) in exchange for featuring it in one or more videos.
@blshouse Yes, I did know that, but he is a retired MS software engineer. The man is worth millions of dollars. He could easily afford a $50,000 PC. $50,000 is like 5 cents for him.
Love the “Autodidactism” 👍🏻 very underrated
Love the "digital" t-shirt
Love the PDP-11 shirt. I spent many hours writing an RSTS subsystem and a BASIC-PLUS-2 to C translator for "Unix". I'm saving this episode and will install a local AI. Thanks.
Great shirt, and can I add XXDP to the list of OSes ;-)
Thanks Dave, I have a couple RHEL Linux systems at work I shall try this.
Wow, Dave! I had no idea. I don't have the beast machine you have but I'm going to put this to use. Thanks.
Your computer’s specs make me want to cry.
Just wait a couple of years and it’ll be commonplace. Of course you’ll still be upset by his future setup, but your computer will be this powerful.
@@robbybobbyhobbies Makes me want to wait a few years so AI-compatible tech can mature and become cheap
@@alok.01 I saw some news somewhere that the cost or depreciation of AI development was like 97% which made it the fastest dropping market in history, so we're going there 😅
Another excellent presentation, thank you Dave! I'll definitely be tinkering with this. Nice shirt btw!
I have an i9-KS w/ a 4090 collecting dust on my desk. Now it has finally found its purpose. Thanks, Dave.
Wow, that's just plain amazing. I thought local AI was hard work, but now I know it can be done.
Future me installing this on my $100 quantum computer set-top box and thinking how quaint Dave looked installing this on a $50k server. Now, if only I had a time machine. Thanks Dave, really cool!
This made it all simple to follow; now the hard part is to actually do it… thank you
I loved your reference to HAL 9000
"Hello, Dave. How can I help you today?"
I personally really like using LM Studio. It does everything, including downloading and loading models in a simple UI
I've been running LLMs locally for a while. The most useful thing I made it do was create weather reports from NWS weather data.
I have two RTX 4060 Ti cards (16GB each) and my old RTX 2060 (6GB) in my server. It's really just a gaming desktop moonlighting as a server, but it can run a 70B model decently.
How do you like the complexity of the answers you get? Really annoyed OpenAI is forcing people to use their hardware, especially since most people use ChatGPT for personal use. Is it worth it to set up something locally?
@@organicdinosaur5259 It's pretty good, honestly. I'm running a Q4 model (Llama-3.1-70B), but despite that it's rather accurate. For me personally, I'd say it's worth it, mostly because I can just download a random model and throw it on the server, so you're not stuck with ChatGPT. Qwen is a really good general model and comes in a bunch of sizes.
You can also run most 8B and lower models on CPU at a pretty brisk speed, but it's better if you have a GPU with at least 8GB of VRAM. Ollama is super easy to set up on both Windows and Linux, so it's absolutely worth at least giving it a shot.
I just set this up on my homelab a week ago. Was hoping you had a good android app to go with this setup. Great Video!
How can we be sure that Dave made this video, and not his AI model?
That's what a bot would say!
@@DavesGarage sounds like something a synth would say.
Thank you for the guide Dave! This will help me tremendously debugging!
Open WebUI is the bee's knees. I've had every API and local model running through it for the past month and I just love it. I don't use the Docker rubbish though; it's a much easier install on Linux.
Great tutorial. Would like to see one on how to create a custom model or add training to an existing model.
Don't be spooked by the cost. You can get a perfectly serviceable hardware setup for less than 10% of Dave's killer rig.
Any recommendations?
I wish I could download Dave's rig :)
@@jml_53 I have an AMD motherboard, a Ryzen 7, and two used Nvidia 3080s I bought from a retired crypto miner. Depending on the model size you could run it well on a single GPU with sufficient (20GB) RAM. I'm running standard Debian on bare metal. The whole rig was around $2k.
@@UnwalledGarden Could integrated graphics work? They can address main memory. My Iris Xe addressed 30GB once running Call of Duty, but I suspect there was a memory leak. Anyway, I've always wondered if that could be a cheaper way around the high GPU memory requirement for AI: using an integrated GPU that can tap into more memory.
@@mikejones-vd3fg Recent Nvidia drivers on Windows allow some system RAM to be shared with the GPU. For me this is an additional 16GB out of 32GB system RAM, on top of my RTX 4070 Ti Super's 16GB VRAM. Sadly this is not available on Linux yet. It's quite a bit slower than VRAM, but it makes some use cases possible that weren't before.
BTW: everything that Dave described in this video is possible to achieve in native Windows (with access to the shared memory).
Thank you, good sir. Playing with on-premises AI has been on my list of things to do for some time. This is exactly the motivation I needed. Up and running, very cool.
I did everything, though I had to install Docker for Windows and use its WSL integration. I can display the web GUI, but there is no model available there, while it works in the CLI.
To be honest I'm almost more impressed with the hardware showcased in the video than the actual video topic haha
When I run the docker command, after downloading, it gives me the following error:
docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.
Same here. Would love to learn how to fix this.
Same, frustrating. Any ideas anyone?
Remove the --gpus=all parameter in the docker command. I have this running on a VM in proxmox using just the CPU and it fixed my issue.
@@jqoutlaw Thanks. That worked.
In my casual sleuthing of the problem, it looked as if the NVIDIA gpu had something to do with it. I found that I have an AMD Radeon 780M gpu, so I looked to see how to run it with that, but none of the solutions I found worked.
So I guess I'll just run it using the cpu instead.
My guess is that it's complaining about not finding the NVIDIA CUDA toolkit (only works if you have an nvidia gpu as @jqoutlaw mentions). Also do an update/upgrade: 'sudo apt update; sudo apt upgrade'
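To make the CPU-only fallback concrete, it's the same run command with the GPU flag dropped (again a sketch based on the Open WebUI defaults, not the exact command from the description):
sudo docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
If you do have an NVIDIA card, the container toolkit route mentioned further up is what lets the container find the host's driver libraries (the libnvidia-ml.so.1 that the error complains about).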
Cool shirt! I worked at Digital in the 90's on Sepulveda in Los Angeles.
Thanks - now pretty please download the most ridiculous model available (maybe Llama3.1:405b ?) and show us what killer hardware can do !
Hello Dave! It's so fun listening to this.
Thank you. This helped me create my current girlfriend.
Great informative video. I used the "cheap" and almost premade option of an Intel Arc A770 with their AI Playground program.