Set up a Local AI like ChatGPT on your own machine!

  • Published: Nov 24, 2024

Comments • 990

  • @NachosElectric
    @NachosElectric 1 month ago +151

    Dell Price: $35,594.36
    Shipping: Free
    Thank God the shipping is free.

    • @shaddow11ro
      @shaddow11ro 1 month ago +10

      😂

    • @azhuransmx126
      @azhuransmx126 1 month ago +6

      We will need to wait 2 generations more to find cheap and powerful Hardware. As Ray Kurzweil predicted we will obtain Hardware with the power equivalent to a Human Brain for under 1000$ in...... 2029. Those are 2 Nvidia gens after Blackwell B200 of 2024-25.

    • @ToxicNets
      @ToxicNets 1 month ago

      I’ll deal with an idiot AI to start….

    • @marcosdiez7263
      @marcosdiez7263 1 month ago +2

      $35.5k and no SSD options? I'd ask for a refund.

  • @davidgreen9834
    @davidgreen9834 2 months ago +84

    So, for everyone who is struggling to get WSL 2 set as your default, you need the command "wsl --set-default-version 2" in your PowerShell. I spent an hour figuring it out and know this will save a few headaches. Thanks for the video, Dave. I hope to get this operational before bed.
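
    For anyone following along, the documented wsl.exe commands look like this (the distro name "Ubuntu" below is just an example):

    ```shell
    # Make WSL 2 the default version for future distro installs
    # (run in an elevated PowerShell window):
    wsl --set-default-version 2

    # Convert an already-installed distro to WSL 2:
    wsl --set-version Ubuntu 2

    # Check which WSL version each distro is using:
    wsl --list --verbose
    ```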

    • @JustinEmlay
      @JustinEmlay 1 month ago +1

      New? It's been out for over 4 years now ;p

    • @davidgreen9834
      @davidgreen9834 1 month ago +1

      @@JustinEmlay Thanks for pointing out the spelling error, I typed that late into the night.

    • @AIG-Development
      @AIG-Development 1 month ago

      How do we link the Ollama AI to the OpenAI, seemed to skip that part?

    • @gelisob
      @gelisob 1 month ago +1

      @@AIG-Development I think you skipped the part where he explained that this management UI is similar to the OpenAI UI, but a separate download.

    • @AIG-Development
      @AIG-Development 1 month ago +1

      @@gelisob At which point? He skipped over the details of the installation.

  • @TheBugkillah
    @TheBugkillah 2 months ago +57

    I’m cool with living vicariously through Dave.

  • @eugene3d875
    @eugene3d875 2 months ago +60

    And just like that, 13 minutes led to an evening of successful tinkering. Thanks for the inspiration!

    • @heyheyhophop
      @heyheyhophop 1 month ago +5

      Glad to see some of us have 50K worth of hardware at hand 😅

    • @eugene3d875
      @eugene3d875 1 month ago +3

      @@heyheyhophop lol, just use your gaming machine, it still works quite fast. I certainly don't have the same beast of a machine.

    • @heyheyhophop
      @heyheyhophop 1 month ago

      @@eugene3d875 Right, was kidding. I hope my 12GB 3060 and 48GB of plain RAM will let me go relatively far -- especially now that the layers can be partially offloaded to the CPU, as far as I understand.

    • @EugeneShamshurin
      @EugeneShamshurin 1 month ago +3

      @@heyheyhophop I found that 48 GB is sufficient. I'm running inference on CPU only, due to an incompatible graphics card, and it still performs quite well while keeping the total RAM load under 32 GB. So I think your setup will do great.

    • @heyheyhophop
      @heyheyhophop 1 month ago

      @@EugeneShamshurin Many thanks for letting me know

  • @osterbybruk
    @osterbybruk 2 months ago +148

    Just casually throwing out a 13min video that can completely transform your life and business... that's so Dave.

    • @FlintStone-c3s
      @FlintStone-c3s 2 months ago +4

      Well, he is on the spectrum, so he does this all the time, no big deal, ha ha. My family has no idea why I get so happy using Ollama on my Pi 5.

    • @BastetFurry
      @BastetFurry 2 months ago +1

      @@FlintStone-c3s Ollama on a Pi5? You are either a very brave or a very patient person.

    • @craigknights
      @craigknights 2 months ago +1

      I need to do some reading, but what are people actually using it for? I can't think of what I might ask it to do.

    • @KimYoungUn69
      @KimYoungUn69 1 month ago

      @@BastetFurry That says nothing; it's all about the model.

    • @KimYoungUn69
      @KimYoungUn69 1 month ago +1

      @@craigknights You can ask it what to ask

  • @toulasantha
    @toulasantha 1 month ago +2

    Masterpiece on how to keep the audience hooked with minimal visual and audio jargon.
    Superb presentation.
    Thank you.

  • @jaz093
    @jaz093 2 months ago +12

    Yes. So glad you're covering this topic. Being able to use the files on your own PC without having to upload them to another company's servers.

  • @gertleusink5125
    @gertleusink5125 2 months ago +61

    The PowerShell command is "wsl --install".

    • @toploaded2078
      @toploaded2078 2 months ago +4

      Thanks!

    • @Mopharli
      @Mopharli 2 months ago +4

      Thank you. I found this after it didn't work, and it still didn't, but then I tried "wsl --update", which then started the install.

    • @Mopharli
      @Mopharli 2 months ago

      ... as I notice it says on Dave's very next slide!

    • @OldPoi77
      @OldPoi77 2 months ago

      comments coming to the rescue ;)

    • @Mopharli
      @Mopharli 2 months ago +3

      ...and if you're having issues running that large Docker command copied from the video description, it has "sudo" missing from the start of it... and make sure you run it from the initial wsl command line rather than any other you may have opened.
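
      If it helps, the stock Open WebUI run command (as published in the Open WebUI README; the exact flags in the video description may differ) looks roughly like this, with sudo prepended:

      ```shell
      # Run Open WebUI in Docker, persisting data in a named volume
      # and exposing the UI on http://localhost:3000
      sudo docker run -d -p 3000:8080 \
        --add-host=host.docker.internal:host-gateway \
        -v open-webui:/app/backend/data \
        --name open-webui --restart always \
        ghcr.io/open-webui/open-webui:main
      ```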

  • @KeyframeHolder
    @KeyframeHolder 1 month ago +9

    Thank you for this post! I am a Mac user with basically no knowledge of computers (that's why I am a Mac user), but with your steps and a couple of Google searches I was able to install ollama, Docker, and the Web UI. My Mac does not have an NVIDIA card of course, so it's a bit slower, but the privacy factor makes it totally worth it. Thanks again!

  • @markkenefick644
    @markkenefick644 2 months ago +25

    Dave, as a retired Deccie, I love your appreciation of the PDP-11. Worked on many PDP-11/34s way back when. Oh, and love the shirt.

    • @lancairtalk7237
      @lancairtalk7237 1 day ago

      The PDP-8 is really the only machine worth talking about. 🙂

  • @JenniferBishop-ty6tt
    @JenniferBishop-ty6tt 1 month ago +13

    For those with less hardware, the Llama 3.2 3B Instruct model is good for chat and requires much lower specs to run. I am able to run it on GPU using an Nvidia GTX 1070 Ti with only 8GiB of VRAM. So far it has been on par with Llama 3.0 & 3.1 for my use, while being a lot faster. To get it, run the following: ollama pull llama3.2:3b-instruct-q4_K_M

  • @jamndude
    @jamndude 1 month ago +14

    Hey Dave, as a former employee of Digital Equipment Corp for over ten years, I love the t-shirt.

    • @hdguppies
      @hdguppies 1 month ago +3

      I worked for them when Compaq gutted it, then HP burned it to the ground. It was an awesome company to work for.

    • @DeadCat-42
      @DeadCat-42 1 month ago

      I've been souping up my old Atari ST; I'm using the Digital Research GEM desktop.

  • @OceanusHelios
    @OceanusHelios 2 months ago +13

    That was excellent and I was able to get it up and running just like your instructions concisely provided. After trying it for several hours, I can say that it isn't a bad language model at all.

    • @markae0
      @markae0 2 months ago

      Can you remove the adult content limitation?

    • @taihuynhuc3135
      @taihuynhuc3135 1 month ago

      @@markae0 You can download an uncensored or erotic roleplaying model for that

  • @JohnBabisDJC
    @JohnBabisDJC 1 month ago +3

    Dave has Jedi level IT/AI/Communications skills! He almost convinces me I could do this❗🤠

  • @AFNacapella
    @AFNacapella 2 months ago +352

    "Open the garage doors, HAL."
    "I'm afraid I can't do that, Dave."

    • @lonewitness
      @lonewitness 2 months ago +7

      Open-source AI models are more like Jarvis than HAL.

    • @javabeanz8549
      @javabeanz8549 2 months ago +2

      @@lonewitness just watch out for The Riddler

    • @dighawaii1
      @dighawaii1 2 months ago +15

      Dave? Why are you doing this, Dave?

    • @ApeStimplair-et9yk
      @ApeStimplair-et9yk 2 months ago +2

      @@javabeanz8549 nope it is the candyman for special Art-E-Fish'ale Philantrophy's.
      did anybody seen karl marx at the sicknuts from disney ?

    • @Roberto-SergeiIVVonYamashita
      @Roberto-SergeiIVVonYamashita 2 months ago +4

      Lots of cleverness in these 2 lines... Well played Mr. fake W

  • @gurmeet4you
    @gurmeet4you 14 days ago

    Thank you for putting this video together; it's very helpful! When I first saw the 13-minute length, I doubted you'd cover the entire process, especially since the first 5-7 minutes focused on the benefits of handling it in-house rather than using the cloud.

  • @hbhamilton3
    @hbhamilton3 2 months ago +5

    Thanks, Dave! I got Ollama and Open-WebUI installed on my media center docker rig. It works great!

  • @IdRadical
    @IdRadical 1 month ago +1

    Lots of tech channels out there, but there is only one Dave. Thorough explanation, and he even puts the critical commands in the description. I'll be seeking out your knowledge more often. Thanks for everything you do. You're GOAT status.

  • @Vilvaran
    @Vilvaran 2 months ago +13

    I had a feeling this was the Ollama model - I can verify as a Linux user that the install for this is as simple as installing ollama from the package manager / Flathub, then running two commands: ollama serve, then 'ollama run' - which automatically fetches the model if it is not already there...
    Two *very useful* commands within the chat interface are /load and /save. You can keep your AI 'alive' and contextually relevant by saving it before exiting.
    5 minutes is my average prompt time, if anyone asks...
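
    That workflow, as a sketch (the model name is an example; on most installs the server is already running as a background service):

    ```shell
    # Start the Ollama server manually if it isn't already running:
    ollama serve &

    # Run a model; this pulls it first if it isn't present locally:
    ollama run llama3.1

    # Inside the chat REPL:
    #   /save mysession   - save the current session
    #   /load mysession   - restore it in a later run
    #   /bye              - exit
    ```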

    • @bjarne431
      @bjarne431 1 month ago

      I started using ollama (which supports many models) on macOS; I never imagined it would be this easy. It also performs very well.
      I use an M1 Max with 32 GB (actually I expect to change to an M3 Max with 64 GB soon :-) )

  • @Minglarr
    @Minglarr 2 months ago +5

    You are absolutely the best at explaining everything so simply and well! Thanks for another great clip!

  • @Alex-y6o7h
    @Alex-y6o7h 1 month ago +4

    There is nothing quite like watching automated processes fully utilize hardware; gaming, work, test benching or just messing around with some horrendously written operation. I love that feeling too!

  • @MrNilOrange
    @MrNilOrange 2 months ago +12

    This is brilliant, Dave. But you are underestimating your own expertise and some of the configurations you already have in place on your machine - or simply do automatically. So lots of failures and error messages. But if you are an ageing weird computer nerd like me, it's fun sorting it out :-) Thanks.

  • @hstrinzel
    @hstrinzel 2 months ago +16

    Wow, THANK YOU! Great video again! It would be interesting to know how much faster YOUR SETUP is compared to my 10-core laptop, 32GB, and a 4GB 3050. Right now it kind of crawls on most questions, but one can always come back 10 minutes later. The amazing thing is IT DOES give answers standalone.

    • @Planetdune
      @Planetdune 2 months ago

      Still pointless then... I'll keep using CoPilot for now..

    • @benjaminlynch9958
      @benjaminlynch9958 2 months ago +8

      I bet your issue is the 4GB of VRAM on the GPU. I ran it on just my CPU (5800x, 8 core Zen 3), and responses took less than a minute with no GPU acceleration. You might get better performance by cutting out the GPU entirely and letting it run just on the CPU so the model doesn’t have to load into VRAM piecemeal on every query.

  • @doozowings4672
    @doozowings4672 1 month ago +2

    What a cool rabbit hole... I installed it on my Unraid server and loaded 3.1, and I'm hooked. I have no idea how it works and am like a kid in a candy store. Surprisingly, this was the best video to get me up and running. I'm already thinking about a heavy-lift system build, because if my P2000 does this well, I can't wait to see what it can do with some amped-up hardware.

    • @Glademist
      @Glademist 1 month ago

      It will mostly only bring speed. A beefier system will bring higher IQ, but probably not very noticeably unless you scale to cloud solutions.

  • @TKevinRussell
    @TKevinRussell 1 month ago +3

    This is pretty cool. I installed it on a Lenovo laptop: Windows 11 Home, 13th Gen Intel i7-1355U, 10 cores, 16GB RAM, SSD. It runs decently enough to experiment with. I am only running it from a command prompt.

  • @domagoj1978zagreb
    @domagoj1978zagreb 2 months ago +2

    Thank you!!!
    Just got ollama going on my computer.
    I had many bookmarks I wanted to try for local AI, but it is your autistic flow that spoke to me the best.

    • @jrherita
      @jrherita 1 month ago

      Were you able to get the Docker container working? I'm getting an OCI runtime create failed error at that step.

  • @MrBaboon1212
    @MrBaboon1212 2 months ago +4

    Love DEC! Both my parents got jobs there in the 80s, which resulted in me moving out of Dorchester to Melrose :)

  • @realmstupid-on8df
    @realmstupid-on8df 2 months ago +10

    I spent 5 hours trying to Google this... and found no real answer. Then this. Thanks.

  • @TroySkirchak
    @TroySkirchak 2 months ago +23

    Please make more videos about this subject.

  • @TheTuubster
    @TheTuubster 2 months ago +12

    Already running InvokeAI (Stable Diffusion) and text-generation-webui (for Llama) locally for months. This is the first year I bought a GeForce gfx card (with 16GB RAM) not primarily for gaming but for generative AI. The times they are a-changin'. ;)

    • @FlintStone-c3s
      @FlintStone-c3s 2 months ago +1

      Stable Diffusion runs on a Pi 5 8GB - a bit slow, 3 minutes per image. Hoping the Hailo AI Hat can run it faster.

  • @tubeDude48
    @tubeDude48 2 months ago +20

    For those that don't know: if you installed Debian, replace the 'snap' command with the 'apt' command. BTW, Debian does all of this quite well also.
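
    In other words (docker.io is the usual Debian package name; your release may differ):

    ```shell
    # Ubuntu (snap available out of the box):
    sudo snap install docker

    # Debian (no snap by default) - use apt instead:
    sudo apt update
    sudo apt install docker.io
    ```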

    • @JimmyS2
      @JimmyS2 2 months ago

      I believe you can use both apt and snap on Ubuntu, since it's Debian-based and snap is developed by Canonical.

    • @tubeDude48
      @tubeDude48 2 months ago

      @@JimmyS2 - I use Mint, so snap isn't used.

    • @christiandior8726
      @christiandior8726 2 months ago

      I LOVE YOU! Stay awesome!

    • @christiandior8726
      @christiandior8726 2 months ago +2

      Snap didn't work but APT did! Now I'm getting the following error after trying the web-ui command:
      docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
      I asked Llama and ChatGPT for answers, but they recommend systemctl commands that do not work on WSL 2 Ubuntu (I think). Is there any solution? (Really grateful for your time!)
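
      One common fix, assuming Docker was installed with apt inside the WSL distro: either start the service directly, or enable systemd in WSL so it can manage the daemon.

      ```shell
      # Without systemd, start the daemon directly:
      sudo service docker start

      # Or enable systemd: add these lines to /etc/wsl.conf
      #   [boot]
      #   systemd=true
      # then run 'wsl --shutdown' from PowerShell, reopen the distro, and:
      sudo systemctl enable --now docker
      ```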

  • @jonaskromwell4464
    @jonaskromwell4464 1 month ago

    Dave, you're always producing high quality entertaining and educational content. Thank you for such dedication and pushing through to share such rich information with all of us.

  • @LaughterOnWater
    @LaughterOnWater 2 months ago +10

    I've tried several different ways to install both ollama and Open WebUI. I ended up with Docker for Open WebUI, but the native Windows install for ollama, because it's noticeably faster than a Docker or WSL install. Great video.

    • @tehtechie1
      @tehtechie1 2 months ago

      Doing the same: local Windows Ollama, Docker Open WebUI.

    • @peterbelanger4094
      @peterbelanger4094 2 months ago

      I gave all that a try. But somehow, this conversational "natural language" AI is not having the same effect on me. I have no interest in talking to these things. I don't understand why everyone is so excited; maybe my brain just works differently. I am not much of a "talkey" social sort of person. I'm not excited by a pretend conversation with a piece of software. I don't care if it is "smart", it's NOT HUMAN.
      This world has isolated and atomized us so much. I'm sure I'm not the only one who craves in-person human contact. But then stuff like this comes around to further isolate and atomize us socially.
      The loneliness is torture, and this stuff only makes it worse, pulling everyone into the cyber-void, away from each other.
      God, I hate the 21st century.

    • @peterbelanger4094
      @peterbelanger4094 2 months ago

      This new tech

    • @peterbelanger4094
      @peterbelanger4094 2 months ago

      is ruining the

    • @peterbelanger4094
      @peterbelanger4094 2 months ago

      social fabric,

  • @Johan-rm6ec
    @Johan-rm6ec 19 days ago

    I use a 10900K, 128GB RAM, and an RTX 4070 Ti 16GB, and I found it can work better than the latest ChatGPT. In my case ChatGPT quite often destroys my scripts, deletes methods, and does a lot of nonsense. Llama 3.1 seems to do a decent job, for example analysing bugs.

  • @loop8946
    @loop8946 2 months ago +14

    Not sure just how much work it would have taken, but with everything running through Docker it is likely easy to test on a lower-end machine. While I think it's really cool to see performance on a machine I may have something comparable to in 15-20 years, it would also have been informative to see the performance on anything close to normal consumer hardware as a comparison.

    • @The_Craphound
      @The_Craphound 2 months ago +2

      I'm running it on (don't laugh... it's paid for!) a Dell Opti790 with 32GB RAM (yes, you CAN install that much) and a poor ol' GTX 1070 Ti video card, and it works like gangbusters! The graphics card is modestly overclocked. I turned off the overclocking and I'm not sure if it ran with just a slight performance hit, or maybe that was just my imagination. Point is, although there's no doubt that the Threadripper would leave my system PAINFULLY smoked in the dust on more intensive work, the basic chat works great as is. For something that's isolated from web learning, I was very surprised by its breadth of knowledge (it even knew what WSL is...). A roughly 8+ year-old machine will run this configuration just fine...!

    • @SixplusonemediaAu
      @SixplusonemediaAu 1 month ago +1

      Yep, can confirm that it runs really well on my older AMD Ryzen 3600 with 32GB RAM and a 2070 GPU.

  • @jpalmz1978
    @jpalmz1978 1 month ago +1

    Just installed this on an Intel i7-9750H 16GB laptop and it runs really well. Very impressed 🙂 - thanks for the vid.

    • @S1mpleHuman
      @S1mpleHuman 4 days ago

      HI, curious as to how you solved the docker run issue? thanks

  • @tomaselke3670
    @tomaselke3670 2 months ago +6

    Thanks!
    This is the tutorial I didn't know I needed, until now.

  • @TheRex42
    @TheRex42 2 months ago +12

    I've been waiting for a straightforward tutorial like this to share after setting one up myself! Mistral has an amazing 12B small model that most GPUs can run.

  • @angrd020
    @angrd020 2 months ago +17

    Even though I've been running local inference and RAG for about a year now, I still stopped everything to listen to Dave's explanation... Because Dave. 🕺🤖

    • @DWSP101
      @DWSP101 2 months ago +1

      What I want to know is whether I can get an AI on my Steam Deck to use as a personal assistant for creating content, strictly based on the data and information I create. I'd put all my information into files - of course it's not going to be a single file, there are multiple layers of stuff - and just give a custom GPT or Llama 3 all that information, so that it's a personal assistant for one topic alone, but an expert in that topic.

  • @stevesomers7366
    @stevesomers7366 1 month ago +2

    Very impressive, Dave. I appreciate the work you do. Thanks!

  • @harryheinisch3446
    @harryheinisch3446 2 months ago +4

    Love the shirt. I was there slightly after the PDP days but got to see the old Alpha, starting with EV54. You made this too easy to install on a laptop :)

  • @Nimitz_oceo
    @Nimitz_oceo 2 months ago

    Hi Dave, I just want to thank you; I appreciate that we have you. You read my mind - this setup is exactly what I have been looking for.

  • @rudiklein
    @rudiklein 1 month ago +3

    Love the DIGITAL t-shirt 🤓. I started working at DEC in 1980.

  • @micahvanella2938
    @micahvanella2938 2 months ago +2

    Finally! I've been looking for a way to learn to make a GPT AI to consume rulebooks and modules for Old School Essentials so I can ask it questions and generate random encounters.

  • @MrKillerno1
    @MrKillerno1 2 months ago +6

    In the early 80s I had a program called 'Whatsit?' - maybe you know it. It was an early learning piece of software running on CP/M that did the same: it learned from the things you put into it. AI is based on these efforts. Later I tried to make a similar program in AmigaDOS with a friend and it was fun. Just text based.

    • @TheBodgybrothers
      @TheBodgybrothers 1 month ago +1

      There is no way you made anything like LLMs on CP/M. Imagine thinking you invented something on a computer that barely had enough RAM for a primitive OS, in a field that has only been in research for the last 10 years.

    • @stultuses
      @stultuses 1 month ago +1

      That program's ability to learn was really a simplistic classifier, in that it had limited scope and could not really learn.
      AI is based on a lot of things that have gone before us, Whatsit included, although it was more that Whatsit was based on other fundamentals of its time.

    • @stultuses
      @stultuses 1 month ago +1

      @@TheBodgybrothers
      LLMs have been around as a concept well beyond 10 years!
      There have been LLMs that ran in batch mode that were large, but because of the limits of systems and RAM they were so slow as to be unusable - but they existed.
      In regards to MrKillerno1, of course he didn't run an LLM on a CP/M machine; I don't think that was his point. He was referring to a system that engages in a conversation and tracks that conversation across multiple interactions, and in this regard he is correct: role-play games and learning systems have been using this for decades now.
      In terms of RAM, there are older OSes that can easily run programs beyond the constraints of their system memory; OpenVMS, for example, has both swapping and paging mechanisms. OSes now focus on speed, so they demand more memory rather than using concepts like swapping to run extremely large programs, but systems of old used to run large applications in very limited memory. I worked on one that had 16K of memory and ran the whole accounting ledger for a municipality of over 1 million people.

    • @MrKillerno1
      @MrKillerno1 1 month ago +1

      @@stultuses And still I had a lot of fun with it. These days when I talk to Alexa, Siri or Google, they sometimes tend to do what they want. I appreciate being able to communicate with them by voice; this in itself, I think, is a masterpiece of programming.

    • @MrKillerno1
      @MrKillerno1 1 month ago

      @@TheBodgybrothers At the time I was working for a company that made medical database software, to store all their data about medicine and patients in. They had to be big machines - very pricey at the time, with a large storage device on them: hard drives. It was also the time 16-bit computers were coming and Microsoft took hold of many branches. Luckily this software evolved into the current database system it is these days. It all started with one man and his machine, selling his product and hardware to numerous institutions and health practitioners (doctors' offices). As long as you had the thousands of dollars, you could buy it.

  • @Phileosophos
    @Phileosophos 2 months ago

    Thanks for this breakdown! Many of my colleagues and I use AI for various work, but there's always that security concern, to say nothing of cost. I'm going to set one of these up on the extra hardware I have around the house. If it works out, we might be spinning up our own for the company to use. This video was very helpful!

  • @guiduz3469
    @guiduz3469 2 months ago +3

    I bet you're gonna hit 1M subs with this one! Deal for the next video on how to train your local AI? Also, how much power does that beast of a workstation pull? Can't wait to see how long it'll take to get a response on a human desktop...

  • @Condinginsight
    @Condinginsight 1 month ago +2

    Thanks Dave, your presentation helped me a lot to understand it.❤

  • @terpcj
    @terpcj 2 months ago +3

    I've been using Ollama on my PC for a while now (I opted for a Windows install with AnythingLLM as my front end... easy and no Linux needed). It's pretty good overall. Not bleeding edge. Maybe not even cutting edge. Regardless, it does fine if you pick the right model(s) for your needs. It definitely wants to stretch its legs, though. More disk space (for larger versions of models) and more CUDA cores (speed, baby) are definitely more better.

    • @chrisbegg290
      @chrisbegg290 1 month ago

      Care to explain your process?

    • @terpcj
      @terpcj 1 month ago

      @@chrisbegg290 Nothing complicated. Download and install Ollama and pull at least one of the models. Download AnythingLLM and install it in the usual way. When you start it, go to Settings, then the LLM option, and select an installed model to use. Go back to the main screen and chat away. There are of course more options to fiddle with if you want, but that gets you going. There are also some vids here on YouTube if you want more depth.

  • @Sage2291
    @Sage2291 2 months ago +1

    Love the shirt, Dave; many fond memories working on PDP-11s, then the VAX 11/780, back in the day.

    • @frasermacdonald6614
      @frasermacdonald6614 1 month ago

      Came for the tutorial, stayed for the DEC comments. Still one of my favourite work experiences of my career - it was a special place.

  • @DataIsBeautifulOfficial
    @DataIsBeautifulOfficial 2 months ago +176

    So, we’re just casually summoning AIs at home now?

    • @KimForsberg
      @KimForsberg 2 months ago +22

      Been doing that for a while. Not too hard. The largest problem is having a good enough model that runs on a reasonably priced home computer.

    • @TheRex42
      @TheRex42 2 months ago +3

      lol I love this in this context

    • @phils744
      @phils744 2 months ago +3

      That's amusing - "casually summoning AI models" - like requesting your own personal butler to remove the plates from the table once you are done eating. Or like using Uber: "Alexa", where is my ride? I called for it 30 minutes ago 😊

    • @GungaLaGunga
      @GungaLaGunga 2 months ago +7

      Daemon, not a demon.
      "It's the work of the devil" - Mama Boucher
      No it's not. It's just zeros and ones. On's and off's.
      Same voltages and vibrations, vibes and grooves as the rest of the universe. The oneness is us. Oh eye see.

    • @GrayeWilliams
      @GrayeWilliams 2 months ago +12

      Summon sounds too archaic. Wait, not archaic enough.
      We invoke them.

  • @TheYashakami
    @TheYashakami 2 months ago

    Thank you for the straightforward, no nonsense, walkthrough.

  • @skunked42
    @skunked42 2 months ago +4

    Dave, getting close to 1mil subscribers!

  • @Machiavelli2pc
    @Machiavelli2pc 1 month ago +2

    Thank you. Decentralized, uncensored, 100% private AI/AGI really is important.
    'The path to hell is paved with good intentions' is a quote that comes to mind when I hear of governments, corporations, etc. trying to limit the freedoms of individuals. AI is too important not to let individuals have absolute freedom over their own AIs/AGIs.

  • @JasonKingKong
    @JasonKingKong 2 months ago +40

    Sweet PDP11 shirt.

    • @Dirtyharry70585
      @Dirtyharry70585 2 months ago +2

      Just thinking about the comparable size of a DEC PDP-11 versus that Threadripper unit.

  • @Zyphera
    @Zyphera 1 month ago +2

    Oh this is interesting! Please more of this, Dave!

  • @TheMusicPoint
    @TheMusicPoint 2 months ago +3

    This is incredible, exactly what I have been looking for! ❤

  • @RussFryman
    @RussFryman 2 months ago +2

    Thanks for this. I've been running LM Studio on my windows box and experimenting with a few different models. Was looking for inspiration to build a docker based AI server, and this hit the spot.

  • @andrewperkins2083
    @andrewperkins2083 2 months ago +11

    Dave, would you be willing to do a follow-up video outlining a few lower levels of hardware? You don't even have to run the model on them (although that would be awesome), but describe some machines, say, in the $1k, $5k, $10k, and $25k range?

    • @benjaminlynch9958
      @benjaminlynch9958 2 months ago +6

      I've been running this exact setup in Linux (Pop!_OS) for a couple of weeks now, and it works fine on any modern (e.g. less than 6 years old) hardware. Initially I tried it with just my CPU (Ryzen 5800X), and it was fine, albeit a little slow. But definitely usable. After I enabled GPU acceleration on my Nvidia 2070 Super, the responses came back stupid fast. Like 10x faster than I could read them.
      The only thing I would note is that either the CPU or the GPU (whichever one is enabled) is going to be pinned at 100% utilization while responses are being generated. The practical effect is going to be non-trivial power draw, and for laptops much shorter battery life unless the unit is plugged into the wall. But don't let that put you off. Even modest hardware (a $1,000 PC brand new 5 years ago) is more than sufficient. Just be aware of your battery level if you're going to do this on a laptop.

    • @kuromiLayfe
      @kuromiLayfe 2 months ago +2

      Ollama recommends a 10th-gen i5 CPU or AMD equivalent and an Nvidia 20xx GPU with 8GB or more VRAM.
      Make sure your Windows drive or host drive has enough disk space, as the models can easily rack up 100-400 GB out of nowhere.

  • @DarinM1967
    @DarinM1967 1 month ago

    Nice. I've done something similar using LM Studio. While my system isn't even close to the power of what you're using, Dave, I can get quick responses from my uncensored model. I really enjoy your videos and look forward to seeing what you come up with next.

  • @Konrad-z9w
    @Konrad-z9w 2 months ago +14

    Sitting on an airplane with a laptop or in your shop next to a threadripper is exactly the same noise level.

  • @dbreardon
    @dbreardon 1 month ago +1

    This entire process needs to be automated with a one-click, no-typing install procedure. I just don't understand why the setup is so complicated, with multiple dependencies... Linux, the web UI, then Docker, and finally the LLM.
    There needs to be an automated script or batch file one can download to make this a one- or two-click procedure. 99% of the public will never run a local LLM if installation is this cumbersome. Heck, 99% of the general computing public has never seen a DOS prompt, let alone used PowerShell.
    Don't get me wrong - I appreciate the straightforward steps you provided (for some reason I installed Llama, but without Linux, a month or two ago; too slow for my $500 4-year-old Ryzen laptop). But the general computing public will not jump through all these hoops.

  • @seanreynolds1266
    @seanreynolds1266 2 months ago +26

    I'd like to add that Ollama plays very nicely with the Continue VS Code extension, which means... a private, local GitHub Copilot too!
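    Tools like Continue can talk to Ollama because it exposes an OpenAI-compatible HTTP endpoint. A stdlib-only sketch of calling that endpoint; the port and model name are assumptions about a default local install:

```python
import json
import urllib.request

def build_chat_request(prompt, model="llama3.1", base="http://localhost:11434"):
    """Build the POST request for Ollama's OpenAI-compatible chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return urllib.request.Request(
        base + "/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def chat(prompt, **kw):
    """Send one chat turn and return the assistant's reply text."""
    with urllib.request.urlopen(build_chat_request(prompt, **kw)) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Usage would just be `chat("Why is the sky blue?")` with the Ollama server running locally.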

  • @andrewscott1253
    @andrewscott1253 1 month ago +1

    We just happened to have a 12-GPU, open-frame, bitcoin-mining-style computer purchased for a colleague who barely used it and then left our organization. OK, so it's not the latest and greatest, but Ollama is perfect for it. I'm experimenting now. Cheers Dave. (Also on the spectrum.)

    • @xgtwb6473
      @xgtwb6473 1 month ago

      (also on the spectrum) that's our what's up my N-

  • @c0d3warrior
    @c0d3warrior 1 month ago +4

    And just like that, I've got an AI running locally on my machine. Feels kind of weird, tbh. Awesome video guide, thanks a lot!

  • @pcase74
    @pcase74 1 month ago

    If anyone is getting errors running sudo snap install docker, follow along:
    You need the systemd flag set in your WSL distro settings, so edit the wsl.conf file to ensure systemd starts up on boot.
    Add these lines to /etc/wsl.conf (note you will need to run your editor with sudo privileges, e.g. sudo nano /etc/wsl.conf):
    [boot]
    systemd=true
    Then close out of the nano editor using CTRL+O to save and CTRL+X to exit.
    Final steps:
    With the above done, close your WSL distro windows and run wsl.exe --shutdown from PowerShell to restart your WSL instances. Upon relaunch you should have systemd running. You can check this with the command systemctl list-unit-files --type=service, which should show your services' status.

    • @RalfKuns
      @RalfKuns 1 month ago

      @pcase74 Great advice! Actually, I have exactly this issue. Can you tell us a bit more, like what the entry should look like?

  • @BigDawgCleveland
    @BigDawgCleveland 2 months ago +5

    Thanks Dave! I'm 68 years old and have experienced everything from DOS to the current OSes, and you always amaze me with your OS knowledge.

  • @Chrispyy__
    @Chrispyy__ 27 days ago

    Jesus, so glad I found this channel. Keep up the good work!

  • @kenniejp23
    @kenniejp23 2 months ago +11

    Installing Docker via snap caused me issues with not being able to access my NVIDIA graphics card to run the AI.
    I believe this is because I'm running an "unsupported" Linux version.
    Installing Docker via apt fixed this.

    • @MythicAudioBooks
      @MythicAudioBooks 1 month ago

      I'm using Ubuntu but still have the same issue.

    • @kenniejp23
      @kenniejp23 1 month ago

      @MythicAudioBooks Did you remove Docker and reinstall with apt?

    • @MythicAudioBooks
      @MythicAudioBooks 1 month ago

      @kenniejp23 No, I haven't yet, but I was getting an error about the "NVIDIA container toolkit" and "libnvidia-ml.so.1", so I installed the CUDA toolkit.
      I looked into apt, but it seemed fairly complex to set up on WSL, especially to use localhost and so on.

  • @tehcarnage
    @tehcarnage 1 month ago +1

    Awesome video and tutorial, thank you.
    Is there a way to put this local AI onto a mobile phone, even if it's just remotely tapping into the PC from the phone?
    I find the ChatGPT app very convenient on mobile, but it would be even more awesome to have an unlocked GPT on my phone instead!

  • @leonard8766
    @leonard8766 2 months ago +11

    £50,000 PC!??? This video deserves a million more views.

    • @roncaruso931
      @roncaruso931 2 months ago +3

      Dave is a multi-millionaire. He can afford anything. He lives in a different universe.

    • @FlintStone-c3s
      @FlintStone-c3s 2 months ago +3

      For those who can only afford a Raspberry Pi 5 8GB, Ollama runs on it. The big models are a bit slow; the smaller ones are usable.

    • @blshouse
      @blshouse 2 months ago +2

      @roncaruso931 Even better. Dell noticed he does videos about computing and has a good-sized following, so they sent him a $50,000 computer (on loan) in exchange for featuring it in one or more videos.

    • @roncaruso931
      @roncaruso931 2 months ago +1

      @blshouse Yes, I did know that, but he is also a retired MS software engineer. The man is worth millions of dollars. He could easily afford a $50,000 PC; $50,000 is like 5 cents to him.

  • @bjrnhjjakobsen2174
    @bjrnhjjakobsen2174 1 month ago

    Love the "autodidactism" 👍🏻, very underrated.

  • @joeysartain6056
    @joeysartain6056 2 months ago +3

    Love the "digital" t-shirt

  • @jp7357
    @jp7357 1 month ago +1

    Love the PDP-11 shirt. I spent many hours writing an RSTS subsystem and a BASIC-PLUS-2-to-C translator for "Unix". I'm saving this episode and will install a local AI. Thanks.

    • @staffanrenhorn9401
      @staffanrenhorn9401 1 month ago

      Great shirt, and may I add XXDP to the list of OSes ;-)

  • @erickdanielsson6710
    @erickdanielsson6710 2 months ago +4

    Thanks Dave. I have a couple of RHEL systems at work; I shall try this on them.

  • @88spaces
    @88spaces 1 month ago +1

    Wow, Dave! I had no idea. I don't have the beast machine you have but I'm going to put this to use. Thanks.

  • @solarisone1082
    @solarisone1082 2 months ago +15

    Your computer’s specs make me want to cry.

    • @robbybobbyhobbies
      @robbybobbyhobbies 2 months ago +2

      Just wait a couple of years and it’ll be commonplace. Of course you’ll still be upset by his future setup, but your computer will be this powerful.

    • @alok.01
      @alok.01 1 month ago

      @robbybobbyhobbies Makes me want to wait a few years so AI-capable tech can mature and become cheap.

    • @SweNay
      @SweNay 1 month ago +1

      @alok.01 I saw some news somewhere that the cost (or depreciation) of AI development dropped by something like 97%, which made it the fastest-dropping market in history, so we're getting there 😅

  • @MarkoVukovic0
    @MarkoVukovic0 2 months ago

    Another excellent presentation, thank you Dave! I'll definitely be tinkering with this. Nice shirt btw!

  • @eskwadrat
    @eskwadrat 2 months ago +3

    I have an i9-KS with a 4090 collecting dust on my desk. Now it has finally found its purpose. Thanks, Dave.

  • @mikewatts7122
    @mikewatts7122 2 months ago

    Wow, that's just plain amazing. I thought local AI was hard work, but now I know it can be done.

  • @samshort365
    @samshort365 2 months ago +5

    Future me, installing this on my $100 quantum-computer set-top box and thinking how quaint Dave looked installing it on a $50k server. Now, if only I had a time machine. Thanks Dave, really cool!

  • @crgotit
    @crgotit 1 month ago

    This made it all simple to follow; now the hard part is to actually do it... thank you.

  • @PATRIK67KALLBACK
    @PATRIK67KALLBACK 2 months ago +10

    I loved your reference to HAL 9000
    "Hello, Dave. How can I help you today?"

  • @TechDunk
    @TechDunk 1 month ago +1

    I personally really like using LM Studio. It does everything, including downloading and loading models, in a simple UI.

  • @Bp1033
    @Bp1033 2 months ago +3

    I've been running LLMs locally for a while. The most useful thing I made one do was create weather reports from NWS weather data.
    I have two RTX 4060 Ti cards (16GB each) and my old RTX 2060 (6GB) in my server. It's really just a gaming desktop moonlighting as a server, but it can run a 70B model decently.
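    That kind of pipeline mostly boils down to flattening the structured forecast into a prompt for the model. A rough sketch; the field names here are invented for illustration, not the actual NWS API schema:

```python
def weather_prompt(periods):
    """Flatten forecast periods into a plain-text prompt asking the
    model to write a short, conversational weather report."""
    lines = [
        f"{p['name']}: {p['temperature']}°{p['unit']}, {p['forecast']}"
        for p in periods
    ]
    return (
        "Write a brief, conversational weather report from this data:\n"
        + "\n".join(lines)
    )

# Hypothetical sample data, stand-in for a parsed forecast response:
sample = [
    {"name": "Tonight", "temperature": 41, "unit": "F", "forecast": "Partly cloudy"},
    {"name": "Tuesday", "temperature": 55, "unit": "F", "forecast": "Sunny"},
]
```

The resulting string would then be sent to the local model as an ordinary prompt.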

    • @organicdinosaur5259
      @organicdinosaur5259 1 month ago +2

      How do you like the complexity of the answers you get? I'm really annoyed that OpenAI is forcing people to use their hardware, especially since most people use ChatGPT for personal use. Is it worth it to set something up locally?

    • @Bp1033
      @Bp1033 1 month ago

      @organicdinosaur5259 It's pretty good, honestly. I'm running a Q4 model (Llama-3.1-70B), but despite that it's rather accurate. For me personally, I'd say it's worth it, mostly because I can just download a random model and throw it on the server, so you're not stuck with ChatGPT. Qwen is a really good general model and comes in a bunch of sizes.
      You can also run most 8B-and-smaller models on the CPU at a pretty brisk speed, but it's better if you have a GPU with at least 8GB of VRAM. Ollama is super easy to set up on both Windows and Linux, so it's absolutely worth at least giving it a shot.
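      The VRAM guidance above follows from a simple back-of-envelope rule: quantized weights take roughly parameters × bits-per-weight / 8 bytes, plus some runtime overhead for the KV cache and buffers. A sketch; the 20% overhead figure is a guess, not a measurement:

```python
def est_model_gb(params_billion, quant_bits=4, overhead=0.2):
    """Back-of-envelope memory footprint of a quantized model, in GB."""
    weights_gb = params_billion * 1e9 * quant_bits / 8 / 1e9
    return weights_gb * (1 + overhead)

# An 8B model at Q4 comes out around 4.8 GB, which is why ~8GB of VRAM
# is comfortable; a 70B model at Q4 lands around 42 GB, which needs
# multiple GPUs or CPU offload.
```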

  • @DerekPeldo
    @DerekPeldo 2 months ago

    I just set this up on my homelab a week ago. I was hoping you had a good Android app to go with this setup. Great video!

  • @tonylewis4661
    @tonylewis4661 2 months ago +27

    How can we be sure that Dave made this video, and not his AI model?

    • @DavesGarage
      @DavesGarage  2 months ago +33

      That's what a bot would say!

    • @adjoho1
      @adjoho1 2 months ago +4

      @DavesGarage Sounds like something a synth would say.

  • @david3199
    @david3199 2 months ago

    Thank you for the guide, Dave! This will help me tremendously with debugging!

  • @bigpickles
    @bigpickles 2 months ago +6

    Open WebUI is the bee's knees. I've had every API and local model running through it for the past month, and I just love it. I don't use that Docker rubbish though; it's a much easier install natively on Linux.

  • @vbisbest
    @vbisbest 2 months ago +2

    Great tutorial. Would like to see one on how to create a custom model or add training to an existing model.

  • @UnwalledGarden
    @UnwalledGarden 2 months ago +25

    Don't be spooked by the cost. You can get a perfectly serviceable hardware setup for less than 10% of Dave's killer rig.

    • @jml_53
      @jml_53 2 months ago +2

      Any recommendations?

    • @ChristophBerg-vi5yr
      @ChristophBerg-vi5yr 2 months ago +8

      I wish I could download Dave's rig :)

    • @UnwalledGarden
      @UnwalledGarden 2 months ago +8

      @jml_53 I have an AMD motherboard, a Ryzen 7, and two used NVIDIA 3080s I bought from a retired crypto miner. Depending on the model size, you could run it well on a single GPU with sufficient (20GB) RAM. I'm running standard Debian on bare metal. The whole rig was around $2k.

    • @mikejones-vd3fg
      @mikejones-vd3fg 2 months ago +2

      @UnwalledGarden Could integrated graphics work? They can address main memory. My Iris Xe addressed 30GB once running Call of Duty, though I suspect there was a memory leak. Anyway, I've always wondered if that could be a cheaper way around the high GPU memory requirement for AI: using an integrated GPU that can tap into more memory.

    • @vannoo67
      @vannoo67 2 months ago +2

      @mikejones-vd3fg Recent NVIDIA drivers on Windows allow some system RAM to be shared with the GPU. For me this is an additional 16GB out of 32GB system RAM, on top of my RTX 4070 Ti Super's 16GB VRAM. Sadly this is not available on Linux yet. It's quite a bit slower than VRAM, but it makes some use cases possible that weren't before.
      BTW: everything Dave described in this video can be achieved in native Windows (with access to the shared memory).

  • @qdsmith
    @qdsmith 2 months ago

    Thank you, good sir. Playing with on-premises AI has been on my list of things to do for some time. This is exactly the motivation I needed. Up and running, very cool.

  • @ahmetrefikeryilmaz4432
    @ahmetrefikeryilmaz4432 2 months ago +4

    I did everything, though I had to install Docker for Windows and use its WSL integration. I can display the web GUI, but there is no model available there, whilst it works in the CLI.

  • @baddragonite
    @baddragonite 1 month ago

    To be honest, I'm almost more impressed with the hardware showcased in the video than the actual video topic, haha.

  • @SoloGuitar1000
    @SoloGuitar1000 2 months ago +10

    When I run the docker command, after downloading, it gives me the following error:
    docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error running hook #0: error running hook: exit status 1, stdout: , stderr: Auto-detected mode as 'legacy' nvidia-container-cli: initialization error: load library failed: libnvidia-ml.so.1: cannot open shared object file: no such file or directory: unknown.

    • @ephemerallyfe
      @ephemerallyfe 2 months ago +1

      Same here. Would love to learn how to fix this.

    • @lyndenp
      @lyndenp 2 months ago +2

      Same, frustrating. Any ideas, anyone?

    • @jqoutlaw
      @jqoutlaw 2 months ago +1

      Remove the --gpus=all parameter from the docker command. I have this running on a VM in Proxmox using just the CPU, and it fixed my issue.

    • @SoloGuitar1000
      @SoloGuitar1000 2 months ago

      @jqoutlaw Thanks, that worked.
      In my casual sleuthing of the problem, it looked as if the NVIDIA GPU had something to do with it. I found that I have an AMD Radeon 780M GPU, so I looked into how to run it with that, but none of the solutions I found worked.
      So I guess I'll just run it on the CPU instead.

    • @jsflood
      @jsflood 2 months ago +1

      My guess is that it's complaining about not finding the NVIDIA CUDA toolkit (which only works if you have an NVIDIA GPU, as @jqoutlaw mentions). Also do an update/upgrade: sudo apt update; sudo apt upgrade

  • @RC-SATX
    @RC-SATX 1 month ago +2

    Cool shirt! I worked at Digital in the '90s, on Sepulveda in Los Angeles.

  • @Billwzw
    @Billwzw 2 months ago +3

    Thanks! Now pretty please download the most ridiculous model available (maybe Llama 3.1 405B?) and show us what killer hardware can do!

  • @katatekan
    @katatekan 1 month ago

    Hello Dave! It's so much fun listening to this.

  • @tsdbhg
    @tsdbhg 2 months ago +13

    Thank you. This helped me create my current girlfriend.

  • @stevenhawkins179
    @stevenhawkins179 2 months ago +2

    Great, informative video. I used the "cheap" and almost pre-made option of an Intel Arc A770 with their AI Playground program.