Free Local AI Server at Home: Step-by-Step Guide

  • Published: 19 Dec 2024

Comments • 23

  • @WojciechLepczynski
    @WojciechLepczynski  25 days ago +5

    🎞 If you're interested in this topic, I recommend watching my follow-up video on generating images with Stable Diffusion. It extends the AI server we created in this tutorial, but it can also be used on its own if you're mainly interested in generating images 🖼✨ ruclips.net/video/I6b-dPOmIKc/видео.htmlsi=QNhPAvY1GV-1vJnu

  • @RamonddeVrede
    @RamonddeVrede 29 days ago +6

    Great video! Thanks

    • @WojciechLepczynski
      @WojciechLepczynski  29 days ago +3

      Thanks for the feedback! Another part, about generating images from text, is coming soon.

    • @RamonddeVrede
      @RamonddeVrede 29 days ago +1

      @@WojciechLepczynski That was indeed a question I forgot to ask! You're reading my mind!

    • @WojciechLepczynski
      @WojciechLepczynski  29 days ago +2

      I have installed this several times in different ways, and I will try to record the easiest way and cover generating text to image and image to image. If you have any questions, please don't hesitate to ask, and thanks again for the feedback.

    • @RamonddeVrede
      @RamonddeVrede 27 days ago +1

      @@WojciechLepczynski I have an NVIDIA Quadro P2000 and would like to use this GPU if possible :)

    • @WojciechLepczynski
      @WojciechLepczynski  27 days ago +2

      @@RamonddeVrede Cool. By the way, I've seen people run AI servers without a graphics card as well, but the more powerful the hardware, the larger the models you can use and the better it works.
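      A quick way to see whether Ollama is actually using your GPU is to check the processor split of a loaded model. The lines below are only a rough sketch, assuming a standard Ollama install with NVIDIA drivers; the model tag is just an example:

        # load a small model once so it shows up as running (example tag)
        ollama run llama3.2:3b "hello"

        # list loaded models and whether they run on the CPU or the GPU
        ollama ps

        # NVIDIA's own tool: confirms the card is visible and shows VRAM usage
        nvidia-smi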

  • @JolantaLepczyńska
    @JolantaLepczyńska 1 month ago +3

    You inspire not only me but probably many other people as well!

  • @Cha_HCM-je9qe
    @Cha_HCM-je9qe 29 days ago +4

    Can you please tell me what your laptop/PC hardware specs are? Thanks.

    • @WojciechLepczynski
      @WojciechLepczynski  29 days ago +1

      Sure, no problem. In the video I showed an example installation on an old laptop with 16 GB of RAM, an i7-6820HQ (8 CPUs) at 2.7 GHz and an NVIDIA Quadro M2000M graphics card (4 GB display memory and 8 GB shared memory), running Windows 10.
      You can find the hardware requirements for Llama 3.2 here: llamaimodel.com/requirements-3-2/
      The specs of my old laptop are fine for small models; for larger ones it's better to use something more powerful. Note that the video was shortened during editing, so some parts are sped up.
      The purpose of the video was to show how to easily install and configure your own AI server at home. The speed depends on the hardware you have and the model you choose. I hope that helped. Soon I will record a video showing how to generate images using AI, converting text to image and image to image.
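      For reference, the basic command-line workflow from the video can be reproduced roughly like this. It is only a sketch assuming the standard Ollama CLI; the model tags are examples, so pick a size that fits your RAM and VRAM (download sizes are approximate):

        # small models fit comfortably on a 16 GB RAM / 4 GB VRAM machine
        ollama pull llama3.2:1b      # roughly 1.3 GB
        ollama pull llama3.2:3b      # roughly 2 GB

        # start an interactive chat with the smaller model
        ollama run llama3.2:3b

        # list everything installed locally
        ollama list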

  • @ventor11111
    @ventor11111 27 days ago +1

    Is 3.2 Vision good at recognizing text from documents? Have you tried it, maybe?

    • @WojciechLepczynski
      @WojciechLepczynski  27 days ago +2

      I have only tested it on a few scans from the CLI and I have to admit it was fine, but I will be running more advanced tests soon and then we'll see. I plan to add some automation and text translation on top of it. When I'm done, I'll share the results on the blog or in a video.
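      If you want to try the same kind of quick check yourself, a minimal CLI sketch could look like the lines below. The model tag and file name are just examples, the 11B vision model needs noticeably more memory than the small text models, and the exact way images are passed may differ between Ollama versions:

        # pull the multimodal model (large download)
        ollama pull llama3.2-vision

        # include a local image path in the prompt; Ollama should pick it up
        ollama run llama3.2-vision "Extract all text from this scanned document: ./scan1.png"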

  • @miqski
    @miqski 1 month ago +1

    Is this model also censored?

    • @WojciechLepczynski
      @WojciechLepczynski  1 month ago +1

      In the video I show several models, so it depends on which one you choose. I haven't checked it myself, but I've seen people on forums complaining that the models are censored. There are ways around that, but for the models I make available to my kids I add even more restrictions 😅
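      One simple way to layer extra restrictions on top of a model in Ollama is a custom Modelfile with a SYSTEM prompt. The sketch below is only an illustration; the model name, file name and wording of the prompt are made up for the example:

        # --- Modelfile (save these two lines as a plain text file named "Modelfile") ---
        FROM llama3.2:3b
        SYSTEM "You are an assistant for children. Politely refuse anything violent, adult or otherwise unsafe."

        # --- then, in the terminal, build and run the restricted model ---
        ollama create kids-llama -f .\Modelfile
        ollama run kids-llama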

  • @joeking5211
    @joeking5211 29 days ago +1

    Errr, at 6:57, 'Create Account'? Why, when you are on your private server? Or are you???

    • @WojciechLepczynski
      @WojciechLepczynski  29 days ago +1

      I didn't notice any error being displayed in the video at the moment you wrote about.
      I logged into the local server at localhost:3000.
      You need to create an account there; the first account you create will be the admin account. If you get an error, check whether you performed the previous steps correctly.
      Depending on which error you get, it can mean many different things.

    • @joeking5211
      @joeking5211 29 days ago +1

      @@WojciechLepczynski Yep, but WHY an account if it's local?????

    • @WojciechLepczynski
      @WojciechLepczynski  29 days ago +3

      @@joeking5211 It is local because you connect to your local server at localhost:3000, the one you just created, and not to a server somewhere in the cloud.
      When you create an account, you will see in the admin panel that there is only the one account you created; of course you can create more accounts on your local server.
      If you want to be 100% sure that it is local, disconnect your computer from the internet, connect to localhost:3000 and then create an account; you will see that everything works even without network access, but you will then only have access from the computer on which you installed Open WebUI.
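      For reference, a typical way to start Open WebUI in Docker and keep it reachable only from your own machine is sketched below. The image tag and flags may differ slightly from the video, and binding to 127.0.0.1 is an optional extra restriction, not something the video requires:

        # run Open WebUI, publishing port 3000 only on the local loopback address
        docker run -d -p 127.0.0.1:3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

        # then open http://localhost:3000 in a browser; the first account created becomes the admin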

  • @MahatmaLevolence
    @MahatmaLevolence 20 days ago +1

    I've been trying to install this for three days now. I downloaded and installed Ollama and got PowerShell running (now it doesn't work anymore) but couldn't read the minuscule text in the terminal, so I looked for solutions and found this video. I downloaded Docker and it won't open, even as admin. Therefore this video isn't much use really. I'm not at all techie, but it's not like I wanted to create the Starship Enterprise or anything flash, I just wanted to be able to run a private desktop AI on my computer. By tomorrow I'll be running around the house shouting CUCKOO CUCKOO! Fkn machines!

    • @WojciechLepczynski
      @WojciechLepczynski  20 days ago +1

      Hi, if you have a problem with Docker, maybe you don't have virtualization enabled? That's my guess; if so, you can look, for example, here: forums.docker.com/t/hardware-assisted-virtualization-and-data-execution-protection-must-be-enabled-in-the-bios/109073
      By the way, if you have Ollama installed correctly, it should work from the command line. Only once that works should you start working on Open WebUI.
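      A couple of quick checks can save time before fighting with Docker. These are only suggestions, and the exact output wording varies between Windows versions:

        # in PowerShell: check whether virtualization is enabled in the firmware
        systeminfo | findstr /i "virtualization"

        # Docker Desktop on Windows usually needs the WSL 2 backend as well
        wsl --status

        # verify the Ollama CLI itself works before touching Docker / Open WebUI (example model tag)
        ollama --version
        ollama run llama3.2:3b "say hello"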