Run A.I. Locally On Your Computer With Ollama

  • Published: 4 Jan 2025
  • Science

Comments • 113

  • @jacksonreventan
    @jacksonreventan 3 months ago +44

    To those who want the one-click approach: there's a flatpak that provides a GUI with a chat interface and support for downloading, changing, or removing models. Name: Alpaca

    • @RafaCoringaProducoes
      @RafaCoringaProducoes 3 months ago +2

      I wasn't able to run it the flatpak way, but nice to know others can. Might be some Devuan shenanigans, since DT showed systemd in the video... Any OpenRC experts: flatpak shouldn't be affected, right?

    • @Skelterbane69
      @Skelterbane69 3 months ago +3

      @@RafaCoringaProducoes Dinit user here and yeah flatpak has never ever had problems with me not using systemd.

    • @robotron1236
      @robotron1236 3 months ago

      Oooh, really? I’ve been using the CLI for months now. I like it, but a GUI would be nice. I was trying to use AnythingLLM and LM_Studio, but one of them wouldn’t work on my system for some reason.

    • @TecnocraciaLTDA
      @TecnocraciaLTDA 3 months ago

      I want flatpaks and snaps to burn in fire

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago +1

      @@RafaCoringaProducoes I'm an openrc expert - it's why I don't use flatpak. You need to speak to the systemd-ers - they want everything "different for the sake of being different" while endlessly whining about stuff still being too difficult for them.

  • @gnulinuxunilung
    @gnulinuxunilung 3 months ago +26

    Hey DT, ollama is in Arch Linux's official repo, BTW!

  • @xU9aC6jQ5kR2zU0x
    @xU9aC6jQ5kR2zU0x 3 months ago +4

    I have been running Ollama and Open WebUI on a dedicated server/machine in my house. This way I can use it with any device that is connected to my network.
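The setup described here works because Ollama exposes a small HTTP API (default port 11434); starting the server with `OLLAMA_HOST=0.0.0.0` makes it listen beyond localhost. A minimal Python sketch of querying it from another machine on the LAN; the host address `192.168.1.50` and the model name are placeholder assumptions:

```python
import json
import urllib.request

OLLAMA_PORT = 11434  # Ollama's default API port


def build_generate_request(host: str, model: str, prompt: str):
    """Build the URL and JSON payload for Ollama's /api/generate endpoint."""
    url = f"http://{host}:{OLLAMA_PORT}/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload


def ask(host: str, model: str, prompt: str) -> str:
    """POST the prompt to a (possibly remote) Ollama server and return its answer."""
    url, payload = build_generate_request(host, model, prompt)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Assumes an Ollama server at 192.168.1.50 with llama3.1 already pulled.
    print(ask("192.168.1.50", "llama3.1", "Why is the sky blue?"))
```

Any device on the network can reuse the same endpoint, which is how frontends like Open WebUI talk to a remote Ollama instance.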

  • @sridhartn83
    @sridhartn83 2 months ago +1

    I am able to run a small Llama 3.2 1B model on a Raspberry Pi 5, with Docker and Open WebUI. Really awesome! Without a GPU it runs fine on the CPU, though it hits 100% CPU while answering a query; that's just a short burst while it's being used, but workable.

  • @tylerdean980
    @tylerdean980 3 months ago +2

    oatmeal-bin in the AUR gives you a nice TUI front end for ollama

  • @sridhartn83
    @sridhartn83 2 months ago

    For a web UI, Open WebUI can be used with ollama, and you can access it from any device inside your network.

  • @marqueluxuriante5571
    @marqueluxuriante5571 3 months ago +2

    Recently started using Ollama with Gemma 2 (9B) on my main PC (AMD Ryzen 5900H + 32 GB RAM). Really good so far; the responses are very solid.
    I've just started using it and haven't really dived into the complex use cases.
    So much better than paying for a subscription to an LLM that depends on the internet!

  • @friedrichdergroe9664
    @friedrichdergroe9664 3 months ago +2

    Massively Cool.

  • @derekr54
    @derekr54 3 months ago +7

    Aardvark comes before addax and agouti alphabetically, but ollama has got it in third place. Not very bright of it. Naughty llama.

  • @MeMyself-gf7fn
    @MeMyself-gf7fn 3 months ago +5

    Can it draw or interact with pictures and video, is it able to "see" things, or is it strictly text based?

    • @TecnocraciaLTDA
      @TecnocraciaLTDA 3 months ago

      I think this llama 3.1 is specifically for text

    • @simplytuts
      @simplytuts 3 months ago

      it depends on the model you run. llama3.1 is strictly text-based

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago +1

      @@simplytuts Oh dear, millennials don't like text. It's "too long" and they "don't (can't) read". They want moving pictures only. And someone else to make them for free.

    • @simplytuts
      @simplytuts 3 months ago

      @@terrydaktyllus1320 if everything was best done over text, why would we have moved on from the text-based computer interfaces of the '80s?

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago +3

      @@simplytuts Please don't put words into my mouth and then argue against them, because you then just confirm to me that you can't read text properly.
      What I said was "millennials are too lazy to read text", not that "everything is best done over text".
      So, please, try a bit harder to keep up. Many thanks.
      Do you want me to make a video reply, if that's easier?

  • @LinuxRenaissance
    @LinuxRenaissance 2 months ago

    3:12 nice speed of the answer!

  • @kiwibruzzy
    @kiwibruzzy 3 months ago +3

    this was simple to understand bro thanks!

  • @arghya_333
    @arghya_333 3 months ago

    Hollama is an ollama client if you prefer a GUI

  • @moetocafe
    @moetocafe 3 months ago +1

    Does this Ollama connect to the Internet if you want to extract info from the Web, or does it only work with its built-in presets of data and knowledge?

    • @DistroTube
      @DistroTube  3 months ago +4

      You don't need to be connected to the Internet to use ollama, but you do need to be connected to download the language models that you want to use.

    • @moetocafe
      @moetocafe 3 months ago +1

      @@DistroTube Just found there is WebGPT on GitHub, and it's FOSS, but it requires manual implementation to connect it with ChatGPT through their API (not free).
      Maybe in the near future we'll be able to do the same: feed pretrained models with real-time data for generating output. Interesting.

    • @MeMyself-gf7fn
      @MeMyself-gf7fn 3 months ago +1

      I understand that, I mean can it pull information from the web to complete tasks or answer questions or would that first have to be downloaded and manually added?

    • @RafaCoringaProducoes
      @RafaCoringaProducoes 3 months ago

      @@moetocafe check out shellgenie

    • @pueraeternus.
      @pueraeternus. 3 months ago +1

      idk, but there's a SillyTavern plugin for this

  • @Bicyclesidewalk
    @Bicyclesidewalk 3 months ago

    Ollama is great~!

  • @MeMyself-gf7fn
    @MeMyself-gf7fn 3 months ago +1

    Can it interact and learn from the web on its own or does it only work from within the files you download for it? Can it grow organically, adding and organizing information on its own? Can it remember and infer things from its previous interactions indefinitely?

    • @Arthur-jg4ji
      @Arthur-jg4ji 3 months ago +1

      Nah bro, that thing is not AGI ahaha, it's just an AI

    • @simplytuts
      @simplytuts 3 months ago

      depending on the frontend, you can give it the ability to do things like store data indefinitely

  • @Vhoover3609
    @Vhoover3609 3 months ago +2

    I wonder if you could get the smart ass comments of Ryan Reynolds

  • @diyurdupakistan
    @diyurdupakistan 1 month ago

    Which terminal are you using?

  • @robotron1236
    @robotron1236 3 months ago

    I'm building a budget monster PC with 2 old 14-core Xeons, a Titan Xp GPU for AI (Skyrim Herika mod), and maybe an AMD RX 6800 XT or something to actually run my games, and hopefully give it 128 GB (shooting for 256 GB) of RAM.
    P.S. I love the sound of your keyboard 😂

  • @GhostCoder83
    @GhostCoder83 3 months ago +7

    Hey DT, please make a video on how to integrate ollama with vim, neovim, emacs, etc.

    • @DistroTube
      @DistroTube  3 months ago +7

      Oh, we are definitely adding ollama to Emacs. Stay tuned! ;)

  • @TrustJesusToday
    @TrustJesusToday 3 months ago

    Fun stuff.

  • @zeocamo
    @zeocamo 3 months ago +3

    it is in the extra repo on Arch under the name ollama

  • @luigitech3169
    @luigitech3169 3 months ago +2

    Ollama is the best piece of local AI; I hope it will be better integrated into the Linux desktop

  • @denizkendirci
    @denizkendirci 3 months ago +1

    Keeping locally installed models updated is a pain for me. I use image generation models locally, but I'd rather keep using ChatGPT or others as online services; it makes more sense for my usage. Especially in the case of chat AI, I want them updated to the latest, because my questions are mostly about new tech, etc.

  • @spideyBoi1
    @spideyBoi1 3 months ago

    Make a video on that window manager

  • @send2gl
    @send2gl 3 months ago +2

    What is the advantage of having a local AI as opposed to an online service? Is it merely that a local AI does not need an internet connection?

    • @nicodelle99
      @nicodelle99 3 months ago +4

      it's also reassuring privacy-wise, I guess

    • @simplytuts
      @simplytuts 3 months ago +3

      @@nicodelle99 more than reassuring lol
      all the big AI companies use all of your queries to train their models

  • @benstechroom
    @benstechroom 3 months ago

    I have been playing with ollama for a while now. I don't have a powerful GPU, so I am running it on a pretty dedicated server (I used Docker in a VM; just one more VM on that one)... It's not fast, but it's fun to play with. I also have Stable Diffusion and AUTOMATIC1111 installed. I hope someone writes up how to get them to talk to each other soon...

  • @tiamem
    @tiamem 3 months ago +2

    The "alphabetical" list of mammals:
    1. Addax
    2. Agouti
    3. Aardvark

    • @TecnocraciaLTDA
      @TecnocraciaLTDA 3 months ago

      Yeah, lol. You can't trust generative language models; they don't have any commitment to the truth, and when they don't know, they make things up.
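The mis-sorted mammal list above is a reminder that alphabetization is a deterministic task: a one-line sort gets it right every time, which is exactly the kind of check worth running on an LLM's output:

```python
# The order the model gave in the video, versus a real lexicographic sort.
model_answer = ["Addax", "Agouti", "Aardvark"]
correct = sorted(model_answer)
print(correct)  # ['Aardvark', 'Addax', 'Agouti'] -- Aardvark belongs first
```

A plain `sorted()` call never hallucinates; when a deterministic tool exists for a subtask, it is worth preferring it over the model.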

  • @thechadbuddha
    @thechadbuddha 3 months ago +2

    oatmeal is great if you want llm in the terminal

  • @speakersr-lyefaudio6830
    @speakersr-lyefaudio6830 2 months ago

    I'm running this, but unfortunately it doesn't recognize my GPU, lol. It is indeed very slow on the CPU.

  • @MSThalamus-gj9oi
    @MSThalamus-gj9oi 3 months ago +2

    Can I get it to respond to me as though it's... oh I don't know... a *computer*? One of my chief complaints about all these LLM front ends is their insistence on aping human behavior. I absolutely despise that. Between them pretending to be engaged in an actual conversation and them pretending they're typing the response, they're really just wasting my time.
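With the raw Ollama API you can tone down the conversational persona yourself: `/api/generate` accepts a `"system"` field for a system prompt, and `"stream": false` returns the whole answer at once instead of simulating typing. A sketch of building such a request; the system-prompt wording is just an example, and how well a given model obeys it varies:

```python
import json


def build_terse_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate with a system prompt that
    suppresses conversational filler (the wording is only an example)."""
    return {
        "model": model,
        "prompt": prompt,
        "system": (
            "You are a command-line tool, not a conversation partner. "
            "Answer with bare facts only: no greetings, no follow-up "
            "questions, no small talk."
        ),
        # Return the complete answer in one response instead of token-by-token.
        "stream": False,
    }


payload = build_terse_request("llama3.1", "List the planets in order from the sun.")
print(json.dumps(payload, indent=2))
```

Most frontends (Open WebUI, Alpaca, etc.) expose the same system-prompt setting in their options, so the API is only needed if you want full control.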

  • @SIGSEGV200
    @SIGSEGV200 3 months ago

    I would suggest you take a look at Open WebUI, since you mentioned the frontend part of ollama

  • @MerkDolf
    @MerkDolf 3 months ago

    Thank you

  • @YrmiZ
    @YrmiZ 3 months ago +2

    Why do I always read "..Computer With Obama"

  • @Hyperboid
    @Hyperboid 3 months ago

    2:40 what the heck is mario

    • @TecnocraciaLTDA
      @TecnocraciaLTDA 3 months ago

      The one who ate you behind the wardrobe

    • @TecnocraciaLTDA
      @TecnocraciaLTDA 3 months ago

      🇧🇷🇧🇷🇧🇷 Brazilians will understand the joke 😜 Never leave a question like "who is Mario?" open around the web

  • @tda0626
    @tda0626 3 months ago +2

    Google has their hands all over this. It was one thing it admitted to, so just a heads up. Personally, I am not a fan of Google, and they are not known for being an advocate of privacy.
    "My training data is a collaborative effort between:
    Google's parent company Alphabet: My primary developer, which uses its infrastructure to store and process the large corpus of text."

    • @simplytuts
      @simplytuts 3 months ago +1

      llama3.1 is by meta, not google, and it runs offline

    • @tda0626
      @tda0626 3 months ago

      @@simplytuts I posted its output, or did you not read it?

    • @simplytuts
      @simplytuts 3 months ago

      @@tda0626 my comment got removed, but I said LLMs frequently hallucinate. You can read about Llama 3.1 on Meta's blog

  • @bianca.tamborcita
    @bianca.tamborcita 3 months ago +1

    Literally me

  • @ironlungx
    @ironlungx 3 months ago +1

    hello

  • @mundo524
    @mundo524 3 months ago +1

    Ironically, DT flipped from "I'll never use these AI products [...]" to "Add AI to everything"

    • @simplytuts
      @simplytuts 3 months ago +6

      talking to a local LLM is not the same thing as adding ai to everything

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      @@simplytuts "I only had one beer, officer, honestly!"

    • @simplytuts
      @simplytuts 3 months ago

      @@terrydaktyllus1320 there's a difference between running an LLM on your computer and making it your entire personality

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      @@simplytuts Once again, you seem to be not reading my comments properly (typical illiterate and uneducated millennial, presumably?) and either misquoting me or replying to something completely different.
      Once again, please do try harder to keep up.
      Many thanks.

  • @KomradeMikhail
    @KomradeMikhail 3 months ago +2

    No... Tell us how to remove A.I. instead.

  • @melbgrk6725
    @melbgrk6725 3 months ago +1

    Is there a logical reason why, or is it just a gimmick? ... Personally, no thanks...

    • @aintnochange
      @aintnochange 3 months ago +9

      If you don't get it after watching, just keep it that way

  • @roseredthorns
    @roseredthorns 3 months ago +2

    Oh sweet cheap and easy art theft

    • @SkyyySi
      @SkyyySi 3 months ago +4

      Ollama is for GPT models, not for image generation.
      Though I guess text is also an art form...

    • @jackt-z2m
      @jackt-z2m 3 months ago +4

      Cry

    • @xamp_exclammark
      @xamp_exclammark 3 months ago

      I don't even think this AI can generate images rn, lol

    • @UnknownEntity-69
      @UnknownEntity-69 1 month ago

      _"theft"_
      Cope and seethe

  • @terrydaktyllus1320
    @terrydaktyllus1320 3 months ago

    "Osloth" would be a better name for it - to indicate that it's for lazy millennials that went through a poor quality education system that never taught them how to read manuals, try stuff out for themselves and learn on their own without copying things parrot-fashion from a video somebody else made for them free-of-charge.
    On the other hand, me with a fully developed brain who loves tinkering to learn and can think critically as well as fault find on his own, will give this one a miss - Open Source or no Open Source.
    I am already completely sick and tired of hearing about AI.

    • @juipeltje
      @juipeltje 3 months ago +1

      I'm so proud of you. You want a cookie now?

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      @@juipeltje ...and there's you getting your jollies from deformed Japanese cartoon characters with the features of minors...
      Sorry, who's the one with the problem here?

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      @@juipeltje ...says they with the avatar of a deformed Japanese cartoon character with the features of a child.

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      @@juipeltje Oh dear, one of those "anime types" that likes the kiddie cartoons is more focused on my supposed issues that its own...

    • @terrydaktyllus1320
      @terrydaktyllus1320 3 months ago

      @@juipeltje ...says the anime fan with its own problems, clearly.

  • @michgingras
    @michgingras 3 months ago

    Run AI? Now why would someone do something stupid like this?

  • @Keizer_Soze
    @Keizer_Soze 3 months ago +1

    gpt4all flatpak -> 👍

  • @alizia2186
    @alizia2186 3 months ago +1

    Look at the token generation speed, while mine is potatoes 🥲