Create a LOCAL Python AI Chatbot In Minutes Using Ollama

  • Published: Nov 24, 2024

Comments • 164

  • @patrickmateus-iq8bi
    @patrickmateus-iq8bi 2 months ago +17

    THE 🐐 I became the Python developer I am today because of this channel. From learning Python for my AS level exams in 2020,
    to an experienced backend developer. From the bottom of my heart, Thank You Tim. I'm watching this video because I have entered a Hackathon that requires something similar. This channel has never failed me.

  • @umeshlab987
    @umeshlab987 4 months ago +99

    Whenever I get an idea, this guy makes a video about it

  • @YangEdward-ex1pw
    @YangEdward-ex1pw 1 month ago +1

    Already used it to set up a Q&A system to answer customers' common questions. Thank you so much for sharing and for the demo.

  • @modoulaminceesay9211
    @modoulaminceesay9211 4 months ago +4

    Thanks for saving the day. I've been following your channel for four years now

  • @JordanCassady
    @JordanCassady 3 months ago

    The captions with keywords are like built-in notes, thanks for doing that

  • @joohuynbae5084
    @joohuynbae5084 1 month ago +3

    For some Windows users: if the commands don't work for you, try source name/Scripts/activate to activate the venv.
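
    A quick sketch of that workflow (assuming a venv named "name", as in the video; the activate script lives in bin/ on Linux/macOS and Scripts/ on Windows):

```shell
# Create a virtual environment named "name" (any name works)
python3 -m venv name

# Activate it -- the script's location depends on the OS:
#   Linux/macOS:            source name/bin/activate
#   Windows (cmd):          name\Scripts\activate.bat
#   Windows (PowerShell):   name\Scripts\Activate.ps1
#   Windows (Git Bash):     source name/Scripts/activate
source name/bin/activate 2>/dev/null || source name/Scripts/activate

# Confirm the venv's interpreter is now the one on PATH
python -c "import sys; print(sys.prefix)"
```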

  • @krisztiankoblos1948
    @krisztiankoblos1948 4 months ago +24

    The conversation will fill up the context window very fast. You can store the conversation embeddings with the messages in a vector database and pull the related parts from it.

    • @Larimuss
      @Larimuss 4 months ago +6

      Yes, but that's a bit beyond this video. I guess he should quickly mention there is a memory limit, but storing in a vector DB is a whole other beast I'm looking to get into next with LangChain 😂

    • @krisztiankoblos1948
      @krisztiankoblos1948 3 months ago +10

      @@Larimuss It is not that hard. I coded it locally and store them in a JSON file. You just store the embedding with each message, then you create the new message's embedding and grab the 10-20 best-matching messages by cosine distance. It is less than 100 lines. This is the distance function: np.dot(v1, v2)/(norm(v1)*norm(v2)). I also summarize the memories with an LLM so I can keep them shorter.
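
      A minimal sketch of the retrieval idea described above (hypothetical names; the embed() stand-in is a toy bag-of-characters vector so the snippet is self-contained — in practice you'd use a real embedding model, e.g. one served by Ollama):

```python
import numpy as np
from numpy.linalg import norm

def cosine(v1, v2):
    # The similarity function from the comment above
    return np.dot(v1, v2) / (norm(v1) * norm(v2))

def embed(text):
    # Toy stand-in embedding: deterministic bag-of-characters.
    # Swap in a real embedding model for actual use.
    v = np.zeros(128)
    for ch in text.lower():
        v[ord(ch) % 128] += 1.0
    return v

memory = []  # list of (message, embedding) pairs; could be persisted as JSON

def remember(message):
    memory.append((message, embed(message)))

def recall(query, k=3):
    # Return the k stored messages most similar to the query
    q = embed(query)
    scored = sorted(memory, key=lambda m: cosine(q, m[1]), reverse=True)
    return [msg for msg, _ in scored[:k]]

remember("My dog is called Rex")
remember("I prefer tabs over spaces")
remember("Rex likes long walks")
# recall("dog", k=2) returns the 2 closest stored messages,
# which you would prepend to the prompt instead of the full history.
```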

    • @landinolandese8298
      @landinolandese8298 3 months ago

      @@krisztiankoblos1948 This would be awesome to learn how to implement. Do you have any recommendations on tutorials for this?

    • @czombiee
      @czombiee 3 months ago

      @@krisztiankoblos1948 Hi! Do u have a repo to share? Sounds interesting!

    • @star_admin_5748
      @star_admin_5748 3 months ago

      @@krisztiankoblos1948 Brother, you are beautiful.

  • @yuvrajkukreja9727
    @yuvrajkukreja9727 2 months ago +1

    Awesome, that was "the tutorial of the month" from you, Tim!!! Because you didn't use some sponsored tech stack! Those usually are terrible!

  • @WhyHighC
    @WhyHighC 4 months ago +10

    New to the world of coding. Teaching myself through YT for now, and this guy is clearly S Tier.
    I like him and Programming with Mosh's tutorials. Any other recommendations? I'd prefer more vids like this with actual walkthroughs on my feed.

    • @S1mpai_Official
      @S1mpai_Official 3 months ago +2

      idk, but I never understood anything from Programming with Mosh videos. Tim is a way better explainer for me, especially that 9-hour beginner-to-advanced video.

    • @M.V.CHOWDARI
      @M.V.CHOWDARI 3 months ago +1

      Bro Code is GOAT 🐐

    • @WhyHighC
      @WhyHighC 3 months ago

      @@M.V.CHOWDARI Appreciate it!

  • @Larimuss
    @Larimuss 4 months ago

    Wow, thanks! This is a really simple, straightforward guide to get me started writing the Python myself rather than just using people's UIs. Love the explanations.

  • @T3ddyPro
    @T3ddyPro 3 months ago +9

    Thanks to your tutorial I recreated Jarvis with a custom GUI, using the llama3 model. I use it in Italian because I'm Italian, but you can also use it in English and other languages.

    • @akhilpadmanaban3242
      @akhilpadmanaban3242 3 months ago

      Are these models completely free?

    • @leodark_animations2084
      @leodark_animations2084 10 days ago

      @@akhilpadmanaban3242 With Llama, yes, as they run locally and you are not using APIs. But they are pretty resource-consuming.. I tried it and they couldn't run.

    • @T3ddyPro
      @T3ddyPro 6 days ago

      @@akhilpadmanaban3242 Yes

  • @specialize.5522
    @specialize.5522 10 days ago

    Very much enjoyed your instruction style - subscribed!

  • @leonschaefer4832
    @leonschaefer4832 3 months ago +3

    This just inspired me to save GPT costs for our SaaS product. Thanks Tim!

    • @CashLoaf
      @CashLoaf 3 months ago +1

      hey, I'm into SaaS too, did u make any project yet?

  • @konradriedel4853
    @konradriedel4853 2 months ago +2

    Hey man, thanks a lot. Could you explain how to feed in your own data (PDFs, web sources, etc.) so it can give answers when I need to give it more detailed knowledge about certain internal information, for possible questions regarding my use case?

  • @franxtheman
    @franxtheman 3 months ago +3

    Do you have a video on fine-tuning or prompt engineering? I don't want it to be nameless please.😅

  • @carsongutierrez7072
    @carsongutierrez7072 4 months ago +1

    This is what I need right now!!! Thank you CS online mentor!

  • @arxs_05
    @arxs_05 4 months ago +1

    Wow, so cool! You really nailed the tutorial 🎉

  • @repairstudio4940
    @repairstudio4940 4 months ago +1

    Awesomesauce! Tim, make more vids covering LangChain projects please, and maybe an in-depth tutorial! ❤🎉

  • @kfleming78
    @kfleming78 4 days ago

    Fantastic explanation - thank you for this

  • @techknightdanny6094
    @techknightdanny6094 3 months ago

    Timmy! Great explanation, concise and to the point. Keep 'em coming boss =).

  • @MwapeMwelwa-wn9ed
    @MwapeMwelwa-wn9ed 4 months ago +6

    Tech With Tim is my favorite.

    • @WhyHighC
      @WhyHighC 4 months ago +1

      Can I ask who is in 2nd and 3rd?

    • @tech_with_unknown
      @tech_with_unknown 3 months ago +2

      @@WhyHighC 1: tim 2: tim 3: tim

  • @RevanthK-y1l
    @RevanthK-y1l 29 days ago +1

    Could you please tell us how to create a fine-tuned chatbot using our own dataset?

  • @KumR
    @KumR 4 months ago +3

    Hi Tim - now we can download Llama 3.1 too... By the way, can u also convert this to a UI using Streamlit?

  • @davidtindell950
    @davidtindell950 4 months ago +3

    Adding a context, of course, generates interesting results: "context": "Hot and Humid Summer" --> chain invoke result = To be honest, I'm struggling to cope with this hot and humid summer. The heat and humidity have been really draining me lately. It feels like every time I step outside, I'm instantly soaked in sweat. I just wish it would cool down a bit! How about you? ...🥵

  • @timstevens3361
    @timstevens3361 1 month ago

    Very helpful video, Tim!

  • @jagaya3662
    @jagaya3662 3 months ago

    Thanks, super useful and simple!
    I just wondered how I could best use the new Llama model coming out - so perfect timing xD
    Would have added that Llama is made by Meta - so despite being free, it's comparable to the latest OpenAI models.

  • @ShahZ
    @ShahZ 3 months ago

    Thanks Tim, ran into a bunch of errors when running the script. Guess who came to my rescue: ChatGPT :)

  • @build.aiagents
    @build.aiagents 2 months ago +2

    lol, the thumbnail had me thinking there was gonna be a custom UI with the script

  • @bause6182
    @bause6182 3 months ago +1

    If you combine this with a webview you can make a sort of artifact in your local app

  • @proflead
    @proflead 3 months ago

    Simple and useful! Great content! :)

  • @asharathod9765
    @asharathod9765 2 months ago

    Awesome..... I really needed a replica of a chatbot for a project and this worked perfectly.... thank you

  • @dimox115x9
    @dimox115x9 4 months ago +2

    Thank you very much for the video, I'm gonna try that :)

  • @SAK_The_Coder
    @SAK_The_Coder 2 months ago

    This is what I need, thank you bro ❤

  • @praveertiwari3545
    @praveertiwari3545 3 months ago +1

    Hi Tim,
    I recently completed your video on the django-react project, but I need urgent help from your side: could you make a video on how to deploy a django-react project on Vercel, Render, or another well-known platform? This would really be helpful, as many users on the Django forum are still confused about deploying a django-react project to popular hosting sites.
    Kindly help with this.

  • @TechyTochi
    @TechyTochi 4 months ago +1

    This is very useful content. Keep it up!

  • @31-jp6ok
    @31-jp6ok 3 months ago

    If you read my message: thank you for teaching, and would you mind teaching me more about fine-tuning? What should I do? (I want TensorFlow.) And I want it to be able to learn the things I can't answer myself. What should I do?

  • @rhmagalhaes
    @rhmagalhaes 4 months ago

    I love how you make it easy for us.
    After that we need a UI and bingo.
    Btw, does it keep the answers in memory after we exit? Don't think so, right?

    • @josho225
      @josho225 3 months ago

      Based on the code, no. Only within a single run.

  • @weiguangli593
    @weiguangli593 2 months ago

    Great video, thank you very much!

  • @taymalsous5894
    @taymalsous5894 3 months ago +1

    hello Tim! this video is awesome, but the only problem I have is that the Ollama chatbot is responding very slowly. Do you have any idea how to fix this?

  • @Money4Jam2011
    @Money4Jam2011 3 months ago

    Great video, learned a lot. Can you advise me on the route I would take if I wanted to build a chatbot around a specific niche like comedy, and build an app that I could sell or give away for free? I would need to train the model on that specific niche and that niche only, then host it on a server, I would think. An outline of these steps would be much appreciated.

  • @siddhubhai2508
    @siddhubhai2508 4 months ago +9

    Please Tim, help me add long-term (in fact ultra-long) memory to my AI agent using only Ollama and the rich library. Maybe MemGPT would be a nice approach. Please help me!

    • @birdbeakbeardneck3617
      @birdbeakbeardneck3617 4 months ago +1

      Not an AI expert, so I could be saying something wrong:
      you mean the AI remembering things from messages way back in the conversation? If so, that's called the context of the AI; it's limited by the training and is an area of current development. On the other hand, Tim is just making an interface for an already-trained AI.

    • @siddhubhai2508
      @siddhubhai2508 4 months ago +1

      @@birdbeakbeardneck3617 I know that, bro, but I want a custom solution for what I said, like a vector database or Postgres. The fact is I don't know how to use them; the tutorials are not straightforward, unlike Tim's, and the docs don't give me a specific solution. Yes, I know that after reading the docs I'll be able to do it, but I have very little time (3 days), and in those days I'll have to add 7 tools to the AI agent. Otherwise I'll keep trying. ❤️ If you can help me through any article, blog, or email, please do 🙏❤️

    • @davidtindell950
      @davidtindell950 4 months ago +3

      Thx, Tim! Now llama3.1 is available under Ollama. It generates great results and has a large context memory!

    • @siddhubhai2508
      @siddhubhai2508 4 months ago +1

      @@davidtindell950 But bro, my project can't depend on the LLM's context memory. Please tell me if you can help me with that!

    • @davidtindell950
      @davidtindell950 4 months ago

      @@siddhubhai2508 I have found the FAISS vector store provides an effective, large-capacity "persistent memory" with CUDA GPU support.

  • @sacv2
    @sacv2 1 month ago

    This is great! Thanks!

  • @skadi3399
    @skadi3399 4 months ago

    Great video! Is there any way to connect a personal database to this model (so that the chat can answer questions based on the information in the database)? I have a database in Postgres and have already used RAG on it, but I have no idea how to connect the DB and the chat. Any ideas?

  • @toddgattfry5405
    @toddgattfry5405 4 months ago

    Cool!! Could I get this to summarize my e-library?

  • @H4R4K1R1x
    @H4R4K1R1x 4 months ago

    This is swag. How can we create a custom personality for the llama3 model?

  • @bsick6856
    @bsick6856 4 months ago +1

    Thank you so much!!

  • @pixelmz
    @pixelmz 4 months ago

    Hey there, is your VSCode theme public? It's really nice, would love to have it to customize

  • @AlexTheChaosFox1996
    @AlexTheChaosFox1996 3 months ago +1

    Will this run on an android tablet?

  • @arunbalakrishnan8978
    @arunbalakrishnan8978 4 months ago +1

    Useful. Keep it up.

  • @sunhyungkim5764
    @sunhyungkim5764 1 month ago

    Amazing!

  • @swankyshivy
    @swankyshivy 10 days ago +1

    How can this be moved from running locally to an internal website?

  • @ccKuang-ziqian
    @ccKuang-ziqian 18 days ago

    Should I install Ollama in a virtual env?

  • @БогданСірський
    @БогданСірський 3 months ago

    Hey, Tim! Thanks for your tutorial. I have a problem: the bot isn't responding to me. Maybe someone else has the same problem? Give me some feedback, please.

  • @TanujSharma-d9o
    @TanujSharma-d9o 4 months ago

    Can you teach us how to implement it in GUI form? I don't want to run the program every time I want help with this kind of thing.

  • @kinuthiastevie4031
    @kinuthiastevie4031 4 months ago +2

    Nice one

  • @alexandresemenov8671
    @alexandresemenov8671 4 months ago +1

    Hello! Tim, when I run Ollama directly there is no delay in the response, but using the script with LangChain some delay appears. Why is that? How do I solve it?

  • @That_Narrator001
    @That_Narrator001 13 hours ago

    Which version of Python did you use?
    I'm trying to navigate version 3.13.
    When inside the terminal, the prompt shows C:\Users\Computer\.ollama>
    I may have done it wrong.

  • @AmitErandole
    @AmitErandole 4 months ago

    Can you show us how to do RAG with llama3?

  • @jorgeochoa4032
    @jorgeochoa4032 3 months ago

    hello, do you know if it's possible to use this model as a "pre-trained" one, and add some new, let's say, local information to the model, to use it for a specific task?

  • @sean_vikoren
    @sean_vikoren 1 month ago

    thank you.

  • @tengdayz2
    @tengdayz2 4 months ago

    Thank You.

  • @m.saksham3409
    @m.saksham3409 4 months ago

    I have not implemented it myself, but I have a doubt: you are using LangChain where the model is llama 3.1, and LangChain manages everything here, so what's the use of Ollama?

    • @gunabaki7755
      @gunabaki7755 3 months ago

      LangChain simplifies interactions with LLMs; it doesn't provide the LLM. We use Ollama to get the LLM.
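
      A hedged sketch of that division of labor, along the lines of the video's setup (assumes the langchain-ollama package and a pulled llama3 model; the snippet falls back to plain string formatting if LangChain isn't installed, since the prompt template is ultimately just that):

```python
# LangChain builds the prompt and pipes it along; Ollama hosts and runs the model.
try:
    from langchain_ollama import OllamaLLM           # client for the local Ollama server
    from langchain_core.prompts import ChatPromptTemplate
    HAVE_LANGCHAIN = True
except ImportError:                                  # fallback so the sketch still runs
    HAVE_LANGCHAIN = False

template = """Answer the question below.

Conversation history: {context}

Question: {question}

Answer:"""

def build_prompt(context: str, question: str) -> str:
    # This substitution is essentially all the "template" step does.
    return template.format(context=context, question=question)

if HAVE_LANGCHAIN:
    prompt = ChatPromptTemplate.from_template(template)
    model = OllamaLLM(model="llama3")   # Ollama must be running with llama3 pulled
    chain = prompt | model              # LangChain pipeline: prompt -> local model
    # chain.invoke({"context": "", "question": "Hi!"})  # requires the Ollama server
```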

  • @thegamingaristocrat7615
    @thegamingaristocrat7615 2 months ago

    Is there any way to make a Python script to automatically train a locally-run model?

  • @andhika277
    @andhika277 3 months ago

    How much RAM is required to make this program run well? Because I have only 4GB of RAM.

  • @cyrilypil
    @cyrilypil 3 months ago

    How do you get Local LLM to show? I don’t have that in my VS Code

  • @davidtindell950
    @davidtindell950 4 months ago +1

    You may find it 'amusing' or 'interesting' that when I (nihilistically) prompted with "Hello Cruel World!', 'llama3.1:8b' responded: " A nod to the Smiths' classic song, 'How Soon is Now?' (also known as 'Hello, Hello, How are You?') " !?!?!🤣

  • @TigerBrownTiger
    @TigerBrownTiger 1 month ago

    Why does a Microsoft Publisher window keep popping up saying "unlicensed product" and not allow it to run?

  • @okotjakimgonzalo2270
    @okotjakimgonzalo2270 4 months ago

    Where do you get all this stuff from?

  • @sharanvellore9016
    @sharanvellore9016 3 months ago

    Hi, I have tried this and it's working, but the model's response time is long. Anything I can do to reduce that?

  • @Eyuel3256
    @Eyuel3256 4 months ago

    I had been using Ollama on my laptop, and it was utilizing 101% of my CPU's processing power. This excessive usage threatened to overheat my device and decrease its performance, so I decided to discontinue using the program.

  • @ruthirockstar2852
    @ruthirockstar2852 3 months ago

    Is it possible to host this on a cloud server, so that I can access my custom bot whenever I want?

  • @lyndonyang1269
    @lyndonyang1269 4 months ago

    How coincidental, I made this project just 2 days ago

  • @antoniosa
    @antoniosa 3 months ago

    A dummy question.. Where is the template used?

  • @felipemachado8311
    @felipemachado8311 11 days ago

    Can I train this model? Give it information beforehand that it can use to answer me?

  • @TotoyBabes
    @TotoyBabes 2 months ago

    Do I need to install LangChain?

  • @kingkd7179
    @kingkd7179 2 months ago

    It was a great tutorial and I followed it properly, but I am still getting an error:
    ConnectError: [WinError 10061] No connection could be made because the target machine actively refused it
    I am running this code on my office machine, which has restricted the OpenAI models and AI sites.

  • @abuhabban-tz8xj
    @abuhabban-tz8xj 3 months ago

    What are your PC specs, sir?

  • @bhaveshsinghal6484
    @bhaveshsinghal6484 4 months ago

    Tim, this Ollama is running on my CPU and hence is really slow. Can I make it run on my GPU somehow?

  • @trevoro.9731
    @trevoro.9731 4 months ago

    If you need to work with large amounts of data, OpenAI performance still can't be matched locally, unless you spend a ridiculous amount on your computer build.

    • @hirthikbalajic
      @hirthikbalajic 4 months ago

      It can be matched by running the Llama 3.1 405B model!

  • @Hrlover205
    @Hrlover205 3 months ago

    I don't know what is happening: when I run the Python file in cmd it shows me "hello world" and then the command ends.

  • @VatsalyaB
    @VatsalyaB 3 months ago

    thx ;)

  • @khushigupta5798
    @khushigupta5798 3 months ago

    Hey, how can I show this as a UI?
    I want to create a chatbot which can provide me programming-related answers, with user authentication via OTP.
    Please tell me how I can create this using this model
    and create my UI. I am a full-stack developer, new to ML. Please reply.

  • @ParanoidNotAndroid
    @ParanoidNotAndroid 3 months ago

    Does anybody know what type of data the llama software is exchanging?

  • @silasknapp4450
    @silasknapp4450 1 month ago

    Hi. Is there a way to uninstall llama3 again?

  • @vivekanandl8798
    @vivekanandl8798 4 months ago

    Does the response speed of an AI bot like llama depend on the GPU?

  • @734833
    @734833 3 months ago

    Nice

  • @aviralshastri
    @aviralshastri 3 months ago

    How can we stream the output?

  • @乾淨核能
    @乾淨核能 4 months ago

    what's the minimum hardware requirement? thank you!

  • @mit2874
    @mit2874 4 months ago

    Do I need VRAM for this?

  • @GeneKim-g1w
    @GeneKim-g1w 3 months ago

    How do I activate this on Windows?

  • @nomannosher8928
    @nomannosher8928 19 days ago

    Does anyone know Tim's HW specs?

  • @PaulRamone356
    @PaulRamone356 4 months ago

    PS C:\Windows\system32> ollama pull llama3
    Error: could not connect to ollama app, is it running?
    What seems to be wrong? (sorry for the noob question)

    • @gunabaki7755
      @gunabaki7755 3 months ago +2

      You need to run the Ollama application first; it usually starts when u boot up ur PC.

    • @PaulRamone356
      @PaulRamone356 3 months ago

      @@gunabaki7755 will try this, thanks bro!

  • @annismehdi343
    @annismehdi343 19 days ago

    Hello there... Can I e-mail you? I am facing a problem and could really use some help.

  • @pixelmz
    @pixelmz 4 months ago

    Sadly, even though I have 32GB of RAM, the 7B "llama3" takes up to 1 minute to answer.

  • @opita_opica
    @opita_opica 1 month ago

    This context thing is not working; the bot does not know what was said earlier in the conversation.

  • @rajeshm2416
    @rajeshm2416 4 months ago +1

    ❤🎉

  • @CrocodilesDen
    @CrocodilesDen 3 months ago

    How do I deploy it to my website?

    • @gunabaki7755
      @gunabaki7755 3 months ago

      I think u can try to convert it into an API using FastAPI and call the API from the frontend.

  • @TDark1
    @TDark1 24 days ago

    Where is the script, bro?

  • @anneevict3839
    @anneevict3839 3 months ago

    can i embed this chatbot into a website? doing this for an assignment

  • @SnowballOfficials
    @SnowballOfficials 4 months ago

    Does it require a GPU?

    • @gunabaki7755
      @gunabaki7755 3 months ago

      Depends on the size of the model; smaller models don't require one.