Spring AI - Run Meta's LLaMA 2 Locally with Ollama 🦙 | Hands-on Guide

  • Published: 5 Oct 2024
  • #JavaTechie #SpringAI #Ollama #LLaMA2
    👉 In this tutorial I will walk you through the steps to run an LLM locally using Ollama and Spring AI. We will also build a hands-on project with the popular LLM, Llama 2.
    What You Will Learn:
    👉 What is Spring AI?
    👉 What is Ollama?
    👉 What is LLaMA 2?
    👉 Hands-on project with the popular LLM, Llama 2
    Spring documentation: docs.spring.io...
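    For reference, once the Spring AI Ollama starter dependency is on the classpath, the setup shown in the video largely comes down to a couple of properties. A minimal sketch (property names follow the Spring AI Ollama chat documentation; the model name assumes you have already run `ollama pull llama2`):

    ```properties
    # Base URL of the locally running Ollama server (Ollama listens on port 11434 by default)
    spring.ai.ollama.base-url=http://localhost:11434
    # Chat model to use; assumes `ollama pull llama2` has already been run
    spring.ai.ollama.chat.options.model=llama2
    ```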
    🧨 Hurry up & register today! 🧨
    DevOps for Developers course (live class) 🔥🔥:
    javatechie.ong...
    COUPON CODE: NEW24
    Spring Boot microservice premium course launched with 70% off 🚀🚀
    COURSE LINK:
    javatechie.ong...
    PROMO CODE: SPRING50
    GitHub:
    github.com/Jav...
    Blogs:
    / javatechie4u
    Facebook:
    / javatechie
    Join this channel to get access to perks:
    www.youtube.co...
    🔔 Guys, if you like this video, please subscribe now and press the bell icon so you don't miss any updates from Java Techie.
    Disclaimer/Policy:
    📄 Note: All content uploaded to this channel is mine and is not copied from any community; you are free to use the source code from the above-mentioned GitHub account.

Comments • 68

  • @dmode1535
    @dmode1535 3 months ago +9

    Thanks for posting this, sir, I have learned a lot from you. Developers should be the highest paid group of any field because we do more learning than any other field. We never stop learning; there is always something new to learn.

    • @itdev7097
      @itdev7097 2 months ago

      "Developers should be highest paid group in any field".... exactly the opposite should happen.

  • @Derrick-f8m
    @Derrick-f8m 1 month ago

    man Basant you are awesome. You are the only guy on RUclips who innovates. You've been doing this for a long time and never quit. No other teacher on this platform has continued to serve his dev community like this. Java will never die.

  • @dhavasanth
    @dhavasanth 27 days ago

    I appreciate your tireless efforts, thank you! We are eagerly awaiting more information on RAG and vector databases.

  • @praveenj3112
    @praveenj3112 22 days ago

    Thank you for making this video and giving some basic info regarding gen AI and Spring AI.

  • @TheMrtest123
    @TheMrtest123 3 months ago +1

    Thanks for starting a series on AI. This is the need of the hour. Thanks for accepting our request for AI demos.

  • @_vku
    @_vku 2 months ago +1

    Thanks Basant sir, as a developer I like watching your videos; whenever new learning is required I follow your videos.

  • @gopisambasivarao5282
    @gopisambasivarao5282 2 months ago +1

    Thanks Basant. Appreciate your efforts. God bless you!

  • @SouravDalal1981
    @SouravDalal1981 2 months ago +2

    Great job. Will look forward to vector database with AI integration.

  • @grrlgd3835
    @grrlgd3835 3 months ago +1

    thanks for this, JT. Another great video. I'm going to take a look at the ETL framework for data engineering. Looking forward to more content.

  • @rajasekharkarampudi2669
    @rajasekharkarampudi2669 3 months ago +1

    Great start to the Java AI world, bro.
    Will be waiting for the demos in this playlist on RAG applications using a vector DB and function calling.
    Take your time and plan a video on E2E real-world projects, deploying them on a server.

  • @abdus_samad890
    @abdus_samad890 3 months ago +1

    Loved it... Yes you can create more.

  • @rabiulbiswas5908
    @rabiulbiswas5908 3 months ago

    Thanks for the video. I hope more will come on Spring AI soon.

  • @SrkSalman416
    @SrkSalman416 3 months ago +1

    Very good video on Spring AI. Your content (videos) is like a Java magnet; it attracts very fast.

  • @satyendrabhagbole3746
    @satyendrabhagbole3746 2 months ago +1

    Brilliant explanation

  • @technoinshan
    @technoinshan 2 months ago

    Yes, please create a detailed video with Spring, Java, and an LLM.

  • @ChandraSekhar-jm3sr
    @ChandraSekhar-jm3sr 2 months ago +1

    Please share videos on different models... they are very helpful.

  • @rajkiran4572
    @rajkiran4572 2 months ago +1

    Thanks

    • @Javatechie
      @Javatechie  2 months ago

      Thanks buddy 🙂👍

  • @kappaj01
    @kappaj01 3 months ago +2

    Great video. Can you do a RAG solution using the Weaviate vector DB? I had one running with AI 0.8, but it has changed so much in 1.0...

    • @Javatechie
      @Javatechie  3 months ago +1

      Sure, I will give it a try, as I am currently learning it 😊

  • @psudhakarreddy6548
    @psudhakarreddy6548 3 months ago +1

    Thank you

  • @muralikrishna-qh6vg
    @muralikrishna-qh6vg 3 months ago +1

    Thank you Basant. Can you please take us through Spring AI with Google Cloud (like Vertex AI Gemini), and also Vertex AI with Java?

  • @mayankkumarshaw635
    @mayankkumarshaw635 3 months ago +1

    Very nice ❤❤

  • @petersabraham7423
    @petersabraham7423 3 months ago

    Great video. Thank you for always making the effort to update us with the latest technologies. I have one issue though: after implementing the generate REST API, my response takes too long, sometimes up to 7 minutes. I also noticed that you used a GET request while Ollama's "/api/chat" expects a POST request. Is there a particular reason you used a GET request?
    2. Is it possible to train the Llama model to recognize and provide responses based on the trained data?
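    The comment above is right about the HTTP verbs: Ollama's own endpoints (/api/generate, /api/chat) expect POST requests with a JSON body, while the GET endpoint in the video is the Spring controller's public API, which calls Ollama over POST internally. A minimal sketch of calling Ollama directly with the JDK's HttpClient (the class and helper names here are illustrative, and it assumes Ollama is listening on localhost:11434):

    ```java
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class OllamaDirectCall {

        // Build the JSON body that Ollama's /api/generate endpoint expects.
        // "stream": false asks for one complete JSON response instead of chunks.
        static String buildPayload(String model, String prompt) {
            return "{\"model\":\"" + model + "\",\"prompt\":\"" + prompt + "\",\"stream\":false}";
        }

        // Send the prompt to a local Ollama server. Note the POST:
        // Ollama's /api/generate and /api/chat do not accept GET requests.
        static String generate(String model, String prompt) throws Exception {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:11434/api/generate"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(buildPayload(model, prompt)))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            return response.body();
        }
    }
    ```

    With a model as large as llama2 running on CPU, long response times like the 7 minutes reported above usually mean the machine is short on RAM or cores rather than anything wrong with the request itself.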

  • @MrSumeetshetty
    @MrSumeetshetty 3 months ago +1

    Thanks bro... You are a good motivation for all developers to keep learning ❤

  • @shrirangjoshi6568
    @shrirangjoshi6568 1 month ago +1

    Please add the videos to a playlist.

  • @ashokpandit1367
    @ashokpandit1367 2 months ago

    That's what we are waiting for; let's give left and right to the Python people now 😅😅

  • @driveDoses
    @driveDoses 3 months ago +1

    Great start in the AI world. I had a question: if I have a Spring Boot application and I want answers related to my application data only, how can we achieve that?

    • @Javatechie
      @Javatechie  3 months ago

      I am not sure; I will check and update you.

  • @liqwis9598
    @liqwis9598 3 months ago +1

    Hey Basant, nice video. Can you please teach us how to use RAG functionality in locally running LLMs?

    • @Javatechie
      @Javatechie  3 months ago +1

      I will do this

    • @liqwis9598
      @liqwis9598 3 months ago

      @@Javatechie thank you as always 🙂

  • @RohitSharma-qb1vw
    @RohitSharma-qb1vw 9 days ago

    Awesome video. Please make videos on the remaining models also.

  • @atulgoyal358
    @atulgoyal358 2 months ago

    I got the below issue after running docker exec -it ollama ollama run llama2:
    Error: model requires more system memory (8.4 GiB) than is available (3.9 GiB)

  • @srinivaschannel6230
    @srinivaschannel6230 1 month ago +1

    Is any course starting regarding Spring AI?

    • @Javatechie
      @Javatechie  1 month ago

      No buddy, I haven't planned any.

  • @koseavase
    @koseavase 2 months ago +1

    Here comes Spring boot to challenge Python

  • @Nilcha-2
    @Nilcha-2 1 month ago +1

    Is it possible to augment the model with local files (PDF, txt, docs) and then have Llama scan through the files and answer any relevant questions about those files?

    • @Javatechie
      @Javatechie  1 month ago +1

      Yes, we can do that. I already did a POC on it; do let me know if you want a video on this concept.

    • @Nilcha-2
      @Nilcha-2 1 month ago

      @@Javatechie Yes sir. I would greatly appreciate it if you could do a video on that.
      Asking generic questions can also be done on free ChatGPT and Gemini. The main use case is when a business wants to provide their employees/customers with a chatbot on their own data, e.g. HR policies, corporate plans, etc.
      A complex requirement is when a chatbot is required to parse a database and summarize data, etc.
      Currently we do that using an Azure chatbot, but management does not like the idea of uploading confidential files. So if Ollama can handle that locally and securely, that will be the main use case.

  • @technoinshan
    @technoinshan 2 months ago +1

    hi, getting the below error whenever running the project:
    java.lang.RuntimeException: [500] Internal Server Error - {"error":"model requires more system memory (8.4 GiB) than is available (7.5 GiB)"}

  • @flutterdevfarm
    @flutterdevfarm 3 months ago +1

    Sir, are you launching any new Spring Boot & microservices course?
    The course listed on your site, is it live?

    • @attrayadas8067
      @attrayadas8067 3 months ago +1

      It's pre-recorded!

    • @Javatechie
      @Javatechie  3 months ago

      It was a recorded session of a past live class. Currently I don't have any plan for a new batch, but if I plan one in the future I will definitely announce it first on my RUclips channel for my audience 😀

  • @vaibhavshetty3781
    @vaibhavshetty3781 3 months ago +1

    getting this error:
    Error: llama runner process has terminated: signal: killed

    • @Javatechie
      @Javatechie  3 months ago

      At what step are you getting this error?

    • @vaibhavshetty3781
      @vaibhavshetty3781 3 months ago

      @@Javatechie while running
      docker exec -it ollama ollama run llama2

    • @vaibhavshetty3781
      @vaibhavshetty3781 3 months ago

      does it require any higher machine spec?

    • @Javatechie
      @Javatechie  3 months ago

      No specification required

    • @abhishekkumar2020
      @abhishekkumar2020 2 months ago

      getting same error

  • @moinakdasgupta3341
    @moinakdasgupta3341 3 months ago +1

    Hi JT, I am running both Ollama and my Spring Boot app using Docker Compose, but the app gets a 500 response when hitting the Ollama API. This is fixed only when I manually run ollama run llama2 or ollama pull llama2 in the ollama container. Is there any way to automatically pull the model when starting from Docker Compose? I tried command: ["ollama", "pull", "llama2"] in the compose file with no luck :(

    • @Javatechie
      @Javatechie  3 months ago

      Not sure buddy, I will check and update you.
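      One workaround that is often suggested for the auto-pull question above (a sketch, not tested against any particular Ollama image version; the service and volume names are assumptions): override the container's entrypoint so it starts the server in the background, pulls the model once, and then keeps the server in the foreground.

      ```yaml
      services:
        ollama:
          image: ollama/ollama
          ports:
            - "11434:11434"
          volumes:
            - ollama:/root/.ollama
          # Start the server in the background, give it a moment to come up,
          # pull llama2, then wait on the server process so the container stays alive.
          entrypoint: ["/bin/sh", "-c", "ollama serve & sleep 5 && ollama pull llama2 && wait"]

      volumes:
        ollama:
      ```

      The Spring Boot service should still wait for the model to finish downloading (e.g. with a healthcheck plus depends_on), since the first pull can take several minutes.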

  • @theniteshkumarjain
    @theniteshkumarjain 3 months ago +1

    Do we need tokens to generate the response? Also, are these models free?

    • @Javatechie
      @Javatechie  3 months ago

      No tokens required; yes, these models are open-source.

    • @theniteshkumarjain
      @theniteshkumarjain 3 months ago

      @@Javatechie thanks

  • @rishiraj2548
    @rishiraj2548 3 months ago +1

    😎👍🏻💯🙏🏻

  • @2RAJ21
    @2RAJ21 2 months ago

    I got the below issue after running -> docker exec -it ollama ollama run llama2
    verifying sha256 digest
    writing manifest
    removing any unused layers
    success
    Error: model requires more system memory (8.4 GiB) than is available (2.9 GiB)
    How do I solve this? Please help me.

    • @atulgoyal358
      @atulgoyal358 2 months ago

      I got the same issue. Did you find a solution?

    • @2RAJ21
      @2RAJ21 2 months ago

      @@atulgoyal358 I did not touch it after this issue.
      I think increasing the Docker memory size might help.

    • @atulgoyal358
      @atulgoyal358 2 months ago

      @@2RAJ21 Need to check how to increase the Docker memory size.

    • @2RAJ21
      @2RAJ21 2 months ago

      @@atulgoyal358 no idea bro...
      I am confused between RAM and memory.

    • @atulgoyal358
      @atulgoyal358 2 months ago

      @@2RAJ21 You need to configure the .wslconfig file in %userprofile% and increase the RAM and processors; then it will work.
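      For reference, on Windows with Docker Desktop's WSL 2 backend, the memory available to containers is capped by the WSL 2 VM and can be raised in a .wslconfig file in %USERPROFILE%. A sketch (the exact values are examples, not recommendations):

      ```ini
      [wsl2]
      # RAM available to the WSL 2 VM, and therefore to Docker Desktop containers.
      # The error above says llama2 needs ~8.4 GiB, so leave some headroom.
      memory=10GB
      processors=4
      ```

      After saving the file, run wsl --shutdown and restart Docker Desktop for the new limits to take effect.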

  • @CenturionDobrius
    @CenturionDobrius 3 months ago +1

    As usual, great job ❤
    Please, if possible, work on your microphone recording quality.

    • @Javatechie
      @Javatechie  3 months ago

      Hello buddy, thanks for your suggestion. Actually the mic quality is good but things are echoing, so I will definitely try to improve it.