LangChain and Ollama: Build Your Personal Coding Assistant in 10 Minutes

  • Published: Dec 10, 2024

Comments •

  • @AISoftwareDeveloper
    @AISoftwareDeveloper  2 months ago +1

    Here is the source code repo: github.com/aidev9/tuts/tree/main/langchain-ollama

  • @TimHoffeller
    @TimHoffeller 1 month ago +1

    Hi, pretty cool stuff and very helpful! You mentioned the RAG approach. A follow-up with that approach would be very cool 😊. Thanks a lot for your work!

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  1 month ago +1

      Hi @TimHoffeller, yes, RAG with LangChain would provide a more scalable solution. Luckily, my next video will cover that very topic with LangChain, Supabase, and a local Ollama. Check back in the next few days and let me know your thoughts. Thank you for the comment.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  1 month ago +1

      @@TimHoffeller the RAG video is now available. Any feedback is appreciated.

  • @arunbhati101
    @arunbhati101 2 months ago +1

    Great explanation and an easy-to-understand example. Thanks for sharing your knowledge.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Thank you @arunbhati101, I am glad you got something out of it. What videos would you like to see in the future?

  • @AaronBlox-h2t
    @AaronBlox-h2t 2 months ago +1

    Cool video. The source code is great. Including the relevant URLs in the video would also be good. Thanks.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Thanks! Here are the links; they're also in the video description now.
      github.com/pillarstudio/standards/blob/master/reactjs-guidelines.md
      gist.github.com/nlaffey/99fdb37c0ba286f38a0582564061dea8

  • @Zenith_pop
    @Zenith_pop 2 months ago +3

    One thing: for bigger models you need more memory. VRAM becomes the bottleneck, so you have to quantize, but then accuracy suffers at 8-bit or 4-bit and code errors creep in. Smaller models are useless because they aren't capable enough for real-life projects. But good video.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Yes, for bigger models, VRAM and a GPU can be highly beneficial. And you're right, bigger models will deliver better results for real-life projects. Thank you for the comment.
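
      For anyone who wants to try a quantized build, a minimal sketch of pointing LangChain at one (the model tag and prompt are examples; check the Ollama library for tags you have actually pulled):

      from langchain_ollama import ChatOllama

      # Assumes `ollama pull llama3.1:8b-instruct-q4_0` was run first.
      # A q4_0 tag is a 4-bit quantization: far less VRAM than full
      # precision, at some cost in output quality.
      llm = ChatOllama(model="llama3.1:8b-instruct-q4_0", temperature=0)
      print(llm.invoke("Write a React button component.").content)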

  • @adriangpuiu
    @adriangpuiu 2 months ago +1

    What would be nice is a meta agent that creates dynamic tools and reinserts them into the flow when needed.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Yeah, that would be cool. Based on the user prompt, the meta agent would create its own tools and execute them as needed. Controlling the agent will be a beast, though.

  • @AlexK-xb4co
    @AlexK-xb4co 1 month ago +1

    FYI, Ollama models are configured with a 2K context size by default.
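
    If the default window is too small, a minimal sketch of raising it through LangChain's ChatOllama (num_ctx is Ollama's context-length option; 8192 is an example value and uses more VRAM):

    from langchain_ollama import ChatOllama

    # Ollama defaults to a 2048-token context window (num_ctx).
    # Raising it lets prompts and chat history run longer.
    llm = ChatOllama(model="llama3.2", num_ctx=8192)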

  • @genzprogrammer
    @genzprogrammer 2 months ago +1

    Thanks, bro 🎉 Perfect timing. I was looking for something like this.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Thanks for the comment. What would you like to see next?

    • @genzprogrammer
      @genzprogrammer 2 months ago

      @@AISoftwareDeveloper How can we load a complete codebase from a Git repo and implement RAG for that codebase?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      @@genzprogrammer Easy. What would the RAG do? Are you thinking of a chatbot to talk to the codebase, or something more advanced, like artifact generation?

    • @genzprogrammer
      @genzprogrammer 2 months ago

      @@AISoftwareDeveloper Thinking of building something like a chat that takes my query, searches the vector DB, and tells me which file I should change and what I should change.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      @@genzprogrammer That's a great idea. So it would tell you which files need to be updated based on a feature change you're considering. Can you give me one or two examples of queries you'd ask?
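
      A rough sketch of the codebase-RAG idea discussed in this thread (the repo URL, file filter, chunking settings, and model names are illustrative assumptions, not code from the video):

      from langchain_community.document_loaders import GitLoader
      from langchain_text_splitters import RecursiveCharacterTextSplitter
      from langchain_ollama import OllamaEmbeddings
      from langchain_chroma import Chroma

      # Clone the repo locally; file_filter keeps only source files.
      loader = GitLoader(
          clone_url="https://github.com/owner/repo",  # hypothetical repo
          repo_path="./repo",
          branch="main",
          file_filter=lambda path: path.endswith((".ts", ".tsx")),
      )
      docs = loader.load()

      # Split files into chunks small enough to embed and retrieve.
      chunks = RecursiveCharacterTextSplitter(
          chunk_size=1500, chunk_overlap=200
      ).split_documents(docs)

      # Embed locally with Ollama and index into a vector store.
      store = Chroma.from_documents(chunks, OllamaEmbeddings(model="nomic-embed-text"))

      # "Which file should I change?" becomes a similarity search.
      for doc in store.similarity_search("Where is the login button rendered?", k=3):
          print(doc.metadata["file_path"], doc.page_content[:200])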

  • @Aksafan
    @Aksafan 2 months ago +4

    Hey, this was surprisingly helpful!! Thank you so much!
    Can I ask for the source code, please?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago +1

      Here you go: github.com/aidev9/tuts/tree/main/langchain-ollama

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Thanks for the comment. What topic would you like to see next?

    • @Aksafan
      @Aksafan 2 months ago

      @@AISoftwareDeveloper Some tuning would be great, because the quality of the code (those React components) is pretty bad, tbh. I asked several frontend engineers to look at it, and it was not the best.
      Maybe it's because of the models I used (Llama 3.2-3B and 3.1-7B).
      Also, it would be great to know how to prepare a proper guideline for a model to use, because links (even to a raw GitHub MD file) don't work when there are a bunch of other links on that page.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Here is the repo: github.com/aidev9/tuts/tree/main/langchain-ollama

  • @sathishbabu3322
    @sathishbabu3322 1 month ago +1

    Awesome video. Can you teach me how to convert this code to a .py file so I can run it locally in Visual Studio Code and test it with other compatible models?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  1 month ago +1

      Thank you for the comment. Yes, you can save a Jupyter notebook and run it as a Python script locally. Watch this to learn how: Jupyter Notebooks in VS Code on macOS
      ruclips.net/video/3pbFb7X2ObU/видео.html
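
      The conversion can also be done programmatically; a minimal sketch (the notebook filename is a placeholder, and `jupyter nbconvert --to script assistant.ipynb` does the same from the command line):

      import nbformat
      from nbconvert import PythonExporter

      # Read the notebook and export its cells as a plain Python script
      # that can be run with `python assistant.py`.
      nb = nbformat.read("assistant.ipynb", as_version=4)
      source, _ = PythonExporter().from_notebook_node(nb)

      with open("assistant.py", "w") as f:
          f.write(source)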

    • @sathishbabu3322
      @sathishbabu3322 1 month ago +1

      @@AISoftwareDeveloper I am using a Windows machine. I hope that video covers it as well. And thank you so much for the quick response. Really appreciated.

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  1 month ago +1

      The video doesn't cover Windows, but once you have VS Code installed, running Python is the same. I hope that helps. Thank you for your comment.

  • @umarkhan8787
    @umarkhan8787 2 months ago

    Nice, man 👍🏻. Just wanted to know where I should append my tool/function response: in system or in tools? 😅

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      You'd want to append the tool response to the messages array. You can then filter that array by instance type, using ToolMessage as the filter. That will do the trick 👍
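
      A minimal sketch of that pattern (the message contents are made up; the classes are LangChain's standard message types):

      from langchain_core.messages import AIMessage, HumanMessage, ToolMessage

      messages = [
          HumanMessage(content="What's 2 + 2?"),
          AIMessage(content="", tool_calls=[
              {"name": "add", "args": {"a": 2, "b": 2}, "id": "call_1"},
          ]),
          # Append the tool's result as a ToolMessage tied to the call id.
          ToolMessage(content="4", tool_call_id="call_1"),
      ]

      # Filter the history by instance type to pull out only tool responses.
      tool_results = [m for m in messages if isinstance(m, ToolMessage)]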

  • @agraciag
    @agraciag 2 months ago +1

    Next step? Using voice to give it the prompt :-)

  • @MubashirullahD
    @MubashirullahD 2 months ago +2

    Build it in 10 minutes, waste days in the future dealing with dependency issues

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago +2

      Not to worry, mate. We’ll have AI to fix all dependency issues 😉

    • @codelinx
      @codelinx 2 months ago +2

      Sounds like someone has SDE… small developer energy 😂😂🫰

    • @MubashirullahD
      @MubashirullahD 2 months ago

      @codelinx
      XD

  • @NanoGi-lt5fc
    @NanoGi-lt5fc 2 months ago

    I have one question: can I somehow use this to create a GitHub Copilot that just gives me suggestions for what I need to do?

    • @AISoftwareDeveloper
      @AISoftwareDeveloper  2 months ago

      Hey, definitely. You can use this to create your own GitHub Copilot as a VS Code extension. Here's how to get started: code.visualstudio.com/api/get-started/your-first-extension

    • @NanoGi-lt5fc
    @NanoGi-lt5fc 2 months ago +1

    @@AISoftwareDeveloper Thanks, sir, I will try this. Will it give me suggestions like Copilot does? And can I publish that extension as well?