Mastering LangGraph: Agentic Workflows, Custom Tools, and Self-Correcting Agents with Ollama!

  • Published: 21 Oct 2024

Comments • 20

  • @MukulTripathi
    @MukulTripathi  2 months ago +1

    GitHub repo link is in the description now.

  • @ronwiltgen2698
    @ronwiltgen2698 2 months ago +1

    What a perfect explanation of different concepts! Following other tutorials on YouTube is so hard; they leave out critical explanations and pieces for newer developers. This tutorial walks through everything step by step. Absolutely amazing job here. Thank you, please keep it up and I'll be following all your upcoming videos!

    • @MukulTripathi
      @MukulTripathi  2 months ago

      Thank you for the feedback! I'll be perfecting this framework in the coming days and weeks with the addition of a custom_tools annotation and more visibility.

  • @reinerzufall3123
    @reinerzufall3123 2 months ago

    Thank you a thousand times, sir!
    Every single tutorial is packed with so many "hidden" pieces of information that ten 5-minute videos can't even come close to your content. I hope you will create many more tutorials like this. I have absorbed all your videos, recreated them, modified them, etc. If I may make a request, it would be for an explanation of all the different "function calls" that exist: native function calls, "normal" function calls, tool usage, etc. Are these all the same? For example, what is the difference between tools/functions built into LangChain/LangGraph and those written by myself?
    An explanation of frdel/agent-zero or a smaller rebuilt version of it would also be great 😁🙈

    • @MukulTripathi
      @MukulTripathi  2 months ago +1

      Hello, and thank you for the feedback! I appreciate it.
      All function calls, whether done "natively" or via LangChain tools, are essentially the same. The request we send to the server, and the response it generates based on the system prompt provided, are identical either way. The only difference is that the frameworks write the boilerplate code for us, so that we don't have to. However, I like writing my own code sometimes so that I can debug it. Most of the existing frameworks are new, and even though they are stable enough, I feel they are bloated for most of the simple tasks I want to do. It just makes sense to use a reduced feature set, write my own code, and use as little of the framework as possible.
      Doing things natively (without Python-based frameworks) also means that developers coming from other languages, like Java, can easily write the same code. That's why I lean towards not being tied to a framework.
      Hope it makes sense!
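
      To make the "frameworks just write the boilerplate" point concrete, here is a minimal sketch of a native tool call made straight against the Ollama REST API, with no framework in between (it assumes Ollama at localhost:11434 and a tool-capable model such as llama3.1; the weather tool is purely hypothetical, not something from the video):

      import json
      import requests

      # A hypothetical tool schema, in the same JSON shape any language could send.
      tools = [{
          "type": "function",
          "function": {
              "name": "get_current_weather",
              "description": "Get the current weather for a city.",
              "parameters": {
                  "type": "object",
                  "properties": {"city": {"type": "string"}},
                  "required": ["city"],
              },
          },
      }]

      payload = {
          "model": "llama3.1",
          "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
          "tools": tools,
          "stream": False,
      }

      resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
      message = resp.json()["message"]

      # When the model decides a tool is needed, it replies with tool_calls instead of text.
      for call in message.get("tool_calls", []):
          print(call["function"]["name"], json.dumps(call["function"]["arguments"]))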

    • @MukulTripathi
      @MukulTripathi  2 months ago +1

      Oh yes, agent-zero is such an interesting concept. If you look into my playlists, I have created a series on secure dockerized dev containers; agent-zero uses some concepts from there. It's on my radar. I want to finish building my local Jarvis while I build my own lightweight framework like that!

    • @reinerzufall3123
      @reinerzufall3123 2 months ago

      @MukulTripathi Yes sir, it makes sense now! So llama3-groq-tool-use, for example, is not a function-calling LLM but rather a model that has explicitly "learned" some functions, specifically the groq-ecosystem functions. With native functions or with real "function calling", this model will likely fail because it's not actually a function-calling model. Is that correctly summarized?

    • @reinerzufall3123
      @reinerzufall3123 2 months ago

      @MukulTripathi Okay, then I will take a look at this tutorial series and try to understand it 🤗

    • @MukulTripathi
      @MukulTripathi  2 months ago

      @reinerzufall3123 That's about right. In reality, all these function-calling models are just fine-tuned versions trained on a specific data set that teaches the model the function-calling aspect. So when the system prompt or a user prompt provides a tool set to the model, the model recognizes that it was previously trained on data that provided tools in this form and instructed it to respond in a particular way. When a request prompt uses a similar syntactical structure, the model responds in the same way the fine-tuning data set did.
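
      In practice that means the fine-tuned model emits a structured tool call, and the caller is expected to run the tool and hand the result back. A rough sketch of that round trip against the Ollama REST API, continuing the hypothetical get_current_weather example (not the video's code):

      import requests

      OLLAMA_URL = "http://localhost:11434/api/chat"

      def get_current_weather(city: str) -> str:
          return f"22 C and sunny in {city}"  # stub instead of a real weather API

      def run_with_tools(messages, tools, model="llama3.1"):
          payload = {"model": model, "messages": messages, "tools": tools, "stream": False}
          msg = requests.post(OLLAMA_URL, json=payload, timeout=120).json()["message"]
          messages.append(msg)
          for call in msg.get("tool_calls", []):
              # Dispatch by call["function"]["name"] in real code; only one tool here.
              result = get_current_weather(**call["function"]["arguments"])
              messages.append({"role": "tool", "content": result})
          # A second round trip lets the model turn the tool output into a normal answer.
          payload["messages"] = messages
          return requests.post(OLLAMA_URL, json=payload, timeout=120).json()["message"]["content"]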

  • @DEVANK-gb7ed
    @DEVANK-gb7ed 2 months ago

    Great tutorial!

  • @DaleIsWigging
    @DaleIsWigging 2 months ago

    Does the code assume you have an AI server (named a certain way) connected to your network?

    • @MukulTripathi
      @MukulTripathi  2 months ago

      Yeah, you can change it to localhost:11434 in case you have it locally deployed :)
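
      For reference, a minimal sketch of pointing the chat model at a local Ollama instead of a named server on the network (it assumes the langchain_ollama package; the model name is just an example):

      from langchain_ollama import ChatOllama

      llm = ChatOllama(
          model="llama3.1",
          base_url="http://localhost:11434",  # or http://<your-ai-server>:11434 for a networked box
      )
      print(llm.invoke("Say hello in one short sentence.").content)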

  • @MrTapan1994
    @MrTapan1994 16 days ago

    I didn't understand the part about how an LLM model generates responses even when it doesn't know the answer, because that's what hallucination is! Can you share why that is happening?

    • @MukulTripathi
      @MukulTripathi  14 days ago

      It's the concept of self-reflection. When you give the LLM a chance to think about its answer and reason over it, it follows that system prompt. Of course nothing is foolproof, but you do improve your chances of getting a good response this way.
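
      A minimal sketch of such a self-reflection loop wired up in LangGraph (the node bodies are placeholders where you would call your LLM; the state fields and retry cap are illustrative, not the video's exact code):

      from typing import TypedDict
      from langgraph.graph import StateGraph, END

      class State(TypedDict):
          question: str
          answer: str
          approved: bool
          attempts: int

      def generate(state: State) -> dict:
          # Placeholder: ask the LLM to answer state["question"] here.
          return {"answer": f"draft #{state['attempts'] + 1}", "attempts": state["attempts"] + 1}

      def reflect(state: State) -> dict:
          # Placeholder: ask the LLM to critique its own draft here.
          return {"approved": state["attempts"] >= 2}

      def should_retry(state: State) -> str:
          # Cap retries so an unconvinced critic cannot loop forever.
          return "done" if state["approved"] or state["attempts"] >= 3 else "retry"

      graph = StateGraph(State)
      graph.add_node("generate", generate)
      graph.add_node("reflect", reflect)
      graph.set_entry_point("generate")
      graph.add_edge("generate", "reflect")
      graph.add_conditional_edges("reflect", should_retry, {"retry": "generate", "done": END})
      app = graph.compile()
      print(app.invoke({"question": "Explain self-reflection.", "answer": "", "approved": False, "attempts": 0}))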

  • @daljeetsingh5489
    @daljeetsingh5489 2 months ago

    Hi Mukul, these are very informative. Do you have a github repo for the code?

    • @MukulTripathi
      @MukulTripathi  2 months ago

      I'll be sharing the repo soon. It's all locally hosted in my personal Gitea repo as of now.

    • @MukulTripathi
      @MukulTripathi  2 months ago +1

      GitHub repo link is in the description now:
      github.com/Teachings/langgraph-learning

  • @rayzorr
    @rayzorr 1 month ago

    Results are poor with the conditional edge when using a lesser model like llama3.1:8b or phi3.5. Lots of "crashing" or incorrect boolean returns.

    • @MukulTripathi
      @MukulTripathi  1 month ago

      If you're using the 8B Llama3.1, don't use the Q4-quantized model. If you use the Q8 or FP16 model, you'll get better results.
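
      Besides choosing a less aggressive quantization, it can also help to parse the model's verdict defensively inside the routing function you pass to add_conditional_edges, so a messy "True" / "yes, I think so" style reply does not crash the graph. A sketch, with illustrative state field and branch names (not the video's code):

      import re

      def route_on_relevance(state: dict) -> str:
          raw = str(state.get("is_relevant", "")).strip().lower()
          if re.match(r"^(true|yes|y|1)\b", raw):
              return "relevant"
          if re.match(r"^(false|no|n|0)\b", raw):
              return "not_relevant"
          # Anything unparseable falls back to the safe branch instead of raising.
          return "not_relevant"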