LLM Function Calling - AI Tools Deep Dive

  • Published: 21 Nov 2024

Comments • 28

  • @IdPreferNot1
    @IdPreferNot1 3 months ago +7

    The BEST walkthrough and sample code on tool calling on YouTube!

  • @JustinHennessy
    @JustinHennessy 1 day ago

    Amazing run down on tools. Thanks so much for sharing.

  • @therealsergio
    @therealsergio 4 months ago +2

    Adam, your content is very helpful and thoughtfully put together. It's clear a lot of hours go into its preparation.

  • @sunitjoshi3573
    @sunitjoshi3573 3 months ago +1

    Nice video! I was just thinking about function calling…and your video showed up! Thanks 😊

  • @Horizont.
    @Horizont. 4 months ago +2

    Great video. Dude's been speaking nonstop for 30 minutes straight. Now I need a 2-hour break.

    • @amk2298
      @amk2298 3 months ago

      😂😂😂😂

  • @JimHeil
    @JimHeil 4 months ago +2

    Awesome video! Thank you!

  • @Karthik-ln7eg
    @Karthik-ln7eg 4 months ago +1

    great video. very clear explanation.

  • @amrit20061994
    @amrit20061994 1 month ago

    Wait, so does LangChain create the whole JSON for you using the @tool decorator?
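
    [Editor's note: broadly yes — LangChain's @tool decorator derives a JSON schema from the function's signature, type hints, and docstring. A rough stdlib-only sketch of what such a decorator does under the hood (LangChain's real implementation is more featureful; the function and mapping names here are illustrative):

```python
import inspect
import typing

# Illustrative mapping from Python type hints to JSON Schema types
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool(func):
    """Attach an OpenAI-style function schema derived from the signature and docstring."""
    hints = typing.get_type_hints(func)
    properties = {}
    required = []
    for name, param in inspect.signature(func).parameters.items():
        hint = hints.get(name, str)
        properties[name] = {"type": PY_TO_JSON.get(hint, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value -> required argument
    func.schema = {
        "type": "function",
        "function": {
            "name": func.__name__,
            "description": (func.__doc__ or "").strip(),
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }
    return func

@tool
def get_weather(city: str, units: str = "metric") -> str:
    """Look up the current weather for a city."""
    return f"Weather in {city}"

print(get_weather.schema["function"]["name"])                    # get_weather
print(get_weather.schema["function"]["parameters"]["required"])  # ['city']
```

    The key point is that the schema is generated, not hand-written: the docstring becomes the description, type hints become parameter types, and parameters without defaults become required.]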

  • @aishwaryaallada0925
    @aishwaryaallada0925 3 months ago

    Awesome video! Is the code shared anywhere? 😊

    • @AdamLucek
      @AdamLucek 3 months ago

      Yes! In description and direct link here: github.com/ALucek/tool-calling-guide

  • @youtubemensch
    @youtubemensch 3 months ago

    Is this possible with an LLM running on your own server, without an AI API (like a Llama model)? Is this possible only with specific models?
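
    [Editor's note: yes, provided the model was trained for tool calling. Local servers such as Ollama, the llama.cpp server, and vLLM expose OpenAI-compatible /v1/chat/completions endpoints, so the same tools payload works. A hedged sketch of such a request body — the URL and model name below are assumptions, not from the video:

```python
import json

# Hypothetical local OpenAI-compatible endpoint (e.g. Ollama's default port)
BASE_URL = "http://localhost:11434/v1/chat/completions"

# The request body has the same shape as an OpenAI tool-calling request;
# "llama3.1" stands in for any local model with tool-calling support.
request_body = {
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
}

# Serialize and POST this to BASE_URL with any HTTP client
payload = json.dumps(request_body)
print(json.loads(payload)["tools"][0]["function"]["name"])  # get_weather
```

    The caveat is the model itself: tool calling depends on the model having been trained or fine-tuned to emit structured tool calls, so check the model card before relying on it.]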

  • @sirishkumar-m5z
    @sirishkumar-m5z 3 months ago

    An intriguing exploration into LLM function calling! Investigating further AI tools may improve your comprehension even more.

  • @coolmcdude
    @coolmcdude 4 months ago

    My favorite subject

  • @siddheshwarpandhare1698
    @siddheshwarpandhare1698 3 months ago

    Hi Adam, all the content is well explained. I need to disable parallel calling, but I'm not sure where to put parallel_function_tool: false. Can you help me in this case?

    • @AdamLucek
      @AdamLucek 3 months ago

      Sure, that's placed here with OpenAI's API:

      response = client.chat.completions.create(
          model="gpt-4o",
          messages=messages,
          tools=first_tools,
          tool_choice="auto",
          parallel_tool_calls=False,  # disable parallel function calling
      )

      or with LangChain during the bind_tools stage:

      llm_tools = llm.bind_tools(tools, parallel_tool_calls=False)

    • @siddheshwarpandhare1698
      @siddheshwarpandhare1698 3 months ago

      @@AdamLucek I'm getting this error: Completions.create() got an unexpected keyword argument 'parallel_tool_calls'

    • @AdamLucek
      @AdamLucek 3 months ago +1

      I was getting that too, then I updated my OpenAI package with pip and it fixed it! Make sure you restart your kernel after if you're in a notebook environment.

    • @siddheshwarpandhare1698
      @siddheshwarpandhare1698 3 months ago

      Thank you @@AdamLucek

  • @DaleIsWigging
    @DaleIsWigging 3 months ago +1

    The tutorial doesn't say how to insert your API key. To do this, replace

    client = OpenAI()

    with:

    import os
    from dotenv import load_dotenv

    # Load the OPENAI_API_KEY environment variable from a .env file
    load_dotenv()
    client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

    Then you can use a .env file with:

    OPENAI_API_KEY="sk-proj..."

    • @AdamLucek
      @AdamLucek 2 months ago

      Also possible to do:

      import os
      os.environ["OPENAI_API_KEY"] = "your-api-key-here"

      and it will automatically pull the environment variable when creating the client, etc. Thanks for pointing that out!

  • @gani2an1
    @gani2an1 4 months ago

    Where can we see the functions that are available with each model?
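
    [Editor's note: there is no per-model function list to look up — tool functions aren't bundled with models. You declare them yourself in each request, and you execute them locally when the model returns a tool call; what varies by model is only whether tool calling is supported at all. A sketch with illustrative names:

```python
import json

# A local function you implement yourself; the body is a stand-in for a real lookup
def get_stock_price(ticker: str) -> str:
    return f"{ticker}: 123.45"

# The schema you pass in each request -- this is what "makes the function available"
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the latest price for a stock ticker.",
        "parameters": {
            "type": "object",
            "properties": {"ticker": {"type": "string"}},
            "required": ["ticker"],
        },
    },
}]

# Dispatch table: map the name the model returns to your local implementation
DISPATCH = {"get_stock_price": get_stock_price}

# Simulated tool call, shaped like what the API returns (arguments arrive as a JSON string)
tool_call = {"name": "get_stock_price", "arguments": json.dumps({"ticker": "ACME"})}
result = DISPATCH[tool_call["name"]](**json.loads(tool_call["arguments"]))
print(result)  # ACME: 123.45
```

    So the place to "see" the functions is your own tools list; each provider's docs only tell you whether a given model supports tool calling.]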

  • @maxpalmer8660
    @maxpalmer8660 3 months ago

    Kinda hate this dude's voice, ngl. Anyone else with me?