The BEST walkthrough and sample code on tool calling on RUclips!
Amazing run down on tools. Thanks so much for sharing.
Adam, your content is very helpful and thoughtfully put together. It's clear a lot of hours go into its preparation.
Nice video! I was just thinking about function calling…and your video showed up! Thanks 😊
Perfect!
Great video. Dude's been speaking nonstop for 30min straight. Now, I need a 2 hour break.
😂😂😂😂
Awesome video! Thank you!
Thanks!!!
great video. very clear explanation.
Thanks!
Wait so does Langchain create the whole Json for you using the Tool decorator?
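Editor's note: yes, LangChain's @tool decorator builds the JSON schema from the function's signature and docstring. Here is a conceptual, stdlib-only sketch of the idea (this is NOT LangChain's actual implementation, and tool_schema is a made-up helper name):

```python
import inspect
import json

def tool_schema(fn):
    """Build an OpenAI-style tool schema from a function's
    signature and docstring. Conceptual sketch only."""
    type_map = {int: "integer", float: "number", str: "string", bool: "boolean"}
    props, required = {}, []
    for name, param in inspect.signature(fn).parameters.items():
        # Map the Python type annotation to a JSON Schema type (default: string)
        props[name] = {"type": type_map.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "type": "function",
        "function": {
            "name": fn.__name__,
            "description": inspect.getdoc(fn) or "",
            "parameters": {"type": "object", "properties": props, "required": required},
        },
    }

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

schema = tool_schema(add)
print(json.dumps(schema, indent=2))
```

With LangChain itself you would just write `@tool` above the function and pass it to `bind_tools`, which handles this schema generation for you.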
Awesome video! Is the code shared anywhere? 😊
Yes! In description and direct link here: github.com/ALucek/tool-calling-guide
How is this possible with an LLM running on your own server, so without an AI API? Like a Llama model. Is this possible with a specific model?
An intriguing exploration into LLM function calling! Investigating further AI tools may improve your comprehension even more.
My favorite subject
Hi Adam, all the content is well explained. I need to disable parallel calling, but I'm not sure where to put parallel_function_tool:false. Can you help me in this case?
Sure, that's placed here with OpenAI's API
response = client.chat.completions.create(
model="gpt-4o",
messages=messages,
tools=first_tools,
tool_choice="auto",
    parallel_tool_calls=False  # disable parallel function calling
)
or with Langchain during the bind_tools stage
llm_tools = llm.bind_tools(tools, parallel_tool_calls=False)
@@AdamLucek I'm getting this error: Completions.create() got an unexpected keyword argument 'parallel_tool_calls'
I was getting that too, then I upgraded my OpenAI package with pip and it fixed it! Make sure you restart your kernel afterwards if you're in a notebook environment.
Thank you @@AdamLucek
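Editor's note: with parallel_tool_calls=False the assistant message contains at most one entry in tool_calls. Here is a hedged sketch of handling that single call and appending the tool-result message in the shape OpenAI's chat format expects. The tool call is mocked (no network), and the id "call_123" and get_weather function are hypothetical:

```python
import json

# Mocked assistant message with a single tool call, the shape you get
# back when parallel_tool_calls=False (at most one entry in tool_calls).
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_123",  # hypothetical id for illustration
        "type": "function",
        "function": {"name": "get_weather",
                     "arguments": json.dumps({"city": "Paris"})},
    }],
}

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real lookup

# Dispatch the single tool call by name, parse its JSON arguments,
# and build the "tool" role message you append back to the conversation.
available = {"get_weather": get_weather}
call = assistant_msg["tool_calls"][0]
args = json.loads(call["function"]["arguments"])
result = available[call["function"]["name"]](**args)

tool_msg = {"role": "tool", "tool_call_id": call["id"], "content": result}
print(tool_msg)
```

In a real loop you would append tool_msg to messages and call client.chat.completions.create again so the model can use the result.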
The tutorial doesn't say how to insert your API key. To do this replace
{
client = OpenAI()
}
with :
{
import os
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
# OpenAI() picks up OPENAI_API_KEY from the environment automatically,
# but you can also set it explicitly:
client = OpenAI()
client.api_key = os.getenv("OPENAI_API_KEY")
}
ignore the "{" and "}".
Then you can use a .env file with:
OPENAI_API_KEY="sk-proj..."
Also possible to do
import os
os.environ["OPENAI_API_KEY"] = "your-api-key-here"
And it will automatically pull the environment variable when creating the client etc. Thanks for pointing out!
where can we see the functions that are available with each model?
Kinda hate this dudes voice ngl. Anyone else with me?
I been thinking this…
No. I am not with you.