A great improvement to Ollama, thanks for your video.
I like that it works with a small model, thanks for the update.
Thanks for the update and a great video! Two questions: 1) How can this be integrated into the Open Web UI tool, so that I can use its fantastic chat interface to chat with the local LLMs and, in turn, the LLMs can call the necessary tools? 2) Any example with Streamlit would be great!
Wow, thanks a lot, great video on the new Ollama updates. One question about the function-call demo: the function call has output, but response.message.content is an empty string, like message=Message(role='assistant', content=''). So if I use function calling, do I need to decode every function output myself?
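For reference, the usual pattern seems to be: run the tool call yourself, append the result as a tool message, and make a second chat call so the model writes the final text. A rough sketch, assuming the newer ollama-python that accepts plain functions as tools; the model name and add_two_numbers are just placeholders:

from ollama import chat

def add_two_numbers(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

messages = [{'role': 'user', 'content': 'What is 11 + 31?'}]
response = chat(model='llama3.1', messages=messages, tools=[add_two_numbers])

# When the model chooses a tool, content is typically empty and the
# call details are in response.message.tool_calls instead.
for call in (response.message.tool_calls or []):
    result = add_two_numbers(**call.function.arguments)
    messages.append(response.message)
    messages.append({'role': 'tool', 'content': str(result), 'name': call.function.name})

# Second pass: the model now has the tool output and writes normal content.
final = chat(model='llama3.1', messages=messages)
print(final.message.content)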
Interesting video. Maybe someone can explain in the comments how the LLM would know which tool to use when there are multiple tools. That is, does the LLM do a first pass over the names of the tools and try to decide which one is best suited to handle the query? Is there a way to provide additional information so that the LLM makes a good choice about which tool to call under which circumstances?
Under the hood, the Ollama Python library uses Pydantic and docstring parsing to generate the JSON schema that previously had to be provided manually as a tool definition. That schema is what helps the LLM choose the right tool (function) to call, so if you want more reliable results it is better to add a docstring to each function (tool).
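For illustration, a minimal sketch of how that looks with the newer ollama-python (the model name and the weather helper are placeholders I made up):

from ollama import chat

def get_current_weather(city: str) -> str:
    """Return the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return f"Sunny and 22 C in {city}"  # placeholder implementation

# The library builds the JSON schema from the signature and docstring,
# so the model sees the tool's name, description, and typed arguments.
response = chat(
    model='llama3.1',
    messages=[{'role': 'user', 'content': 'What is the weather in Paris?'}],
    tools=[get_current_weather],
)

for call in (response.message.tool_calls or []):
    print(call.function.name, call.function.arguments)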
@ thanks for your very helpful response
Thanks for the video, it's really interesting.
Can I make it use function calling only when it needs to, and not use it when it doesn't?
If I ask for the stock price of a recently listed company, what will happen? 🙂
cool, very cool!!!
Cool video. One question though: is there a way to make the tool's response be included in the AI's response? For example, on ChatGPT you can ask a custom GPT to create a custom PDF and it will return "here is your custom pdf you requested {pdf file}", and the wording might differ sometimes.
Instead of returning the tool response itself to the LLM so it can build a formatted answer, wrap the function inside another function that returns the response you want the user to receive, and send that formatted response directly to the user, not to the LLM. In that case you also need to add the tool message to the chat history array if you want the LLM to remember this response (i.e. when there might be follow-up questions that depend on it).
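A rough sketch of that pattern, assuming the ollama-python tool-call flow (the model name and the PDF helpers are hypothetical placeholders):

from ollama import chat

def create_pdf(topic: str) -> str:
    """Create a PDF about the given topic and return its file path."""
    return f"/tmp/{topic}.pdf"  # placeholder for the real generation logic

def create_pdf_for_user(topic: str) -> str:
    """Create a PDF and return the exact message the user should see."""
    path = create_pdf(topic)
    return f"Here is your custom pdf you requested: {path}"

messages = [{'role': 'user', 'content': 'Make me a pdf about llamas'}]
response = chat(model='llama3.1', messages=messages, tools=[create_pdf_for_user])

for call in (response.message.tool_calls or []):
    reply = create_pdf_for_user(**call.function.arguments)
    print(reply)  # goes straight to the user, not back through the LLM
    # Keep it in the history so the model remembers it for follow-up questions.
    messages.append(response.message)
    messages.append({'role': 'tool', 'content': reply, 'name': call.function.name})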
Nice video. However, from what I understand, Ollama is built to be used in conjunction with an Nvidia GPU; running it on a CPU will be a painful experience.
This is super cool.
I want to give the AI a Python tool and have it write a program to figure out the math it needs to do, then scrap the program... and the same with everything else, you know...
Here come the programmers, saving the day xD It's the real-life version of the stories hahaha. Raise the Python serpent lol, hail Satan
import subprocess

def execute_dynamic_python_script(code):
    """
    Takes a Python code snippet as input, writes it to a temporary file,
    executes it, and returns the output.
    """
    # Write the code to a temporary Python file
    with open("dynamic_script.py", "w") as f:
        f.write(code)
    # Execute the Python script and capture the output
    try:
        result = subprocess.check_output(["python", "dynamic_script.py"], universal_newlines=True)
        return result
    except subprocess.CalledProcessError as e:
        return f"Error in script execution: {e.output}"

# Example: Create a Python script dynamically
user_code = """
# Example Python script
def calculate():
    return 5 + 7 * 3

result = calculate()
print(result)
"""

output = execute_dynamic_python_script(user_code)
print(f"Output from the script: {output}")
Would this work to get the AI to program whatever tool it needs?
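In principle something like that can be wired up as a tool, though executing model-generated code this way is unsandboxed, so treat it as an experiment only. A rough sketch on top of the function above, assuming the newer ollama-python that accepts plain functions as tools (the model name is a placeholder):

from ollama import chat

messages = [{'role': 'user', 'content': 'Write and run Python to compute 5 + 7 * 3.'}]
response = chat(model='llama3.1', messages=messages, tools=[execute_dynamic_python_script])

for call in (response.message.tool_calls or []):
    # The model supplies the generated snippet as the `code` argument.
    print(execute_dynamic_python_script(**call.function.arguments))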