This is exactly what I needed. I was struggling with extracting the message from the message object group. This short 10 min video just solved a problem I was working on for 4 hours.
That makes me really happy to hear, glad you solved your problem!
Excellent Bro!!! Clear and concise👍
Thank you :) Hope you enjoyed it.
Thanks for the video man. This helped me understand how I can transfer my custom-built OpenAI chatbot to my website/SaaS. Basically it's an interplay between VS Code and OpenAI, and then once I'm happy I just use the API and chatbot ID and somehow plug it into my website on the backend. At least that's what I think. We'll see.
Glad to hear it helped you get going. There's a lot of different ways to build what you want, so the best thing is just to go ahead and try it out like you are doing. Good luck!
I see how much I have been overcomplicating this. Thanks!
Glad it helped!
Such a simple and easy explanation. I wish you were my college professor.
Thank you :)
AI agents and tools are revolutionary! I've been enjoying how SmythOS makes creating no-code multi-agent systems so simple. It creates a plethora of opportunities for automation. #SmythOS #AItools
To the point, good quality. Subscribed.
Thank you!
Wow you are amazing. Everything is well explained and completed. I am so impressed.
Thank you so much! I really appreciate the feedback :)
so concise and clear! thank you!
Glad to hear it was useful!
Another awesome video! Thanks!
Glad you enjoyed it! Thank you :)
Dude you are much better looking than Sam. Use your own pic and be proud :-)
I'm flattered. Thank you!
This is why Ai can never replace humans.
Love your videos. Thanks for sharing 👍
Thank you, glad you enjoyed it!
That was very helpful, Thanks a lot.
Glad it was helpful!
Very good video. Thank you 👌
Glad you liked it!
Good tutorial! Thanks! 👍
Thanks!
great work man! thanks for sharing
You're welcome! Glad you liked it!
Great video. I will try it a little bit.
Thank you. Good luck!
Thanks a lot, it's really helpful. Can you make a part 2 on building a custom UI for the AI Assistant using Streamlit?
❤ Thanks a lot. Can you tell us more about how personal information is kept safe with OpenAI?
Thx so much! I didn't know how to make an assistant as good as Copilot or ChatGPT, or even something to tell me what the weather is lol 😅
Thanks! Glad it helped :)
Great video! What steps would need to be included to allow the Assistant to use data from a 3rd party API? I’m not a coder and hope there is a way to connect to an external API from the Playground using Functions tool?
That's probably a bit more complex; I think currently you do need a bit of coding to get that working.
So the OpenAI assistant will just give you the raw data to send to the 3rd party API. You'll have to write (or generate) some code to use that input and make the call yourself, then you can decide what to do with the response (e.g. feed it back to the assistant, or do something else with it).
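Roughly, the loop looks like this (a sketch assuming the v1 OpenAI Python SDK; the third-party URL and the IDs are placeholders, not something from the video):

```python
import json

import requests
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

thread_id = "thread_..."  # placeholders: the thread/run you started earlier
run_id = "run_..."

run = client.beta.threads.runs.retrieve(thread_id=thread_id, run_id=run_id)

# When the run pauses with "requires_action", the assistant is handing you
# the raw arguments it wants sent to the 3rd-party API.
if run.status == "requires_action":
    tool_outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        args = json.loads(call.function.arguments)
        # Make the external call yourself (hypothetical endpoint).
        resp = requests.get("https://api.example.com/lookup", params=args)
        tool_outputs.append({"tool_call_id": call.id, "output": resp.text})

    # Hand the results back so the assistant can finish its answer.
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id, run_id=run_id, tool_outputs=tool_outputs
    )
```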
Great video! Thank you so much. I have two questions. The data file (a PDF) that I gave to the assistant contains images related to each subject; however, the assistant is just giving me the text related to the question without the images. How can I solve this problem? Is the assistant capable of giving back the images + text in its response? Thank you again!
Nice video!!!!!
Thank you!
Awesome, thank you! @pixegami Do you know how to print to screen which tool the assistant is using while answering your query? Assuming there are multiple tools available to the assistant. Thanks.
Hmm, I think maybe if you enable the verbose setting in the assistant, you might be able to get more details out of it? Not quite sure, but I think it'll probably be somewhere in the OpenAI docs. I'd be shocked if they didn't expose that feature :P
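I haven't double-checked this against the latest docs, but one place to look is the run steps listing in the v1 Python SDK, which should report any tool calls a run made. A rough sketch (IDs are placeholders):

```python
from openai import OpenAI

client = OpenAI()
thread_id, run_id = "thread_...", "run_..."  # placeholders for a finished run

# List the run's steps and print any tool calls it made along the way.
steps = client.beta.threads.runs.steps.list(thread_id=thread_id, run_id=run_id)
for step in steps.data:
    if step.step_details.type == "tool_calls":
        for call in step.step_details.tool_calls:
            print("Tool used:", call.type)  # e.g. "retrieval", "code_interpreter"
```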
Thank you for the great video pixegami! I want to build my own data analyst assistant that would do Excel work, data visualization... Based on your experience, do you think I should use three assistants, each specialized for a certain task, or one general assistant?
Can you force it only to look into your data for its responses? Very nice video. Thank you.
I'm not sure how possible that is from the Assistant tooling itself. But (either using this, or using something like Langchain and RAG directly) you could also implement your own validation that the data it uses must closely relate to the material you've given it.
Would you like to be a Judge?
1. CodeCraft Duel: Super Agent Showdown
2. Pixel Pioneers: Super Agent AI Clash
3. Digital Duel: LLM Super Agents Battle
4. Byte Battle Royale: Dueling LLM Agents
5. AI Code Clash: Super Agent Showdown
6. CodeCraft Combat: Super Agent Edition
7. Digital Duel: Super Agent AI Battle
8. Pixel Pioneers: LLM Super Agent Showdown
9. Byte Battle Royale: Super Agent AI Combat
10. AI Code Clash: Dueling Super Agents Edition
But the GPTs in the GPT store are capable of accessing external APIs, so why aren't the assistants capable of doing so? Do you know any alternate models which allow for external API access?
I'm not sure why they allowed the GPT store to do that (do they really!?), but not custom assistants. But I guess that's something OpenAI expects you to handle in your own server side logic.
It is what it is.
great video
Thank you :)
Perfect
Thank you!
Great video! I've been struggling with the API connection with my assistant. Do you have equivalent code for R?
Ah, sorry I don't have a ton of experience with R. :(
Hey - what do you use to edit the gradient circle around your camera?
I just prepare the graphic in photoshop (it's just a gradient circle), then use OBS to record. I put a circular mask around my camera image, and just stick that on top of the circle.
ImportError: cannot import name 'OpenAI' from 'openai'
This is a library compatibility problem. Any fix? ChatGPT has me running in circles, literally, with the same two sets of fixes. Thanks. Still a good video!
The problem was caused by the VS Code IDE. When I created a new venv and ran main.py as a script, it was fine. So I need to either use a new IDE or find the config issue in my VS Code. Thanks.
It's probably that you need to select the Python environment to use. By default, I think VSCode might not know which Python environment you need to use. Don't worry, it's quite a common snag that I get caught on a lot as well.
Here's how to fix it: code.visualstudio.com/docs/python/environments
Any suggestions on fast cheats for creating these JSON or YAML schemas? Something in VS Code, feeding a question into GPT? I haven't been able to get them to automatically generate these template styles.
Hmm, for something like this, since it's usually a "one-off" cost of developing an assistant, I think it's probably best to write the schemas by hand so you can validate them and make sure they match your spec. You'll always get variable results asking LLMs to help with schemas like this that aren't widely used or documented in their training data.
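If it helps, this is the kind of thing I mean by writing it by hand: a small function schema in the shape the Functions tool expects (the "get_stock_price" tool here is just an illustration, not from the video):

```python
# A hand-written function schema, passed in the assistant's tools list so
# the model can request this call with structured arguments.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_stock_price",
            "description": "Look up the latest price for a stock ticker.",
            "parameters": {
                "type": "object",
                "properties": {
                    "ticker": {
                        "type": "string",
                        "description": "Ticker symbol, e.g. AAPL",
                    }
                },
                "required": ["ticker"],
            },
        },
    }
]
```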
Can the custom knowledge source be a Python package like Openbb for financial data?
Thanks, your video helped me understand. I have one platform account and one assistant, and I've already set up all the rules with a file. But I'm still confused about how to connect the assistant, via the API, to WHATSAUTO/Bot Auto AI, which needs an OpenAI API. Yes, I've watched your video and used Python to make a thread and respond to all the questions, but where is the API for the assistant that I can input into WHATSAUTOAI?
Please tell me, according to your explanation, can OpenAI now allow external programs (such as chat programs) to directly call and use an OpenAI assistant_id = 'asst_XXX....' and an OpenAI vector_store_id = 'vs_XXX'? The IDs above come from my OpenAI developer mode. Can they be called directly? I've tried using the method described in your video but still can't find a solution...
Have you integrated the API key yet? If you use Linux, add the API key to your .bashrc.
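A quick way to sanity-check that the key is actually being picked up (assuming the v1 Python SDK, which reads OPENAI_API_KEY from the environment by default):

```python
import os

from openai import OpenAI

# After `export OPENAI_API_KEY=...` in .bashrc (and opening a fresh shell),
# the key should be visible here, and the client will pick it up automatically.
print("Key loaded:", bool(os.environ.get("OPENAI_API_KEY")))
client = OpenAI()
```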
Great video. I'm still new. How is this different from asking chat GPT to summarize a document for me?
It lets you configure the GPT with a bit more context upfront, so if you wanted to build an app or a UI on top of this for other people to use, this gives you a head start.
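For example, the upfront configuration lives on the assistant itself rather than in every prompt. A minimal sketch (the file name and instructions are placeholders; this uses the original retrieval-style setup from the video, and newer SDK versions have since renamed some of these fields):

```python
from openai import OpenAI

client = OpenAI()

# Upload a reference document once, then bake it and the instructions into
# the assistant, so every future conversation starts from that context.
file = client.files.create(file=open("handbook.pdf", "rb"), purpose="assistants")

assistant = client.beta.assistants.create(
    name="Docs Helper",
    instructions="Answer questions using only the attached handbook.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)
print(assistant.id)  # reuse this ID from your app or UI later
```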
Why can't I see GPT-4 and beyond like in your video? I just see 3.5 and its variations!
Sorry, I'm new to the OpenAI Assistants API. I already have 'credentials' from my custom GPT in the GPT store. How can I connect it to my separate app via the Assistants API? Not recreating the instructions in the Assistants API, but just connecting to the custom GPT in the GPT store that I already created earlier.
I think the Custom GPT and the Assistant APIs are slightly different products in OpenAI's backend, so you'll probably have to re-implement your Custom GPT as an Assistant first, then use that as an API in your app.
So this is basically doing a task which LangChain or LlamaIndex is doing?
Sort of. The UI/UX is a little easier and more managed. But you can definitely also implement something like this yourself with Langchain.
Thank you
Retrieval is not showing on my screen... what do I need to do?
Does the ai assistant have the ability to retain memory?
How would you get the output to come out of a chatbot UI on the frontend instead of out of the terminal?
Hey there! To get the output in a chatbot UI, you'd need to build a frontend interface and send the assistant's responses there instead of printing to the terminal. Basically, you'd make API calls from your frontend, then update the UI with the responses. If you're new to this, starting with a simple web framework like Flask or FastAPI could be a good way to go.
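A very rough FastAPI sketch (the endpoint name and the `ask_assistant` helper are hypothetical; it assumes you've wrapped the thread/run/poll logic from the video in a function):

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class ChatRequest(BaseModel):
    message: str


@app.post("/chat")
def chat(req: ChatRequest):
    # ask_assistant() is a hypothetical helper wrapping the assistant-calling
    # code from the video; it returns the assistant's text reply.
    reply = ask_assistant(req.message)
    return {"reply": reply}
```

Your chatbot UI would then POST the user's message to /chat and render the returned reply.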
Very cool, but do you know the limit on the PDFs you can upload? The limit seems a bit low unfortunately. I just tried creating a Plugin, but this is the same thing, right?
Yeah, the Assistants API is pretty cool! I'm not sure about the exact PDF upload limits - you might want to check the official OpenAI docs for the most up-to-date info on that. As for Plugins, they're actually a different feature from the Assistants API (from what I understand, it's more to do with tool-use or function calling).
As far as I understand, it is impossible to call external API directly from the chat GPT agent. It is only possible to call an assistant from the Python code locally on a PC.
Yup, that's my understanding as well.
Is there any way to personalize assistant on mobile app?
I guess you can write a mobile app as a frontend to interact with your own AI assistant API?
Can I adjust tokens and temperature?
Hmm, I don't think the Assistants API allows you to do that. If you want a finer level of control, it might be best to implement your own agent using the model directly and something like Langchain library for utility.
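If you do go the direct route, the Chat Completions endpoint does expose those knobs. A minimal sketch (prompt and values are just placeholders):

```python
from openai import OpenAI

client = OpenAI()

# Calling the model directly exposes the sampling controls the early
# Assistants API didn't let you set.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize the report in one line."}],
    temperature=0.2,
    max_tokens=200,
)
print(response.choices[0].message.content)
```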
I'm unable to save documents for reference... any idea?
Hmm, did you check if the file format of the doc (e.g. PDF, text, etc) and the file size is supported by OpenAI?
May I ask why I only have GPT-3.5 Turbo and no GPT-4? I also paid.
Hmm, do you have the premium membership subscription? That's all I have and it shows up for me. I don't know if there's any geographical restrictions... (I'm in Australia).
Hey @willyu7515, all you need to do is add some credits to your account (minimum $5). It's terrible that they don't mention it, but I had the same problem, and as soon as I added some money, the GPT 4 popped up. Hope it helps!
There is no Retrieval in the tools on my screen. Can you guess what the problem is?
But the Assistants API still doesn't have history, right?
I think with "threads" you can have history for that thread. You'll just have to store the thread ID to retrieve it?
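Something like this (a sketch assuming the v1 Python SDK; the assistant ID is a placeholder for whatever you created earlier):

```python
from openai import OpenAI

client = OpenAI()
assistant_id = "asst_..."  # placeholder: your existing assistant

# First turn: create the thread once and persist its ID (DB, session, etc.).
thread = client.beta.threads.create()
saved_thread_id = thread.id

# Later turns: reuse the same thread so earlier messages stay in context.
client.beta.threads.messages.create(
    thread_id=saved_thread_id, role="user", content="And what about step 2?"
)
run = client.beta.threads.runs.create(
    thread_id=saved_thread_id, assistant_id=assistant_id
)
```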
Hi, thank you very much for the video, very helpful. I am getting this error
2024-06-12 11:53:56,194 - openai - INFO - error_code=None error_message='Unrecognized request argument supplied: assistant_id' error_param=None error_type=invalid_request_error message='OpenAI API error received' stream_error=False
2024-06-12 11:53:56,194 - __main__ - ERROR - Error calling OpenAI API: Unrecognized request argument supplied: assistant_id
What should I do? GPT doesn't give a correct answer for this.
Hi my friend, it's really good and useful ❤❤ I just have a question: is it possible to suddenly run out of tokens? If so, how can I solve this?
This is how the custom GPTs work under the hood.
I have a question, is there a fee for choosing different models and using this API key?
Yup, there are fees for using the OpenAI API, and they vary depending on the model you choose. For the most up-to-date pricing info, check out OpenAI's official pricing page: openai.com/pricing
It's usually quite cheap though, and every new model/update tends to make it cheaper and cheaper.
Hello, can I create a small database with YouTube URLs to obtain much better tutorials?
Can you do a Django tutorial?
Thanks :) Django is requested quite a lot, so I've scheduled time to work on a tutorial in the next 3 months or so :)
Doesn't it work with Excel files?
I'm not sure, I think it does? But if not, you can probably export the Excel file into a CSV (which I'm pretty sure it should work with).
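If Excel files don't work directly, the export is a one-liner with pandas (hypothetical file names; reading .xlsx needs openpyxl installed):

```python
import pandas as pd

# Convert the spreadsheet to CSV before uploading it to the assistant.
pd.read_excel("data.xlsx").to_csv("data.csv", index=False)
```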
Can it keep a context?
Yup, it can! The Assistants API has a feature called "threads" that maintains conversation context. You can find more details in the OpenAI docs: platform.openai.com/docs/assistants/overview
Can I build this assistant (bot) with my free OpenAI account? Or do I need a paid account?
@anyone please answer!!
You'll need a paid OpenAI account to use the Assistants API. The free tier doesn't include access to this feature (at least as far as I know right now).
I have the free account and it's working fine, I only paid for the assistant usage (min. $5).
I feel stupid because I still don't understand function calling and adding APIs and such 😅 I mean, I'm not a programmer... that doesn't help :P
So... can you add an external API to the schema? And when you finish the assistant... how do you then use the API for that assistant somewhere else? I'm going to watch those parts again haha.
Unfortunately it's not free; it responds with quota exceeded.
Yup, I think it might require an OpenAI account. Have you tried changing it to a different (older) model?
Define "easy"
2:10
It is not free. :(
The way they constructed this API, with runs and threads and messages - it's hilariously unintuitive and bad. Almost as bad as a Google API - I mean not that bad but still.
It's like they purposely decided to make it incomprehensible.
I agree it was incredibly unintuitive to use. But designing APIs like this will always have a lot of trade-offs, and I think the designers here were really positioning it for high-scaling, async-first use cases.
*most livable *most boring :D
Well... 🤷♂️
Thank you for this great tutorial🤍
Why doesn't it show me GPT-4, only GPT-3.5? And can I upload images to the assistant?
You need to get the premium subscription for GPT-4, and I think it does accept images.
Yup, it seems as someone else mentioned, you might need to have the OpenAI premium subscription to access GPT-4.