Really appreciate the clarity and insight in your videos! The way you break down complex topics like the Assistants API makes it accessible and interesting. Keep up the fantastic work!
So glad you learned something from this video, Scott! I appreciate you 🙏
very cool stuff… looking forward to them adding all the features and taking it out of beta 💪😎
Me too 💯
“This time it lied to me, so maybe we’re getting somewhere” - The funny thing is that actually makes sense 😂
(As always, your stuff is *so* clear, thanks!)
Haha good catch on that line, forgot I said that!
Love reading all of your thoughts and comments, Dave 🙏
Random question: with the auto blogger code on Bubble, can you have it post to Wix as opposed to WordPress?
Thanks for the feature suggestion, I'll work on it!
My mans got Newsletter GPT 2.0 hiding in that dashboard 👀😁
Haha! I was testing something for the next version 😉
@@WesGPT can’t wait to see it! Out of curiosity, when you first showed it, you had a knowledge base of several newsletters for it to refer to. Later you removed that, and gave it one newsletter example in the instructions area instead. Why that change? I always assumed a knowledge base would be more powerful, but of course I could be mistaken, and obviously you switched it up for a reason.
@@Andru Good question!
I changed that because of a few things I learned about Custom GPTs:
1. The more bloated the prompt, the less effective the custom GPT is (so more examples bloats it). This is also the reason why I'm moving away from ChatGPT and instead trying to build these GPTs out in Bubble (more control).
2. No matter how hard I tried, it would rarely look at the knowledge base of examples. Even if I asked it multiple times in the prompt.
3. It seems to read "Morning Brew" at the start and then make up its own idea of what that type of newsletter would look like.
However, I still have all of those collected examples in a document. Do you want it so that you can experiment building a Newsletter GPT yourself?
Dude, I want it! I've been experimenting with the multi agent frameworks (chat dev, AI Town, autogen, etc) interested in checking what's under the hood🤖🧠
Just checked your course. Interested. However the deal killer is Bubble = $ monthly.
Do you have a work-around / substitute?
Damn! I'm going to say no workaround for this specific course (because all of it is building AI tools IN Bubble using their API connector)
It's only because Bubble is what I know best 😊
I won’t be trying out assistants until it’s more integrated. I would love to improve my GPTs using actions though. Have you covered that process yet?? Thanks Wes! 💪. Peace ✌️
Completely forgot about the Actions feature in Custom GPTs! I'll see if I can make a video about that soon 😊
Hey brother, I didn't get a chance to check the comments, so I'm not sure if you already know, but it is a very powerful tool if you do a little more digging. It can search the web, and it honestly is kinda bonkers what the possibilities are. If you need some guidance, I'll do my best to explain it in a more straightforward and simplified manner if requested. I've learned a lot from this channel. Thanks for your videos and the effort you put into the channel!
I don't think it can search the web yet! Unless I missed something?
I'm not sure the functions feature is intended to "search the web" as a function, because on its own it isn't paired with the second half of what a user must define and structure within the JSON format schema.
@@WesGPT
Supplied by 3.5 Turbo. Let me know if you would like a detailed overview.
OpenAI Assistants API purposes:
1. **Smart Chatbots:**
- **Function:** Enables chatbots with detailed information on diverse topics.
- **Applications:** Customer support, education.
2. **Generalized Virtual Assistants:**
- **Function:** Handles tasks like scheduling, answering questions, and providing recommendations.
- **Applications:** Personalized virtual assistants.
3. **Automation in Applications:**
- **Function:** Integrates with apps for automating tasks like content generation and summarization.
- **Applications:** Writing tools, content summarization, document creation.
4. **Cross-Device Communication:**
- **Function:** Facilitates natural language communication between devices.
- **Applications:** Voice-activated devices, smart home automation.
5. **Content Creation Assistance:**
- **Function:** Aids in content creation with suggestions and improved language flow.
- **Applications:** Blogging, content creation.
6. **Interactive Educational Platforms:**
- **Function:** Engages learners through dynamic conversations and personalized learning.
- **Applications:** AI tutors, language learning apps.
7. **Smart Data Analysis:**
- **Function:** Analyzes and summarizes data, generates reports.
- **Applications:** Business intelligence, automated reporting.
8. **Dynamic Conversational Interfaces:**
- **Function:** Integrates natural language into software applications.
- **Applications:** Improved user interfaces.
9. **Context-Aware Professional Assistants:**
- **Function:** Provides context-aware assistance for professionals.
- **Applications:** Field-specific professional assistants.
10. **Multi-Modal User Interfaces:**
- **Function:** Combines text, speech, and visuals for interactive experiences.
- **Applications:** VR, AR, or MR interfaces.
The API's adaptability depends on how users apply it and on its task efficiency across diverse scenarios.
Thx Wes, you made this right when I was trying to figure out how to use it. Love your videos! - Cap
Also, I don't know if I will get into Assistants. I'm trying to make a DougDoug AI character bot for my streams, and it has been hard. I've got everything ready; I just need to fine-tune the responses somehow using the API.
Great minds think alike!
I assume you're trying to make your AI character using DALL-E 3 on the API?
Do you have to join to get the two free ones?
Hey 😊
Which free things are you talking about?
Actually got it to use Bing.
Can you please tell us how? Thanks!
No way, how!?
Do you have an email for questions that you don't want to ask here?
Yep, you can send it to heywesfrank@gmail.com
Hey Wes, you're misinterpreting what function calling is in the context of OpenAI models. These are not functions the model can call, these are made up functions you as the programmer give GPT to force it to respond with a specific json object that you can later use in your own application code.
So I mentioned that functions are built by you and then filled in with the model's outputs. Like in the get_stock_price example: that JSON body is written by you, and GPT-4 fills in the stock symbol. Did I get it wrong?
@@WesGPT You're right. You create a function like the examples, with parameters, etc., in order to force the model to always reply with that specific JSON object: proper JSON, without anything added before or after the response, and without the format changing with every completion.
This was a huge deal for programmers using the API, because before, the model would change the format of the response every time or produce improper JSON, and that would break the program.
Now, by using function calling, you know exactly what format the response is going to come in and can process it accordingly, and be confident that ~95% of the time the model will abide by the format you imposed using the function calling feature.
Also, it is possible that the function you're defining for the model actually exists, either in your codebase or as an external API, in which case you simply take the model's return and pass it over to that function or API. But this is not a necessity: you may just want the response in a specific format, and for that you create a fake function definition so the model CONSISTENTLY responds with the data formatted in the specific way you defined.
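[Editor's note: the "fake function" idea described above can be sketched roughly like this. It's a minimal illustration only: the schema shape follows OpenAI's function/tool definition format, and the sample model response is made up for demonstration.]

```python
import json

# A "fake" function definition: no real code needs to exist behind it.
# Its only job is to force the model to reply with arguments in this exact JSON shape.
get_stock_price_tool = {
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": "Get the current price for a stock ticker symbol",
        "parameters": {
            "type": "object",
            "properties": {
                "symbol": {"type": "string", "description": "Ticker, e.g. MSFT"}
            },
            "required": ["symbol"],
        },
    },
}

# When the model decides to "call" the function, the API returns the arguments
# as a JSON string that you parse in your own application code:
model_arguments = '{"symbol": "AAPL"}'  # example of what the model might send back
args = json.loads(model_arguments)
print(args["symbol"])  # -> AAPL
```

Because the arguments always arrive as JSON matching your schema, your code can parse them the same way every time instead of scraping free-form text.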
Bro, I don't know where to start, but everything you show in this video is wrong.
The reason you have Assistants is to implement them in your own apps; that's what you can't do with custom GPTs, because those are built into a product of OpenAI (GPT Plus).
And the reason your function call to get the stock price did not work is that you have not implemented any code to get the stock price.
You can't just write get_stock_price and the AI will fetch the stock price 😂 You would need to make a function in your program that calls some stock API or something.
So? Not sure I get what you're laying down, but I'm new to assistant creation. Would you mind sharing a bit more? I see that you have a YouTube channel. Can you create a video? Thanks. 🙏. I LOVE WesGPT. As far as I am concerned, he is number one in being able to explain all this in a way that I can follow. He has freely shared some workflows that I am hoping to monetize over time. This is a great and helpful community. I hope that you will participate in helping us grow! ✨🙏
Whoa 👏👏. Looking forward to the Shopify thing coming. Thank you 🙏 🎉
@@CM-zl2jw Yes, I like WesGPT's videos too, and I appreciate his style of explaining. But in this video he was just uninformed.
@@CM-zl2jw In this video he was wondering why the "get_stock_price" function did not work for him.
I can explain why. That function was just an example in the documentation of the API. The function needs to be implemented first to give it functionality.
Let's say you wanted to give your assistant the ability to fetch a stock price:
What you would have to do is build a function in your code that fetches the stock price. You would do that by pinging some API that offers that service.
Then you tell your assistant that you have this function, explain what it does, and describe what input the assistant should feed into it, for instance a company name like "Microsoft".
Then, when you ask "What is the current stock price of Apple?", the assistant would automatically know that it should call this function, execute the code, and give you the answer.
You could do this with whatever functionality you want, as long as you know how to code it.
Sorry, I must not have explained that part of the video correctly.
I understand why the get_stock_price didn't work - I was trying to ask why they didn't give us two examples where we could actually see the results of our API calls. They gave us two examples where our app, or the Assistants model, would need Web Browsing enabled in order for the call to work.
kool
Glad you liked it 😊
first
Haha you got it 💯