Thanks for all your content. It's really good, congrats!!! Do you know how I can reset the context of the conversation? I want to save the conversation and reset the history if someone else accesses the link, but not lose the history in my platform. Thanks in advance.
Right now I'm not interested in asking for a sign-up or anything like that to use the platform. I just want to refresh all the context if someone from another IP accesses the platform. Am I being clear? Sorry, English is not my first language...
We suggest you set up privacy rules around Current User, as it sounds like you're trying to keep conversations private between different visitors. Is that right? By default any visitor can be referred to as Current User, and Bubble uses a cookie to detect if the same visitor returns in the same session. So if you set up privacy rules for your Messages, Chat and User data types (e.g. This Message's Creator is Current User) and then test using different browsers/private browsing, we think you'll get the result you're looking for. However, when using OpenAI there is a responsibility on you to know your users, as ultimately anything submitted through the API is linked to your API account, so we advise using OpenAI with logged-in users.
There isn't a limit on messages but there is a limit on how many tokens can be sent. Details can be found on the OpenAI website: platform.openai.com/docs/models/overview
One more question: I did a “run as” another user and I was able to see the conversation when run as a different user. Is this supposed to happen? If so, how to fix it so that only current user sees the conversation? Thanks as always👍
Specifically, how do I set the messages so they are connected to the user, e.g. "current user's message"? This way I don't have to set it at the privacy rule level. Thanks in advance for your help😊
You don't need to have a Message field on User. Your Messages have a Creator field built in that will be the User who created the message. So just add a Privacy Rule to the Message data type like This Message's Creator is This/Current User. Then test with multiple users to make sure it works.
After I initialize the call, it shows this message. Raw response from the API, status code 429: { "error": { "message": "You exceeded your current quota, please check your plan and billing details.", "type": "insufficient_quota", "param": null, "code": null } }
Double check because the ChatGPT Premium service is different to the OpenAI API billing. Here's the API billing portal: platform.openai.com/account/billing/overview
Hey, that was really great; it showed me quickly and easily how to set up the ChatGPT clone. However, when I press New Conversation, even if I don't type anything and press send, it always creates a new conversation, with the conversation being blank in the conversation sidebar. Is there any way to simply not allow the conversation to be created if no question is inputted? Thanks, let me know if I didn't make it clear what I'm trying to say :)
Thanks for the question. What if you set the data source for your Conversation group to be the most recent conversation? That way, if a New Conversation is created and then the page is refreshed, the user will be presented with the empty conversation created before.
@@planetnocode thanks for the response. Unfortunately, when I do this it still creates another blank conversation when pressing New Conversation, but I would like it to just go to the conversation that hasn't been used already. Thanks
Yes you can use the 'system' role in the first message. A simple example from the OpenAI API documentation would be: { "role": "system", "content": "You are a helpful assistant." } Source: platform.openai.com/docs/api-reference/chat
@@planetnocode thanks for the response! Would this be in the Workflow tab? And would it be on step 4 of the 'Button Send is clicked' workflow? I already have JSON = {"role": "assistant", "content": Result of step 3 (OpenAI - Send message)'s choices: first item's message content: formatted as JSON-safe}, so where would it go? Thanks, and sorry, I'm new to Bubble :)
Can someone help me? I want to fetch data from a third-party API, send that data to a GPT model and get a result, and I also want to add knowledge to this chatbot.
This should be 100% possible with Bubble, but there are quite a few different steps, too many to explain in the comments section. May we recommend booking a 1-to-1 call with Matt from the videos? Find out more about 1-to-1 Bubble support here: www.planetnocode.com/bubble-coaching/
What error are you receiving? It's likely just one small punctuation related issue with your JSON. We can help you fix this during a Bubble coaching call, working 1 to 1 with one of our Bubble experts: www.planetnocode.com/bubble-coaching/
You're giving away gold quality content man! Thanks for the great value
Thanks for the feedback! We have even more videos on our website www.planetnocode.com
Excellent tutorial - I suggest one thing: it seems the messages changes break it. EDIT: FIX BELOW!
From 19:11 onwards, the change from "messages": [{"role": "user", "content": ""}] to "messages": [] breaks it.
EDIT: How to fix it: change the body parameter's value to {"role": "system", "content": "You are a helpful assistant"}
I'm running into the same issue here but don't quite understand what you mean. Would you mind sharing your code? I thought I understood what you meant but it didn't fix my issue
Facing the same issue and don't understand your solution. @planetnocode please explain it in more detail.
Same issue here. Does anyone know how to fix this?
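For anyone hitting this, the shape of a working body can be sanity-checked outside Bubble. A minimal Python sketch (the message contents are made up) of what the final request body should contain, and why an empty "messages" array fails:

```python
import json

def build_body(model, messages):
    # The Chat Completions endpoint expects a model name plus a
    # NON-EMPTY "messages" array of {role, content} objects.
    # "messages": [] on its own is rejected, which is why that
    # change from the video breaks the call.
    return json.dumps({"model": model, "messages": messages})

body = build_body("gpt-3.5-turbo", [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "List 5 places to visit in Paris"},
])

json.loads(body)  # round-trips cleanly, so the body is valid JSON
```

In Bubble the same thing means making sure the dynamic value that fills the array always produces at least one well-formed message object.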
Awesome video! helped me a lot!!
I believe I'm following these steps correctly, but I don't see a '(body) messages' input field like the one present at 19:23. Mine reads ' only when '
Is your dynamic field 'messages' checked as private? Uncheck private if so.
Thanks for the video! Is there a way to generate the output animated word by word? Or does the entire response have to be returned and then displayed? Basically if there is a way to give the illusion that the bot is typing the words while the words are being processed similar to how chatgpt actually functions
The core Bubble platform doesn't support the API streaming protocol 😞 However, we have recorded a video demonstrating a Bubble.io plugin that adds the streaming function.
Watch here: www.planetnocode.com/tutorial/plugins/openai/chatgpt-and-openai-real-time-streaming-plugin-demo/
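For context on what such a plugin handles under the hood: the streaming variant of the API returns Server-Sent Events, one small JSON delta per chunk. A rough Python sketch of parsing such chunks (the sample lines are hypothetical, modelled on the documented wire format):

```python
import json

def extract_deltas(sse_lines):
    """Collect the incremental text pieces from Chat Completions
    streaming output, which arrives as 'data: {...}' SSE lines."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":  # sentinel that ends the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:  # first chunk carries only the role
            parts.append(delta["content"])
    return "".join(parts)

# Hypothetical sample of the wire format:
sample = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo!"}}]}',
    'data: [DONE]',
]
print(extract_deltas(sample))  # prints "Hello!"
```

Bubble's API Connector can't consume this event stream, which is why a plugin (or an external service) has to do the chunk handling.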
Hi bro, when I build this layout everything works very well, but after 4 or 5 messages the system starts to give an error. When I create a new conversation there is no problem, but when the conversation gets longer it gives the error again. I think reading all the JSON data makes the payload very large.
I tried to implement this message chat workflow in my project, but I failed to display the chat data in the messages group on the page, even though my whole setup is the same as yours. I am missing something but I can't figure out what, and it's driving me crazy. I added the constraint on the Search for Messages to find the parent chat, all is the same, and the database receives all messages correctly, yet nothing displays on the page. I can't figure it out. Thank you for your time.
Did you fix it? Are you sure you put Current cell's Copilot's JSON under the message text in the repeating group of the main body?
This is excellent, thanks! It’s a shame you can’t stream API response. For whatever reason it’s extremely slow and makes for a horrible UX
oh, thats def. a no go…
I found just one thing: if you make every call to OpenAI (EachItemJsonWith), then yes, it remembers the previous conversation, but it sends all the messages from that conversation on every call, and that wastes a lot of tokens from your OpenAI API. For example, if you have a chat with 20-30 messages or even more and you just send one new message about something, the new request to the API sends those 20-30 messages plus the new one. It's a bit of a problem and too expensive, I think... thinking right now how to solve that...
Sending all previous messages is currently the only way for the latest message to be aware of all previous messages. You could experiment with only sending the last 5 messages, for example, as one way to reduce the token cost.
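That last-5-messages idea can be sketched in Python; the helper name and field names here are just for illustration:

```python
def recent_messages(history, keep=5):
    """Keep any leading system prompt plus only the most recent
    `keep` messages. Trade-off: the model loses awareness of
    anything older than the window."""
    system = [m for m in history if m["role"] == "system"][:1]
    rest = [m for m in history if m["role"] != "system"]
    return system + rest[-keep:]

# A conversation with a system prompt and 30 user messages:
history = [{"role": "system", "content": "You are a helpful assistant"}]
history += [{"role": "user", "content": f"message {i}"} for i in range(30)]

window = recent_messages(history, keep=5)
len(window)  # 6: the system prompt plus the last 5 messages
```

In Bubble the equivalent is constraining/sorting the Do a Search for Messages and taking only the last few items before formatting them as JSON.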
When setting up the workflow, under Plugins, I see many options for OpenAI but not send message. For example, I see ChatGPT endpoint, Edit Text, Generate Completion and Moderation Text. But not send message. Did I miss a step in the set-up?
Thank you for your comment! It seems that you are interested in plugins created by others that connect with the OpenAI API. However, these plugins may have certain limitations. That's why this tutorial recommends using the Bubble API Connector. With this connector, you can essentially create your own plugin and make API calls, such as the "Send Message" function.
To achieve a similar setup as shown in our video, please install the Bubble API Connector plugin.
Thank you!! I just previewed my app, and when I ask ChatGPT a question it responds only with "GOT IT" - any idea where I went wrong? @@planetnocode
Hi, thank you for the video and your effort. I am doing everything exactly as shown. It all works until I change the Body to {"model": "gpt-3.5-turbo", "messages": [] }. However, after that, when I try to chat with the interface, I get an HTTP 400 error: "We could not parse the JSON body of your request. (HINT: This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON.)"
What does that mean? Any idea how I could resolve this error?
Please reach out to support@planetnocode.com and give an example of the JSON you are putting in. We'll see how we can help.
@@planetnocode I am encountering the same issue: basically you can't Reinitialize Call on the API body once you insert the dynamic value, because it is not deemed JSON-safe. Now I can't add that API call as an action; any solution for that?
Getting the same issue, the is bugging it out@@planetnocode
Hi! I have the same issue. Did you find a resolution for this? :)
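A small Python sketch of why this 400 usually happens: pasting a reply into the body by plain text interpolation breaks the JSON whenever the reply contains quotes or newlines, while proper escaping (which is what Bubble's ':formatted as JSON-safe' operator is for) does not. The reply text here is made up:

```python
import json

reply = 'Sure! Here is a "quoted" phrase.\nAnd a new line.'

# Naive interpolation produces invalid JSON: the inner quotes and
# the raw newline break the body, which is exactly the
# "could not parse the JSON body" error.
naive = '{"role": "assistant", "content": "' + reply + '"}'
try:
    json.loads(naive)
    ok = True
except json.JSONDecodeError:
    ok = False  # parsing fails, as the API reports

# json.dumps escapes the text AND adds the surrounding quotes,
# roughly what ":formatted as JSON-safe" does in Bubble, so don't
# add your own quotes around a JSON-safe value.
safe = '{"role": "assistant", "content": ' + json.dumps(reply) + '}'
parsed = json.loads(safe)  # now valid
```

So in the API Connector, every dynamic message content needs to go through the JSON-safe formatting, without extra quotes wrapped around it.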
Thank you a lot. Is there also a way to give the system instructions so I can use all of this for a more detailed purpose? Can I just add a system instruction in the Api setup?
Yes you can include a message with the role set as system at the start of your message thread.
Thanks for the value! Tried everything and it worked. Except, what if the text is too long to be fully displayed in the cell?
OK, the repeating group layout was fixed; I changed it to Row and it seems correct now.
Fixed layouts cause the majority of layout issues. Try to use Column and Row layouts where possible.
Hey, quick question. The conversations in the sidebar are sorted by modified date, so the last convo you used shows up at the top. However, when you send a message, it doesn't modify the convo, so the order doesn't change. How can I fix that?
You can add a step to the message workflow that updates a date field in the Conversation/Chat datatype.
Thanks so much for this! I'm all up and running and it looks fab. I have one issue! In a conversation, when I ask a 4th question it doesn't return a response on the front end, but I can see the response in the back-end data. Any ideas why it isn't returning a 4th response? I then want to apply a system input so that the bot is primed as a professional interior designer, rather than having it as a standalone bot. I can't see how to do that either. Any help would be hugely appreciated.
Is your repeating group limited to 3 rows?
I've increased the rows to 10 and that seems to have corrected everything! I now just want to prime the system with a prompt making it an interior design specialist, bla bla bla, so I tried to input role: system above where we put in the plugin, as I thought this would be the most obvious place, but no joy. Any ideas where I need to put the system prompt? Thanks so much. @@planetnocode
You can put a system prompt with dynamic data into the body of the API call in the Bubble API Connector as long as you follow the JSON structure shown here: platform.openai.com/docs/api-reference/chat/create
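As a sketch of that structure, here is the same body expressed in Python, with a hypothetical dynamic value interpolated into the system prompt:

```python
import json

specialty = "interior design"  # hypothetical dynamic value from your app

body = {
    "model": "gpt-3.5-turbo",
    "messages": [
        # The system message goes first and primes the assistant...
        {"role": "system",
         "content": f"You are a professional {specialty} specialist."},
        # ...followed by the conversation history, oldest to newest.
        {"role": "user", "content": "Suggest colours for a small study"},
    ],
}

json.dumps(body)  # the serialised form is what the API Connector sends
```

In the API Connector, that means the system object sits as the first element of the "messages" array, before the dynamic list of user/assistant messages.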
Hey, thanks very much for the video - very interesting! How can I add streaming to make it more user friendly?
The Bubble API Connector does not support the streaming protocol. You may find a plugin that supports this.
Thank you for the video! How can I make it so the user's text has a different background from the generated reply?
Add a conditional statement for the chat bubble based on the Message's role.
Thank you
Thanks for the useful content. I want to know if there is any way to make the reply from OpenAI display faster. It is now too slow and users will be impatient.
GPT-3.5-turbo is the quickest chat model, and we've found the speed can really vary, likely based on the load on OpenAI's servers.
User impatience can be partly combated with effective UI design, such as:
- Showing a loading animation (Lots of examples here loading.io/)
- You can estimate the time OpenAI will need using the length of the whole message you send. You can show the user this estimate.
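The second suggestion could be sketched like this; every constant is a guess to tune against your own observed response times:

```python
def estimated_wait_seconds(prompt, chars_per_token=4,
                           seconds_per_token=0.05, overhead=1.5):
    """Very rough wait estimate from prompt length: longer exchanges
    tend to take longer to answer. All constants are placeholders."""
    approx_tokens = len(prompt) / chars_per_token
    return round(overhead + approx_tokens * seconds_per_token, 1)

short = estimated_wait_seconds("Plan a weekend in Rome")
long = estimated_wait_seconds("Plan a weekend in Rome" * 20)
# A longer prompt yields a larger estimate to show the user
```

In Bubble, the same arithmetic can be done inline on the message text's character count and displayed next to the loading animation.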
@@planetnocode I found that OpenAI API has a stream option to display the response in real-time. Can you help us to implement it in Bubble?
@@manhdungluu2836 The Bubble API Connector does not support the streaming protocol. However, there is a plugin we've demoed that adds this feature in. Watch here: www.planetnocode.com/tutorial/plugins/openai/chatgpt-and-openai-real-time-streaming-plugin-demo/
Thanks so much for the valuable content. However, even though I have followed your tutorial step by step, whenever I ask "List 5 places to visit there" I get an HTTP 400 error. Does anyone have the same problem, or know how to fix it?
Check your API key. Do you have 'Bearer' before your key? Do you have a payment method added to your OpenAI account?
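For reference, a sketch of the header shape the API expects; "sk-..." is a placeholder, and in Bubble the real key belongs in the API Connector's private key field, never in client-side logic:

```python
api_key = "sk-..."  # placeholder, not a real key

# The header must read exactly: Authorization: Bearer <key>
# (note the space after "Bearer").
headers = {
    "Authorization": "Bearer " + api_key,
    "Content-Type": "application/json",
}
```

Missing the "Bearer " prefix, or having no billing set up on the OpenAI account, are the two most common causes of 400/401/429 responses here.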
For some reason I am unable to add the JSON code to the field. I am able to create the JSON as text, but when I add the code after the '=' sign it always deletes my code when I press Enter or anything. When I click 'insert dynamic data', I also don't see the same options as you... any ideas why?
Can you please share a timestamp from the video for when you get stuck?
15:56 @@planetnocode, where you are trying to add memory to the chat. When I try to replicate it, it won't let me add code, even if I shorten it.
This might be it. Are you doing exactly what we do here?: ruclips.net/video/Oqs9xgR-MBM/видео.html
We click after the [Click] placeholder, then press backspace. This prepares the field to receive plain text instead of dynamic content.
yes that worked! thanks!@@planetnocode
This is very helpful! Thanks so much!😊👍
This is awesome! I would like to know how to put prompts at the end of each response generation so that users are guided to put in specific information. E.g. "Would you like to know which travel destination is better?", then "if yes", "if no", etc. Thanks for your videos, they are super helpful!😊👍
Sounds like you'd have to achieve this with UI in Bubble, rather than a regular chat interface.
@@planetnocode can you explain further please?
No problem! It appears you want to create a step-by-step form which, when filled out, will generate an OpenAI request based on the form data. A straightforward approach would be to capture the form data using Bubble inputs, and then send a prompt containing all the form data to OpenAI. The response can then be presented to the user.
This is something our team can help with and should be able to resolve for you with a 1 hour Bubble coaching session www.planetnocode.com/bubble-coaching/
@@planetnocode this is awesome! Thanks so much and will contact you guys soon!👍😊💕
Hi @planetnocode, when I add the to the Body it stops working. I have an error message in my preview saying "we could not parse the JSON body of your request." What am I missing?
Sounds like you might not be including a correctly structured JSON array for the messages in the API Connector.
Assuming you want to make the middle section of this dynamic to list your messages:
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
-------
Try as demo text in :
{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}
Thanks @@planetnocode ! It was a conflict between the JSON field in the data type and the query. Fixed it :) Thanks again for your awesome videos !
@@benoitsommerlad3000 do you remember how you actually fixed this? I'm running into the same issue...
Actually, I found the issue! The JSON for the assistant's return message needs to have quotes added to it, otherwise it will error out once we include it as part of the array for the second round of questions.
Hi Matt, I did it! Yeah! Followed you step by step and have secured the historical context👏👏👏 Question: how can I allow the user to clear/delete the responses in the repeating group? Thanks as always!👍
Are you looking to delete 1 message at a time, or all messages in the repeating group?
@@planetnocode all the conversations in that group. I am concerned about having a lot of data stored in my database that could slow down the app’s processing
How about adding a trash icon? Then, when the trash icon is clicked, run a workflow that deletes a list of things and enter the Repeating Group's List as the list to delete.
@@planetnocode actually I had put a button and ran that workflow but the conversation was not deleted. I think I did it incorrectly. Do you have any video on how to do it correctly?
That did it!👏👏 Thanks so much, Matt😊💕
Starting a new conversation doesn't help the total token usage, right? It's a big problem with the conversational structure. What are your solutions for that, apart from changing the model type to 16K or 32K? There are some options, such as splitting, summarizing previous content and re-sending it, but they are all so confusing for a non-coder, so I was wondering if you could suggest a solution. I appreciate you, and thank you for your time.
If you add a constraint to the Do a Search for Messages in the OpenAI workflow action so that it only retrieves messages from the current conversation, this will help reduce the tokens used, as you won't be sending every message in the app database, just those from the current conversation.
We've got some videos in production now about using different models with larger token limits and how to send just the most recent X number of messages.
@@planetnocode I'm trying to find a permanent solution for the memory, and I have been searching for Flowise/Pinecone integrations with the current ChatGPT API calls, but so far I haven't seen a single video explaining this. If you know of one, could you share it with me? I will try to find a solution in the meantime, but please consider making a whole video on this for your channel; that would be awesome. And keep doing a good job! You rock, brother!
Have you seen this video? www.planetnocode.com/tutorial/plugins/openai/bubble-openai-send-only-recent-messages/
@@planetnocode When I saw it I thought either great timing or this guy is amazing, and I'll go with the second lol. Thank you for that, but when I tried to implement it I realized my chats and messages are not connecting with the parent group. In fact, I watched the 35 min video and followed the steps 3 times but still can't figure out why my chats and messages are not showing in the app. Basically I created a 4-page form first, and after that I create a new chat and show the messages directly in the chat, but neither my messages nor my conversation chats' text shows the database entries. To solve that I tried to implement your way (display data on page load and add a constraint to messages so that conversation = parent group's conversation) and nothing. Then I tried multiple other ways from great tutors like you, and set up a custom state to display it, but it still didn't work. I would like to point out that my database and workflows are all working correctly, but I will not stop until I'm over it! If you have any other suggestions or workarounds please let me know. And keep doing awesome work bro!
Also, the third way I tried to solve my connection problem was sending the current cell's data through a URL parameter to pass the chat conversation data on to the message page repeating group, but even though the parent group's data type is chat, the message repeating group doesn't link to the parent chat somehow 🤯🤦🏻♂️😂
Can you tell me any other way to link messages to the parent group chat besides adding a constraint? Oddly, I couldn't make it work. When I removed the constraint I could see all the messages from the database, but I can't separate them and link them to the parent chat. The methods I've tried so far were adding the constraint, a display data workflow, and setting a state to show the message channel, but no results yet 🤯
Just in case you're wondering:
- I added conversation = parent group's conversation to the Create a Message workflow step (both steps)
- The Messages repeating group's parent page data type is "conversation"
- The conversation data type has a conversation data field in my database to link it up
Thanks for all your content. It's really good, congrats!!! Do you know how I can reset the context of the conversation? I want to save the conversation and reset the history if someone else accesses the link, but not lose the history on my platform. Thanks in advance.
Right now I'm not interested in asking for a sign up or anything like that to use the platform. I just want to reset all the context if someone from another IP accesses the platform. Am I being clear? Sorry, English is not my first language...
We suggest you set up privacy rules around Current User as it sounds like you're trying to keep conversations private from different visitors. Is that right?
By default any visitor can be referred to as Current User, and Bubble uses a cookie to detect if the same visitor returns in the same session. So if you set up privacy rules for your messages, chat and user data types, e.g. This Message's Creator is Current User, then test using different browsers/private browsing, we think you'll get the result you're looking for.
However, when using OpenAI there is a responsibility on you to know your users, as ultimately anything submitted through the API is linked to your API account, so we advise using OpenAI with logged-in users.
Yes, that's right. Do you have some video explaining how to do that? Thanks a lot for your help@@planetnocode
How can I train ChatGPT with my documents?
How much text from previous messages can I send with the API call?
There isn't a limit on the number of messages, but there is a limit on how many tokens can be sent. Details can be found on the OpenAI website: platform.openai.com/docs/models/overview
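Since the limit is on tokens rather than messages, one common approach is to trim the history to fit a token budget before sending it. Here's a hedged Python sketch; the 4-characters-per-token figure is only a rough rule of thumb for English text (an exact count would need a tokenizer library such as tiktoken), and the message list and limits are made-up placeholders.

```python
def estimate_tokens(text):
    # Very rough heuristic: roughly 4 characters per token for English text.
    # For exact counts you'd use a real tokenizer (e.g. the tiktoken library).
    return max(1, len(text) // 4)

def trim_to_budget(messages, max_tokens=4096, reserve_for_reply=512):
    # Leave headroom for the model's reply, then keep the newest messages
    # that fit inside the remaining budget.
    budget = max_tokens - reserve_for_reply
    kept, used = [], 0
    for msg in reversed(messages):  # walk from newest to oldest
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    kept.reverse()  # restore chronological order for the API call
    return kept

history = [
    {"role": "user", "content": "x" * 4000},        # a very long old message
    {"role": "assistant", "content": "short reply"},
    {"role": "user", "content": "latest question"},
]
trimmed = trim_to_budget(history, max_tokens=1024, reserve_for_reply=256)
```

With this example the long old message gets dropped while the two recent short ones survive, which is essentially what sending "only the most recent X messages" achieves in Bubble.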
One more question: I did a “run as” another user and I was able to see the conversation when run as a different user. Is this supposed to happen? If so, how to fix it so that only current user sees the conversation? Thanks as always👍
Specifically, how do I set the messages so they are connected to the user, e.g. "Current User's Messages"? That way I don't have to set it at the privacy rule level. Thanks in advance for your help😊
You don't need to have a Message field on User. Your Messages have a built-in Creator field that will be the User who created the message. So just add a Privacy Rule to the Message data type like This Message's Creator is Current User. Then test with multiple users to make sure it works.
@@planetnocode It did! Thanks so much Matt!👍
After I initialize the call it shows this message:
There was an issue setting up your call.
Raw response for the API
Status code 429
{
"error": {
"message": "You exceeded your current quota, please check your plan and billing details.",
"type": "insufficient_quota",
"param": null,
"code": null
}
}
Have you added a payment method to your OpenAI account?
@@planetnocode yes, I get charged monthly for the premium, GPT-4
Double check, because the ChatGPT Premium service is different from the OpenAI API billing. Here's the API billing portal: platform.openai.com/account/billing/overview
Hey, that was really great, it showed me quickly and easily how to set up the ChatGPT clone. However, when I press New Conversation, even if I don't type anything and press send, it always creates a new conversation that appears blank in the conversation sidebar. Is there any way to simply not allow the conversation to be created if no question was inputted? Thanks, let me know if I didn't make clear what I'm trying to say :)
Thanks for the question. What if you set the data source for your Conversation group to be the most recent conversation? That way, if a New Conversation is created and the page is then refreshed, the user will be presented with the empty conversation created before.
@@planetnocode thanks for the response, unfortunately when I do this it still creates another blank conversation when pressing New Conversation, but I would like it to just go to the conversation that hasn't been used yet. Thanks
@@planetnocode Also, do you know how to give the bot a set personality prompt before the user sends anything? Thanks
Yes you can use the 'system' role in the first message. A simple example from the OpenAI API documentation would be:
{
"role": "system",
"content": "You are a helpful assistant."
}
Source: platform.openai.com/docs/api-reference/chat
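To show where that system message sits relative to the rest of the call, here's a hedged Python sketch of how the request body could be assembled: the system message goes first in the messages array, before the stored history and the new question. The history, model name and question are made-up placeholders, not values from the tutorial.

```python
import json

# The personality prompt, per the OpenAI API docs' system-role example.
SYSTEM_PROMPT = {"role": "system", "content": "You are a helpful assistant."}

# Hypothetical conversation history, e.g. loaded from your database.
history = [
    {"role": "user", "content": "What's the capital of France?"},
    {"role": "assistant", "content": "Paris."},
]

new_question = {"role": "user", "content": "And of Spain?"}

# System message first, then the stored history, then the new question.
body = {
    "model": "gpt-3.5-turbo",
    "messages": [SYSTEM_PROMPT, *history, new_question],
}
print(json.dumps(body, indent=2))
```

In Bubble terms, that means the system entry is prepended once at the start of the (body) messages JSON, before the user/assistant messages you already build from the database.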
@@planetnocode thanks for the response, would this be in the workflow tab? And would this be on step 4 of the 'Button send is clicked' workflow? I already have the JSON = {"role": "assistant", "content": Result of step 3 (OpenAI - send message)'s choices:first item's message content:formatted as JSON-safe}, so where would it go? Thanks, sorry, I'm new to Bubble :)
Can someone help me? I want to fetch data from a third-party API, send the data to the GPT model, and then get the result. I also want to add knowledge to this chatbot.
This should be 100% possible with Bubble. There are quite a few steps to explain in a comments section, so may we recommend booking a 1 to 1 call with Matt from the videos. Find out more about 1 to 1 Bubble support here: www.planetnocode.com/bubble-coaching/
I can't see more than 4 rows in the repeating group. Like when I am chatting with the bot, I can't see more than 4 messages.
Are you using a fixed layout or do you have the repeating group rows set to 4?
@@planetnocode the repeating group rows were set to 4. Thanks a lot
Thanks a lot man
If I use a vector database, how do I include the retrieved texts (from the vector database) in the completion call (to OpenAI)?
why is it so slow?
In our experience the OpenAI API can really vary depending on the time of day and, we would imagine, usage spikes.
A single dynamic value is just NOT working, no matter what I try.
What error are you receiving? It's likely just one small punctuation related issue with your JSON. We can help you fix this during a Bubble coaching call, working 1 to 1 with one of our Bubble experts: www.planetnocode.com/bubble-coaching/