I have always loved that you don't edit out errors and mistakes, and show us your process of trying to understand them.
I always love that he laughs at his mistakes.
I agree! That's very valuable, to see how intelligent people go about analyzing a problem and looking for a solution. And it also gives us time to think along with the video.
This is the beginning for numerous startups
I love how simple the API is! We really are in the gold rush of AI based applications.
We are indeed, it's exciting and scary. I'm writing my first chatgpt app that will troll scammers on craigslist.
@@nickwinn Simple, but it would have been nice if they managed the history?
I started watching your videos in 2017 in college. Thanks to you, and specifically your pygame series, I'm a mid-level SWE
I was about to write "you don't need an API key" but then I did a sanity check. I thought I was using `gpt-3.5-turbo` API for free, but what's actually true is ... if OPENAI_API_KEY is in your environment variables, then `import openai` will automatically find that key and use it.
I'd previously set the env var for testing `text-davinci-003` (GPT-3) AND I'd included `openai.api_key = os.getenv('OPENAI_API_KEY')` in my code, but when I tested `gpt-3.5-turbo` for the first time I forgot the second line, and when it worked I assumed they'd removed the need for a key.
Great video! Thanks!
I've been a sub of yours for years. The funny thing is I'm not a programmer, and I don't even remotely work in the field you produce videos on. I just love watching where your curiosity takes you, and how you take your time to teach others. Well done mate
So true❤
Total side note, but I wanted to tell you how amazing your Neural Networks from Scratch book is. I've started down a few roads with NNs and I normally prefer video, but you have really made it so clear and so much fun to learn. Congratulations on creating the perfect technology book!
Awesome to hear this! Thank you!
Please share the book name
@@sentdex I am a painter, and I like all you do, even the mind-blowing first-hand errors you can't hold your laugh about. I ROFL at everything too, simply because you're right.
Who has been waiting for this for a long time now?
Amazing!!! Thank you!!! I was always waiting for this!!
I have notifications set for your channel but I never get any notifications, and I haven't even seen any of your videos on my feed for the past year.
The longer between this video and your next one, the more excited I get.😂😂
Bro, this is a great tutorial. Most other people are just publishing some nonsense GPT-API stuff. You are a bit of a whacko (compliment) but I 😍 your speed of tutoring. You did not waste our time by going back to check the AI's reply about which moon it was sizing. Great stuff dude.
In my experience, the system role is really useful for things like restraining the bot as to what it can do, and also for giving it some background information, like what it would like to be called or what tasks it can perform. So when the user asks, "What can you do for me?", the chatbot can answer with what the system says it will be able to do, or what its main purpose is, and with the personality you want the bot to have.
The system message may be something like "You are a language translation helping bot, you cannot talk about anything else, your name is Bob and you are a stern but calm teacher."
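In code terms, that kind of system message is just the first entry in the `messages` list sent to the chat endpoint. Here's a minimal sketch (the `build_messages` helper name is my own, not from the video):

```python
def build_messages(system_prompt, user_message, history=None):
    """Assemble a messages list in the shape the chat endpoint expects:
    one system message up front, then any prior turns, then the new user turn."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history or [])
    messages.append({"role": "user", "content": user_message})
    return messages

msgs = build_messages(
    "You are a language translation helper bot. You cannot talk about anything "
    "else. Your name is Bob and you are a stern but calm teacher.",
    "What can you do for me?",
)
```

The resulting list would then be passed as the `messages=` argument of the chat completion call.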
I smell spam. I read this one twice already.
Greatest guy on the internet. Always loved the way you follow your passion and work not just on some classical stuff, but play with whatever is interesting to you.
If you ever want to work on AI-aided chemistry/medicine, our Chemistry and Artificial Intelligence lab at ITMO University is fully open to you 🥰
Continue making great stuff 👍
Best channel to learn python.
A big thanks from an Indian. Amazing stuff you post. God bless you.
You're awesome. Keep making simple videos like this! I just subscribed because of how simple this was.
Nice job, man. Regards from Brazil!
Perfect timing, I was just about to use it in my next project.
Always loved your tutorial videos
Amazing! I am building a cocktail machine with this. I recognize voice commands with speech-to-text and feed them into the API, like "I want a martini please", with a custom add-on to convert the cocktail into a JSON in a given format that my cocktail machine can use to make the cocktail. :)
System is where I define the persona of the bot, any special instructions, and most importantly, where I dump any additional information that will be useful to the bot: text retrieved using semantic search, summarized chat logs from previous conversations, etc.
Which extension in VS Code helps with the completion of syntax like that ?
thank you so much for this brotha. real lifesaver
Thanks for the great content, sentdex! If I understand the process correctly, every time the user adds a message, we need to extend message_history and pass the entire message_history to ChatGPT. Is that right? My concern is that the cost of giving N responses would scale on the order of N^2 (if all future messages require the full history). I cannot think of any other way to use ChatGPT currently, unless there is some "delta" API call that can pass in new messages and load past tokens for free. I think this is a rather big barrier to "indie" developers adding ChatGPT to certain applications. I wonder if you have any thoughts on this!
Yeah I'd like to know this as well.
I can see it getting out of hand and just swallowing tokens.
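Right, since the full history gets resent on every call, token cost grows with conversation length. One common mitigation, sketched below as an assumption rather than anything official, is to cap (or summarize) the history before each request:

```python
def trim_history(history, max_turns=20):
    """Keep the system message (if present) plus only the most recent turns.
    A naive cap; summarizing older turns with the model itself, as mentioned
    in the video, is the fancier option."""
    if history and history[0]["role"] == "system":
        return [history[0]] + history[1:][-max_turns:]
    return history[-max_turns:]
```

This bounds per-request cost at the price of forgetting older context; it doesn't change the underlying fact that there is no "delta" call.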
1:41 what's the downside to using Jupyter Notebook?? ;-;
Man, I love you.
Timestamps
[ 00:06:30 ] Subject: Token Limit. Feeding a conversation back in a prompt as context ... You can SUMMARIZE, and ChatGPT can do that for you. ...
...
[ 00:12:01 ] ChatGPT Forgets What Moon It's Talking About 😂
...
[ 00:30:57 ] Isn't It Ironic! SentDex is trying to break out of the Matrix with the Help Of ChatGPT
...
It's quite interesting. I tried to give it a role with this "spell": "You are a well-trained AI multi-task language translator. When I input non-Chinese sentences, you should output Chinese translations. When I input Chinese sentences, you should output Vietnamese. You only need to output the translation result, no other words or explanation. If you understand, say OK."
It succeeded at first, but with more sentences input it got confused; even when I input a Chinese sentence, it returned a Chinese sentence as the "translation" (the same words, since no translation was needed) rather than a Vietnamese one. I'm not sure why, but it just can't understand, or forgets the task, after I input 5-6 non-Chinese sentences and some Chinese sentences.
Great work! The API is extremely easy to use, and I was able to create a small hack (little-reasoner) that combines the power of ChatGPT and the Z3 theorem prover.
awesome video as always thank you very much, I hope you have a great day!
Back to tutorials! Hell yeah!
Tons of wisdom, as always. We thank you! 🤓
Thank you so much for sharing 💚💚💚💚
Which intellisense are you using in VS Code?
@20:19 isnt that type of prompt what they give as an example in the docs for system prompts?
Wouldn't having to send message history every time you want a predication get very expensive token wise?
I just love your videos ♥️
ahh yeah this one it is cheap to use and this is a great example thank you!
Hi, how do you set whether the API uses GPT-3.5 or GPT-4? There is no setting when you generate the key, as far as I can see... please help. Cheers
I love how you took one of the previous top comments (previous video) into consideration to "live" code again, like "back then" when your channel was small.
You are a nerd's nerd, and I love it.
Is Azure's generative AI solutions the only option to both fine-tune and build guardrails for niche chatbots? It seems to be the only option to feed custom indexes in a GUI so that the chatbot is bespoken for specialized use cases.
Question: to remove the user's input from the textbox in Gradio, do you need to use `with gr.Blocks() as demo`?
I noticed I was using `gr.Interface`:
demo = gr.Interface(
    fn=CustomChatGPT,
    inputs=input_textbox,
    outputs=output_textbox,
    title=""
)
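For what it's worth, here is one way clearing the input might look with `gr.Blocks`: a sketch under the assumption that the callback returns the updated chat text plus an empty string, with the input textbox wired up as one of the outputs so it gets cleared on submit (names like `respond` are mine, not from the video).

```python
def respond(user_message, chat_so_far):
    # In the real app this would call CustomChatGPT(user_message);
    # a placeholder reply is used here so the sketch is self-contained.
    reply = f"Bot: (reply to {user_message})"
    # Returning "" as the second value clears the input textbox,
    # because we list it among the outputs below.
    return chat_so_far + reply + "\n", ""

# Wiring sketch (requires gradio):
# import gradio as gr
# with gr.Blocks() as demo:
#     output_box = gr.Textbox(label="Chat")
#     input_box = gr.Textbox(label="Your message")
#     input_box.submit(respond, [input_box, output_box], [output_box, input_box])
# demo.launch()
```

With plain `gr.Interface` there's no obvious hook for clearing the input after submit, which is why `Blocks` is the usual answer here.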
Always loved your work. Thank you for your inspiration. I'm a deep fan of yours.
15:20 I was using the website free version yesterday and ran into a similar problem. I requested an output based on information I had provided previously (above), and it said it could not refer to my previous messages. Maybe it's something they temporarily discontinued to increase speed.
You probably went past the token limit
What extension are you using for auto complete, co-pilot?
Nevermind. You answered it in the video. ;-)
I used the api to make a chat bot for my discord community, but they used it too much, and I could not afford to keep it going, but man having a group chat with AI is crazy.
Great vid, but I was just wondering what the rate limit on gpt-3.5-turbo is, since I couldn't find any solid documentation online. I plan on mostly using it for my own recreational use, which will involve quite a few requests. Currently still on the free plan, but I want to confirm this before going paid.
Hi, I want to learn about data analysis. I don't know anything about it, but I'm interested in starting something new. I was looking into starting the data analysis course on Coursera, but just wanted to see if I should take another route before doing that. What would you recommend?
Thanks in advance!
Thank you
I wonder where the conversations would end up if you have another ChatGPT model play the role of the user
Black holes my friend
@@brandonbahret5632 It still needs a user's prompt in it to answer any questions; in short, it doesn't change a thing about how it interacts with the user.
@@Primarycolours- What? No, you can totally have ChatGPT interview an instance of itself. It's just like asking GPT-3 to generate a transcript and not providing any stop codes.
Great video, thanks.
how do you feel about using a terminal within vscode?
Although they say that the assistant role is needed in order for ChatGPT to remember the previous responses, in my experience it only works if you also define a system role.
15:23 Since the initial question never said “Earth’s moon” the AI had to infer that’s what you meant. It is technically true that if you had referenced the “Earth’s moon” in some prior conversation then the history of that prior conversation would not be given to the AI. The AI can access chat history, but only the current chat history.
Thanks for your video. I am wondering, how much will it cost as we keep sending the message history?
My question is really: if we keep the history building between messages, will our cost increase because we keep submitting the history?
Which vs code plugin are u using to get those code suggestions?
Me: Sad, having a rough day....
Laptop: "What is going on everybody..." and I'm happy again!
Hi, I have a question please.
How can I activate the autocomplete that you are using?
How would I customize the page where you are asking questions. For example, if you wanted to turn the textbox green and chatbot box red?
Wow, Thank you
What font family are you using for the vs code?
What is the VS Code extension that you use for interactive Python?
Why do I keep getting the error AttributeError: module 'openai' has no attribute 'ChatCompletion'?
Love your python setup in V.S. for openAI !! Do you have a video tutorial on it? Thanks!
What os are you using? And was that a bash terminal? Looks more like zsh terminal, would love to see a video about your setup
It looks like Ubuntu
What extension is it that autocompletes the code? Thanks
Are you gonna be continuing the nnfs series? :(
Yes
Thanks for the video! How can I publish it to the public, so it doesn't run only locally?
You said that the API itself isn't going to manage your history for you, so how might we do that? We just start with some sort of message_history variable for now to keep it simple, but we might use a database or some other storage method. Can you explain how we can do that, using a database for example?
One way to do this would be to store each message along with its associated metadata (such as sender, timestamp, etc.) in a database table. Then, when generating responses using the ChatGPT API, you could query the database for relevant messages and use them to provide context for the API.
@@funkahontas OK, I got it, but how can you relate the content ID with the answer? For example: if I say to the AI "my name is X", the AI says "Hi X, nice to meet you". I store these two entries in the DB. But then? Do I have to write a function that scans the entire DB to search for something like "my name is ..." and pull the context?
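One answer to the "but then?" question: you typically don't scan the DB for strings like "my name is". You just store every turn and replay the most recent ones to the model, which picks the name back up itself. A sketch with stdlib sqlite3 (table and function names are mine):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in a real app
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, role TEXT, content TEXT)"
)

def save(role, content):
    # Store one chat turn; the autoincrement id doubles as chronological order.
    conn.execute(
        "INSERT INTO messages (role, content) VALUES (?, ?)", (role, content)
    )

def recent_context(limit=10):
    # Pull the last `limit` turns back out, oldest first, in the
    # {"role": ..., "content": ...} shape the API expects.
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    return [{"role": r, "content": c} for r, c in reversed(rows)]

save("user", "my name is X")
save("assistant", "Hi X, nice to meet you")
```

Semantic search over the stored rows (as the comment above suggests) is a refinement on top of this, not a replacement for it.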
In my experience, it's difficult to restrain GPT. For example, if you make a request that future messages should conform to some format, but you later ask it to stop, it will stop. No matter how adamant you are that it should not violate a rule in a message, this can be overruled in a future message. Thoughts?
This API is crazy.
Hi, sendtex. Python or JavaScript for backend, which would you recommend? I can't decide between them
That's entirely dependent on you (and your team) and your project.
Most of the website stuff I do is fairly simple, so I just use Node.js, but if I were to write more complex endpoints I'd use Django (Python) or Spring Boot (Java).
If you know one of those languages already, go with that; otherwise choose one of them and learn it.
It makes perfect sense. It is not a bug. GPT has no idea what moon you are referring to. It just knows what people have said about the moon, and if they didn't clarify which moon, then it has no idea. In fact, it never has any idea. It is a stochastic parrot.
12:44 maybe it did not catch the message history. It probably answered from the data it was trained on.
Why do I get "module 'openai' has no attribute 'ChatCompletion'"!?
How can I use this and create my own using my own answers for the bot? Thanks~
I was wondering when you will upload such video
You are amazing .. thanxxxxxxxxx
Is there a limit to how long the message history could be?
Great video, thanks for posting it. Can you try editing the first prompt to say "what is the circumference of Earth's moon"? My guess is the script could reference the message history, but since there are so many moons it wasn't sure you meant Earth's. Anywho, good content!
Couldn't even get past the first run. "openai.error.InvalidRequestError: The model `gpt-3.5-turbo0302` does not exist"
So what is the difference between these LLMs (GPT-3/4, Alpaca, etc.), AlphaFold/ESM/2, and the types of systems used to create efficient biologically inspired structures like frames for vehicles or furniture? And AlphaTensor? Wolfram Alpha? What other types of AI/ML systems are there? Some are trying to do things as well as humans; some are doing things we cannot do. How are these different things coded? What are the ideas they are based upon? How can they be merged? Can each be used to improve the others? What are evoformers vs transformers? And what other things are there?
Do you see any changes in the GitHub Copilot since ChatGPT released?
Not sure if it's to do with ChatGPT, but I noticed Copilot seems to be reading the code underneath now. Before, if you inserted a new line above a line of code and typed something similar to the line below, it would act as if it only read the code above, but now it appears to make predictions based on what's written below.
@@fordimension Yes noticed that too recently
amazing !
they gotta come out with a self-hosted version. Not having to send all the data to openai to get a prediction would be a game changer.
First we need to figure out a way to make these models smaller; currently you need a very beefy computer to run them at any reasonable speed.
actually they do have one: ruclips.net/video/rGsnkkzV2_o/видео.html
I got similar behavior on my first go yesterday. Seems like it's confused by my role and its role.
What theme is he using?
man this world is getting good
Which Linux do you use?
Yeah, sometimes they deprecate it on purpose.
Thanks sentdex.
You could explore building something with Langchain
Great work. How do you get the code to autocomplete?
Github copilot
Github Copilot.
Thoughts on NeRF / Luma Labs AI?
Is it possible to actually give internet access to GPT, leveraging the Google API for example? So that the bot can search the internet and get knowledge of the most recent events?
Let's not forget that other planets in our solar system have named moons; ours is actually named "the Moon". Hopefully that doesn't send the AI into an infinite loop.
Luna ?
But the message history doesn't specify which moon . . .
And to be honest the question wasn't clear; the phrasing was just wrong. I couldn't make sense of it either.
lol I'm following along but I didn't get this fluke with the "which moon is this in reference to" question - worked fine for me
What is your linux distros ?
I just use ubuntu
Can AI replace ML/AI engineers?