My question would be: how much better will the response be if I do this with OpenAI vs smaller open-source foundation models? My expectation is that the more important input comes from the plugged-in knowledge base, and the world model just adds some coherency to the response.
I think you can get good responses from both, especially when you consider models like LLaMa, a big part of the help here is coming from the knowledge base
@@jamesbriggs The world needs this: a hosted LLaMa connected to a hosted plugin store. Wouldn't Pinecone be the right place for such a store? I appreciate OpenAI's developments, but it's a bit scary that the whole business world is going to bleed their business secrets into one place.
Great video. This seems very similar to the GPT-4 chatbot video you made the other day in terms of how it works. Has OpenAI eaten LangChain's lunch?
no definitely not, langchain is much broader in scope, this does eat a little of the agents part of that lunch, but I'm not convinced that this is any better than what langchain offers yet
Why should you give them access to your knowledge bases? I mean, what is in there for the companies providing such plug-ins? They could use Langchain instead.
Check out the langchain PDF loaders: langchain.readthedocs.io/en/latest/modules/document_loaders/examples/pdf.html and my last video on data prep: ruclips.net/video/eqOfr4AGLk8/видео.html Both of these together should be everything you need - but I'll also make sure to cover handling PDFs in a future video too :)
Not all users of ChatGPT plus have access to plugins yet, access is being granted first to those on the waitlist - sign up here openai.com/waitlist/plugins
Did you also get access via waitlist? I have the feeling that the rollout is pretty slow
I've been waiting for 2 weeks
Oh no, yet another wait list. Wanted to give it a try today, but nope. While this tech is good, the slow access is a bit disappointing...
I got the email that I'm approved for ChatGPT plugins, but I can't access them yet. Do you have to wait further?
Fantastic video James! Epic speed of creating this level of detail too. Can’t believe how fast OpenAI are shipping!
thanks man! OpenAI need to relax for a moment so I can make videos on something else 😂
It's Microsoft wanting to step on Google as this is the opportunity they have wanted for over 20 years
Hi my friend! My upsert request receives . Do you know why? Thanks! 🙏
My head hurts, but this is a great intro to a very exciting development. Thank you for this !
Same here, is there any easier way to build ChatGPT plugins?
This has to be one of the most useful things to have - being able to feed it new or specialized information.
That was a fantastic intro to the plugins, and incredibly quickly put out! Thanks for explaining so clearly.
Glad it was helpful!
James, you are on top of your game! Really clean explanation of how to develop a ChatGPT plugin! Love the way you teach us in a relaxed but so effective way
was just waiting for you to post this. Thank you very much! will follow along now :)
haha, hope you enjoy!
Just finished watching, fantastic, thanks!! 👍🙏 Now waiting to clear the plugin waitlist. This will be super useful to quickly get started, as you cover the nuances very well.
Interesting architecture, a lot of power is in developers' hands.
Thanks James for the intro, very helpful and top content!
This is a great video! Thank you for featuring the Warp terminal :-)
thanks for building it :)
Thank you for sharing your knowledge about how to create a plugin. Your efforts will help AI adoption grow very fast. 😊
I hope so :)
Awesome, congrats.
I'm just wondering why, especially for smaller documents, data has to be stored in the database as embeddings and not just text. Is it only a matter of speed or are the answers more accurate (semantic search)?
In contrast, chatbots with access to the internet, such as WebGPT or Bing Chat, just fetch plain text data and may create the corresponding embeddings of the webpage results on the fly (if at all).
What's your take on that?
Thanks for clarifying.
More accurate answers when using embeddings (semantic search).
With embeddings you can search semantically. That will get the most important snippets (in the example it was 5 snippets), which are then fed into ChatGPT to create natural language responses. You can't give entire documents to ChatGPT because of the limit on the number of tokens it can take as input.
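A toy sketch of that retrieval step - the tiny hand-made vectors below stand in for real text-embedding-ada-002 embeddings, purely to show the ranking logic:

```python
# Rank stored snippet vectors by cosine similarity to a query vector
# and keep only the top matches - those snippets (not the whole
# document) are what gets placed into the prompt.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norms

# Tiny hand-made vectors standing in for real embeddings.
snippets = {
    "snippet-1": [0.9, 0.1, 0.0],
    "snippet-2": [0.1, 0.9, 0.0],
    "snippet-3": [0.8, 0.2, 0.1],
}
query_vec = [1.0, 0.0, 0.0]

ranked = sorted(snippets, key=lambda s: cosine(snippets[s], query_vec),
                reverse=True)
top_k = ranked[:2]
print(top_k)  # most semantically similar snippets first
```

Only those top snippets go into the prompt, which is how the token limit is respected even for large document collections.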
I got an error when I ran the code at 26:26. Is it because I need to get through the ChatGPT plugin waitlist?
love your uploads man.. keep it up!
appreciate it!
After OpenAI releases ChatGPT's browsing ability, is this still relevant? I mean, now GPT can browse without any third-party services like LangChain, so we don't need this complex architecture anymore. Am I right?
How did you get access to use the plugins? I am a ChatGPT Plus subscriber, but no plugins to use yet.
Awesome, thanks bro 👋🏽👋🏽
I have one question though, how can we let the user upload his own data? Or how can we take additional data (could be in a different format) directly from the user?
Thanks again 🙏🏽
maybe we could allow the plugin access to the /upsert endpoint, and give instructions like "if the user says 'save this information', send it to the /upsert endpoint" - could be pretty cool
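A hypothetical sketch of what that could look like from the client side - the `{"documents": [...]}` payload shape follows the chatgpt-retrieval-plugin repo as I understand it, and the URL and bearer token are placeholders, not real values:

```python
# Build and send a document to the plugin's /upsert endpoint. The
# endpoint path, bearer-token auth, and payload shape are assumptions
# based on the chatgpt-retrieval-plugin repo.
import json
import urllib.request
from typing import Optional

def build_upsert_payload(doc_id: str, text: str,
                         metadata: Optional[dict] = None) -> dict:
    """Shape one document the way the /upsert endpoint expects."""
    doc = {"id": doc_id, "text": text}
    if metadata:
        doc["metadata"] = metadata
    return {"documents": [doc]}

def upsert(base_url: str, token: str, payload: dict) -> bytes:
    """POST the payload to <base_url>/upsert (not executed in this sketch)."""
    req = urllib.request.Request(
        f"{base_url}/upsert",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_upsert_payload("note-1", "save this information",
                               metadata={"source": "chat"})
print(payload)
```

With an instruction like the one above in the plugin description, ChatGPT itself would construct this request whenever the user asks it to save something.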
Side question, what is the background img for your terminal app (also what app is it? :p) and chrome background? Really liked them :)
hi James, thanks for sharing, what would be the case for current GPTs action? will this work with it as well?
Hmm, question: could I use this to turn a GitHub repository into a knowledge base and allow ChatGPT to have knowledge of my entire codebase in a conversation? This could be a powerful tool 🤔
Awesome video - thanks for the details!
thanks! I hope this means you'll be putting out more chatgpt videos!
thanks for the clear instructions. how do you upsert pdf documents instead of online data?
Hi James, that is all very interesting. Thanks for explaining this in layman's terms. Currently I am trying to do essentially the same, but not via plugins - via GPTs. Have you tried that too? It works quite well over a Heroku app with OpenAPI, and I use this to build and query a knowledge base. When I enter text and ask it to commit it to the knowledge base, that works quite well. Then I thought, why not try with complete documents, as ChatGPT (in principle) accepts them... So I created another endpoint that accepts documents - and over OpenAPI (Swagger UI) that works fine - but the custom GPT persistently tells me it is not equipped for that in the "current environment", whatever that means... Do you know if this is actually a limitation of the GPTs? Because file handling and passing files to an API out of ChatGPT could be very interesting - e.g. if it is audio/video and you'd like to run it through Whisper for a transcript, extraction of main topics, and committing to the knowledge base - all in one GPT instance...
Another banger of a tutorial. Appreciate it!
I love the quote, "How does it work? I have no idea, because it's OpenAI" LOL
Can an agent in LangChain build the same experience as plugins? By the way, what is the browser you used? It looks so cool
Was looking for a video like this. Great walk through! 👏
This is great, for the same Q&A bot you basically offloaded the GPT query API cost to the user. 😮
yeah openai are fully paying for the completion side of things here - out of the user subscription, but I think it's worth it
Such a great informative video. Kudos to you!!
Yo James this is a great video. Your channel gonna be huge one day! Keep it going.
Which Linux distro are you working with here? That panel looks hella slick,... I want!!
Would you say that LangChain is a plug-in, and that the choice for a new developer is either to use LangChain for all of their GPT work or to develop isolated plug-ins for the same work? If so, which would you choose? Or is LangChain too mature and capable to be called a plug-in or to compete with plug-ins, and more of a handholding friend to plug-ins?
The reason I ask is that I have a client who wants to modify chat with STT (speech-to-text) and mostly front-end UI stuff, but deeper stuff as well, so I've got to get them started on the right process. I'm brand new to coding, so I want to get in the right pool and swim in the water that is most likely to be around for some time. Thank you again for all of your videos!!
How would a plug-in that needs to look up data from a SQL DB or an API work? For example, I saw a plug-in for airline tickets. You would need the dates and airports at a minimum, and you would need to query the airlines for the prices. How would you extract those facts from the ChatGPT prompt? Or would you make your /query API contain those date and airport fields, and ChatGPT would be responsible for filling those fields? Then your API would find the prices and return the prices and options in a nice-to-read string or table?
yes chatgpt needs to populate the api fields, in fact, chatgpt is basically writing the api request - from what I can tell, chatgpt is looking at the openapi.yaml spec in order to figure out which endpoints to call, how to structure the requests etc - after sending the request the api does as you said, returns the prices and options, but in a JSON format - chatgpt then parses that information and returns it in a more user friendly (and conversational) format
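A rough illustration of that flow with the flight example - the field names (origin, destination, dates) and the fares are invented for the sketch; the point is that ChatGPT fills the structured fields from the conversation and then rephrases the JSON reply:

```python
# ChatGPT would construct a structured request like this after reading
# the plugin's openapi.yaml (field names invented for illustration):
request_from_chatgpt = {
    "origin": "LHR",
    "destination": "JFK",
    "depart_date": "2023-05-01",
    "return_date": "2023-05-08",
}

# The plugin's API queries its data sources and returns plain JSON;
# the fares below are made-up stand-ins for real airline data.
def search_flights(req: dict) -> list:
    fake_fares = [
        {"airline": "AirA", "price": 420,
         "route": (req["origin"], req["destination"])},
        {"airline": "AirB", "price": 385,
         "route": (req["origin"], req["destination"])},
    ]
    return sorted(fake_fares, key=lambda f: f["price"])

results = search_flights(request_from_chatgpt)
print(results[0]["airline"])  # ChatGPT would turn this JSON into prose
```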
Does this still have the same context length token limit?
Truly awesome presentation of this latest AI offering. Thanks James.
you're welcome!
Hi my friend! my upsert request recieves . do you know why? thanks! 🙏
I followed your instructions but keep getting a build error saying it needs more memory. So I added a droplet, a volume, more containers, etc. I still get this error.
I have the same error, tried different plans up to 4GB RAM, but still the same.
I'm getting the same error. Did you figure it out?
Just brilliant. Some thoughts on how this would scale to billions of docs would be very interesting!!
pinecone scales to billions of docs, you just increase hardware, so you'd probably need to initialize the index separately (meaning not via the chatgpt-retrieval-plugin) with *a lot* of s1 pods, then you're good to go. It'd be fairly expensive lol - but definitely doable
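Back-of-envelope arithmetic for that - the vectors-per-pod figure below is a placeholder assumption, not an official Pinecone capacity number, so check their pod-sizing docs for real values:

```python
# Rough pod-count estimate for a billions-of-docs index.
# VECTORS_PER_S1_POD is an assumed placeholder; real capacity depends
# on dimensionality, metadata size, and pod type.
total_vectors = 2_000_000_000
VECTORS_PER_S1_POD = 1_000_000  # assumption, not an official figure

pods_needed = -(-total_vectors // VECTORS_PER_S1_POD)  # ceiling division
print(pods_needed)
```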
I guess OpenAI's plug-in verification process would check for this
Thanks for all the useful information. I got a question. Let's say I mastered prompting, langchain and pinecone along with the necessary python coding, what job can I get with this skillset?
overlord of the skynet hivemind, ruler of the machine world, etc
but for real, I don't know exactly. These technologies are changing the industry, whatever it is we'll be doing as ML devs and engineers, it will probably be that - but the skills are already in high demand, every startup founder and his dog is hiring people that can build with these tools
@@jamesbriggs Lool! Thanks a lot for the response. That's interesting, because I am looking for a position like this but can't find one. Where should I be looking for the opportunities you mentioned?
Hey James, thanks for putting together such a great video. Did you get access to the plugins just by being on ChatGPT plus and requesting access to the waitlist?
Is it possible for the plugin to store data that happens in the chat? For example, if you're doing an RPG, could it store the conversation history and use it as an external memory source for improved context?
Thanks James, you mentioned a related video around 19:27 but I'm not seeing a link (might just be something in my settings). Could you please add it to the description? Cheers :)
here it is ruclips.net/video/eqOfr4AGLk8/видео.html :)
Hi, James, if I'd create a plugin, based on your instruction. Do I need pay chatgpt plus or openai api ? or both ?
depends, in this case we'd be paying openai api because we're creating embeddings via the text-embedding-ada-002 model, but if the plugin is just an API that chatgpt interacts with, then it'd be chatgpt plus paying for it (other than the API hosting costs)
@@jamesbriggs If the embedding is generated by the "text-embedding-ada-002" model, is the embedding usable with all other models (e.g. the user may default to davinci)?
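One way to picture the answer (a toy sketch, not real API calls): embeddings are only comparable with other embeddings from the *same* model, but the completion model never sees the vectors at all - it only receives the retrieved text, so davinci, gpt-3.5, etc. are interchangeable on the generation side.

```python
# Toy pipeline showing why the embedding model and the completion model
# are independent choices: the index stores and compares vectors from
# one embedding model, while the LLM only ever receives retrieved text.
def fake_embed(text: str, model: str) -> tuple:
    # Stand-in embedder; real ada-002 returns 1536-dim float vectors.
    return (model, len(text))

def query_index(index: dict, query_vec: tuple) -> str:
    # Vectors are only comparable if they came from the same model.
    model = query_vec[0]
    candidates = {t: v for t, v in index.items() if v[0] == model}
    # Pick the stored text whose toy "vector" is closest to the query's.
    return min(candidates, key=lambda t: abs(candidates[t][1] - query_vec[1]))

index = {
    "Pinecone is a vector database.":
        fake_embed("Pinecone is a vector database.", "ada-002"),
    "Bananas are yellow.":
        fake_embed("Bananas are yellow.", "ada-002"),
}
retrieved = query_index(index, fake_embed("What is Pinecone really, tho?",
                                          "ada-002"))
prompt_for_any_llm = f"Answer using this context: {retrieved}"
print(prompt_for_any_llm)  # the completion model just sees plain text
```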
Just getting around to this one. Thank you!
thanks Luke! don't go too crazy with the chatgpt bot plugins 😂
This is a great tutorial! I tried to follow along, but in my case the application on DigitalOcean fails because of memory error:
`Error during Build: Insufficient Memory`
it installs everything and then
[2023-06-11 15:04:09] │ command exited with code -1
[2023-06-11 15:04:09] │
[2023-06-11 15:04:11] │ ✘ build failed
Even when I increased the RAM to 16GB it still fails.
Did anyone else encounter this problem?
yep, happens to me as well.
I stumbled upon a part of YouTube I really have no business being in, but I'm already here so I might as well go along to get along. I have zero coding/programming experience, but I'm at timestamp 20:23 and I was good up to this point. I'm not sure if I need to create the notebook for Colab or download one. Can anyone explain how I get from there to having a notebook with prepped data that I could use?
"I don't know exactly how it's working because I've no idea it's OpenAI" - the state of OpenAI right now, they should rename themselves to ClosedAI, assuming they don't become a Microsoft subsidiary
a little more visibility would be nice
lol they already are a microsoft subsidiary
What kind of visibility do you want? They are still uncovering the black box and they are competing with the largest businesses in the world.
@@andrew-does-marketing idk, maybe actual statistics about GPT-4 instead of a 96 page marketing puff piece they sold to the masses as science? Compare the papers they wrote before the micro$uck soft acquisition vs afterwards.
Great video, can we monetize chatgpt plugins in any way?
I have a subscription to ChatGPT, and in the center it indicates ChatGPT Plugins, but the dropdown window doesn't offer a plugin option. Any suggestions? Thank you, Louis
I like the timing… well done Microsoft!
3 days ago Microsoft released a paper describing their researchers' experience using all the capabilities of GPT-4 prior to it being fine-tuned. And guess what? GPT-4 can make use of tools! And yesterday OpenAI unveiled ChatGPT plugins... we should expect a slew of new features, because this GPT-4 seems able to do far more than what they told us during the announcement
yeah it's awesome but I hope they stop releasing stuff so I can have a life again 😂
@@jamesbriggs I can understand, it has been pretty crazy what's happening lately in the AI space, but it doesn't seem like it will calm down any time soon. So you'll have to take some time to live a little bit
@@jeanchindeko5477 haha yes I doubt it, but no worries I'm joking - looking forward to trying out palm, llama, etc soon :)
How to learn Python? I see many tutorials, I don't know where to get started.
I learned with the books "Programming not Painfully Boring", "Smarter Way to Learn Python" and "Head First Python". I like their method of learning by doing. After learning the fundamentals, other advanced concepts become much easier to learn.
Also, ChatGPT knows how to write Python...
Ask ChatGPT. It will not just tell you how to write the code, but what to download to run it, etc.
Great vid! Are you on a Mac, or is that a Linux distro?
Thanks! On mac
That is nuts!
Can I use this plugin, or a plugin similar to this, through the API instead of the chat interface?
Thank you for doing this a different way from how the github directs you in the quickstart. Poetry is a bit confusing to the python newbs like me.
Is there a way to test it in the UI without having access to plugins? I am all set thanks to the wonderful walkthrough, but I can't load the plugin from the UI. I understand that Plus subscription is mandatory (which I have), but cannot see the plugins dropdown!
It's not available for everyone yet, but it will be; currently you have to join the waitlist
can you query on the jupyter notebook and get a response from GPT?
Pinecone stores my documents as vectors (chunks), and when I want to look for some information, I retrieve a possible response from my stored documents through the Pinecone API and pass it to the GPT API as part of the prompt, is that it?
How is that different from what we were doing before, apart from the fact that we can plug it into the ChatGPT UI?
1. Using pinecone as a db
2. Embed docs and store them in pinecone
3. Retrieve similar docs using vector similarity
4. Add those docs to chatgpt prompt
OpenAI are now paying for the completion (within the ChatGPT subscription, of course), that's the main difference, but there is another fundamental difference from the previous examples. Before, we called the vector DB with every question, and there was no "memory" of previous parts of the conversation. Here the LLM doesn't always refer to the vector DB, it only uses it when needed, which is naturally more efficient and makes sense, as it doesn't need external info all the time - and of course, there is now a memory of previous interactions
Nonetheless, from our perspective as developers, the process is practically the same, as you outlined in your 1-4 steps
I have access to ChatGPT plugins but where do I install the Langchain docs plugin?
James, how long did it take you to make this by yourself? How is that possible without any documentation, considering that it came out a day ago?
I got invited to help out with the retrieval api (initially not knowing it was anything to do with "chatgpt plugins" lol), that is documented and so I figured that part out early. Then on the chatgpt plugin side of things I had a ton of help from the relevance team at Pinecone (shoutout to Roy + Amnon), it sped things up a lot - I couldn't have got it out in that time otherwise
@@jamesbriggs incredible work either way. The quality is top notch!
@@KlimYadrintsev thanks man!
What kind of theme are you using in Visual Studio Code? I am a self-taught developer.
Excellent work dude!
does this work if I am using the OpenAI ChatCompletion endpoint?
Like I am using on google colab, but I want it to retrieve information from specific source.
James I know this is an unfair question to ask, but I'm a data science intern at the moment and idk what AI means for me. In terms of having a job is the future pretty bleak?
It isn't. In fact it is bright. Data is the new programming language. Cleaning it, splicing it, storing it and retrieving it. These tools will only make you more efficient
Yeah I'd agree with Chris, the future of our work (for anyone in programming) is going to change, but it's not going to wipe us out just yet - view these AI tools as tools, tools need humans to operate
Awesome video james 👏
thanks as always! 🔥
I get error 404 at the making-queries section... any idea why?
Same here, any luck solving that issue?
Your videos are excellent and demonstrate outstanding work. I am a huge fan of your videos, thank you so much. I have a question regarding the storage location of the documents texts. As the vector database identifies similar document IDs using embedding vectors, the text documents must be stored and retrieved from a specific location to be incorporated into the prompt. Could you please provide clarification on this point, or am I missing something else?
Yes, they're stored in Pinecone (the vector DB) - I talk about this in more detail here ruclips.net/video/rrAChpbwygE/видео.html
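In case it helps others wondering how the text gets back out: in this kind of setup the original text snippet is typically stored as metadata alongside each embedding, so a similarity query can return readable text directly. A toy pure-Python sketch of the idea (the record shape loosely mimics a vector DB entry; the vectors and texts here are made up for illustration):

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "index": each record stores the embedding AND the original text
# as metadata - this is why a query can hand back readable snippets.
index = [
    {"id": "doc1", "values": [1.0, 0.0], "metadata": {"text": "Pinecone stores vectors"}},
    {"id": "doc2", "values": [0.0, 1.0], "metadata": {"text": "ChatGPT generates answers"}},
    {"id": "doc3", "values": [0.9, 0.1], "metadata": {"text": "Vector DBs enable semantic search"}},
]

def query(query_vector, top_k=2):
    """Return the texts of the top_k most similar records."""
    scored = sorted(index, key=lambda r: cosine(query_vector, r["values"]), reverse=True)
    return [r["metadata"]["text"] for r in scored[:top_k]]

print(query([1.0, 0.0]))
# → ['Pinecone stores vectors', 'Vector DBs enable semantic search']
```

The plugin then feeds those top-k text snippets into the prompt, which is how ChatGPT "sees" your documents without them ever being sent as whole files.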
Hi my friend! My upsert request receives . Do you know why? Thanks! 🙏
you deserve more subscribers
I don't understand how the semantically embedded documents are returned via the API as text, can you expand on this?
How do you make your videos? What software do you use? And how do you get your head in the bottom right?
this info is golden! thank you Sir.
glad you like it!
Why don't I have the plugin option in my ChatGPT-4 Plus account dropdown menu?
1 step closer to singularity!
I see this a lot, but singularity has a negative connotation (and rightfully so in my opinion)
Do you need chatgpt plus for plugins?
Hey all, wondering if you can help me iron out some issues I'm hitting. I'm getting response 500 for post and query.
Do I need waitlist access to avoid the 500 error response, or am I being a noob with my rate limits, or is it something else entirely?
My upsert request receives . Do you know why? Thanks! 🙏
Can you please explain what the upsert does? @James Briggs
JAMES! I tried 'auto-gpt' and 'babyagi' and didn't use Pinecone much outside of that, and I was just billed $173.45.
Can you help me out kind sir
Hey Elijah, can you message Pinecone support explaining the issue, and also make sure you have deleted any indexes you have at app.pinecone.io
New sub here! Thanks man :)
Hey do you have a JavaScript equivalent tutorial?
Why do I not have access to these features on my chat interface?
How many plugins can it use in parallel (for one prompt with several different domain trigger words)?
This I'm not sure about, but I know you can use many plugins in one chat, and I have seen ChatGPT use the same plugin (the retrieval one here) twice in a row
@jamesbriggs It would be great if OpenAI would make it possible to color-code output tokens according to their source: foundation model or plugin.
Why is the rest of the Colab code missing?
You are a god, man!
Does anyone know of when the plugins will be available for more people? I have the plus version of ChatGPT but I do not have the plugin feature yet.
try signing up for the waitlist here openai.com/waitlist/plugins - afaik they are gradually releasing to more people, but no idea at what rate
@jamesbriggs Thanks James, I have signed up already and I was just wondering if some normies have been given access. Appreciate the info and help :)
How did you get access to ChatGPT plugins? Nice video btw. 😊
via Pinecone, but you can also join the waitlist :)
@jamesbriggs I only joined via the OpenAI website
@jamesbriggs sorry for being a noob, but how did you get access via Pinecone? Did you make a video on that by any chance?
How can I scrape/retrieve a bunch of patents and research papers from the internet? Anyone have any advice?
Thank you so much!
My question would be: how much better will the response be if I do this with OpenAI vs smaller open-source foundation models? My expectation is that the more important input comes from the plugged-in knowledge base, and the world model just adds some coherency to the response.
I think you can get good responses from both; especially when you consider models like LLaMA, a big part of the help here is coming from the knowledge base
@jamesbriggs The world needs this: a hosted LLaMA connected to a hosted plugin store. Wouldn't Pinecone be the right place for such a store? I appreciate OpenAI's developments, but it's a bit scary that the whole business world is going to bleed their business secrets into one place.
What's that terminal alternative you're using?
it's Warp, super cool terminal alt imo - I have a referral code here app.warp.dev/referral/7G3N39 if you want it :)
Will it only work on ChatGPT Plus?
Great video. This seems very similar to the ChatGPT-4 chatbot video you made the other day in terms of how it works. Has OpenAI eaten LangChain's lunch?
No, definitely not - LangChain is much broader in scope. This does eat into the agents part of that lunch a little, but I'm not convinced it's any better than what LangChain offers yet
@jamesbriggs thanks. Looking forward to playing around with it all a bit more.
Great video!
Why should you give them access to your knowledge bases? I mean, what is in there for the companies providing such plug-ins? They could use Langchain instead.
I can't like this video enough. Thank you!
thanks for watching!
Hello, great video. Could you please make a video on how to do it from a PDF file with more than 100 pages?
Check out the langchain PDF loaders: langchain.readthedocs.io/en/latest/modules/document_loaders/examples/pdf.html
and my last video on data prep: ruclips.net/video/eqOfr4AGLk8/видео.html
Both of these together should be everything you need - but I'll also make sure to cover handling PDFs in a future video too :)
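In the meantime, the core prep step for a 100+ page PDF is splitting the extracted text into overlapping chunks before embedding, since whole pages won't fit in the token limit. A rough pure-Python sketch of that splitting logic (chunk sizes are illustrative; in practice you'd first extract the text with a PDF loader like the LangChain ones linked above, and split on characters/tokens with a proper splitter):

```python
def chunk_text(text, chunk_size=400, overlap=50):
    """Split text into overlapping character chunks - a simplified
    approximation of the recursive splitters used for long documents.
    The overlap keeps context that straddles a chunk boundary."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap  # step forward, re-covering the overlap
    return chunks

# Stand-in for text extracted from a long PDF
pages = "some long document text " * 100
chunks = chunk_text(pages, chunk_size=200, overlap=20)
print(len(chunks), len(chunks[0]))
```

Each chunk then gets embedded and upserted individually, so queries retrieve only the relevant passages rather than the whole document.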
The plugin option is not showing in my paid ChatGPT... why?
it's still behind the waitlist, you can sign up here openai.com/waitlist/plugins
Once the plugin docs are out, train it using this and ask it to write plugins for itself!
I'm not getting the plugins even after subscribing to ChatGPT Plus
Q: I wonder if James will do a vid on plugins/APIs. Log on to YouTube: BOOM
haha glad I got this one out quickly 😂
Are you a fan of Robb Wilson's "Age of Invisible Machines"?