Leave your questions below! 😎
📚 My Free Skool Community: bit.ly/3uRIRB3
🤝 Work With Me: www.morningside.ai/
📈 My AI Agency Accelerator: bit.ly/3wxLubP
Could you please show us how to deploy this to a web app so it can be used there?
Is the whole thing free?
God bless the algorithm for showing this channel to me!
🙏🏼 🙏🏼 🙏🏼
Same!
Amen😂
Maybe the algorithm is God 😮
The AI beast dropping knowledge bombs again! Awesome video Liam, punchy, engaging and dripping with actionable content 👏🏼 Cutting edge stuff.
Nice one Liam, always enjoying your contents.
Much appreciated mate, glad I could help!
Great info!
omg thank you! You've helped me not have to categorise my grocery shopping list into fruit, meat etc. manually.
Sorry, I am a novice in the area of AI, but I have a typical management question:
What would it take to be able to use my own knowledge bot WITHOUT feeding the critical information to Meta, Google or OpenAI? Or: how can I ensure that my data is safe?
By the way: great content- much appreciated
I am looking for an answer to this too. I think the answer is you need to run the chatbot locally/on a server you control. There are many different ways to do this, including using Portainer + Docker (this is what I was told by an experienced coder)
Yeah I would like a video specifically on this
Did you find any solutions?
@@workinprogress2077 Did you find any solutions?
@@bl8596 Did you find any solutions?
Thanks for making this!
Yes an update to this with GPT 3.5 turbo (current model) would be incredible.
Great vid as always!
Appreciate it g
really cool stuff! Much more efficient than traditional methods of custom training the model or making custom responses. Thanks a lot!
you deserve tons of subscribers
WOW! Imagine using this to look for a specific "tag", "object", or whatever from the developer docs of a coding language...
You could probably use this, to get an ai to write obscure languages by telling it the structure to use
Very Interesting
If anyone is having trouble with the code: llama_index updated the name "GPTSimpleVectorIndex" to "GPTVectorStoreIndex". Just replace it and it should work if that is the error you are getting
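For anyone who wants to see the rename in context, a minimal sketch (assuming a llama_index version from after the rename; the "data" folder is just an example):

```python
# Older versions:
#   from llama_index import GPTSimpleVectorIndex
#   index = GPTSimpleVectorIndex.from_documents(documents)
# Newer versions renamed the class; everything else is the same idea.
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

documents = SimpleDirectoryReader("data").load_data()   # folder with your knowledge files
index = GPTVectorStoreIndex.from_documents(documents)

# Recent versions also deprecate index.query(...) in favour of a query engine:
response = index.as_query_engine().query("What does my knowledge base say about X?")
print(response)
```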
Cracking video ! Short, sharp and focused ! Information is spot on and really helpful. Thanks! Looking forward to seeing more good content.
I edited this because I want to emphasize the importance of search (and how poor RUclips search is). I had a vein in the center of my forehead trying to do this on my own about three weeks ago... and I'm only seeing Liam's video now. Get your search alerts and notifications down during this revolution, and you just may be lucky enough to find the Liams of this world giving you a play-by-play breakdown of exactly what you want in your AI build.
Very kind words mate glad I could help ❤️🙏🏼
Liam, thanks for sharing this information. This is quality stuff.
Glad you enjoyed it 🙏🏼
@@LiamOttley wish I knew 5% of how to get any idea started, ugh, thanks, you are golden
Liam never disappoints :)
Thanks mate 💪🏼
Well done Liam
Super interesting! Thank you, Liam! I was wondering how we can see the model used, and whether we can control the temperature?
I really like your content, thank you!
Thanks for watching 🤝
Amazing Liam bro..
🤝
Thank you, i can already see how to use it in education.
Thank you, I've been looking for something like this for weeks
FYI, "GPTSimpleVectorIndex" changed to "GPTVectorStoreIndex"
very informative video! Could we get a video on langchain soon? 👀
Quite a big beast to tackle, hard to not make it too technical for most of my viewers :/
@@LiamOttley understandable, you have been one of the best teachers when it comes to AI and how to leverage it! If you're able to do a video on it in the future it would definitely help a lot. Until then I'll be waiting for your next video!
Good guy Liam!
Thanks g x
Put speed for the video at 0.75 - Liam speaks really fast!
Superb Video, I loved it. ❤❤❤
The code isn't working anymore, got stuck with an error on GPTVectorStoreIndex. The libraries you have used have been modified.
Replace with GPTSimpleVectorIndex
keep making great videos
🤝
It seems anyone will be able to do this soon. The value will be in the data. Can you use AI to gather the data to train it on?
thank you for the video. What is the difference between using LlamaIndex vs Langchain? thank you
Awesome video! Going to mess around with all this as my first steps into AI programming... well sorta first steps anyway!
Great pacing and demo, thank you for the tutorial.
Hi, do I have to pay for an OpenAI key?
Awesome video...thanks bro
My pleasure mate
Great video
Respect for sharing your files 🙏
Really cool video
Great video as always. Keep up the good work. Wish you very best of luck for your channel. I hope it will rock in the near future.
Thank you. It is a bit above my head
Great Video. Thanks for sharing. 🎉
Wow impressive video, thank you so much!
Hey Liam! Awesome thanks...can you do a follow-up on an addon to index a website like a blog? And have the output to include a link to the article used for the answer so users can click through to read more?
Extremely cool! Looking forward to more awesome content.
Thanks. for sharing the skills
Thankyou, great video
Amazing tutorial! subscribed
Fantastic! Thank you.
How much would the API usage cost? Please tell me, I need to know!
oh..... now imagine throwing all the D&D PDFs into one simple bot :D
Fucking brilliant man. Keep up the mad hustle.
This is very very good
I would like to see a Javascript version of this
Not sure if there are JavaScript equivalents for libraries like LlamaIndex
I can't use the custom code for some reason. It says OpenAI is not defined here: llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name="text-davinci-002")).
I tried from openai import OpenAI but it is not working.
Any suggestions anyone?
You need to add : from langchain import OpenAI
@@JebliMohamed good man
Thanks Jebli
I am still getting this error.
NameError Traceback (most recent call last)
Cell In[37], line 6
2 from llama_index import LLMPredictor, GPTSimpleVectorIndex, PromptHelper
5 # define LLM
----> 6 llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name="text-davinci-002"))
7 from langchain import OpenAI
8 from openai import OpenAI
NameError: name 'OpenAI' is not defined
@@millerco2000 from langchain import OpenAI this fixed it for me
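To spell out the fix from this thread as a minimal sketch (assuming the langchain + llama_index versions from the video): the OpenAI class in that line is langchain's wrapper, so it has to be imported from langchain before the LLMPredictor line runs, not from the openai package.

```python
from langchain import OpenAI   # defines OpenAI for the line below; not "from openai import OpenAI"
from llama_index import LLMPredictor

# Same line as in the notebook, now that OpenAI is defined:
llm_predictor = LLMPredictor(llm=OpenAI(temperature=0.1, model_name="text-davinci-002"))
```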
Thanks, but I can't see any code where the 'prompt' variable is used for OpenAI's API.
Can you explain how the chatbot can remember the previous chat?
I figured it out.
You need to modify it to query(prompt) instead of query(user_input)
🙏🏼
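A hedged sketch of the idea above (the variable and function names here are hypothetical, not from the video): keep a running transcript and pass it to query() so earlier turns are included. This assumes the older index.query(...) API used in the notebook.

```python
history = []

def ask_bot(index, user_input):
    history.append(f"User: {user_input}")
    prompt = "\n".join(history) + "\nBot:"   # transcript of the conversation so far
    response = index.query(prompt)           # query(prompt) rather than query(user_input)
    history.append(f"Bot: {response}")
    return response
```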
How do I break the word limit for an answer? Sometimes the answer feels cut off halfway. How can I modify it? Thank you
Very good. How do we train the bot to be context oriented? If I only want the bot to have knowledge of radio controlled cars for example. At the moment I can ask this bot about what's in its index, but I can also ask it about washing machines and it will answer.
Wow! Can we use any open-source LLM instead of using an OpenAI API key?
Crazy Good Amigo
How do you personalize the bot? To have a custom knowledge base but also set the conversation style?
wow!!!! thank you so much !!!
Question: the information that we index, is this shared with an external server, losing its confidentiality, or does it just remain on the user's computer?
I have the exact same question
Hey Liam, sorry, not sure what Jupyter is; is that what you're running the code in? Assuming we can run this locally? Also, how would you integrate this into something you built, by referencing this new model or what within OpenAI? Would like some more details on the code you're using and the integration???
This is all Python code running in a Jupyter notebook. Super easy install with the Anaconda launcher. Deploying apps is a bit trickier, so you'd probably want to play around with things on your own as I am, then hire a developer once you're happy with it to create a product out of it
In this example, where does the transformer model sit? At facebook servers or locally?
can we teach it to iteratively improve its own code?
Great video!. I have some questions:
1) When we create an index, I understand that what's going on is that somehow based on the question we know which part of the files inside the index should be used to reply to it, and this is context information given to GPT as part of the prompt. Is that the case?
2) Is there a limit on how big this index can be?
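On question 1, that is broadly the mechanism: the most relevant chunks are retrieved and put into the prompt as context. A rough sketch of tuning that step, assuming a llama_index version where the query engine accepts similarity_top_k (the question text and "data" folder are just examples):

```python
from llama_index import GPTVectorStoreIndex, SimpleDirectoryReader

index = GPTVectorStoreIndex.from_documents(SimpleDirectoryReader("data").load_data())

# Only the 3 best-matching chunks are passed to GPT as context, not the whole index.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("How do I recalibrate the sensor?")
print(response)
```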
Great info. How did you capture page numbers from pdf and auto add to the output?
Which part do you mean? Drop the timestamp
can you do any of this stuff without the OpenAI API?
What do you recommend to build a construction engineering contractor estimating cooperation
great stuff, looking to try this out as a Slack bot to answer questions from employees about various business-facing items contained in our KB
Hello, if I use the Google docs loader, will the file be updated every time I update the Google doc?
Great video! I was just wondering if it is possible to make it less expensive, because when I use big databases it uses a lot of tokens.
GPT 3.5 Turbo is extremely cheap, hopefully they add support for it soon instead of davinci-003
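If your langchain version already ships ChatOpenAI, a hedged sketch of swapping in the cheaper chat model (not something shown in the video):

```python
from langchain.chat_models import ChatOpenAI
from llama_index import LLMPredictor

# gpt-3.5-turbo was priced at roughly a tenth of text-davinci-003 per token.
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo"))
```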
Very good. Need a nice front end and link to a vector db like Pinecone
amazing one
Hi, great video thanks! But I am running into an error loading the notebook - ok to post it here for resolution?
Thanks for the session!
how do you get the info logs for token usage?
Good stuff here! Now, friends don't let friends keep their API keys in public repositories! Remove it from the code and be safe.
I deleted the API key mate, it's all good. Beginners would find it hard if I was creating env variables etc
@@LiamOttley good job
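For anyone following along at home, a small sketch of the safer pattern mentioned above: keep the key in an environment variable rather than in the notebook.

```python
import os

# In your shell (or a .env file), not in the code:
#   export OPENAI_API_KEY="sk-..."
api_key = os.environ["OPENAI_API_KEY"]   # raises KeyError if the variable is missing

# Both the openai package and langchain pick up OPENAI_API_KEY automatically,
# so the key usually never needs to appear in the notebook at all.
```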
Does this training of the knowledge base, cut into the token limit?
Cool, but does it keep the context of the whole document?
hi!
great video!
I tried to use the code provided in the notebook, but I got an error saying the openai API key is incorrect. maybe it's expired?
how come you use Jupyter and not Google Colab?
How is it private when we are loading Open AI's davinci Library to process custom data? Is private info not getting into public space? Great tutorial though!
You shouldn't share anything personal or things that shouldn't be on the internet
Truth is, your tutorials are nice and life-changing, but the problem is they're not beginner friendly. You don't show us how you got started. Everything should be from scratch so that people can follow along and fully understand what you are doing
Hi, I have a problem with the OpenAI API rate limit when using a large set of data. This happens when loading the GPTSimpleVectorIndex. For small data sets it's okay. Can you advise?
Dear Liam, great stuff. Already subscribed and looking forward to learning from you. A question: how do I increase the length of the output? I am using it to document some code and it stops before completing the task entirely. Many thanks
often the token limit is too low
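A hedged sketch for the output-length questions above (the numbers are just examples, and this assumes the langchain + llama_index versions from the video): raise max_tokens on the langchain OpenAI wrapper and tell PromptHelper how much room to reserve for the answer.

```python
from langchain import OpenAI
from llama_index import LLMPredictor, PromptHelper

num_output = 1024   # tokens reserved for the model's answer
llm_predictor = LLMPredictor(
    llm=OpenAI(temperature=0.1, model_name="text-davinci-003", max_tokens=num_output)
)
prompt_helper = PromptHelper(max_input_size=4096, num_output=num_output, max_chunk_overlap=20)
```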
Very nice! Does this thing run locally? Especially if there is no internet connection available.
What's the difference between that and fine tuning?
Interesting, can it be done with a local trained model, like Llama or Vicuna 7b to keep it offline?
Can you use this offline locally?
Can you create one chatbot using either Langchain or Haystack and the recent star Alpaca?
I second the above request langchain + Alpaca would be really interesting.
Thanks a lot! Great content!
Is there a limit to the amount of data that you can index?
Is there an option with the OpenAI API to work with GPT-4 on custom indexes?
My question is will doing this bypass the content filter of ChatGPT? Could I host GPT and use llama-index or something to do that? No, it's not for sexy time. It won't talk about a lot of things like stock trading or cybersecurity because it flags it as bad content and gives an excuse instead of responding.
Good question, I'd assume because it's just using your API key and davinci-text-003 or whatever you set it as then it would still hit the filter
Very cool 😎 following!
Code doesn't work anymore because from llama_index import GPTSimpleVectorIndex is throwing an error now.
Great content! How would you get around the response character limit using this example?
Hi, great video! Totally new to Python here. If I want to host this on a website, how do I do that? Thanks.
If I have a PDF of 300 pages, will it still work? I saw another video using Langchain and Pinecone (to store vector data for 300 pages)
Good question, I haven't seen anything on limits for these kinds of indexes so worth testing. Ask different questions about info on sample pages?
Can I train llama to write my books? Build it to help me write them, prompt based, after making my choices from multiple selections, and move forward. Like based on a large overview? Maybe have it write the structure in the beginning?
Hi Liam! I'm interested in building a chatbot for an internal website; however, I worry that this might cause information leakage. What's your opinion on this?
I’d say OpenAI is taking privacy pretty seriously. I wouldn’t be worried personally, people have built huge apps using their APIs already.
Hey Liam! Great video mate 🙌
Can I ask if this can generate responses in a particular json format if needed after indexing any document?
Thanks again for the video!