Thanks a lot for your efforts and other similar courses. I've been a React developer for about 3 years now and my Next.js skills are kind of rusty since I stopped using it in 2021 (way before app router and server components), so these help out a ton.
WOW, SO MUCH LOVE JOSH, the best programmer ever! Is there any chance in the near future you'd consider building a full-stack e-commerce app, or an AI chatbot integrated into a website? Those kinds of projects are so interesting. Thanks a lot for your hard work ❤️❤️☺️
Just finished up. I ran into an issue where, no matter what site I link, the AI always thinks it's the first website in the DB. Any idea where the issue could be?
38:14 Hi Josh, when you want to get the last element of an array, you can use .at(-1), it's easier to read. For example, for the messages you can use messages.at(-1) instead of messages[messages.length - 1].
And thanks for your content!
Yup, negative indexing is useful in Python too.
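For anyone curious, a quick runnable sketch of how .at(-1) compares to the classic length-based lookup (plain JavaScript, the messages array is just illustrative):

```javascript
const messages = [
  { role: "user", content: "hi" },
  { role: "assistant", content: "hello!" },
  { role: "user", content: "what is this site about?" },
];

// .at(-1) counts from the end of the array.
const last = messages.at(-1);
console.log(last.content); // "what is this site about?"

// Equivalent to the classic length-based lookup (same element):
console.log(messages[messages.length - 1].content);

// Both return undefined for an empty array, so neither throws:
console.log([].at(-1)); // undefined
```

Note that .at() needs Node 16.6+ or a modern browser.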
Bro have some serious skills
Amazing tutorial, I’m going to try and build my next side project using Upstash 😊
Excellent video, very well explained. I learnt a lot. My question as a beginner: can we chat with our own documents (pdf, texts) instead of doing it with sites? Thank you 🙂
You can, the RAG chat API allows you to specify a local/network PDF source.
@@jasonxu3412 thanks
Amazing project Josh!
cheers dude
Can you please explain how to deploy this project on Vercel? Is it possible?
why am I getting this error?
error: No version matching "0.1.1" found for specifier "@llamaindex/openai" (but package exists)
🤔😔
Thanks a lot for your efforts. Could you consider offering a monthly or yearly membership?
Josh! You're amazing
"Attempted import error: 'LlamaParseReader' is not exported from 'llamaindex' (imported as 'LlamaParseReader')."
Facing this error. Any help would be great!
It's a version problem.
I solved it by excluding llamaindex from my server components bundle. In your Next.js config add:

experimental: {
  serverComponentsExternalPackages: [
    "onnxruntime-node",
    "llamaindex",
  ],
},
Amazing again josh
getting rate limited by Upstash
Hey man, thanks for the build! I followed along, built the entire thing out, and deployed it on Vercel. It was working perfectly fine, but all of a sudden I've started getting 504 errors. I've tried redeploying the code as well, but nothing helped. Can you suggest a fix, or anyone else for that matter?
Why am I getting an OpenAI rate limit error when we are using Llama?
You should add generative ui
Super cool project
I'm getting "Hydration failed because the initial UI does not match what was rendered on the server." Can you help me solve it? Did you use some libraries that might be causing this? And which file do you think the error is in?
Adding "use client" at the top of the file tends to solve this problem more often than not, if it's a client-side component.
amazing fr !!
Josh your wallpaper 😂
Woho Josh!
interesting
Awesome
shrek is love ♥
hell yea
i think you use too many unnecessary divs dude
WE NEED OPEN SOURCE 😅😅😅
well said
I love your work ❤❤
Just a quick question: is there any way to add a primary-key-like column in the vector DB so that I can generate messages only from specific vector DB data?
Example: I go to the Wikipedia page for the Sun, then to the Wikipedia page for the Moon, and ask a question. Both pages' data have been saved in the vector DB, and I don't want my chat to know about the Sun info while I'm on the Moon page.
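One common way to scope answers to the current page is to store the page URL as metadata on every vector and filter on it at query time (Upstash Vector supports metadata filters and namespaces for this kind of scoping). Here's a minimal in-memory sketch of the idea in plain JavaScript — the store, cosine function, IDs, and URLs are all illustrative, not the real SDK:

```javascript
// Tiny in-memory vector store to illustrate metadata-scoped retrieval.
const store = [];

function upsert(id, vector, metadata) {
  store.push({ id, vector, metadata });
}

function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Only score vectors whose metadata.url matches the page the user is on,
// so embeddings from other pages can never leak into the answer.
function query(vector, url, topK = 1) {
  return store
    .filter((e) => e.metadata.url === url)
    .map((e) => ({ id: e.id, score: cosine(vector, e.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

upsert("sun-1", [1, 0], { url: "wikipedia.org/wiki/Sun" });
upsert("moon-1", [0, 1], { url: "wikipedia.org/wiki/Moon" });

// A query made from the Moon page never sees the Sun embeddings.
console.log(query([0.1, 0.9], "wikipedia.org/wiki/Moon"));
```

With the real Upstash Vector SDK you'd get the same effect by passing a metadata filter string to the query, or by writing each page's vectors into their own namespace.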
same question
I need your wallpaper
😂😂😂
Hey, I'm getting:

Server Error
Error: Failed to create upstash LLM client: QSTASH_TOKEN not found. Pass apiKey parameter or set QSTASH_TOKEN env variable.

Any solution?
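That error usually means the Upstash RAG chat client is falling back to the QStash-hosted LLM and can't find its token. Copying the token from the Upstash console into your env file and restarting the dev server typically fixes it — the variable names below are the standard Upstash ones, the values are placeholders:

```shell
# .env.local — placeholder values, copy the real ones from the Upstash console
QSTASH_TOKEN="your-qstash-token"
UPSTASH_VECTOR_REST_URL="https://your-index.upstash.io"
UPSTASH_VECTOR_REST_TOKEN="your-vector-token"
```

Next.js only reads env files at startup, so restart the dev server after editing them.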
Love the vid, any plans to enable structured data generation? This makes RAG super easy but having the output as a schema would be next level!
Thank you for sharing. Do you have any suggestions on how to create an embed script that can be used on different sites, such as how intercom or any chat widget integration works?
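A common pattern (the way Intercom-style widgets work) is to ship a tiny loader script that customers paste into their page; the loader then injects a fixed-position iframe pointing at your hosted chat route. Here's a hypothetical sketch — the domain, data-site attribute, and route are made-up names, not a real service:

```javascript
// Snippet a customer pastes into their own site (generated per customer).
function buildEmbedSnippet(siteId) {
  return `<script async src="https://widget.example.com/widget.js" data-site="${siteId}"><\/script>`;
}

// Inside widget.js: build the iframe URL from the host page's location so the
// chat backend knows which page (and which vectors) to answer from.
function buildIframeSrc(siteId, hostUrl) {
  return `https://widget.example.com/chat/${siteId}?page=${encodeURIComponent(hostUrl)}`;
}

console.log(buildEmbedSnippet("acme-docs"));
console.log(buildIframeSrc("acme-docs", "https://acme.com/pricing"));
```

At runtime, widget.js would read its own script tag's data-site attribute, create an iframe with that src, and pin it to a corner of the page with fixed positioning.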
amazing
MacOS!!
do you have braces bro?
thanks for making these videos... waiting for the next ones
Hi Josh, thank you for creating this. Please help: how can we deploy this site on Vercel or any other platform?
Can I use this for other websites instead of wiki?
This thing is buzzing
How to finetune chatgpt 3.5.sonnet model
Hey! Just finished this project man! absolutely amazing
hey! I also finished up, but I have an issue where no matter what website i open up, the AI thinks im talking about the first website in the database. any ideas?
epic wallpaper
Can we deploy this ?
yeah for sure